docs: Document Ollama perplexity limitation — no logprob support (closes #63)
All checks were successful
Smoke Test / smoke (pull_request) Successful in 26s
Ollama lacks a token logprob API, so true perplexity cannot be measured via the Ollama backend. Added a warning to the run_benchmarks.py docstring directing users to run_perplexity.py (which calls the llama-perplexity binary) for real PPL measurement with --logprobs support.
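For context, perplexity is computed directly from per-token log-probabilities, which is why a backend without a logprob API cannot report it. A minimal sketch of the calculation (illustration only, not code from this repo; token_logprobs is a hypothetical list of natural-log token probabilities of the kind llama-perplexity accumulates internally):

    import math

    def perplexity(token_logprobs):
        # PPL = exp(negative mean log-probability over all scored tokens)
        return math.exp(-sum(token_logprobs) / len(token_logprobs))

    # Example with three hypothetical per-token logprobs
    print(perplexity([-0.5, -1.2, -0.8]))  # ~2.30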
@@ -5,8 +5,16 @@ TurboQuant Benchmarking Suite — Multi-Backend (Issue #29)
 Supports Ollama and llama-server backends with KV cache type configuration.
 Measures: TTFT, tokens/sec, latency, peak memory.
 
+IMPORTANT — Perplexity Limitation (Issue #63):
+Ollama does NOT expose token logprobs. This means:
+- True perplexity (PPL) cannot be measured via the Ollama backend
+- The metrics here (tok/s, latency) are throughput proxies, not quality gates
+- For real perplexity measurement, use benchmarks/run_perplexity.py
+  which calls llama-perplexity directly (--logprobs support)
+- The pass criterion "PPL delta <= 0.5" cannot be validated via Ollama
+
 Usage:
-    # Ollama (default)
+    # Ollama (default) — throughput benchmarks only, NOT perplexity
     python3 benchmarks/run_benchmarks.py --backend ollama --model llama3
 
     # llama-server with turbo4 KV
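The practical split, then, is throughput via this script and quality via the llama.cpp tool. An illustrative pair of invocations (the run_benchmarks.py command is taken from the docstring above; the llama-perplexity call follows standard upstream llama.cpp usage, and the model and dataset paths are placeholders, since the exact run_perplexity.py interface is not shown in this commit):

    # Throughput proxies only (no PPL) via Ollama
    python3 benchmarks/run_benchmarks.py --backend ollama --model llama3

    # True perplexity via llama.cpp's llama-perplexity, which run_perplexity.py wraps
    llama-perplexity -m models/llama3-q4.gguf -f wikitext-2-raw/wiki.test.raw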