- Add a `--quality` flag to `run_benchmarks.py` that delegates to `llama-perplexity`
- Clarify that tokens/sec is an efficiency metric, not a perplexity metric
- Note that Ollama cannot provide true logprob-based PPL (it exposes no logprob API)
- Quality gate now runs the `llama-perplexity` binary directly when requested

Closes #63
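
For reference, a minimal sketch of how a `--quality` flag could delegate to the `llama-perplexity` binary; this is illustrative, not the PR's actual implementation. The `run_quality_gate` helper, the `--model`/`--corpus` arguments, and the output-parsing regex are assumptions (the `-m`/`-f` flags match llama.cpp's CLI, but the exact output format can vary between versions).

```python
import argparse
import re
import subprocess


def run_quality_gate(model_path: str, corpus_path: str) -> float:
    """Run llama-perplexity directly and parse the final PPL value.

    Hypothetical helper: invokes llama.cpp's llama-perplexity with a
    GGUF model (-m) and an evaluation text file (-f).
    """
    result = subprocess.run(
        ["llama-perplexity", "-m", model_path, "-f", corpus_path],
        capture_output=True,
        text=True,
        check=True,
    )
    # llama-perplexity typically prints a line like
    # "Final estimate: PPL = 6.1234 +/- 0.0456"; the regex below is an
    # assumption and may need adjusting for other llama.cpp versions.
    match = re.search(r"PPL\s*=\s*([0-9.]+)", result.stdout + result.stderr)
    if match is None:
        raise RuntimeError("could not parse PPL from llama-perplexity output")
    return float(match.group(1))


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--quality", action="store_true",
                        help="run the logprob-based perplexity gate")
    parser.add_argument("--model", required=True,
                        help="path to the GGUF model file")
    parser.add_argument("--corpus", default="wiki.test.raw",
                        help="evaluation text for perplexity")
    args = parser.parse_args()
    if args.quality:
        print(f"PPL = {run_quality_gate(args.model, args.corpus):.4f}")
```

Delegating to the binary sidesteps the Ollama limitation noted above: true perplexity needs per-token logprobs, which `llama-perplexity` computes itself rather than relying on a serving API.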