# TurboQuant

KV cache compression for local inference on an M4 Max MacBook Pro.

## What
TurboQuant (Google, ICLR 2026) compresses the KV cache in two stages:
- PolarQuant — Walsh-Hadamard transform (WHT) rotation + polar-coordinate representation + Lloyd-Max codebook (~4.2x compression)
- QJL — 1-bit quantized Johnson-Lindenstrauss residual correction
- Combined (TurboQuant) — PolarQuant + QJL = ~3.5 bits/channel with zero accuracy loss
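A minimal sketch of the PolarQuant idea in pure Python: rotate with a WHT, pair adjacent channels, and store each pair as quantized (radius, angle) codes. Uniform codebooks stand in for the Lloyd-Max codebooks of the paper, and all function names and bit widths here are illustrative, not the reference implementation.

```python
import math

def fwht(x):
    """Orthonormal fast Walsh-Hadamard transform (length must be a power of two).

    With the 1/sqrt(n) normalization the transform is its own inverse.
    """
    x = list(x)
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    s = 1.0 / math.sqrt(n)
    return [v * s for v in x]

def polar_quantize(vec, angle_bits=4, mag_bits=3, mag_max=4.0):
    """Rotate, pair channels, and emit (magnitude_code, angle_code) per pair.

    Uniform grids replace the trained Lloyd-Max codebooks (illustrative only).
    """
    rot = fwht(vec)
    a_levels = 1 << angle_bits
    m_levels = 1 << mag_bits
    codes = []
    for i in range(0, len(rot), 2):
        x, y = rot[i], rot[i + 1]
        r = math.hypot(x, y)
        theta = math.atan2(y, x)  # angle in [-pi, pi]
        a_code = round((theta + math.pi) / (2 * math.pi) * (a_levels - 1))
        m_code = min(m_levels - 1, round(r / mag_max * (m_levels - 1)))
        codes.append((m_code, a_code))
    return codes

def polar_dequantize(codes, angle_bits=4, mag_bits=3, mag_max=4.0):
    """Invert the codebooks, then undo the rotation (WHT is self-inverse)."""
    a_levels = 1 << angle_bits
    m_levels = 1 << mag_bits
    rot = []
    for m_code, a_code in codes:
        r = m_code / (m_levels - 1) * mag_max
        theta = a_code / (a_levels - 1) * 2 * math.pi - math.pi
        rot.extend([r * math.cos(theta), r * math.sin(theta)])
    return fwht(rot)
```

The 1-bit QJL residual correction would then be applied to the reconstruction error left by this stage.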
## Why
Unlock 64K–128K context on qwen3.5:27b within 32 GB of unified memory. A 27B model at 128K context with TurboQuant beats a 72B model at Q2 quantization limited to 8K context.
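As a back-of-envelope check of why this fits in 32 GB, assuming illustrative GQA dimensions (48 layers, 8 KV heads, head dim 128 — not the actual qwen3.5:27b config):

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bits_per_channel):
    """Total bytes for the K and V caches at a given per-channel bit width."""
    channels = 2 * n_layers * n_kv_heads * head_dim * seq_len  # 2 = K + V
    return channels * bits_per_channel / 8

# Illustrative GQA dimensions, NOT the real qwen3.5:27b config:
cfg = dict(n_layers=48, n_kv_heads=8, head_dim=128, seq_len=128 * 1024)

fp16 = kv_cache_bytes(**cfg, bits_per_channel=16)
tq = kv_cache_bytes(**cfg, bits_per_channel=3.5)
print(f"fp16:       {fp16 / 2**30:.1f} GiB")
print(f"TurboQuant: {tq / 2**30:.1f} GiB ({fp16 / tq:.2f}x smaller)")
```

Under these assumptions an fp16 KV cache at 128K tokens alone consumes ~24 GiB, while 3.5 bits/channel brings it down by 16/3.5 ≈ 4.6x, leaving headroom for the quantized weights.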
## Status
See issues for current progress.
## Roles
- Strago: Build spec author
- Cid: Implementation, benchmarks, deployment
- Locke: Research support, upstream watch
- John: Quality review
- Frankie: Coordination
## Source Repos
- TheTom/llama-cpp-turboquant — llama.cpp fork with Metal support
- TheTom/turboquant_plus — Reference impl, 511+ tests
- amirzandieh/QJL — Author QJL code (CUDA)
- rachittshah/mlx-turboquant — MLX fallback
## Docs
- BUILD-SPEC.md — Full build specification (Strago, v2.2)