# TurboQuant
KV cache compression for local inference on M4 Max MacBook Pro.
## What
TurboQuant (Google, ICLR 2026) is a three-stage KV cache compression method:
- PolarQuant — WHT rotation + polar coordinates + Lloyd-Max codebook (~4.2x compression)
- QJL — 1-bit quantized Johnson-Lindenstrauss residual correction
- TurboQuant — PolarQuant + QJL = ~3.5 bits/channel, zero accuracy loss
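The PolarQuant stage can be illustrated with a minimal pure-Python sketch. This is not the reference implementation: for clarity the angle codebook here is uniform rather than Lloyd-Max, and pair magnitudes are kept in full precision instead of being quantized.

```python
import math

def wht(v):
    """Orthonormal fast Walsh-Hadamard transform (length must be a power of two).

    Scaled by 1/sqrt(n), so the transform is its own inverse."""
    v = list(v)
    n = len(v)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = v[j], v[j + h]
                v[j], v[j + h] = x + y, x - y
        h *= 2
    scale = 1.0 / math.sqrt(n)
    return [x * scale for x in v]

LEVELS = 16  # 16 centroids -> 4 bits per angle

def quantize_angle(theta):
    # map theta in [-pi, pi] to a 4-bit index (uniform codebook, a
    # stand-in for the Lloyd-Max codebook used by PolarQuant)
    idx = int((theta + math.pi) / (2 * math.pi) * LEVELS)
    return min(idx, LEVELS - 1)

def dequantize_angle(idx):
    # centroid of the idx-th uniform bin
    return -math.pi + (idx + 0.5) * (2 * math.pi) / LEVELS

def polar_encode(v):
    """Rotate with the WHT, then encode coordinate pairs as (magnitude, 4-bit angle)."""
    r = wht(v)
    out = []
    for i in range(0, len(r), 2):
        x, y = r[i], r[i + 1]
        out.append((math.hypot(x, y), quantize_angle(math.atan2(y, x))))
    return out

def polar_decode(pairs):
    """Reconstruct coordinate pairs from polar form, then undo the rotation."""
    r = []
    for mag, idx in pairs:
        theta = dequantize_angle(idx)
        r.extend([mag * math.cos(theta), mag * math.sin(theta)])
    return wht(r)  # the orthonormal WHT is self-inverse
```

With 16 angle levels the worst-case angular error per pair is π/16 ≈ 0.196 rad, so the roundtrip cosine similarity is bounded below by cos(π/16) ≈ 0.98, which matches the kind of roundtrip thresholds the test suite checks.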
## Why
Unlock 64K-128K context on qwen3.5:27b within 32GB unified memory. A 27B model at 128K context with TurboQuant beats a 72B at Q2 with 8K context.
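The memory win is easy to estimate. A quick sketch of the KV cache arithmetic, using hypothetical GQA dimensions for illustration (not the actual qwen3.5:27b config):

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx_len, bits_per_channel):
    # K and V each store n_layers * n_kv_heads * head_dim values per token
    channels = 2 * n_layers * n_kv_heads * head_dim
    return channels * ctx_len * bits_per_channel / 8

# hypothetical dimensions: 48 layers, 8 KV heads, head_dim 128, 128K context
fp16 = kv_cache_bytes(48, 8, 128, 128 * 1024, 16)    # 24.0 GiB
tq   = kv_cache_bytes(48, 8, 128, 128 * 1024, 3.5)   # 5.25 GiB
print(fp16 / 2**30, tq / 2**30)
```

Under these assumed dimensions, an FP16 KV cache at 128K context would consume ~24 GiB on its own, while ~3.5 bits/channel brings it to ~5.25 GiB, which is what makes 128K context plausible alongside model weights in 32GB of unified memory.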
## Status
See issues for current progress.
## Roles
- Strago: Build spec author
- Cid: Implementation, benchmarks, deployment
- Locke: Research support, upstream watch
- John: Quality review
- Frankie: Coordination
## Source Repos
- TheTom/llama-cpp-turboquant — llama.cpp fork with Metal support
- TheTom/turboquant_plus — Reference implementation, 511+ tests
- amirzandieh/QJL — The QJL authors' code (CUDA)
- rachittshah/mlx-turboquant — MLX fallback
## Docs
- Project Status — Full project status and build specification