Latest commit — implements Issue #66: QJL (Quantized Johnson-Lindenstrauss) residual correction for full TurboQuant compression (PolarQuant + QJL).

New files:
- llama-turbo-qjl.h — QJL API with encode/decode and utility functions
- llama-turbo-qjl.cpp — CPU reference implementation
- ggml-metal-qjl.metal — Metal GPU kernels for encode/decode/fused dequant
- tests/qjl_accuracy_test.cpp — 8 accuracy gate tests
- docs/QJL_IMPLEMENTATION_PLAN.md — full implementation plan

Algorithm:
- Encode: PolarQuant → compute residual → JL projection → 1-bit sign quantization
- Decode: PolarQuant reconstruction → JL correction → add
- Storage: 76 bytes/vector (vs. 512 bytes FP32 = 6.7x compression)

Accuracy gates (all passing):
- Cosine similarity ≥ 0.95 (direction preservation)
- Max abs error ≤ 0.8, mean abs error ≤ 0.2
- Deterministic encode (reproducible)
- Compression ratio > 6x vs. FP32

Closes #66
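The encode/decode steps above can be sketched in plain Python. This is a minimal illustration of the 1-bit JL sign-sketch idea, not the repo's implementation: the Gaussian matrix, the seeding scheme, and the sqrt(pi/2)/m rescale are all assumptions chosen so the decoded correction is unbiased in expectation.

```python
import math
import random

def jl_matrix(d, m, seed=0):
    # Deterministic Gaussian JL projection (m x d); a fixed seed keeps the
    # encode reproducible, matching the "deterministic encode" gate.
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(m)]

def qjl_encode(residual, S):
    # Project the residual onto each row of S and keep only the sign
    # (1 bit per row), plus the residual norm as a single scale.
    signs = [1.0 if sum(s * r for s, r in zip(row, residual)) >= 0.0 else -1.0
             for row in S]
    norm = math.sqrt(sum(r * r for r in residual))
    return signs, norm

def qjl_decode(signs, norm, S):
    # Back-project the sign bits and rescale. The sqrt(pi/2)/m factor is the
    # standard unbiased-estimator scale for 1-bit sign sketches (an assumption
    # here; the repo's exact scaling may differ).
    m, d = len(S), len(S[0])
    scale = norm * math.sqrt(math.pi / 2.0) / m
    return [scale * sum(signs[j] * S[j][i] for j in range(m)) for i in range(d)]
```

With enough sketch rows m, the decoded correction points in nearly the same direction as the true residual, which is what the cosine-similarity gate checks.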
# TurboQuant

KV cache compression for local inference on an M4 Max MacBook Pro.
## What
TurboQuant (Google, ICLR 2026) is a three-stage KV cache compression method:
- PolarQuant — WHT rotation + polar coordinates + Lloyd-Max codebook (~4.2x compression)
- QJL — 1-bit quantized Johnson-Lindenstrauss residual correction
- TurboQuant — PolarQuant + QJL = ~3.5 bits/channel, zero accuracy loss
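The per-vector numbers imply the compression ratio directly. A quick check, where HEAD_DIM = 128 is an assumption (128 FP32 channels × 4 bytes gives the 512-byte baseline):

```python
# Hypothetical sizing check; head dimension is an assumption for illustration.
HEAD_DIM = 128
FP32_BYTES = HEAD_DIM * 4        # 512 bytes per uncompressed vector
TURBOQUANT_BYTES = 76            # per-vector storage after PolarQuant + QJL

ratio = FP32_BYTES / TURBOQUANT_BYTES
print(f"{ratio:.1f}x")           # -> 6.7x, clearing the >6x accuracy gate
```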
## Why

Unlock 64K–128K context on qwen3.5:27b within 32 GB of unified memory. A 27B model at 128K context with TurboQuant beats a 72B model at Q2 with only 8K context.
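A back-of-envelope sketch of why ~3.5 bits/channel makes 128K context plausible in 32 GB. All model-shape numbers below (layers, KV heads, head dimension) are assumptions for illustration; qwen3.5:27b's exact configuration is not stated in this README.

```python
# Hypothetical KV cache sizing; every constant here is an assumed model shape.
LAYERS = 48          # assumed transformer layers
KV_HEADS = 8         # assumed GQA key/value heads
HEAD_DIM = 128       # assumed head dimension
CTX = 128 * 1024     # target context length in tokens

def kv_bytes(bytes_per_channel):
    # Keys + values, per token, across all layers, for the whole context.
    per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * bytes_per_channel
    return CTX * per_token

fp16 = kv_bytes(2.0)            # 16-bit cache: 2 bytes/channel
tq = kv_bytes(3.5 / 8.0)        # TurboQuant: ~3.5 bits/channel
print(f"FP16: {fp16 / 2**30:.1f} GiB, TurboQuant: {tq / 2**30:.2f} GiB")
```

Under these assumptions an FP16 cache alone would eat roughly 24 GiB at 128K context, while the TurboQuant cache stays near 5 GiB, leaving headroom for the weights.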
## Status

See the open issues for current progress.
## Roles
- Strago: Build spec author
- Cid: Implementation, benchmarks, deployment
- Locke: Research support, upstream watch
- John: Quality review
- Frankie: Coordination
## Source Repos
- TheTom/llama-cpp-turboquant — llama.cpp fork with Metal
- TheTom/turboquant_plus — Reference impl, 511+ tests
- amirzandieh/QJL — The QJL authors' reference code (CUDA)
- rachittshah/mlx-turboquant — MLX fallback
## Docs
- Project Status — Full project status and build specification
## Languages

- Python — 90.5%
- C++ — 6.2%
- Metal — 2.4%
- CMake — 0.9%