# TurboQuant

KV cache compression for local inference on M4 Max MacBook Pro.

## What

TurboQuant (Google, ICLR 2026) is a three-stage KV cache compression method:

1. **PolarQuant** — WHT rotation + polar coordinates + Lloyd-Max codebook (~4.2x compression)
2. **QJL** — 1-bit quantized Johnson-Lindenstrauss residual correction
3. **TurboQuant** — PolarQuant + QJL = ~3.5 bits/channel, zero accuracy loss

## Why

Unlock 64K-128K context on qwen3.5:27b within 32GB unified memory. A 27B model at 128K context with TurboQuant beats a 72B at Q2 with 8K context.

## Status

See [issues](http://143.198.27.163:3000/Timmy_Foundation/turboquant/issues) for current progress.

## Roles

- **Strago:** Build spec author
- **Cid:** Implementation, benchmarks, deployment
- **Locke:** Research support, upstream watch
- **John:** Quality review
- **Frankie:** Coordination

## Source Repos

- [TheTom/llama-cpp-turboquant](https://github.com/TheTom/llama-cpp-turboquant) — llama.cpp fork with Metal
- [TheTom/turboquant_plus](https://github.com/TheTom/turboquant_plus) — Reference impl, 511+ tests
- [amirzandieh/QJL](https://github.com/amirzandieh/QJL) — Author QJL code (CUDA)
- [rachittshah/mlx-turboquant](https://github.com/rachittshah/mlx-turboquant) — MLX fallback

## Docs

- [BUILD-SPEC.md](BUILD-SPEC.md) — Full build specification (Strago, v2.2)
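## Appendix: illustrative sketches

The three PolarQuant ingredients listed under "What" (WHT rotation, pairing channels into polar coordinates, a Lloyd-Max scalar codebook) can be sketched in NumPy. This is a minimal toy to convey the idea, not the paper's algorithm or the API of any repo above; the 128-dim vector, the 16-level codebooks, and all function names are assumptions, and angle wrap-around at ±π is ignored for brevity.

```python
import numpy as np

def fwht(x):
    """Orthonormal fast Walsh-Hadamard transform (len(x) must be a power of 2).

    With the 1/sqrt(n) scaling the transform is its own inverse.
    """
    x = x.copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x / np.sqrt(len(x))

def lloyd_max_1d(samples, n_levels, iters=20):
    """Scalar Lloyd-Max quantizer: 1-D k-means yielding a codebook."""
    codebook = np.quantile(samples, np.linspace(0, 1, n_levels))
    for _ in range(iters):
        idx = np.argmin(np.abs(samples[:, None] - codebook[None, :]), axis=1)
        for k in range(n_levels):
            if np.any(idx == k):
                codebook[k] = samples[idx == k].mean()
    return np.sort(codebook)

def polar_quant(v, r_codebook, theta_codebook):
    """Rotate with the WHT, pair channels, quantize (radius, angle), reconstruct."""
    u = fwht(v)
    x, y = u[0::2], u[1::2]
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    rq = r_codebook[np.argmin(np.abs(r[:, None] - r_codebook), axis=1)]
    tq = theta_codebook[np.argmin(np.abs(theta[:, None] - theta_codebook), axis=1)]
    u_hat = np.empty_like(u)
    u_hat[0::2] = rq * np.cos(tq)
    u_hat[1::2] = rq * np.sin(tq)
    return fwht(u_hat)  # WHT is self-inverse under orthonormal scaling

# Demo: quantize a random 128-dim "key" vector with 16-level codebooks
# fitted on its own rotated statistics.
rng = np.random.default_rng(0)
v = rng.standard_normal(128)
u = fwht(v)
r_cb = lloyd_max_1d(np.hypot(u[0::2], u[1::2]), 16)
t_cb = lloyd_max_1d(np.arctan2(u[1::2], u[0::2]), 16)
v_hat = polar_quant(v, r_cb, t_cb)
rel_err = np.linalg.norm(v - v_hat) / np.linalg.norm(v)
```

In the real method the WHT rotation spreads energy evenly across channels so that a single shared codebook works well; the sketch fits per-vector codebooks only to keep the demo self-contained.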
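QJL's 1-bit idea can likewise be sketched: project a vector with a random Gaussian matrix, keep only the sign bits plus the vector's norm, and recover inner products via the Gaussian identity E[sgn(⟨s,k⟩)·s] = sqrt(2/π)·k/‖k‖. The sketch below quantizes a key directly rather than a residual, and its dimensions and function names are assumptions; it is not the CUDA implementation in amirzandieh/QJL.

```python
import numpy as np

def qjl_encode(k, S):
    """1-bit JL sketch of key k: sign bits of the projection plus ||k||."""
    return np.sign(S @ k), np.linalg.norm(k)

def qjl_inner_product(q, sign_bits, k_norm, S):
    """Estimate <q, k> from the 1-bit sketch.

    Uses E[sgn(<s,k>) s] = sqrt(2/pi) * k/||k|| for standard Gaussian s,
    hence the sqrt(pi/2) correction factor.
    """
    m = S.shape[0]
    return k_norm * np.sqrt(np.pi / 2) / m * np.dot(sign_bits, S @ q)

# Demo: estimate a query-key dot product from sign bits alone.
rng = np.random.default_rng(0)
d, m = 128, 4096          # assumed head dim and sketch size
S = rng.standard_normal((m, d))
k = rng.standard_normal(d)
q = rng.standard_normal(d)

bits, k_norm = qjl_encode(k, S)
est = qjl_inner_product(q, bits, k_norm, S)
err = abs(est - q @ k)
```

The estimator's error shrinks like 1/sqrt(m); TurboQuant applies this style of 1-bit correction to the residual left over after PolarQuant, which is why the combined scheme can approach lossless quality at ~3.5 bits/channel.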
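To see why ~3.5 bits/channel is what unlocks 128K context inside the 32GB budget, here is back-of-envelope KV cache arithmetic. The layer/head/dim numbers are illustrative assumptions for a 27B-class GQA model, not qwen3.5:27b's actual config.

```python
def kv_cache_bytes(context_len, n_layers=48, n_kv_heads=8,
                   head_dim=128, bits_per_channel=16):
    """Bytes to store K and V for every token at every layer.

    All default config values are illustrative assumptions.
    """
    channels = n_layers * n_kv_heads * head_dim * 2  # 2 = K and V
    return context_len * channels * bits_per_channel / 8

ctx = 128 * 1024  # 128K tokens
fp16 = kv_cache_bytes(ctx, bits_per_channel=16)   # uncompressed baseline
tq = kv_cache_bytes(ctx, bits_per_channel=3.5)    # TurboQuant's ~3.5 bits

print(f"FP16 KV cache:       {fp16 / 2**30:.2f} GiB")  # 24.00 GiB
print(f"TurboQuant KV cache: {tq / 2**30:.2f} GiB")    # 5.25 GiB
```

Under these assumptions the FP16 cache alone (24 GiB) would leave no room for the quantized 27B weights in 32GB of unified memory, while the ~3.5-bit cache (5.25 GiB) does.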