Add build spec v2.2 and README
TurboQuant KV cache compression for M4 Max local inference. Spec by Strago, triaged into 16 issues across 4 phases. Ref #1
README.md
@@ -1,3 +1,32 @@
# TurboQuant

KV cache compression (PolarQuant + QJL) for local inference on an M4 Max MacBook Pro via llama.cpp/Ollama.

## What

TurboQuant (Google, ICLR 2026) is a three-stage KV cache compression method:
1. **PolarQuant** — WHT rotation + polar coordinates + Lloyd-Max codebook (~4.2x compression)
2. **QJL** — 1-bit quantized Johnson-Lindenstrauss residual correction
3. **TurboQuant** — PolarQuant + QJL = ~3.5 bits/channel, zero accuracy loss
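The three stages can be sketched end to end in NumPy. This is a minimal toy, not the implementation from the repos below: the bit widths are arbitrary, a uniform grid stands in for the Lloyd-Max codebook, and the 1-bit JL decode uses the standard Gaussian sign-estimator identity. All function names here (`fwht`, `polar_quant`, `qjl_correct`) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def fwht(x):
    """Orthonormal fast Walsh-Hadamard transform; len(x) must be a power of 2.
    With the 1/sqrt(n) scaling the transform is its own inverse."""
    x = x.astype(float).copy()
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x / np.sqrt(n)

def polar_quant(x, theta_bits=6, r_bits=4):
    """Stage 1 (PolarQuant): rotate with the WHT, pair channels into 2-D
    points, quantize angle and radius. Uniform grids stand in for the
    paper's Lloyd-Max codebook."""
    z = fwht(x)
    pts = z.reshape(-1, 2)
    r = np.hypot(pts[:, 0], pts[:, 1])
    theta = np.arctan2(pts[:, 1], pts[:, 0])
    theta_step = 2 * np.pi / 2**theta_bits
    r_step = (r.max() + 1e-12) / (2**r_bits - 1)
    theta_q = np.round(theta / theta_step) * theta_step
    r_q = np.round(r / r_step) * r_step
    z_hat = np.column_stack([r_q * np.cos(theta_q), r_q * np.sin(theta_q)]).ravel()
    return fwht(z_hat)  # orthonormal WHT is self-inverse

def qjl_correct(x, x_hat, m=2048):
    """Stage 2 (QJL): keep a 1-bit JL sketch of the residual (m sign bits
    plus one scalar norm) and decode it with the Gaussian sign estimator
    E[sign(<s, v>) * s] = sqrt(2/pi) * v / ||v||."""
    res = x - x_hat
    S = rng.standard_normal((m, len(x)))
    bits = np.sign(S @ res)
    res_hat = np.linalg.norm(res) * np.sqrt(np.pi / 2) / m * (S.T @ bits)
    return x_hat + res_hat  # Stage 3 (TurboQuant) = PolarQuant + QJL residual

x = rng.standard_normal(64)
x_polar = polar_quant(x)
x_turbo = qjl_correct(x, x_polar)
err_polar = np.linalg.norm(x - x_polar)
err_turbo = np.linalg.norm(x - x_turbo)
```

With these toy settings the residual sketch cuts the reconstruction error well below PolarQuant alone; amortizing the sign bits over the vector is what pushes the combined scheme toward the ~3.5 bits/channel figure.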

## Why

Unlock 64K-128K context on qwen3.5:27b within 32GB unified memory.
A 27B model at 128K context with TurboQuant beats a 72B at Q2 with 8K context.
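A back-of-envelope sizing makes the point. The model dimensions below are assumptions for a 27B-class GQA model (48 layers, 8 KV heads, head dim 128), not published qwen3.5:27b numbers:

```python
# KV cache size = 2 (K and V) * layers * kv_heads * head_dim * tokens * bits/8.
# Dims are assumed for illustration, not taken from the model card.
layers, kv_heads, head_dim = 48, 8, 128
tokens = 128 * 1024                       # 128K context
elems = 2 * layers * kv_heads * head_dim * tokens
fp16_gib = elems * 16 / 8 / 2**30         # 16 bits/channel
turbo_gib = elems * 3.5 / 8 / 2**30       # ~3.5 bits/channel
print(f"FP16 KV cache: {fp16_gib:.2f} GiB, TurboQuant: {turbo_gib:.2f} GiB")
# -> FP16 KV cache: 24.00 GiB, TurboQuant: 5.25 GiB
```

Under these assumed dims, an FP16 KV cache at 128K would consume ~24 GiB before counting weights, while ~3.5 bits/channel drops it to ~5 GiB, which is what makes 64K-128K plausible inside 32GB.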

## Status

See [issues](http://143.198.27.163:3000/Timmy_Foundation/turboquant/issues) for current progress.

## Roles

- **Strago:** Build spec author
- **Cid:** Implementation, benchmarks, deployment
- **Locke:** Research support, upstream watch
- **John:** Quality review
- **Frankie:** Coordination

## Source Repos

- [TheTom/llama-cpp-turboquant](https://github.com/TheTom/llama-cpp-turboquant) — llama.cpp fork with Metal support
- [TheTom/turboquant_plus](https://github.com/TheTom/turboquant_plus) — Reference implementation, 511+ tests
- [amirzandieh/QJL](https://github.com/amirzandieh/QJL) — Authors' QJL code (CUDA)
- [rachittshah/mlx-turboquant](https://github.com/rachittshah/mlx-turboquant) — MLX fallback

## Docs

- [BUILD-SPEC.md](BUILD-SPEC.md) — Full build specification (Strago, v2.2)