# TurboQuant
KV cache compression for local inference on an M4 Max MacBook Pro.
## What
TurboQuant (Google, ICLR 2026) is a three-stage KV cache compression method:
1. **PolarQuant** — WHT rotation + polar coordinates + Lloyd-Max codebook (~4.2x compression)
2. **QJL** — 1-bit quantized Johnson-Lindenstrauss residual correction
3. **TurboQuant** — PolarQuant + QJL = ~3.5 bits/channel, zero accuracy loss
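
The three stages above can be sketched on a single head-dimension vector. This is a minimal illustration, not the paper's method verbatim: uniform grids stand in for the trained Lloyd-Max codebooks, and the bit widths and JL sketch size are illustrative choices.

```python
import numpy as np

def fwht(x):
    # Iterative Walsh-Hadamard transform with orthonormal scaling,
    # so the transform is its own inverse. len(x) must be a power of two.
    y = x.astype(np.float64).copy()
    n, h = len(y), 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = y[j], y[j + h]
                y[j], y[j + h] = a + b, a - b
        h *= 2
    return y / np.sqrt(n)

def polar_quant(v, r_bits=4, theta_bits=4):
    # Stage 1 (PolarQuant): rotate, pair up coordinates, quantize (r, theta).
    # Uniform grids here stand in for the Lloyd-Max codebooks.
    z = fwht(v)
    x, ys = z[0::2], z[1::2]
    r, theta = np.hypot(x, ys), np.arctan2(ys, x)
    r_max = r.max() + 1e-12
    r_q = np.round(r / r_max * (2 ** r_bits - 1))
    t_q = np.round((theta + np.pi) / (2 * np.pi) * (2 ** theta_bits - 1))
    return r_q, t_q, r_max

def polar_dequant(r_q, t_q, r_max, r_bits=4, theta_bits=4):
    r = r_q / (2 ** r_bits - 1) * r_max
    theta = t_q / (2 ** theta_bits - 1) * 2 * np.pi - np.pi
    z = np.empty(2 * len(r))
    z[0::2], z[1::2] = r * np.cos(theta), r * np.sin(theta)
    return fwht(z)  # orthonormal WHT inverts itself

def qjl_encode(residual, S):
    # Stage 2 (QJL): keep only the signs of a Gaussian JL sketch
    # of the residual, plus one scalar norm.
    return np.sign(S @ residual), np.linalg.norm(residual)

def qjl_decode(signs, norm, S):
    # E[s * sign(s.T r)] = sqrt(2/pi) * r / ||r|| for Gaussian rows s,
    # so this rescaled back-projection approximates the residual.
    m = S.shape[0]
    return norm * np.sqrt(np.pi / 2) / m * (S.T @ signs)

rng = np.random.default_rng(0)
d = 128                              # head dimension (power of two)
v = rng.standard_normal(d)
S = rng.standard_normal((8 * d, d))  # JL sketch with m = 8d rows

# Stage 3 (TurboQuant): coarse polar code plus 1-bit residual correction.
r_q, t_q, r_max = polar_quant(v)
coarse = polar_dequant(r_q, t_q, r_max)
signs, norm = qjl_encode(v - coarse, S)
v_hat = coarse + qjl_decode(signs, norm, S)

print(np.linalg.norm(v - coarse) / np.linalg.norm(v))  # coarse-only error
print(np.linalg.norm(v - v_hat) / np.linalg.norm(v))   # after QJL correction
```

The second relative error should come out noticeably below the first: the sign sketch recovers most of what the coarse polar code discards.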
## Why
Unlock 64K-128K context on qwen3.5:27b within 32 GB of unified memory.
A 27B model at 128K context with TurboQuant beats a 72B model at Q2 limited to 8K context.
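
A back-of-envelope calculation shows why compression is the bottleneck. The layer/head numbers below are assumptions for a Qwen-like 27B architecture (48 layers, 8 KV heads with GQA, head_dim 128), not official qwen3.5:27b specs:

```python
# KV cache size at 128K context: fp16 vs ~3.5 bits/channel.
# Architecture parameters are illustrative assumptions.
layers, kv_heads, head_dim = 48, 8, 128
ctx = 128 * 1024

def kv_gib(bits_per_channel):
    # Keys + values per token, summed over all layers.
    channels = 2 * layers * kv_heads * head_dim
    return ctx * channels * bits_per_channel / 8 / 2**30

print(f"fp16 KV cache @ 128K:      {kv_gib(16):.2f} GiB")
print(f"TurboQuant KV cache @ 128K: {kv_gib(3.5):.2f} GiB")
```

Under these assumptions, an fp16 cache alone (~24 GiB) would crowd out the weights on a 32 GB machine, while a ~3.5-bit cache (~5 GiB) leaves room for the quantized model.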
## Status
See [issues](https://forge.alexanderwhitestone.com/Timmy_Foundation/turboquant/issues) for current progress.
## Roles
- **Strago:** Build spec author
- **Cid:** Implementation, benchmarks, deployment
- **Locke:** Research support, upstream watch
- **John:** Quality review
- **Frankie:** Coordination
## Source Repos
- [TheTom/llama-cpp-turboquant](https://github.com/TheTom/llama-cpp-turboquant) — llama.cpp fork with Metal support
- [TheTom/turboquant_plus](https://github.com/TheTom/turboquant_plus) — Reference impl, 511+ tests
- [amirzandieh/QJL](https://github.com/amirzandieh/QJL) — Author QJL code (CUDA)
- [rachittshah/mlx-turboquant](https://github.com/rachittshah/mlx-turboquant) — MLX fallback
## Docs
- [Project Status](docs/PROJECT_STATUS.md) — Full project status and build specification