Alex Payne 0c0c5223c9
Tests #54: Add unit tests for PolarQuant encode/decode
- New tests/test_polar_quant.py: 25 tests covering:
  * Encode/decode roundtrip (cosine similarity across d=128/256/512)
  * Self-inner-product preservation (auto-correlation)
  * Walsh-Hadamard transform orthogonality and norm preservation
  * Codebook correctness (16 centroids, monotonic, centered)
  * Bit packing: 2×4-bit indices per byte (see the packing sketch below)
  * Edge cases: zero, constant, alternating-sign vectors
  * Compression ratio: 4 bits/dimension
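
For reference, a minimal sketch of the 2×4-bit packing convention the packing
tests exercise. The low-nibble-first ordering and the pack_nibbles/unpack_nibbles
names are illustrative assumptions, not the repo's actual helpers:

  def pack_nibbles(indices):
      """Pack 4-bit codebook indices (0..15) two per byte, low nibble first."""
      assert all(0 <= i < 16 for i in indices)
      if len(indices) % 2:
          indices = list(indices) + [0]  # pad odd-length input with a zero index
      return bytes(lo | (hi << 4) for lo, hi in zip(indices[::2], indices[1::2]))

  def unpack_nibbles(packed, n):
      """Inverse of pack_nibbles; n is the original number of indices."""
      flat = [nib for b in packed for nib in (b & 0x0F, b >> 4)]
      return flat[:n]

  # Roundtrip at 4 bits/dimension: len(packed) == ceil(n / 2).
  idx = [3, 15, 0, 7, 9]
  assert unpack_nibbles(pack_nibbles(idx), len(idx)) == idx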

Implementation: pure-Python reference (numpy is not required by most tests,
but is used where it makes the vector math more convenient). All thresholds
are calibrated against the C++ llama-turbo.cpp baseline (roundtrip_test.cpp).
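
As a reference point for the WHT tests, the normalized transform fits in a few
lines of pure Python; this is a sketch of the standard algorithm, not the test
file's actual code:

  def fwht(x):
      """In-place normalized fast Walsh-Hadamard transform; len(x) must be a
      power of two. Scaling each butterfly stage by 1/sqrt(2) makes the
      transform orthonormal: it preserves norms and inner products, and it
      is its own inverse."""
      n = len(x)
      assert n and n & (n - 1) == 0
      inv_sqrt2 = 2 ** -0.5
      h = 1
      while h < n:
          for i in range(0, n, 2 * h):
              for j in range(i, i + h):
                  a, b = x[j], x[j + h]
                  x[j], x[j + h] = (a + b) * inv_sqrt2, (a - b) * inv_sqrt2
          h *= 2
      return x

Orthogonality and norm preservation then reduce to asserting that fwht(fwht(v))
recovers v and that sum(c * c for c in v) is unchanged, up to float tolerance.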

Closes #54
2026-04-26 06:45:00 -04:00

TurboQuant

KV cache compression for local inference on an M4 Max MacBook Pro.

What

TurboQuant (Google, ICLR 2026) is a KV cache compression method built from two quantization stages and their combination:

  1. PolarQuant — WHT rotation + polar coordinates + Lloyd-Max codebook (~4.2x compression; see the sketch after this list)
  2. QJL — 1-bit quantized Johnson-Lindenstrauss residual correction
  3. TurboQuant — PolarQuant + QJL = ~3.5 bits/channel, zero accuracy loss
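
A rough numpy sketch of the two stages above. The uniform 16-entry angle
codebook, the pairing of adjacent channels, and all function names are
illustrative assumptions here, not the paper's exact formulation:

  import numpy as np

  def wht(v):
      """Orthonormal Walsh-Hadamard rotation; len(v) must be a power of two.
      Orthonormal and symmetric, so it is its own inverse."""
      v = np.asarray(v, dtype=np.float64).copy()
      h = 1
      while h < len(v):
          for i in range(0, len(v), 2 * h):
              a, b = v[i:i + h].copy(), v[i + h:i + 2 * h].copy()
              v[i:i + h], v[i + h:i + 2 * h] = a + b, a - b
          h *= 2
      return v / np.sqrt(len(v))

  # 16 angle centroids, uniform over (-pi, pi] for simplicity; a Lloyd-Max fit
  # would instead place them at conditional means of the angle distribution.
  CODEBOOK = (np.arange(16) + 0.5) / 16.0 * 2 * np.pi - np.pi

  def polar_encode(v):
      """Stage 1: rotate, pair channels as (x, y), keep magnitude + 4-bit angle."""
      r = wht(v)
      x, y = r[0::2], r[1::2]
      radius, theta = np.hypot(x, y), np.arctan2(y, x)
      # nearest centroid under circular (wrap-around) distance
      d = np.abs((theta[:, None] - CODEBOOK[None, :] + np.pi) % (2 * np.pi) - np.pi)
      return radius, np.argmin(d, axis=1).astype(np.uint8)

  def polar_decode(radius, idx):
      theta = CODEBOOK[idx]
      r = np.empty(2 * len(idx))
      r[0::2], r[1::2] = radius * np.cos(theta), radius * np.sin(theta)
      return wht(r)  # the rotation undoes itself

  def qjl_residual(residual, rng, m=64):
      """Stage 2 idea: a 1-bit Johnson-Lindenstrauss sketch of the quantization
      residual keeps only the signs of m random projections."""
      S = rng.standard_normal((m, len(residual)))
      return np.sign(S @ residual)

The bit accounting behind the ~4.2x and ~3.5 bits/channel figures comes from the
paper's full scheme (quantized magnitudes, scales, and the QJL correction), not
from this simplified sketch.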

Why

Unlock 64K-128K context on qwen3.5:27b within 32GB unified memory. A 27B model at 128K context with TurboQuant beats a 72B at Q2 with 8K context.
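
Back-of-envelope for why this matters. The layer/head/dim values below are
placeholders for a generic ~27B GQA model, not published qwen3.5:27b specs:

  # KV cache bytes = 2 (K and V) * layers * kv_heads * head_dim * context * B/channel
  layers, kv_heads, head_dim = 48, 8, 128  # ASSUMED dims for a ~27B GQA model
  ctx = 131_072                            # 128K tokens
  for name, bits in [("fp16", 16.0), ("TurboQuant", 3.5)]:
      gib = 2 * layers * kv_heads * head_dim * ctx * (bits / 8) / 2**30
      print(f"{name:>10}: {gib:6.2f} GiB")
  # With these assumed dims: fp16 = 24.00 GiB (no room left for the weights in
  # 32GB), TurboQuant at ~3.5 bits/channel = 5.25 GiB (fits beside 4-bit weights).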

Status

See issues for current progress.

Roles

  • Strago: Build spec author
  • Cid: Implementation, benchmarks, deployment
  • Locke: Research support, upstream watch
  • John: Quality review
  • Frankie: Coordination

Source Repos

Docs

Description
TurboQuant KV cache compression for local inference — PolarQuant + QJL on M4 Max via llama.cpp/Ollama. Build spec from Strago, build by Cid, coordination by Frankie.