Timmy 8b6a4dca69
feat: QJL residual correction — implementation, Metal kernels, accuracy gates
Implements Issue #66: QJL (Quantized Johnson-Lindenstrauss) residual
correction for full TurboQuant compression (PolarQuant + QJL).

New files:
- llama-turbo-qjl.h — QJL API with encode/decode and utility functions
- llama-turbo-qjl.cpp — CPU reference implementation
- ggml-metal-qjl.metal — Metal GPU kernels for encode/decode/fused dequant
- tests/qjl_accuracy_test.cpp — 8 accuracy gate tests
- docs/QJL_IMPLEMENTATION_PLAN.md — full implementation plan

Algorithm:
- Encode: PolarQuant → compute residual → JL projection → 1-bit sign quant
- Decode: PolarQuant reconstruct → JL correction → add
- Storage: 76 bytes/vector (vs 512 bytes FP32 = 6.7x compression)
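The QJL half of the encode/decode path can be sketched in NumPy. This is an illustrative reconstruction of 1-bit quantized Johnson-Lindenstrauss, not the repo's API: the projection shape, the fixed seed, and storing the residual norm as a single scalar are all assumptions.

```python
import numpy as np

def make_projection(d, m, seed=0):
    # A fixed-seed Gaussian projection keeps the encode deterministic.
    rng = np.random.default_rng(seed)
    return rng.standard_normal((m, d)).astype(np.float32)

def qjl_encode(residual, S):
    # 1-bit sign quantization of the JL projection; the residual
    # norm is kept as one extra scalar for rescaling at decode.
    signs = np.where(S @ residual >= 0, 1, -1).astype(np.int8)
    return signs, float(np.linalg.norm(residual))

def qjl_decode(signs, norm, S):
    # For Gaussian rows s, E[s * sign(s @ r)] = sqrt(2/pi) * r/||r||,
    # so averaging the signed rows estimates the residual direction;
    # the sqrt(pi/2) factor undoes the sqrt(2/pi) shrinkage.
    m = S.shape[0]
    direction = (S.T @ signs.astype(np.float32)) / m
    return norm * np.sqrt(np.pi / 2.0) * direction
```

With a 128-dim residual and a few thousand projection rows, the decoded correction tracks the residual direction closely, and the estimate tightens as the projection dimension m grows.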

Accuracy gates (all passing):
- Cosine similarity ≥ 0.95 (direction preservation)
- Max abs error ≤ 0.8, mean abs error ≤ 0.2
- Deterministic encode (reproducible)
- Compression ratio > 6x vs FP32
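A minimal version of these four gates (the real tests live in tests/qjl_accuracy_test.cpp; the function name and signature here are hypothetical):

```python
import numpy as np

def check_accuracy_gates(original, decoded, compressed_bytes, raw_bytes):
    # Thresholds taken from the gates listed above.
    err = np.abs(original - decoded)
    cos = float(original @ decoded /
                (np.linalg.norm(original) * np.linalg.norm(decoded)))
    return {
        "cosine_sim": cos >= 0.95,              # direction preservation
        "max_abs_err": float(err.max()) <= 0.8,
        "mean_abs_err": float(err.mean()) <= 0.2,
        "compression": raw_bytes / compressed_bytes > 6.0,
    }
```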

Closes #66
2026-04-15 23:59:51 -04:00

TurboQuant

KV cache compression for local inference on M4 Max MacBook Pro.

What

TurboQuant (Google, ICLR 2026) is a three-stage KV cache compression method:

  1. PolarQuant — WHT rotation + polar coordinates + Lloyd-Max codebook (~4.2x compression)
  2. QJL — 1-bit quantized Johnson-Lindenstrauss residual correction
  3. TurboQuant — PolarQuant + QJL = ~3.5 bits/channel, zero accuracy loss
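The three pieces compose as follows. Sketch only: a crude rounding quantizer stands in for PolarQuant (no WHT rotation, polar coordinates, or Lloyd-Max codebook), and the QJL decode assumes a Gaussian projection with the residual norm stored alongside the sign bits.

```python
import numpy as np

def coarse_quantize(x, step=0.25):
    # Stand-in for PolarQuant: any coarse quantizer whose small
    # residual QJL can then correct.
    return np.round(x / step) * step

def turbo_encode(x, S):
    base = coarse_quantize(x)           # stage 1: coarse quantization
    residual = x - base                 # stage 2: QJL on the residual
    signs = np.where(S @ residual >= 0, 1, -1).astype(np.int8)
    return base, signs, float(np.linalg.norm(residual))

def turbo_decode(base, signs, norm, S):
    # Reconstruct the coarse value, then add the 1-bit JL correction.
    m = S.shape[0]
    correction = norm * np.sqrt(np.pi / 2.0) * (S.T @ signs.astype(np.float32)) / m
    return base + correction
```

The design point is that the correction only has to recover a residual that is already small, so one bit per projection row goes a long way.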

Why

Unlock 64K-128K context on qwen3.5:27b within 32 GB of unified memory. A 27B model at 128K context with TurboQuant beats a 72B model at Q2 with an 8K context.
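Rough arithmetic behind the memory claim. Illustrative only: the layer count, GQA head count, and head dimension below are assumed for a generic 27B-class model, not qwen3.5's actual config.

```python
# Assumed hyper-parameters for a hypothetical 27B-class model.
n_layers, n_kv_heads, head_dim = 48, 8, 128
ctx = 128 * 1024  # 128K context

def kv_cache_gib(bits_per_channel):
    # K and V caches: 2 * layers * kv_heads * head_dim * context channels.
    channels = 2 * n_layers * n_kv_heads * head_dim * ctx
    return channels * bits_per_channel / 8 / 2**30

fp16_gib = kv_cache_gib(16)    # FP16 KV cache at 128K
turbo_gib = kv_cache_gib(3.5)  # TurboQuant at ~3.5 bits/channel
# Under these assumptions the FP16 cache alone eats most of a 32 GB
# machine, while the TurboQuant cache fits alongside the weights.
```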

Status

See issues for current progress.

Roles

  • Strago: Build spec author
  • Cid: Implementation, benchmarks, deployment
  • Locke: Research support, upstream watch
  • John: Quality review
  • Frankie: Coordination

Source Repos

Docs
