TurboQuant
KV cache compression for local inference on M4 Max MacBook Pro.
What
TurboQuant (Google, ICLR 2026) is a three-stage KV cache compression method:
- PolarQuant — WHT rotation + polar coordinates + Lloyd-Max codebook (~4.2x compression)
- QJL — 1-bit quantized Johnson-Lindenstrauss residual correction
- TurboQuant — PolarQuant + QJL = ~3.5 bits/channel, zero accuracy loss
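The three stages above can be sketched roughly as follows. This is an illustrative NumPy sketch of the encode/decode path, not the repo's actual implementation: a fast Walsh-Hadamard rotation, channel pairs quantized in polar coordinates against codebooks (a trained Lloyd-Max codebook in the real method; plain grids here), and a 1-bit JL sign sketch of the residual. All function names and codebook shapes are invented for illustration.

```python
import numpy as np

def wht(x):
    # Fast Walsh-Hadamard transform; length must be a power of 2.
    # Normalized by sqrt(n), so the transform is its own inverse.
    n = x.shape[-1]
    y = x.astype(np.float32).copy().reshape(-1, n)
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            a = y[:, i:i + h].copy()
            b = y[:, i + h:i + 2 * h].copy()
            y[:, i:i + h] = a + b
            y[:, i + h:i + 2 * h] = a - b
        h *= 2
    return (y / np.sqrt(n)).reshape(x.shape)

def polar_encode(v, r_codebook, theta_codebook):
    # Stage 1: rotate with WHT, pair adjacent channels into 2-D points,
    # then quantize each point's radius and angle to the nearest codeword.
    rot = wht(v).reshape(-1, 2)
    r = np.linalg.norm(rot, axis=1)
    theta = np.arctan2(rot[:, 1], rot[:, 0])
    r_idx = np.argmin(np.abs(r[:, None] - r_codebook[None, :]), axis=1)
    t_idx = np.argmin(np.abs(theta[:, None] - theta_codebook[None, :]), axis=1)
    return r_idx, t_idx

def polar_decode(r_idx, t_idx, r_codebook, theta_codebook):
    # Reconstruct 2-D points from codewords, then undo the rotation
    # (the normalized WHT is its own inverse).
    r = r_codebook[r_idx]
    theta = theta_codebook[t_idx]
    rot = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1).reshape(-1)
    return wht(rot)

def qjl_encode(residual, proj):
    # Stage 2 (QJL): 1-bit Johnson-Lindenstrauss sketch of the residual —
    # project with a random Gaussian matrix and keep only the signs.
    return np.sign(proj @ residual)
```

Combining the ~4-bit polar codes with the 1-bit residual sketch is what lands the average near 3.5 bits/channel.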
Why
Unlock 64K-128K context on qwen3.5:27b within 32 GB of unified memory. A 27B model at 128K context with TurboQuant beats a 72B model quantized to Q2 that can only fit 8K context.
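To make the memory argument concrete, here is back-of-the-envelope KV cache arithmetic. The layer/head/dim numbers below are assumed placeholder values for a 27B-class GQA model, not the verified qwen3.5:27b config:

```python
def kv_cache_gib(seq_len, n_layers, n_kv_heads, head_dim, bits_per_channel):
    # K and V each store n_layers * n_kv_heads * head_dim channels per token.
    bits = 2 * n_layers * n_kv_heads * head_dim * seq_len * bits_per_channel
    return bits / 8 / 2**30

# Assumed 27B-class GQA config (illustrative, not the real qwen3.5:27b numbers).
cfg = dict(n_layers=48, n_kv_heads=8, head_dim=128)

fp16_gib = kv_cache_gib(seq_len=128 * 1024, bits_per_channel=16, **cfg)
tq_gib = kv_cache_gib(seq_len=128 * 1024, bits_per_channel=3.5, **cfg)
print(f"fp16 @128K: {fp16_gib:.2f} GiB, TurboQuant @128K: {tq_gib:.2f} GiB")
# For this config: 24.00 GiB at fp16 vs 5.25 GiB at ~3.5 bits/channel,
# leaving room for the model weights within 32 GB unified memory.
```

The exact savings scale linearly with whatever the real model's layer/head/dim counts are; the ~4.6x ratio (16 / 3.5 bits) is the invariant.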
Status
See issues for current progress.
Roles
- Strago: Build spec author
- Cid: Implementation, benchmarks, deployment
- Locke: Research support, upstream watch
- John: Quality review
- Frankie: Coordination
Source Repos
- TheTom/llama-cpp-turboquant — llama.cpp fork with Metal support
- TheTom/turboquant_plus — Reference implementation, 511+ tests
- amirzandieh/QJL — Authors' QJL code (CUDA)
- rachittshah/mlx-turboquant — MLX fallback
Docs
- BUILD-SPEC.md — Full build specification (Strago, v2.2)