Alexander Whitestone d750ca4224
feat: Safety wrapper and constant-time implementation (#55)
Safety wrapper (llama-turbo.h, llama-turbo.cpp):
- Input validation (dimension must be a power of 2 in [16, 4096])
- Null pointer checks
- Invalid norm detection (NaN/Inf/negative)
- Error codes for all failure modes
- Safe API: polar_quant_encode_turbo4_safe()
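
A hypothetical sketch of the validation the safe API performs before touching any data. The error-code names, `is_pow2_in_range`, and `validate_encode_args` are illustrative assumptions; the real enum and signature live in llama-turbo.h.

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>

// Illustrative error codes (assumed names, not the actual header's).
enum turbo_status {
    TURBO_OK           = 0,
    TURBO_ERR_NULL_PTR = -1,
    TURBO_ERR_BAD_DIM  = -2,  // not a power of 2, or outside [16, 4096]
    TURBO_ERR_BAD_NORM = -3,  // NaN, Inf, or negative
};

// Power-of-2 test via the classic (d & (d - 1)) == 0 trick, plus range bounds.
static bool is_pow2_in_range(size_t d) {
    return d >= 16 && d <= 4096 && (d & (d - 1)) == 0;
}

// All checks run before any encoding work; each failure mode has its own code.
static int validate_encode_args(const float *src, uint8_t *dst,
                                size_t dim, float norm) {
    if (!src || !dst)                         return TURBO_ERR_NULL_PTR;
    if (!is_pow2_in_range(dim))               return TURBO_ERR_BAD_DIM;
    if (!std::isfinite(norm) || norm < 0.0f)  return TURBO_ERR_BAD_NORM;
    return TURBO_OK;
}
```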

Constant-time quantization:
- ct_fabsf: bitwise absolute value (no branches)
- ct_select: bitwise selection (no branches)
- Always examines all 16 centroids
- No data-dependent branches in packing
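
The two branch-free primitives can be sketched as follows; these are assumed implementations matching the names above, and the actual code in llama-turbo.cpp may differ in detail.

```cpp
#include <cstdint>
#include <cstring>

// Absolute value by clearing the IEEE-754 sign bit — no branches, no fabsf call.
static inline float ct_fabsf(float x) {
    uint32_t bits;
    std::memcpy(&bits, &x, sizeof(bits));  // type-pun safely via memcpy
    bits &= 0x7FFFFFFFu;                   // clear the sign bit
    std::memcpy(&x, &bits, sizeof(x));
    return x;
}

// Select a (cond == 1) or b (cond == 0) with a full-width mask — no branches.
static inline uint32_t ct_select(uint32_t cond, uint32_t a, uint32_t b) {
    uint32_t mask = (uint32_t)(-(int32_t)cond);  // 0x00000000 or 0xFFFFFFFF
    return (a & mask) | (b & ~mask);
}
```

The same `ct_select` pattern extends to the centroid scan: all 16 distances are computed unconditionally and the best index is folded in with masks rather than an early-exit branch.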

Metal shader (ggml-metal-turbo.metal):
- Buffer bounds checking on all accesses
- Invalid norm handling (outputs zeros)
- Thread ID validation
- Constant-time dequantization kernel
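
Metal Shading Language is C++-based, so the kernel's guard pattern can be shown as a host-side C++ sketch. This is an assumed reconstruction of the checks listed above, not the shader itself; `dequant_guarded` and its parameters are illustrative, and the real kernel decodes packed codes rather than copying norms.

```cpp
#include <cmath>
#include <cstddef>

// Per-thread guard: validate the thread id against the buffer extent, and
// emit zeros for invalid norms instead of propagating NaN/Inf into the cache.
void dequant_guarded(const float *norms, float *out,
                     size_t tid, size_t n_elems) {
    if (tid >= n_elems) return;                   // thread ID / bounds check
    float norm = norms[tid];
    if (!std::isfinite(norm) || norm < 0.0f) {
        out[tid] = 0.0f;                          // invalid norm -> output zeros
        return;
    }
    out[tid] = norm;  // real kernel would dequantize the packed code here
}
```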

Tests (tests/test_safety.py):
- 15 tests, all passing
- Power of 2 validation
- Dimension bounds checking
- Buffer size verification
- Packing correctness
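
For packing correctness, a plausible turbo4 layout is two 4-bit centroid indices per byte, low nibble first — an assumption for illustration; the actual byte layout is defined in llama-turbo.cpp.

```cpp
#include <cstdint>

// Pack two 4-bit codes into one byte (assumed low-nibble-first layout).
static inline uint8_t pack4(uint8_t lo, uint8_t hi) {
    return (uint8_t)((lo & 0x0F) | ((hi & 0x0F) << 4));
}

// Inverse: recover both 4-bit codes from a packed byte.
static inline void unpack4(uint8_t b, uint8_t &lo, uint8_t &hi) {
    lo = b & 0x0F;
    hi = b >> 4;
}
```

A round-trip property (`unpack4(pack4(lo, hi)) == (lo, hi)` for all 256 pairs) is the kind of exhaustive check test_safety.py can do cheaply at 4 bits.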

Closes #55
2026-04-14 22:14:51 -04:00

TurboQuant

KV cache compression for local inference on M4 Max MacBook Pro.

What

TurboQuant (Google, ICLR 2026) is a three-stage KV cache compression method:

  1. PolarQuant — WHT rotation + polar coordinates + Lloyd-Max codebook (~4.2x compression)
  2. QJL — 1-bit quantized Johnson-Lindenstrauss residual correction
  3. TurboQuant — PolarQuant + QJL = ~3.5 bits/channel, zero accuracy loss
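
Stage 1's front end can be sketched in a few lines, assuming a power-of-2 dimension: an orthonormal fast Walsh-Hadamard rotation, then pairing consecutive coordinates into polar form. The Lloyd-Max codebook that quantizes `(r, theta)` is omitted here, and the function names are illustrative.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// In-place fast Walsh-Hadamard transform; v.size() must be a power of 2.
void fwht(std::vector<float> &v) {
    const size_t n = v.size();
    for (size_t h = 1; h < n; h <<= 1) {
        for (size_t i = 0; i < n; i += h << 1) {
            for (size_t j = i; j < i + h; ++j) {
                float x = v[j], y = v[j + h];   // butterfly
                v[j]     = x + y;
                v[j + h] = x - y;
            }
        }
    }
    const float s = 1.0f / std::sqrt((float)n); // make the rotation orthonormal
    for (auto &x : v) x *= s;
}

// Pair consecutive rotated coordinates and convert each pair to (r, theta);
// PolarQuant then quantizes r and theta against a Lloyd-Max codebook (omitted).
void to_polar(const std::vector<float> &v,
              std::vector<float> &r, std::vector<float> &theta) {
    r.clear(); theta.clear();
    for (size_t i = 0; i + 1 < v.size(); i += 2) {
        r.push_back(std::hypot(v[i], v[i + 1]));
        theta.push_back(std::atan2(v[i + 1], v[i]));
    }
}
```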

Why

Unlock 64K-128K context on qwen3.5:27b within 32 GB of unified memory. A 27B model at 128K context with TurboQuant beats a 72B model at Q2 limited to 8K context.

Status

See issues for current progress.

Roles

  • Strago: Build spec author
  • Cid: Implementation, benchmarks, deployment
  • Locke: Research support, upstream watch
  • John: Quality review
  • Frankie: Coordination

Source Repos

Docs

Description
TurboQuant KV cache compression for local inference — PolarQuant + QJL on M4 Max via llama.cpp/Ollama. Build spec from Strago, build by Cid, coordination by Frankie.