TurboQuant
KV cache compression for local inference on an M4 Max MacBook Pro.
What
TurboQuant (Google, ICLR 2026) is a KV cache compression method built from three parts:
- PolarQuant — WHT rotation + polar coordinates + Lloyd-Max codebook (~4.2x compression)
- QJL — 1-bit quantized Johnson-Lindenstrauss residual correction
- TurboQuant — PolarQuant + QJL = ~3.5 bits/channel, zero accuracy loss
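The pipeline above can be sketched end to end. This is an illustrative toy, not the repo's actual API: uniform quantizers stand in for the Lloyd-Max codebooks, and all function names (`wht`, `polar_quantize`, `qjl_sketch`) are made up for this example.

```python
import numpy as np

def wht(x):
    """Fast Walsh-Hadamard transform along the last axis (length must be a
    power of two). Normalized to be orthonormal, so it is its own inverse."""
    y = np.asarray(x, dtype=np.float64).copy()
    n = y.shape[-1]
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = y[..., i:i + h].copy()
            y[..., i:i + h] = a + y[..., i + h:i + 2 * h]
            y[..., i + h:i + 2 * h] = a - y[..., i + h:i + 2 * h]
        h *= 2
    return y / np.sqrt(n)

def polar_quantize(keys, r_bits=3, theta_bits=4):
    """Stage 1 (PolarQuant): rotate with the WHT, pair adjacent channels into
    polar (r, theta), then quantize both. Uniform quantizers stand in here
    for the Lloyd-Max codebooks of the actual method."""
    rot = wht(keys)
    x, y = rot[..., 0::2], rot[..., 1::2]
    r, theta = np.hypot(x, y), np.arctan2(y, x)
    r_max = r.max() + 1e-12
    r_q = np.round(r / r_max * (2 ** r_bits - 1))
    t_q = np.round((theta + np.pi) / (2 * np.pi) * (2 ** theta_bits - 1))
    return r_q, t_q, r_max

def polar_dequantize(r_q, t_q, r_max, r_bits=3, theta_bits=4):
    r = r_q / (2 ** r_bits - 1) * r_max
    theta = t_q / (2 ** theta_bits - 1) * 2 * np.pi - np.pi
    rot = np.empty(r.shape[:-1] + (2 * r.shape[-1],))
    rot[..., 0::2] = r * np.cos(theta)
    rot[..., 1::2] = r * np.sin(theta)
    return wht(rot)  # undo the rotation (WHT is its own inverse)

def qjl_sketch(residual, sketch_dim=32, seed=0):
    """Stage 2 (QJL): keep only the signs of a random Johnson-Lindenstrauss
    projection of the quantization residual -- 1 bit per sketch dimension."""
    proj = np.random.default_rng(seed).standard_normal((residual.shape[-1], sketch_dim))
    return np.sign(residual @ proj), proj

# Stage 3 (TurboQuant) = PolarQuant codes + QJL sketch of what they missed.
keys = np.random.default_rng(1).standard_normal((4, 64))
r_q, t_q, r_max = polar_quantize(keys)
approx = polar_dequantize(r_q, t_q, r_max)
signs, proj = qjl_sketch(keys - approx)
rel_err = np.linalg.norm(keys - approx) / np.linalg.norm(keys)
```

Even with these crude uniform codebooks the polar stage alone keeps the relative reconstruction error small; the real method's Lloyd-Max codebooks plus the 1-bit residual sketch are what close the remaining gap at ~3.5 bits/channel.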
Why
Unlock 64K-128K context on qwen3.5:27b within 32GB unified memory. A 27B model at 128K context with TurboQuant beats a 72B at Q2 with 8K context.
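The memory claim can be checked with back-of-envelope arithmetic. The model dimensions below (48 layers, 8 KV heads of dim 128) are illustrative GQA-style assumptions, not qwen3.5:27b's published config:

```python
def kv_cache_gib(ctx_len, n_layers=48, n_kv_heads=8, head_dim=128, bits_per_val=16):
    """KV cache size in GiB: K and V tensors across all layers and positions."""
    elems = 2 * n_layers * n_kv_heads * head_dim * ctx_len  # 2 = K and V
    return elems * bits_per_val / 8 / 2 ** 30

fp16_gib = kv_cache_gib(128_000)                  # ~23.4 GiB at FP16
tq_gib = kv_cache_gib(128_000, bits_per_val=3.5)  # ~5.1 GiB at ~3.5 bits/channel
```

Under these assumptions, an FP16 KV cache at 128K context cannot fit next to the 27B weights in 32 GB of unified memory, while the ~3.5-bit TurboQuant cache leaves ample headroom.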
Status
See issues for current progress.
Roles
- Strago: Build spec author
- Cid: Implementation, benchmarks, deployment
- Locke: Research support, upstream watch
- John: Quality review
- Frankie: Coordination
Source Repos
- TheTom/llama-cpp-turboquant — llama.cpp fork with Metal support
- TheTom/turboquant_plus — Reference implementation, 511+ tests
- amirzandieh/QJL — Authors' QJL code (CUDA)
- rachittshah/mlx-turboquant — MLX fallback
Docs
- Project Status — Full project status and build specification