Alexander Whitestone 8946f0fef2
feat: Auto-select TurboQuant preset based on available memory (#97)
turboquant/auto_select.py:
- Preset selection: turboquant_k8v4 (8+ GB overhead), turboquant_4bit_nc
  (4+ GB), turboquant_3bit_nc (2+ GB), q4_0 (fallback).
- SystemInfo.detect(): macOS (sysctl/vm_stat), Linux (/proc/meminfo +
  nvidia-smi), fallback (psutil).
- auto_select(): Full pipeline — detect hardware, check config override,
  select preset, populate env vars + server flags.
- Config file: $HERMES_HOME/turboquant.json with preset_override and
  context_length. save_config() merges with existing.
- SelectionResult: dataclass with to_dict(), env_vars, server_flags,
  warnings for low headroom / overcommitted memory.
- CLI: --model-size, --json, --shell, --list, --detect-only, --preset.
- format_env_commands(): Shell export output for quick deployment.
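The preset ladder described above can be sketched as follows. Preset names and thresholds are taken from the commit message; the function name and signature are hypothetical stand-ins, not the actual turboquant.auto_select API.

```python
# Hypothetical sketch of the preset ladder: pick the most aggressive
# TurboQuant preset that fits in the memory left over after the model
# weights ("overhead"). Preset names match the commit message; the helper
# itself is illustrative, not the real turboquant API.

def select_preset(overhead_gb: float) -> str:
    """Return a preset name for the given free-memory overhead in GB."""
    if overhead_gb >= 8:
        return "turboquant_k8v4"
    if overhead_gb >= 4:
        return "turboquant_4bit_nc"
    if overhead_gb >= 2:
        return "turboquant_3bit_nc"
    return "q4_0"  # conservative fallback

if __name__ == "__main__":
    for gb in (16, 8, 4, 2, 1):
        print(f"{gb} GB overhead -> {select_preset(gb)}")
```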

turboquant/__init__.py:
- Package init re-exporting public API.

tests/test_auto_select.py (35 tests):
- Preset selection: overhead thresholds, boundary conditions, zero model.
- vLLM requirement filtering.
- SelectionResult: to_dict, env_vars, server_flags.
- Preset definitions: required fields, quality order consistency.
- SystemInfo detection.
- Config: load, save, merge, missing file.
- Auto-select: override, config file override, mocked detection.
- Issue spec: exact threshold tests from #97.
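Boundary-condition tests like those listed above might look like the following sketch. It uses a local stand-in for the selector that mirrors the thresholds from the commit message; the real turboquant.auto_select API may differ.

```python
# Illustrative boundary tests in the spirit of the suite described above.
# select_preset here is a stand-in mirroring the commit's thresholds, not
# the actual module under test.

def select_preset(overhead_gb: float) -> str:
    if overhead_gb >= 8:
        return "turboquant_k8v4"
    if overhead_gb >= 4:
        return "turboquant_4bit_nc"
    if overhead_gb >= 2:
        return "turboquant_3bit_nc"
    return "q4_0"

def test_exact_thresholds():
    # Exact boundaries land on the higher-quality preset.
    assert select_preset(8.0) == "turboquant_k8v4"
    assert select_preset(4.0) == "turboquant_4bit_nc"
    assert select_preset(2.0) == "turboquant_3bit_nc"

def test_just_below_thresholds():
    # Just below each boundary falls to the next preset down.
    assert select_preset(7.9) == "turboquant_4bit_nc"
    assert select_preset(3.9) == "turboquant_3bit_nc"
    assert select_preset(1.9) == "q4_0"

if __name__ == "__main__":
    test_exact_thresholds()
    test_just_below_thresholds()
    print("ok")
```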

Closes #97

TurboQuant

KV cache compression for local inference on M4 Max MacBook Pro.

What

TurboQuant (Google, ICLR 2026) is a three-stage KV cache compression method:

  1. PolarQuant — WHT rotation + polar coordinates + Lloyd-Max codebook (~4.2x compression)
  2. QJL — 1-bit quantized Johnson-Lindenstrauss residual correction
  3. TurboQuant — PolarQuant + QJL = ~3.5 bits/channel with near-zero accuracy loss
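The ~3.5 bits/channel figure implies the following back-of-the-envelope compression ratio versus an uncompressed FP16 KV cache (16 bits per channel):

```python
# Compression ratio implied by the stage list above: FP16 stores 16 bits
# per KV-cache channel, TurboQuant ~3.5 bits (figure from this README).
fp16_bits = 16.0
turboquant_bits = 3.5
ratio = fp16_bits / turboquant_bits
print(f"~{ratio:.1f}x smaller KV cache than FP16")
```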

Why

Unlock 64K-128K context on qwen3.5:27b within 32GB unified memory. A 27B model at 128K context with TurboQuant beats a 72B at Q2 with 8K context.
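A rough sizing sketch shows why this matters for 32GB of unified memory. The layer/head/dim values below are assumed, illustrative numbers for a GQA 27B-class model, not the actual qwen3.5:27b configuration:

```python
# Rough KV-cache sizing behind the "Why": memory cost of 128K context at
# FP16 vs ~3.5 bits/channel. Architecture numbers are ASSUMED placeholders
# for a GQA 27B-class model, not qwen3.5:27b's real config.
layers, kv_heads, head_dim = 48, 8, 128        # assumed architecture
seq_len = 128 * 1024                           # 128K tokens
channels = 2 * layers * kv_heads * head_dim * seq_len  # K and V tensors

fp16_gib = channels * 2 / 2**30                # 2 bytes per channel
tq_gib = channels * 3.5 / 8 / 2**30            # ~3.5 bits per channel

print(f"FP16 KV cache:       {fp16_gib:.2f} GiB")
print(f"TurboQuant KV cache: {tq_gib:.2f} GiB")
```

Under these assumptions the FP16 cache alone would crowd out the model weights, while the compressed cache leaves headroom.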

Status

See issues for current progress.

Roles

  • Strago: Build spec author
  • Cid: Implementation, benchmarks, deployment
  • Locke: Research support, upstream watch
  • John: Quality review
  • Frankie: Coordination

Source Repos

Docs

Description
TurboQuant KV cache compression for local inference — PolarQuant + QJL on M4 Max via llama.cpp/Ollama. Build spec from Strago, build by Cid, coordination by Frankie.