Alexander Payne bc553c99a9
feat: Create llama.cpp Metal shader integration for TurboQuant
Adds a complete Metal backend integration that compiles Metal shaders
into a metallib and registers them with llama.cpp's Metal runtime.

Key changes:
 - ggml-metal-turbo.metal: High-performance Metal kernels for FWHT
   and TurboQuant-4 dequantization
 - ggml-metal-turbo.{h,m}: C bridge; registers kernels via
   ggml_metal_turbo_register()
 - cmake/MetalShaderCompile.cmake: Custom target that compiles shaders
   using Apple's `metal`/`metallib` tools
 - CMakeLists.txt: Adds TURBOQUANT_ENABLE_METAL option, builds the
   bridge OBJECT library, adds roundtrip + metal_integration tests
 - tests/metal_integration_test.cpp: Verifies metallib artifact exists
 - .gitea/workflows/smoke.yml: New macOS job validates Metal shader
   compilation on CI (metal-macos)
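A CPU reference makes the FWHT kernel in this list testable end to end. The sketch below is a hypothetical NumPy reference (it is not part of this commit), assuming the Metal kernel computes the orthonormal Walsh-Hadamard transform on power-of-two-length vectors:

```python
import numpy as np

def fwht(x):
    """Orthonormal fast Walsh-Hadamard transform, O(n log n).

    Assumes len(x) is a power of two, matching the restriction a
    Metal FWHT kernel over head_dim would typically impose.
    """
    x = np.asarray(x, dtype=np.float64).copy()
    n = len(x)
    assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        # butterfly pass: combine pairs h apart
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x / np.sqrt(n)  # orthonormal scaling: fwht is its own inverse
```

With the 1/sqrt(n) scaling the transform is an involution, so a round-trip test (rotate, quantize, dequantize, rotate back) is straightforward to express against the metallib output.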

Acceptance criteria:
 [x] Metal shaders compile without errors (validated by the macOS CI job)
 [x] CI validates shader compilation on macOS (metal-macos job)
 [x] Shaders are registered and ready once the Metal backend initializes,
     so llama-bench can later be run with the turbo4 KV type.

Closes #75
2026-04-26 05:04:03 -04:00

TurboQuant

KV cache compression for local inference on M4 Max MacBook Pro.

What

TurboQuant (Google, ICLR 2026) is a three-stage KV cache compression method:

  1. PolarQuant — WHT rotation + polar coordinates + Lloyd-Max codebook (~4.2x compression)
  2. QJL — 1-bit quantized Johnson-Lindenstrauss residual correction
  3. TurboQuant — PolarQuant + QJL = ~3.5 bits/channel, zero accuracy loss
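As a rough illustration of stage 2 (all names and shapes here are hypothetical, not taken from the repo): QJL stores one sign bit per projected dimension plus the residual's norm, and recovers inner products against a query from those bits. A minimal NumPy sketch, assuming a Gaussian JL projection:

```python
import numpy as np

def qjl_encode(residual, proj):
    """Encode a residual vector as 1-bit codes.

    proj: (m, d) Gaussian projection. Returns (+/-1 sign bits, ||residual||) --
    m bits plus one scalar instead of d full-precision channels.
    """
    return np.sign(proj @ residual), np.linalg.norm(residual)

def qjl_dot_estimate(bits, norm, query, proj):
    """Estimate <query, residual> from the sign bits.

    For a Gaussian row s, E[sign(s.k)(s.q)] = sqrt(2/pi) * <q, k/||k||>,
    so rescaling the bit/projection correlation recovers the dot product.
    """
    m = proj.shape[0]
    return norm * np.sqrt(np.pi / 2) * (bits @ (proj @ query)) / m
```

With m in the thousands the estimate concentrates around the true inner product, which is why a 1-bit residual code can correct PolarQuant's quantization error at very low cost per channel.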

Why

The goal: unlock 64K-128K context for qwen3.5:27b within 32 GB of unified memory. A 27B model at 128K context with TurboQuant beats a 72B model at Q2 that can only fit 8K of context.
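The memory math behind that claim can be sketched. The dimensions below are illustrative assumptions (the actual qwen3.5:27b layer count, KV-head count, and head size may differ):

```python
def kv_cache_gib(ctx, n_layers, n_kv_heads, head_dim, bits_per_channel):
    """KV cache size in GiB: K and V, every layer, every cached position."""
    elems = 2 * n_layers * n_kv_heads * head_dim * ctx
    return elems * bits_per_channel / 8 / 2**30

# Hypothetical 27B-class dims: 48 layers, 8 KV heads (GQA), head_dim 128.
fp16  = kv_cache_gib(131072, 48, 8, 128, bits_per_channel=16)
turbo = kv_cache_gib(131072, 48, 8, 128, bits_per_channel=3.5)
print(fp16, turbo)  # 24.0 GiB vs 5.25 GiB under these assumptions
```

Under these assumptions an fp16 cache alone would nearly fill 32 GB before model weights are loaded, while ~3.5 bits/channel leaves room for both.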

Status

See issues for current progress.

Roles

  • Strago: Build spec author
  • Cid: Implementation, benchmarks, deployment
  • Locke: Research support, upstream watch
  • John: Quality review
  • Frankie: Coordination

Source Repos

Docs

Description
TurboQuant KV cache compression for local inference — PolarQuant + QJL on M4 Max via llama.cpp/Ollama. Build spec from Strago, build by Cid, coordination by Frankie.