TurboQuant
KV cache compression for local inference on an M4 Max MacBook Pro.
What
TurboQuant (Google, ICLR 2026) is a three-stage KV cache compression method:
- PolarQuant — WHT rotation + polar coordinates + Lloyd-Max codebook (~4.2x compression)
- QJL — 1-bit quantized Johnson-Lindenstrauss residual correction
- TurboQuant — PolarQuant + QJL = ~3.5 bits/channel, zero accuracy loss
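The three stages above can be sketched end to end. This is an illustrative toy, not the fork's actual API: the function names, bit widths, and the uniform grid standing in for the paper's Lloyd-Max codebook are all assumptions.

```python
# Hypothetical sketch of the TurboQuant pipeline: WHT rotation ->
# polar quantization -> 1-bit JL residual sketch. Illustrative only.
import numpy as np

def wht(x):
    """Orthonormal Walsh-Hadamard transform; last dim must be a power of 2.

    Orthonormal WHT is its own inverse, so decode reuses this function."""
    h = np.asarray(x, dtype=np.float64).copy()
    n = h.shape[-1]
    length = 1
    while length < n:
        for i in range(0, n, 2 * length):
            a = h[..., i:i + length].copy()
            b = h[..., i + length:i + 2 * length].copy()
            h[..., i:i + length] = a + b
            h[..., i + length:i + 2 * length] = a - b
        length *= 2
    return h / np.sqrt(n)

def polar_quant(v, theta_bits=4, r_bits=4):
    """Stage 1 (PolarQuant-style): rotate, pair channels, quantize (r, theta).

    A uniform grid stands in for the Lloyd-Max codebook."""
    z = wht(v).reshape(-1, 2)                 # pair adjacent channels
    r = np.hypot(z[:, 0], z[:, 1])
    theta = np.arctan2(z[:, 1], z[:, 0])      # angle in (-pi, pi]
    t_levels, r_levels = 2 ** theta_bits, 2 ** r_bits
    t_idx = np.round((theta + np.pi) / (2 * np.pi) * (t_levels - 1)).astype(int)
    r_max = max(r.max(), 1e-12)
    r_idx = np.round(r / r_max * (r_levels - 1)).astype(int)
    return t_idx, r_idx, r_max

def polar_dequant(t_idx, r_idx, r_max, theta_bits=4, r_bits=4):
    """Decode stage 1: codebook lookup, then undo the rotation."""
    theta = t_idx / (2 ** theta_bits - 1) * 2 * np.pi - np.pi
    r = r_idx / (2 ** r_bits - 1) * r_max
    z = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
    return wht(z.reshape(-1))                 # WHT is its own inverse

def qjl_sketch(residual, rng, m=64):
    """Stage 2 (QJL-style): 1-bit sketch of the quantization residual.

    Stores only the signs of a random projection plus one scale."""
    proj = rng.standard_normal((residual.shape[-1], m))
    return np.sign(residual @ proj), np.linalg.norm(residual)
```

Stage 3 is just the composition: store the polar codes plus the sign sketch of whatever error the codebook left behind.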
Why
Unlock 64K-128K context on qwen3.5:27b within 32 GB of unified memory. A 27B model at 128K context with TurboQuant beats a 72B at Q2 limited to 8K context.
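A back-of-envelope sizing shows why the claim above holds. The layer and head counts below are assumptions for a ~27B GQA model, not published specs for qwen3.5:27b.

```python
# KV cache size = tokens x (K+V channels per token) x bits, for assumed
# architecture: 60 layers, 8 KV heads (GQA), head_dim 128.
def kv_cache_gib(ctx_tokens, bits_per_channel, n_layers=60,
                 n_kv_heads=8, head_dim=128):
    channels = 2 * n_layers * n_kv_heads * head_dim  # K and V per token
    return ctx_tokens * channels * bits_per_channel / 8 / 2**30

fp16 = kv_cache_gib(128 * 1024, 16)   # ~30 GiB: blows the budget on its own
tq = kv_cache_gib(128 * 1024, 3.5)    # ~6.6 GiB at ~3.5 bits/channel
```

Under these assumptions, fp16 KV at 128K context alone fills the memory budget before weights are loaded, while TurboQuant's ~3.5 bits/channel leaves room for the quantized 27B weights.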
Status
See issues for current progress.
Roles
- Strago: Build spec author
- Cid: Implementation, benchmarks, deployment
- Locke: Research support, upstream watch
- John: Quality review
- Frankie: Coordination
Source Repos
- TheTom/llama-cpp-turboquant — llama.cpp fork with Metal
- TheTom/turboquant_plus — Reference impl, 511+ tests
- amirzandieh/QJL — Author QJL code (CUDA)
- rachittshah/mlx-turboquant — MLX fallback
Docs
- Project Status — Full project status and build specification
Languages
- Python 90.5%
- C++ 6.2%
- Metal 2.4%
- CMake 0.9%