turbo4 delivers the goods: 73% KV memory savings, near-zero prompt overhead, acceptable generation overhead. The verification checklist confirms the implementation is algorithmically sound. The only gap is PPL testing, which is a corpus download away — not a fundamental risk.
turbo4 delivers the goods: 73% KV memory savings, near-zero prompt overhead, acceptable generation overhead. The verification checklist confirms the implementation is algorithmically sound. The only gap is PPL testing, which is a corpus download away â not a fundamental risk.
The real unlock — 128K context on 36GB hardware — is within reach. Phase 2 is Ollama integration and production deployment.
The real unlock â 128K context on 36GB hardware â is within reach. Phase 2 is Ollama integration and production deployment.
---
## Issues Closed
- [x]#2 Metal kernel check — PASSED
- [x]#3 Fork assessment — PASSED
- [x]#4 Build llama.cpp fork — COMPLETE
- [x]#5 PolarQuant verification — 5/6 PASS
- [x]#6 FP16 baseline benchmarks — RECORDED
- [x]#7 TurboQuant benchmarks — RECORDED
- [x]#8 Memory profiling — COMPLETE
- [x]#2 Metal kernel check â PASSED
- [x]#3 Fork assessment â PASSED
- [x]#4 Build llama.cpp fork â COMPLETE
- [x]#5 PolarQuant verification â 5/6 PASS
- [x]#6 FP16 baseline benchmarks â RECORDED
- [x]#7 TurboQuant benchmarks â RECORDED
- [x]#8 Memory profiling â COMPLETE
---
@@ -143,7 +158,7 @@ The real unlock — 128K context on 36GB hardware — is within reach. Phase 2 i
---
# TurboQuant — Full Knowledge Transfer Report
# TurboQuant â Full Knowledge Transfer Report
**Date:** 2026-03-30
**Prepared for:** Frankie's Team (Strago, Cid, Locke, John)
@@ -153,7 +168,7 @@ The real unlock — 128K context on 36GB hardware — is within reach. Phase 2 i
## TL;DR
TurboQuant works. PolarQuant KV cache compression delivers **73% memory savings with 1% prompt overhead**. 128K context on the MacBook becomes viable. Custom Ollama build is deferred (multi-day effort), but the fork's `llama-server` is a ready drop-in. Per-layer adaptive quantization is already implemented. QJL is infrastructure-only — not needed at current compression targets.
TurboQuant works. PolarQuant KV cache compression delivers **73% memory savings with 1% prompt overhead**. 128K context on the MacBook becomes viable. Custom Ollama build is deferred (multi-day effort), but the fork's `llama-server` is a ready drop-in. Per-layer adaptive quantization is already implemented. QJL is infrastructure-only â not needed at current compression targets.
---
@@ -166,14 +181,14 @@ Impact: Memory budget **increases** from ~27GB to ~31GB usable. Model ceiling im
---
## Phase 1 — PolarQuant MVP: COMPLETE ✅
## Phase 1 â PolarQuant MVP: COMPLETE â
### Gate Check (#2): Metal Shaders EXIST
The `feature/turboquant-kv-cache` branch has production-quality Metal support:
- Flash attention for turbo2/3/4 (all dk variants)
- WHT rotation kernels (turbo_fwht_128)
- Lloyd-Max codebooks (hardcoded, non-uniform)
- Asymmetric K/V (q8_0 × turbo mixed)
- Asymmetric K/V (q8_0 Ã turbo mixed)
- Runtime optimizations: 4-mag LUT (M4+), sparse V dequant, profiling
**Note:** Allegro's analysis (checking only `master` branch) incorrectly concluded "NO TurboQuant." The implementation lives on the feature branch.
@@ -197,9 +212,9 @@ The `feature/turboquant-kv-cache` branch has production-quality Metal support:
**turbo4 is pure 4-bit PolarQuant** — QJL is NOT active.
**turbo4 is pure 4-bit PolarQuant** â QJL is NOT active.
`TURBO4_USE_4BIT` defaults to 1 in `ggml-common.h`. The legacy 3-bit+QJL path exists but is disabled. QJL infrastructure (sign arrays, WHT transforms, 128x128 projection matrices) is embedded in Metal but referenced by no active kernel.
@@ -404,64 +419,64 @@ Step 7: If pass → production. If fail → drop to turbo3 or adjust per-layer p
John wants maximum local inference quality on the MacBook Pro (M4 Max, 32GB unified memory) using TurboQuant-level KV cache compression. Currently running `qwen3.5:27b` via Ollama at `10.0.0.133:11434`. The goal: run a larger or better model within the same 32GB memory envelope by compressing the KV cache during inference.
TurboQuant (Google, ICLR 2026) is a three-stage KV cache compression method:
1.**PolarQuant**— random rotation + polar coordinates + fixed scalar codebook. No normalization constants. ~4.2× compression.
2.**QJL**— 1-bit quantized Johnson-Lindenstrauss on the residual. Zero-overhead bias correction.
3.**TurboQuant**— PolarQuant for main signal + QJL for residual = unbiased inner product quantizer at ~3.5 bits/channel with zero accuracy loss.
1.**PolarQuant**â random rotation + polar coordinates + fixed scalar codebook. No normalization constants. ~4.2Ã compression.
2.**QJL**â 1-bit quantized Johnson-Lindenstrauss on the residual. Zero-overhead bias correction.
3.**TurboQuant**â PolarQuant for main signal + QJL for residual = unbiased inner product quantizer at ~3.5 bits/channel with zero accuracy loss.
Community status: multiple `llama.cpp` forks, MLX proof-of-concepts, and a vLLM plugin exist. Nothing upstreamed to official `llama.cpp`, MLX, or Ollama yet. Author QJL code is public. Enough is public to build from.
---
## 1a. PolarQuant Technical Detail — What Cid Needs to Verify
## 1a. PolarQuant Technical Detail â What Cid Needs to Verify
This section specifies the PolarQuant algorithm concretely so Cid can verify that the community fork implements it correctly. A fork that gets the rotation wrong or uses the wrong codebook boundaries will compress successfully but degrade quality in ways that short PPL benchmarks may not catch — the damage surfaces during long production sessions with sustained context pressure.
This section specifies the PolarQuant algorithm concretely so Cid can verify that the community fork implements it correctly. A fork that gets the rotation wrong or uses the wrong codebook boundaries will compress successfully but degrade quality in ways that short PPL benchmarks may not catch â the damage surfaces during long production sessions with sustained context pressure.
### The Algorithm (per KV vector)
**Step 1 — Random Rotation (Preconditioning):**
**Step 1 â Random Rotation (Preconditioning):**
- Apply a fixed random orthogonal rotation to each KV vector before quantization.
- The paper uses a **Walsh-Hadamard transform** (WHT) — a structured orthogonal matrix that's O(d log d) to apply, not O(d²) like a dense random matrix.
- The paper uses a **Walsh-Hadamard transform** (WHT) â a structured orthogonal matrix that's O(d log d) to apply, not O(d²) like a dense random matrix.
- **Why:** Raw KV vectors have non-uniform coordinate distributions (some dimensions carry more energy). WHT spreads energy uniformly across all coordinates, making the post-rotation distribution predictable and concentrated. This is what eliminates the need for per-vector normalization constants.
- **Cid verification:** The fork must use a fixed WHT (or equivalent structured orthogonal rotation), not a learned or per-layer rotation. The rotation matrix must be identical at quantization and dequantization. If the fork uses a dense random matrix instead of WHT, it's functionally correct but slower — flag it.
- **Cid verification:** The fork must use a fixed WHT (or equivalent structured orthogonal rotation), not a learned or per-layer rotation. The rotation matrix must be identical at quantization and dequantization. If the fork uses a dense random matrix instead of WHT, it's functionally correct but slower â flag it.
**Step 2 — Polar Coordinate Transform:**
**Step 2 â Polar Coordinate Transform:**
- After rotation, decompose each vector into **radius** (L2 norm / signal strength) and **angle** (direction on the unit sphere).
- The radius is stored at higher precision (FP16 or FP32) — it's one scalar per vector, negligible overhead.
- The radius is stored at higher precision (FP16 or FP32) â it's one scalar per vector, negligible overhead.
- The angle coordinates are what get quantized. Because WHT made their distribution predictable, you can use a fixed codebook without per-vector calibration.
**Step 3 — Lloyd-Max Scalar Quantization:**
**Step 3 â Lloyd-Max Scalar Quantization:**
- Each angle coordinate is independently quantized using a **Lloyd-Max optimal scalar quantizer**.
- Lloyd-Max minimizes mean squared error for a known distribution. Because WHT makes the distribution analytically computable, the codebook boundaries are **precomputed once** and fixed for all vectors.
- **Codebook sizes by compression target:**
-`turbo4` = 4 bits per coordinate = 16 codebook entries per dimension
-`turbo3` = 3 bits = 8 entries
-`turbo2` = 2 bits = 4 entries
- **Cid verification:** Check that the fork's codebook boundaries match what the paper/PolarQuant paper specifies for the target distribution. If the fork uses uniform quantization instead of Lloyd-Max, that's a quality regression — uniform is simpler but wastes bits on low-probability regions.
- **Cid verification:** Check that the fork's codebook boundaries match what the paper/PolarQuant paper specifies for the target distribution. If the fork uses uniform quantization instead of Lloyd-Max, that's a quality regression â uniform is simpler but wastes bits on low-probability regions.
**Step 4 — Bit Packing + Storage:**
**Step 4 â Bit Packing + Storage:**
- Quantized indices are packed into the KV cache format (turbo2/3/4 nibble-packed).
- Radius stored separately. No normalization constants, no scale factors, no zero-points — this is the key advantage over standard quantization.
- Radius stored separately. No normalization constants, no scale factors, no zero-points â this is the key advantage over standard quantization.
### Dequantization During Attention
When the model computes attention scores (Q·K^T) and weighted values (softmax·V):
When the model computes attention scores (Q·K^T) and weighted values (softmax·V):
1. Read packed indices from cache
2. Look up codebook values (single table lookup per coordinate)
3. Reconstruct angle coordinates
4. Scale by stored radius
5. Compute dot product in reconstructed space
**Critical property:** The inner product between a full-precision query Q and a PolarQuant-compressed K must be an unbiased estimator of the true Q·K dot product. The WHT rotation preserves this because orthogonal transforms preserve inner products. If the fork adds any non-orthogonal transformation (e.g., learned projection, PCA), the unbiasedness guarantee breaks.
**Critical property:** The inner product between a full-precision query Q and a PolarQuant-compressed K must be an unbiased estimator of the true Q·K dot product. The WHT rotation preserves this because orthogonal transforms preserve inner products. If the fork adds any non-orthogonal transformation (e.g., learned projection, PCA), the unbiasedness guarantee breaks.
PolarQuant requires two things to be initialized before inference can start:
1.**Walsh-Hadamard rotation matrix:** This is deterministic — a WHT of size d (model head dimension, typically 128) is computed from the recursive Hadamard construction. It's the same for every session, every model. Compute once at model load, store in memory. Cost: O(d log d) per head dimension — microseconds. No impact on model load time.
1.**Walsh-Hadamard rotation matrix:** This is deterministic â a WHT of size d (model head dimension, typically 128) is computed from the recursive Hadamard construction. It's the same for every session, every model. Compute once at model load, store in memory. Cost: O(d log d) per head dimension â microseconds. No impact on model load time.
2.**Lloyd-Max codebook:** The quantization boundaries are precomputed for the known post-WHT distribution. For a given bit width (turbo4 = 4 bits = 16 entries), the codebook is a fixed lookup table of 16 boundary values + 16 reconstruction values. This is identical across sessions and models of the same head dimension. Can be hardcoded as a constant array or computed once at load time from the analytical distribution formula.
**Expected initialization overhead:** Negligible — both are small deterministic computations. But **measure it during Phase 1**: time the gap between Ollama receiving a request and the first token appearing, with and without TurboQuant. If initialization adds >1 second to cold model load, investigate caching the tables to disk alongside the model file.
**Expected initialization overhead:** Negligible â both are small deterministic computations. But **measure it during Phase 1**: time the gap between Ollama receiving a request and the first token appearing, with and without TurboQuant. If initialization adds >1 second to cold model load, investigate caching the tables to disk alongside the model file.
**Cid measurement target:** Report model load time (cold start) with and without TurboQuant. If >5 second delta, flag as UX issue.
@@ -475,9 +490,9 @@ PolarQuant requires two things to be initialized before inference can start:
---
## 1. Model Targeting — What Can We Run?
## 1. Model Targeting â What Can We Run?
### Memory Budget — Realistic, Not Theoretical
### Memory Budget â Realistic, Not Theoretical
On a 32GB M4 Max running macOS, you do NOT have 32GB for inference. Realistic budget:
@@ -489,18 +504,18 @@ On a 32GB M4 Max running macOS, you do NOT have 32GB for inference. Realistic bu
| Activation memory (intermediate tensors during forward pass) | ~1-3GB (varies by model/batch) |
| **Available for model weights + KV cache** | **~26-28GB** |
**Use 27GB as the planning ceiling.** The v1 spec said "leaves 2GB for OS" at 30GB peak — that's too tight. All memory calculations below use 27GB available.
**Use 27GB as the planning ceiling.** The v1 spec said "leaves 2GB for OS" at 30GB peak â that's too tight. All memory calculations below use 27GB available.
### Current State (No TurboQuant)
- **qwen3.5:27b** at Q4_K_M (~16GB model weights) — fits within 27GB budget with room for KV cache
- At 32K context, KV cache for a 27B model at FP16 ≈ 4-6GB → total ~20-22GB. Comfortable.
- At 64K context, KV cache ≈ 8-12GB → total ~24-28GB. Marginal — may swap.
- At 128K context, KV cache grows to ~16-24GB → doesn't fit. Context-limited.
- **qwen3.5:27b** at Q4_K_M (~16GB model weights) â fits within 27GB budget with room for KV cache
- At 32K context, KV cache for a 27B model at FP16 â 4-6GB â total ~20-22GB. Comfortable.
- At 64K context, KV cache â 8-12GB â total ~24-28GB. Marginal â may swap.
- At 128K context, KV cache grows to ~16-24GB â doesn't fit. Context-limited.
### With TurboQuant (4× KV Compression)
- KV cache at 32K drops from ~5GB → ~1.2GB
- KV cache at 64K drops from ~10GB → ~2.5GB
- KV cache at 128K drops from ~20GB → ~5GB
### With TurboQuant (4Ã KV Compression)
- KV cache at 32K drops from ~5GB â ~1.2GB
- KV cache at 64K drops from ~10GB â ~2.5GB
- KV cache at 128K drops from ~20GB â ~5GB
- This frees 4-15GB of headroom depending on context length
**Important:** These are calculated estimates, not measured values. Actual memory consumption can exceed theoretical due to fragmentation, allocation overhead, and implementation-specific buffering. Phase 1 **must** include actual peak memory measurement (see validation section). If measured exceeds calculated by >15%, the context ceiling drops accordingly.
@@ -509,31 +524,31 @@ On a 32GB M4 Max running macOS, you do NOT have 32GB for inference. Realistic bu
**Primary target: qwen3.5:27b at Q4_K_M with extended context**
- Model weights: ~16GB at Q4_K_M
- With TurboQuant KV cache at 64K context: ~2.5GB cache + ~2GB activations → ~20-21GB total. Comfortable within 27GB budget.
- With TurboQuant at 128K: ~5GB cache + ~2GB activations → ~23GB total. Fits, but tight —**needs measured validation.**
- **Win: 64K context becomes reliable, 128K becomes possible.** This is the real unlock.
**Stretch target: Qwen 3.5 32B (Q4_K_M)**
- Model weights: ~18-19GB at Q4_K_M
- With TurboQuant at 64K: ~2.5GB cache + ~2.5GB activations → ~23-24GB. Fits within 27GB but leaves only ~3GB headroom.
- With TurboQuant at 64K: ~2.5GB cache + ~2.5GB activations â ~23-24GB. Fits within 27GB but leaves only ~3GB headroom.
- **Verdict: worth testing in Phase 1 benchmarks alongside 27B.** If it fits, marginally better quality. If it's marginal, stay on 27B.
**Not recommended: Qwen 3.5 72B (Q2_K or IQ3_XXS)**
- Model weights at Q2_K: ~27GB. Leaves ~0GB for anything else.
- **Verdict: does not fit.** Even with TurboQuant, no room for KV cache + activations + Metal overhead. And quality at Q2_K is poor — weight quantization damage cancels the parameter count advantage.
- **Verdict: does not fit.** Even with TurboQuant, no room for KV cache + activations + Metal overhead. And quality at Q2_K is poor â weight quantization damage cancels the parameter count advantage.
**Recommended path: Stay on 27B class, use TurboQuant to unlock longer context (64K-128K) rather than a bigger model.** The real win on 32GB unified is context length, not parameter count. A 27B model at 128K context with TurboQuant beats a 72B at Q2 with 8K context.
**Alternative worth testing: Mistral/Codestral 25B-class models** at Q5_K_M (~18GB) with TurboQuant. Locke's research notes TurboQuant was benchmarked on Mistral — community results may be more reproducible.
**Alternative worth testing: Mistral/Codestral 25B-class models** at Q5_K_M (~18GB) with TurboQuant. Locke's research notes TurboQuant was benchmarked on Mistral â community results may be more reproducible.
---
## 2. Implementation Path — PolarQuant First, Then Full TurboQuant
## 2. Implementation Path â PolarQuant First, Then Full TurboQuant
- PolarQuant alone delivers ~4.2× compression — that's the bulk of the win
- PolarQuant alone delivers ~4.2Ã compression â that's the bulk of the win
- Full TurboQuant adds QJL residual correction for marginal quality improvement at extreme compression (2.5 bits)
- At 3.5+ bits/channel, PolarQuant is sufficient for zero accuracy loss
- QJL adds kernel complexity for small incremental gain at our target compression ratio
@@ -543,36 +558,36 @@ On a 32GB M4 Max running macOS, you do NOT have 32GB for inference. Realistic bu
| Repo | What | Why | Risk |
|------|------|-----|------|
| **`TheTom/llama-cpp-turboquant`** | `llama.cpp` fork with Metal support | Most directly useful — same stack as Ollama. Reports PPL numbers on M-series. | Community fork, not upstream. May lag `llama.cpp` HEAD. |
| **`TheTom/llama-cpp-turboquant`** | `llama.cpp` fork with Metal support | Most directly useful â same stack as Ollama. Reports PPL numbers on M-series. | Community fork, not upstream. May lag `llama.cpp` HEAD. |
| **`TheTom/turboquant_plus`** | Standalone C implementation + Python tests | Most detailed reverse-engineering. 511+ tests. PolarQuant + Walsh-Hadamard + turbo2/3/4 formats. | Extends beyond paper ("Plus"). May include non-paper innovations. |
| **`amirzandieh/QJL`** | Author's QJL CUDA implementation | Official author code. CUDA kernels, eval scripts, LongBench commands. | CUDA only — needs Metal port for MacBook. Phase 2 dependency. |
| **`amirzandieh/QJL`** | Author's QJL CUDA implementation | Official author code. CUDA kernels, eval scripts, LongBench commands. | CUDA only â needs Metal port for MacBook. Phase 2 dependency. |
| **`rachittshah/mlx-turboquant`** | MLX proof-of-concept | Native Apple Silicon. Correct module layout (codebooks, polar_quant, qjl). | May be partial implementation. Naming drift noted. |
The v1 spec understated this. Community `llama.cpp` forks can diverge significantly from HEAD, especially in the Metal backend where Apple Silicon optimizations change frequently. The risk isn't "it doesn't build" — it's "it builds fine on the fork's base commit but breaks when cherry-picked onto current HEAD."
The v1 spec understated this. Community `llama.cpp` forks can diverge significantly from HEAD, especially in the Metal backend where Apple Silicon optimizations change frequently. The risk isn't "it doesn't build" â it's "it builds fine on the fork's base commit but breaks when cherry-picked onto current HEAD."
**Specific risk areas:**
- **KV cache layer:** `llama.cpp` has refactored KV cache internals multiple times in 2026. A fork based on a 4-week-old commit may touch structs/functions that have been renamed or restructured upstream.
- **Metal shaders:** Apple Silicon Metal optimizations are actively changing. Custom Metal kernels for TurboQuant dequant may conflict with upstream shader refactors.
- **Memory management:** `ggml` memory allocation has evolved. The fork's cache allocation assumptions may not match current `ggml` memory pools.
**Mitigation plan (Phase 1 Step 0 — before any benchmarking):**
**Mitigation plan (Phase 1 Step 0 â before any benchmarking):**
1.**Check fork freshness:**`git log --oneline -1` on the fork. Compare base commit date against `llama.cpp` HEAD. If >4 weeks stale, flag as HIGH risk.
3.**If stale (2-4 weeks):** Attempt cherry-pick of TurboQuant-specific commits onto current HEAD. If merge conflicts are limited to TurboQuant files → resolve manually. If conflicts touch core KV cache / Metal code → stop, evaluate effort.
4.**If very stale (> 4 weeks) or conflicts are extensive:** Switch to **clean-room approach**— use `TheTom/turboquant_plus` as the algorithm reference and implement the KV cache types directly into current `llama.cpp` HEAD. This is more work (~60-90 min instead of ~20-40 min) but avoids the merge conflict maze.
3.**If stale (2-4 weeks):** Attempt cherry-pick of TurboQuant-specific commits onto current HEAD. If merge conflicts are limited to TurboQuant files â resolve manually. If conflicts touch core KV cache / Metal code â stop, evaluate effort.
4.**If very stale (> 4 weeks) or conflicts are extensive:** Switch to **clean-room approach**â use `TheTom/turboquant_plus` as the algorithm reference and implement the KV cache types directly into current `llama.cpp` HEAD. This is more work (~60-90 min instead of ~20-40 min) but avoids the merge conflict maze.
5.**Escape hatch:** If `llama.cpp` path is blocked, fall back to `rachittshah/mlx-turboquant` (MLX native, no fork divergence risk, but requires API proxy for Ollama compatibility).
**Cid decision point:** After Step 0, report fork age + conflict assessment before proceeding. If clean-room is needed, update the time estimate and Frankie adjusts the schedule. Don't spend more than 15 minutes fighting merge conflicts — switch to clean-room.
**Cid decision point:** After Step 0, report fork age + conflict assessment before proceeding. If clean-room is needed, update the time estimate and Frankie adjusts the schedule. Don't spend more than 15 minutes fighting merge conflicts â switch to clean-room.
### Metal Kernel Risk — The Single Highest-Risk Assumption
### Metal Kernel Risk â The Single Highest-Risk Assumption
The spec assumes the `llama.cpp` fork has working **Metal shaders** for PolarQuant KV dequantization. KV dequant happens in the attention computation hot path — every token, every layer, every head. If the fork only has CPU or CUDA dequant kernels and no Metal implementation, the MacBook will either:
- Fall back to CPU dequant →**catastrophic** performance loss (10-50× slower attention)
The spec assumes the `llama.cpp` fork has working **Metal shaders** for PolarQuant KV dequantization. KV dequant happens in the attention computation hot path â every token, every layer, every head. If the fork only has CPU or CUDA dequant kernels and no Metal implementation, the MacBook will either:
- Fall back to CPU dequant â**catastrophic** performance loss (10-50Ã slower attention)
- Fail to build entirely for Metal backend
**Cid's actual first action** (before building, before benchmarking, before anything):
@@ -597,12 +612,12 @@ This check takes 2 minutes and determines the entire build strategy. Do it first
- Our entire fleet uses Ollama. Model management, API compatibility, endpoint routing — all built around Ollama.
- Our entire fleet uses Ollama. Model management, API compatibility, endpoint routing â all built around Ollama.
- MLX would require a separate inference server, separate model format, separate API integration.
- Ollama is built on `llama.cpp`/`ggml`. KV cache changes in `llama.cpp` propagate to Ollama.
@@ -611,13 +626,13 @@ Why not MLX:
2. Validate quality + performance
3. Build custom Ollama from our `llama.cpp` fork (Ollama builds `llama.cpp` as a submodule)
4. Deploy to MacBook as replacement Ollama binary
5. Existing model files, API, and endpoint (`10.0.0.133:11434`) remain identical — only the inference engine changes
5. Existing model files, API, and endpoint (`10.0.0.133:11434`) remain identical â only the inference engine changes
**Fallback: MLX standalone** if `llama.cpp` Metal integration proves too complex. `rachittshah/mlx-turboquant` as starting point. Would require a small proxy server to maintain API compatibility with our Ollama endpoint.
---
## 4. Validation Plan — How We Know It Works
## 4. Validation Plan â How We Know It Works
### Quality Validation
@@ -625,24 +640,24 @@ Why not MLX:
| Test | What It Measures | Tool | Pass Criteria |
|------|-----------------|------|--------------|
| Perplexity (PPL) | Overall language modeling quality | `llama-perplexity` on WikiText-2 | PPL delta ≤ 0.5 from baseline (FP16 KV) |
| Perplexity (PPL) | Overall language modeling quality | `llama-perplexity` on WikiText-2 | PPL delta ⤠0.5 from baseline (FP16 KV) |
| Needle-in-Haystack | Long context retrieval | Custom prompt at 8K/16K/32K/64K/128K | 100% retrieval at all lengths where baseline passes |
| Practical generation | Subjective quality | 10 predefined prompts (see test suite below) | Human review: no degradation on ≥9/10 |
| 1 | Long-context summarization | Feed 20K tokens of a research paper, ask for structured summary with citations | KV cache quality at length — compressed K/V must retain source detail |
| 1 | Long-context summarization | Feed 20K tokens of a research paper, ask for structured summary with citations | KV cache quality at length â compressed K/V must retain source detail |
| 2 | Multi-step reasoning | 5-step math word problem requiring chain-of-thought | Whether compressed KV degrades intermediate reasoning steps |
| 3 | Code generation | Write a Python script with 3 functions, error handling, type hints | Precise token prediction — code is unforgiving of subtle quality drops |
| 4 | Code debugging | Provide buggy code (3 bugs), ask to identify and fix all three | Attention to detail across context — must reference earlier code correctly |
| 3 | Code generation | Write a Python script with 3 functions, error handling, type hints | Precise token prediction â code is unforgiving of subtle quality drops |
| 4 | Code debugging | Provide buggy code (3 bugs), ask to identify and fix all three | Attention to detail across context â must reference earlier code correctly |
| 5 | Factual recall (early context) | Provide 10 facts in the first 1K tokens, continue for 8K tokens of filler, ask about fact #3 | Retrieval from early context through compressed KV |
| 6 | Creative writing | Write a 500-word short story with specific constraints (setting, character, twist) | Compression artifacts surface as repetition or coherence loss |
| 7 | Multi-turn conversation | 10-turn technical Q&A where later questions reference earlier answers | Cross-turn coherence through accumulated compressed KV |
| 8 | Structured output | Generate a JSON schema with 15+ fields, nested objects, and validation rules | Format precision — compressed KV must maintain structural consistency |
| 9 | Translation + analysis | Translate a paragraph EN→ES, then analyze the translation choices | Tests both generation quality and meta-reasoning about own output |
| 8 | Structured output | Generate a JSON schema with 15+ fields, nested objects, and validation rules | Format precision â compressed KV must maintain structural consistency |
| 9 | Translation + analysis | Translate a paragraph ENâES, then analyze the translation choices | Tests both generation quality and meta-reasoning about own output |
| 10 | Instruction following | Complex prompt with 8 specific formatting requirements (headers, bullet style, word limits, etc.) | Whether compression causes the model to "forget" constraints mid-generation |
**Prompts must be written and saved to `projects/sovereign-stack/turboquant-test-prompts.md` before Phase 2 benchmarks run.** Same prompts, same order, both configurations. This prevents unconscious cherry-picking.
@@ -650,7 +665,7 @@ Why not MLX:
**Asymmetric K/V test:** Run K at Q8_0, V at turbo4. Community reports this works well on sensitive models. Compare PPL vs symmetric turbo4 K+V.
**Long-session quality test (Phase 2 only):** Short-context PPL benchmarks can miss quality degradation that surfaces during sustained context pressure. During Phase 2, run one extended production simulation:
- Generate a 50-turn multi-step reasoning conversation (code gen → debug → refactor → test → iterate)
- Generate a 50-turn multi-step reasoning conversation (code gen â debug â refactor â test â iterate)
- Compare output quality vs same conversation on FP16 KV baseline
- Specifically watch for: coherence drift after turn 30+, hallucinated references to earlier context, attention score softmax concentration (if measurable)
- This catches the case where codebook boundary errors accumulate over many KV cache writes in a single session
| Validation | Cid | Run test matrix, report PPL/performance numbers |
| Model selection | Cid | Test qwen3.5:27b + one Mistral variant, recommend best config |
@@ -688,48 +703,48 @@ Why not MLX:
## 6. Phasing
### Phase 1 — PolarQuant MVP (Target: this week)
### Phase 1 â PolarQuant MVP (Target: this week)
**Scope:**
**Step 0 — Fork Assessment (do this FIRST, report before proceeding):**
**Step 0 â Fork Assessment (do this FIRST, report before proceeding):**
- Clone `TheTom/llama-cpp-turboquant`
- Check base commit age vs `llama.cpp` HEAD (`git log --oneline -1`)
- Check `sysctl hw.memsize` on MacBook (resolve the 32/36/48GB question)
- If fork < 2 weeks stale → proceed to build
- If 2-4 weeks stale → attempt cherry-pick, report conflict scope
- If > 4 weeks or conflicts extensive → switch to clean-room (see Fork Risk Assessment above)
- If fork < 2 weeks stale â proceed to build
- If 2-4 weeks stale â attempt cherry-pick, report conflict scope
- If > 4 weeks or conflicts extensive â switch to clean-room (see Fork Risk Assessment above)
- Report: fork age, conflict assessment, MacBook actual RAM, estimated build path time
**Step 1 — Build + Verify:**
**Step 1 â Build + Verify:**
- Build `llama.cpp` fork (or clean-room) with Metal backend on MacBook (M4 Max)
- Run the Section 1a verification checklist against the fork's implementation before trusting any benchmarks
- Run FP16 KV baseline: `llama-perplexity` on WikiText-2 with `qwen3.5:27b` at 8K context (this is the number we're comparing against)
**Step 2 — Benchmark PolarQuant:**
**Step 2 â Benchmark PolarQuant:**
- Run perplexity test with PolarQuant KV (turbo4 format) vs FP16 KV baseline
- Run `llama-bench` for tok/s comparison
- Test at 8K, 32K, and 64K context lengths
- Run asymmetric test: K at Q8_0, V at turbo4
- **Measure actual peak resident memory** at each context length (`footprint -p <pid>` or `vmmap --summary`). Compare measured vs calculated. If measured exceeds calculated by >15%, note the delta — it reduces the achievable context ceiling.
- **Measure actual peak resident memory** at each context length (`footprint -p <pid>` or `vmmap --summary`). Compare measured vs calculated. If measured exceeds calculated by >15%, note the delta â it reduces the achievable context ceiling.
- Report: PPL delta per context length, tok/s delta, **measured peak memory per context length**, max context before OOM/swap, asymmetric vs symmetric results
**Deliverable:** Working `llama.cpp` build on MacBook with PolarQuant KV cache. PPL + performance numbers. Section 1a verification checklist completed.
**Estimated Cid time (honest range):**
- **Best case** — fork is fresh, builds clean on first try, Metal shaders work: 20-40 min
- **Typical case** — fork needs CMake flag tweaks, Xcode SDK adjustments, minor Metal fixes: 1-2 hours
- **Worst case** — fork is stale, conflicts extensive, or Metal shaders missing: clean-room build 2-4 hours, or MLX pivot
- **Best case** â fork is fresh, builds clean on first try, Metal shaders work: 20-40 min
- **Typical case** â fork needs CMake flag tweaks, Xcode SDK adjustments, minor Metal fixes: 1-2 hours
- **Worst case** â fork is stale, conflicts extensive, or Metal shaders missing: clean-room build 2-4 hours, or MLX pivot
**2-hour build troubleshooting cap:** If the `llama.cpp` fork doesn't compile and pass a basic smoke test (load model, generate 10 tokens) within 2 hours of troubleshooting, stop. Pivot to MLX path. Don't sink more time into Xcode/CMake/Metal debug loops when a working MLX PoC exists. Report what broke — the information is useful even if the path is abandoned.
**2-hour build troubleshooting cap:** If the `llama.cpp` fork doesn't compile and pass a basic smoke test (load model, generate 10 tokens) within 2 hours of troubleshooting, stop. Pivot to MLX path. Don't sink more time into Xcode/CMake/Metal debug loops when a working MLX PoC exists. Report what broke â the information is useful even if the path is abandoned.
**Decision gate:** If PPL delta ≤ 0.5 and tok/s ≥ 90% baseline AND Section 1a checklist passes → proceed to Phase 2. If PPL fails but checklist passes → try asymmetric K/V or lower compression (turbo3 instead of turbo4). If checklist fails → fix implementation before trusting benchmarks.
**Decision gate:** If PPL delta ⤠0.5 and tok/s ⥠90% baseline AND Section 1a checklist passes â proceed to Phase 2. If PPL fails but checklist passes â try asymmetric K/V or lower compression (turbo3 instead of turbo4). If checklist fails â fix implementation before trusting benchmarks.
### Phase 2 — Ollama Integration + Production Deploy
### Phase 2 â Ollama Integration + Production Deploy
**Scope:**
**Step 0 — Ollama API Compatibility Check (before building):**
**Step 0 â Ollama API Compatibility Check (before building):**
Ollama pins a specific `llama.cpp` commit and calls it through CGo bindings in `llm/`. If our fork changes any function signatures, struct layouts, or enum values that Ollama's Go code references, the build will either fail or produce subtle runtime bugs.
```bash
@@ -761,7 +776,7 @@ If API surface differs: check if TurboQuant changes are additive (new functions/
**Estimated Cid time:** 15-25 min (Ollama build is straightforward once `llama.cpp` fork is validated).
Not all transformer layers have equal sensitivity to KV cache quantization. Research and community experimentation show early layers (first 2-4) and late layers (last 2-4) tend to be more sensitive than middle layers. If the fork supports per-layer KV cache type configuration:
@@ -774,19 +789,19 @@ This gives the same average compression ratio as uniform turbo4 but concentrates
**Cid note:** During Phase 1, check whether the fork exposes per-layer KV type config. If it does, note it for later. Don't implement it yet.
- We want to push to 2.5 bits/channel for even more context headroom
**Source:**`amirzandieh/QJL` repo (CUDA → Metal port needed)
**Source:**`amirzandieh/QJL` repo (CUDA â Metal port needed)
**Estimated Cid time:** 30-60 min (Metal port of QJL kernels is real engineering work)
**Decision gate:** Only proceed if PolarQuant alone doesn't meet quality bar at target compression.
### Phase 4 — Upstream Watch
### Phase 4 â Upstream Watch
**Scope:** Monitor `llama.cpp` upstream and Ollama for official TurboQuant support. When it lands:
- Evaluate upstream implementation vs our fork
@@ -799,10 +814,10 @@ This gives the same average compression ratio as uniform turbo4 but concentrates
## What This Spec Does NOT Cover
- **Weight quantization** — TurboQuant is KV cache compression only. Model weight quantization (GGUF Q4_K_M etc.) is a separate concern and already handled by Ollama.
- **Predator (desktop) deployment** — this spec targets MacBook only. Predator runs NVIDIA (CUDA) which is a different kernel backend. Can extend later.
- **Multi-model serving** — TurboQuant helps with single-model memory but doesn't change Ollama's single-model-at-a-time constraint.
- **Ollama upstream contribution** — out of scope for now. We build for ourselves first.
- **Weight quantization** â TurboQuant is KV cache compression only. Model weight quantization (GGUF Q4_K_M etc.) is a separate concern and already handled by Ollama.
- **Predator (desktop) deployment** â this spec targets MacBook only. Predator runs NVIDIA (CUDA) which is a different kernel backend. Can extend later.
- **Multi-model serving** â TurboQuant helps with single-model memory but doesn't change Ollama's single-model-at-a-time constraint.
- **Ollama upstream contribution** â out of scope for now. We build for ourselves first.
---
@@ -810,7 +825,7 @@ This gives the same average compression ratio as uniform turbo4 but concentrates
**None blocking.** One informational:
- **MacBook Pro memory:** Confirmed M4 Max 32GB from memory/2026-03-14.md. If it's actually 36GB or 48GB (M4 Max comes in 36/48/128 configs), that changes the model ceiling. Can Cid check `sysctl hw.memsize` on the MacBook during Phase 1? Non-blocking — doesn't change the approach, just the model size ceiling.
- **MacBook Pro memory:** Confirmed M4 Max 32GB from memory/2026-03-14.md. If it's actually 36GB or 48GB (M4 Max comes in 36/48/128 configs), that changes the model ceiling. Can Cid check `sysctl hw.memsize` on the MacBook during Phase 1? Non-blocking â doesn't change the approach, just the model size ceiling.
---
@@ -835,8 +850,8 @@ This gives the same average compression ratio as uniform turbo4 but concentrates
- **v1 (2026-03-30 12:26 ET):** Initial spec.
- **v2 (2026-03-30 12:55 ET):** Added Section 1a (PolarQuant technical detail + Cid verification checklist), expanded fork risk assessment with mitigation plan, added Phase 1 Step 0 (fork assessment before benchmarking), added long-session quality test for Phase 2, updated Phase 1 time estimate for clean-room path. Changes driven by external Opus review round 1.
- **v2.1 (2026-03-30 13:00 ET):** Added Metal kernel risk check (grep before build — determines llama.cpp vs MLX primary path), corrected memory budget (27GB available, not 30GB — accounts for OS + Metal driver + activations), added measured memory profiling requirement to Phase 1, added Ollama CGo API compatibility check to Phase 2 Step 0, tightened model ceiling estimates. Changes driven by external Opus review round 2.
- **v2.2 (2026-03-30 13:05 ET):** Added honest time estimate range (20 min best → 2-4 hr worst), 2-hour build troubleshooting cap before MLX pivot, PolarQuant initialization detail (WHT + Lloyd-Max codebook setup + cold-start measurement target), 10 predefined test prompts with rationale (prevents cherry-picking), per-layer quantization profiles as Phase 2.5 optimization path. Changes driven by external Opus review round 3.
- **v2.1 (2026-03-30 13:00 ET):** Added Metal kernel risk check (grep before build â determines llama.cpp vs MLX primary path), corrected memory budget (27GB available, not 30GB â accounts for OS + Metal driver + activations), added measured memory profiling requirement to Phase 1, added Ollama CGo API compatibility check to Phase 2 Step 0, tightened model ceiling estimates. Changes driven by external Opus review round 2.
- **v2.2 (2026-03-30 13:05 ET):** Added honest time estimate range (20 min best â 2-4 hr worst), 2-hour build troubleshooting cap before MLX pivot, PolarQuant initialization detail (WHT + Lloyd-Max codebook setup + cold-start measurement target), 10 predefined test prompts with rationale (prevents cherry-picking), per-layer quantization profiles as Phase 2.5 optimization path. Changes driven by external Opus review round 3.
Blocking a user prevents them from interacting with repositories, such as opening or commenting on pull requests or issues. Learn more about blocking a user.