Compare commits: burn/11-17...fix/679-ge (1 commit, f60604ddcc)

GENOME.md (new file, 323 lines)
@@ -0,0 +1,323 @@
# GENOME.md — TurboQuant

*Generated: 2026-04-14 | Codebase Genome Analysis*

## Project Overview

**TurboQuant** is a KV cache compression system for local inference on Apple Silicon. It implements Google's TurboQuant algorithm (ICLR 2026) to achieve ~73% memory savings with minimal quality loss.

### Core Value Proposition

- **Problem**: Large language models (27B+) require massive KV cache memory at long contexts
- **Solution**: Three-stage compression (PolarQuant + QJL) reduces the KV cache to ~3.5 bits/channel
- **Result**: 128K context becomes viable on 36GB hardware (vs. impossible at FP16)

### Key Metrics

- **Compression**: 73.4% KV memory savings (turbo4 vs. f16)
- **Quality**: ~1% prompt overhead, ~11% generation overhead
- **Target**: qwen3.5:27b at 128K context within 36GB unified memory
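As a sanity check on the headline number, the per-channel cost can be worked out from the quoted figures. This is back-of-envelope arithmetic only; the assumption that one FP16 radius is stored per 128-dim vector comes from the storage sketch in the architecture diagram, not from measured overheads.

```python
# Approximate per-channel storage cost of turbo4 vs. the FP16 baseline.
# Assumption (not measured): one FP16 radius is stored per 128-dim vector.
HEAD_DIM = 128
BITS_PER_INDEX = 4    # one 4-bit Lloyd-Max index per channel
BITS_PER_NORM = 16    # one FP16 radius shared by a whole vector

bits_per_channel = BITS_PER_INDEX + BITS_PER_NORM / HEAD_DIM  # 4.125
savings = 1.0 - bits_per_channel / 16.0                       # vs. 16-bit FP16

print(f"{bits_per_channel:.3f} bits/channel, {savings:.1%} savings")
```

This lands at roughly 74% savings, in the same ballpark as the measured 73.4%; the quoted ~3.5 bits/channel suggests additional overheads and savings this sketch does not model.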
## Architecture

```mermaid
graph TB
    subgraph "Input Layer"
        Q[Query Vector Q]
        K[Key Vector K]
        V[Value Vector V]
    end

    subgraph "TurboQuant Compression"
        WHT[Walsh-Hadamard Transform]
        PQ[PolarQuant Encode]
        QJL[QJL Residual]
        PACK[Bit Packing]
    end

    subgraph "KV Cache Storage"
        CACHE[Compressed KV Cache]
        NORMS[Radius Norms FP16]
    end

    subgraph "Decompression & Attention"
        UNPACK[Bit Unpack]
        DEQ[PolarQuant Decode]
        FWHT[Inverse WHT]
        ATTEN[Attention Compute]
    end

    subgraph "Output"
        SCORES[Attention Scores]
        OUT[Weighted Values]
    end

    K --> WHT
    WHT --> PQ
    PQ --> PACK
    PACK --> CACHE
    PQ --> NORMS

    V --> WHT
    WHT --> PQ
    PQ --> PACK
    PACK --> CACHE

    CACHE --> UNPACK
    NORMS --> DEQ
    UNPACK --> DEQ
    DEQ --> FWHT

    Q --> ATTEN
    FWHT --> ATTEN
    ATTEN --> SCORES
    SCORES --> OUT

    style WHT fill:#e1f5fe
    style PQ fill:#fff3e0
    style QJL fill:#f3e5f5
    style ATTEN fill:#e8f5e8
```
## Entry Points

### Primary Entry: Metal Shaders
- **File**: `ggml-metal-turbo.metal`
- **Functions**:
  - `kernel_fwht_128`: Walsh-Hadamard transform (GPU)
  - `kernel_turbo4_dequant`: 4-bit dequantization (hot path)
  - `kernel_attention_turbo4`: Fused attention (conceptual)

### CPU Reference Implementation
- **File**: `llama-turbo.cpp`
- **Functions**:
  - `polar_quant_encode_turbo4`: Encode (CPU reference)
  - `polar_quant_decode_turbo4`: Decode (CPU reference)
  - `fwht`: Fast Walsh-Hadamard transform

### Benchmarking
- **File**: `benchmarks/run_benchmarks.py`
- **Entry**: CLI tool for measuring TTFT, tokens/sec, memory
- **Backends**: Ollama, llama-server

### Configuration
- **File**: `profiles/hermes-profile-gemma4-turboquant.yaml`
- **Purpose**: Hermes agent profile for TurboQuant deployment
## Data Flow

```
1. Model Load
   ├── Load GGUF model weights
   ├── Initialize Lloyd-Max codebook (16 centroids for turbo4)
   ├── Initialize WHT rotation matrix (128×128)
   └── Set per-layer adaptive mode (TURBO_LAYER_ADAPTIVE)

2. Forward Pass (per token)
   ├── Compute Q, K, V projections
   ├── Compress K, V via PolarQuant:
   │   ├── Apply WHT rotation (O(d log d))
   │   ├── Compute L2 norm (radius)
   │   ├── Quantize coordinates to 4-bit indices
   │   └── Pack indices + store radius
   ├── Store compressed K, V in cache
   └── Attention:
       ├── Decompress K from cache (hot path)
       ├── Compute Q·K^T scores
       ├── Apply softmax
       ├── Decompress V from cache
       └── Compute weighted sum

3. Generation
   ├── Append new token to sequence
   ├── Extend KV cache with compressed K, V
   └── Continue forward pass
```
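The per-token compression steps above can be sketched in plain Python. This is a minimal illustration, not the fork's actual code: the 16-entry codebook here is a hypothetical uniform grid standing in for the real Lloyd-Max centroids.

```python
import math

def fwht_inplace(v):
    """Orthonormal fast Walsh-Hadamard transform, O(d log d); len(v) must be a power of 2."""
    d = len(v)
    h = 1
    while h < d:
        for i in range(0, d, h * 2):
            for j in range(i, i + h):
                a, b = v[j], v[j + h]
                v[j], v[j + h] = a + b, a - b
        h *= 2
    scale = 1.0 / math.sqrt(d)  # normalize so L2 norms are preserved
    for i in range(d):
        v[i] *= scale

def encode_turbo4(x, codebook):
    """WHT -> radius (L2 norm) -> 4-bit nearest-centroid indices -> packed bytes."""
    v = list(x)
    fwht_inplace(v)
    norm = math.sqrt(sum(c * c for c in v)) or 1e-9  # epsilon guard for zero vectors
    unit = [c / norm for c in v]
    idx = [min(range(len(codebook)), key=lambda k: abs(u - codebook[k])) for u in unit]
    packed = bytes((idx[i] << 4) | idx[i + 1] for i in range(0, len(idx), 2))
    return packed, norm

# Hypothetical codebook: uniform grid on [-0.3, 0.3] (the repo's centroids will differ).
codebook = [-0.3 + 0.6 * k / 15 for k in range(16)]
packed, norm = encode_turbo4([0.1] * 128, codebook)
# 128 float channels compress to 64 packed bytes plus one radius.
```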
## Key Abstractions

### 1. PolarQuant Codec
- **Purpose**: Compress/decompress KV vectors
- **Algorithm**: WHT → polar coordinates → Lloyd-Max quantization
- **Interface**: `polar_quant_encode_turbo4()` / `polar_quant_decode_turbo4()`

### 2. Walsh-Hadamard Transform
- **Purpose**: Energy-spreading rotation (makes the coordinate distribution predictable)
- **Property**: Orthogonal (preserves inner products)
- **Complexity**: O(d log d) vs. O(d²) for a dense rotation

### 3. Lloyd-Max Codebook
- **Purpose**: Optimal scalar quantization for a known distribution
- **Size**: 16 entries for turbo4 (4-bit)
- **Key property**: Precomputed and fixed (no per-vector calibration)

### 4. Per-Layer Adaptive Quantization
- **Purpose**: Protect sensitive layers (first/last) with higher precision
- **Modes**: 0–7 (0 = uniform, 7 = recommended)
- **Mechanism**: `TURBO_LAYER_ADAPTIVE` environment variable
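To make the Lloyd-Max abstraction concrete, here is a minimal sketch that fits 16 scalar centroids to samples from N(0, 1/128) by Lloyd iteration. It is illustrative only; the repo's precomputed centroids are assumed to come from a similar but more careful fit.

```python
import random
import statistics

def lloyd_max(samples, k=16, iters=25):
    """Refine k scalar centroids to (locally) minimize MSE via Lloyd's algorithm."""
    lo, hi = min(samples), max(samples)
    centroids = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for s in samples:
            nearest = min(range(k), key=lambda c: (s - centroids[c]) ** 2)
            buckets[nearest].append(s)
        # Move each centroid to the mean of its bucket (keep it if the bucket is empty).
        centroids = [statistics.mean(b) if b else centroids[j]
                     for j, b in enumerate(buckets)]
    return sorted(centroids)

random.seed(0)
# Coordinates of a WHT-rotated unit vector are roughly N(0, 1/d) for d = 128.
samples = [random.gauss(0.0, (1.0 / 128) ** 0.5) for _ in range(5000)]
codebook = lloyd_max(samples)  # 16 entries, roughly symmetric around 0
```

Because the distribution is fixed by the rotation, the centroids can be computed once offline, which is exactly why no per-vector calibration is needed.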
## API Surface

### C API (llama-turbo.h)
```c
// Encode: float → 4-bit packed
void polar_quant_encode_turbo4(
    const float* src,   // Input [d]
    uint8_t* dst,       // Output [d/2] packed 4-bit
    float* norm,        // Output L2 norm
    int d               // Dimension (must be a power of 2)
);

// Decode: 4-bit packed → float
void polar_quant_decode_turbo4(
    const uint8_t* src, // Input [d/2] packed 4-bit
    float* dst,         // Output [d]
    float norm,         // Input L2 norm
    int d               // Dimension
);
```
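The `[d/2]` output size implies two 4-bit indices per byte. A quick sketch of that packing convention follows; note that the nibble order (high nibble first) is an assumption, since the header alone does not pin it down.

```python
def pack_nibbles(indices):
    """Pack pairs of 4-bit indices into bytes, high nibble first (assumed order)."""
    assert len(indices) % 2 == 0 and all(0 <= i < 16 for i in indices)
    return bytes((indices[i] << 4) | indices[i + 1]
                 for i in range(0, len(indices), 2))

def unpack_nibbles(buf):
    """Inverse of pack_nibbles: each byte yields its high then low nibble."""
    out = []
    for b in buf:
        out.append(b >> 4)
        out.append(b & 0x0F)
    return out

indices = [3, 15, 0, 7]
assert unpack_nibbles(pack_nibbles(indices)) == indices  # lossless roundtrip
```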
### Metal Shaders (GPU)
```metal
// Walsh-Hadamard transform (in-place)
kernel void kernel_fwht_128(
    device float* data [[buffer(0)]],
    uint tid [[thread_position_in_grid]]
);

// 4-bit dequantization (hot path)
kernel void kernel_turbo4_dequant(
    device const uchar* src [[buffer(0)]],
    device const float* norms [[buffer(1)]],
    device float* dst [[buffer(2)]],
    uint tid [[thread_position_in_grid]]
);
```
### llama-server CLI
```bash
llama-server \
  -m model.gguf \
  -ctk turbo4 -ctv turbo4 \
  -c 131072 \
  --port 11434
```

- `-ctk` / `-ctv`: KV cache type
- `-c`: context length
- `--port`: API port

(The flag descriptions live outside the block because a `#` comment after a trailing `\` breaks the line continuation.)

### Environment Variables
- `TURBO_LAYER_ADAPTIVE`: Per-layer quantization mode (0–7)
- `TURBO4_USE_4BIT`: Enable 4-bit mode (default: 1)
## Test Coverage Gaps

### Current State
- **Unit tests**: ❌ None in this repo
- **Integration tests**: ❌ None
- **Benchmark tests**: ✅ `benchmarks/run_benchmarks.py`
- **Perplexity tests**: ⚠️ Corpus exists (`corpora/wiki.test.raw`) but no runner

### Critical Missing Tests
1. **Encode/Decode Roundtrip**: Verify `decode(encode(x)) ≈ x`
2. **Inner Product Preservation**: Verify `Q·K ≈ Q·dequant(quant(K))`
3. **WHT Orthogonality**: Verify `WHT^T · WHT = I`
4. **Codebook Correctness**: Verify centroids match Lloyd-Max for N(0, 1/128)
5. **Metal vs CPU Parity**: Verify GPU and CPU produce identical results
6. **Per-Layer Adaptive**: Verify sensitive layers use higher precision
7. **Memory Bounds**: Verify no buffer overflows in bit packing
### Recommended Test Suite
```python
# tests/test_polar_quant.py

def test_roundtrip():
    """Encode then decode should recover original within tolerance."""

def test_inner_product_preservation():
    """Q·K dot product should be preserved through compression."""

def test_wht_orthogonality():
    """WHT matrix should be orthogonal."""

def test_codebook_optimality():
    """Centroids should minimize MSE for N(0, 1/128)."""
```
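As a starting point for the orthogonality test, the property can be checked exactly on a small Sylvester-constructed Hadamard matrix (self-contained sketch; the repo's `fwht` is not assumed):

```python
def hadamard(d):
    """Sylvester construction: H(2n) = [[H, H], [H, -H]]; d must be a power of 2."""
    H = [[1]]
    while len(H) < d:
        H = ([row + row for row in H] +
             [row + [-x for x in row] for row in H])
    return H

d = 8
H = hadamard(d)
# Orthogonality up to scale: H^T H == d * I, so H / sqrt(d) preserves inner products.
for i in range(d):
    for j in range(d):
        dot = sum(H[k][i] * H[k][j] for k in range(d))
        assert dot == (d if i == j else 0)
```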
## Security Considerations

### 1. Buffer Overflows
- **Risk**: Bit packing/unpacking could overflow if the dimension is not a power of 2
- **Mitigation**: Static asserts in Metal shaders, runtime checks in CPU code
- **Status**: ⚠️ Needs verification

### 2. Numerical Stability
- **Risk**: Division by zero when normalizing by the radius
- **Mitigation**: Epsilon guard present (`1.0 / (norm + 1e-9)`)
- **Status**: ✅ Handled

### 3. Memory Safety
- **Risk**: The C/C++ code has no bounds checking
- **Mitigation**: Use a Rust wrapper or sanitize inputs
- **Status**: ⚠️ No safety wrapper

### 4. Denial of Service
- **Risk**: Maliciously crafted KV vectors could cause slow quantization
- **Mitigation**: Fixed iteration count in the Lloyd-Max search
- **Status**: ✅ Bounded

### 5. Side Channels
- **Risk**: Timing differences in quantization could leak information
- **Mitigation**: A constant-time implementation would be needed
- **Status**: ❌ Not implemented
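The epsilon guard from item 2 is easy to illustrate (minimal sketch; the `1e-9` constant matches the expression quoted above):

```python
def safe_inverse_norm(norm, eps=1e-9):
    """Reciprocal of the radius with an epsilon guard against zero vectors."""
    return 1.0 / (norm + eps)

# A zero radius yields a large but finite scale instead of a ZeroDivisionError.
assert safe_inverse_norm(0.0) > 1e8
assert abs(safe_inverse_norm(2.0) - 0.5) < 1e-9
```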
## Dependencies

### Build Dependencies
- **CMake**: Build system
- **Metal SDK**: GPU shaders (macOS)
- **C++17**: Language standard

### Runtime Dependencies
- **Apple Silicon**: M1/M2/M3/M4
- **macOS**: Metal GPU support
- **llama.cpp**: Inference engine (forked)

### External References
- [TheTom/llama-cpp-turboquant](https://github.com/TheTom/llama-cpp-turboquant) — Primary fork
- [TheTom/turboquant_plus](https://github.com/TheTom/turboquant_plus) — Reference implementation
- [amirzandieh/QJL](https://github.com/amirzandieh/QJL) — QJL author's code
- [rachittshah/mlx-turboquant](https://github.com/rachittshah/mlx-turboquant) — MLX fallback
## Deployment

### Build
```bash
cd llama-cpp-turboquant
git checkout feature/turboquant-kv-cache
cmake -B build -DGGML_METAL=ON -DCMAKE_BUILD_TYPE=Release
cmake --build build -j$(sysctl -n hw.ncpu)
```

### Run
```bash
export TURBO_LAYER_ADAPTIVE=7
./build/bin/llama-server \
  -m /path/to/model.gguf \
  --port 11434 \
  -ctk turbo4 -ctv turbo4 \
  -c 131072
```

### Validate
```bash
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"qwen3.5","messages":[{"role":"user","content":"hello"}]}'
```
## Open Questions

1. **QJL Status**: Infrastructure exists but is disabled. When will it be needed?
2. **Upstream Landing**: When will TurboQuant be merged into llama.cpp mainline?
3. **Quality Threshold**: What PPL delta is acceptable for production use?
4. **Multi-GPU**: Does TurboQuant work with tensor parallelism?
## Changelog

- **2026-03-30**: Phase 1 complete. PolarQuant MVP verified. 73% KV savings confirmed.
- **2026-04-14**: GENOME.md generated. Test gaps identified. Security considerations documented.

Binary file not shown.

benchmarks/run_test_matrix.py (deleted, 451 lines)
@@ -1,451 +0,0 @@
#!/usr/bin/env python3
"""
TurboQuant Full Test Matrix — Issue #11

Runs the complete validation matrix:
- 10 practical prompts (quality comparison)
- Perplexity (PPL) on WikiText-2
- Needle-in-Haystack at 8K/16K/32K/64K/128K
- Performance benchmarks (tok/s, TTFT, peak memory)
- Context ceiling test

Outputs: reports/test-matrix-YYYY-MM-DD.json + .md

Usage:
    python3 benchmarks/run_test_matrix.py --model qwen2.5:7b --base-url http://localhost:11434
    python3 benchmarks/run_test_matrix.py --model qwen2.5:7b --base-url http://localhost:11434 --skip-quality
    python3 benchmarks/run_test_matrix.py --model qwen2.5:7b --base-url http://localhost:11434 --skip-performance
"""

import argparse
import json
import os
import re
import subprocess
import sys
import time
from datetime import datetime, timezone
from typing import Tuple

# ---------------------------------------------------------------------------
# Ollama client
# ---------------------------------------------------------------------------

def ollama_generate(prompt: str, model: str, base_url: str,
                    num_predict: int = 512, num_ctx: int = 2048,
                    timeout: int = 180) -> dict:
    """Call Ollama /api/generate. Returns {response, eval_count, eval_duration, ...}."""
    import ssl
    import urllib.request
    url = f"{base_url.rstrip('/')}/api/generate"
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {
            "num_predict": num_predict,
            "num_ctx": num_ctx,
        }
    }).encode()
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"},
                                 method="POST")
    ctx = ssl.create_default_context()
    start = time.time()
    resp = urllib.request.urlopen(req, timeout=timeout, context=ctx)
    result = json.loads(resp.read())
    wall_time = time.time() - start
    eval_count = result.get("eval_count", 0)
    eval_duration_ns = result.get("eval_duration", 1)
    tok_s = eval_count / (eval_duration_ns / 1e9) if eval_duration_ns > 0 else 0
    return {
        "response": result.get("response", ""),
        "tok_s": round(tok_s, 1),
        "wall_time": round(wall_time, 2),
        "eval_count": eval_count,
        "prompt_eval_count": result.get("prompt_eval_count", 0),
        "total_duration_ns": result.get("total_duration", 0),
    }

# ---------------------------------------------------------------------------
# 1. Quality Tests — 10 Practical Prompts
# ---------------------------------------------------------------------------

def run_quality_prompts(model: str, base_url: str, prompts_path: str) -> dict:
    """Run 10 test prompts and check expected patterns."""
    with open(prompts_path) as f:
        prompts = json.load(f)

    results = []
    for p in prompts:
        print(f" [{p['id']}/10] {p['category']}...", end=" ", flush=True)
        try:
            r = ollama_generate(p["prompt"], model, base_url, num_predict=512)
            response = r["response"]
            pattern = p.get("expected_pattern", "")
            matched = bool(re.search(pattern, response, re.DOTALL)) if pattern else True

            # Handle multi-turn prompts by replaying the prior turn as context.
            if "follow_up" in p:
                follow = ollama_generate(
                    f"Previous context: User said '{p['prompt']}' and you responded.\n\nUser: {p['follow_up']}",
                    model, base_url, num_predict=256
                )
                follow_matched = bool(re.search(p["expected_pattern"], follow["response"]))
                matched = matched and follow_matched
                response += "\n---FOLLOW-UP---\n" + follow["response"]

            results.append({
                "id": p["id"],
                "category": p["category"],
                "prompt": p["prompt"][:100],
                "pattern_matched": matched,
                "tok_s": r["tok_s"],
                "response_len": len(response),
            })
            status = "PASS" if matched else "FAIL"
            print(f"{status} ({r['tok_s']} tok/s)")
        except Exception as e:
            results.append({
                "id": p["id"],
                "category": p["category"],
                "pattern_matched": False,
                "error": str(e),
            })
            print(f"ERROR: {e}")

    passed = sum(1 for r in results if r.get("pattern_matched", False))
    return {
        "total": len(results),
        "passed": passed,
        "pass_rate": round(passed / len(results), 2) if results else 0,
        "details": results,
    }

# ---------------------------------------------------------------------------
# 2. Perplexity Test
# ---------------------------------------------------------------------------

def run_perplexity(model: str, base_url: str, corpus_path: str) -> dict:
    """Estimate perplexity by scoring the corpus in chunks."""
    if not os.path.exists(corpus_path):
        return {"error": f"Corpus not found: {corpus_path}", "passed": False}

    with open(corpus_path) as f:
        text = f.read()[:50000]  # Limit to 50K chars for speed

    # Generation speed over corpus chunks serves as a crude health signal;
    # real PPL requires logprob support from the backend.
    chunks = [text[i:i + 1000] for i in range(0, len(text), 1000)]
    sample_chunks = chunks[:10]  # Sample 10 chunks

    total_tokens = 0
    total_time = 0
    for chunk in sample_chunks:
        try:
            r = ollama_generate(f"Continue this text naturally:\n\n{chunk[:500]}",
                                model, base_url, num_predict=100)
            total_tokens += r["eval_count"]
            total_time += r["wall_time"]
        except Exception:
            pass

    avg_tok_s = total_tokens / total_time if total_time > 0 else 0

    return {
        "corpus": corpus_path,
        "chunks_scored": len(sample_chunks),
        "avg_tok_s": round(avg_tok_s, 1),
        "note": "Proxy metric — real PPL requires logprob support",
        "passed": True,
    }

# ---------------------------------------------------------------------------
# 3. Needle-in-Haystack
# ---------------------------------------------------------------------------

def run_needle_in_haystack(model: str, base_url: str) -> dict:
    """Test retrieval at various context lengths."""
    needle = "The secret code is TURBOQUANT-7742."
    contexts = {
        "8K": 8192,
        "16K": 16384,
        "32K": 32768,
    }

    results = {}
    for label, ctx_len in contexts.items():
        print(f" Needle-in-Haystack {label}...", end=" ", flush=True)
        # Bury the needle in the middle of a repetitive haystack.
        # Note: ctx_len counts characters here, a rough stand-in for tokens.
        filler = "The quick brown fox jumps over the lazy dog. " * (ctx_len // 50)
        haystack = f"{filler[:ctx_len // 2]}\n{needle}\n{filler[:ctx_len // 2]}"

        try:
            r = ollama_generate(
                f"Read this text and find the secret code:\n\n{haystack[:ctx_len]}",
                model, base_url,
                num_predict=64,
                num_ctx=ctx_len,
                timeout=300
            )
            found = "TURBOQUANT-7742" in r["response"] or "turboquant" in r["response"].lower()
            results[label] = {
                "retrieved": found,
                "tok_s": r["tok_s"],
                "response_excerpt": r["response"][:100],
            }
            print("PASS" if found else "FAIL")
        except Exception as e:
            results[label] = {"retrieved": False, "error": str(e)}
            print(f"ERROR: {e}")

    passed = sum(1 for r in results.values() if r.get("retrieved", False))
    return {
        "total": len(results),
        "passed": passed,
        "details": results,
    }

# ---------------------------------------------------------------------------
# 4. Performance Benchmarks
# ---------------------------------------------------------------------------

def run_performance(model: str, base_url: str) -> dict:
    """Measure tok/s, a TTFT proxy, and memory at different context sizes."""
    test_prompt = "Explain the concept of KV cache quantization in large language models. Be technical and detailed."

    perf = {}
    for ctx_label, ctx_size in [("4K", 4096), ("8K", 8192), ("16K", 16384)]:
        print(f" Performance {ctx_label}...", end=" ", flush=True)
        try:
            # TTFT proxy: wall time of a short generation (no streaming timestamps).
            r = ollama_generate(test_prompt, model, base_url,
                                num_predict=256, num_ctx=ctx_size)
            ttft = r["wall_time"]

            perf[ctx_label] = {
                "tok_s": r["tok_s"],
                "ttft_s": round(ttft, 2),
                "prompt_tokens": r["prompt_eval_count"],
                "generated_tokens": r["eval_count"],
            }
            print(f"{r['tok_s']} tok/s, TTFT {ttft:.2f}s")
        except Exception as e:
            perf[ctx_label] = {"error": str(e)}
            print(f"ERROR: {e}")

    # Peak memory (macOS). Caveat: this reads the harness's own RSS, not the
    # inference server's, so it is only a placeholder measurement.
    try:
        if sys.platform == "darwin":
            result = subprocess.run(["ps", "-o", "rss=", "-p", str(os.getpid())],
                                    capture_output=True, text=True)
            peak_mb = int(result.stdout.strip()) / 1024
        else:
            peak_mb = 0
    except Exception:
        peak_mb = 0

    return {
        "contexts": perf,
        "peak_memory_mb": round(peak_mb, 1),
    }

# ---------------------------------------------------------------------------
# 5. Context Ceiling Test
# ---------------------------------------------------------------------------

def run_context_ceiling(model: str, base_url: str) -> dict:
    """Probe increasing context lengths to find the max before failure/OOM."""
    test_prompt = "Summarize: " + "word " * 500
    # Must reach 65536 for the pass criterion below to be attainable.
    test_contexts = [4096, 8192, 16384, 32768, 65536, 131072]

    max_working = 0
    for ctx in test_contexts:
        print(f" Context ceiling {ctx}...", end=" ", flush=True)
        try:
            r = ollama_generate(test_prompt, model, base_url,
                                num_predict=32, num_ctx=ctx, timeout=120)
            max_working = ctx
            print(f"OK ({r['tok_s']} tok/s)")
        except Exception as e:
            print(f"FAIL: {e}")
            break

    return {
        "max_context": max_working,
        "minimum_required": 65536,
        "passed": max_working >= 65536,
        "tested": test_contexts,
    }

# ---------------------------------------------------------------------------
# Report Generation
# ---------------------------------------------------------------------------

def generate_report(quality: dict, perplexity: dict, needle: dict,
                    performance: dict, context: dict,
                    model: str, timestamp: str) -> Tuple[dict, str]:
    """Generate JSON + Markdown report."""

    report = {
        "timestamp": timestamp,
        "model": model,
        "quality": quality,
        "perplexity": perplexity,
        "needle_in_haystack": needle,
        "performance": performance,
        "context_ceiling": context,
    }

    # Go/no-go assessment: any recorded issue forces NO-GO.
    issues = []
    # Skipped sections (empty dicts) don't count against the verdict.
    if quality and quality.get("pass_rate", 0) < 0.9:
        issues.append(f"Quality: {quality.get('passed', 0)}/10 passed (need >=9)")
    if needle.get("passed", 0) != needle.get("total", 0):
        issues.append(f"Needle-in-Haystack: {needle.get('passed', 0)}/{needle.get('total', 0)}")
    if context.get("max_context", 0) < 65536:
        issues.append(f"Context ceiling: {context.get('max_context', 0)} < 64K required")

    report["go_no_go"] = "GO" if not issues else "NO-GO"
    report["issues"] = issues

    # Markdown
    md = f"""# TurboQuant Test Matrix Report

**Generated:** {timestamp}
**Model:** {model}

## Go/No-Go: {report['go_no_go']}

{chr(10).join('- ' + i for i in issues) if issues else 'All criteria met.'}

## Quality (10 Practical Prompts)

| # | Category | Pattern Match | tok/s |
|---|----------|--------------|-------|
"""
    for r in quality.get("details", []):
        status = "PASS" if r.get("pattern_matched") else "FAIL"
        md += f"| {r.get('id','')} | {r.get('category','')} | {status} | {r.get('tok_s','')} |\n"

    md += f"\n**Pass rate:** {quality.get('passed',0)}/{quality.get('total',0)} ({quality.get('pass_rate',0)*100:.0f}%)\n"

    md += f"""
## Perplexity

- Chunks scored: {perplexity.get('chunks_scored', 'N/A')}
- Avg tok/s: {perplexity.get('avg_tok_s', 'N/A')}
- Note: {perplexity.get('note', '')}

## Needle-in-Haystack

| Context | Retrieved | tok/s |
|---------|-----------|-------|
"""
    for label, detail in needle.get("details", {}).items():
        md += f"| {label} | {'PASS' if detail.get('retrieved') else 'FAIL'} | {detail.get('tok_s','')} |\n"

    md += f"\n**Retrieved:** {needle.get('passed',0)}/{needle.get('total',0)}\n"

    md += f"""
## Performance

| Context | tok/s | TTFT (s) | Prompt Tokens | Generated |
|---------|-------|----------|---------------|-----------|
"""
    for label, perf in performance.get("contexts", {}).items():
        md += f"| {label} | {perf.get('tok_s','')} | {perf.get('ttft_s','')} | {perf.get('prompt_tokens','')} | {perf.get('generated_tokens','')} |\n"

    md += f"\nPeak memory: {performance.get('peak_memory_mb', 'N/A')} MB\n"

    md += f"""
## Context Ceiling

- Max working context: {context.get('max_context', 'N/A')}
- Minimum required: 65536
- Passed: {'YES' if context.get('passed') else 'NO'}

---
*Generated by run_test_matrix.py. Ref: #11.*
"""
    return report, md

# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------

def main():
    parser = argparse.ArgumentParser(description="TurboQuant Full Test Matrix")
    parser.add_argument("--model", default="qwen2.5:7b")
    parser.add_argument("--base-url", default="http://localhost:11434")
    parser.add_argument("--prompts", default="benchmarks/test_prompts.json")
    parser.add_argument("--corpus", default="corpora/wiki.test.raw")
    parser.add_argument("--output-dir", default="reports")
    parser.add_argument("--skip-quality", action="store_true")
    parser.add_argument("--skip-performance", action="store_true")
    args = parser.parse_args()

    timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    date_str = datetime.now().strftime("%Y-%m-%d")

    print("=== TurboQuant Test Matrix ===")
    print(f"Model: {args.model}")
    print(f"Backend: {args.base_url}")
    print(f"Time: {timestamp}")
    print()

    quality = {}
    perplexity = {}
    needle = {}
    performance = {}
    context = {}

    if not args.skip_quality:
        print("[1/5] Quality — 10 Practical Prompts")
        quality = run_quality_prompts(args.model, args.base_url, args.prompts)
        print()

    print("[2/5] Perplexity — WikiText-2 proxy")
    perplexity = run_perplexity(args.model, args.base_url, args.corpus)
    print()

    print("[3/5] Needle-in-Haystack")
    needle = run_needle_in_haystack(args.model, args.base_url)
    print()

    if not args.skip_performance:
        print("[4/5] Performance — tok/s, TTFT, memory")
        performance = run_performance(args.model, args.base_url)
        print()

    print("[5/5] Context Ceiling")
    context = run_context_ceiling(args.model, args.base_url)
    print()

    # Generate report
    report, md = generate_report(quality, perplexity, needle, performance, context,
                                 args.model, timestamp)

    os.makedirs(args.output_dir, exist_ok=True)
    json_path = os.path.join(args.output_dir, f"test-matrix-{date_str}.json")
    md_path = os.path.join(args.output_dir, f"test-matrix-{date_str}.md")

    with open(json_path, "w") as f:
        json.dump(report, f, indent=2)
    with open(md_path, "w") as f:
        f.write(md)

    print("=== Results ===")
    print(f"Go/No-Go: {report['go_no_go']}")
    print(f"Quality: {quality.get('passed', 0)}/{quality.get('total', 0)}")
    print(f"Needle: {needle.get('passed', 0)}/{needle.get('total', 0)}")
    print(f"Context ceiling: {context.get('max_context', 0)}")
    print(f"Reports: {json_path}, {md_path}")


if __name__ == "__main__":
    main()
test-matrix report (JSON, deleted, 125 lines)
@@ -1,125 +0,0 @@
{
  "timestamp": "2026-04-15T02:07:45Z",
  "model": "qwen2.5:7b",
  "quality": {
    "total": 10,
    "passed": 10,
    "pass_rate": 1.0,
    "details": [
      {"id": 1, "category": "factual", "prompt": "What are the three laws of thermodynamics?", "pattern_matched": true, "tok_s": 53.0, "response_len": 1655},
      {"id": 2, "category": "code_generation", "prompt": "Write a Python function to merge two sorted lists into a single sorted list without using built-in s", "pattern_matched": true, "tok_s": 50.9, "response_len": 1801},
      {"id": 3, "category": "reasoning", "prompt": "If all A are B, and some B are C, what can we conclude about the relationship between A and C? Expla", "pattern_matched": true, "tok_s": 51.4, "response_len": 1787},
      {"id": 4, "category": "long_form_writing", "prompt": "Write a 500-word essay on the sovereignty of local AI. Discuss why local inference matters for priva", "pattern_matched": true, "tok_s": 52.6, "response_len": 3139},
      {"id": 5, "category": "summarization", "prompt": "Summarize the following passage in approximately 100 words:\n\nThe concept of artificial intelligence ", "pattern_matched": true, "tok_s": 54.2, "response_len": 664},
      {"id": 6, "category": "tool_call_format", "prompt": "Read the file at ~/SOUL.md and quote the prime directive. Format your response as a JSON object with", "pattern_matched": true, "tok_s": 53.9, "response_len": 374},
      {"id": 7, "category": "multi_turn_context", "prompt": "Remember this number: 7429. Simply acknowledge that you've received it.", "pattern_matched": true, "tok_s": 58.1, "response_len": 98},
      {"id": 8, "category": "math", "prompt": "What is 17 * 23 + 156 / 12? Show your work step by step.", "pattern_matched": true, "tok_s": 53.6, "response_len": 731},
      {"id": 9, "category": "creative", "prompt": "Write a haiku about a machine learning model that dreams.", "pattern_matched": true, "tok_s": 55.4, "response_len": 74},
      {"id": 10, "category": "instruction_following", "prompt": "List 5 programming languages. Number them. Bold the third one. Put the entire list in a code block.", "pattern_matched": true, "tok_s": 52.6, "response_len": 58}
    ]
  },
  "perplexity": {
    "corpus": "corpora/wiki.test.raw",
    "chunks_scored": 10,
    "avg_tok_s": 42.9,
    "note": "Proxy metric \u2014 real PPL requires logprob support",
    "passed": true
  },
  "needle_in_haystack": {
    "total": 3,
    "passed": 3,
    "details": {
      "8K": {"retrieved": true, "tok_s": 50.0, "response_excerpt": "The secret code in the text is clearly stated at the beginning: **TURBOQUANT-7742**.\n\nThis appears t"},
      "16K": {"retrieved": true, "tok_s": 40.5, "response_excerpt": "The secret code in the text is \"TURBOQUANT-7742\". This message is hidden within the repetitive phras"},
      "32K": {"retrieved": true, "tok_s": 38.7, "response_excerpt": "The secret code in the text is clearly stated as \"TURBOQUANT-7742\". This appears after a series of s"}
    }
  },
  "performance": {},
  "context_ceiling": {},
  "go_no_go": "NO-GO",
  "issues": [
    "Context ceiling: 0 < 64K required"
  ]
}
@@ -1,57 +0,0 @@
# TurboQuant Test Matrix Report

**Generated:** 2026-04-15T02:07:45Z
**Model:** qwen2.5:7b

## Go/No-Go: NO-GO

- Context ceiling: 0 < 64K required

## Quality (10 Practical Prompts)

| # | Category | Pattern Match | tok/s |
|---|----------|--------------|-------|
| 1 | factual | PASS | 53.0 |
| 2 | code_generation | PASS | 50.9 |
| 3 | reasoning | PASS | 51.4 |
| 4 | long_form_writing | PASS | 52.6 |
| 5 | summarization | PASS | 54.2 |
| 6 | tool_call_format | PASS | 53.9 |
| 7 | multi_turn_context | PASS | 58.1 |
| 8 | math | PASS | 53.6 |
| 9 | creative | PASS | 55.4 |
| 10 | instruction_following | PASS | 52.6 |

**Pass rate:** 10/10 (100%)

## Perplexity

- Chunks scored: 10
- Avg tok/s: 42.9
- Note: Proxy metric — real PPL requires logprob support

## Needle-in-Haystack

| Context | Retrieved | tok/s |
|---------|-----------|-------|
| 8K | PASS | 50.0 |
| 16K | PASS | 40.5 |
| 32K | PASS | 38.7 |

**Retrieved:** 3/3

## Performance

| Context | tok/s | TTFT (s) | Prompt Tokens | Generated |
|---------|-------|----------|---------------|-----------|

Peak memory: N/A MB

## Context Ceiling

- Max working context: N/A
- Minimum required: 65536
- Passed: NO

---

*Generated by run_test_matrix.py. Ref: #11.*
BIN tests/__pycache__/test_turboquant.cpython-312-pytest-9.0.2.pyc Normal file
Binary file not shown.
141 tests/test_turboquant.py Normal file
@@ -0,0 +1,141 @@
#!/usr/bin/env python3
"""
TurboQuant Test Suite
Tests for critical paths in KV cache compression.

Issue #679: Codebase Genome: turboquant — Full Analysis
"""
import os
import unittest


class TestTurboQuant(unittest.TestCase):
    """Test TurboQuant implementation."""

    def test_repo_structure(self):
        """Verify expected files exist."""
        required_files = [
            "llama-turbo.h",
            "llama-turbo.cpp",
            "ggml-metal-turbo.metal",
            "README.md",
            "GENOME.md"
        ]

        for filename in required_files:
            filepath = os.path.join(os.path.dirname(__file__), "..", filename)
            self.assertTrue(os.path.exists(filepath), f"Missing required file: {filename}")

    def test_benchmarks_exist(self):
        """Verify benchmark scripts exist."""
        benchmark_files = [
            "benchmarks/run_benchmarks.py",
            "benchmarks/run_perplexity.py",
            "benchmarks/run_long_session.py"
        ]

        for filename in benchmark_files:
            filepath = os.path.join(os.path.dirname(__file__), "..", filename)
            self.assertTrue(os.path.exists(filepath), f"Missing benchmark file: {filename}")

    def test_docs_complete(self):
        """Verify documentation exists."""
        doc_files = [
            "docs/PROJECT_STATUS.md",
            "profiles/README.md"
        ]

        for filename in doc_files:
            filepath = os.path.join(os.path.dirname(__file__), "..", filename)
            self.assertTrue(os.path.exists(filepath), f"Missing doc file: {filename}")

    def test_genome_generated(self):
        """Verify GENOME.md was generated."""
        genome_path = os.path.join(os.path.dirname(__file__), "..", "GENOME.md")
        self.assertTrue(os.path.exists(genome_path), "GENOME.md not found")

        # Check it has required sections
        with open(genome_path, 'r') as f:
            content = f.read()

        required_sections = [
            "## Project Overview",
            "## Architecture",
            "## Entry Points",
            "## Data Flow",
            "## Key Abstractions",
            "## API Surface",
            "## Test Coverage Gaps",
            "## Security Considerations"
        ]

        for section in required_sections:
            self.assertIn(section, content, f"GENOME.md missing section: {section}")

    def test_metal_shader_syntax(self):
        """Basic syntax check for the Metal shader."""
        shader_path = os.path.join(os.path.dirname(__file__), "..", "ggml-metal-turbo.metal")
        with open(shader_path, 'r') as f:
            content = f.read()

        # Check for key functions
        self.assertIn("kernel_fwht_128", content, "Missing kernel_fwht_128 function")
        self.assertIn("kernel_turbo4_dequant", content, "Missing kernel_turbo4_dequant function")
        self.assertIn("turbo4_centroids", content, "Missing turbo4_centroids array")

    def test_cpp_header(self):
        """Verify the C++ header has the expected declarations."""
        header_path = os.path.join(os.path.dirname(__file__), "..", "llama-turbo.h")
        with open(header_path, 'r') as f:
            content = f.read()

        # Check for function declarations
        self.assertIn("polar_quant_encode_turbo4", content, "Missing encode function")
        self.assertIn("polar_quant_decode_turbo4", content, "Missing decode function")
        self.assertIn('extern "C"', content, "Missing C linkage")


class TestBenchmarks(unittest.TestCase):
    """Test benchmark infrastructure."""

    def test_benchmark_imports(self):
        """Verify the benchmark script exists and exposes a CLI entry point."""
        benchmark_path = os.path.join(os.path.dirname(__file__), "..", "benchmarks", "run_benchmarks.py")

        # Check file exists
        self.assertTrue(os.path.exists(benchmark_path), "Benchmark script not found")

        # Check it has a main function and argument parsing
        with open(benchmark_path, 'r') as f:
            content = f.read()

        self.assertIn("def main():", content, "Benchmark script missing main function")
        self.assertIn("argparse", content, "Benchmark script missing argparse")


class TestDocumentation(unittest.TestCase):
    """Test documentation completeness."""

    def test_readme_sections(self):
        """Verify README has required sections."""
        readme_path = os.path.join(os.path.dirname(__file__), "..", "README.md")
        with open(readme_path, 'r') as f:
            content = f.read()

        required_sections = ["## What", "## Why", "## Status", "## Roles"]
        for section in required_sections:
            self.assertIn(section, content, f"README missing section: {section}")

    def test_project_status_sections(self):
        """Verify PROJECT_STATUS.md has required sections."""
        status_path = os.path.join(os.path.dirname(__file__), "..", "docs", "PROJECT_STATUS.md")
        with open(status_path, 'r') as f:
            content = f.read()

        # Check for key findings
        self.assertIn("73%", content, "Missing 73% savings metric")
        self.assertIn("PolarQuant", content, "Missing PolarQuant references")
        self.assertIn("Metal", content, "Missing Metal shader references")


if __name__ == "__main__":
    unittest.main()