Compare commits

4 commits: main ... burn/95-17

| SHA1 |
|------|
| 45840c1b70 |
| d603a1b053 |
| f3a5be5638 |
| 70d292c222 |
@@ -18,17 +18,7 @@ jobs:
          find . -name '*.py' | grep -v llama-cpp-fork | xargs -r python3 -m py_compile
          find . -name '*.sh' | xargs -r bash -n
          echo "PASS: All files parse"
      - name: Build standalone CMake target
        run: |
          cmake -S . -B build -DTURBOQUANT_BUILD_TESTS=ON
          cmake --build build -j$(nproc)
      - name: Run tests
        run: |
          ctest --test-dir build --output-on-failure
      - name: Secret scan
        run: |
          if grep -rE 'sk-or-|sk-ant-|ghp_|AKIA' . --include='*.yml' --include='*.py' --include='*.sh' 2>/dev/null | grep -v .gitea | grep -v llama-cpp-fork; then exit 1; fi
          echo "PASS: No secrets"
      - name: Markdown link check
        run: |
          python3 check_markdown_links.py

benchmarks/allegro-2026-04-14.md (new file, 113 lines)
@@ -0,0 +1,113 @@

# Allegro VPS Benchmark Analysis — 2026-04-14

## Hardware

| Spec | Value |
|------|-------|
| Hostname | allegro |
| IP | 167.99.126.228 |
| Cores | 2 |
| RAM | 8GB |
| GPU | No (CPU-only) |
| Arch | x86_64 |
| Available for model | ~6GB (2GB reserved for OS + hermes agent) |

## Preset Analysis

Based on GGUF model sizes and TurboQuant KV cache memory math.
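
The KV cache figures below follow the estimator used by `evolution.quant_selector.estimate_kv_cache_gb`:

```
KV bytes = 2 (K+V) × layers × kv_heads × head_dim × context_length × bits/8
```

As a sanity check with that function's defaults (48 layers, 8 KV heads, head_dim 128 — illustrative, not specific to the Qwen presets below): turbo4 (3.5 bits/channel) at 32K context is 2 × 48 × 8 × 128 × 32768 × 3.5/8 bytes ≈ 1.31 GB. The per-preset numbers in the table use each model's own layer/head counts, so they differ from this example.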

### Memory Budget

```
Total RAM:          8,192 MB
OS + hermes agent: -2,048 MB
Available:          6,144 MB
```

### Preset Memory Estimates

| Preset | Model Size | Context | KV Type | KV Cache | Total Est. | Fits? |
|--------|-----------|---------|---------|----------|------------|-------|
| tiny-2b-q4 | 1,536 MB | 4K | f16 | 256 MB | ~2,800 MB | YES |
| small-3b-q4 | 2,048 MB | 8K | turbo2 | 512 MB | ~3,600 MB | YES |
| medium-7b-q4 | 4,096 MB | 8K | turbo4 | 384 MB | ~5,200 MB | YES |
| medium-7b-q4-long | 4,096 MB | 32K | turbo4 | 1,024 MB | ~5,800 MB | YES |
| large-14b-q3 | 6,656 MB | 4K | turbo4 | 320 MB | ~7,200 MB | NO* |

*The large preset needs swap or it will OOM. Usable for batch jobs with `--mlock` disabled.

### Estimated Performance (CPU-only, 2 cores)

These are theoretical estimates based on model size and CPU throughput.
Actual results depend on prompt length, generation length, and system load.

| Preset | Est. tok/s | Est. TTFT | Use Case |
|--------|-----------|-----------|----------|
| tiny-2b-q4 | 8-15 | 1.5-3.0s | Simple Q&A, triage, short completions |
| small-3b-q4 | 5-10 | 2.0-5.0s | Code gen, tool calling, burn-loop workers |
| medium-7b-q4 | 2-5 | 4.0-8.0s | Reasoning, multi-turn conversation |
| medium-7b-q4-long | 1.5-4 | 6.0-12.0s | Long docs, code review, research |
| large-14b-q3 | 0.5-2 | 10-30s | Batch processing only (needs swap) |

## Recommendation

**Default: `medium` (7B Q4 + TurboQuant)**

- Best quality that fits comfortably in the 6GB budget
- 2-5 tok/s is usable for interactive work (burn-loop, conversation)
- TurboQuant KV4 keeps 8K context at ~384MB cache

**For burn-loop workers: `small` (3B Q4 + TurboQuant2)**

- 5-10 tok/s is better for high-throughput batch work
- Lower memory footprint leaves room for multiple workers

**For long documents: `medium-long` (7B Q4 + TurboQuant4, 32K)**

- 32K context for code review, research papers
- Stays within the 6GB budget with q3_k KV compression

## Server Startup Commands

### Ollama (simplest)
```bash
# Tiny
ollama pull qwen2.5:1.5b

# Small
ollama pull qwen2.5:3b

# Medium (recommended)
ollama pull qwen2.5:7b
```

### llama-server with TurboQuant
```bash
# Medium preset
export TURBO_LAYER_ADAPTIVE=7
llama-server \
  -m /models/qwen2.5-7b-instruct-q4_k_m.gguf \
  --port 8081 \
  -t 2 \
  -c 8192 \
  -b 512 \
  -ctk q4_0 -ctv q4_0 \
  --host 0.0.0.0
```

### Run Benchmarks
```bash
# All presets
python3 benchmarks/run_allegro_benchmarks.py --all --markdown

# Specific preset
python3 benchmarks/run_allegro_benchmarks.py --preset medium \
  --url http://localhost:11434
```

## Next Steps

1. Run benchmarks on the Allegro VPS: `python3 benchmarks/run_allegro_benchmarks.py --all --markdown`
2. Update this document with actual measured results
3. Set `recommended_preset` based on measured performance
4. Create a hermes profile for each viable preset

benchmarks/run_allegro_benchmarks.py (new file, 512 lines)
@@ -0,0 +1,512 @@
#!/usr/bin/env python3
"""
Allegro VPS Benchmark Runner — TurboQuant presets on 2 cores, 8GB RAM.

Runs each preset from profiles/allegro-cpu-presets.yaml against the
benchmark prompts, measuring tokens/sec, latency, TTFT, and memory.

Designed for CPU-only inference (no GPU) on the Allegro VPS.

Usage:
    # Run all presets
    python3 benchmarks/run_allegro_benchmarks.py --all

    # Run specific preset
    python3 benchmarks/run_allegro_benchmarks.py --preset medium

    # Dry run (validate config, no inference)
    python3 benchmarks/run_allegro_benchmarks.py --dry-run

    # Output markdown report
    python3 benchmarks/run_allegro_benchmarks.py --all --markdown

    # Against remote Ollama
    python3 benchmarks/run_allegro_benchmarks.py --preset small \
        --url http://167.99.126.228:11434
"""

import argparse
import json
import os
import subprocess
import sys
import time
from datetime import datetime
from pathlib import Path

ROOT = Path(__file__).resolve().parents[1]
PRESETS_FILE = ROOT / "profiles" / "allegro-cpu-presets.yaml"
PROMPTS_FILE = ROOT / "benchmarks" / "prompts.json"
RESULTS_DIR = ROOT / "benchmarks"

try:
    import requests
except ImportError:
    requests = None


# ── Hardware Detection ────────────────────────────────────────────────────

def detect_hardware() -> dict:
    """Detect current hardware specs."""
    info = {
        "hostname": "",
        "cores": os.cpu_count() or 0,
        "ram_gb": 0,
        "gpu": False,
        "arch": "",
    }
    try:
        import platform
        info["hostname"] = platform.node()
        info["arch"] = platform.machine()
    except Exception:
        pass

    # RAM detection (Linux)
    try:
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    kb = int(line.split()[1])
                    info["ram_gb"] = round(kb / 1024 / 1024, 1)
                    break
    except Exception:
        # macOS fallback
        try:
            result = subprocess.run(["sysctl", "-n", "hw.memsize"],
                                    capture_output=True, text=True)
            bytes_val = int(result.stdout.strip())
            info["ram_gb"] = round(bytes_val / 1024**3, 1)
        except Exception:
            pass

    # GPU detection
    try:
        result = subprocess.run(["nvidia-smi", "--query-gpu=name",
                                 "--format=csv,noheader"],
                                capture_output=True, text=True, timeout=5)
        if result.returncode == 0 and result.stdout.strip():
            info["gpu"] = True
    except Exception:
        pass

    return info


def get_memory_usage_gb() -> float:
    """Get current process RSS in GB."""
    try:
        if sys.platform == "darwin":
            result = subprocess.run(["ps", "-o", "rss=", "-p", str(os.getpid())],
                                    capture_output=True, text=True)
            return int(result.stdout.strip()) / 1024 / 1024
        else:
            with open(f"/proc/{os.getpid()}/status") as f:
                for line in f:
                    if line.startswith("VmRSS:"):
                        return int(line.split()[1]) / 1024 / 1024
    except Exception:
        pass
    return 0.0


def get_system_memory_gb() -> float:
    """Get available system memory in GB."""
    try:
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemAvailable:"):
                    kb = int(line.split()[1])
                    return round(kb / 1024 / 1024, 2)
    except Exception:
        pass
    return 0.0


# ── Preset Loading ────────────────────────────────────────────────────────

def load_presets() -> dict:
    """Load preset configuration from YAML."""
    try:
        import yaml
        with open(PRESETS_FILE) as f:
            return yaml.safe_load(f)
    except ImportError:
        # Fallback: parse basic YAML manually
        import re
        with open(PRESETS_FILE) as f:
            content = f.read()
        # Very basic YAML parsing — just enough to extract preset names
        # (two-space-indented keys under `presets:`).
        presets = {}
        current = None
        for line in content.split("\n"):
            m = re.match(r"^  (\w+):$", line)
            if m:
                current = m.group(1)
                presets[current] = {"name": current}
        return {"presets": presets}


def load_prompts(path: Path = PROMPTS_FILE) -> list:
    """Load benchmark prompts."""
    with open(path) as f:
        return json.load(f)


# ── Inference Backends ────────────────────────────────────────────────────

def run_ollama(prompt: str, model: str, url: str, timeout: int = 120) -> dict:
    """Run inference against Ollama."""
    if requests is None:
        return {"status": "failed", "error": "requests not installed"}

    api_url = f"{url.rstrip('/')}/api/generate"
    start = time.time()
    mem_before = get_memory_usage_gb()
    sys_mem_before = get_system_memory_gb()

    try:
        resp = requests.post(api_url, json={
            "model": model,
            "prompt": prompt,
            "stream": False,
            "options": {"num_predict": 256}
        }, timeout=timeout)
        elapsed = time.time() - start
        mem_after = get_memory_usage_gb()
        sys_mem_after = get_system_memory_gb()

        resp.raise_for_status()
        data = resp.json()

        response_text = data.get("response", "")
        eval_count = data.get("eval_count", 0)
        eval_duration_ns = data.get("eval_duration", 0)
        prompt_eval_ns = data.get("prompt_eval_duration", 0)

        tok_per_sec = 0.0
        ttft = None
        if eval_duration_ns > 0:
            tok_per_sec = eval_count / (eval_duration_ns / 1e9)
        if prompt_eval_ns > 0:
            ttft = prompt_eval_ns / 1e9

        return {
            "response": response_text[:200],
            "latency_s": round(elapsed, 3),
            "ttft_s": round(ttft, 3) if ttft else None,
            "tokens_per_sec": round(tok_per_sec, 2),
            "eval_count": eval_count,
            "memory_gb": round(max(mem_before, mem_after), 2),
            "system_mem_available_gb": round(sys_mem_after, 2),
            "system_mem_delta_gb": round(sys_mem_before - sys_mem_after, 2),
            "status": "success",
        }
    except Exception as e:
        return {
            "status": "failed",
            "error": str(e)[:200],
            "latency_s": round(time.time() - start, 3),
        }


def run_llama_server(prompt: str, model: str, url: str,
                     kv_type: str = "f16", timeout: int = 120) -> dict:
    """Run inference against llama-server (OpenAI-compatible)."""
    if requests is None:
        return {"status": "failed", "error": "requests not installed"}

    api_url = f"{url.rstrip('/')}/v1/chat/completions"
    start = time.time()
    mem_before = get_memory_usage_gb()
    sys_mem_before = get_system_memory_gb()

    try:
        resp = requests.post(api_url, json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 256,
            "stream": False,
        }, timeout=timeout)
        elapsed = time.time() - start
        mem_after = get_memory_usage_gb()
        sys_mem_after = get_system_memory_gb()

        resp.raise_for_status()
        data = resp.json()

        choice = data.get("choices", [{}])[0]
        response_text = choice.get("message", {}).get("content", "")
        usage = data.get("usage", {})
        completion_tokens = usage.get("completion_tokens", 0)

        # No eval timings in this response, so approximate throughput from
        # wall time minus a small fixed request-overhead allowance.
        tok_per_sec = 0.0
        if elapsed > 0 and completion_tokens > 0:
            tok_per_sec = completion_tokens / max(elapsed - 0.1, 0.01)

        return {
            "response": response_text[:200],
            "latency_s": round(elapsed, 3),
            "ttft_s": None,
            "tokens_per_sec": round(tok_per_sec, 2),
            "completion_tokens": completion_tokens,
            "kv_type": kv_type,
            "memory_gb": round(max(mem_before, mem_after), 2),
            "system_mem_available_gb": round(sys_mem_after, 2),
            "system_mem_delta_gb": round(sys_mem_before - sys_mem_after, 2),
            "status": "success",
        }
    except Exception as e:
        return {
            "status": "failed",
            "error": str(e)[:200],
            "latency_s": round(time.time() - start, 3),
        }


# ── Benchmark Runner ──────────────────────────────────────────────────────

def run_preset(preset: dict, backend: str, url: str, prompts: list,
               timeout: int = 120, dry_run: bool = False) -> dict:
    """Run a single preset against all prompts."""
    name = preset.get("name", "unknown")
    model = preset.get("ollama_model", "") if backend == "ollama" else preset.get("llama_cpp_model", "")
    kv_type = preset.get("kv_type", "f16")

    run_fn = run_ollama if backend == "ollama" else run_llama_server

    print(f"\nPreset: {name} (model={model}, kv={kv_type})")
    print(f"  Estimated RAM: {preset.get('estimated_ram_gb', '?')}GB | "
          f"Fits Allegro: {preset.get('fits_in_allegro', '?')}")

    if dry_run:
        print("  [DRY RUN] Skipping inference")
        return {"preset": name, "status": "dry_run", "results": []}

    results = []
    for item in prompts:
        pid = item.get("id", item.get("category", "unknown"))
        prompt = item["prompt"]
        print(f"  [{pid}] ...", end=" ", flush=True)

        if backend == "ollama":
            result = run_fn(prompt, model, url, timeout=timeout)
        else:
            result = run_fn(prompt, model, url, kv_type=kv_type, timeout=timeout)

        result["id"] = pid
        result["prompt_preview"] = prompt[:80]
        results.append(result)

        status = "OK" if result["status"] == "success" else "FAIL"
        tps = result.get("tokens_per_sec", 0)
        lat = result.get("latency_s", 0)
        mem = result.get("system_mem_available_gb", 0)
        print(f"{status} {tps:.1f} tok/s {lat:.1f}s mem={mem:.1f}GB")

    # Summary
    successes = [r for r in results if r["status"] == "success"]
    summary = {
        "preset": name,
        "model": model,
        "kv_type": kv_type,
        "total": len(results),
        "success": len(successes),
        "failed": len(results) - len(successes),
        "avg_tok_per_sec": round(
            sum(r.get("tokens_per_sec", 0) for r in successes) / max(len(successes), 1), 2
        ),
        "avg_latency_s": round(
            sum(r.get("latency_s", 0) for r in successes) / max(len(successes), 1), 3
        ),
        "peak_memory_gb": round(
            max((r.get("memory_gb", 0) for r in results), default=0), 2
        ),
        "min_system_mem_available_gb": round(
            min((r.get("system_mem_available_gb", 999) for r in results), default=0), 2
        ),
        "results": results,
    }

    print(f"  SUMMARY: {summary['success']}/{summary['total']} OK | "
          f"Avg {summary['avg_tok_per_sec']:.1f} tok/s | "
          f"Peak {summary['peak_memory_gb']:.1f}GB | "
          f"Min avail {summary['min_system_mem_available_gb']:.1f}GB")

    return summary


def generate_report(all_results: list, hw_info: dict, output_dir: str) -> str:
    """Generate markdown benchmark report."""
    today = datetime.now().strftime("%Y-%m-%d")
    lines = [
        f"# Allegro VPS Benchmark Results — {today}",
        "",
        "## Hardware",
        "",
        "| Spec | Value |",
        "|------|-------|",
        f"| Hostname | {hw_info.get('hostname', 'unknown')} |",
        f"| Cores | {hw_info.get('cores', '?')} |",
        f"| RAM | {hw_info.get('ram_gb', '?')}GB |",
        f"| GPU | {'Yes' if hw_info.get('gpu') else 'No (CPU-only)'} |",
        f"| Arch | {hw_info.get('arch', '?')} |",
        "",
        "## Results Summary",
        "",
        "| Preset | Model | KV | tok/s | Latency (s) | Peak Mem (GB) | Status |",
        "|--------|-------|-----|-------|-------------|---------------|--------|",
    ]

    for r in all_results:
        status = "PASS" if r["success"] == r["total"] else f"{r['success']}/{r['total']}"
        lines.append(
            f"| {r['preset']} | {r['model']} | {r['kv_type']} | "
            f"{r['avg_tok_per_sec']} | {r['avg_latency_s']} | "
            f"{r['peak_memory_gb']} | {status} |"
        )

    # Find minimum viable preset
    viable = [r for r in all_results
              if r["success"] == r["total"]
              and r.get("min_system_mem_available_gb", 0) > 1.0]
    if viable:
        best = min(viable, key=lambda x: x["peak_memory_gb"])
        lines.extend([
            "",
            "## Minimum Viable Preset",
            "",
            f"**{best['preset']}** ({best['model']}, {best['kv_type']})",
            f"- Peak memory: {best['peak_memory_gb']}GB",
            f"- Min available system memory: {best['min_system_mem_available_gb']}GB",
            f"- Avg performance: {best['avg_tok_per_sec']} tok/s",
            "",
            "Fits within the 6GB budget (8GB - 2GB OS reserve).",
        ])
    else:
        lines.extend([
            "",
            "## Minimum Viable Preset",
            "",
            "No preset passed all tests with >1GB system memory headroom.",
            "Recommendation: use `tiny` or `small` presets.",
        ])

    lines.extend([
        "",
        "## Per-Preset Details",
        "",
    ])

    for r in all_results:
        lines.extend([
            f"### {r['preset']}",
            "",
            f"- Model: `{r['model']}`",
            f"- KV type: `{r['kv_type']}`",
            f"- Avg tok/s: {r['avg_tok_per_sec']}",
            f"- Avg latency: {r['avg_latency_s']}s",
            f"- Peak memory: {r['peak_memory_gb']}GB",
            "",
            "| Prompt | tok/s | Latency (s) | Status |",
            "|--------|-------|-------------|--------|",
        ])
        for res in r.get("results", []):
            pid = res.get("id", "?")
            tps = res.get("tokens_per_sec", 0)
            lat = res.get("latency_s", 0)
            st = res.get("status", "?")
            lines.append(f"| {pid} | {tps} | {lat} | {st} |")
        lines.append("")

    report = "\n".join(lines)

    output_path = os.path.join(output_dir, f"allegro-{today}.md")
    with open(output_path, "w") as f:
        f.write(report)
    print(f"\nReport saved to {output_path}")

    return report


# ── CLI ───────────────────────────────────────────────────────────────────

def main():
    parser = argparse.ArgumentParser(description="Allegro VPS Benchmark Runner")
    parser.add_argument("--all", action="store_true", help="Run all presets")
    parser.add_argument("--preset", help="Run a specific preset")
    parser.add_argument("--backend", choices=["ollama", "llama-server"],
                        default="ollama", help="Inference backend")
    parser.add_argument("--url", default="http://localhost:11434",
                        help="Backend URL")
    parser.add_argument("--prompts", default=None, help="Prompts file")
    parser.add_argument("--timeout", type=int, default=120,
                        help="Per-prompt timeout (s)")
    parser.add_argument("--dry-run", action="store_true",
                        help="Validate config without inference")
    parser.add_argument("--markdown", action="store_true",
                        help="Generate markdown report")
    parser.add_argument("--json", dest="json_output", action="store_true",
                        help="JSON output")
    args = parser.parse_args()

    if not args.all and not args.preset:
        parser.error("Specify --all or --preset <name>")

    # Load config
    config = load_presets()
    presets = config.get("presets", {})
    prompts_file = Path(args.prompts) if args.prompts else PROMPTS_FILE
    prompts = load_prompts(prompts_file) if prompts_file.exists() else []

    # Hardware info
    hw_info = detect_hardware()
    print(f"Hardware: {hw_info['cores']} cores, {hw_info['ram_gb']}GB RAM, "
          f"{'GPU' if hw_info['gpu'] else 'CPU-only'}")

    # Determine which presets to run
    if args.all:
        preset_names = list(presets.keys())
    else:
        preset_names = [args.preset]

    all_results = []
    for pname in preset_names:
        if pname not in presets:
            print(f"Unknown preset: {pname}")
            continue
        preset = presets[pname]
        result = run_preset(preset, args.backend, args.url, prompts,
                            timeout=args.timeout, dry_run=args.dry_run)
        all_results.append(result)

    # Output
    if args.json_output:
        print(json.dumps(all_results, indent=2))
    elif args.markdown:
        generate_report(all_results, hw_info, str(RESULTS_DIR))
    else:
        # Summary table
        print(f"\n{'='*70}")
        print(f"{'Preset':<20} {'Model':<25} {'tok/s':<8} {'Lat(s)':<8} {'Mem(GB)':<8}")
        print(f"{'-'*70}")
        for r in all_results:
            print(f"{r['preset']:<20} {r.get('model','?'):<25} "
                  f"{r.get('avg_tok_per_sec',0):<8} "
                  f"{r.get('avg_latency_s',0):<8} "
                  f"{r.get('peak_memory_gb',0):<8}")
        print(f"{'='*70}")

    # Save raw results
    ts = int(time.time())
    raw_path = str(RESULTS_DIR / f"allegro_results_{ts}.json")
    os.makedirs(os.path.dirname(raw_path), exist_ok=True)
    with open(raw_path, "w") as f:
        json.dump({"hardware": hw_info, "results": all_results}, f, indent=2)
    print(f"Raw results: {raw_path}")


if __name__ == "__main__":
    main()

check_markdown_links.py (deleted, 124 lines)
@@ -1,124 +0,0 @@
#!/usr/bin/env python3
"""Check local markdown links.

Scans markdown files for local links and fails on broken targets.
Ignores:
- external URLs (http/https)
- anchors (#section)
- mailto: and tel:
- links inside fenced code blocks
- generated/build directories
"""

from __future__ import annotations

import argparse
import re
import sys
from pathlib import Path
from typing import Iterable

CODE_FENCE_RE = re.compile(r"^```")
LINK_RE = re.compile(r"(?<!!)\[[^\]]+\]\(([^)]+)\)")
DEFAULT_SKIP_DIRS = {
    ".git",
    ".gitea",
    ".pytest_cache",
    "__pycache__",
    "build",
    "dist",
    "node_modules",
    "llama-cpp-fork",
}


def should_ignore_target(target: str) -> bool:
    target = target.strip()
    return (
        not target
        or target.startswith("http://")
        or target.startswith("https://")
        or target.startswith("mailto:")
        or target.startswith("tel:")
        or target.startswith("#")
    )


def normalize_target(target: str) -> str:
    target = target.strip()
    if target.startswith("<") and target.endswith(">"):
        target = target[1:-1].strip()
    if "#" in target:
        target = target.split("#", 1)[0]
    return target
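
# Examples of the two helpers above (paths illustrative):
#   should_ignore_target("https://example.com/x")  -> True
#   normalize_target("<./docs/guide.md#setup>")    -> "./docs/guide.md"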


def iter_markdown_files(root: Path, skip_dirs: set[str] | None = None) -> Iterable[Path]:
    skip_dirs = skip_dirs or DEFAULT_SKIP_DIRS
    for path in root.rglob("*.md"):
        if any(part in skip_dirs for part in path.relative_to(root).parts):
            continue
        yield path


def iter_links(path: Path) -> Iterable[tuple[int, str]]:
    in_code_fence = False
    for line_no, line in enumerate(path.read_text(encoding="utf-8").splitlines(), start=1):
        if CODE_FENCE_RE.match(line.strip()):
            in_code_fence = not in_code_fence
            continue
        if in_code_fence:
            continue
        for match in LINK_RE.finditer(line):
            yield line_no, match.group(1)


def resolve_target(source: Path, target: str, root: Path) -> Path:
    if target.startswith("/"):
        return (root / target.lstrip("/")).resolve()
    return (source.parent / target).resolve()


def find_broken_links(root: Path, skip_dirs: set[str] | None = None) -> list[dict]:
    root = root.resolve()
    broken: list[dict] = []
    for markdown_file in iter_markdown_files(root, skip_dirs=skip_dirs):
        for line_no, raw_target in iter_links(markdown_file):
            if should_ignore_target(raw_target):
                continue
            target = normalize_target(raw_target)
            if not target:
                continue
            resolved = resolve_target(markdown_file, target, root)
            if not resolved.exists():
                broken.append(
                    {
                        "source": str(markdown_file),
                        "line": line_no,
                        "target": target,
                        "resolved": str(resolved),
                    }
                )
    return broken


def main() -> int:
    parser = argparse.ArgumentParser(description="Fail on broken local markdown links.")
    parser.add_argument("root", nargs="?", default=".", help="Repo root to scan (default: .)")
    args = parser.parse_args()

    root = Path(args.root)
    broken = find_broken_links(root)
    if not broken:
        print("PASS: No broken local markdown links")
        return 0

    print("Broken local markdown links found:")
    for item in broken:
        source = Path(item["source"]).relative_to(root.resolve())
        print(f"{source}:{item['line']}: missing target -> {item['target']}")
    return 1


if __name__ == "__main__":
    sys.exit(main())

@@ -385,7 +385,7 @@ Step 7: If pass → production. If fail → drop to turbo3 or adjust per-layer p

 ---

-*Repo: https://forge.alexanderwhitestone.com/Timmy_Foundation/turboquant*
+*Repo: http://143.198.27.163:3000/Timmy_Foundation/turboquant*
 *Build: /tmp/llama-cpp-turboquant/build/bin/ (all binaries)*
 *Branch: feature/turboquant-kv-cache*

hardware_optimizer.py (modified)
@@ -1,29 +1,5 @@
-"""Phase 19: Hardware-Aware Inference Optimization.
-
-Part of the TurboQuant suite for local inference excellence.
-"""
-
-import logging
-
-# ... (rest of the code)
+"""Backward-compatible shim for hardware-aware quantization selection.
+
+The original Phase 19 placeholder `hardware_optimizer.py` never shipped real
+logic. The canonical implementation now lives in `evolution.quant_selector`.
+This shim preserves the legacy import path for any downstream callers while
+making `quant_selector.py` the single source of truth.
+"""
+
+from evolution.quant_selector import (  # noqa: F401
+    HardwareInfo,
+    QuantLevel,
+    QuantSelection,
+    QUANT_LEVELS,
+    detect_hardware,
+    estimate_kv_cache_gb,
+    estimate_model_memory_gb,
+    select_quant_level,
+)
+
+__all__ = [
+    "HardwareInfo",
+    "QuantLevel",
+    "QuantSelection",
+    "QUANT_LEVELS",
+    "detect_hardware",
+    "estimate_kv_cache_gb",
+    "estimate_model_memory_gb",
+    "select_quant_level",
+]
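
A minimal smoke check for the shim (a sketch; the legacy module's exact location on the import path is an assumption, since the diff does not show it):

```python
# Assumes hardware_optimizer is importable from the repo root. The shim
# should re-export the canonical objects unchanged, so identity holds.
import hardware_optimizer
from evolution import quant_selector

assert hardware_optimizer.select_quant_level is quant_selector.select_quant_level
```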

evolution/quant_selector.py (deleted, 548 lines)
@@ -1,548 +0,0 @@
"""Auto-select TurboQuant compression level based on available VRAM/RAM.
|
||||
|
||||
Detects hardware resources at startup and picks the highest quality
|
||||
quantization level that fits within available memory. Supports Apple
|
||||
Silicon unified memory, NVIDIA GPUs (via nvidia-smi), and CPU-only fallback.
|
||||
|
||||
Usage:
|
||||
from evolution.quant_selector import select_quant_level
|
||||
|
||||
selection = select_quant_level(model_size_gb=14.0, context_length=32768)
|
||||
print(selection.level) # "turbo4"
|
||||
print(selection.reasoning) # "M4 Max 36GB unified: turbo4 fits 14.0GB model + ..."
|
||||
print(selection.env_vars) # {"TURBO_LAYER_ADAPTIVE": "7"}
|
||||
"""
|
||||
|
||||
import logging
|
||||
import os
|
||||
import platform
|
||||
import subprocess
|
||||
import sys
|
||||
from dataclasses import dataclass, field
|
||||
from pathlib import Path
|
||||
from typing import Optional
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
# ── Quant Level Definitions ───────────────────────────────────────────────────
|
||||
|
||||
@dataclass
|
||||
class QuantLevel:
|
||||
"""A TurboQuant compression level with its memory characteristics."""
|
||||
name: str # e.g. "turbo4"
|
||||
bits_per_channel: float # e.g. 3.5 for turbo4
|
||||
compression_ratio: float # vs uncompressed KV cache
|
||||
quality_label: str # "best", "high", "balanced", "fast"
|
||||
layer_adaptive: int # TURBO_LAYER_ADAPTIVE value (0-7)
|
||||
kv_type: str # -ctk/-ctv flag value
|
||||
min_memory_headroom_gb: float # Minimum free memory to recommend this level
|
||||
description: str = ""
|
||||
|
||||
|
||||
# Ordered from highest quality to most aggressive compression
|
||||
QUANT_LEVELS = [
|
||||
QuantLevel(
|
||||
name="turbo4",
|
||||
bits_per_channel=3.5,
|
||||
compression_ratio=4.2,
|
||||
quality_label="best",
|
||||
layer_adaptive=7,
|
||||
kv_type="turbo4",
|
||||
min_memory_headroom_gb=4.0,
|
||||
description="PolarQuant + QJL 4-bit. Best quality, ~4.2x KV compression."
|
||||
),
|
||||
QuantLevel(
|
||||
name="turbo3",
|
||||
bits_per_channel=2.5,
|
||||
compression_ratio=6.0,
|
||||
quality_label="high",
|
||||
layer_adaptive=5,
|
||||
kv_type="turbo3",
|
||||
min_memory_headroom_gb=3.0,
|
||||
description="3-bit TurboQuant. High quality, ~6x KV compression."
|
||||
),
|
||||
QuantLevel(
|
||||
name="turbo2",
|
||||
bits_per_channel=1.5,
|
||||
compression_ratio=10.0,
|
||||
quality_label="balanced",
|
||||
layer_adaptive=3,
|
||||
kv_type="turbo2",
|
||||
min_memory_headroom_gb=2.0,
|
||||
description="2-bit TurboQuant. Balanced, ~10x KV compression."
|
||||
),
|
||||
QuantLevel(
|
||||
name="q4_0",
|
||||
bits_per_channel=4.0,
|
||||
compression_ratio=3.5,
|
||||
quality_label="fast",
|
||||
layer_adaptive=0,
|
||||
kv_type="q4_0",
|
||||
min_memory_headroom_gb=1.5,
|
||||
description="Standard 4-bit quant. Fast fallback, no TurboQuant."
|
||||
),
|
||||
]
|
||||
|
||||
|
||||
# ── Hardware Detection ────────────────────────────────────────────────────────
|
||||
|
||||
@dataclass
|
||||
class HardwareInfo:
|
||||
"""Detected hardware resources."""
|
||||
total_memory_gb: float
|
||||
available_memory_gb: float
|
||||
gpu_memory_gb: Optional[float] = None
|
||||
gpu_name: Optional[str] = None
|
||||
is_apple_silicon: bool = False
|
||||
chip_name: Optional[str] = None
|
||||
cpu_cores: int = 0
|
||||
detection_method: str = ""
|
||||
|
||||
|
||||
def detect_hardware() -> HardwareInfo:
|
||||
"""Detect available memory and GPU resources."""
|
||||
system = platform.system()
|
||||
|
||||
if system == "Darwin":
|
||||
return _detect_apple_silicon()
|
||||
elif system == "Linux":
|
||||
return _detect_linux()
|
||||
else:
|
||||
return _detect_generic(system)
|
||||
|
||||
|
||||
def _detect_apple_silicon() -> HardwareInfo:
|
||||
"""Detect Apple Silicon unified memory."""
|
||||
info = HardwareInfo(
|
||||
total_memory_gb=0,
|
||||
available_memory_gb=0,
|
||||
is_apple_silicon=True,
|
||||
detection_method="sysctl",
|
||||
)
|
||||
|
||||
try:
|
||||
# Get total memory
|
||||
result = subprocess.run(
|
||||
["sysctl", "-n", "hw.memsize"],
|
||||
capture_output=True, text=True, timeout=5
|
||||
)
|
||||
if result.returncode == 0:
|
||||
info.total_memory_gb = int(result.stdout.strip()) / (1024**3)
|
||||
|
||||
# Get chip name
|
||||
result = subprocess.run(
|
||||
["sysctl", "-n", "machdep.cpu.brand_string"],
|
||||
capture_output=True, text=True, timeout=5
|
||||
)
|
||||
if result.returncode == 0:
|
||||
info.chip_name = result.stdout.strip()
|
||||
|
||||
# Try to get GPU name (Apple Silicon)
|
||||
result = subprocess.run(
|
||||
["system_profiler", "SPDisplaysDataType"],
|
||||
capture_output=True, text=True, timeout=10
|
||||
)
|
||||
if result.returncode == 0:
|
||||
for line in result.stdout.split("\n"):
|
||||
if "Chipset" in line or "GPU" in line:
|
||||
info.gpu_name = line.split(":")[-1].strip()
|
||||
break
|
||||
|
||||
# Estimate available memory (vm_stat)
|
||||
result = subprocess.run(
|
||||
["vm_stat"],
|
||||
capture_output=True, text=True, timeout=5
|
||||
)
|
||||
if result.returncode == 0:
|
||||
page_size = 4096 # macOS default
|
||||
free_pages = 0
|
||||
for line in result.stdout.split("\n"):
|
||||
if "Pages free:" in line:
|
||||
try:
|
||||
free_pages = int(line.split(":")[-1].strip().rstrip("."))
|
||||
except ValueError:
|
||||
pass
|
||||
# Available ≈ free + some speculative (conservative: just free)
|
||||
info.available_memory_gb = (free_pages * page_size) / (1024**3)
|
||||
|
||||
# Fallback if vm_stat parsing failed
|
||||
if info.available_memory_gb < 1:
|
||||
# Conservative: 70% of total
|
||||
info.available_memory_gb = info.total_memory_gb * 0.70
|
||||
|
||||
# Apple Silicon shares memory — GPU memory = total memory
|
||||
info.gpu_memory_gb = info.total_memory_gb
|
||||
|
||||
# Detect CPU cores
|
||||
result = subprocess.run(
|
||||
["sysctl", "-n", "hw.ncpu"],
|
||||
capture_output=True, text=True, timeout=5
|
||||
)
|
||||
if result.returncode == 0:
|
||||
info.cpu_cores = int(result.stdout.strip())
|
||||
|
||||
except Exception as e:
|
||||
logger.warning(f"Apple Silicon detection failed: {e}")
|
||||
# Fallback
|
||||
info.total_memory_gb = 16.0
|
||||
info.available_memory_gb = 12.0
|
||||
info.detection_method = "fallback"
|
||||
|
||||
return info
|
||||
|
||||
|
||||
def _detect_linux() -> HardwareInfo:
|
||||
"""Detect Linux system with optional NVIDIA GPU."""
|
||||
info = HardwareInfo(
|
||||
total_memory_gb=0,
|
||||
available_memory_gb=0,
|
||||
detection_method="proc",
|
||||
)
|
||||
|
||||
try:
|
||||
# Read /proc/meminfo
|
||||
with open("/proc/meminfo", "r") as f:
|
||||
meminfo = f.read()
|
||||
|
||||
for line in meminfo.split("\n"):
|
||||
if line.startswith("MemTotal:"):
|
||||
kb = int(line.split()[1])
|
||||
info.total_memory_gb = kb / (1024 * 1024)
|
||||
elif line.startswith("MemAvailable:"):
|
||||
kb = int(line.split()[1])
|
||||
info.available_memory_gb = kb / (1024 * 1024)
|
||||
|
||||
# CPU cores
|
||||
info.cpu_cores = os.cpu_count() or 1
|
||||
|
||||
# Check for NVIDIA GPU
|
||||
try:
|
||||
result = subprocess.run(
|
||||
["nvidia-smi", "--query-gpu=name,memory.total,memory.free",
|
||||
"--format=csv,noheader,nounits"],
|
||||
capture_output=True, text=True, timeout=10
|
||||
)
|
||||
if result.returncode == 0 and result.stdout.strip():
|
||||
lines = result.stdout.strip().split("\n")
|
||||
if lines:
|
||||
parts = lines[0].split(", ")
|
||||
if len(parts) >= 3:
|
||||
info.gpu_name = parts[0].strip()
|
||||
info.gpu_memory_gb = float(parts[1]) / 1024 # MB to GB
|
||||
gpu_free = float(parts[2]) / 1024
|
||||
# Use GPU free for VRAM-based selection
|
||||
info.available_memory_gb = max(info.available_memory_gb, gpu_free)
|
||||
info.detection_method = "nvidia-smi"
|
||||
except (FileNotFoundError, subprocess.TimeoutExpired):
|
||||
pass # No NVIDIA GPU
|
||||
|
||||
except Exception as e:
|
||||
logger.warning(f"Linux detection failed: {e}")
|
||||
info.total_memory_gb = 16.0
|
||||
info.available_memory_gb = 12.0
|
||||
info.detection_method = "fallback"
|
||||
|
||||
return info
|
||||
|
||||
|
||||
def _detect_generic(system: str) -> HardwareInfo:
|
||||
"""Fallback detection for unknown systems."""
|
||||
import psutil
|
||||
mem = psutil.virtual_memory()
|
||||
return HardwareInfo(
|
||||
total_memory_gb=mem.total / (1024**3),
|
||||
available_memory_gb=mem.available / (1024**3),
|
||||
cpu_cores=os.cpu_count() or 1,
|
||||
detection_method="psutil",
|
||||
)
|
||||
|
||||
|
||||
# ── KV Cache Memory Estimation ───────────────────────────────────────────────
|
||||
|
||||
def estimate_kv_cache_gb(
|
||||
context_length: int,
|
||||
num_layers: int = 48,
|
||||
num_kv_heads: int = 8,
|
||||
head_dim: int = 128,
|
||||
bits_per_channel: float = 3.5,
|
||||
) -> float:
|
||||
"""Estimate KV cache memory for given parameters.
|
||||
|
||||
Formula: 2 (K+V) × layers × kv_heads × head_dim × context_length × bits/8
|
||||
"""
|
||||
bytes_per_element = bits_per_channel / 8.0
|
||||
total_bytes = 2 * num_layers * num_kv_heads * head_dim * context_length * bytes_per_element
|
||||
return total_bytes / (1024**3)
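
# Worked example with the defaults above: turbo4 (3.5 bits/channel) at 32K
# context is 2 × 48 × 8 × 128 × 32768 × 3.5/8 bytes ≈ 1.31 GB, i.e.
#   estimate_kv_cache_gb(32768)  # -> ~1.31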


def estimate_model_memory_gb(model_size_gb: float, quant_type: str = "q4_k_m") -> float:
    """Estimate model weights memory. Returns loaded size in GB.

    This is a rough estimate — actual depends on exact quant format.
    """
    # Common quant ratios (vs fp16), kept for reference. model_size_gb is
    # already the quantized size, so quant_type and this table are
    # currently unused.
    quant_multipliers = {
        "f16": 1.0,
        "q8_0": 0.5,
        "q6_k": 0.42,
        "q5_k_m": 0.37,
        "q4_k_m": 0.32,
        "q3_k_m": 0.27,
        "q2_k": 0.22,
    }
    return model_size_gb


# ── Selection Logic ───────────────────────────────────────────────────────────

@dataclass
class QuantSelection:
    """Result of quantization level selection."""
    level: QuantLevel
    hardware: HardwareInfo
    reasoning: str
    total_required_gb: float
    available_gb: float
    headroom_gb: float
    env_vars: dict = field(default_factory=dict)
    server_flags: dict = field(default_factory=dict)
    warnings: list = field(default_factory=list)


def select_quant_level(
    model_size_gb: float = 14.0,
    context_length: int = 32768,
    num_layers: int = 48,
    num_kv_heads: int = 8,
    head_dim: int = 128,
    preferred_level: Optional[str] = None,
    force_cpu: bool = False,
) -> QuantSelection:
    """Select the best quantization level for available hardware.

    Args:
        model_size_gb: Size of the model weights in GB
        context_length: Target context length
        num_layers: Number of transformer layers
        num_kv_heads: Number of KV attention heads
        head_dim: Dimension per attention head
        preferred_level: Force a specific level (still checks if it fits)
        force_cpu: If True, ignore GPU memory

    Returns:
        QuantSelection with the chosen level and reasoning
    """
    hw = detect_hardware()

    if force_cpu:
        hw.gpu_memory_gb = None
        hw.gpu_name = None

    # Use the most restrictive memory constraint:
    # - Apple Silicon: unified memory, use total
    # - NVIDIA: use GPU VRAM
    # - CPU-only: use system RAM
    if hw.gpu_memory_gb and hw.gpu_name:
        memory_pool_gb = hw.gpu_memory_gb
        memory_label = f"{hw.gpu_name} {hw.gpu_memory_gb:.0f}GB VRAM"
    elif hw.is_apple_silicon:
        memory_pool_gb = hw.total_memory_gb
        memory_label = f"{hw.chip_name or 'Apple Silicon'} {hw.total_memory_gb:.0f}GB unified"
    else:
        memory_pool_gb = hw.total_memory_gb
        memory_label = f"{hw.cpu_cores}c CPU {hw.total_memory_gb:.0f}GB RAM"

    model_mem = estimate_model_memory_gb(model_size_gb)

    # Try levels from best to most compressed
    chosen = None
    for level in QUANT_LEVELS:
        if preferred_level and level.name != preferred_level:
            continue

        kv_mem = estimate_kv_cache_gb(
            context_length, num_layers, num_kv_heads, head_dim,
            level.bits_per_channel
        )
        total_required = model_mem + kv_mem
        headroom = memory_pool_gb - total_required

        if headroom >= level.min_memory_headroom_gb:
            chosen = level
            break

        if preferred_level and level.name == preferred_level:
            # User forced this level but it doesn't fit
            chosen = level
            break

    if chosen is None:
        # Nothing fits — pick the most aggressive compression
        chosen = QUANT_LEVELS[-1]
        logger.warning(f"No quant level fits in {memory_pool_gb:.1f}GB. Using {chosen.name}.")

    # Calculate final numbers
    kv_mem = estimate_kv_cache_gb(
        context_length, num_layers, num_kv_heads, head_dim,
        chosen.bits_per_channel
    )
    total_required = model_mem + kv_mem
    headroom = memory_pool_gb - total_required

    # Build reasoning
    reasoning_parts = [
        f"{memory_label}:",
        f"{chosen.name} ({chosen.quality_label}, {chosen.bits_per_channel:.1f}b/ch,",
        f"{chosen.compression_ratio:.1f}x compression)",
        f"fits {model_mem:.1f}GB model + {kv_mem:.1f}GB KV cache",
        f"@ {context_length} tokens = {total_required:.1f}GB / {memory_pool_gb:.0f}GB",
        f"({headroom:.1f}GB headroom)"
    ]
    reasoning = " ".join(reasoning_parts)

    # Build environment variables for llama.cpp
    env_vars = {
        "TURBO_LAYER_ADAPTIVE": str(chosen.layer_adaptive),
    }

    # Build server flags
    server_flags = {
        "-ctk": chosen.kv_type,
        "-ctv": chosen.kv_type,
        "-c": str(context_length),
    }

    # Warnings
    warnings = []
    if headroom < 2.0:
        warnings.append(
            f"Low headroom ({headroom:.1f}GB). Consider reducing context length or model size."
        )
    if headroom < 0:
        warnings.append(
            f"OVERCOMMITTED: needs {total_required:.1f}GB but only {memory_pool_gb:.0f}GB available. "
            f"Inference may fail or swap heavily."
        )

    selection = QuantSelection(
        level=chosen,
        hardware=hw,
        reasoning=reasoning,
        total_required_gb=total_required,
        available_gb=memory_pool_gb,
        headroom_gb=headroom,
        env_vars=env_vars,
        server_flags=server_flags,
        warnings=warnings,
    )

    logger.info(f"Quant selection: {reasoning}")
    for w in warnings:
        logger.warning(w)

    return selection


# ── CLI ───────────────────────────────────────────────────────────────────────

def main():
    """CLI entry point for quant level selection."""
    import argparse
    import json

    parser = argparse.ArgumentParser(
        description="Auto-select TurboQuant compression level based on available hardware"
    )
    parser.add_argument("--model-size", type=float, default=14.0,
                        help="Model size in GB (default: 14.0)")
    parser.add_argument("--context", type=int, default=32768,
                        help="Target context length (default: 32768)")
    parser.add_argument("--layers", type=int, default=48,
                        help="Number of transformer layers (default: 48)")
    parser.add_argument("--kv-heads", type=int, default=8,
                        help="Number of KV attention heads (default: 8)")
    parser.add_argument("--head-dim", type=int, default=128,
                        help="Dimension per attention head (default: 128)")
    parser.add_argument("--prefer", type=str, default=None,
                        choices=[lvl.name for lvl in QUANT_LEVELS],
                        help="Prefer a specific quant level")
    parser.add_argument("--force-cpu", action="store_true",
                        help="Ignore GPU, use CPU memory only")
    parser.add_argument("--json", action="store_true",
                        help="JSON output for automation")
    parser.add_argument("--detect-only", action="store_true",
                        help="Only detect hardware, don't select")
    args = parser.parse_args()

    logging.basicConfig(level=logging.INFO, format="%(message)s")

    if args.detect_only:
        hw = detect_hardware()
        if args.json:
            print(json.dumps(hw.__dict__, default=str, indent=2))
        else:
            print(f"Total memory: {hw.total_memory_gb:.1f} GB")
            print(f"Available: {hw.available_memory_gb:.1f} GB")
            if hw.gpu_memory_gb:
                print(f"GPU memory: {hw.gpu_memory_gb:.1f} GB")
            if hw.gpu_name:
                print(f"GPU: {hw.gpu_name}")
            if hw.is_apple_silicon:
                print(f"Chip: {hw.chip_name or 'Apple Silicon'}")
            print(f"CPU cores: {hw.cpu_cores}")
            print(f"Detection: {hw.detection_method}")
        return

    selection = select_quant_level(
        model_size_gb=args.model_size,
        context_length=args.context,
        num_layers=args.layers,
        num_kv_heads=args.kv_heads,
        head_dim=args.head_dim,
        preferred_level=args.prefer,
        force_cpu=args.force_cpu,
    )

    if args.json:
        result = {
            "level": selection.level.name,
            "bits_per_channel": selection.level.bits_per_channel,
            "compression_ratio": selection.level.compression_ratio,
            "quality": selection.level.quality_label,
            "reasoning": selection.reasoning,
            "total_required_gb": round(selection.total_required_gb, 2),
            "available_gb": round(selection.available_gb, 1),
            "headroom_gb": round(selection.headroom_gb, 2),
            "env_vars": selection.env_vars,
            "server_flags": selection.server_flags,
            "warnings": selection.warnings,
            "hardware": {
                "total_memory_gb": round(selection.hardware.total_memory_gb, 1),
                "gpu_name": selection.hardware.gpu_name,
                "is_apple_silicon": selection.hardware.is_apple_silicon,
                "chip_name": selection.hardware.chip_name,
                "cpu_cores": selection.hardware.cpu_cores,
            },
        }
        print(json.dumps(result, indent=2))
    else:
        print(f"Selected: {selection.level.name} ({selection.level.quality_label})")
        print(f"  {selection.reasoning}")
        print()
        print("Environment variables:")
        for k, v in selection.env_vars.items():
            print(f"  export {k}={v}")
        print()
        print("Server flags:")
        for k, v in selection.server_flags.items():
            print(f"  {k} {v}")
        if selection.warnings:
            print()
            for w in selection.warnings:
                print(f"  WARNING: {w}")


if __name__ == "__main__":
    main()

profiles/allegro-cpu-presets.yaml (new file, 164 lines)
@@ -0,0 +1,164 @@
# Allegro VPS Presets — 2 cores, 8GB RAM, CPU-only inference
# Optimized for the Timmy Foundation Allegro server (167.99.126.228)
#
# Hardware constraints:
#   - 2 CPU cores (no GPU)
#   - 8GB RAM total
#   - ~2GB reserved for OS + hermes agent
#   - ~6GB available for model + KV cache
#
# Strategy: GGUF quantization via llama.cpp (CPU-optimized),
# KV cache compression via TurboQuant to maximize context within RAM

hardware:
  hostname: "allegro"
  ip: "167.99.126.228"
  cores: 2
  ram_gb: 8
  gpu: false
  os_reserved_gb: 2
  available_gb: 6
  arch: "x86_64"
  cpu_backend: "llama.cpp"

presets:
  # ─── TIER 1: Conservative (fits comfortably) ──────────────────────
  tiny:
    name: "tiny-2b-q4"
    description: "2B param model, Q4_K_M — leaves headroom for other processes"
    model_size_gb: 1.5
    quantization: "Q4_K_M"
    context_tokens: 4096
    kv_type: "f16"
    estimated_ram_gb: 2.8
    fits_in_allegro: true
    server_flags:
      threads: 2
      context: 4096
      batch: 256
    expected_perf:
      tokens_per_sec: "8-15"
      ttft_s: "1.5-3.0"
    use_case: "Simple Q&A, short completions, triage"
    ollama_model: "qwen2.5:1.5b"
    llama_cpp_model: "qwen2.5-1.5b-instruct-q4_k_m.gguf"

  small:
    name: "small-3b-q4"
    description: "3B param model, Q4_K_M — sweet spot for value on 2 cores"
    model_size_gb: 2.0
    quantization: "Q4_K_M"
    context_tokens: 8192
    kv_type: "turbo2"
    estimated_ram_gb: 3.6
    fits_in_allegro: true
    server_flags:
      threads: 2
      context: 8192
      batch: 512
      ctk: "q4_0"
      ctv: "q4_0"
    expected_perf:
      tokens_per_sec: "5-10"
      ttft_s: "2.0-5.0"
    use_case: "Code generation, tool calling, burn-loop workers"
    ollama_model: "qwen2.5:3b"
    llama_cpp_model: "qwen2.5-3b-instruct-q4_k_m.gguf"

  # ─── TIER 2: Balanced (recommended default) ───────────────────────
  medium:
    name: "medium-7b-q4"
    description: "7B param model, Q4_K_M + TurboQuant — best quality that fits"
    model_size_gb: 4.1
    quantization: "Q4_K_M"
    context_tokens: 8192
    kv_type: "turbo4"
    estimated_ram_gb: 5.2
    fits_in_allegro: true
    server_flags:
      threads: 2
      context: 8192
      batch: 512
      ctk: "q4_0"
      ctv: "q4_0"
      layer_adaptive: 7
    expected_perf:
      tokens_per_sec: "2-5"
      ttft_s: "4.0-8.0"
    use_case: "Complex reasoning, multi-turn conversation, analysis"
    ollama_model: "qwen2.5:7b"
    llama_cpp_model: "qwen2.5-7b-instruct-q4_k_m.gguf"

  medium_long:
    name: "medium-7b-q4-long"
    description: "7B Q4 + aggressive TurboQuant for 32K context"
    model_size_gb: 4.1
    quantization: "Q4_K_M"
    context_tokens: 32768
    kv_type: "turbo4"
    estimated_ram_gb: 5.8
    fits_in_allegro: true
    server_flags:
      threads: 2
      context: 32768
      batch: 256
      ctk: "q3_k"
      ctv: "q3_k"
      layer_adaptive: 7
    expected_perf:
      tokens_per_sec: "1.5-4"
      ttft_s: "6.0-12.0"
    use_case: "Long document analysis, code review, research"
    ollama_model: "qwen2.5:7b"
    llama_cpp_model: "qwen2.5-7b-instruct-q4_k_m.gguf"

  # ─── TIER 3: Pushing limits (may swap) ────────────────────────────
  large:
    name: "large-14b-q3"
    description: "14B param model, Q3_K_M — may page to swap, use with caution"
    model_size_gb: 6.5
    quantization: "Q3_K_M"
    context_tokens: 4096
    kv_type: "turbo4"
    estimated_ram_gb: 7.2
    fits_in_allegro: false
    warning: "Exceeds 6GB limit. Needs swap or will OOM. Use only for batch jobs."
    server_flags:
      threads: 2
      context: 4096
      batch: 256
      ctk: "q3_k"
      ctv: "q3_k"
      layer_adaptive: 7
    expected_perf:
      tokens_per_sec: "0.5-2"
      ttft_s: "10.0-30.0"
    use_case: "Batch processing, overnight jobs (with swap)"
    ollama_model: "qwen2.5:14b"
    llama_cpp_model: "qwen2.5-14b-instruct-q3_k_m.gguf"

# Recommended default for Allegro
recommended_preset: "medium"

# Server startup examples
examples:
  ollama: |
    # Pull and run
    ollama pull qwen2.5:7b
    ollama run qwen2.5:7b

  llama_cpp: |
    # With TurboQuant KV cache
    export TURBO_LAYER_ADAPTIVE=7
    llama-server \
      -m /models/qwen2.5-7b-instruct-q4_k_m.gguf \
      --port 8081 \
      -t 2 \
      -c 8192 \
      -b 512 \
      -ctk q4_0 -ctv q4_0 \
      --host 0.0.0.0

  hermes_profile: |
    # Use with hermes agent
    hermes -p allegro-medium chat

tests/conftest.py (deleted, 85 lines)
@@ -1,85 +0,0 @@
"""Pytest configuration for turboquant."""
|
||||
import os
|
||||
import sys
|
||||
import pytest
|
||||
from pathlib import Path
|
||||
|
||||
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
|
||||
sys.path.insert(0, str(Path(__file__).resolve().parents[1]))
|
||||
|
||||
|
||||
@pytest.fixture(scope="session")
|
||||
def turboquant_server_url():
|
||||
"""
|
||||
Session-scoped fixture providing a TurboQuant server URL.
|
||||
|
||||
If TURBOQUANT_SERVER_URL is set, uses that directly.
|
||||
Otherwise, auto-starts a llama-server with TurboQuant flags.
|
||||
|
||||
Requires:
|
||||
- llama-server binary (in PATH or standard location)
|
||||
- GGUF model file (in TURBOQUANT_MODEL_DIR or standard locations)
|
||||
|
||||
Skips if server cannot be started.
|
||||
"""
|
||||
# If URL already provided, use it
|
||||
if os.environ.get("TURBOQUANT_SERVER_URL"):
|
||||
yield os.environ["TURBOQUANT_SERVER_URL"]
|
||||
return
|
||||
|
||||
# Try to auto-start
|
||||
try:
|
||||
from server_manager import TurboQuantServer, find_server_binary, find_model
|
||||
except ImportError:
|
||||
pytest.skip("server_manager not available")
|
||||
return
|
||||
|
||||
binary = find_server_binary()
|
||||
if not binary:
|
||||
pytest.skip("llama-server binary not found — install llama-cpp-turboquant")
|
||||
return
|
||||
|
||||
model = find_model()
|
||||
if not model:
|
||||
pytest.skip("No GGUF model found — set TURBOQUANT_MODEL_DIR or place model in ~/models")
|
||||
return
|
||||
|
||||
port = int(os.environ.get("TURBOQUANT_TEST_PORT", "18081"))
|
||||
kv_type = os.environ.get("TURBOQUANT_KV_TYPE", "turbo4")
|
||||
ctx_size = int(os.environ.get("TURBOQUANT_CTX_SIZE", "8192"))
|
||||
timeout = float(os.environ.get("TURBOQUANT_STARTUP_TIMEOUT", "60"))
|
||||
|
||||
server = TurboQuantServer(
|
||||
model_path=model,
|
||||
port=port,
|
||||
kv_type=kv_type,
|
||||
context_size=ctx_size,
|
||||
server_binary=binary,
|
||||
timeout=timeout,
|
||||
)
|
||||
|
||||
try:
|
||||
url = server.start()
|
||||
yield url
|
||||
except Exception as e:
|
||||
pytest.skip(f"Could not start TurboQuant server: {e}")
|
||||
finally:
|
||||
server.stop()
|
||||
|
||||
|
||||
@pytest.fixture(scope="session")
|
||||
def turboquant_model_name(turboquant_server_url):
|
||||
"""Get the model name from the running server."""
|
||||
import json
|
||||
import urllib.request
|
||||
|
||||
try:
|
||||
req = urllib.request.Request(f"{turboquant_server_url}/v1/models")
|
||||
resp = urllib.request.urlopen(req, timeout=10)
|
||||
data = json.loads(resp.read())
|
||||
models = data.get("data", [])
|
||||
if models:
|
||||
return models[0].get("id", "unknown")
|
||||
except Exception:
|
||||
pass
|
||||
return "gemma-4"
|
||||

tests/server_manager.py (deleted, 197 lines)
@@ -1,197 +0,0 @@
#!/usr/bin/env python3
"""
TurboQuant Server Manager

Manages llama-server lifecycle for integration tests:
- Start server with TurboQuant flags
- Wait for health check
- Stop server on teardown

Usage:
    from tests.server_manager import TurboQuantServer

    with TurboQuantServer(model_path="/path/to/model.gguf") as server:
        url = server.url  # e.g. http://localhost:8081
        # Run tests against server
"""

import json
import os
import signal
import subprocess
import sys
import time
import urllib.request
import urllib.error
from pathlib import Path
from typing import Optional


class TurboQuantServer:
    """Context manager for llama-server with TurboQuant."""

    def __init__(
        self,
        model_path: str,
        port: int = 8081,
        kv_type: str = "turbo4",
        context_size: int = 32768,
        server_binary: Optional[str] = None,
        timeout: float = 60.0,
        host: str = "127.0.0.1",
    ):
        self.model_path = model_path
        self.port = port
        self.kv_type = kv_type
        self.context_size = context_size
        self.timeout = timeout
        self.host = host

        # Find server binary
        if server_binary:
            self.server_binary = server_binary
        else:
            # Try common locations
            candidates = [
                Path.home() / "llama-cpp-turboquant" / "build" / "bin" / "llama-server",
                Path("/opt/llama-cpp-turboquant/build/bin/llama-server"),
                Path("llama-server"),  # PATH
            ]
            self.server_binary = None
            for c in candidates:
                if c.exists() or c.name == "llama-server":
                    try:
                        subprocess.run([str(c), "--help"], capture_output=True, timeout=5)
                        self.server_binary = str(c)
                        break
                    except (FileNotFoundError, subprocess.TimeoutExpired):
                        continue

        self.process: Optional[subprocess.Popen] = None

    @property
    def url(self) -> str:
        return f"http://{self.host}:{self.port}"

    def _build_command(self) -> list:
        cmd = [
            self.server_binary,
            "-m", self.model_path,
            "--port", str(self.port),
            "--host", self.host,
            "-ctk", self.kv_type,
            "-ctv", self.kv_type,
            "-c", str(self.context_size),
        ]
        return cmd
|
||||
def _check_health(self) -> bool:
|
||||
try:
|
||||
req = urllib.request.Request(f"{self.url}/v1/models")
|
||||
resp = urllib.request.urlopen(req, timeout=5)
|
||||
data = json.loads(resp.read())
|
||||
return "data" in data and len(data.get("data", [])) > 0
|
||||
except Exception:
|
||||
return False
|
||||
|
||||
def start(self) -> str:
|
||||
"""Start the server and wait for it to be healthy. Returns the server URL."""
|
||||
if not self.server_binary:
|
||||
raise RuntimeError(
|
||||
"llama-server binary not found. Set server_binary or install to standard location."
|
||||
)
|
||||
|
||||
if not Path(self.model_path).exists():
|
||||
raise FileNotFoundError(f"Model not found: {self.model_path}")
|
||||
|
||||
cmd = self._build_command()
|
||||
|
||||
# Set TurboQuant env
|
||||
env = os.environ.copy()
|
||||
env["TURBO_LAYER_ADAPTIVE"] = "7"
|
||||
|
||||
self.process = subprocess.Popen(
|
||||
cmd,
|
||||
stdout=subprocess.PIPE,
|
||||
stderr=subprocess.PIPE,
|
||||
env=env,
|
||||
)
|
||||
|
||||
# Wait for health
|
||||
start = time.time()
|
||||
while time.time() - start < self.timeout:
|
||||
if self.process.poll() is not None:
|
||||
stderr = self.process.stderr.read().decode() if self.process.stderr else ""
|
||||
raise RuntimeError(f"Server exited early (code {self.process.returncode}): {stderr[:500]}")
|
||||
|
||||
if self._check_health():
|
||||
return self.url
|
||||
|
||||
time.sleep(1.0)
|
||||
|
||||
self.stop()
|
||||
raise TimeoutError(f"Server did not become healthy within {self.timeout}s")
|
||||
|
||||
def stop(self):
|
||||
"""Stop the server."""
|
||||
if self.process:
|
||||
try:
|
||||
self.process.send_signal(signal.SIGTERM)
|
||||
self.process.wait(timeout=10)
|
||||
except subprocess.TimeoutExpired:
|
||||
self.process.kill()
|
||||
self.process.wait(timeout=5)
|
||||
except Exception:
|
||||
pass
|
||||
self.process = None
|
||||
|
||||
def __enter__(self) -> "TurboQuantServer":
|
||||
self.start()
|
||||
return self
|
||||
|
||||
def __exit__(self, *args):
|
||||
self.stop()
|
||||
|
||||
|
||||
def find_server_binary() -> Optional[str]:
|
||||
"""Find llama-server binary in common locations."""
|
||||
candidates = [
|
||||
Path.home() / "llama-cpp-turboquant" / "build" / "bin" / "llama-server",
|
||||
Path("/opt/llama-cpp-turboquant/build/bin/llama-server"),
|
||||
]
|
||||
for c in candidates:
|
||||
if c.exists():
|
||||
return str(c)
|
||||
|
||||
# Try PATH
|
||||
try:
|
||||
result = subprocess.run(["which", "llama-server"], capture_output=True, text=True)
|
||||
if result.returncode == 0:
|
||||
return result.stdout.strip()
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
return None
|
||||
|
||||
|
||||
def find_model(model_dir: Optional[str] = None) -> Optional[str]:
|
||||
"""Find a GGUF model file."""
|
||||
search_dirs = [
|
||||
model_dir,
|
||||
os.environ.get("TURBOQUANT_MODEL_DIR"),
|
||||
str(Path.home() / "models"),
|
||||
"/opt/models",
|
||||
"/tmp/models",
|
||||
]
|
||||
|
||||
for d in search_dirs:
|
||||
if not d:
|
||||
continue
|
||||
p = Path(d)
|
||||
if p.is_file() and p.suffix == ".gguf":
|
||||
return str(p)
|
||||
if p.is_dir():
|
||||
for f in sorted(p.rglob("*.gguf")):
|
||||
return str(f)
|
||||
|
||||
return None
|
||||
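Beyond pytest, the helpers compose into a standalone launcher. A minimal sketch, assuming a GGUF model and the llama-server binary are discoverable on the machine:

```python
# Sketch: ad-hoc server launch reusing the module's discovery helpers.
from tests.server_manager import TurboQuantServer, find_model, find_server_binary

model = find_model()
binary = find_server_binary()
if model and binary:
    # The context manager handles start, health-wait, and stop; the port is arbitrary.
    with TurboQuantServer(model_path=model, server_binary=binary, port=18082) as server:
        print(f"Serving {model} with {server.kv_type} KV cache at {server.url}")
else:
    print("No model or binary found; nothing to launch.")
```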
141 tests/test_allegro_benchmarks.py Normal file
@@ -0,0 +1,141 @@
"""Tests for Allegro VPS benchmark runner and preset configuration."""

import json
import os
import pathlib
import sys

import pytest

ROOT = pathlib.Path(__file__).resolve().parents[1]
PRESETS_FILE = ROOT / "profiles" / "allegro-cpu-presets.yaml"
PROMPTS_FILE = ROOT / "benchmarks" / "prompts.json"

sys.path.insert(0, str(ROOT / "benchmarks"))


# ---------------------------------------------------------------------------
# Preset config validation
# ---------------------------------------------------------------------------

class TestPresetConfig:
    """Validate allegro-cpu-presets.yaml structure."""

    @classmethod
    def setup_class(cls):
        # pytest xunit-style hook; the original setUpClass spelling is only
        # honored on unittest.TestCase subclasses, so it would never run here.
        import yaml
        cls.config = yaml.safe_load(PRESETS_FILE.read_text())

    def test_config_has_hardware(self):
        assert "hardware" in self.config
        hw = self.config["hardware"]
        assert hw["cores"] == 2
        assert hw["ram_gb"] == 8
        assert hw["gpu"] is False

    def test_config_has_presets(self):
        assert "presets" in self.config
        assert len(self.config["presets"]) >= 3

    def test_each_preset_has_required_fields(self):
        for name, preset in self.config["presets"].items():
            assert "name" in preset, f"Preset {name} missing 'name'"
            assert "description" in preset, f"Preset {name} missing 'description'"
            assert "model_size_gb" in preset, f"Preset {name} missing 'model_size_gb'"
            assert "quantization" in preset, f"Preset {name} missing 'quantization'"
            assert "context_tokens" in preset, f"Preset {name} missing 'context_tokens'"
            assert "kv_type" in preset, f"Preset {name} missing 'kv_type'"
            assert "estimated_ram_gb" in preset, f"Preset {name} missing 'estimated_ram_gb'"
            assert "fits_in_allegro" in preset, f"Preset {name} missing 'fits_in_allegro'"
            assert "expected_perf" in preset, f"Preset {name} missing 'expected_perf'"
            assert "server_flags" in preset, f"Preset {name} missing 'server_flags'"

    def test_tiny_fits_in_allegro(self):
        tiny = self.config["presets"]["tiny"]
        assert tiny["fits_in_allegro"] is True
        assert tiny["estimated_ram_gb"] <= 6.0

    def test_small_fits_in_allegro(self):
        small = self.config["presets"]["small"]
        assert small["fits_in_allegro"] is True
        assert small["estimated_ram_gb"] <= 6.0

    def test_medium_fits_in_allegro(self):
        medium = self.config["presets"]["medium"]
        assert medium["fits_in_allegro"] is True
        assert medium["estimated_ram_gb"] <= 6.0

    def test_large_does_not_fit(self):
        large = self.config["presets"]["large"]
        assert large["fits_in_allegro"] is False
        assert large["estimated_ram_gb"] > 6.0

    def test_recommended_preset_exists(self):
        rec = self.config.get("recommended_preset")
        assert rec is not None
        assert rec in self.config["presets"]

    def test_server_flags_have_threads(self):
        for name, preset in self.config["presets"].items():
            flags = preset.get("server_flags", {})
            assert "threads" in flags, f"Preset {name} missing threads in server_flags"
            assert flags["threads"] == 2, f"Preset {name} should use 2 threads"

    def test_context_tokens_reasonable(self):
        for name, preset in self.config["presets"].items():
            ctx = preset["context_tokens"]
            assert ctx >= 2048, f"Preset {name} context too small: {ctx}"
            assert ctx <= 131072, f"Preset {name} context too large: {ctx}"

    def test_kv_types_valid(self):
        valid_types = {"f16", "q4_0", "q4_1", "q5_0", "q5_1", "q8_0",
                       "turbo2", "turbo3", "turbo4", "q3_k", "q4_k", "q5_k"}
        for name, preset in self.config["presets"].items():
            kv = preset["kv_type"]
            assert kv in valid_types, f"Preset {name} has invalid kv_type: {kv}"


# ---------------------------------------------------------------------------
# Benchmark prompts validation
# ---------------------------------------------------------------------------

class TestBenchmarkPrompts:
    def test_prompts_file_exists(self):
        assert PROMPTS_FILE.exists()

    def test_prompts_is_list(self):
        prompts = json.loads(PROMPTS_FILE.read_text())
        assert isinstance(prompts, list)
        assert len(prompts) >= 5

    def test_each_prompt_has_required_fields(self):
        prompts = json.loads(PROMPTS_FILE.read_text())
        for p in prompts:
            assert "id" in p or "category" in p
            assert "prompt" in p
            assert len(p["prompt"]) > 10


# ---------------------------------------------------------------------------
# Hardware detection (unit tests)
# ---------------------------------------------------------------------------

class TestHardwareDetection:
    def test_detect_hardware_returns_dict(self):
        from run_allegro_benchmarks import detect_hardware
        hw = detect_hardware()
        assert isinstance(hw, dict)
        assert "cores" in hw
        assert "ram_gb" in hw
        assert "gpu" in hw

    def test_cores_positive(self):
        from run_allegro_benchmarks import detect_hardware
        hw = detect_hardware()
        assert hw["cores"] > 0

    def test_memory_usage_returns_float(self):
        from run_allegro_benchmarks import get_memory_usage_gb
        mem = get_memory_usage_gb()
        assert isinstance(mem, (int, float))
        assert mem >= 0
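Taken together, the assertions above pin down the shape of `allegro-cpu-presets.yaml`. A minimal structure that would satisfy `TestPresetConfig`, written as the Python object `yaml.safe_load` would return (key names come from the assertions; every value is illustrative, not copied from the real file):

```python
# Hypothetical preset config satisfying TestPresetConfig (illustrative values).
config = {
    "hardware": {"cores": 2, "ram_gb": 8, "gpu": False},
    "recommended_preset": "medium",
    "presets": {
        "tiny": {
            "name": "tiny-cpu",
            "description": "Short completions and triage",
            "model_size_gb": 1.6,
            "quantization": "q4_0",
            "context_tokens": 4096,
            "kv_type": "f16",
            "estimated_ram_gb": 3.0,
            "fits_in_allegro": True,
            "expected_perf": "interactive",
            "server_flags": {"threads": 2},
        },
        # "small", "medium", and "large" follow the same shape; "large" must
        # set fits_in_allegro: False with estimated_ram_gb > 6.0 to pass
        # test_large_does_not_fit.
    },
}
```

Note that `test_config_has_presets` additionally requires at least three presets, so a passing file needs the commented-out entries filled in.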
@@ -1,21 +0,0 @@
#!/usr/bin/env python3
"""Tests for hardware_optimizer compatibility shim."""

import os
import sys

sys.path.insert(0, os.path.dirname(os.path.dirname(__file__)))

from evolution import hardware_optimizer, quant_selector


def test_hardware_optimizer_reexports_quant_selector_api():
    assert hardware_optimizer.select_quant_level is quant_selector.select_quant_level
    assert hardware_optimizer.detect_hardware is quant_selector.detect_hardware
    assert hardware_optimizer.HardwareInfo is quant_selector.HardwareInfo
    assert hardware_optimizer.QuantSelection is quant_selector.QuantSelection


def test_hardware_optimizer_exports_quant_level_definitions():
    assert hardware_optimizer.QUANT_LEVELS is quant_selector.QUANT_LEVELS
    assert hardware_optimizer.QuantLevel is quant_selector.QuantLevel
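These identity checks (`is`, not `==`) mean the shim must re-export the exact objects. A module of roughly the following shape would satisfy both tests; a sketch, assuming the shim exists purely for backwards compatibility:

```python
# evolution/hardware_optimizer.py: hypothetical re-export shim matching the tests.
from evolution.quant_selector import (  # noqa: F401
    QUANT_LEVELS,
    HardwareInfo,
    QuantLevel,
    QuantSelection,
    detect_hardware,
    select_quant_level,
)
```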
@@ -1,74 +0,0 @@
import textwrap
from pathlib import Path

from check_markdown_links import find_broken_links


def write(path: Path, content: str) -> None:
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(textwrap.dedent(content).lstrip(), encoding="utf-8")


def test_reports_missing_local_markdown_target_with_line_number(tmp_path: Path):
    write(
        tmp_path / "README.md",
        """
        # Repo

        See [status](docs/status.md).
        """,
    )

    broken = find_broken_links(tmp_path)

    assert len(broken) == 1
    assert broken[0]["source"].endswith("README.md")
    assert broken[0]["line"] == 3
    assert broken[0]["target"] == "docs/status.md"


def test_allows_existing_relative_targets(tmp_path: Path):
    write(tmp_path / "docs" / "status.md", "# Status\n")
    write(
        tmp_path / "README.md",
        """
        # Repo

        See [status](docs/status.md).
        """,
    )

    assert find_broken_links(tmp_path) == []


def test_ignores_external_anchor_mailto_and_tel_links(tmp_path: Path):
    write(
        tmp_path / "README.md",
        """
        [external](https://example.com)
        [anchor](#section)
        [mail](mailto:test@example.com)
        [call](tel:988)
        """,
    )

    assert find_broken_links(tmp_path) == []


def test_ignores_links_inside_fenced_code_blocks(tmp_path: Path):
    write(
        tmp_path / "README.md",
        """
        ```md
        [broken](docs/missing.md)
        ```
        """,
    )

    assert find_broken_links(tmp_path) == []


def test_skips_build_directories(tmp_path: Path):
    write(tmp_path / "build" / "README.md", "[broken](missing.md)\n")

    assert find_broken_links(tmp_path) == []
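The deleted suite fully specifies the checker's contract: report source file, 1-based line, and target for missing relative links; ignore external, anchor, mailto, and tel schemes; ignore fenced code blocks; skip build directories. A sketch consistent with that contract (this illustrates the behavior the tests expect, not the repository's actual `check_markdown_links.py`):

```python
# Contract sketch for find_broken_links; the real implementation may differ.
import re
from pathlib import Path

LINK_RE = re.compile(r"\[[^\]]*\]\(([^)]+)\)")
SKIP_PREFIXES = ("http://", "https://", "#", "mailto:", "tel:")
SKIP_DIRS = {"build"}  # output dirs; the real tool may skip more


def find_broken_links(root: Path) -> list:
    broken = []
    for md in sorted(root.rglob("*.md")):
        if SKIP_DIRS.intersection(md.relative_to(root).parts):
            continue
        in_fence = False
        for lineno, line in enumerate(md.read_text(encoding="utf-8").splitlines(), 1):
            if line.lstrip().startswith("`" * 3):
                in_fence = not in_fence  # toggle on fence open/close
                continue
            if in_fence:
                continue
            for match in LINK_RE.finditer(line):
                target = match.group(1)
                if target.startswith(SKIP_PREFIXES):
                    continue
                # Resolve relative to the file containing the link.
                if not (md.parent / target.split("#")[0]).exists():
                    broken.append({"source": str(md), "line": lineno, "target": target})
    return broken
```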
@@ -1,189 +0,0 @@
#!/usr/bin/env python3
"""Tests for quant_selector.py"""

import sys
import os
import pytest
from unittest.mock import patch, MagicMock

sys.path.insert(0, os.path.dirname(os.path.dirname(__file__)))
from evolution.quant_selector import (
    QuantLevel,
    HardwareInfo,
    QUANT_LEVELS,
    detect_hardware,
    estimate_kv_cache_gb,
    estimate_model_memory_gb,
    select_quant_level,
)


class TestQuantLevels:
    def test_levels_ordered_by_quality(self):
        """TurboQuant levels should be ordered from best quality to most aggressive.

        The quality ordering invariant for TurboQuant levels is monotonically
        increasing compression_ratio (more aggressive = more compression).
        Non-TurboQuant fallbacks (e.g. q4_0) are placed after all TurboQuant
        levels and may have any compression ratio — they exist as safe defaults,
        not as part of the quality progression.
        """
        turbo_quant_names = {"turbo4", "turbo3", "turbo2"}
        turbo_levels = [l for l in QUANT_LEVELS if l.name in turbo_quant_names]
        for i in range(len(turbo_levels) - 1):
            assert turbo_levels[i].compression_ratio <= turbo_levels[i + 1].compression_ratio, (
                f"TurboQuant {turbo_levels[i].name} (compression={turbo_levels[i].compression_ratio}x) "
                f"should have <= compression than {turbo_levels[i+1].name} "
                f"(compression={turbo_levels[i+1].compression_ratio}x)"
            )

    def test_fallback_quant_is_last(self):
        """Non-TurboQuant fallbacks (e.g. q4_0) should be at the end of the list."""
        turbo_quant_names = {"turbo4", "turbo3", "turbo2"}
        found_fallback = False
        for level in QUANT_LEVELS:
            if level.name not in turbo_quant_names:
                found_fallback = True
            elif found_fallback:
                pytest.fail(
                    f"TurboQuant level '{level.name}' appears after a fallback level. "
                    f"All TurboQuant levels must precede fallbacks."
                )

    def test_all_levels_have_required_fields(self):
        for level in QUANT_LEVELS:
            assert level.name
            assert level.bits_per_channel > 0
            assert level.compression_ratio > 1
            assert level.quality_label
            assert level.layer_adaptive >= 0
            assert level.kv_type


class TestKVEstimate:
    def test_basic_estimate(self):
        # 48 layers, 8 heads, 128 dim, 32K context, 3.5 bits
        kv_gb = estimate_kv_cache_gb(32768, 48, 8, 128, 3.5)
        assert kv_gb > 0
        assert kv_gb < 10  # Should be reasonable

    def test_longer_context_larger(self):
        kv_32k = estimate_kv_cache_gb(32768, 48, 8, 128, 3.5)
        kv_128k = estimate_kv_cache_gb(131072, 48, 8, 128, 3.5)
        assert kv_128k > kv_32k

    def test_higher_bits_larger(self):
        kv_4b = estimate_kv_cache_gb(32768, 48, 8, 128, 4.0)
        kv_2b = estimate_kv_cache_gb(32768, 48, 8, 128, 2.0)
        assert kv_4b > kv_2b


class TestHardwareDetection:
    def test_detect_returns_info(self):
        hw = detect_hardware()
        assert hw.total_memory_gb > 0
        assert hw.available_memory_gb > 0
        assert hw.detection_method

    @patch("evolution.quant_selector.platform.system", return_value="Linux")
    @patch("builtins.open", create=True)
    def test_linux_detection(self, mock_open, mock_system):
        mock_open.return_value.__enter__().read.return_value = (
            "MemTotal: 32000000 kB\n"
            "MemAvailable: 24000000 kB\n"
        )
        hw = _detect_linux_fallback()
        assert hw.total_memory_gb > 20


def _detect_linux_fallback():
    """Helper to test Linux detection with mocked /proc/meminfo."""
    from evolution.quant_selector import _detect_linux
    return _detect_linux()


class TestSelection:
    def test_selects_turbo4_for_large_memory(self):
        """With plenty of memory, should pick turbo4 (best quality)."""
        with patch("evolution.quant_selector.detect_hardware") as mock_hw:
            mock_hw.return_value = HardwareInfo(
                total_memory_gb=64,
                available_memory_gb=48,
                gpu_memory_gb=64,
                gpu_name="Test GPU",
                cpu_cores=16,
                detection_method="mock",
            )
            sel = select_quant_level(model_size_gb=14.0, context_length=32768)
            assert sel.level.name == "turbo4"
            assert sel.headroom_gb > 0

    def test_selects_smaller_for_tight_memory(self):
        """With tight memory, should pick a smaller quant."""
        with patch("evolution.quant_selector.detect_hardware") as mock_hw:
            mock_hw.return_value = HardwareInfo(
                total_memory_gb=16,
                available_memory_gb=12,
                gpu_memory_gb=16,
                gpu_name="Test GPU",
                cpu_cores=8,
                detection_method="mock",
            )
            sel = select_quant_level(model_size_gb=14.0, context_length=131072)
            # Should pick a smaller quant for 128K context on 16GB
            assert sel.level.bits_per_channel <= 4.0

    def test_preferred_level(self):
        """User can force a specific level."""
        with patch("evolution.quant_selector.detect_hardware") as mock_hw:
            mock_hw.return_value = HardwareInfo(
                total_memory_gb=64,
                available_memory_gb=48,
                cpu_cores=16,
                detection_method="mock",
            )
            sel = select_quant_level(
                model_size_gb=14.0, context_length=32768,
                preferred_level="turbo2"
            )
            assert sel.level.name == "turbo2"

    def test_env_vars_populated(self):
        with patch("evolution.quant_selector.detect_hardware") as mock_hw:
            mock_hw.return_value = HardwareInfo(
                total_memory_gb=64,
                available_memory_gb=48,
                cpu_cores=16,
                detection_method="mock",
            )
            sel = select_quant_level(model_size_gb=14.0, context_length=32768)
            assert "TURBO_LAYER_ADAPTIVE" in sel.env_vars
            assert "-ctk" in sel.server_flags
            assert "-ctv" in sel.server_flags

    def test_warnings_on_low_headroom(self):
        with patch("evolution.quant_selector.detect_hardware") as mock_hw:
            mock_hw.return_value = HardwareInfo(
                total_memory_gb=18,
                available_memory_gb=14,
                gpu_memory_gb=18,
                gpu_name="Test GPU",
                cpu_cores=8,
                detection_method="mock",
            )
            sel = select_quant_level(model_size_gb=16.0, context_length=65536)
            assert len(sel.warnings) > 0

    def test_reasoning_contains_key_info(self):
        with patch("evolution.quant_selector.detect_hardware") as mock_hw:
            mock_hw.return_value = HardwareInfo(
                total_memory_gb=32,
                available_memory_gb=24,
                is_apple_silicon=True,
                chip_name="M4 Max",
                cpu_cores=16,
                detection_method="mock",
            )
            sel = select_quant_level(model_size_gb=14.0, context_length=32768)
            assert "turbo4" in sel.reasoning
            assert "M4 Max" in sel.reasoning or "32GB" in sel.reasoning
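For intuition, the KV-cache tests are consistent with the standard memory formula: per token, each of K and V stores layers x kv_heads x head_dim channels at bits_per_channel bits. A sketch of that arithmetic (the real `estimate_kv_cache_gb` may add overhead terms or use GiB rather than GB):

```python
# Standard KV-cache sizing consistent with TestKVEstimate; an assumption about
# how estimate_kv_cache_gb computes, not its actual source.
def estimate_kv_cache_gb_sketch(context_tokens: int, layers: int, kv_heads: int,
                                head_dim: int, bits_per_channel: float) -> float:
    # Factor of 2 covers the separate K and V caches; bits/8 converts to bytes.
    bytes_per_token = layers * kv_heads * head_dim * 2 * (bits_per_channel / 8)
    return context_tokens * bytes_per_token / 1e9


# 32K context, 48 layers, 8 KV heads, head dim 128 at 3.5 bits -> ~1.41 GB,
# inside the 0 < kv_gb < 10 bound asserted in test_basic_estimate.
print(round(estimate_kv_cache_gb_sketch(32768, 48, 8, 128, 3.5), 2))
```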
@@ -1,83 +0,0 @@
"""Tests for smoke workflow CI configuration.

Validates that the GitHub Actions / Gitea Actions smoke workflow
actually runs the standalone CMake build and test suite, not just
parse checks.
"""

from pathlib import Path

import yaml

import pytest


WORKFLOW_PATH = Path(".gitea/workflows/smoke.yml")


@pytest.fixture
def workflow():
    """Load and parse the smoke workflow YAML."""
    content = WORKFLOW_PATH.read_text(encoding="utf-8")
    return yaml.safe_load(content)


def test_smoke_workflow_exists():
    """Smoke workflow file must exist."""
    assert WORKFLOW_PATH.exists(), f"Missing {WORKFLOW_PATH}"


def test_smoke_has_cmake_configure_step(workflow):
    """Smoke workflow must configure the CMake project with tests enabled."""
    steps = workflow["jobs"]["smoke"]["steps"]
    cmake_found = False
    for step in steps:
        run = step.get("run", "")
        if "cmake -S . -B build" in run and "TURBOQUANT_BUILD_TESTS=ON" in run:
            cmake_found = True
            break
    assert cmake_found, (
        "Smoke workflow missing cmake configure step with TURBOQUANT_BUILD_TESTS=ON"
    )


def test_smoke_has_cmake_build_step(workflow):
    """Smoke workflow must build the CMake project."""
    steps = workflow["jobs"]["smoke"]["steps"]
    build_found = False
    for step in steps:
        run = step.get("run", "")
        if "cmake --build build" in run:
            build_found = True
            break
    assert build_found, "Smoke workflow missing cmake --build step"


def test_smoke_has_ctest_step(workflow):
    """Smoke workflow must run ctest."""
    steps = workflow["jobs"]["smoke"]["steps"]
    ctest_found = False
    for step in steps:
        run = step.get("run", "")
        if "ctest" in run and "output-on-failure" in run:
            ctest_found = True
            break
    assert ctest_found, "Smoke workflow missing ctest --output-on-failure step"


def test_smoke_build_before_secret_scan(workflow):
    """Build and test steps must run before secret scan (fail fast on build errors)."""
    steps = workflow["jobs"]["smoke"]["steps"]
    names = [s.get("name", "") for s in steps]
    build_idx = None
    scan_idx = None
    for i, name in enumerate(names):
        if "cmake" in name.lower() or "build" in name.lower():
            if build_idx is None:
                build_idx = i
        if "secret" in name.lower():
            scan_idx = i
    if build_idx is not None and scan_idx is not None:
        assert build_idx < scan_idx, (
            "Build step should run before secret scan to fail fast on broken code"
        )
@@ -1,338 +0,0 @@
"""
Integration test: turboquant compressed model passes hermes tool calls (issue #82).

Validates that a TurboQuant-compressed model can:
1. Parse hermes tool schemas correctly
2. Format tool calls in OpenAI-compatible format
3. Pass through the hermes agent conversation loop

Tests are structured as contract tests -- they validate the schema/format
compatibility without requiring a running model server. The live inference
test is skipped by default (requires llama-server with TurboQuant model).

Usage:
    pytest tests/test_tool_call_integration.py -v
    pytest tests/test_tool_call_integration.py -v -k live  # run live test if server available
"""
import json
import os
import pathlib
import re
import unittest

import pytest

ROOT = pathlib.Path(__file__).resolve().parents[1]
PROFILE_PATH = ROOT / "profiles" / "hermes-profile-gemma4-turboquant.yaml"
BENCHMARKS_DIR = ROOT / "benchmarks"


class TestHermesProfileSchema(unittest.TestCase):
    """Validate the hermes profile YAML has required fields for tool calling."""

    @classmethod
    def setUpClass(cls):
        import yaml
        cls.profile = yaml.safe_load(PROFILE_PATH.read_text())

    def test_profile_has_providers(self):
        assert "providers" in self.profile, "Profile must define providers"
        assert "primary" in self.profile["providers"], "Must have primary provider"

    def test_primary_provider_has_endpoint(self):
        primary = self.profile["providers"]["primary"]
        assert "endpoint" in primary, "Primary provider must have endpoint"
        assert primary["endpoint"].startswith("http"), "Endpoint must be HTTP(S) URL"

    def test_primary_provider_has_api_path(self):
        primary = self.profile["providers"]["primary"]
        assert "api_path" in primary, "Primary provider must have api_path"
        assert "/chat/completions" in primary["api_path"], (
            "api_path should be OpenAI-compatible /chat/completions"
        )

    def test_turboquant_settings_present(self):
        primary = self.profile["providers"]["primary"]
        assert "turboquant" in primary, "Must have turboquant config section"
        tq = primary["turboquant"]
        assert tq.get("enabled") is True, "TurboQuant must be enabled"
        assert tq.get("kv_type") in ("turbo2", "turbo3", "turbo4"), (
            "kv_type must be turbo2, turbo3, or turbo4"
        )

    def test_context_window_configured(self):
        primary = self.profile["providers"]["primary"]
        assert "context" in primary, "Must have context config"
        ctx = primary["context"]
        assert ctx.get("max_tokens", 0) >= 8192, (
            "max_tokens should be >= 8192 for TurboQuant value proposition"
        )


class TestToolSchemaCompatibility(unittest.TestCase):
    """Verify hermes tool schemas serialize to valid JSON for OpenAI tool_calls."""

    SAMPLE_TOOL_SCHEMAS = [
        {
            "type": "function",
            "function": {
                "name": "read_file",
                "description": "Read a text file with line numbers.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "path": {"type": "string", "description": "File path"},
                        "offset": {"type": "integer", "default": 1},
                        "limit": {"type": "integer", "default": 500},
                    },
                    "required": ["path"],
                },
            },
        },
        {
            "type": "function",
            "function": {
                "name": "execute_code",
                "description": "Run a Python script.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "code": {"type": "string", "description": "Python code"},
                    },
                    "required": ["code"],
                },
            },
        },
        {
            "type": "function",
            "function": {
                "name": "web_search",
                "description": "Search the web.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "query": {"type": "string"},
                        "max_results": {"type": "integer", "default": 5},
                    },
                    "required": ["query"],
                },
            },
        },
    ]

    def test_tool_schemas_serialize_to_json(self):
        """Tool schemas must serialize without errors."""
        serialized = json.dumps(self.SAMPLE_TOOL_SCHEMAS)
        assert len(serialized) > 0
        parsed = json.loads(serialized)
        assert len(parsed) == len(self.SAMPLE_TOOL_SCHEMAS)

    def test_tool_schemas_have_required_openai_fields(self):
        """Each tool schema must have the fields OpenAI expects."""
        for tool in self.SAMPLE_TOOL_SCHEMAS:
            assert tool["type"] == "function", "Tool type must be 'function'"
            fn = tool["function"]
            assert "name" in fn, "Function must have name"
            assert "description" in fn, "Function must have description"
            assert "parameters" in fn, "Function must have parameters"
            params = fn["parameters"]
            assert params["type"] == "object", "Parameters type must be 'object'"
            assert "properties" in params, "Parameters must have properties"

    def test_tool_call_response_format(self):
        """Verify tool_call response matches OpenAI format."""
        tool_call = {
            "id": "call_abc123",
            "type": "function",
            "function": {
                "name": "read_file",
                "arguments": json.dumps({"path": "/tmp/test.txt"}),
            },
        }
        args = json.loads(tool_call["function"]["arguments"])
        assert args["path"] == "/tmp/test.txt"
        assert tool_call["function"]["name"] in [
            t["function"]["name"] for t in self.SAMPLE_TOOL_SCHEMAS
        ]

    def test_tool_names_are_valid_identifiers(self):
        """Tool names must be valid Python identifiers for hermes dispatch."""
        for tool in self.SAMPLE_TOOL_SCHEMAS:
            name = tool["function"]["name"]
            assert re.match(r"^[a-zA-Z_][a-zA-Z0-9_]*$", name), (
                f"Tool name '{name}' is not a valid identifier"
            )


class TestTurboquantServerConfig(unittest.TestCase):
    """Validate server startup configuration matches hermes profile."""

    def test_server_command_has_turboquant_flags(self):
        """The server command in the profile must include -ctk/-ctv flags."""
        profile_text = PROFILE_PATH.read_text()
        assert "-ctk" in profile_text, "Profile server command must include -ctk flag"
        assert "-ctv" in profile_text, "Profile server command must include -ctv flag"

    def test_server_command_has_context_flag(self):
        """Server command must set context size."""
        profile_text = PROFILE_PATH.read_text()
        assert re.search(r"-c\s+\d+", profile_text), (
            "Server command must include -c <context_size> flag"
        )

    def test_layer_adaptive_env_var(self):
        """Profile must set TURBO_LAYER_ADAPTIVE env var."""
        profile_text = PROFILE_PATH.read_text()
        assert "TURBO_LAYER_ADAPTIVE" in profile_text, (
            "Profile must configure TURBO_LAYER_ADAPTIVE"
        )


class TestBenchmarkData(unittest.TestCase):
    """Validate benchmark test prompts include tool-call test cases."""

    @classmethod
    def setUpClass(cls):
        prompts_path = BENCHMARKS_DIR / "test_prompts.json"
        cls.prompts = json.loads(prompts_path.read_text())

    def test_has_tool_call_test_prompt(self):
        """Benchmark prompts must include a tool-call format test."""
        categories = [p.get("category") for p in self.prompts]
        assert "tool_call_format" in categories, (
            "Benchmark must include a tool_call_format test case"
        )

    def test_tool_call_prompt_expects_json(self):
        """Tool call test prompt must expect JSON in the response."""
        tool_prompt = next(
            p for p in self.prompts if p.get("category") == "tool_call_format"
        )
        pattern = tool_prompt.get("expected_pattern", "")
        assert "json" in pattern.lower() or "\\{" in pattern, (
            "Tool call prompt must expect JSON-formatted response"
        )


@pytest.mark.skipif(
    not os.environ.get("TURBOQUANT_SERVER_URL"),
    reason="No TurboQuant server available (set TURBOQUANT_SERVER_URL to run)",
)
class TestLiveToolCallIntegration:
    """Live integration test -- requires running llama-server with TurboQuant."""

    def test_server_health(self):
        """Server must respond to /v1/models endpoint."""
        import requests
        url = os.environ["TURBOQUANT_SERVER_URL"]
        resp = requests.get(f"{url}/v1/models", timeout=10)
        assert resp.status_code == 200
        data = resp.json()
        assert "data" in data
        assert len(data["data"]) > 0

    def test_tool_call_completion(self):
        """Model must return a valid tool_call for a read_file prompt."""
        import requests
        url = os.environ["TURBOQUANT_SERVER_URL"]
        tools = [
            {
                "type": "function",
                "function": {
                    "name": "read_file",
                    "description": "Read a file",
                    "parameters": {
                        "type": "object",
                        "properties": {"path": {"type": "string"}},
                        "required": ["path"],
                    },
                },
            }
        ]
        resp = requests.post(
            f"{url}/v1/chat/completions",
            json={
                "model": "gemma-4",
                "messages": [
                    {"role": "user", "content": "Read the file at /tmp/test.txt"}
                ],
                "tools": tools,
                "tool_choice": "auto",
            },
            timeout=120,
        )
        assert resp.status_code == 200
        data = resp.json()
        choice = data["choices"][0]
        msg = choice["message"]
        if "tool_calls" in msg and msg["tool_calls"]:
            tc = msg["tool_calls"][0]
            assert tc["type"] == "function"
            assert tc["function"]["name"] == "read_file"
            args = json.loads(tc["function"]["arguments"])
            assert "path" in args
        else:
            assert len(msg.get("content", "")) > 0

    def test_tool_call_with_multiple_tools(self):
        """Model must handle multiple available tools."""
        import requests
        url = os.environ["TURBOQUANT_SERVER_URL"]
        tools = [
            {
                "type": "function",
                "function": {
                    "name": "read_file",
                    "description": "Read a file",
                    "parameters": {
                        "type": "object",
                        "properties": {"path": {"type": "string"}},
                        "required": ["path"],
                    },
                },
            },
            {
                "type": "function",
                "function": {
                    "name": "web_search",
                    "description": "Search the web",
                    "parameters": {
                        "type": "object",
                        "properties": {"query": {"type": "string"}},
                        "required": ["query"],
                    },
                },
            },
            {
                "type": "function",
                "function": {
                    "name": "execute_code",
                    "description": "Run Python code",
                    "parameters": {
                        "type": "object",
                        "properties": {"code": {"type": "string"}},
                        "required": ["code"],
                    },
                },
            },
        ]
        resp = requests.post(
            f"{url}/v1/chat/completions",
            json={
                "model": "gemma-4",
                "messages": [
                    {"role": "user", "content": "Search the web for 'bitcoin price'"}
                ],
                "tools": tools,
                "tool_choice": "auto",
            },
            timeout=120,
        )
        assert resp.status_code == 200
        data = resp.json()
        assert "choices" in data
        assert len(data["choices"]) > 0


if __name__ == "__main__":
    unittest.main()
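For reference, the schema tests in this deleted file imply a profile of roughly the following shape. A hypothetical sketch, with key names inferred from the assertions and every value illustrative (the server-command tests only grep the raw YAML text for `-ctk`, `-ctv`, `-c`, and `TURBO_LAYER_ADAPTIVE`, so the exact key holding the command is unknown):

```python
# Hypothetical hermes profile shape implied by the tests (illustrative only).
profile = {
    "providers": {
        "primary": {
            "endpoint": "http://127.0.0.1:8081",
            "api_path": "/v1/chat/completions",
            "turboquant": {"enabled": True, "kv_type": "turbo4"},
            "context": {"max_tokens": 32768},
            # Checked only textually by TestTurboquantServerConfig:
            "server_command": (
                "TURBO_LAYER_ADAPTIVE=7 llama-server -m model.gguf "
                "-ctk turbo4 -ctv turbo4 -c 32768"
            ),
        },
    },
}
```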