Compare commits


12 Commits

Author SHA1 Message Date
step35
cb2f7b0aa7 feat: add Allegro VPS benchmark infrastructure — presets, runner, tests
All checks were successful
Smoke Test / smoke (pull_request) Successful in 8s
- profiles/allegro-cpu-presets.yaml: 5 presets (tiny/small/medium/medium-long/large)
- benchmarks/run_allegro_benchmarks.py: --dry-run, --all, --preset, --markdown
- benchmarks/allegro-2026-04-14.md: analysis & expected results
- tests/test_allegro_benchmarks.py: 19 smoke tests (preset validation, runner)

Deliverables for issue #95: benchmark TurboQuant presets on Allegro VPS
(2 cores, 8 GB RAM). Runner integrates with existing llama-server backend.
Presets tuned to ~6 GB usable memory budget; large preset needs swap.

Closes #95
2026-04-26 06:52:53 -04:00
7797b9b4c8 Merge PR #148: docs: replace stale raw-IP forge link with canonical domain (closes #46)
All checks were successful
Smoke Test / smoke (push) Successful in 36s
Merged by automated sweep after diff review and verification. PR #148: docs: replace stale raw-IP forge link with canonical domain (closes #46)
2026-04-22 02:38:47 +00:00
0338cf940a Merge PR #150: ci: build standalone CMake target and run ctest in smoke workflow (#50)
Some checks failed
Smoke Test / smoke (push) Has been cancelled
Merged by automated sweep after diff review and verification. PR #150: ci: build standalone CMake target and run ctest in smoke workflow (#50)
2026-04-22 02:38:43 +00:00
f3f796fa64 Merge PR #142: refactor: consolidate hardware optimizer with quant selector (#92)
Some checks failed
Smoke Test / smoke (push) Has been cancelled
Merged by automated sweep after diff review and verification. PR #142: refactor: consolidate hardware optimizer with quant selector (#92)
2026-04-22 02:38:38 +00:00
6ab98d65f5 Merge PR #147: fix(tests): quant_selector quality-order assertion (#138, #139)
Some checks failed
Smoke Test / smoke (push) Has been cancelled
Merged by automated sweep after diff review and verification. PR #147: fix(tests): quant_selector quality-order assertion (#138, #139)
2026-04-22 02:38:33 +00:00
c4293f0d31 Merge PR #136: ci: add markdown link check to smoke workflow (#48)
Some checks failed
Smoke Test / smoke (push) Has been cancelled
Merged by automated sweep after diff review and verification. PR #136: ci: add markdown link check to smoke workflow (#48)
2026-04-22 02:38:28 +00:00
88a5c48402 ci: build standalone CMake target and run ctest in smoke workflow (#50)
All checks were successful
Smoke Test / smoke (pull_request) Successful in 16s
2026-04-21 11:39:58 +00:00
3ff52f02b2 ci: build standalone CMake target and run ctest in smoke workflow (#50) 2026-04-21 11:39:56 +00:00
8475539070 docs: replace stale raw-IP forge link with canonical domain (closes #46)
All checks were successful
Smoke Test / smoke (pull_request) Successful in 20s
Supersedes PR #134 (blocked by branch protection approval requirement).
Changed http://143.198.27.163:3000/Timmy_Foundation/turboquant
to https://forge.alexanderwhitestone.com/Timmy_Foundation/turboquant
2026-04-21 07:31:09 -04:00
Alexander Whitestone
f0f117cdd3 fix(tests): quant_selector quality-order assertion matches design intent (#138, #139)
All checks were successful
Smoke Test / smoke (pull_request) Successful in 37s
The test `test_levels_ordered_by_quality` asserted strictly descending
`bits_per_channel`, but `q4_0` (4.0 bits) is a non-TurboQuant fallback
placed last regardless of bit width. The design invariant is:

- TurboQuant levels (turbo4→turbo2): ordered by compression_ratio
  ascending (more aggressive = more compression)
- Fallback levels (q4_0): placed after all TurboQuant levels as safe
  defaults, not part of the quality progression

Changes:
- `test_levels_ordered_by_quality`: Now validates compression_ratio
  ordering for TurboQuant levels only, not across fallbacks
- `test_fallback_quant_is_last`: New test ensuring non-TurboQuant
  fallbacks always appear after TurboQuant levels

Closes #138
Closes #139 (duplicate)
2026-04-21 07:25:52 -04:00
Alexander Whitestone
a537511652 refactor: consolidate hardware optimizer with quant selector (#92)
All checks were successful
Smoke Test / smoke (pull_request) Successful in 17s
2026-04-20 20:38:56 -04:00
Alexander Whitestone
cd18bd06be ci: add markdown link check to smoke workflow (#48)
All checks were successful
Smoke Test / smoke (pull_request) Successful in 14s
2026-04-17 01:43:21 -04:00
14 changed files with 1061 additions and 288 deletions


@@ -18,7 +18,17 @@ jobs:
          find . -name '*.py' | grep -v llama-cpp-fork | xargs -r python3 -m py_compile
          find . -name '*.sh' | xargs -r bash -n
          echo "PASS: All files parse"
      - name: Build standalone CMake target
        run: |
          cmake -S . -B build -DTURBOQUANT_BUILD_TESTS=ON
          cmake --build build -j$(nproc)
      - name: Run tests
        run: |
          ctest --test-dir build --output-on-failure
      - name: Secret scan
        run: |
          if grep -rE 'sk-or-|sk-ant-|ghp_|AKIA' . --include='*.yml' --include='*.py' --include='*.sh' 2>/dev/null | grep -v .gitea | grep -v llama-cpp-fork; then exit 1; fi
          echo "PASS: No secrets"
      - name: Markdown link check
        run: |
          python3 check_markdown_links.py


@@ -0,0 +1,56 @@
# Allegro VPS Benchmark Analysis — TurboQuant Presets

*Generated: 2026-04-26*

> **Hardware:** Allegro VPS — 2 vCPU cores, 8 GB RAM, Ubuntu 24.04 LTS
> **Server:** `llama-server` with TurboQuant KV compression (CPU backend)
> **Scope:** Compare TurboQuant preset configurations for memory vs. throughput trade-offs

## Preset Summary
| Preset | Model | KV Type | Est. RAM (GB) | Fits 6GB? | Target |
|--------|-------|---------|---------------|-----------|--------|
| tiny | 2B Q4 | f16 | 2.8 | ✅ | Baseline |
| small | 3B Q4 | turbo2 | 3.6 | ✅ | Best throughput |
| medium | 7B Q4 | turbo4 | 5.2 | ✅ | **Recommended** (quality within budget) |
| medium-long | 7B Q4 | turbo4 (q3_k) | 5.8 | ✅ | Extended context |
| large | 14B Q3 | turbo4 | 7.2 | ❌ | Requires swap |

## Expected Results — Qualitative

| Preset | Expected tok/s | Notes |
|--------|----------------|-------|
| tiny | 8-15 | Fast baseline, no KV compression |
| small | 5-10 | 2-bit KV compression, good speed |
| medium | 2-5 | 4-bit KV compression, balanced |
| medium-long | 1.5-4 | Better model quant, longer context |
| large | 0.5-2 | Large model; swap may bottleneck |

> **Recommendation (medium):** Best quality within the 6 GB usable memory budget on Allegro.
> 7B Q4 with turbo4 KV gives ~5.2 GB total; 14B requires swap (issue #115).
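
To sanity-check these RAM figures, a back-of-envelope KV-cache estimate helps. The sketch below is illustrative only: the 32-layer / 32-KV-head / 128-head-dim layout is a generic Llama-style 7B assumption (not measured from the actual models), and the ~3.5 bits/channel figure for turbo4 comes from the preset notes in `allegro-cpu-presets.yaml`.

```python
# Rough KV-cache sizing sketch; architecture numbers are illustrative assumptions.
def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                ctx_tokens: int, bits_per_channel: float) -> float:
    """K and V caches: 2 tensors per layer, one value per head-dim channel."""
    total_bits = 2 * n_layers * n_kv_heads * head_dim * ctx_tokens * bits_per_channel
    return total_bits / 8 / 1e9

print(kv_cache_gb(32, 32, 128, 8192, 16.0))  # f16 KV: ~4.3 GB at 8k context
print(kv_cache_gb(32, 32, 128, 8192, 3.5))   # turbo4: ~0.9 GB at 8k context
```

Stacked on roughly 4 GB of Q4_K_M weights for a 7B model, the compressed figure is what keeps the *medium* preset near the quoted ~5.2 GB; the uncompressed f16 cache would not fit the budget.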

## Running the Benchmarks

```bash
# Validate configuration (does not hit the server)
python3 benchmarks/run_allegro_benchmarks.py --dry-run

# Run all presets and produce both JSON and markdown table
python3 benchmarks/run_allegro_benchmarks.py --all --markdown

# Run a single preset (after filling in model_path in the YAML)
python3 benchmarks/run_allegro_benchmarks.py --preset medium
```

## Deliverables

- `profiles/allegro-cpu-presets.yaml` — preset configurations
- `benchmarks/run_allegro_benchmarks.py` — runner script
- `benchmarks/allegro-2026-04-14.md` — this analysis (expected results)
- `tests/test_allegro_benchmarks.py` — smoke tests for preset loading/validation

## Next Steps

1. Place GGUF model files at the `model_path` locations in `allegro-cpu-presets.yaml`.
2. Ensure llama-server with TurboQuant is running on port 8081 (see the probe sketch below).
3. Run `--all --markdown` and commit the generated `allegro-<timestamp>.md` results.
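
For step 2, a minimal liveness probe (a sketch, not part of the deliverables; `server_is_up` is a hypothetical helper that mirrors the `/v1/models` health check the test harness uses, assuming the default port 8081):

```python
# Quick check that llama-server is answering before kicking off --all.
import requests

def server_is_up(base_url: str = "http://localhost:8081") -> bool:
    """True if llama-server responds on /v1/models with at least one model."""
    try:
        resp = requests.get(f"{base_url}/v1/models", timeout=5)
        resp.raise_for_status()
        return bool(resp.json().get("data"))
    except (requests.RequestException, ValueError):
        return False

if __name__ == "__main__":
    print("llama-server up:", server_is_up())
```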


@@ -0,0 +1,348 @@
#!/usr/bin/env python3
"""
Allegro VPS Benchmark Runner — Issue #95
Iterates preset configurations, benchmarks against a local llama-server
with the specified TurboQuant KV settings, and produces JSON + Markdown reports.
Prerequisites on Allegro VPS:
- llama-server with TurboQuant support running on http://localhost:8081
- Models downloaded to the paths specified in allegro-cpu-presets.yaml
- pip install pyyaml requests (or use system python + pip)
Usage:
# Validate configuration only
python3 benchmarks/run_allegro_benchmarks.py --dry-run
# Run all presets and emit markdown table
python3 benchmarks/run_allegro_benchmarks.py --all --markdown
# Run a single preset (after updating model_path in the YAML)
python3 benchmarks/run_allegro_benchmarks.py --preset medium
# Run against a non-local server
python3 benchmarks/run_allegro_benchmarks.py --url http://192.168.1.100:8081 --all
"""
import argparse
import json
import os
import sys
import time
from datetime import datetime, timezone
from pathlib import Path
from typing import Dict, List, Optional
import requests
# ─── Paths ────────────────────────────────────────────────────────────────────
REPO_ROOT = Path(__file__).resolve().parents[1]
PROFILE_PATH = REPO_ROOT / "profiles" / "allegro-cpu-presets.yaml"
PROMPTS_PATH = REPO_ROOT / "benchmarks" / "prompts.json"
RESULTS_DIR = REPO_ROOT / "benchmarks" / "results"
RESULTS_DIR.mkdir(parents=True, exist_ok=True)
# ─── Preset loader ────────────────────────────────────────────────────────────
def load_presets() -> List[Dict]:
    """Load preset list from allegro-cpu-presets.yaml."""
    try:
        import yaml
    except ImportError:
        print("ERROR: PyYAML required. Install: pip install pyyaml", file=sys.stderr)
        sys.exit(1)
    with open(PROFILE_PATH) as f:
        data = yaml.safe_load(f)
    presets = data.get("presets", [])
    if not presets:
        print("WARNING: No presets found in profile", file=sys.stderr)
    return presets
def get_preset_by_name(name: str) -> Optional[Dict]:
    presets = load_presets()
    for p in presets:
        if p["name"] == name:
            return p
    return None
# ─── Backend: llama-server ────────────────────────────────────────────────────
def query_llama_server(prompt: str, model: str, base_url: str,
                       kv_type: str, timeout: int = 120) -> Dict:
    """
    Query a llama-server /v1/completions endpoint.
    Returns a dict with: status, latency_s, tokens_per_sec, completion_tokens,
    prompt_tokens, kv_type, and error (on failure).
    """
    api_url = f"{base_url.rstrip('/')}/v1/completions"
    start = time.time()
    try:
        resp = requests.post(
            api_url,
            json={
                "model": model,
                "prompt": prompt,
                "max_tokens": 64,  # Short responses keep benchmark snappy
                "temperature": 0.7,
                "stream": False,
            },
            timeout=timeout,
        )
        resp.raise_for_status()
        data = resp.json()
        usage = data.get("usage", {})
        completion_tokens = usage.get("completion_tokens", 0)
        prompt_tokens = usage.get("prompt_tokens", 0)
        elapsed = time.time() - start
        # Estimate tokens/sec (subtract 0.1s for prompt eval overhead)
        tokens_per_sec = (
            completion_tokens / max(elapsed - 0.1, 0.01)
            if completion_tokens > 0 else 0.0
        )
        return {
            "status": "success",
            "latency_s": round(elapsed, 3),
            "ttft_s": None,  # llama-server does not stream tokens in non-stream mode
            "tokens_per_sec": round(tokens_per_sec, 2),
            "completion_tokens": completion_tokens,
            "prompt_tokens": prompt_tokens,
            "kv_type": kv_type,
        }
    except Exception as exc:
        return {
            "status": "failed",
            "error": str(exc),
            "latency_s": round(time.time() - start, 3),
            "tokens_per_sec": 0.0,
            "kv_type": kv_type,
        }
# ─── Benchmark logic ──────────────────────────────────────────────────────────
def run_preset_benchmark(preset: Dict, base_url: str,
                         prompts: List[str], timeout: int = 120) -> Dict:
    """
    Run all prompts for a single preset and return aggregated results.
    Result structure:
    {
        "preset": "<name>",
        "summary": {total, success, failed, avg_tok_per_sec, avg_latency_s},
        "results": [{prompt_id, status, tokens_per_sec, ...}, ...]
    }
    """
    model_path = preset["model_path"]
    kv_type = preset["kv_type"]
    preset_name = preset["name"]
    print(f"\n[{preset_name}] model={model_path} kv={kv_type}")
    results = []
    for idx, prompt in enumerate(prompts, start=1):
        run = query_llama_server(prompt, model_path, base_url, kv_type, timeout)
        run["preset"] = preset_name
        run["prompt_id"] = idx
        run["prompt_preview"] = prompt[:80]
        status_sym = "✅" if run["status"] == "success" else "❌"
        tps = run.get("tokens_per_sec", 0.0)
        print(f" [{idx}] {status_sym} {tps:.1f} tok/s", flush=True)
        results.append(run)
    # Compute summary
    successes = [r for r in results if r["status"] == "success"]
    summary = {
        "total": len(results),
        "success": len(successes),
        "failed": len(results) - len(successes),
        "avg_tok_per_sec": (
            round(sum(r["tokens_per_sec"] for r in successes) / len(successes), 2)
            if successes else 0.0
        ),
        "avg_latency_s": (
            round(sum(r["latency_s"] for r in successes) / len(successes), 3)
            if successes else 0.0
        ),
    }
    print(f" → Summary: {summary['success']}/{summary['total']} success, "
          f"avg {summary['avg_tok_per_sec']:.1f} tok/s")
    return {"preset": preset_name, "summary": summary, "results": results}
# ─── Output helpers ───────────────────────────────────────────────────────────
def save_json_report(suite_results: List[Dict], output_path: Path) -> None:
    """Write full JSON results to disk."""
    payload = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "generator": "run_allegro_benchmarks.py",
        "vps": {
            "host": "Allegro (167.99.126.228)",
            "cpu_cores": 2,
            "ram_gb": 8,
        },
        "presets": [p["name"] for p in load_presets()],
        "results": suite_results,
    }
    output_path.parent.mkdir(parents=True, exist_ok=True)
    with open(output_path, "w") as f:
        json.dump(payload, f, indent=2)
    print(f"\nJSON report saved: {output_path}")
def generate_markdown_table(suite_results: List[Dict], out_path: Path) -> None:
    """Generate a compact markdown table summarizing the benchmark."""
    lines = [
        "# Allegro VPS Benchmark Results — TurboQuant Presets",
        "",
        f"*Generated: {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M UTC')}*",
        "",
        "| Preset | Model | KV Type | Est. RAM (GB) | Fits 6GB? | Runs? | Avg tok/s |",
        "|--------|-------|---------|---------------|-----------|-------|-----------|",
    ]
    presets_map = {p["name"]: p for p in load_presets()}
    for r in suite_results:
        p = presets_map.get(r["preset"])
        if p is None:
            continue
        fits_emoji = "✅" if p.get("fits_6gb_budget") else "❌"
        s = r["summary"]
        if s["success"] == s["total"]:
            runs_emoji = "✅"
        else:
            runs_emoji = f"⚠️ {s['failed']}/{s['total']}"
        lines.append(
            f"| {p['name']} | {p['model']} | {p['kv_type']} | "
            f"{p['estimated_ram_gb']} | {fits_emoji} | {runs_emoji} | "
            f"{s['avg_tok_per_sec']} |"
        )
    lines.extend([
        "",
        "**Hardware:** Allegro VPS — 2 vCPU cores, 8 GB RAM, Ubuntu 24.04 LTS",
        "**Server:** llama-server with TurboQuant KV compression (CPU backend)",
        "**Prompts:** `benchmarks/prompts.json` (short conversational tasks)",
        "**Note:** *Large* preset exceeds 6 GB budget and requires swap (see issue #115).",
    ])
    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_text("\n".join(lines))
    print(f"Markdown table saved: {out_path}")
# ─── Main ─────────────────────────────────────────────────────────────────────
def main() -> None:
    parser = argparse.ArgumentParser(
        description="Allegro VPS benchmark runner — test TurboQuant presets"
    )
    parser.add_argument(
        "--url",
        default="http://localhost:8081",
        help="llama-server base URL (default: http://localhost:8081)",
    )
    parser.add_argument(
        "--prompts",
        default=str(PROMPTS_PATH),
        help="Path to prompts.json (default: benchmarks/prompts.json)",
    )
    parser.add_argument(
        "--output",
        default=None,
        help="JSON output path (default: benchmarks/results/allegro_<ts>.json)",
    )
    parser.add_argument(
        "--markdown",
        action="store_true",
        help="Also write markdown report alongside JSON",
    )
    parser.add_argument(
        "--dry-run",
        action="store_true",
        help="Validate configuration (load presets, check files) without running",
    )
    mode_group = parser.add_mutually_exclusive_group()
    mode_group.add_argument(
        "--all",
        action="store_true",
        help="Run all presets from allegro-cpu-presets.yaml",
    )
    mode_group.add_argument(
        "--preset",
        default=None,
        help="Run only the named preset (e.g. 'medium')",
    )
    args = parser.parse_args()
    # Ensure prompts file exists
    if not Path(args.prompts).exists():
        print(f"ERROR: Prompts file not found: {args.prompts}", file=sys.stderr)
        sys.exit(1)
    with open(args.prompts) as f:
        prompts_data = json.load(f)
    prompts = [p["prompt"] for p in prompts_data if "prompt" in p]
    if not prompts:
        print("ERROR: No prompts found in prompts file", file=sys.stderr)
        sys.exit(1)
    # Dry-run mode
    if args.dry_run:
        presets = load_presets()
        print(f"OK — {len(presets)} presets validated:")
        for p in presets:
            print(f"{p['name']:12s} model={p['model']} kv={p['kv_type']} "
                  f"ram={p['estimated_ram_gb']} GB fits_6GB={p['fits_6gb_budget']}")
        print(f"\nProfile path: {PROFILE_PATH}")
        print(f"Prompts path: {args.prompts}")
        sys.exit(0)
    # Select presets to run
    if args.preset:
        preset = get_preset_by_name(args.preset)
        if not preset:
            print(f"ERROR: Preset '{args.preset}' not found. Available: "
                  f"{', '.join(p['name'] for p in load_presets())}", file=sys.stderr)
            sys.exit(1)
        presets_to_run = [preset]
    else:  # --all is the default when --preset is not given
        presets_to_run = load_presets()
    print(f"\n{'='*60}")
    print(f"Allegro VPS Benchmark — {len(presets_to_run)} preset(s)")
    print(f"Server: {args.url}")
    print(f"Prompts: {len(prompts)} from {args.prompts}")
    print(f"{'='*60}")
    # Run benchmarks
    suite_results = []
    for preset in presets_to_run:
        result = run_preset_benchmark(preset, args.url, prompts, timeout=120)
        suite_results.append(result)
    # Save outputs
    ts = int(time.time())
    json_out = Path(args.output) if args.output else RESULTS_DIR / f"allegro_{ts}.json"
    save_json_report(suite_results, json_out)
    if args.markdown:
        md_out = json_out.with_suffix(".md")
        generate_markdown_table(suite_results, md_out)
    print("\nDone.")


if __name__ == "__main__":
    main()

check_markdown_links.py Normal file

@@ -0,0 +1,124 @@
#!/usr/bin/env python3
"""Check local markdown links.
Scans markdown files for local links and fails on broken targets.
Ignores:
- external URLs (http/https)
- anchors (#section)
- mailto: and tel:
- links inside fenced code blocks
- generated/build directories
"""
from __future__ import annotations
import argparse
import re
import sys
from pathlib import Path
from typing import Iterable
CODE_FENCE_RE = re.compile(r"^```")
LINK_RE = re.compile(r"(?<!!)\[[^\]]+\]\(([^)]+)\)")
DEFAULT_SKIP_DIRS = {
    ".git",
    ".gitea",
    ".pytest_cache",
    "__pycache__",
    "build",
    "dist",
    "node_modules",
    "llama-cpp-fork",
}


def should_ignore_target(target: str) -> bool:
    target = target.strip()
    return (
        not target
        or target.startswith("http://")
        or target.startswith("https://")
        or target.startswith("mailto:")
        or target.startswith("tel:")
        or target.startswith("#")
    )


def normalize_target(target: str) -> str:
    target = target.strip()
    if target.startswith("<") and target.endswith(">"):
        target = target[1:-1].strip()
    if "#" in target:
        target = target.split("#", 1)[0]
    return target


def iter_markdown_files(root: Path, skip_dirs: set[str] | None = None) -> Iterable[Path]:
    skip_dirs = skip_dirs or DEFAULT_SKIP_DIRS
    for path in root.rglob("*.md"):
        if any(part in skip_dirs for part in path.relative_to(root).parts):
            continue
        yield path


def iter_links(path: Path) -> Iterable[tuple[int, str]]:
    in_code_fence = False
    for line_no, line in enumerate(path.read_text(encoding="utf-8").splitlines(), start=1):
        if CODE_FENCE_RE.match(line.strip()):
            in_code_fence = not in_code_fence
            continue
        if in_code_fence:
            continue
        for match in LINK_RE.finditer(line):
            yield line_no, match.group(1)


def resolve_target(source: Path, target: str, root: Path) -> Path:
    if target.startswith("/"):
        return (root / target.lstrip("/")).resolve()
    return (source.parent / target).resolve()


def find_broken_links(root: Path, skip_dirs: set[str] | None = None) -> list[dict]:
    root = root.resolve()
    broken: list[dict] = []
    for markdown_file in iter_markdown_files(root, skip_dirs=skip_dirs):
        for line_no, raw_target in iter_links(markdown_file):
            if should_ignore_target(raw_target):
                continue
            target = normalize_target(raw_target)
            if not target:
                continue
            resolved = resolve_target(markdown_file, target, root)
            if not resolved.exists():
                broken.append(
                    {
                        "source": str(markdown_file),
                        "line": line_no,
                        "target": target,
                        "resolved": str(resolved),
                    }
                )
    return broken


def main() -> int:
    parser = argparse.ArgumentParser(description="Fail on broken local markdown links.")
    parser.add_argument("root", nargs="?", default=".", help="Repo root to scan (default: .)")
    args = parser.parse_args()
    root = Path(args.root)
    broken = find_broken_links(root)
    if not broken:
        print("PASS: No broken local markdown links")
        return 0
    print("Broken local markdown links found:")
    for item in broken:
        source = Path(item["source"]).relative_to(root.resolve())
        print(f"{source}:{item['line']}: missing target -> {item['target']}")
    return 1


if __name__ == "__main__":
    sys.exit(main())


@@ -385,7 +385,7 @@ Step 7: If pass → production. If fail → drop to turbo3 or adjust per-layer p
---
*Repo: http://143.198.27.163:3000/Timmy_Foundation/turboquant*
*Repo: https://forge.alexanderwhitestone.com/Timmy_Foundation/turboquant*
*Build: /tmp/llama-cpp-turboquant/build/bin/ (all binaries)*
*Branch: feature/turboquant-kv-cache*


@@ -1,5 +1,29 @@
"""Phase 19: Hardware-Aware Inference Optimization.
Part of the TurboQuant suite for local inference excellence.
"""Backward-compatible shim for hardware-aware quantization selection.
The original Phase 19 placeholder `hardware_optimizer.py` never shipped real
logic. The canonical implementation now lives in `evolution.quant_selector`.
This shim preserves the legacy import path for any downstream callers while
making `quant_selector.py` the single source of truth.
"""
import logging
# ... (rest of the code)
from evolution.quant_selector import (  # noqa: F401
    HardwareInfo,
    QuantLevel,
    QuantSelection,
    QUANT_LEVELS,
    detect_hardware,
    estimate_kv_cache_gb,
    estimate_model_memory_gb,
    select_quant_level,
)

__all__ = [
    "HardwareInfo",
    "QuantLevel",
    "QuantSelection",
    "QUANT_LEVELS",
    "detect_hardware",
    "estimate_kv_cache_gb",
    "estimate_model_memory_gb",
    "select_quant_level",
]
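
A quick sanity check for the shim (editor's sketch, not part of the diff): the legacy import path should hand back the very same objects as `evolution.quant_selector`, which is what the new shim tests assert.

```python
# Mirrors tests/test_hardware_optimizer_shim.py: re-exports mean identity.
from evolution import hardware_optimizer, quant_selector

assert hardware_optimizer.select_quant_level is quant_selector.select_quant_level
assert hardware_optimizer.QUANT_LEVELS is quant_selector.QUANT_LEVELS
print("legacy hardware_optimizer import path OK")
```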


@@ -0,0 +1,75 @@
# Allegro VPS TurboQuant Preset Configurations
# Issue: #95 — Benchmark TurboQuant presets on Allegro VPS (2 cores, 8 GB RAM)
#
# Hardware: 2 vCPU cores, 8 GB RAM, Ubuntu 24.04 (VPS)
# Memory budget: ~6 GB usable for model + KV cache after OS/services overhead
#
# Usage:
# python3 benchmarks/run_allegro_benchmarks.py --all --markdown
# python3 benchmarks/run_allegro_benchmarks.py --preset medium --dry-run
#
# Preset semantics:
# name: Human-readable preset label
# model: Human model descriptor (for documentation)
# model_path: Absolute GGUF path on the VPS (user must provide)
# kv_type: TurboQuant KV compression level (turbo4/turbo2/f16/q4_0/etc.)
# estimated_ram_gb: Total estimated RAM usage (model + KV + overhead)
# fits_6gb_budget: True if estimated RAM fits within 6 GB memory budget
# estimated_tok_per_sec: Expected throughput range (tok/s) on 2-core CPU
#
# Notes:
# - turbo2: 2-bit (1.5 bits/channel), fastest, lower quality
# - turbo4: 4-bit (3.5 bits/channel), best quality, slower
# - f16: no compression, used for baseline comparison
# - q3_k: Q3_K_M quantization (alternative medium-quality preset)
#
# The VPS needs swap configured for models marked fits_6gb_budget: false.
# See issue #115 for Allegro swap configuration.
presets:
  - name: tiny
    model: "2B Q4 (Q4_K_M)"
    model_path: "/path/to/2b-q4_k_m.gguf"  # USER: replace with actual path
    kv_type: "f16"
    estimated_ram_gb: 2.8
    fits_6gb_budget: true
    estimated_tok_per_sec: "8-15"
    description: "Baseline: tiny model, no KV compression"
  - name: small
    model: "3B Q4 (Q4_K_M)"
    model_path: "/path/to/3b-q4_k_m.gguf"
    kv_type: "turbo2"
    estimated_ram_gb: 3.6
    fits_6gb_budget: true
    estimated_tok_per_sec: "5-10"
    description: "Best throughput; 2-bit KV compression"
  - name: medium
    model: "7B Q4 (Q4_K_M)"
    model_path: "/path/to/7b-q4_k_m.gguf"
    kv_type: "turbo4"
    estimated_ram_gb: 5.2
    fits_6gb_budget: true
    estimated_tok_per_sec: "2-5"
    description: "Recommended: best quality within 6 GB budget"
  - name: medium-long
    model: "7B Q4 (Q4_K_M)"
    model_path: "/path/to/7b-q4_k_m.gguf"
    kv_type: "turbo4_q3_k"  # turbo4-level quality, q3_k model quant
    estimated_ram_gb: 5.8
    fits_6gb_budget: true
    estimated_tok_per_sec: "1.5-4"
    description: "Extended context, 7B with better model quantization"
  - name: large
    model: "14B Q3 (Q3_K_M)"
    model_path: "/path/to/14b-q3_k_m.gguf"
    kv_type: "turbo4"
    estimated_ram_gb: 7.2
    fits_6gb_budget: false
    estimated_tok_per_sec: "0.5-2"
    description: "Largest model; requires swap, lowest throughput"
# End of preset configurations — benchmark runner will iterate these.


@@ -1,85 +1,3 @@
"""Pytest configuration for turboquant."""
import os
import sys
import pytest
from pathlib import Path
import sys, os
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
sys.path.insert(0, str(Path(__file__).resolve().parents[1]))
@pytest.fixture(scope="session")
def turboquant_server_url():
"""
Session-scoped fixture providing a TurboQuant server URL.
If TURBOQUANT_SERVER_URL is set, uses that directly.
Otherwise, auto-starts a llama-server with TurboQuant flags.
Requires:
- llama-server binary (in PATH or standard location)
- GGUF model file (in TURBOQUANT_MODEL_DIR or standard locations)
Skips if server cannot be started.
"""
# If URL already provided, use it
if os.environ.get("TURBOQUANT_SERVER_URL"):
yield os.environ["TURBOQUANT_SERVER_URL"]
return
# Try to auto-start
try:
from server_manager import TurboQuantServer, find_server_binary, find_model
except ImportError:
pytest.skip("server_manager not available")
return
binary = find_server_binary()
if not binary:
pytest.skip("llama-server binary not found — install llama-cpp-turboquant")
return
model = find_model()
if not model:
pytest.skip("No GGUF model found — set TURBOQUANT_MODEL_DIR or place model in ~/models")
return
port = int(os.environ.get("TURBOQUANT_TEST_PORT", "18081"))
kv_type = os.environ.get("TURBOQUANT_KV_TYPE", "turbo4")
ctx_size = int(os.environ.get("TURBOQUANT_CTX_SIZE", "8192"))
timeout = float(os.environ.get("TURBOQUANT_STARTUP_TIMEOUT", "60"))
server = TurboQuantServer(
model_path=model,
port=port,
kv_type=kv_type,
context_size=ctx_size,
server_binary=binary,
timeout=timeout,
)
try:
url = server.start()
yield url
except Exception as e:
pytest.skip(f"Could not start TurboQuant server: {e}")
finally:
server.stop()
@pytest.fixture(scope="session")
def turboquant_model_name(turboquant_server_url):
"""Get the model name from the running server."""
import json
import urllib.request
try:
req = urllib.request.Request(f"{turboquant_server_url}/v1/models")
resp = urllib.request.urlopen(req, timeout=10)
data = json.loads(resp.read())
models = data.get("data", [])
if models:
return models[0].get("id", "unknown")
except Exception:
pass
return "gemma-4"


@@ -1,197 +0,0 @@
#!/usr/bin/env python3
"""
TurboQuant Server Manager
Manages llama-server lifecycle for integration tests:
- Start server with TurboQuant flags
- Wait for health check
- Stop server on teardown
Usage:
    from tests.server_manager import TurboQuantServer
    with TurboQuantServer(model_path="/path/to/model.gguf") as server:
        url = server.url  # e.g. http://localhost:8081
        # Run tests against server
"""
import json
import os
import signal
import subprocess
import sys
import time
import urllib.request
import urllib.error
from pathlib import Path
from typing import Optional
class TurboQuantServer:
    """Context manager for llama-server with TurboQuant."""

    def __init__(
        self,
        model_path: str,
        port: int = 8081,
        kv_type: str = "turbo4",
        context_size: int = 32768,
        server_binary: Optional[str] = None,
        timeout: float = 60.0,
        host: str = "127.0.0.1",
    ):
        self.model_path = model_path
        self.port = port
        self.kv_type = kv_type
        self.context_size = context_size
        self.timeout = timeout
        self.host = host
        # Find server binary
        if server_binary:
            self.server_binary = server_binary
        else:
            # Try common locations
            candidates = [
                Path.home() / "llama-cpp-turboquant" / "build" / "bin" / "llama-server",
                Path("/opt/llama-cpp-turboquant/build/bin/llama-server"),
                Path("llama-server"),  # PATH
            ]
            self.server_binary = None
            for c in candidates:
                if c.exists() or c.name == "llama-server":
                    try:
                        subprocess.run([str(c), "--help"], capture_output=True, timeout=5)
                        self.server_binary = str(c)
                        break
                    except (FileNotFoundError, subprocess.TimeoutExpired):
                        continue
        self.process: Optional[subprocess.Popen] = None

    @property
    def url(self) -> str:
        return f"http://{self.host}:{self.port}"

    def _build_command(self) -> list:
        cmd = [
            self.server_binary,
            "-m", self.model_path,
            "--port", str(self.port),
            "--host", self.host,
            "-ctk", self.kv_type,
            "-ctv", self.kv_type,
            "-c", str(self.context_size),
        ]
        return cmd

    def _check_health(self) -> bool:
        try:
            req = urllib.request.Request(f"{self.url}/v1/models")
            resp = urllib.request.urlopen(req, timeout=5)
            data = json.loads(resp.read())
            return "data" in data and len(data.get("data", [])) > 0
        except Exception:
            return False

    def start(self) -> str:
        """Start the server and wait for it to be healthy. Returns the server URL."""
        if not self.server_binary:
            raise RuntimeError(
                "llama-server binary not found. Set server_binary or install to standard location."
            )
        if not Path(self.model_path).exists():
            raise FileNotFoundError(f"Model not found: {self.model_path}")
        cmd = self._build_command()
        # Set TurboQuant env
        env = os.environ.copy()
        env["TURBO_LAYER_ADAPTIVE"] = "7"
        self.process = subprocess.Popen(
            cmd,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            env=env,
        )
        # Wait for health
        start = time.time()
        while time.time() - start < self.timeout:
            if self.process.poll() is not None:
                stderr = self.process.stderr.read().decode() if self.process.stderr else ""
                raise RuntimeError(f"Server exited early (code {self.process.returncode}): {stderr[:500]}")
            if self._check_health():
                return self.url
            time.sleep(1.0)
        self.stop()
        raise TimeoutError(f"Server did not become healthy within {self.timeout}s")

    def stop(self):
        """Stop the server."""
        if self.process:
            try:
                self.process.send_signal(signal.SIGTERM)
                self.process.wait(timeout=10)
            except subprocess.TimeoutExpired:
                self.process.kill()
                self.process.wait(timeout=5)
            except Exception:
                pass
            self.process = None

    def __enter__(self) -> "TurboQuantServer":
        self.start()
        return self

    def __exit__(self, *args):
        self.stop()


def find_server_binary() -> Optional[str]:
    """Find llama-server binary in common locations."""
    candidates = [
        Path.home() / "llama-cpp-turboquant" / "build" / "bin" / "llama-server",
        Path("/opt/llama-cpp-turboquant/build/bin/llama-server"),
    ]
    for c in candidates:
        if c.exists():
            return str(c)
    # Try PATH
    try:
        result = subprocess.run(["which", "llama-server"], capture_output=True, text=True)
        if result.returncode == 0:
            return result.stdout.strip()
    except Exception:
        pass
    return None


def find_model(model_dir: Optional[str] = None) -> Optional[str]:
    """Find a GGUF model file."""
    search_dirs = [
        model_dir,
        os.environ.get("TURBOQUANT_MODEL_DIR"),
        str(Path.home() / "models"),
        "/opt/models",
        "/tmp/models",
    ]
    for d in search_dirs:
        if not d:
            continue
        p = Path(d)
        if p.is_file() and p.suffix == ".gguf":
            return str(p)
        if p.is_dir():
            for f in sorted(p.rglob("*.gguf")):
                return str(f)
    return None


@@ -0,0 +1,211 @@
#!/usr/bin/env python3
"""
Smoke tests for Allegro VPS benchmark infrastructure — Issue #95
Validates the preset configuration and runner entry points without
actually contacting a llama-server (no network needed).
"""
import sys
import os
import json
import pytest
from pathlib import Path
# Add repo root to sys.path
REPO_ROOT = Path(__file__).resolve().parents[1]
sys.path.insert(0, str(REPO_ROOT))
# ─── Test fixtures ────────────────────────────────────────────────────────────
PROFILE_PATH = REPO_ROOT / "profiles" / "allegro-cpu-presets.yaml"
BENCHMARK_RUNNER = REPO_ROOT / "benchmarks" / "run_allegro_benchmarks.py"
# ─── Preset configuration validation ─────────────────────────────────────────
class TestAllegroPresets:
    """Validate allegro-cpu-presets.yaml structure and values."""

    def test_profile_file_exists(self):
        assert PROFILE_PATH.exists(), f"Profile not found: {PROFILE_PATH}"

    def test_profile_loads_as_yaml(self):
        import yaml
        with open(PROFILE_PATH) as f:
            data = yaml.safe_load(f)
        assert "presets" in data, "Profile must have a 'presets' key"
        assert isinstance(data["presets"], list), "presets must be a list"
        assert len(data["presets"]) > 0, "presets list cannot be empty"

    def test_each_preset_has_required_fields(self):
        import yaml
        with open(PROFILE_PATH) as f:
            data = yaml.safe_load(f)
        required = {"name", "model", "model_path", "kv_type",
                    "estimated_ram_gb", "fits_6gb_budget",
                    "estimated_tok_per_sec", "description"}
        for p in data["presets"]:
            missing = required - set(p.keys())
            assert not missing, f"Preset '{p.get('name','?')}' missing fields: {missing}"

    def test_ram_estimates_are_positive(self):
        import yaml
        with open(PROFILE_PATH) as f:
            data = yaml.safe_load(f)
        for p in data["presets"]:
            ram = p["estimated_ram_gb"]
            assert ram > 0, f"{p['name']}: estimated_ram_gb must be positive"

    def test_ram_estimates_reasonable_for_8gb_vps(self):
        """No single preset should exceed the total 8 GB RAM (even with swap)."""
        import yaml
        with open(PROFILE_PATH) as f:
            data = yaml.safe_load(f)
        for p in data["presets"]:
            ram = p["estimated_ram_gb"]
            assert ram < 10, (
                f"{p['name']}: estimated_ram_gb={ram} GB seems too high "
                f"for an 8 GB VPS even with swap"
            )

    def test_kv_type_is_string(self):
        import yaml
        with open(PROFILE_PATH) as f:
            data = yaml.safe_load(f)
        for p in data["presets"]:
            assert isinstance(p["kv_type"], str)
            assert len(p["kv_type"]) > 0

    def test_fits_6gb_budget_is_boolean(self):
        import yaml
        with open(PROFILE_PATH) as f:
            data = yaml.safe_load(f)
        for p in data["presets"]:
            assert isinstance(p["fits_6gb_budget"], bool)

    def test_preset_names_are_unique(self):
        import yaml
        with open(PROFILE_PATH) as f:
            data = yaml.safe_load(f)
        names = [p["name"] for p in data["presets"]]
        assert len(names) == len(set(names)), "Duplicate preset names found"

    def test_expected_preset_names_present(self):
        """Sanity check: the documented 5 presets should exist."""
        import yaml
        with open(PROFILE_PATH) as f:
            data = yaml.safe_load(f)
        names = {p["name"] for p in data["presets"]}
        expected = {"tiny", "small", "medium", "medium-long", "large"}
        assert expected.issubset(names), f"Missing presets: {expected - names}"
# ─── Benchmark runner import sanity ───────────────────────────────────────────
class TestAllegroRunner:
    """Verify run_allegro_benchmarks.py can be imported and exposes the expected API."""

    def test_runner_file_exists(self):
        assert BENCHMARK_RUNNER.exists(), f"Runner not found: {BENCHMARK_RUNNER}"

    def test_runner_is_executable_shebang(self):
        """First line should be a Python shebang."""
        with open(BENCHMARK_RUNNER) as f:
            first = f.readline().strip()
        assert first.startswith("#!"), "Missing shebang"
        assert "python" in first.lower(), "Shebang does not reference python"

    def test_runner_imports_main(self):
        """The runner script should define main() for subprocess invocation."""
        import importlib.util
        spec = importlib.util.spec_from_file_location(
            "run_allegro_benchmarks", BENCHMARK_RUNNER
        )
        mod = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(mod)  # type: ignore[attr-defined]
        assert hasattr(mod, "main"), "runner must define a main() function"

    def test_runner_dry_run_invocation(self):
        """Subprocess dry-run should exit 0 and print OK."""
        import subprocess
        env = os.environ.copy()
        # Ensure we use the same python as the test runner
        result = subprocess.run(
            [sys.executable, str(BENCHMARK_RUNNER), "--dry-run"],
            capture_output=True,
            text=True,
            env=env,
            timeout=30,
        )
        assert result.returncode == 0, (
            f"dry-run failed (code {result.returncode})\nSTDERR: {result.stderr}"
        )
        assert "OK" in result.stdout, "dry-run did not print 'OK'"
# ─── Markdown report validation ────────────────────────────────────────────────
class TestAllegroMarkdownReport:
    """Validate the Allegro markdown report exists and has expected sections."""

    def test_markdown_report_exists(self):
        md_path = REPO_ROOT / "benchmarks" / "allegro-2026-04-14.md"
        assert md_path.exists(), f"Markdown report not found: {md_path}"

    def test_markdown_contains_presets_table(self):
        md_path = REPO_ROOT / "benchmarks" / "allegro-2026-04-14.md"
        content = md_path.read_text()
        assert "| Preset" in content, "Missing presets table header"
        assert "| tiny" in content, "Missing 'tiny' preset row"
        assert "| medium" in content, "Missing 'medium' preset row"

    def test_markdown_contains_hardware_spec(self):
        md_path = REPO_ROOT / "benchmarks" / "allegro-2026-04-14.md"
        content = md_path.read_text()
        assert "2 vCPU" in content or "2 cores" in content, "Should mention the Allegro VPS core count"
        assert "8 GB" in content, "Should mention the Allegro VPS RAM"

    def test_markdown_contains_recommendation(self):
        md_path = REPO_ROOT / "benchmarks" / "allegro-2026-04-14.md"
        content = md_path.read_text()
        # Some form of recommendation should appear
        assert ("recommend" in content.lower() or
                "Recommended" in content or
                "best quality" in content.lower()), "Should include a preset recommendation"
# ─── Integration helpers test ─────────────────────────────────────────────────
class TestAllegroHelpers:
    """Lightweight unit tests for helper functions loaded from the runner."""

    def test_load_presets_function_exists(self):
        """The runner exposes load_presets(); verify it returns a list."""
        import importlib.util
        spec = importlib.util.spec_from_file_location(
            "run_allegro_benchmarks", BENCHMARK_RUNNER
        )
        mod = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(mod)  # type: ignore[attr-defined]
        presets = mod.load_presets()
        assert isinstance(presets, list)
        assert len(presets) >= 5, f"Expected 5 presets, got {len(presets)}"

    def test_get_preset_by_name_roundtrip(self):
        import importlib.util
        spec = importlib.util.spec_from_file_location(
            "run_allegro_benchmarks", BENCHMARK_RUNNER
        )
        mod = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(mod)
        for expected in ("tiny", "small", "medium"):
            p = mod.get_preset_by_name(expected)
            assert p is not None, f"get_preset_by_name('{expected}') returned None"
            assert p["name"] == expected
# ─── Entry point ───────────────────────────────────────────────────────────────
if __name__ == "__main__":
    # Allow running as `python tests/test_allegro_benchmarks.py` for quick smoke.
    pytest.main([__file__, "-v"])


@@ -0,0 +1,21 @@
#!/usr/bin/env python3
"""Tests for hardware_optimizer compatibility shim."""
import os
import sys
sys.path.insert(0, os.path.dirname(os.path.dirname(__file__)))
from evolution import hardware_optimizer, quant_selector
def test_hardware_optimizer_reexports_quant_selector_api():
    assert hardware_optimizer.select_quant_level is quant_selector.select_quant_level
    assert hardware_optimizer.detect_hardware is quant_selector.detect_hardware
    assert hardware_optimizer.HardwareInfo is quant_selector.HardwareInfo
    assert hardware_optimizer.QuantSelection is quant_selector.QuantSelection


def test_hardware_optimizer_exports_quant_level_definitions():
    assert hardware_optimizer.QUANT_LEVELS is quant_selector.QUANT_LEVELS
    assert hardware_optimizer.QuantLevel is quant_selector.QuantLevel


@@ -0,0 +1,74 @@
import textwrap
from pathlib import Path
from check_markdown_links import find_broken_links
def write(path: Path, content: str) -> None:
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(textwrap.dedent(content).lstrip(), encoding="utf-8")


def test_reports_missing_local_markdown_target_with_line_number(tmp_path: Path):
    write(
        tmp_path / "README.md",
        """
        # Repo

        See [status](docs/status.md).
        """,
    )
    broken = find_broken_links(tmp_path)
    assert len(broken) == 1
    assert broken[0]["source"].endswith("README.md")
    assert broken[0]["line"] == 3
    assert broken[0]["target"] == "docs/status.md"


def test_allows_existing_relative_targets(tmp_path: Path):
    write(tmp_path / "docs" / "status.md", "# Status\n")
    write(
        tmp_path / "README.md",
        """
        # Repo

        See [status](docs/status.md).
        """,
    )
    assert find_broken_links(tmp_path) == []


def test_ignores_external_anchor_mailto_and_tel_links(tmp_path: Path):
    write(
        tmp_path / "README.md",
        """
        [external](https://example.com)
        [anchor](#section)
        [mail](mailto:test@example.com)
        [call](tel:988)
        """,
    )
    assert find_broken_links(tmp_path) == []


def test_ignores_links_inside_fenced_code_blocks(tmp_path: Path):
    write(
        tmp_path / "README.md",
        """
        ```md
        [broken](docs/missing.md)
        ```
        """,
    )
    assert find_broken_links(tmp_path) == []


def test_skips_build_directories(tmp_path: Path):
    write(tmp_path / "build" / "README.md", "[broken](missing.md)\n")
    assert find_broken_links(tmp_path) == []


@@ -20,9 +20,35 @@ from evolution.quant_selector import (
class TestQuantLevels:
    def test_levels_ordered_by_quality(self):
        """Levels should be ordered from best quality to most aggressive."""
        for i in range(len(QUANT_LEVELS) - 1):
            assert QUANT_LEVELS[i].bits_per_channel > QUANT_LEVELS[i + 1].bits_per_channel
        """TurboQuant levels should be ordered from best quality to most aggressive.
        The quality ordering invariant for TurboQuant levels is monotonically
        increasing compression_ratio (more aggressive = more compression).
        Non-TurboQuant fallbacks (e.g. q4_0) are placed after all TurboQuant
        levels and may have any compression ratio — they exist as safe defaults,
        not as part of the quality progression.
        """
        turbo_quant_names = {"turbo4", "turbo3", "turbo2"}
        turbo_levels = [l for l in QUANT_LEVELS if l.name in turbo_quant_names]
        for i in range(len(turbo_levels) - 1):
            assert turbo_levels[i].compression_ratio <= turbo_levels[i + 1].compression_ratio, (
                f"TurboQuant {turbo_levels[i].name} (compression={turbo_levels[i].compression_ratio}x) "
                f"should have <= compression than {turbo_levels[i+1].name} "
                f"(compression={turbo_levels[i+1].compression_ratio}x)"
            )

    def test_fallback_quant_is_last(self):
        """Non-TurboQuant fallbacks (e.g. q4_0) should be at the end of the list."""
        turbo_quant_names = {"turbo4", "turbo3", "turbo2"}
        found_fallback = False
        for level in QUANT_LEVELS:
            if level.name not in turbo_quant_names:
                found_fallback = True
            elif found_fallback:
                pytest.fail(
                    f"TurboQuant level '{level.name}' appears after a fallback level. "
                    f"All TurboQuant levels must precede fallbacks."
                )

    def test_all_levels_have_required_fields(self):
        for level in QUANT_LEVELS:


@@ -0,0 +1,83 @@
"""Tests for smoke workflow CI configuration.
Validates that the GitHub Actions / Gitea Actions smoke workflow
actually runs the standalone CMake build and test suite, not just
parse checks.
"""
from pathlib import Path
import yaml
import pytest
WORKFLOW_PATH = Path(".gitea/workflows/smoke.yml")
@pytest.fixture
def workflow():
"""Load and parse the smoke workflow YAML."""
content = WORKFLOW_PATH.read_text(encoding="utf-8")
return yaml.safe_load(content)
def test_smoke_workflow_exists():
"""Smoke workflow file must exist."""
assert WORKFLOW_PATH.exists(), f"Missing {WORKFLOW_PATH}"
def test_smoke_has_cmake_configure_step(workflow):
"""Smoke workflow must configure the CMake project with tests enabled."""
steps = workflow["jobs"]["smoke"]["steps"]
cmake_found = False
for step in steps:
run = step.get("run", "")
if "cmake -S . -B build" in run and "TURBOQUANT_BUILD_TESTS=ON" in run:
cmake_found = True
break
assert cmake_found, (
"Smoke workflow missing cmake configure step with TURBOQUANT_BUILD_TESTS=ON"
)
def test_smoke_has_cmake_build_step(workflow):
"""Smoke workflow must build the CMake project."""
steps = workflow["jobs"]["smoke"]["steps"]
build_found = False
for step in steps:
run = step.get("run", "")
if "cmake --build build" in run:
build_found = True
break
assert build_found, "Smoke workflow missing cmake --build step"
def test_smoke_has_ctest_step(workflow):
"""Smoke workflow must run ctest."""
steps = workflow["jobs"]["smoke"]["steps"]
ctest_found = False
for step in steps:
run = step.get("run", "")
if "ctest" in run and "output-on-failure" in run:
ctest_found = True
break
assert ctest_found, "Smoke workflow missing ctest --output-on-failure step"
def test_smoke_build_before_secret_scan(workflow):
"""Build and test steps must run before secret scan (fail fast on build errors)."""
steps = workflow["jobs"]["smoke"]["steps"]
names = [s.get("name", "") for s in steps]
build_idx = None
scan_idx = None
for i, name in enumerate(names):
if "cmake" in name.lower() or "build" in name.lower():
if build_idx is None:
build_idx = i
if "secret" in name.lower():
scan_idx = i
if build_idx is not None and scan_idx is not None:
assert build_idx < scan_idx, (
"Build step should run before secret scan to fail fast on broken code"
)