Compare commits


3 Commits

Author SHA1 Message Date
Alexander Whitestone
dabb96d315 docs: record Qwen3.5-9B DFlash Metal timeout (refs #152, #154)
All checks were successful
Smoke Test / smoke (pull_request) Successful in 19s
2026-04-21 22:25:25 -04:00
Alexander Whitestone
69cef8a90f bench: record Apple Silicon DFlash pilot result (refs #152)
All checks were successful
Smoke Test / smoke (pull_request) Successful in 18s
2026-04-21 22:20:15 -04:00
Alexander Whitestone
636d294896 feat: add Apple Silicon DFlash benchmark planner (refs #152)
All checks were successful
Smoke Test / smoke (pull_request) Successful in 18s
2026-04-21 22:00:22 -04:00
20 changed files with 544 additions and 876 deletions

View File

@@ -18,17 +18,7 @@ jobs:
find . -name '*.py' | grep -v llama-cpp-fork | xargs -r python3 -m py_compile
find . -name '*.sh' | xargs -r bash -n
echo "PASS: All files parse"
- name: Build standalone CMake target
run: |
cmake -S . -B build -DTURBOQUANT_BUILD_TESTS=ON
cmake --build build -j$(nproc)
- name: Run tests
run: |
ctest --test-dir build --output-on-failure
- name: Secret scan
run: |
if grep -rE 'sk-or-|sk-ant-|ghp_|AKIA' . --include='*.yml' --include='*.py' --include='*.sh' 2>/dev/null | grep -v .gitea | grep -v llama-cpp-fork; then exit 1; fi
echo "PASS: No secrets"
- name: Markdown link check
run: |
python3 check_markdown_links.py

View File

@@ -30,8 +30,4 @@ See [issues](https://forge.alexanderwhitestone.com/Timmy_Foundation/turboquant/i
## Docs
- [Project Status](docs/PROJECT_STATUS.md) — Full project status and build specification
## Benchmarks
- [Bonsai 1-bit vs Q4_0 — M4 Pro Metal](benchmarks/bonsai-1bit-comparison-2025-10-06.md) — speed, memory, quality comparison (issue #100)
- Run locally: `python3 benchmarks/run_bonsai_compare.py`
- [DFlash on Apple Silicon](docs/DFLASH_APPLE_SILICON.md) — MLX benchmark planner, setup commands, and report workflow

View File

@@ -1,148 +0,0 @@
# Bonsai 1-bit vs Q4_0 Benchmark Results
> Issue #100 — bench: Bonsai 1-bit models vs Q4_0 — quality, speed, memory
> Author: Rockachopa (STEP35 FREE BURN)
> Date: 2025-10-06
## Test Host
| Item | Value |
|------|-------|
| Machine | Apple Silicon MacBook |
| Chip | M4 Pro (Metal GPU, 48 GB unified memory) — published reference from Prism ML |
| Backend | llama.cpp Prism fork — `llama.cpp` + Metal Q1_0 kernels |
| OS | macOS 15.x |
| Models dir | `~/models/` |
| Run command | `python3 benchmarks/run_bonsai_compare.py --models-dir ~/models` |
> **Note on M1 Mac**: Published Bonsai README explicitly reports M4 Pro numbers.
> For pure M1 data (M1 8-core GPU, 16 GB RAM), run the included benchmark script on
> your own machine and commit `benchmarks/bonsai_results_YYYY-MM-DD.json` back to the repo.
## Model Set
| Model | File | Quant | Source repo |
|-------------|---------------------------------|-------|-------------|
| Bonsai-8B | `Bonsai-8B-Q1_0.gguf` | Q1_0 | prism-ml/Bonsai-8B-gguf (gated) |
| Bonsai-4B | `Bonsai-4B-Q1_0.gguf` | Q1_0 | prism-ml/Bonsai-4B-gguf (gated) |
| Bonsai-1.7B | `Bonsai-1.7B-Q1_0.gguf` | Q1_0 | prism-ml/Bonsai-1.7B-gguf (gated) |
| Qwen3-8B | `Qwen3-8B-Q4_0.gguf` | Q4_0 | TheBloke/Qwen3-8B-GGUF (public) |
| Qwen3-4B | `Qwen3-4B-Q4_0.gguf` | Q4_0 | TheBloke/Qwen3-4B-GGUF (public) |
| Qwen3-1.7B | `Qwen3-1.7B-Q4_0.gguf` | Q4_0 | TheBloke/Qwen3-1.7B-GGUF (public) |
## Disk Size & Memory Footprint
Disk sizes are measured from actual GGUF files; GPU mem estimate includes activation
overhead (weights + KV cache warm-up).
| Model | Disk size (GB) | Est. GPU mem (GB) | FP16 baseline | Compression |
|-------------|---------------:|------------------:|--------------:|------------:|
| Bonsai-8B | 1.15 | 1.2 | 16.38 | **14.2×** |
| Bonsai-4B | 0.57 | 0.6 | 8.04 | **14.1×** |
| Bonsai-1.7B | 0.24 | 0.25| 3.44 | **14.3×** |
| Qwen3-8B | 4.70 | 5.0 | 16.38 | 3.5× |
| Qwen3-4B | 2.40 | 2.5 | 8.04 | 3.4× |
| Qwen3-1.7B | 1.00 | 1.05| 3.44 | 3.4× |
1-bit Bonsai models occupy **1.15 → 0.24 GB** on disk vs 4.7–1.0 GB for Q4_0 Qwen baselines.
Same numerical precision across embeddings, attention, MLP projections, and LM head.
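The compression column follows directly from the FP16 baseline and the measured disk size; a minimal sketch of the arithmetic, using values copied from the table above (the helper name is illustrative only):
```python
# Compression ratio = FP16 baseline footprint / quantized disk size.
def compression_ratio(fp16_gb: float, disk_gb: float) -> float:
    return round(fp16_gb / disk_gb, 1)

print(compression_ratio(16.38, 1.15))  # Bonsai-8B  -> ~14.2
print(compression_ratio(16.38, 4.70))  # Qwen3-8B   -> ~3.5
```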
## Throughput (Published Reference — M4 Pro Metal, 48 GB)
Numbers below are from the official Prism ML model READMEs (HuggingFace).
Measured with `llama-cli --timings`; prompt `"Once upon a time"`;
128 output tokens; temperature 0; Metal backend; all layers offloaded (`-ngl 99`).
| Model | TG128 tok/s (1-bit) | FP16 TG tok/s | Speedup vs FP16 |
|-------------|-------------------:|--------------:|----------------:|
| Bonsai-8B | 85 | 16 | **5.4×** |
| Bonsai-4B | 136 | 29 | **4.7×** |
| Bonsai-1.7B | 250 | 65 | **3.8×** |
Prefill throughput (PP512, tok/s):
| Model | PP512 tok/s (1-bit) | FP16 PP tok/s |
|-------------|-------------------:|--------------:|
| Bonsai-8B | 498 | 490 |
| Bonsai-4B | 915 | 915 |
| Bonsai-1.7B | 2305 | 2291 |
> **Interpretation**: 1-bit kernels eliminate the INT→FP16 dequantization stall on Metal,
> yielding 3.8×–5.4× speedups for generation. Prefill is compute-bound anyway (FFT path),
> so speedup is minimal there.
## Quality (Benchmark Scores — Published)
GSM8K / MMLU-R / MuSR / HE+ / IFEval / BFCL scores from Prism ML technical report.
Evaluated on H100 under EvalScope v1.4.2 with vLLM 0.15.1, identical scoring across all models.
| Model | Avg | GSM8K | MMLU-R | MuSR | HE+ | IFEval | BFCL |
|-------------|-----:|------:|-------:|-----:|-----:|-------:|-----:|
| Bonsai-8B | **70.5** | 88.0 | 65.7 | 50.0 | 73.8 | 79.8 | 65.7 |
| Qwen3-8B | 79.3 | 93.0 | 83.0 | 55.0 | 82.0 | 84.2 | 81.0 |
| Qwen3-4B | 76.0 | 90.0 | 80.0 | 52.0 | 78.0 | 80.1 | 78.1 |
| Qwen3-1.7B | 71.0 | 87.0 | 74.0 | 49.5 | 75.0 | 76.4 | 72.2 |
Despite being **1/14th the size**, 1-bit Bonsai 8B is competitive with leading
6B–9B full-precision instruct models. It drops 8–9 points versus best-in-class
(mostly factuality and fine-grained instruction adherence), but still well above random.
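As a quick sanity check, the Avg column matches a plain arithmetic mean of the six scores (the benchmark script's comments describe it that way); for the Bonsai-8B row:
```python
# Bonsai-8B scores: GSM8K, MMLU-R, MuSR, HE+, IFEval, BFCL (from the table above).
scores = [88.0, 65.7, 50.0, 73.8, 79.8, 65.7]
print(round(sum(scores) / len(scores), 1))  # 70.5
```
Other rows in the published tables may be rounded slightly differently.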
## Tool Calling Viability
Run the regression test suite: `pytest tests/test_bonsai_tool_calling.py`
(created by issue #173). It spins up a local llama-server with Metal offload,
sends 10 structured tool-use prompts, and measures success rate.
**Pre-release indicators** (from Prism ML tool-use pilot):
- Bonsai-8B 1-bit achieved ~78% structured function-calling accuracy on 50-sample test set
- Failure mode: rare schema mis-generation on low-confidence math subroutines
- Memory budget on M1 Pro (16 GB) leaves ~13 GB for context with 8B model (3 GB base + 1 GB KV)
**Verdict**: 1-bit Bonsai 8B is viable for edge agent tool calling; Bonsai-4B
preferred when total RAM ≤ 4 GB (Air/Raspberry Pi).
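For orientation only, a hypothetical shape of one such structured-call check (not the actual suite from issue #173; the port, prompt, and expected keys are assumptions) might look like:
```python
# Hypothetical sketch: send one prompt to a locally running llama-server
# (OpenAI-compatible /v1/chat/completions endpoint) and check that the reply
# is valid JSON with the expected keys. Port and schema are assumptions.
import json
import requests

def check_structured_reply(port: int = 8080) -> bool:
    prompt = "Return JSON with keys 'tool' and 'arguments' for: weather in Paris."
    resp = requests.post(
        f"http://127.0.0.1:{port}/v1/chat/completions",
        json={"messages": [{"role": "user", "content": prompt}], "temperature": 0},
        timeout=120,
    )
    text = resp.json()["choices"][0]["message"]["content"]
    try:
        payload = json.loads(text)
    except json.JSONDecodeError:
        return False
    return {"tool", "arguments"} <= payload.keys()
```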
## Minimum Viable Model for Edge Deployment
| Edge form factor | Recommended model | Why |
|-----------------|-:|:----|
| MacBook M1 (16 GB RAM, Metal GPU) | `Bonsai-8B-Q1_0` | Full capability, <2 GB total VRAM; room for 64K context |
| MacBook Air M2 (8 GB RAM) | `Bonsai-4B-Q1_0` | 0.6 GB VRAM, leaves memory for OS + browser |
| Raspberry Pi 5 (8 GB, Mali GPU) | `Bonsai-1.7B-Q1_0` | Fits entirely in RAM, usable latency (~200 tok/s) |
## How to Reproduce
```bash
# 1. Clone Prism fork of llama.cpp (Q1_0 Metal kernel support)
git clone https://github.com/PrismML-Eng/llama.cpp
cd llama.cpp
cmake -B build -DLLAMA_METAL=ON
cmake --build build -j # produces build/bin/llama-cli and llama-server
# 2. Download model files into ~/models/
# Bonsai are gated — you need HuggingFace access approval + `huggingface-cli login`
# Qwen3 baselines are public (TheBloke)
# Example:
huggingface-cli download prism-ml/Bonsai-8B-gguf Bonsai-8B-Q1_0.gguf --local-dir ~/models
huggingface-cli download prism-ml/Bonsai-4B-gguf Bonsai-4B-Q1_0.gguf --local-dir ~/models
huggingface-cli download prism-ml/Bonsai-1.7B-gguf Bonsai-1.7B-Q1_0.gguf --local-dir ~/models
# Additionally: download Qwen3 Q4_0 GGUF files from TheBloke into the same directory.
# 3. Run the benchmark (from turboquant repo root)
python3 benchmarks/run_bonsai_compare.py --models-dir ~/models
# 4. Commit the resulting JSON to turboquant/benchmarks/
git add benchmarks/bonsai_results_$(date +%Y-%m-%d).json
git commit -m "bench: add Bonsai 1-bit vs Q4_0 M1 Mac results (#100)"
```
## Sources
- Prism ML, "Bonsai: End-to-End 1-bit Language Model Deployment Across Apple, GPU, and Mobile Runtimes" (2026 ICLR submission)
- Model repositories:
- https://huggingface.co/prism-ml/Bonsai-8B-gguf
- https://huggingface.co/prism-ml/Bonsai-4B-gguf
- https://huggingface.co/prism-ml/Bonsai-1.7B-gguf
- https://huggingface.co/TheBloke/Qwen3-8B-GGUF (public)
- TurboQuant repo upstream:
- https://github.com/TheTom/llama-cpp-turboquant (Metal fork with Q1_0 kernels)
- https://github.com/TheTom/turboquant_plus (reference PolarQuant + QJL impl)

View File

@@ -1,83 +0,0 @@
{
"generated_at": "2026-04-30T06:48:24.534271+00:00",
"host_platform": "darwin",
"models_dir": "/nonexistent/models/path",
"results": [
{
"model": "Bonsai-8B-1bit",
"file": "Bonsai-8B-Q1_0.gguf",
"found": false,
"disk_size_gb": null,
"est_gpu_gb": 1.15,
"tok_per_sec": null,
"avg": 70.5,
"gsm8k": 88.0,
"mmlu_r": 65.7,
"musr": 50.0,
"he_plus": 73.8,
"ifeval": 79.8,
"bfcl": 65.7,
"quality_note": "Published Prism ML 'Bonsai' technical report (EvalScope v1.4.2, H100/H800 infrastructure). M4 Pro measured 85 tok/s (5.4\u00d7 vs FP16)."
},
{
"model": "Bonsai-4B-1bit",
"file": "Bonsai-4B-Q1_0.gguf",
"found": false,
"disk_size_gb": null,
"est_gpu_gb": 0.57,
"tok_per_sec": null,
"avg": 67.5,
"gsm8k": 84.0,
"mmlu_r": 62.0,
"quality_note": "Estimated from 8B trend \u2014 full eval required for ground-truth score."
},
{
"model": "Bonsai-1.7B-1bit",
"file": "Bonsai-1.7B-Q1_0.gguf",
"found": false,
"disk_size_gb": null,
"est_gpu_gb": 0.24,
"tok_per_sec": null,
"avg": 62.0,
"gsm8k": 78.0,
"mmlu_r": 56.0,
"quality_note": "Estimated from 8B trend \u2014 full eval required for ground-truth score."
},
{
"model": "Qwen3-8B-Q4_0",
"file": "Qwen3-8B-Q4_0.gguf",
"found": false,
"disk_size_gb": null,
"est_gpu_gb": 4.7,
"tok_per_sec": null,
"avg": 79.3,
"gsm8k": 93.0,
"mmlu_r": 83.0,
"source": "Alibaba Qwen 3 8B model card (Q4_0 baseline)"
},
{
"model": "Qwen3-4B-Q4_0",
"file": "Qwen3-4B-Q4_0.gguf",
"found": false,
"disk_size_gb": null,
"est_gpu_gb": 2.4,
"tok_per_sec": null,
"avg": 76.0,
"gsm8k": 90.0,
"mmlu_r": 80.0,
"source": "Approximated from Qwen3-4B model card metrics (public)"
},
{
"model": "Qwen3-1.7B-Q4_0",
"file": "Qwen3-1.7B-Q4_0.gguf",
"found": false,
"disk_size_gb": null,
"est_gpu_gb": 1.0,
"tok_per_sec": null,
"avg": 71.0,
"gsm8k": 87.0,
"mmlu_r": 74.0,
"source": "Approximated from Qwen3-1.7B model card metrics (public)"
}
]
}

View File

@@ -1,88 +0,0 @@
{
"generated_at": "2025-10-06T00:00:00.000Z",
"host_platform": "darwin",
"notes": "Pre-seeded results file — numbers sourced from Prism ML model READMEs (published M4 Pro Metal measurements). Replace with locally-generated file by running benchmarks/run_bonsai_compare.py.",
"source": "https://huggingface.co/prism-ml/Bonsai-8B-gguf (and -4B, -1.7B repos)",
"methodology": "llama-cli --timings, prompt='Once upon a time', 128 tokens, temp=0, -ngl 99 (full GPU offload)",
"results": [
{
"model": "Bonsai-8B-1bit",
"file": "Bonsai-8B-Q1_0.gguf",
"found": false,
"disk_size_gb": 1.15,
"est_gpu_gb": 1.15,
"tok_per_sec": null,
"avg": 70.5,
"gsm8k": 88.0,
"mmlu_r": 65.7,
"musr": 50.0,
"he_plus": 73.8,
"ifeval": 79.8,
"bfcl": 65.7,
"quality_note": "Published Prism ML technical report (EvalScope v1.4.2). M4 Pro Metal: 85 tok/s.",
"platform_reference": "M4 Pro (Metal), 48 GB — NOT M1 (see live-run file for actual M1 measurements)"
},
{
"model": "Bonsai-4B-1bit",
"file": "Bonsai-4B-Q1_0.gguf",
"found": false,
"disk_size_gb": 0.57,
"est_gpu_gb": 0.57,
"tok_per_sec": null,
"avg": 67.5,
"gsm8k": 84.0,
"mmlu_r": 62.0,
"quality_note": "Estimated from Bonsai size-quality trend — full eval needed.",
"platform_reference": "M4 Pro (Metal) published: 136 tok/s"
},
{
"model": "Bonsai-1.7B-1bit",
"file": "Bonsai-1.7B-Q1_0.gguf",
"found": false,
"disk_size_gb": 0.24,
"est_gpu_gb": 0.24,
"tok_per_sec": null,
"avg": 62.0,
"gsm8k": 78.0,
"mmlu_r": 56.0,
"quality_note": "Estimated from Bonsai size-quality trend — full eval needed.",
"platform_reference": "M4 Pro (Metal) published: 250 tok/s"
},
{
"model": "Qwen3-8B-Q4_0",
"file": "Qwen3-8B-Q4_0.gguf",
"found": false,
"disk_size_gb": 4.70,
"est_gpu_gb": 4.70,
"tok_per_sec": null,
"avg": 79.3,
"gsm8k": 93.0,
"mmlu_r": 83.0,
"source": "Alibaba Qwen 3 8B model card (Q4_0 baseline)"
},
{
"model": "Qwen3-4B-Q4_0",
"file": "Qwen3-4B-Q4_0.gguf",
"found": false,
"disk_size_gb": 2.40,
"est_gpu_gb": 2.40,
"tok_per_sec": null,
"avg": 76.0,
"gsm8k": 90.0,
"mmlu_r": 80.0,
"source": "Approximated from Qwen3 4B model card metrics"
},
{
"model": "Qwen3-1.7B-Q4_0",
"file": "Qwen3-1.7B-Q4_0.gguf",
"found": false,
"disk_size_gb": 1.00,
"est_gpu_gb": 1.00,
"tok_per_sec": null,
"avg": 71.0,
"gsm8k": 87.0,
"mmlu_r": 74.0,
"source": "Approximated from Qwen3 1.7B model card metrics"
}
]
}

View File

@@ -0,0 +1,189 @@
#!/usr/bin/env python3
"""Apple Silicon DFlash planning helpers and CLI (issue #152)."""
from __future__ import annotations
import argparse
import json
import platform
import subprocess
from dataclasses import asdict, dataclass
from pathlib import Path
from typing import Iterable, Optional
@dataclass(frozen=True)
class DFlashPair:
slug: str
base_model: str
draft_model: str
estimated_total_weights_gb: float
minimum_recommended_memory_gb: float
draft_sliding_window_size: int = 4096
SUPPORTED_PAIRS: tuple[DFlashPair, ...] = (
DFlashPair(
slug="qwen35-4b",
base_model="Qwen/Qwen3.5-4B",
draft_model="z-lab/Qwen3.5-4B-DFlash",
estimated_total_weights_gb=9.68,
minimum_recommended_memory_gb=16.0,
),
DFlashPair(
slug="qwen35-9b",
base_model="Qwen/Qwen3.5-9B",
draft_model="z-lab/Qwen3.5-9B-DFlash",
estimated_total_weights_gb=19.93,
minimum_recommended_memory_gb=28.0,
),
)
def detect_total_memory_gb() -> float:
"""Detect total system memory in GiB, rounded to a whole number for planning."""
system = platform.system()
if system == "Darwin":
mem_bytes = int(subprocess.check_output(["sysctl", "-n", "hw.memsize"]).strip())
return round(mem_bytes / (1024 ** 3), 1)
if system == "Linux":
with open("/proc/meminfo", "r", encoding="utf-8") as handle:
for line in handle:
if line.startswith("MemTotal:"):
mem_kb = int(line.split()[1])
return round(mem_kb / (1024 ** 2), 1)
raise RuntimeError(f"Unsupported platform for memory detection: {system}")
def get_pair(slug: str) -> DFlashPair:
for pair in SUPPORTED_PAIRS:
if pair.slug == slug:
return pair
raise ValueError(f"Unknown DFlash pair: {slug}")
def select_pair(total_memory_gb: float, preferred_slug: Optional[str] = None) -> DFlashPair:
"""Pick the strongest upstream-supported pair likely to fit the machine."""
if preferred_slug:
return get_pair(preferred_slug)
fitting = [pair for pair in SUPPORTED_PAIRS if total_memory_gb >= pair.minimum_recommended_memory_gb]
if fitting:
return max(fitting, key=lambda pair: pair.minimum_recommended_memory_gb)
return SUPPORTED_PAIRS[0]
def build_mlx_benchmark_command(
pair: DFlashPair,
*,
dataset: str = "gsm8k",
max_samples: int = 128,
enable_thinking: bool = True,
) -> str:
"""Build the upstream MLX benchmark command from the DFlash README."""
parts = [
"python -m dflash.benchmark --backend mlx",
f"--model {pair.base_model}",
f"--draft-model {pair.draft_model}",
f"--dataset {dataset}",
f"--max-samples {max_samples}",
]
if enable_thinking:
parts.append("--enable-thinking")
parts.append(f"--draft-sliding-window-size {pair.draft_sliding_window_size}")
return " \\\n ".join(parts)
def build_setup_commands(pair: DFlashPair) -> list[str]:
return [
"python3 -m venv .venv-dflash",
"source .venv-dflash/bin/activate",
"git clone https://github.com/z-lab/dflash.git",
"cd dflash",
"pip install -e .[mlx]",
build_mlx_benchmark_command(pair),
]
def render_report_template(machine_label: str, pair: DFlashPair) -> str:
command = build_mlx_benchmark_command(pair)
return f"""# DFlash Apple Silicon Benchmark Report
## Machine
- Label: {machine_label}
- Selected pair: {pair.slug}
- Base model: {pair.base_model}
- Draft model: {pair.draft_model}
- Estimated total weight footprint: {pair.estimated_total_weights_gb:.2f} GB
## Setup
```bash
python3 -m venv .venv-dflash
source .venv-dflash/bin/activate
git clone https://github.com/z-lab/dflash.git
cd dflash
pip install -e .[mlx]
{command}
```
## Baseline comparison
Compare against **plain MLX or llama.cpp speculative decoding** on the same prompt set.
## Results
- Throughput (tok/s):
- Peak memory (GB):
- Notes on acceptance / behavior:
## Verdict
Worth operationalizing locally?
- [ ] Yes
- [ ] No
- [ ] Needs more data
## Recommendation
Explain whether this should become part of the local inference stack.
"""
def build_plan(total_memory_gb: float, preferred_slug: Optional[str] = None) -> dict:
pair = select_pair(total_memory_gb=total_memory_gb, preferred_slug=preferred_slug)
return {
"machine_memory_gb": total_memory_gb,
"selected_pair": asdict(pair),
"setup_commands": build_setup_commands(pair),
"benchmark_command": build_mlx_benchmark_command(pair),
"baseline_note": "Compare against plain MLX or llama.cpp speculative decoding on the same prompt set.",
}
def write_output(path: Path, content: str) -> None:
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(content, encoding="utf-8")
def main(argv: Optional[Iterable[str]] = None) -> int:
parser = argparse.ArgumentParser(description="Plan Apple Silicon DFlash benchmarks")
parser.add_argument("--memory-gb", type=float, default=None, help="Override detected total memory")
parser.add_argument("--pair", choices=[pair.slug for pair in SUPPORTED_PAIRS], default=None)
parser.add_argument("--machine-label", default="Apple Silicon Mac")
parser.add_argument("--format", choices=["json", "markdown"], default="markdown")
parser.add_argument("--output", default=None, help="Write plan/report to file instead of stdout")
args = parser.parse_args(list(argv) if argv is not None else None)
memory_gb = args.memory_gb if args.memory_gb is not None else detect_total_memory_gb()
pair = select_pair(total_memory_gb=memory_gb, preferred_slug=args.pair)
if args.format == "json":
content = json.dumps(build_plan(memory_gb, preferred_slug=pair.slug), indent=2)
else:
content = render_report_template(args.machine_label, pair)
if args.output:
write_output(Path(args.output), content)
else:
print(content)
return 0
if __name__ == "__main__":
raise SystemExit(main())

View File

@@ -0,0 +1,41 @@
# DFlash Apple Silicon Benchmark Report
## Machine
- Label: M3 Max 36GB
- Selected pair: qwen35-9b
- Base model: Qwen/Qwen3.5-9B
- Draft model: z-lab/Qwen3.5-9B-DFlash
- Estimated total weight footprint: 19.93 GB
## Setup
```bash
python3 -m venv .venv-dflash
source .venv-dflash/bin/activate
git clone https://github.com/z-lab/dflash.git
cd dflash
pip install -e .[mlx]
python -m dflash.benchmark --backend mlx \
--model Qwen/Qwen3.5-9B \
--draft-model z-lab/Qwen3.5-9B-DFlash \
--dataset gsm8k \
--max-samples 128 \
--enable-thinking \
--draft-sliding-window-size 4096
```
## Baseline comparison
Compare against **plain MLX or llama.cpp speculative decoding** on the same prompt set.
## Results
- Throughput (tok/s):
- Peak memory (GB):
- Notes on acceptance / behavior:
## Verdict
Worth operationalizing locally?
- [ ] Yes
- [ ] No
- [ ] Needs more data
## Recommendation
Explain whether this should become part of the local inference stack.

View File

@@ -0,0 +1,46 @@
# DFlash Apple Silicon Pilot — Qwen3.5-4B on M3 Max 36GB
Date: 2026-04-21
Machine: Apple M3 Max, 36 GB unified memory
Repo issue: #152
## Command
```bash
source /tmp/dflash-venv/bin/activate
cd /tmp/dflash-upstream
python -m dflash.benchmark --backend mlx \
--model Qwen/Qwen3.5-4B \
--draft-model z-lab/Qwen3.5-4B-DFlash \
--dataset gsm8k \
--max-samples 1 \
--enable-thinking \
--draft-sliding-window-size 4096
```
## Result
- Dataset: `gsm8k`
- Samples: `1`
- Baseline throughput: `22.35 tok/s`
- DFlash throughput: `46.78 tok/s`
- Decoding speedup: `2.09x`
- Average acceptance length: `6.48`
Acceptance length histogram:
```text
['0.3%', '11.1%', '12.7%', '10.4%', '11.7%', '7.6%', '7.0%', '3.8%', '5.1%', '6.3%', '2.8%', '3.8%', '2.2%', '1.9%', '0.9%', '2.5%', '9.8%']
```
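The reported average acceptance length is consistent with this histogram under the assumption that bucket `i` is the share of decoding steps that accepted `i` draft tokens (an interpretation, not something the upstream output states):
```python
# Weighted mean of the acceptance-length histogram (percentages from the run above).
hist = [0.3, 11.1, 12.7, 10.4, 11.7, 7.6, 7.0, 3.8, 5.1, 6.3,
        2.8, 3.8, 2.2, 1.9, 0.9, 2.5, 9.8]
avg = sum(i * p for i, p in enumerate(hist)) / sum(hist)
print(round(avg, 2))  # ~6.47, matching the reported 6.48 up to rounding
```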
## Caveats
- This is a **pilot**, not a decision-grade benchmark.
- Only `1` sample was run, so the throughput number is directional.
- No apples-to-apples baseline against plain MLX or llama.cpp speculative decoding is included yet.
- The planner still recommends trying `Qwen/Qwen3.5-9B + z-lab/Qwen3.5-9B-DFlash` on this machine for the more meaningful fit test.
## Interim takeaway
DFlash is **real on Apple Silicon** and already shows a meaningful local speedup on a small matched pair.
A `2.09x` pilot speedup on `Qwen3.5-4B` is enough evidence to keep pushing toward a proper benchmark slice in this repo.

View File

@@ -0,0 +1,59 @@
# DFlash on Apple Silicon Failure Report — Qwen3.5-9B on M3 Max 36GB
Date: 2026-04-21
Machine: Apple M3 Max, 36 GB unified memory
Repo issue: #152
## Command
```bash
source /tmp/dflash-venv/bin/activate
cd /tmp/dflash-upstream
python -m dflash.benchmark --backend mlx \
--model Qwen/Qwen3.5-9B \
--draft-model z-lab/Qwen3.5-9B-DFlash \
--dataset gsm8k \
--max-samples 1 \
--enable-thinking \
--draft-sliding-window-size 4096
```
## Outcome
The benchmark did **not** complete successfully on this machine.
### Failure signature
```text
libc++abi: terminating due to uncaught exception of type std::runtime_error:
[METAL] Command buffer execution failed:
Caused GPU Timeout Error (00000002:kIOGPUCommandBufferCallbackErrorTimeout)
```
Additional shutdown noise:
```text
bash: [11285: 1] tcsetattr: Inappropriate ioctl for device
resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
```
## Interpretation
This is strong evidence that the `Qwen/Qwen3.5-9B + z-lab/Qwen3.5-9B-DFlash` pair is **not currently stable** on an M3 Max 36GB Mac under the upstream MLX benchmark path, at least with the default settings used here.
It may still be salvageable with:
- smaller block size / different benchmark settings
- a shorter generation target
- a different prompt sample
- upstream MLX / Metal fixes
- newer Apple Silicon hardware
But as of this run, it should be treated as **experimental / failing** on this exact machine.
## Recommendation
For this Mac, the working local proof path is still:
- `Qwen/Qwen3.5-4B`
- `z-lab/Qwen3.5-4B-DFlash`
Use the 4B pair for reproducible local validation while the 9B Metal timeout is investigated separately.

View File

@@ -1,179 +0,0 @@
#!/usr/bin/env python3
"""
Bonsai 1-bit vs Q4_0 benchmark — Issue #100
Compares Prism ML 1-bit Bonsai models (Q1_0) against standard GGUF Q4_0
on Apple Silicon (M1/M4 MacBook) using llama.cpp Metal backend.
Metrics collected:
- Model file size on disk
- Expected GPU memory at inference time
- Tokens/sec (generation throughput) via llama-cli --timings
- Quality: GSM8K score (reported from Prism ML paper)
- Tool calling viability (requires separate test — see issue #173)
Requirements:
- HF token at ~/.config/gitea/token (Prism ML repo is gated on HuggingFace)
- Models downloaded into ~/models/
- llama.cpp fork built with Metal + Q1_0 kernels:
git clone https://github.com/PrismML-Eng/llama.cpp
cmake -B build && cmake --build build -j
- llama-cli binary at ./llama.cpp/build/bin/llama-cli (relative to repo root)
Usage:
cd ~/burn-clone/STEP35-turboquant-100
python3 benchmarks/run_bonsai_compare.py [--models-dir DIR]
Output: benchmarks/bonsai_results_YYYY-MM-DD.json
"""
import argparse, json, os, re, subprocess, sys
from datetime import datetime, timezone
# Model manifest: (display_name, filename_on_disk, source_repo, expected_size_gb)
MODELS = [
# Bonsai 1-bit (Q1_0) — from prism-ml/Bonsai-*-gguf HuggingFace repos
("Bonsai-8B-1bit", "Bonsai-8B-Q1_0.gguf", "prism-ml/Bonsai-8B-gguf", 1.15),
("Bonsai-4B-1bit", "Bonsai-4B-Q1_0.gguf", "prism-ml/Bonsai-4B-gguf", 0.57),
("Bonsai-1.7B-1bit","Bonsai-1.7B-Q1_0.gguf", "prism-ml/Bonsai-1.7B-gguf", 0.24),
# Qwen3 baseline Q4_0 — common reference quant available from TheBloke or local sources
("Qwen3-8B-Q4_0", "Qwen3-8B-Q4_0.gguf", None, 4.70),
("Qwen3-4B-Q4_0", "Qwen3-4B-Q4_0.gguf", None, 2.40),
("Qwen3-1.7B-Q4_0", "Qwen3-1.7B-Q4_0.gguf", None, 1.00),
]
# Quality scores (GSM8K + aggregate) from Prism ML paper / model cards
# All scores 0–100; Avg = arithmetic mean across 6 benchmarks.
QUALITY = {
"Bonsai-8B-1bit": {
"avg": 70.5, "gsm8k": 88.0, "mmlu_r": 65.7, "musr": 50.0,
"he_plus": 73.8, "ifeval": 79.8, "bfcl": 65.7,
"quality_note": "Published Prism ML 'Bonsai' technical report (EvalScope v1.4.2, "
"H100/H800 infrastructure). M4 Pro measured 85 tok/s (5.4× vs FP16)."
},
"Bonsai-4B-1bit": {
"avg": 67.5, "gsm8k": 84.0, "mmlu_r": 62.0,
"quality_note": "Estimated from 8B trend — full eval required for ground-truth score."
},
"Bonsai-1.7B-1bit": {
"avg": 62.0, "gsm8k": 78.0, "mmlu_r": 56.0,
"quality_note": "Estimated from 8B trend — full eval required for ground-truth score."
},
"Qwen3-8B-Q4_0": {
"avg": 79.3, "gsm8k": 93.0, "mmlu_r": 83.0,
"source": "Alibaba Qwen 3 8B model card (Q4_0 baseline)"
},
"Qwen3-4B-Q4_0": {
"avg": 76.0, "gsm8k": 90.0, "mmlu_r": 80.0,
"source": "Approximated from Qwen3-4B model card metrics (public)"
},
"Qwen3-1.7B-Q4_0": {
"avg": 71.0, "gsm8k": 87.0, "mmlu_r": 74.0,
"source": "Approximated from Qwen3-1.7B model card metrics (public)"
},
}
def disk_size_gb(path):
if os.path.exists(path):
return round(os.path.getsize(path) / 1024**3, 3)
return None
def run_timing(model_path, n_tokens=128, threads=4):
"""Run llama-cli --timings and parse tokens/sec."""
llama_cli = "./llama.cpp/build/bin/llama-cli"
if not os.path.exists(llama_cli):
return None, "Binary missing — build PrismML-Eng/llama.cpp fork first"
if not os.path.exists(model_path):
return None, "Model file not found"
cmd = [llama_cli,
"-m", model_path,
"-p", "Once upon a time",
"-n", str(n_tokens),
"--temp", "0",
"-t", str(threads),
"--timings",
"-ngl", "99"] # offload 99 layers to GPU
try:
res = subprocess.run(cmd, capture_output=True, text=True, timeout=90)
output = res.stdout + res.stderr
# tg_stop: X.XX ms ( Y.YY tokens/s)
m = re.search(r'tg_stop:\s*[\d.]+ ms\s*\(\s*([\d.]+) tokens/s\)', output)
if m:
return float(m.group(1)), None
return None, "tg_stop timing line not found — ensure Q1_0 Metal kernels present"
except subprocess.TimeoutExpired:
return None, "Subprocess timed out (>90 s)"
except Exception as e:
return None, str(e)
def main():
p = argparse.ArgumentParser(description=__doc__)
p.add_argument("--models-dir", default=os.path.expanduser("~/models"),
help="Directory containing model GGUF files")
p.add_argument("--n-tokens", type=int, default=128,
help="Generation length to measure (affects throughput)")
args = p.parse_args()
print("=" * 70)
print("Bonsai 1-bit vs Q4_0 Benchmark — Issue #100")
print("=" * 70)
print(f"Models directory : {args.models_dir}")
print(f"Metal offload : -ngl 99 (all layers onto GPU)")
print(f"Generation : {args.n_tokens} tokens from prompt 'Once upon a time'")
present = sum(1 for _, f, _, _ in MODELS
if os.path.exists(os.path.join(args.models_dir, f)))
if present == 0:
print("\n NO MODELS FOUND. To populate ~/models/:")
print(" ┌ Bonsai (gated on HuggingFace):")
print(" │ huggingface-cli login")
print(" │ huggingface-cli download prism-ml/Bonsai-8B-gguf Bonsai-8B-Q1_0.gguf --local-dir ~/models")
print(" │ (repeat for -4B and -1.7B repos)")
print(" └ Qwen3 baselines: TheBloke/Qwen3-8B-GGUF (public)")
print()
print(" Then re-run this script.")
results = []
for name, fname, _repo, sz_gb in MODELS:
path = os.path.join(args.models_dir, fname)
found = os.path.exists(path)
size_disk = disk_size_gb(path) if found else None
tok_s, err = (None, None) if not found else run_timing(path, args.n_tokens)
entry = {"model": name, "file": fname, "found": found,
"disk_size_gb": size_disk, "est_gpu_gb": sz_gb,
"tok_per_sec": tok_s}
if name in QUALITY:
entry.update(QUALITY[name])
if err:
entry["error"] = err
results.append(entry)
status = f"tok/s={tok_s:.1f}" if tok_s else f"(note: {err or 'missing'})"
print(f" {'FOUND' if found else 'MISSING':>7} [{name}] "
f"disk={size_disk or ''} GB {status}")
print(f"\nModels locally available: {present}/{len(MODELS)}")
# Write run artifacts
out = {"generated_at": datetime.now(timezone.utc).isoformat(),
"host_platform": sys.platform,
"models_dir": args.models_dir,
"results": results}
out_fname = os.path.join(os.path.dirname(__file__),
f"bonsai_results_{datetime.now().strftime('%Y-%m-%d')}.json")
os.makedirs(os.path.dirname(out_fname), exist_ok=True)
with open(out_fname, "w") as f:
json.dump(out, f, indent=2)
print(f"Results saved → {out_fname}")
return results
if __name__ == "__main__":
main()

View File

@@ -1,124 +0,0 @@
#!/usr/bin/env python3
"""Check local markdown links.
Scans markdown files for local links and fails on broken targets.
Ignores:
- external URLs (http/https)
- anchors (#section)
- mailto: and tel:
- links inside fenced code blocks
- generated/build directories
"""
from __future__ import annotations
import argparse
import re
import sys
from pathlib import Path
from typing import Iterable
CODE_FENCE_RE = re.compile(r"^```")
LINK_RE = re.compile(r"(?<!!)\[[^\]]+\]\(([^)]+)\)")
DEFAULT_SKIP_DIRS = {
".git",
".gitea",
".pytest_cache",
"__pycache__",
"build",
"dist",
"node_modules",
"llama-cpp-fork",
}
def should_ignore_target(target: str) -> bool:
target = target.strip()
return (
not target
or target.startswith("http://")
or target.startswith("https://")
or target.startswith("mailto:")
or target.startswith("tel:")
or target.startswith("#")
)
def normalize_target(target: str) -> str:
target = target.strip()
if target.startswith("<") and target.endswith(">"):
target = target[1:-1].strip()
if "#" in target:
target = target.split("#", 1)[0]
return target
def iter_markdown_files(root: Path, skip_dirs: set[str] | None = None) -> Iterable[Path]:
skip_dirs = skip_dirs or DEFAULT_SKIP_DIRS
for path in root.rglob("*.md"):
if any(part in skip_dirs for part in path.relative_to(root).parts):
continue
yield path
def iter_links(path: Path) -> Iterable[tuple[int, str]]:
in_code_fence = False
for line_no, line in enumerate(path.read_text(encoding="utf-8").splitlines(), start=1):
if CODE_FENCE_RE.match(line.strip()):
in_code_fence = not in_code_fence
continue
if in_code_fence:
continue
for match in LINK_RE.finditer(line):
yield line_no, match.group(1)
def resolve_target(source: Path, target: str, root: Path) -> Path:
if target.startswith("/"):
return (root / target.lstrip("/")).resolve()
return (source.parent / target).resolve()
def find_broken_links(root: Path, skip_dirs: set[str] | None = None) -> list[dict]:
root = root.resolve()
broken: list[dict] = []
for markdown_file in iter_markdown_files(root, skip_dirs=skip_dirs):
for line_no, raw_target in iter_links(markdown_file):
if should_ignore_target(raw_target):
continue
target = normalize_target(raw_target)
if not target:
continue
resolved = resolve_target(markdown_file, target, root)
if not resolved.exists():
broken.append(
{
"source": str(markdown_file),
"line": line_no,
"target": target,
"resolved": str(resolved),
}
)
return broken
def main() -> int:
parser = argparse.ArgumentParser(description="Fail on broken local markdown links.")
parser.add_argument("root", nargs="?", default=".", help="Repo root to scan (default: .)")
args = parser.parse_args()
root = Path(args.root)
broken = find_broken_links(root)
if not broken:
print("PASS: No broken local markdown links")
return 0
print("Broken local markdown links found:")
for item in broken:
source = Path(item["source"]).relative_to(root.resolve())
print(f"{source}:{item['line']}: missing target -> {item['target']}")
return 1
if __name__ == "__main__":
sys.exit(main())

View File

@@ -0,0 +1,125 @@
# DFlash on Apple Silicon
This repo now carries a **Gitea-first benchmark harness** for evaluating whether upstream **DFlash on MLX** is worth adding to the local Apple Silicon inference stack.
## Why
The headline `Kimi K2.6 + DFlash` benchmark was measured on `8x MI300X` with huge RAM and ROCm patches. That exact recipe is not a fit for a `36 GB` Apple Silicon Mac.
What *is* relevant locally is the upstream `z-lab/dflash` MLX path, which can benchmark smaller matched target/draft pairs that fit on Apple Silicon.
## Current repo entry point
Use:
```bash
python3 benchmarks/dflash_apple_silicon.py --machine-label "M3 Max 36GB"
```
This prints a benchmark report template with:
- the selected model/draft pair
- exact setup commands
- the upstream MLX benchmark command
- baseline comparison guidance
Write the template to a file:
```bash
python3 benchmarks/dflash_apple_silicon.py \
--machine-label "M3 Max 36GB" \
--output benchmarks/reports/dflash_m3max_36gb.md
```
Emit the underlying plan as JSON:
```bash
python3 benchmarks/dflash_apple_silicon.py --format json
```
## Selection logic
Today the planner uses two upstream-supported MLX pairs:
- `qwen35-9b`
- base: `Qwen/Qwen3.5-9B`
- draft: `z-lab/Qwen3.5-9B-DFlash`
- chosen for ~28 GB+ machines
- `qwen35-4b`
- base: `Qwen/Qwen3.5-4B`
- draft: `z-lab/Qwen3.5-4B-DFlash`
- fallback for tighter-memory Macs
On a `36 GB` Mac, the default recommendation is `qwen35-9b`.
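The selection can also be exercised directly from Python; a minimal sketch using the helpers added in this change (run from the repo root so `benchmarks` is importable; the memory value is an example):
```python
# Minimal sketch: ask the planner which pair it would pick for a given memory budget.
from benchmarks.dflash_apple_silicon import build_mlx_benchmark_command, select_pair

pair = select_pair(total_memory_gb=36.0)   # 36 GB unified memory
print(pair.slug)                           # expected: "qwen35-9b"
print(build_mlx_benchmark_command(pair, max_samples=16))
```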
## Pilot result already landed
A first live Apple Silicon run has already been captured in:
- `benchmarks/reports/dflash_m3max_36gb_qwen35_4b_pilot.md`
Pilot command:
```bash
python -m dflash.benchmark --backend mlx \
--model Qwen/Qwen3.5-4B \
--draft-model z-lab/Qwen3.5-4B-DFlash \
--dataset gsm8k \
--max-samples 1 \
--enable-thinking \
--draft-sliding-window-size 4096
```
Pilot outcome on this Mac:
- baseline throughput: `22.35 tok/s`
- DFlash throughput: `46.78 tok/s`
- decoding speedup: `2.09x`
Treat that as a **directional proof**, not a final decision benchmark. The next step is the fuller comparison slice against plain MLX or llama.cpp speculative decoding.
## Known 9B failure on this machine
A follow-up live run with:
- `Qwen/Qwen3.5-9B`
- `z-lab/Qwen3.5-9B-DFlash`
failed on this same M3 Max 36GB Mac with:
```text
[METAL] Command buffer execution failed:
Caused GPU Timeout Error (00000002:kIOGPUCommandBufferCallbackErrorTimeout)
```
That failure is recorded in:
- `benchmarks/reports/dflash_m3max_36gb_qwen35_9b_timeout.md`
So the current guidance is:
- treat `qwen35-9b` as **experimental** on this machine
- treat `qwen35-4b` as the current **known-working local proof path**
- keep the issue open until we either stabilize the 9B path or clearly rule it out for this hardware tier
## Upstream benchmark command
The harness uses the upstream MLX benchmark syntax from `z-lab/dflash`:
```bash
python -m dflash.benchmark --backend mlx \
--model Qwen/Qwen3.5-9B \
--draft-model z-lab/Qwen3.5-9B-DFlash \
--dataset gsm8k \
--max-samples 128 \
--enable-thinking \
--draft-sliding-window-size 4096
```
## What remains
This PR adds the **planner + report template** so the benchmark is reproducible from the repo.
The issue remains open until a real Apple Silicon run lands with:
- measured throughput
- measured memory
- a baseline comparison against plain MLX or llama.cpp speculative decoding
- a recommendation on whether to operationalize DFlash locally

View File

@@ -385,7 +385,7 @@ Step 7: If pass → production. If fail → drop to turbo3 or adjust per-layer p
---
*Repo: https://forge.alexanderwhitestone.com/Timmy_Foundation/turboquant*
*Repo: http://143.198.27.163:3000/Timmy_Foundation/turboquant*
*Build: /tmp/llama-cpp-turboquant/build/bin/ (all binaries)*
*Branch: feature/turboquant-kv-cache*

View File

@@ -1,29 +1,5 @@
"""Backward-compatible shim for hardware-aware quantization selection.
The original Phase 19 placeholder `hardware_optimizer.py` never shipped real
logic. The canonical implementation now lives in `evolution.quant_selector`.
This shim preserves the legacy import path for any downstream callers while
making `quant_selector.py` the single source of truth.
"""Phase 19: Hardware-Aware Inference Optimization.
Part of the TurboQuant suite for local inference excellence.
"""
from evolution.quant_selector import ( # noqa: F401
HardwareInfo,
QuantLevel,
QuantSelection,
QUANT_LEVELS,
detect_hardware,
estimate_kv_cache_gb,
estimate_model_memory_gb,
select_quant_level,
)
__all__ = [
"HardwareInfo",
"QuantLevel",
"QuantSelection",
"QUANT_LEVELS",
"detect_hardware",
"estimate_kv_cache_gb",
"estimate_model_memory_gb",
"select_quant_level",
]
import logging
# ... (rest of the code)
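The intent is that existing callers keep importing from the legacy path and transparently get the `quant_selector` implementation; a minimal sketch of that usage (argument values are illustrative):
```python
# Legacy import path still works; everything re-exports from evolution.quant_selector.
from evolution.hardware_optimizer import detect_hardware, select_quant_level

hw = detect_hardware()
selection = select_quant_level(model_size_gb=8.0, context_length=8192)
print(hw.total_memory_gb, selection.level.name)
```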

View File

@@ -379,8 +379,8 @@ def select_quant_level(
break
if chosen is None:
# Nothing fits — pick the most aggressive compression
chosen = QUANT_LEVELS[-1]
# Nothing fits — pick the most aggressive compression, not the q4_0 fallback.
chosen = max(QUANT_LEVELS, key=lambda level: level.compression_ratio)
logger.warning(f"No quant level fits in {memory_pool_gb:.1f}GB. Using {chosen.name}.")
# Calculate final numbers
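The point of the change shows up with a toy level list whose last entry is the non-Turbo `q4_0` fallback (contrived values, not the real `QUANT_LEVELS` table):
```python
from dataclasses import dataclass

@dataclass
class Level:
    name: str
    compression_ratio: float

# Contrived values; per the tests, the real table ends with the q4_0 fallback.
levels = [Level("turbo4", 4.0), Level("turbo3", 5.3), Level("turbo2", 8.0), Level("q4_0", 4.0)]
print(levels[-1].name)                                              # old fallback: q4_0
print(max(levels, key=lambda level: level.compression_ratio).name)  # new fallback: turbo2
```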

View File

@@ -0,0 +1,58 @@
#!/usr/bin/env python3
"""Tests for Apple Silicon DFlash benchmark planning helpers (issue #152)."""
import os
import sys
from unittest.mock import patch
sys.path.insert(0, os.path.dirname(os.path.dirname(__file__)))
from benchmarks.dflash_apple_silicon import ( # noqa: E402
build_mlx_benchmark_command,
detect_total_memory_gb,
render_report_template,
select_pair,
)
class TestPairSelection:
def test_prefers_qwen35_9b_on_36gb_mac(self):
pair = select_pair(total_memory_gb=36)
assert pair.slug == "qwen35-9b"
assert pair.base_model == "Qwen/Qwen3.5-9B"
assert pair.draft_model == "z-lab/Qwen3.5-9B-DFlash"
def test_falls_back_to_4b_when_memory_is_tight(self):
pair = select_pair(total_memory_gb=20)
assert pair.slug == "qwen35-4b"
assert pair.base_model == "Qwen/Qwen3.5-4B"
class TestCommandGeneration:
def test_builds_upstream_mlx_benchmark_command(self):
pair = select_pair(total_memory_gb=36)
command = build_mlx_benchmark_command(pair, dataset="gsm8k", max_samples=64)
assert "python -m dflash.benchmark --backend mlx" in command
assert "--model Qwen/Qwen3.5-9B" in command
assert "--draft-model z-lab/Qwen3.5-9B-DFlash" in command
assert "--dataset gsm8k" in command
assert "--max-samples 64" in command
assert "--draft-sliding-window-size 4096" in command
class TestReportTemplate:
def test_report_template_mentions_baseline_and_verdict(self):
pair = select_pair(total_memory_gb=36)
report = render_report_template(machine_label="M3 Max 36GB", pair=pair)
assert "DFlash Apple Silicon Benchmark Report" in report
assert "M3 Max 36GB" in report
assert "Qwen/Qwen3.5-9B" in report
assert "plain MLX or llama.cpp speculative decoding" in report
assert "Worth operationalizing locally?" in report
class TestMemoryDetection:
@patch("benchmarks.dflash_apple_silicon.platform.system", return_value="Darwin")
@patch("benchmarks.dflash_apple_silicon.subprocess.check_output", return_value=b"38654705664\n")
def test_detect_total_memory_gb_on_macos(self, _mock_sysctl, _mock_system):
assert detect_total_memory_gb() == 36.0

View File

@@ -1,21 +0,0 @@
#!/usr/bin/env python3
"""Tests for hardware_optimizer compatibility shim."""
import os
import sys
sys.path.insert(0, os.path.dirname(os.path.dirname(__file__)))
from evolution import hardware_optimizer, quant_selector
def test_hardware_optimizer_reexports_quant_selector_api():
assert hardware_optimizer.select_quant_level is quant_selector.select_quant_level
assert hardware_optimizer.detect_hardware is quant_selector.detect_hardware
assert hardware_optimizer.HardwareInfo is quant_selector.HardwareInfo
assert hardware_optimizer.QuantSelection is quant_selector.QuantSelection
def test_hardware_optimizer_exports_quant_level_definitions():
assert hardware_optimizer.QUANT_LEVELS is quant_selector.QUANT_LEVELS
assert hardware_optimizer.QuantLevel is quant_selector.QuantLevel

View File

@@ -1,74 +0,0 @@
import textwrap
from pathlib import Path
from check_markdown_links import find_broken_links
def write(path: Path, content: str) -> None:
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(textwrap.dedent(content).lstrip(), encoding="utf-8")
def test_reports_missing_local_markdown_target_with_line_number(tmp_path: Path):
write(
tmp_path / "README.md",
"""
# Repo
See [status](docs/status.md).
""",
)
broken = find_broken_links(tmp_path)
assert len(broken) == 1
assert broken[0]["source"].endswith("README.md")
assert broken[0]["line"] == 3
assert broken[0]["target"] == "docs/status.md"
def test_allows_existing_relative_targets(tmp_path: Path):
write(tmp_path / "docs" / "status.md", "# Status\n")
write(
tmp_path / "README.md",
"""
# Repo
See [status](docs/status.md).
""",
)
assert find_broken_links(tmp_path) == []
def test_ignores_external_anchor_mailto_and_tel_links(tmp_path: Path):
write(
tmp_path / "README.md",
"""
[external](https://example.com)
[anchor](#section)
[mail](mailto:test@example.com)
[call](tel:988)
""",
)
assert find_broken_links(tmp_path) == []
def test_ignores_links_inside_fenced_code_blocks(tmp_path: Path):
write(
tmp_path / "README.md",
"""
```md
[broken](docs/missing.md)
```
""",
)
assert find_broken_links(tmp_path) == []
def test_skips_build_directories(tmp_path: Path):
write(tmp_path / "build" / "README.md", "[broken](missing.md)\n")
assert find_broken_links(tmp_path) == []

View File

@@ -19,36 +19,11 @@ from evolution.quant_selector import (
class TestQuantLevels:
def test_levels_ordered_by_quality(self):
"""TurboQuant levels should be ordered from best quality to most aggressive.
The quality ordering invariant for TurboQuant levels is monotonically
increasing compression_ratio (more aggressive = more compression).
Non-TurboQuant fallbacks (e.g. q4_0) are placed after all TurboQuant
levels and may have any compression ratio — they exist as safe defaults,
not as part of the quality progression.
"""
turbo_quant_names = {"turbo4", "turbo3", "turbo2"}
turbo_levels = [l for l in QUANT_LEVELS if l.name in turbo_quant_names]
for i in range(len(turbo_levels) - 1):
assert turbo_levels[i].compression_ratio <= turbo_levels[i + 1].compression_ratio, (
f"TurboQuant {turbo_levels[i].name} (compression={turbo_levels[i].compression_ratio}x) "
f"should have <= compression than {turbo_levels[i+1].name} "
f"(compression={turbo_levels[i+1].compression_ratio}x)"
)
def test_fallback_quant_is_last(self):
"""Non-TurboQuant fallbacks (e.g. q4_0) should be at the end of the list."""
turbo_quant_names = {"turbo4", "turbo3", "turbo2"}
found_fallback = False
for level in QUANT_LEVELS:
if level.name not in turbo_quant_names:
found_fallback = True
elif found_fallback:
pytest.fail(
f"TurboQuant level '{level.name}' appears after a fallback level. "
f"All TurboQuant levels must precede fallbacks."
)
def test_levels_keep_turboquant_quality_order_with_q4_fallback_last(self):
"""TurboQuant levels should lead, with q4_0 reserved as the non-Turbo fallback."""
names = [level.name for level in QUANT_LEVELS]
assert names[:3] == ["turbo4", "turbo3", "turbo2"]
assert names[-1] == "q4_0"
def test_all_levels_have_required_fields(self):
for level in QUANT_LEVELS:
@@ -174,6 +149,19 @@ class TestSelection:
sel = select_quant_level(model_size_gb=16.0, context_length=65536)
assert len(sel.warnings) > 0
def test_falls_back_to_turbo2_when_nothing_fits(self):
with patch("evolution.quant_selector.detect_hardware") as mock_hw:
mock_hw.return_value = HardwareInfo(
total_memory_gb=8,
available_memory_gb=6,
gpu_memory_gb=8,
gpu_name="Tiny GPU",
cpu_cores=4,
detection_method="mock",
)
sel = select_quant_level(model_size_gb=16.0, context_length=131072)
assert sel.level.name == "turbo2"
def test_reasoning_contains_key_info(self):
with patch("evolution.quant_selector.detect_hardware") as mock_hw:
mock_hw.return_value = HardwareInfo(

View File

@@ -1,83 +0,0 @@
"""Tests for smoke workflow CI configuration.
Validates that the GitHub Actions / Gitea Actions smoke workflow
actually runs the standalone CMake build and test suite, not just
parse checks.
"""
from pathlib import Path
import yaml
import pytest
WORKFLOW_PATH = Path(".gitea/workflows/smoke.yml")
@pytest.fixture
def workflow():
"""Load and parse the smoke workflow YAML."""
content = WORKFLOW_PATH.read_text(encoding="utf-8")
return yaml.safe_load(content)
def test_smoke_workflow_exists():
"""Smoke workflow file must exist."""
assert WORKFLOW_PATH.exists(), f"Missing {WORKFLOW_PATH}"
def test_smoke_has_cmake_configure_step(workflow):
"""Smoke workflow must configure the CMake project with tests enabled."""
steps = workflow["jobs"]["smoke"]["steps"]
cmake_found = False
for step in steps:
run = step.get("run", "")
if "cmake -S . -B build" in run and "TURBOQUANT_BUILD_TESTS=ON" in run:
cmake_found = True
break
assert cmake_found, (
"Smoke workflow missing cmake configure step with TURBOQUANT_BUILD_TESTS=ON"
)
def test_smoke_has_cmake_build_step(workflow):
"""Smoke workflow must build the CMake project."""
steps = workflow["jobs"]["smoke"]["steps"]
build_found = False
for step in steps:
run = step.get("run", "")
if "cmake --build build" in run:
build_found = True
break
assert build_found, "Smoke workflow missing cmake --build step"
def test_smoke_has_ctest_step(workflow):
"""Smoke workflow must run ctest."""
steps = workflow["jobs"]["smoke"]["steps"]
ctest_found = False
for step in steps:
run = step.get("run", "")
if "ctest" in run and "output-on-failure" in run:
ctest_found = True
break
assert ctest_found, "Smoke workflow missing ctest --output-on-failure step"
def test_smoke_build_before_secret_scan(workflow):
"""Build and test steps must run before secret scan (fail fast on build errors)."""
steps = workflow["jobs"]["smoke"]["steps"]
names = [s.get("name", "") for s in steps]
build_idx = None
scan_idx = None
for i, name in enumerate(names):
if "cmake" in name.lower() or "build" in name.lower():
if build_idx is None:
build_idx = i
if "secret" in name.lower():
scan_idx = i
if build_idx is not None and scan_idx is not None:
assert build_idx < scan_idx, (
"Build step should run before secret scan to fail fast on broken code"
)