Compare commits

...

12 Commits

Author SHA1 Message Date
ccbcc8ab7b fix(benchmarks): separate quality measurement from efficiency proxy (issue #63)
All checks were successful
Smoke Test / smoke (pull_request) Successful in 27s
- Add --quality flag to run_benchmarks.py that delegates to llama-perplexity
- Clarify token/sec is an efficiency metric, not perplexity
- Ollama cannot provide true logprob-based PPL (no logprob API)
- Quality gate now runs llama-perplexity binary directly when requested

Closes #63
2026-04-26 10:55:40 -04:00
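
A condensed sketch of what this commit's --quality path does, distilled from the run_benchmarks.py diff further down. The flags, default binary path, and regex all come from that diff; the helper name measure_ppl is hypothetical, for illustration only.

import re
import subprocess

def measure_ppl(model, corpus="corpora/wiki.test.raw",
                binary="llama.cpp-fork/build/bin/llama-perplexity",
                kv_type="turbo4", context=2048, threads=4):
    # Hypothetical helper: delegate quality measurement to the llama-perplexity
    # binary, since Ollama exposes no logprob API (issue #63).
    cmd = [binary, "-m", model, "-f", corpus, "-c", str(context),
           "-t", str(threads), "--kv-type", kv_type]
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=3600)
    output = result.stdout + "\n" + result.stderr
    # Parse the PPL figure from the tool's text output, as the diff below does.
    match = re.search(r"perplexity[:\s]+(\d+\.?\d*)", output, re.IGNORECASE)
    return float(match.group(1)) if match else None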
7797b9b4c8 Merge PR #148: docs: replace stale raw-IP forge link with canonical domain (closes #46)
All checks were successful
Smoke Test / smoke (push) Successful in 36s
Merged by automated sweep after diff review and verification. PR #148: docs: replace stale raw-IP forge link with canonical domain (closes #46)
2026-04-22 02:38:47 +00:00
0338cf940a Merge PR #150: ci: build standalone CMake target and run ctest in smoke workflow (#50)
Some checks failed
Smoke Test / smoke (push) Has been cancelled
Merged by automated sweep after diff review and verification. PR #150: ci: build standalone CMake target and run ctest in smoke workflow (#50)
2026-04-22 02:38:43 +00:00
f3f796fa64 Merge PR #142: refactor: consolidate hardware optimizer with quant selector (#92)
Some checks failed
Smoke Test / smoke (push) Has been cancelled
Merged by automated sweep after diff review and verification. PR #142: refactor: consolidate hardware optimizer with quant selector (#92)
2026-04-22 02:38:38 +00:00
6ab98d65f5 Merge PR #147: fix(tests): quant_selector quality-order assertion (#138, #139)
Some checks failed
Smoke Test / smoke (push) Has been cancelled
Merged by automated sweep after diff review and verification. PR #147: fix(tests): quant_selector quality-order assertion (#138, #139)
2026-04-22 02:38:33 +00:00
c4293f0d31 Merge PR #136: ci: add markdown link check to smoke workflow (#48)
Some checks failed
Smoke Test / smoke (push) Has been cancelled
Merged by automated sweep after diff review and verification. PR #136: ci: add markdown link check to smoke workflow (#48)
2026-04-22 02:38:28 +00:00
88a5c48402 ci: build standalone CMake target and run ctest in smoke workflow (#50)
All checks were successful
Smoke Test / smoke (pull_request) Successful in 16s
2026-04-21 11:39:58 +00:00
3ff52f02b2 ci: build standalone CMake target and run ctest in smoke workflow (#50)
2026-04-21 11:39:56 +00:00
8475539070 docs: replace stale raw-IP forge link with canonical domain (closes #46)
All checks were successful
Smoke Test / smoke (pull_request) Successful in 20s
Supersedes PR #134 (blocked by branch protection approval requirement).
Changed http://143.198.27.163:3000/Timmy_Foundation/turboquant
to https://forge.alexanderwhitestone.com/Timmy_Foundation/turboquant
2026-04-21 07:31:09 -04:00
Alexander Whitestone
f0f117cdd3 fix(tests): quant_selector quality-order assertion matches design intent (#138, #139)
All checks were successful
Smoke Test / smoke (pull_request) Successful in 37s
The test `test_levels_ordered_by_quality` asserted strictly descending
`bits_per_channel`, but `q4_0` (4.0 bits) is a non-TurboQuant fallback
placed last regardless of bit width. The design invariant is:

- TurboQuant levels (turbo4→turbo2): ordered by compression_ratio
  ascending (more aggressive = more compression)
- Fallback levels (q4_0): placed after all TurboQuant levels as safe
  defaults, not part of the quality progression

Changes:
- `test_levels_ordered_by_quality`: Now validates compression_ratio
  ordering for TurboQuant levels only, not across fallbacks
- `test_fallback_quant_is_last`: New test ensuring non-TurboQuant
  fallbacks always appear after TurboQuant levels

Closes #138
Closes #139 (duplicate)
2026-04-21 07:25:52 -04:00
Alexander Whitestone
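
A minimal sketch of the ordering invariant this commit encodes, using only names from the commit message and the test diff below; the helper check_level_order is hypothetical, for illustration only.

TURBO_NAMES = {"turbo4", "turbo3", "turbo2"}

def check_level_order(levels):
    # Hypothetical checker mirroring the fixed tests.
    # TurboQuant levels: compression_ratio must be non-decreasing
    # (more aggressive = more compression).
    turbo = [lvl for lvl in levels if lvl.name in TURBO_NAMES]
    for a, b in zip(turbo, turbo[1:]):
        assert a.compression_ratio <= b.compression_ratio
    # Fallbacks such as q4_0 are safe defaults, not part of the quality
    # progression: they may only appear after the last TurboQuant level.
    seen_fallback = False
    for lvl in levels:
        if lvl.name not in TURBO_NAMES:
            seen_fallback = True
        else:
            assert not seen_fallback, f"{lvl.name} appears after a fallback"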
a537511652 refactor: consolidate hardware optimizer with quant selector (#92)
All checks were successful
Smoke Test / smoke (pull_request) Successful in 17s
2026-04-20 20:38:56 -04:00
Alexander Whitestone
cd18bd06be ci: add markdown link check to smoke workflow (#48)
All checks were successful
Smoke Test / smoke (pull_request) Successful in 14s
2026-04-17 01:43:21 -04:00
10 changed files with 481 additions and 23 deletions


@@ -18,7 +18,17 @@ jobs:
           find . -name '*.py' | grep -v llama-cpp-fork | xargs -r python3 -m py_compile
           find . -name '*.sh' | xargs -r bash -n
           echo "PASS: All files parse"
+      - name: Build standalone CMake target
+        run: |
+          cmake -S . -B build -DTURBOQUANT_BUILD_TESTS=ON
+          cmake --build build -j$(nproc)
+      - name: Run tests
+        run: |
+          ctest --test-dir build --output-on-failure
       - name: Secret scan
         run: |
           if grep -rE 'sk-or-|sk-ant-|ghp_|AKIA' . --include='*.yml' --include='*.py' --include='*.sh' 2>/dev/null | grep -v .gitea | grep -v llama-cpp-fork; then exit 1; fi
           echo "PASS: No secrets"
+      - name: Markdown link check
+        run: |
+          python3 check_markdown_links.py


@@ -1,17 +1,26 @@
 #!/usr/bin/env python3
 """
-TurboQuant Benchmarking Suite — Multi-Backend (Issue #29)
+TurboQuant Benchmarking Suite — Multi-Backend (Issue #29, #63)
 Supports Ollama and llama-server backends with KV cache type configuration.
 Measures: TTFT, tokens/sec, latency, peak memory.
+Perplexity (quality) is NOT measured here; tokens/sec is a throughput proxy.
+For actual quality (logprob-based PPL), use the --quality flag, which delegates
+to the llama-perplexity binary, since Ollama lacks logprob support (issue #63).
 Usage:
-    # Ollama (default)
+    # Ollama (efficiency only)
     python3 benchmarks/run_benchmarks.py --backend ollama --model llama3
-    # llama-server with turbo4 KV
+    # llama-server with turbo4 KV + quality gate in one shot
     python3 benchmarks/run_benchmarks.py --backend llama-server \
-        --url http://localhost:11434 --model qwen3.5 --kv-type turbo4
+        --url http://localhost:11434 --model qwen3.5 --kv-type turbo4 --quality
+
+    # Quality gate only (separate tool)
+    python3 benchmarks/run_perplexity.py --model ~/models/qwen3.5-27b.gguf \
+        --llama-cpp ~/turboquant/llama.cpp-fork/build/bin/llama-perplexity \
+        --corpus corpora/wiki.test.raw --context 2048
 """
 import argparse
@@ -108,9 +117,7 @@ def run_llama_server(prompt: str, model: str, url: str, kv_type: str = "f16",
    completion_tokens = usage.get("completion_tokens", 0)
    prompt_tokens = usage.get("prompt_tokens", 0)
    # llama-server includes timing in x_* headers, or we estimate
    if elapsed > 0 and completion_tokens > 0:
        # Subtract estimated prompt eval time (rough)
        tokens_per_sec = completion_tokens / max(elapsed - 0.1, 0.01)
    return {
@@ -128,8 +135,10 @@ def run_benchmark_suite(backend: str, model: str, url: str, kv_type: str,
 def run_benchmark_suite(backend: str, model: str, url: str, kv_type: str,
-                        prompts_file: str, output_file: str, timeout: int = 120):
-    """Run the full benchmark suite."""
+                        prompts_file: str, output_file: str, timeout: int = 120,
+                        measure_quality: bool = False, quality_corpus: str = None,
+                        llama_cpp_bin: str = None, context: int = 2048, threads: int = 4):
+    """Run the full benchmark suite, optionally measuring perplexity in parallel."""
     if not os.path.exists(prompts_file):
         print(f"ERROR: {prompts_file} not found")
         sys.exit(1)
@@ -191,15 +200,76 @@ def run_benchmark_suite(backend: str, model: str, url: str, kv_type: str,
         }
     }
+
+    # Issue #63: Optional quality measurement via llama-perplexity (Ollama lacks logprob)
+    if measure_quality:
+        print("\n" + "="*60)
+        print("Quality measurement requested — invoking llama-perplexity binary...")
+        llama_cpp_bin = llama_cpp_bin or "llama.cpp-fork/build/bin/llama-perplexity"
+        quality_corpus = quality_corpus or "corpora/wiki.test.raw"
+        if not os.path.exists(quality_corpus):
+            print(f"WARNING: quality corpus not found: {quality_corpus}")
+            suite["quality"] = {"perplexity": None, "passed": False, "error": f"Corpus missing: {quality_corpus}"}
+        elif not os.path.exists(llama_cpp_bin):
+            print(f"WARNING: llama-perplexity binary not found: {llama_cpp_bin}")
+            suite["quality"] = {"perplexity": None, "passed": False, "error": f"Binary missing: {llama_cpp_bin}"}
+        else:
+            cmd = [
+                llama_cpp_bin,
+                "-m", model,
+                "-f", quality_corpus,
+                "-c", str(context),
+                "-t", str(threads),
+                "--kv-type", kv_type,
+            ]
+            try:
+                start = time.time()
+                result = subprocess.run(cmd, capture_output=True, text=True, timeout=3600)
+                elapsed = time.time() - start
+                output = result.stdout + "\n" + result.stderr
+                ppl_match = re.search(r"perplexity[:\s]+(\d+\.?\d*)", output, re.IGNORECASE)
+                ppl = float(ppl_match.group(1)) if ppl_match else None
+                token_match = re.search(r"(\d+) tokens", output)
+                tokens = int(token_match.group(1)) if token_match else None
+                ppl_result = {
+                    "kv_type": kv_type,
+                    "perplexity": ppl,
+                    "tokens": tokens,
+                    "elapsed_seconds": round(elapsed, 1),
+                    "exit_code": result.returncode,
+                    "passed": result.returncode == 0,
+                    "output_tail": output.strip()[-500:] if output else "",
+                }
+                suite["quality"] = ppl_result
+                if ppl is not None:
+                    print(f" Perplexity ({kv_type}): {ppl:.4f}")
+                else:
+                    print(" Perplexity: FAILED — could not parse output")
+            except subprocess.TimeoutExpired:
+                suite["quality"] = {"perplexity": None, "passed": False, "error": "Timeout after 3600s"}
+                print(" Perplexity: FAILED — timeout after 3600s")
+            except Exception as e:
+                suite["quality"] = {"perplexity": None, "passed": False, "error": str(e)}
+                print(f" Perplexity: FAILED — {e}")
+        print("="*60)
+
     os.makedirs(os.path.dirname(output_file) or ".", exist_ok=True)
-    with open(output_file, "w") as f:
-        json.dump(suite, f, indent=2)
+    with open(output_file, "w") as fh:
+        json.dump(suite, fh, indent=2)
+
     s = suite["summary"]
     print(f"\n{'='*60}")
     print(f"RESULTS: {s['success']}/{s['total']} success | "
           f"Avg {s['avg_tok_per_sec']:.1f} tok/s | "
           f"Avg {s['avg_latency_s']:.2f}s latency")
+    if "quality" in suite:
+        q = suite["quality"]
+        if q.get("perplexity") is not None:
+            print(f"Quality: PPL = {q['perplexity']:.4f}")
+        else:
+            print(f"Quality: not available — {q.get('error','unknown')}")
     print(f"{'='*60}")
     print(f"Saved to {output_file}")
@@ -207,20 +277,45 @@ def run_benchmark_suite(backend: str, model: str, url: str, kv_type: str,
 def main():
     parser = argparse.ArgumentParser(description="TurboQuant Benchmark Suite")
     parser.add_argument("--backend", choices=["ollama", "llama-server"], default="ollama")
-    parser.add_argument("--model", required=True, help="Model name")
+    parser.add_argument("--model", required=True, help="Model name or path")
     parser.add_argument("--url", default="http://localhost:11434", help="Backend URL")
     parser.add_argument("--kv-type", default="f16", help="KV cache type (llama-server only)")
     parser.add_argument("--prompts", default="benchmarks/prompts.json", help="Prompts file")
     parser.add_argument("--output", default=None, help="Output file (auto-generated if omitted)")
     parser.add_argument("--timeout", type=int, default=120, help="Per-prompt timeout (s)")
+
+    # Issue #63: Quality measurement (Ollama lacks logprob → use llama-perplexity binary)
+    parser.add_argument("--quality", action="store_true", default=False,
+                        help="Also run quality measurement via llama-perplexity binary")
+    parser.add_argument("--llama-cpp", default="llama.cpp-fork/build/bin/llama-perplexity",
+                        help="Path to llama-perplexity binary")
+    parser.add_argument("--quality-corpus", default="corpora/wiki.test.raw",
+                        help="Test corpus for perplexity measurement")
+    parser.add_argument("--context", type=int, default=2048,
+                        help="Context length for quality measurement")
+    parser.add_argument("--threads", type=int, default=4,
+                        help="Thread count for quality measurement")
     args = parser.parse_args()
     if args.output is None:
         ts = int(time.time())
         args.output = f"benchmarks/results_{args.backend}_{args.kv_type}_{ts}.json"
-    run_benchmark_suite(args.backend, args.model, args.url, args.kv_type,
-                        args.prompts, args.output, args.timeout)
+    run_benchmark_suite(
+        backend=args.backend,
+        model=args.model,
+        url=args.url,
+        kv_type=args.kv_type,
+        prompts_file=args.prompts,
+        output_file=args.output,
+        timeout=args.timeout,
+        measure_quality=args.quality,
+        quality_corpus=args.quality_corpus,
+        llama_cpp_bin=args.llama_cpp,
+        context=args.context,
+        threads=args.threads,
+    )
 if __name__ == "__main__":


@@ -1,8 +1,9 @@
 #!/usr/bin/env python3
 """
-TurboQuant Perplexity Quality Gate (Issue #21)
+TurboQuant Perplexity Quality Gate (Issues #21, #63)
 Compares text generation quality between f16 KV and turbo4 KV cache
 configurations using llama.cpp's perplexity tool on the wikitext-2 corpus.
+Measures true perplexity via llama-perplexity binary (logprob-based).
+Ollama cannot provide perplexity due to missing logprob API (issue #63).
 Usage:

check_markdown_links.py (new file, 124 lines)

@@ -0,0 +1,124 @@
#!/usr/bin/env python3
"""Check local markdown links.

Scans markdown files for local links and fails on broken targets.
Ignores:
- external URLs (http/https)
- anchors (#section)
- mailto: and tel:
- links inside fenced code blocks
- generated/build directories
"""
from __future__ import annotations

import argparse
import re
import sys
from pathlib import Path
from typing import Iterable

CODE_FENCE_RE = re.compile(r"^```")
LINK_RE = re.compile(r"(?<!!)\[[^\]]+\]\(([^)]+)\)")

DEFAULT_SKIP_DIRS = {
    ".git",
    ".gitea",
    ".pytest_cache",
    "__pycache__",
    "build",
    "dist",
    "node_modules",
    "llama-cpp-fork",
}


def should_ignore_target(target: str) -> bool:
    target = target.strip()
    return (
        not target
        or target.startswith("http://")
        or target.startswith("https://")
        or target.startswith("mailto:")
        or target.startswith("tel:")
        or target.startswith("#")
    )


def normalize_target(target: str) -> str:
    target = target.strip()
    if target.startswith("<") and target.endswith(">"):
        target = target[1:-1].strip()
    if "#" in target:
        target = target.split("#", 1)[0]
    return target


def iter_markdown_files(root: Path, skip_dirs: set[str] | None = None) -> Iterable[Path]:
    skip_dirs = skip_dirs or DEFAULT_SKIP_DIRS
    for path in root.rglob("*.md"):
        if any(part in skip_dirs for part in path.relative_to(root).parts):
            continue
        yield path


def iter_links(path: Path) -> Iterable[tuple[int, str]]:
    in_code_fence = False
    for line_no, line in enumerate(path.read_text(encoding="utf-8").splitlines(), start=1):
        if CODE_FENCE_RE.match(line.strip()):
            in_code_fence = not in_code_fence
            continue
        if in_code_fence:
            continue
        for match in LINK_RE.finditer(line):
            yield line_no, match.group(1)


def resolve_target(source: Path, target: str, root: Path) -> Path:
    if target.startswith("/"):
        return (root / target.lstrip("/")).resolve()
    return (source.parent / target).resolve()


def find_broken_links(root: Path, skip_dirs: set[str] | None = None) -> list[dict]:
    root = root.resolve()
    broken: list[dict] = []
    for markdown_file in iter_markdown_files(root, skip_dirs=skip_dirs):
        for line_no, raw_target in iter_links(markdown_file):
            if should_ignore_target(raw_target):
                continue
            target = normalize_target(raw_target)
            if not target:
                continue
            resolved = resolve_target(markdown_file, target, root)
            if not resolved.exists():
                broken.append(
                    {
                        "source": str(markdown_file),
                        "line": line_no,
                        "target": target,
                        "resolved": str(resolved),
                    }
                )
    return broken


def main() -> int:
    parser = argparse.ArgumentParser(description="Fail on broken local markdown links.")
    parser.add_argument("root", nargs="?", default=".", help="Repo root to scan (default: .)")
    args = parser.parse_args()
    root = Path(args.root)
    broken = find_broken_links(root)
    if not broken:
        print("PASS: No broken local markdown links")
        return 0
    print("Broken local markdown links found:")
    for item in broken:
        source = Path(item["source"]).relative_to(root.resolve())
        print(f"{source}:{item['line']}: missing target -> {item['target']}")
    return 1


if __name__ == "__main__":
    sys.exit(main())
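
Beyond the CLI entry point the smoke workflow invokes (python3 check_markdown_links.py), the checker can also be used programmatically; a small sketch against the find_broken_links return shape shown above (the scan root "." is illustrative):

from pathlib import Path
from check_markdown_links import find_broken_links

# Each entry is a dict with "source", "line", "target", and "resolved" keys.
for item in find_broken_links(Path(".")):
    print(f"{item['source']}:{item['line']}: missing -> {item['target']}")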


@@ -385,7 +385,7 @@ Step 7: If pass → production. If fail → drop to turbo3 or adjust per-layer p
 ---
-*Repo: http://143.198.27.163:3000/Timmy_Foundation/turboquant*
+*Repo: https://forge.alexanderwhitestone.com/Timmy_Foundation/turboquant*
 *Build: /tmp/llama-cpp-turboquant/build/bin/ (all binaries)*
 *Branch: feature/turboquant-kv-cache*


@@ -1,5 +1,29 @@
-"""Phase 19: Hardware-Aware Inference Optimization.
-Part of the TurboQuant suite for local inference excellence.
+"""Backward-compatible shim for hardware-aware quantization selection.
+
+The original Phase 19 placeholder `hardware_optimizer.py` never shipped real
+logic. The canonical implementation now lives in `evolution.quant_selector`.
+This shim preserves the legacy import path for any downstream callers while
+making `quant_selector.py` the single source of truth.
 """
-import logging
-# ... (rest of the code)
+from evolution.quant_selector import (  # noqa: F401
+    HardwareInfo,
+    QuantLevel,
+    QuantSelection,
+    QUANT_LEVELS,
+    detect_hardware,
+    estimate_kv_cache_gb,
+    estimate_model_memory_gb,
+    select_quant_level,
+)
+
+__all__ = [
+    "HardwareInfo",
+    "QuantLevel",
+    "QuantSelection",
+    "QUANT_LEVELS",
+    "detect_hardware",
+    "estimate_kv_cache_gb",
+    "estimate_model_memory_gb",
+    "select_quant_level",
+]


@@ -0,0 +1,21 @@
#!/usr/bin/env python3
"""Tests for hardware_optimizer compatibility shim."""
import os
import sys

sys.path.insert(0, os.path.dirname(os.path.dirname(__file__)))

from evolution import hardware_optimizer, quant_selector


def test_hardware_optimizer_reexports_quant_selector_api():
    assert hardware_optimizer.select_quant_level is quant_selector.select_quant_level
    assert hardware_optimizer.detect_hardware is quant_selector.detect_hardware
    assert hardware_optimizer.HardwareInfo is quant_selector.HardwareInfo
    assert hardware_optimizer.QuantSelection is quant_selector.QuantSelection


def test_hardware_optimizer_exports_quant_level_definitions():
    assert hardware_optimizer.QUANT_LEVELS is quant_selector.QUANT_LEVELS
    assert hardware_optimizer.QuantLevel is quant_selector.QuantLevel


@@ -0,0 +1,74 @@
import textwrap
from pathlib import Path

from check_markdown_links import find_broken_links


def write(path: Path, content: str) -> None:
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(textwrap.dedent(content).lstrip(), encoding="utf-8")


def test_reports_missing_local_markdown_target_with_line_number(tmp_path: Path):
    write(
        tmp_path / "README.md",
        """
        # Repo

        See [status](docs/status.md).
        """,
    )
    broken = find_broken_links(tmp_path)
    assert len(broken) == 1
    assert broken[0]["source"].endswith("README.md")
    assert broken[0]["line"] == 3
    assert broken[0]["target"] == "docs/status.md"


def test_allows_existing_relative_targets(tmp_path: Path):
    write(tmp_path / "docs" / "status.md", "# Status\n")
    write(
        tmp_path / "README.md",
        """
        # Repo

        See [status](docs/status.md).
        """,
    )
    assert find_broken_links(tmp_path) == []


def test_ignores_external_anchor_mailto_and_tel_links(tmp_path: Path):
    write(
        tmp_path / "README.md",
        """
        [external](https://example.com)
        [anchor](#section)
        [mail](mailto:test@example.com)
        [call](tel:988)
        """,
    )
    assert find_broken_links(tmp_path) == []


def test_ignores_links_inside_fenced_code_blocks(tmp_path: Path):
    write(
        tmp_path / "README.md",
        """
        ```md
        [broken](docs/missing.md)
        ```
        """,
    )
    assert find_broken_links(tmp_path) == []


def test_skips_build_directories(tmp_path: Path):
    write(tmp_path / "build" / "README.md", "[broken](missing.md)\n")
    assert find_broken_links(tmp_path) == []


@@ -20,9 +20,35 @@ from evolution.quant_selector import (
 class TestQuantLevels:
     def test_levels_ordered_by_quality(self):
-        """Levels should be ordered from best quality to most aggressive."""
-        for i in range(len(QUANT_LEVELS) - 1):
-            assert QUANT_LEVELS[i].bits_per_channel > QUANT_LEVELS[i + 1].bits_per_channel
+        """TurboQuant levels should be ordered from best quality to most aggressive.
+
+        The quality ordering invariant for TurboQuant levels is monotonically
+        increasing compression_ratio (more aggressive = more compression).
+        Non-TurboQuant fallbacks (e.g. q4_0) are placed after all TurboQuant
+        levels and may have any compression ratio — they exist as safe defaults,
+        not as part of the quality progression.
+        """
+        turbo_quant_names = {"turbo4", "turbo3", "turbo2"}
+        turbo_levels = [l for l in QUANT_LEVELS if l.name in turbo_quant_names]
+        for i in range(len(turbo_levels) - 1):
+            assert turbo_levels[i].compression_ratio <= turbo_levels[i + 1].compression_ratio, (
+                f"TurboQuant {turbo_levels[i].name} (compression={turbo_levels[i].compression_ratio}x) "
+                f"should have <= compression than {turbo_levels[i+1].name} "
+                f"(compression={turbo_levels[i+1].compression_ratio}x)"
+            )
+
+    def test_fallback_quant_is_last(self):
+        """Non-TurboQuant fallbacks (e.g. q4_0) should be at the end of the list."""
+        turbo_quant_names = {"turbo4", "turbo3", "turbo2"}
+        found_fallback = False
+        for level in QUANT_LEVELS:
+            if level.name not in turbo_quant_names:
+                found_fallback = True
+            elif found_fallback:
+                pytest.fail(
+                    f"TurboQuant level '{level.name}' appears after a fallback level. "
+                    f"All TurboQuant levels must precede fallbacks."
+                )
+
     def test_all_levels_have_required_fields(self):
         for level in QUANT_LEVELS:

@@ -0,0 +1,83 @@
"""Tests for smoke workflow CI configuration.
Validates that the GitHub Actions / Gitea Actions smoke workflow
actually runs the standalone CMake build and test suite, not just
parse checks.
"""
from pathlib import Path
import yaml
import pytest
WORKFLOW_PATH = Path(".gitea/workflows/smoke.yml")
@pytest.fixture
def workflow():
"""Load and parse the smoke workflow YAML."""
content = WORKFLOW_PATH.read_text(encoding="utf-8")
return yaml.safe_load(content)
def test_smoke_workflow_exists():
"""Smoke workflow file must exist."""
assert WORKFLOW_PATH.exists(), f"Missing {WORKFLOW_PATH}"
def test_smoke_has_cmake_configure_step(workflow):
"""Smoke workflow must configure the CMake project with tests enabled."""
steps = workflow["jobs"]["smoke"]["steps"]
cmake_found = False
for step in steps:
run = step.get("run", "")
if "cmake -S . -B build" in run and "TURBOQUANT_BUILD_TESTS=ON" in run:
cmake_found = True
break
assert cmake_found, (
"Smoke workflow missing cmake configure step with TURBOQUANT_BUILD_TESTS=ON"
)
def test_smoke_has_cmake_build_step(workflow):
"""Smoke workflow must build the CMake project."""
steps = workflow["jobs"]["smoke"]["steps"]
build_found = False
for step in steps:
run = step.get("run", "")
if "cmake --build build" in run:
build_found = True
break
assert build_found, "Smoke workflow missing cmake --build step"
def test_smoke_has_ctest_step(workflow):
"""Smoke workflow must run ctest."""
steps = workflow["jobs"]["smoke"]["steps"]
ctest_found = False
for step in steps:
run = step.get("run", "")
if "ctest" in run and "output-on-failure" in run:
ctest_found = True
break
assert ctest_found, "Smoke workflow missing ctest --output-on-failure step"
def test_smoke_build_before_secret_scan(workflow):
"""Build and test steps must run before secret scan (fail fast on build errors)."""
steps = workflow["jobs"]["smoke"]["steps"]
names = [s.get("name", "") for s in steps]
build_idx = None
scan_idx = None
for i, name in enumerate(names):
if "cmake" in name.lower() or "build" in name.lower():
if build_idx is None:
build_idx = i
if "secret" in name.lower():
scan_idx = i
if build_idx is not None and scan_idx is not None:
assert build_idx < scan_idx, (
"Build step should run before secret scan to fail fast on broken code"
)