Compare commits


13 Commits

Author SHA1 Message Date
Timmy (Step35)
d7cfc1db2c chore: rename regression test file to pytest pattern test_*
Some checks failed
Smoke Test / smoke (pull_request) Failing after 12s
2026-04-29 00:15:19 -04:00
Timmy (Step35)
2fca513e26 test: add tool call regression suite with CI gate (issue #96)
All checks were successful
Smoke Test / smoke (pull_request) Successful in 11s
Adds comprehensive regression test suite for TurboQuant-compressed models
to verify hermes tool calling functionality remains intact after quantization.

- New test: tests/tool_call_regression.py
  * Schema contract tests for 5 core tools (read_file, web_search,
    terminal, execute_code, delegate_task)
  * Parallel tool calling validation
  * Profile configuration validation (TurboQuant settings, server flags)
  * Live integration tests (skipped unless TURBOQUANT_SERVER_URL set)
  * Results matrix generator (benchmarks/tool-call-regression.md)
  * Enforces 95% accuracy threshold via pytest assertion (see the gate sketch after this message)

- New results matrix: benchmarks/tool-call-regression.md
  * Markdown table logging model/preset/accuracy/per-tool results
  * Auto-updates when tests run with --generate-matrix

- CI gate: .gitea/workflows/smoke.yml
  * Runs tool call regression suite on every push/PR
  * Live tests will fail pipeline if accuracy drops below 95%

Closes #96
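
For illustration, a minimal sketch of the gate this commit adds, mirroring PASS_THRESHOLD and the per-check RESULTS bookkeeping in tests/tool_call_regression.py; the outcome values below are hypothetical placeholders, not measured results.

# Minimal sketch of the accuracy gate (mirrors tests/tool_call_regression.py).
# The per-check outcomes are hypothetical, not real results.
PASS_THRESHOLD = 0.95
results = {
    "read_file": True, "web_search": True, "terminal": True,
    "execute_code": True, "delegate_task": True, "parallel": True,
}
accuracy = sum(results.values()) / len(results)
assert accuracy >= PASS_THRESHOLD, f"Accuracy {accuracy*100:.1f}% below {PASS_THRESHOLD*100:.0f}% gate"
# With six tracked checks, a single failure already drops accuracy to ~83%,
# so the 95% threshold effectively requires every check to pass.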
2026-04-29 00:13:35 -04:00
7797b9b4c8 Merge PR #148: docs: replace stale raw-IP forge link with canonical domain (closes #46)
All checks were successful
Smoke Test / smoke (push) Successful in 36s
Merged by automated sweep after diff review and verification. PR #148: docs: replace stale raw-IP forge link with canonical domain (closes #46)
2026-04-22 02:38:47 +00:00
0338cf940a Merge PR #150: ci: build standalone CMake target and run ctest in smoke workflow (#50)
Some checks failed
Smoke Test / smoke (push) Has been cancelled
Merged by automated sweep after diff review and verification. PR #150: ci: build standalone CMake target and run ctest in smoke workflow (#50)
2026-04-22 02:38:43 +00:00
f3f796fa64 Merge PR #142: refactor: consolidate hardware optimizer with quant selector (#92)
Some checks failed
Smoke Test / smoke (push) Has been cancelled
Merged by automated sweep after diff review and verification. PR #142: refactor: consolidate hardware optimizer with quant selector (#92)
2026-04-22 02:38:38 +00:00
6ab98d65f5 Merge PR #147: fix(tests): quant_selector quality-order assertion (#138, #139)
Some checks failed
Smoke Test / smoke (push) Has been cancelled
Merged by automated sweep after diff review and verification. PR #147: fix(tests): quant_selector quality-order assertion (#138, #139)
2026-04-22 02:38:33 +00:00
c4293f0d31 Merge PR #136: ci: add markdown link check to smoke workflow (#48)
Some checks failed
Smoke Test / smoke (push) Has been cancelled
Merged by automated sweep after diff review and verification. PR #136: ci: add markdown link check to smoke workflow (#48)
2026-04-22 02:38:28 +00:00
88a5c48402 ci: build standalone CMake target and run ctest in smoke workflow (#50)
All checks were successful
Smoke Test / smoke (pull_request) Successful in 16s
2026-04-21 11:39:58 +00:00
3ff52f02b2 ci: build standalone CMake target and run ctest in smoke workflow (#50)
2026-04-21 11:39:56 +00:00
8475539070 docs: replace stale raw-IP forge link with canonical domain (closes #46)
All checks were successful
Smoke Test / smoke (pull_request) Successful in 20s
Supersedes PR #134 (blocked by branch protection approval requirement).
Changed http://143.198.27.163:3000/Timmy_Foundation/turboquant
to https://forge.alexanderwhitestone.com/Timmy_Foundation/turboquant
2026-04-21 07:31:09 -04:00
Alexander Whitestone
f0f117cdd3 fix(tests): quant_selector quality-order assertion matches design intent (#138, #139)
All checks were successful
Smoke Test / smoke (pull_request) Successful in 37s
The test `test_levels_ordered_by_quality` asserted strictly descending
`bits_per_channel`, but `q4_0` (4.0 bits) is a non-TurboQuant fallback
placed last regardless of bit width. The design invariant is:

- TurboQuant levels (turbo4→turbo2): ordered by compression_ratio
  ascending (more aggressive = more compression)
- Fallback levels (q4_0): placed after all TurboQuant levels as safe
  defaults, not part of the quality progression (see the sketch after this message)

Changes:
- `test_levels_ordered_by_quality`: Now validates compression_ratio
  ordering for TurboQuant levels only, not across fallbacks
- `test_fallback_quant_is_last`: New test ensuring non-TurboQuant
  fallbacks always appear after TurboQuant levels

Closes #138
Closes #139 (duplicate)
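
A minimal sketch of the invariant described above; QuantLevel, QUANT_LEVELS, and the field names come from evolution.quant_selector, but the bits_per_channel and compression_ratio values here are hypothetical stand-ins, not the real table.

from dataclasses import dataclass

@dataclass
class QuantLevel:  # stand-in for evolution.quant_selector.QuantLevel
    name: str
    bits_per_channel: float
    compression_ratio: float

# TurboQuant levels first, ordered by ascending compression_ratio
# (more aggressive = more compression); the q4_0 fallback is appended
# last as a safe default, outside the quality progression.
QUANT_LEVELS = [
    QuantLevel("turbo4", 4.0, 4.0),
    QuantLevel("turbo3", 3.0, 5.3),
    QuantLevel("turbo2", 2.0, 8.0),
    QuantLevel("q4_0", 4.0, 4.0),  # fallback: its 4.0-bit width is why a strict bits_per_channel ordering fails
]

turbo = [lvl for lvl in QUANT_LEVELS if lvl.name.startswith("turbo")]
assert all(a.compression_ratio <= b.compression_ratio for a, b in zip(turbo, turbo[1:]))
assert QUANT_LEVELS[-1].name == "q4_0"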
2026-04-21 07:25:52 -04:00
Alexander Whitestone
a537511652 refactor: consolidate hardware optimizer with quant selector (#92)
All checks were successful
Smoke Test / smoke (pull_request) Successful in 17s
2026-04-20 20:38:56 -04:00
Alexander Whitestone
cd18bd06be ci: add markdown link check to smoke workflow (#48)
All checks were successful
Smoke Test / smoke (pull_request) Successful in 14s
2026-04-17 01:43:21 -04:00
12 changed files with 602 additions and 288 deletions


@@ -18,7 +18,21 @@ jobs:
find . -name '*.py' | grep -v llama-cpp-fork | xargs -r python3 -m py_compile
find . -name '*.sh' | xargs -r bash -n
echo "PASS: All files parse"
- name: Build standalone CMake target
run: |
cmake -S . -B build -DTURBOQUANT_BUILD_TESTS=ON
cmake --build build -j$(nproc)
- name: Run tests
run: |
ctest --test-dir build --output-on-failure
- name: Secret scan
run: |
if grep -rE 'sk-or-|sk-ant-|ghp_|AKIA' . --include='*.yml' --include='*.py' --include='*.sh' 2>/dev/null | grep -v .gitea | grep -v llama-cpp-fork; then exit 1; fi
echo "PASS: No secrets"
- name: Tool call regression suite (issue #96)
run: |
python3 -m pip install -q pytest pyyaml requests
pytest tests/tool_call_regression.py -v --tb=short
- name: Markdown link check
run: |
python3 check_markdown_links.py


@@ -0,0 +1,2 @@
| Timestamp | Model | Preset | Accuracy | read_file | web_search | terminal | execute_code | delegate_task | Parallel |
|-----------|-------|--------|----------|-----------|------------|----------|--------------|---------------|----------|

check_markdown_links.py Normal file

@@ -0,0 +1,124 @@
#!/usr/bin/env python3
"""Check local markdown links.
Scans markdown files for local links and fails on broken targets.
Ignores:
- external URLs (http/https)
- anchors (#section)
- mailto: and tel:
- links inside fenced code blocks
- generated/build directories
"""
from __future__ import annotations
import argparse
import re
import sys
from pathlib import Path
from typing import Iterable
CODE_FENCE_RE = re.compile(r"^```")
LINK_RE = re.compile(r"(?<!!)\[[^\]]+\]\(([^)]+)\)")
DEFAULT_SKIP_DIRS = {
".git",
".gitea",
".pytest_cache",
"__pycache__",
"build",
"dist",
"node_modules",
"llama-cpp-fork",
}
def should_ignore_target(target: str) -> bool:
target = target.strip()
return (
not target
or target.startswith("http://")
or target.startswith("https://")
or target.startswith("mailto:")
or target.startswith("tel:")
or target.startswith("#")
)
def normalize_target(target: str) -> str:
target = target.strip()
if target.startswith("<") and target.endswith(">"):
target = target[1:-1].strip()
if "#" in target:
target = target.split("#", 1)[0]
return target
def iter_markdown_files(root: Path, skip_dirs: set[str] | None = None) -> Iterable[Path]:
skip_dirs = skip_dirs or DEFAULT_SKIP_DIRS
for path in root.rglob("*.md"):
if any(part in skip_dirs for part in path.relative_to(root).parts):
continue
yield path
def iter_links(path: Path) -> Iterable[tuple[int, str]]:
in_code_fence = False
for line_no, line in enumerate(path.read_text(encoding="utf-8").splitlines(), start=1):
if CODE_FENCE_RE.match(line.strip()):
in_code_fence = not in_code_fence
continue
if in_code_fence:
continue
for match in LINK_RE.finditer(line):
yield line_no, match.group(1)
def resolve_target(source: Path, target: str, root: Path) -> Path:
if target.startswith("/"):
return (root / target.lstrip("/")).resolve()
return (source.parent / target).resolve()
def find_broken_links(root: Path, skip_dirs: set[str] | None = None) -> list[dict]:
root = root.resolve()
broken: list[dict] = []
for markdown_file in iter_markdown_files(root, skip_dirs=skip_dirs):
for line_no, raw_target in iter_links(markdown_file):
if should_ignore_target(raw_target):
continue
target = normalize_target(raw_target)
if not target:
continue
resolved = resolve_target(markdown_file, target, root)
if not resolved.exists():
broken.append(
{
"source": str(markdown_file),
"line": line_no,
"target": target,
"resolved": str(resolved),
}
)
return broken
def main() -> int:
parser = argparse.ArgumentParser(description="Fail on broken local markdown links.")
parser.add_argument("root", nargs="?", default=".", help="Repo root to scan (default: .)")
args = parser.parse_args()
root = Path(args.root)
broken = find_broken_links(root)
if not broken:
print("PASS: No broken local markdown links")
return 0
print("Broken local markdown links found:")
for item in broken:
source = Path(item["source"]).relative_to(root.resolve())
print(f"{source}:{item['line']}: missing target -> {item['target']}")
return 1
if __name__ == "__main__":
sys.exit(main())
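
Besides the CLI entry point above, the checker can be called programmatically; a minimal sketch, assuming it is run from the repo root where check_markdown_links.py lives:

from pathlib import Path
from check_markdown_links import find_broken_links

# Each entry reports the source file, line number, and missing link target.
for item in find_broken_links(Path(".")):
    print(f"{item['source']}:{item['line']}: missing target -> {item['target']}")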


@@ -385,7 +385,7 @@ Step 7: If pass → production. If fail → drop to turbo3 or adjust per-layer p
---
*Repo: http://143.198.27.163:3000/Timmy_Foundation/turboquant*
*Repo: https://forge.alexanderwhitestone.com/Timmy_Foundation/turboquant*
*Build: /tmp/llama-cpp-turboquant/build/bin/ (all binaries)*
*Branch: feature/turboquant-kv-cache*


@@ -1,5 +1,29 @@
"""Phase 19: Hardware-Aware Inference Optimization.
Part of the TurboQuant suite for local inference excellence.
"""Backward-compatible shim for hardware-aware quantization selection.
The original Phase 19 placeholder `hardware_optimizer.py` never shipped real
logic. The canonical implementation now lives in `evolution.quant_selector`.
This shim preserves the legacy import path for any downstream callers while
making `quant_selector.py` the single source of truth.
"""
import logging
# ... (rest of the code)
from evolution.quant_selector import ( # noqa: F401
HardwareInfo,
QuantLevel,
QuantSelection,
QUANT_LEVELS,
detect_hardware,
estimate_kv_cache_gb,
estimate_model_memory_gb,
select_quant_level,
)
__all__ = [
"HardwareInfo",
"QuantLevel",
"QuantSelection",
"QUANT_LEVELS",
"detect_hardware",
"estimate_kv_cache_gb",
"estimate_model_memory_gb",
"select_quant_level",
]
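
In practice the shim means both import paths hand back the same objects; a minimal sketch, assuming the evolution package is on sys.path (the identity checks mirror tests/test_hardware_optimizer_shim.py further down this diff):

from evolution import hardware_optimizer  # legacy import path (shim)
from evolution import quant_selector      # canonical implementation

# The shim only re-exports, so legacy call sites keep resolving to quant_selector.
assert hardware_optimizer.select_quant_level is quant_selector.select_quant_level
assert hardware_optimizer.QUANT_LEVELS is quant_selector.QUANT_LEVELS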


@@ -1,85 +1,3 @@
"""Pytest configuration for turboquant."""
import os
import sys
import pytest
from pathlib import Path
import sys, os
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
sys.path.insert(0, str(Path(__file__).resolve().parents[1]))
@pytest.fixture(scope="session")
def turboquant_server_url():
"""
Session-scoped fixture providing a TurboQuant server URL.
If TURBOQUANT_SERVER_URL is set, uses that directly.
Otherwise, auto-starts a llama-server with TurboQuant flags.
Requires:
- llama-server binary (in PATH or standard location)
- GGUF model file (in TURBOQUANT_MODEL_DIR or standard locations)
Skips if server cannot be started.
"""
# If URL already provided, use it
if os.environ.get("TURBOQUANT_SERVER_URL"):
yield os.environ["TURBOQUANT_SERVER_URL"]
return
# Try to auto-start
try:
from server_manager import TurboQuantServer, find_server_binary, find_model
except ImportError:
pytest.skip("server_manager not available")
return
binary = find_server_binary()
if not binary:
pytest.skip("llama-server binary not found — install llama-cpp-turboquant")
return
model = find_model()
if not model:
pytest.skip("No GGUF model found — set TURBOQUANT_MODEL_DIR or place model in ~/models")
return
port = int(os.environ.get("TURBOQUANT_TEST_PORT", "18081"))
kv_type = os.environ.get("TURBOQUANT_KV_TYPE", "turbo4")
ctx_size = int(os.environ.get("TURBOQUANT_CTX_SIZE", "8192"))
timeout = float(os.environ.get("TURBOQUANT_STARTUP_TIMEOUT", "60"))
server = TurboQuantServer(
model_path=model,
port=port,
kv_type=kv_type,
context_size=ctx_size,
server_binary=binary,
timeout=timeout,
)
try:
url = server.start()
yield url
except Exception as e:
pytest.skip(f"Could not start TurboQuant server: {e}")
finally:
server.stop()
@pytest.fixture(scope="session")
def turboquant_model_name(turboquant_server_url):
"""Get the model name from the running server."""
import json
import urllib.request
try:
req = urllib.request.Request(f"{turboquant_server_url}/v1/models")
resp = urllib.request.urlopen(req, timeout=10)
data = json.loads(resp.read())
models = data.get("data", [])
if models:
return models[0].get("id", "unknown")
except Exception:
pass
return "gemma-4"


@@ -1,197 +0,0 @@
#!/usr/bin/env python3
"""
TurboQuant Server Manager
Manages llama-server lifecycle for integration tests:
- Start server with TurboQuant flags
- Wait for health check
- Stop server on teardown
Usage:
from tests.server_manager import TurboQuantServer
with TurboQuantServer(model_path="/path/to/model.gguf") as server:
url = server.url # e.g. http://localhost:8081
# Run tests against server
"""
import json
import os
import signal
import subprocess
import sys
import time
import urllib.request
import urllib.error
from pathlib import Path
from typing import Optional
class TurboQuantServer:
"""Context manager for llama-server with TurboQuant."""
def __init__(
self,
model_path: str,
port: int = 8081,
kv_type: str = "turbo4",
context_size: int = 32768,
server_binary: Optional[str] = None,
timeout: float = 60.0,
host: str = "127.0.0.1",
):
self.model_path = model_path
self.port = port
self.kv_type = kv_type
self.context_size = context_size
self.timeout = timeout
self.host = host
# Find server binary
if server_binary:
self.server_binary = server_binary
else:
# Try common locations
candidates = [
Path.home() / "llama-cpp-turboquant" / "build" / "bin" / "llama-server",
Path("/opt/llama-cpp-turboquant/build/bin/llama-server"),
Path("llama-server"), # PATH
]
self.server_binary = None
for c in candidates:
if c.exists() or c.name == "llama-server":
try:
subprocess.run([str(c), "--help"], capture_output=True, timeout=5)
self.server_binary = str(c)
break
except (FileNotFoundError, subprocess.TimeoutExpired):
continue
self.process: Optional[subprocess.Popen] = None
@property
def url(self) -> str:
return f"http://{self.host}:{self.port}"
def _build_command(self) -> list:
cmd = [
self.server_binary,
"-m", self.model_path,
"--port", str(self.port),
"--host", self.host,
"-ctk", self.kv_type,
"-ctv", self.kv_type,
"-c", str(self.context_size),
]
return cmd
def _check_health(self) -> bool:
try:
req = urllib.request.Request(f"{self.url}/v1/models")
resp = urllib.request.urlopen(req, timeout=5)
data = json.loads(resp.read())
return "data" in data and len(data.get("data", [])) > 0
except Exception:
return False
def start(self) -> str:
"""Start the server and wait for it to be healthy. Returns the server URL."""
if not self.server_binary:
raise RuntimeError(
"llama-server binary not found. Set server_binary or install to standard location."
)
if not Path(self.model_path).exists():
raise FileNotFoundError(f"Model not found: {self.model_path}")
cmd = self._build_command()
# Set TurboQuant env
env = os.environ.copy()
env["TURBO_LAYER_ADAPTIVE"] = "7"
self.process = subprocess.Popen(
cmd,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
env=env,
)
# Wait for health
start = time.time()
while time.time() - start < self.timeout:
if self.process.poll() is not None:
stderr = self.process.stderr.read().decode() if self.process.stderr else ""
raise RuntimeError(f"Server exited early (code {self.process.returncode}): {stderr[:500]}")
if self._check_health():
return self.url
time.sleep(1.0)
self.stop()
raise TimeoutError(f"Server did not become healthy within {self.timeout}s")
def stop(self):
"""Stop the server."""
if self.process:
try:
self.process.send_signal(signal.SIGTERM)
self.process.wait(timeout=10)
except subprocess.TimeoutExpired:
self.process.kill()
self.process.wait(timeout=5)
except Exception:
pass
self.process = None
def __enter__(self) -> "TurboQuantServer":
self.start()
return self
def __exit__(self, *args):
self.stop()
def find_server_binary() -> Optional[str]:
"""Find llama-server binary in common locations."""
candidates = [
Path.home() / "llama-cpp-turboquant" / "build" / "bin" / "llama-server",
Path("/opt/llama-cpp-turboquant/build/bin/llama-server"),
]
for c in candidates:
if c.exists():
return str(c)
# Try PATH
try:
result = subprocess.run(["which", "llama-server"], capture_output=True, text=True)
if result.returncode == 0:
return result.stdout.strip()
except Exception:
pass
return None
def find_model(model_dir: Optional[str] = None) -> Optional[str]:
"""Find a GGUF model file."""
search_dirs = [
model_dir,
os.environ.get("TURBOQUANT_MODEL_DIR"),
str(Path.home() / "models"),
"/opt/models",
"/tmp/models",
]
for d in search_dirs:
if not d:
continue
p = Path(d)
if p.is_file() and p.suffix == ".gguf":
return str(p)
if p.is_dir():
for f in sorted(p.rglob("*.gguf")):
return str(f)
return None


@@ -0,0 +1,21 @@
#!/usr/bin/env python3
"""Tests for hardware_optimizer compatibility shim."""
import os
import sys
sys.path.insert(0, os.path.dirname(os.path.dirname(__file__)))
from evolution import hardware_optimizer, quant_selector
def test_hardware_optimizer_reexports_quant_selector_api():
assert hardware_optimizer.select_quant_level is quant_selector.select_quant_level
assert hardware_optimizer.detect_hardware is quant_selector.detect_hardware
assert hardware_optimizer.HardwareInfo is quant_selector.HardwareInfo
assert hardware_optimizer.QuantSelection is quant_selector.QuantSelection
def test_hardware_optimizer_exports_quant_level_definitions():
assert hardware_optimizer.QUANT_LEVELS is quant_selector.QUANT_LEVELS
assert hardware_optimizer.QuantLevel is quant_selector.QuantLevel


@@ -0,0 +1,74 @@
import textwrap
from pathlib import Path
from check_markdown_links import find_broken_links
def write(path: Path, content: str) -> None:
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(textwrap.dedent(content).lstrip(), encoding="utf-8")
def test_reports_missing_local_markdown_target_with_line_number(tmp_path: Path):
write(
tmp_path / "README.md",
"""
# Repo
See [status](docs/status.md).
""",
)
broken = find_broken_links(tmp_path)
assert len(broken) == 1
assert broken[0]["source"].endswith("README.md")
assert broken[0]["line"] == 3
assert broken[0]["target"] == "docs/status.md"
def test_allows_existing_relative_targets(tmp_path: Path):
write(tmp_path / "docs" / "status.md", "# Status\n")
write(
tmp_path / "README.md",
"""
# Repo
See [status](docs/status.md).
""",
)
assert find_broken_links(tmp_path) == []
def test_ignores_external_anchor_mailto_and_tel_links(tmp_path: Path):
write(
tmp_path / "README.md",
"""
[external](https://example.com)
[anchor](#section)
[mail](mailto:test@example.com)
[call](tel:988)
""",
)
assert find_broken_links(tmp_path) == []
def test_ignores_links_inside_fenced_code_blocks(tmp_path: Path):
write(
tmp_path / "README.md",
"""
```md
[broken](docs/missing.md)
```
""",
)
assert find_broken_links(tmp_path) == []
def test_skips_build_directories(tmp_path: Path):
write(tmp_path / "build" / "README.md", "[broken](missing.md)\n")
assert find_broken_links(tmp_path) == []


@@ -20,9 +20,35 @@ from evolution.quant_selector import (
class TestQuantLevels:
def test_levels_ordered_by_quality(self):
"""Levels should be ordered from best quality to most aggressive."""
for i in range(len(QUANT_LEVELS) - 1):
assert QUANT_LEVELS[i].bits_per_channel > QUANT_LEVELS[i + 1].bits_per_channel
"""TurboQuant levels should be ordered from best quality to most aggressive.
The quality ordering invariant for TurboQuant levels is monotonically
increasing compression_ratio (more aggressive = more compression).
Non-TurboQuant fallbacks (e.g. q4_0) are placed after all TurboQuant
levels and may have any compression ratio — they exist as safe defaults,
not as part of the quality progression.
"""
turbo_quant_names = {"turbo4", "turbo3", "turbo2"}
turbo_levels = [l for l in QUANT_LEVELS if l.name in turbo_quant_names]
for i in range(len(turbo_levels) - 1):
assert turbo_levels[i].compression_ratio <= turbo_levels[i + 1].compression_ratio, (
f"TurboQuant {turbo_levels[i].name} (compression={turbo_levels[i].compression_ratio}x) "
f"should have <= compression than {turbo_levels[i+1].name} "
f"(compression={turbo_levels[i+1].compression_ratio}x)"
)
def test_fallback_quant_is_last(self):
"""Non-TurboQuant fallbacks (e.g. q4_0) should be at the end of the list."""
turbo_quant_names = {"turbo4", "turbo3", "turbo2"}
found_fallback = False
for level in QUANT_LEVELS:
if level.name not in turbo_quant_names:
found_fallback = True
elif found_fallback:
pytest.fail(
f"TurboQuant level '{level.name}' appears after a fallback level. "
f"All TurboQuant levels must precede fallbacks."
)
def test_all_levels_have_required_fields(self):
for level in QUANT_LEVELS:


@@ -0,0 +1,83 @@
"""Tests for smoke workflow CI configuration.
Validates that the GitHub Actions / Gitea Actions smoke workflow
actually runs the standalone CMake build and test suite, not just
parse checks.
"""
from pathlib import Path
import yaml
import pytest
WORKFLOW_PATH = Path(".gitea/workflows/smoke.yml")
@pytest.fixture
def workflow():
"""Load and parse the smoke workflow YAML."""
content = WORKFLOW_PATH.read_text(encoding="utf-8")
return yaml.safe_load(content)
def test_smoke_workflow_exists():
"""Smoke workflow file must exist."""
assert WORKFLOW_PATH.exists(), f"Missing {WORKFLOW_PATH}"
def test_smoke_has_cmake_configure_step(workflow):
"""Smoke workflow must configure the CMake project with tests enabled."""
steps = workflow["jobs"]["smoke"]["steps"]
cmake_found = False
for step in steps:
run = step.get("run", "")
if "cmake -S . -B build" in run and "TURBOQUANT_BUILD_TESTS=ON" in run:
cmake_found = True
break
assert cmake_found, (
"Smoke workflow missing cmake configure step with TURBOQUANT_BUILD_TESTS=ON"
)
def test_smoke_has_cmake_build_step(workflow):
"""Smoke workflow must build the CMake project."""
steps = workflow["jobs"]["smoke"]["steps"]
build_found = False
for step in steps:
run = step.get("run", "")
if "cmake --build build" in run:
build_found = True
break
assert build_found, "Smoke workflow missing cmake --build step"
def test_smoke_has_ctest_step(workflow):
"""Smoke workflow must run ctest."""
steps = workflow["jobs"]["smoke"]["steps"]
ctest_found = False
for step in steps:
run = step.get("run", "")
if "ctest" in run and "output-on-failure" in run:
ctest_found = True
break
assert ctest_found, "Smoke workflow missing ctest --output-on-failure step"
def test_smoke_build_before_secret_scan(workflow):
"""Build and test steps must run before secret scan (fail fast on build errors)."""
steps = workflow["jobs"]["smoke"]["steps"]
names = [s.get("name", "") for s in steps]
build_idx = None
scan_idx = None
for i, name in enumerate(names):
if "cmake" in name.lower() or "build" in name.lower():
if build_idx is None:
build_idx = i
if "secret" in name.lower():
scan_idx = i
if build_idx is not None and scan_idx is not None:
assert build_idx < scan_idx, (
"Build step should run before secret scan to fail fast on broken code"
)


@@ -0,0 +1,225 @@
"""
TurboQuant Compressed Model Tool Call Regression Suite — Issue #96
Run: pytest tests/tool_call_regression.py -v
Generate matrix: pytest tests/tool_call_regression.py --generate-matrix
"""
import json
import os
import pathlib
import re
import time
import unittest
from typing import Dict
import pytest
ROOT = pathlib.Path(__file__).resolve().parents[1]
BENCHMARKS_DIR = ROOT / "benchmarks"
RESULTS_MATRIX = BENCHMARKS_DIR / "tool-call-regression.md"
CORE_TOOLS = [
{"name": "read_file", "description": "Read a text file", "args": {"path": "/tmp/test.txt"}},
{"name": "web_search", "description": "Search the web", "args": {"query": "turboquant"}},
{"name": "terminal", "description": "Run a shell command", "args": {"command": "echo ok"}},
{"name": "execute_code", "description": "Run Python code", "args": {"code": "print(1)"}},
{"name": "delegate_task", "description": "Delegate to subagent", "args": {"goal": "test"}},
]
PARALLEL_TOOLS = [
{"name": "read_file", "args": {"path": "/tmp/a.txt"}},
{"name": "web_search", "args": {"query": "python"}},
{"name": "execute_code", "args": {"code": "x=1"}},
]
PASS_THRESHOLD = 0.95
class TestToolSchemaContract(unittest.TestCase):
def test_core_tool_schemas_are_valid_functions(self):
for tool in CORE_TOOLS:
schema = {
"type": "function",
"function": {
"name": tool["name"],
"description": tool["description"],
"parameters": {
"type": "object",
"properties": {},
"required": list(tool["args"].keys()),
},
},
}
parsed = json.loads(json.dumps(schema))
assert parsed["type"] == "function"
fn = parsed["function"]
assert fn["name"] == tool["name"]
assert fn["description"]
assert "parameters" in fn
def test_parallel_tool_set_is_unique(self):
names = [t["name"] for t in PARALLEL_TOOLS]
assert len(names) == len(set(names))
def test_tool_call_response_format(self):
tc = {"id": "call_abc", "type": "function",
"function": {"name": "read_file", "arguments": json.dumps({"path": "/tmp/test.txt"})}}
assert tc["type"] == "function"
args = json.loads(tc["function"]["arguments"])
assert "path" in args
def test_parallel_response_contains_multiple_calls(self):
calls = [
{"id": "c1", "type": "function", "function": {"name": "read_file", "arguments": "{}"}},
{"id": "c2", "type": "function", "function": {"name": "web_search", "arguments": "{}"}},
{"id": "c3", "type": "function", "function": {"name": "execute_code","arguments": "{}"}},
]
assert len(calls) >= 3
call_names = {c["function"]["name"] for c in calls}
assert len(call_names) >= 2
class TestProfileConfig(unittest.TestCase):
@classmethod
def setUpClass(cls):
import yaml
cls.profile = yaml.safe_load((ROOT / "profiles" / "hermes-profile-gemma4-turboquant.yaml").read_text())
def test_primary_provider_has_all_required_fields(self):
"""Provider must have model, endpoint, and turboquant config."""
p = self.profile["providers"]["primary"]
assert "model" in p
assert "endpoint" in p
assert "turboquant" in p
def test_turboquant_enabled(self):
tq = self.profile["providers"]["primary"].get("turboquant", {})
assert tq.get("enabled") is True
assert tq.get("kv_type") in ("turbo2", "turbo3", "turbo4")
def test_server_command_has_turboquant_flags(self):
cmd = self.profile["providers"]["primary"].get("server_command", "")
assert "-ctk" in cmd and "-ctv" in cmd
@pytest.mark.skipif(
not os.environ.get("TURBOQUANT_SERVER_URL"),
reason="Set TURBOQUANT_SERVER_URL to run live regression"
)
class TestLiveRegression:
RESULTS: Dict[str, bool] = {}
def _call_model(self, tools, prompt, timeout=120):
import requests
url = os.environ["TURBOQUANT_SERVER_URL"]
resp = requests.post(
f"{url}/v1/chat/completions",
json={"model": "gemma-4", "messages": [{"role": "user", "content": prompt}],
"tools": tools, "tool_choice": "auto"},
timeout=timeout,
)
resp.raise_for_status()
return resp.json()
def _has_valid_tool_call(self, data, expected_name):
msg = data["choices"][0]["message"]
for tc in msg.get("tool_calls", []):
if tc["function"]["name"] == expected_name:
json.loads(tc["function"]["arguments"])
return True
return False
def test_read_file(self):
tools = [{"type":"function","function":{"name":"read_file","description":"Read file",
"parameters":{"type":"object","properties":{"path":{"type":"string"}},"required":["path"]}}}]
data = self._call_model(tools, "Read /tmp/test.txt")
self.__class__.RESULTS["read_file"] = self._has_valid_tool_call(data, "read_file")
def test_web_search(self):
tools = [{"type":"function","function":{"name":"web_search","description":"Search",
"parameters":{"type":"object","properties":{"query":{"type":"string"}},"required":["query"]}}}]
data = self._call_model(tools, "Search for Python")
self.__class__.RESULTS["web_search"] = self._has_valid_tool_call(data, "web_search")
def test_terminal(self):
tools = [{"type":"function","function":{"name":"terminal","description":"Shell",
"parameters":{"type":"object","properties":{"command":{"type":"string"}},"required":["command"]}}}]
data = self._call_model(tools, "List files")
self.__class__.RESULTS["terminal"] = self._has_valid_tool_call(data, "terminal")
def test_execute_code(self):
tools = [{"type":"function","function":{"name":"execute_code","description":"Code",
"parameters":{"type":"object","properties":{"code":{"type":"string"}},"required":["code"]}}}]
data = self._call_model(tools, "Run: print('test')")
self.__class__.RESULTS["execute_code"] = self._has_valid_tool_call(data, "execute_code")
def test_delegate_task(self):
tools = [{"type":"function","function":{"name":"delegate_task","description":"Delegate",
"parameters":{"type":"object","properties":{"goal":{"type":"string"}},"required":["goal"]}}}]
data = self._call_model(tools, "Delegate task: test")
self.__class__.RESULTS["delegate_task"] = self._has_valid_tool_call(data, "delegate_task")
def test_parallel_tool_calling(self):
tools = [
{"type":"function","function":{"name":"read_file","description":"Read",
"parameters":{"type":"object","properties":{"path":{"type":"string"}},"required":["path"]}},},
{"type":"function","function":{"name":"web_search","description":"Search",
"parameters":{"type":"object","properties":{"query":{"type":"string"}},"required":["query"]}},},
{"type":"function","function":{"name":"execute_code","description":"Code",
"parameters":{"type":"object","properties":{"code":{"type":"string"}},"required":["code"]}},},
]
data = self._call_model(tools, "Read a.txt, search python, run code")
msg = data["choices"][0]["message"]
calls = msg.get("tool_calls", [])
names = {c["function"]["name"] for c in calls}
self.__class__.RESULTS["parallel"] = len(names) >= 2
@classmethod
def _accuracy(cls) -> float:
if not cls.RESULTS:
return 1.0
return sum(1 for v in cls.RESULTS.values() if v) / len(cls.RESULTS)
@classmethod
def teardown_class(cls):
acc = cls._accuracy()
print(f"\nTool Call Regression Accuracy: {acc*100:.1f}% (threshold {PASS_THRESHOLD*100:.0f}%)")
for name, passed in cls.RESULTS.items():
print(f" {name}: {'PASS' if passed else 'FAIL'}")
assert acc >= PASS_THRESHOLD, f"Accuracy {acc*100:.1f}% below {PASS_THRESHOLD*100:.0f}% gate"
if os.environ.get("GENERATE_MATRIX"):
_append_matrix(acc, cls.RESULTS)
def _append_matrix(accuracy: float, results: Dict[str, bool]):
timestamp = time.strftime("%Y-%m-%d %H:%M UTC", time.gmtime())
tool_names = [t["name"] for t in CORE_TOOLS]
tool_checks = ["✅" if results.get(n, False) else "❌" for n in tool_names]
parallel_check = "✅" if results.get("parallel") else "❌"
row = f"| {timestamp} | gemma-4 | turbo4 | {accuracy*100:.1f}% | " + " | ".join(tool_checks) + f" | {parallel_check} |\n"
header = (
"| Timestamp | Model | Preset | Accuracy | "
+ " | ".join(tool_names)
+ " | Parallel |\n"
"|-----------|-------|--------|----------|"
+ "---|" * (len(tool_names) + 1) + "\n"
)
if not RESULTS_MATRIX.exists():
RESULTS_MATRIX.write_text(header + row)
else:
content = RESULTS_MATRIX.read_text()
if header not in content:
content = header + row + content
else:
content = header + row + content.split(header, 1)[1]
RESULTS_MATRIX.write_text(content)
print(f"Matrix updated: {RESULTS_MATRIX}")
def pytest_addoption(parser):
parser.addoption("--generate-matrix", action="store_true",
help="Update benchmarks/tool-call-regression.md with live results")
def pytest_configure(config):
if config.getoption("--generate-matrix"):
os.environ["GENERATE_MATRIX"] = "1"