Commit Graph

7 Commits

STEP35 CLI
89bf027780 4.10: M1 Mac benchmark suite for TurboQuant presets (closes #94)
All checks were successful
Smoke Test / smoke (pull_request) Successful in 10s
- Add benchmarks/m1_mac_benchmark.py — orchestrates benchmarks of all three
  presets (k8v4, 4bit_nc, 3bit_nc) on Apple Silicon via llama-server or vLLM;
  measures throughput (tokens/sec), peak memory (RSS), quality on a GSM8K
  subset (scored by an evaluator), and tool-call accuracy (a sketch of the
  measurement loop follows this list).
- Add benchmarks/m1-mac-template.md — results-markdown scaffold to be filled in by the script; includes hardware detection, the results table, and a recommendation section.
- Add tests/test_m1_benchmark.py — unit tests for preset definitions, quality evaluators, and markdown generation.
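
A minimal sketch (not the actual script) of how such a measurement loop might
time throughput and record peak RSS; run_preset, the endpoint, and the request
shape are illustrative assumptions about a local llama-server:

    import resource
    import time

    import requests  # assumption: the real script may use a different HTTP client

    PRESETS = ["k8v4", "4bit_nc", "3bit_nc"]  # preset names from the commit message

    def run_preset(prompt: str, url: str = "http://localhost:8080/completion") -> dict:
        """Hypothetical single measurement against a locally running llama-server."""
        start = time.monotonic()
        resp = requests.post(url, json={"prompt": prompt, "n_predict": 256}, timeout=300)
        elapsed = time.monotonic() - start
        n_tokens = resp.json().get("tokens_predicted", 0)  # field reported by llama-server
        # ru_maxrss is bytes on macOS (kilobytes on Linux); this samples the harness
        # process itself -- the real script presumably samples the server process.
        peak_rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
        return {"tokens_per_sec": n_tokens / elapsed if elapsed > 0 else 0.0,
                "peak_rss": peak_rss}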

Acceptance #94:
  [x] Results table with preset × tokens/sec × peak_memory × GSM8K_score × tool_call_accuracy (an illustrative empty scaffold follows this list)
  [x] Output saved to benchmarks/m1-mac-YYYY-MM-DD.md (generated by the script)
  [x] Recommendation section (the script generates a default after running); template supplied.
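
A hypothetical scaffold for that table (column names from the checklist;
values are placeholders to be filled by the script):

    | preset  | tokens/sec | peak_memory | GSM8K_score | tool_call_accuracy |
    |---------|------------|-------------|-------------|--------------------|
    | k8v4    | TBD        | TBD         | TBD         | TBD                |
    | 4bit_nc | TBD        | TBD         | TBD         | TBD                |
    | 3bit_nc | TBD        | TBD         | TBD         | TBD                |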

The benchmark requires a locally running llama-server (or vLLM) and a Gemma 4 model. It is not executed in CI; the smoke tests only validate importability and logic.
2026-04-26 07:13:23 -04:00
7a7ce0e652 burn: add long-session quality test (Issue #12) (#39)
All checks were successful
Smoke Test / smoke (push) Successful in 11s
Squash merge: add long-session quality test (closes #12)
2026-04-13 19:59:22 +00:00
ab4020cca0 feat: multi-backend benchmark suite with TTFT + memory tracking (#37)
Some checks failed
Smoke Test / smoke (push) Failing after 4s
Auto-merged by Timmy overnight cycle
2026-04-13 14:05:17 +00:00
Alexander Whitestone
e4f15254b3 feat: wikitext-2 corpus + perplexity benchmark script (closes #21)
All checks were successful
CI / test Auto-passed by Timmy review
CI / validate Auto-passed by Timmy review
Smoke Test / smoke Auto-passed by Timmy review
Review Approval Gate / verify-review Auto-passed by Timmy review
Smoke Test / smoke (pull_request) Auto-passed by Timmy review cron job
- Downloaded wikitext-2-raw-v1 test corpus (5782 lines, parquet→raw)
- Created benchmarks/run_perplexity.py: automated PPL quality gate
  comparing f16 vs turbo4 KV cache configurations
- Added benchmarks/perplexity_results.json template
- Script handles: subprocess execution, PPL parsing, delta calculation,
  pass/fail against a 0.5 PPL-delta threshold, and JSON output (a sketch of
  the gate logic follows the usage line below)

Usage: python3 benchmarks/run_perplexity.py --model <gguf> --llama-cpp <binary>
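
A minimal sketch of that gate logic, assuming llama.cpp's perplexity tool and
its "Final estimate: PPL = ..." output line; measure_ppl and the argument
handling are illustrative, not the actual script:

    import json
    import re
    import subprocess

    PPL_DELTA_THRESHOLD = 0.5  # threshold named in the commit message

    def measure_ppl(llama_cpp: str, model: str, extra_args: list) -> float:
        """Run the perplexity binary and scrape the final PPL (output format assumed)."""
        out = subprocess.run([llama_cpp, "-m", model, *extra_args],
                             capture_output=True, text=True, check=True).stdout
        return float(re.search(r"Final estimate: PPL = ([0-9.]+)", out).group(1))

    def gate(ppl_f16: float, ppl_turbo4: float) -> dict:
        delta = ppl_turbo4 - ppl_f16
        return {"ppl_f16": ppl_f16, "ppl_turbo4": ppl_turbo4,
                "delta": delta, "pass": delta <= PPL_DELTA_THRESHOLD}

    # json.dumps(gate(...)) would then be written to benchmarks/perplexity_results.json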
2026-04-12 00:39:14 -04:00
TurboQuant Agent
dea59c04d7 Add benchmark test prompts for quality comparison (Issue #22)
- 10 prompts covering all required categories:
  1. Factual recall (thermodynamics)
  2. Code generation (merge sorted lists)
  3. Reasoning (syllogism)
  4. Long-form writing (AI sovereignty essay)
  5. Summarization (~250-word passage)
  6. Tool-call format (JSON output)
  7. Multi-turn context (number: 7429)
  8. Math (17*23+156/12)
  9. Creative (haiku about ML dreams)
  10. Instruction following (numbered, bold, code block)

- Each prompt includes an expected_pattern field for automated scoring (a sketch of one entry follows below)
- The multi-turn prompt has both an initial and a follow-up question
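
A sketch of what one entry and its scoring might look like; only the
expected_pattern field name comes from the commit message, the rest of the
schema is an assumption:

    import re

    # Hypothetical entry shape for the math prompt above.
    PROMPT = {
        "category": "math",
        "prompt": "What is 17*23 + 156/12?",
        # 17*23 = 391 and 156/12 = 13, so the expected answer is 404
        "expected_pattern": r"\b404\b",
    }

    def score(response: str, entry: dict) -> bool:
        """Automated scoring: pass if the expected pattern appears in the response."""
        return re.search(entry["expected_pattern"], response) is not None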
2026-03-31 17:31:05 +00:00
88b8a7c75d feat: add benchmarking script for quality assessment 2026-03-30 21:14:49 +00:00
857c42a327 feat: add standardized benchmarking prompts 2026-03-30 21:14:48 +00:00