# GENOME.md — wolf (Timmy_Foundation/wolf)

Generated 2026-04-17 from direct source inspection of `/tmp/wolf-genome` plus live test execution.
*Branch: main | Commit: 02767d8*
## Project Overview
**Wolf** is a sovereign multi-model evaluation engine. It runs prompts against multiple LLM providers (OpenAI, Anthropic, Groq, Ollama, OpenRouter), scores responses on relevance, coherence, and safety, and outputs structured JSON results for model selection and fleet deployment decisions.

**Two operational modes:**

1. **Prompt Evaluation (v1.0)** — standalone prompt-vs-model benchmarking via `python -m wolf.runner`: runs a prompt set against multiple providers, scores the responses, and emits structured JSON results plus a console leaderboard.
2. **Legacy PR Scoring** — Gitea PR evaluation pipeline via `wolf.cli`: fetches Gitea issues, assigns them to configured models/providers, generates output files, opens PRs, and records task scores in a leaderboard.
Current repo shape observed directly:
- 9 Python modules under `wolf/`
- 5 active test modules under `tests/`
- 63 tests passing across `test_config.py`, `test_evaluator.py`, `test_gitea.py`, `test_models.py`, `test_runner.py`
- two smoke workflows: `.gitea/workflows/smoke.yml` and `.github/workflows/smoke-test.yml`
- a checked-in `GENOME.md` at repo root
**Tagline:** "Multi-model evaluation — agents work, PRs prove it, leaders get endpoints."
---
## Architecture
```mermaid
flowchart TB
    subgraph CLI["CLI Entry Points"]
        A1["python -m wolf.runner\n(pure evaluation)"]
        A2["python -m wolf.cli\n(task pipeline)"]
    end
    subgraph Core["Core Engine"]
        B1["PromptEvaluator\n(evaluator.py)"]
        B2["ResponseScorer\n(evaluator.py)"]
        B3["AgentRunner\n(runner.py)"]
        B4["TaskGenerator\n(task.py)"]
    end
    subgraph Providers["Model Providers"]
        C1["OpenRouterClient"]
        C2["GroqClient"]
        C3["OllamaClient"]
        C4["AnthropicClient"]
        C5["OpenAIClient\n(GroqClient w/ custom URL)"]
    end
    subgraph Infrastructure["Infrastructure"]
        D1["GiteaClient\n(gitea.py)"]
        D2["Config\n(config.py)"]
        D3["Leaderboard\n(leaderboard.py)"]
        D4["wolf-config.yaml"]
    end
    subgraph Output["Output"]
        E1["JSON results file"]
        E2["stdout summary table"]
        E3["Gitea PRs"]
        E4["Leaderboard scores"]
    end
    A1 --> B1
    A2 --> B4 --> B3
    B1 --> B2
    B1 --> C1 & C2 & C3 & C4 & C5
    B3 --> C1 & C2 & C3 & C4 & C5
    B3 --> D1
    A2 --> D1 & D2 & D3
    B1 --> E1 & E2
    B3 --> E3
    D3 --> E4
    D2 --> D4
    style A1 fill:#4a9eff,color:#fff
    style A2 fill:#4a9eff,color:#fff
    style B1 fill:#ff6b6b,color:#fff
    style B2 fill:#ff6b6b,color:#fff
```
### Data Flow — Prompt Evaluation Mode
```
prompts.json + models.json/wolf-config.yaml
→ load_prompts() / load_models_from_json()
→ PromptEvaluator.evaluate()
→ for each (prompt, model):
→ ModelFactory.get_client(provider) → ModelClient.generate()
→ ResponseScorer.score(response, prompt)
→ score_relevance() — keyword matching, length, refusal detection
→ score_coherence() — structure, readability, repetition
→ score_safety() — harmful content patterns, profanity
→ overall = relevance*0.40 + coherence*0.35 + safety*0.25
→ evaluate_and_serialize() → JSON dict
→ run(output_path) → write JSON + print_summary()
```
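For orientation, here is a minimal sketch of what the two input files could look like. The field names are inferred from the `PromptEntry` and `ModelEndpoint` dataclasses documented under Key Abstractions; the exact schema expected by `load_prompts()` / `load_models_from_json()` is not confirmed here and may differ.

```python
import json

# Hypothetical prompts.json — fields mirror the PromptEntry dataclass
# (id, text, expected_keywords, category); the real loader may expect a
# slightly different layout.
prompts = [
    {
        "id": "geo-001",
        "text": "Explain why the sky appears blue.",
        "expected_keywords": ["scattering", "wavelength", "Rayleigh"],
        "category": "science",
    }
]

# Hypothetical models.json — fields mirror the ModelEndpoint dataclass
# (name, provider, model_id, api_key, base_url).
models = [
    {
        "name": "groq-llama3",
        "provider": "groq",
        "model_id": "llama3-70b-8192",
        "api_key": "gsk_...",
        "base_url": None,
    }
]

with open("prompts.json", "w") as f:
    json.dump(prompts, f, indent=2)
with open("models.json", "w") as f:
    json.dump(models, f, indent=2)

# Then: python -m wolf.runner --prompts prompts.json --models models.json --output results.json
```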
### Data Flow — Legacy Task Pipeline Mode
```
wolf-config.yaml
→ GiteaClient.get_issues(owner, repo)
→ TaskGenerator.from_gitea_issues()
→ TaskGenerator.assign_tasks(tasks, models)
→ for each task:
→ AgentRunner.execute_task(task)
→ ModelClient.generate(prompt)
→ GiteaClient.create_branch()
→ GiteaClient.create_file(wolf-outputs/{id}.md)
→ GiteaClient.create_pull_request()
→ Leaderboard.record_score()
→ Leaderboard.get_rankings()
```
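A rough sketch of driving the first steps of this pipeline from Python. Only `get_issues(owner, repo)` has a documented call shape; the `GiteaClient` constructor arguments and everything in the trailing comments are assumptions, not confirmed signatures.

```python
from wolf.gitea import GiteaClient

# Constructor arguments are assumed from the wolf-config.yaml schema
# (gitea.base_url, gitea.token); the real signature may differ.
gitea = GiteaClient("https://forge.example.com/api/v1", "gitea_token_here")

# Documented call shape: get_issues(owner, repo).
issues = gitea.get_issues("Timmy_Foundation", "eval-repo")

# The remaining steps mirror the flow above; their argument shapes are not
# documented in this genome, so they are listed by name only:
#   TaskGenerator.from_gitea_issues(...)      -> Task objects
#   TaskGenerator.assign_tasks(tasks, models) -> round-robin assignment
#   AgentRunner.execute_task(task)            -> generate -> branch -> file -> PR
#   Leaderboard.record_score(...)             -> persist score history
```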
---
## Entry Points
| Entry Point | Module | Purpose |
|-------------|--------|---------|
| `python -m wolf.runner` | `runner.py` | Pure prompt-vs-model evaluation. Primary v1.0 interface. Requires `--prompts` plus either `--models` or `--config`. |
| `python -m wolf.cli` | `cli.py` | Full task pipeline: fetch issues → run models → create PRs → leaderboard. Supports `--run`, `--evaluate`, `--leaderboard`. |
Supporting entry surfaces:
- `wolf/config.py` (config loading and log setup)
- `wolf/models.py` (provider-specific model clients)
- `wolf/gitea.py` (repository / branch / file / PR operations)
### runner.py CLI Flags

| Flag | Required | Description |
|------|----------|-------------|
| `--prompts / -p` | Yes | Path to prompts JSON file |
| `--models / -m` | No* | Path to models JSON file |
| `--config / -c` | No* | Path to wolf-config.yaml (alternative to `--models`) |
| `--output / -o` | No | Path to write JSON results |
| `--system-prompt` | No | System prompt (default: "You are a helpful assistant.") |

\*Either `--models` or `--config` is required.
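Putting the flags together (a sketch only; the file names are placeholders):

```python
import subprocess

# Prompt evaluation run using the flags documented above.
subprocess.run(
    [
        "python", "-m", "wolf.runner",
        "--prompts", "prompts.json",
        "--config", "wolf-config.yaml",   # alternatively: "--models", "models.json"
        "--output", "results.json",
    ],
    check=True,
)
```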
### cli.py CLI Flags

| Flag | Description |
|------|-------------|
| `--config` | Path to wolf-config.yaml |
| `--task-spec` | Path to task specification JSON |
| `--run` | Run pending tasks (fetch issues → generate → PR) |
| `--evaluate` | Evaluate open PRs (legacy scoring) |
| `--leaderboard` | Show model rankings |

Task assignment in `--run` mode is round-robin across configured models/providers (`TaskGenerator.assign_tasks()`); each executed task produces `wolf-outputs/<task>.md` plus a PR, and `Leaderboard.record_score()` persists score history and serverless-readiness flags.
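And the equivalent for the task pipeline (again a sketch; whether `--config` must accompany every sub-command is not confirmed here):

```python
import subprocess

# Run pending tasks (fetch issues -> generate -> PR), then show rankings.
subprocess.run(["python", "-m", "wolf.cli", "--config", "wolf-config.yaml", "--run"], check=True)
subprocess.run(["python", "-m", "wolf.cli", "--config", "wolf-config.yaml", "--leaderboard"], check=True)
```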
---
## Key Abstractions
### Dataclasses (evaluator.py)
| Class | Fields | Purpose |
|-------|--------|---------|
| `PromptEntry` | id, text, expected_keywords, category | A single evaluation prompt with metadata |
| `ModelEndpoint` | name, provider, model_id, api_key, base_url | Model connection config |
| `ScoreResult` | relevance, coherence, safety, overall, details | Scoring output for one response |
| `EvaluationResult` | prompt_id, prompt_text, model_name, ..., scores, error | Complete result of one prompt×model evaluation |
### Core Classes
| Class | Module | Responsibility |
|-------|--------|----------------|
| `ResponseScorer` | evaluator.py | Scores responses on 3 dimensions using regex heuristics |
| `PromptEvaluator` | evaluator.py | Orchestrates N×M evaluation matrix |
| `ModelClient` | models.py | Abstract base for provider clients |
| `ModelFactory` | models.py | Static factory: `get_client(provider, key, url)` |
| `GiteaClient` | gitea.py | Full Gitea API wrapper (issues, branches, files, PRs) |
| `AgentRunner` | runner.py | Task execution: generate → branch → commit → PR |
| `TaskGenerator` | task.py | Converts Gitea issues to evaluable Task dataclasses |
| `Leaderboard` | leaderboard.py | Tracks model scores, determines serverless readiness |
| `Config` | config.py | Loads wolf-config.yaml, manages logging |
### Provider Clients (models.py)
| Class | Provider | API Format |
|-------|----------|------------|
| `OpenRouterClient` | openrouter | OpenAI-compatible chat completions |
| `GroqClient` | groq | OpenAI-compatible chat completions |
| `OllamaClient` | ollama | Ollama native /api/generate |
| `AnthropicClient` | anthropic | Anthropic Messages API |
| `OpenAIClient` | openai | GroqClient with base_url override |
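A minimal sketch of factory dispatch, using the `get_client(provider, key, url)` signature from the Core Classes table. The `generate()` call shape is assumed from the data-flow section (`ModelClient.generate(prompt)`), and passing `None` as the key for a local Ollama server is also an assumption.

```python
from wolf.models import ModelFactory

# Static factory dispatch: get_client(provider, key, url).
client = ModelFactory.get_client(
    "ollama",
    None,                       # assumed: no API key for a local Ollama server
    "http://localhost:11434",   # base URL as in wolf-config.yaml
)

# Assumed call shape from the data-flow section: ModelClient.generate(prompt).
print(client.generate("Explain why the sky appears blue."))
```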
---
## API Surface
### Public API (importable)
```python
# Evaluation pipeline
from wolf.evaluator import PromptEvaluator, PromptEntry, ModelEndpoint, ScoreResult

# Provider clients
from wolf.models import ModelFactory, ModelClient

# Gitea integration
from wolf.gitea import GiteaClient

# Task pipeline
from wolf.runner import AgentRunner
from wolf.task import TaskGenerator, Task

# Leaderboard
from wolf.leaderboard import Leaderboard

# Config
from wolf.config import Config, setup_logging
```

`GiteaClient` surface (`wolf.gitea`): `get_issues()`, `create_branch()`, `create_file()`, `update_file()`, `get_file()`, `create_pull_request()`, `get_pull_request()`, `get_pr_status()`.
### Scoring Weights
| Dimension | Weight | Method |
|-----------|--------|--------|
| Relevance | 0.40 | Keyword matching (60%) + length score (40%) |
| Coherence | 0.35 | Length + structure indicators + sentence completeness + uniqueness |
| Safety | 0.25 | Unsafe pattern detection + profanity check |
| **Overall** | 1.00 | Weighted sum |
### Scoring Details
**Relevance (`ResponseScorer.score_relevance`):**
- Expected keyword match ratio
- Fallback: word overlap with prompt (boosted 1.5×)
- Length penalty: <20 chars → 0.3, <50 chars → 0.6
- Refusal detection: 3 regex patterns, penalty if low keyword match

**Coherence (`ResponseScorer.score_coherence`):**
- Length sweet spot: 100-3000 chars → 1.0
- Structure: paragraph breaks, transition words, lists/steps
- Sentence completeness: avg 20-200 chars → 0.9
- Uniqueness: unique word ratio >0.4 → 0.9

**Safety (`ResponseScorer.score_safety`):**
- 6 unsafe patterns (weapon creation, system exploitation, prompt injection, etc.)
- Profanity detection (minor penalty: 0.1 per word, capped at 0.3)
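A compressed illustration of how the three dimensions combine under the weights above. The heuristics here are simplified stand-ins, not the actual regex logic in `ResponseScorer`.

```python
import re

WEIGHTS = {"relevance": 0.40, "coherence": 0.35, "safety": 0.25}

def toy_score(response: str, expected_keywords: list[str]) -> dict:
    """Simplified stand-in for ResponseScorer.score(); not the real heuristics."""
    words = response.lower().split()

    # Relevance: expected-keyword match ratio, with the short-response penalty.
    hits = sum(1 for kw in expected_keywords if kw.lower() in response.lower())
    relevance = hits / max(len(expected_keywords), 1)
    if len(response) < 20:
        relevance = min(relevance, 0.3)
    elif len(response) < 50:
        relevance = min(relevance, 0.6)

    # Coherence: length sweet spot plus a unique-word-ratio check.
    coherence = 1.0 if 100 <= len(response) <= 3000 else 0.6
    if words and len(set(words)) / len(words) <= 0.4:
        coherence = min(coherence, 0.5)

    # Safety: penalize one illustrative unsafe pattern.
    safety = 0.0 if re.search(r"how to (build|make) a weapon", response.lower()) else 1.0

    overall = (WEIGHTS["relevance"] * relevance
               + WEIGHTS["coherence"] * coherence
               + WEIGHTS["safety"] * safety)
    return {"relevance": relevance, "coherence": coherence, "safety": safety, "overall": overall}
```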
---
## Test Coverage
Live verification run:
- `python3 -m pytest -q tests/test_config.py tests/test_evaluator.py tests/test_gitea.py tests/test_models.py tests/test_runner.py`
- result: `63 passed`
### Current Tests
Current tested modules:
- `tests/test_config.py`
  - config load happy path
- `tests/test_evaluator.py`
  - scorer heuristics
  - prompt/model dataclasses
  - evaluator serialization paths
  - legacy PR evaluator behavior
- `tests/test_gitea.py`
  - Gitea client request/response behavior
  - 404 and fallback status handling
- `tests/test_models.py`
  - provider factory dispatch
  - provider `generate()` request formatting
- `tests/test_runner.py`
  - prompt/model loading helpers
  - parser wiring
  - `AgentRunner.execute_task()` behavior
### Coverage Gaps — Untested Modules

- `wolf/cli.py`: no direct tests for the top-level workflow routing (argparse wiring, config → models → evaluator pipeline, PR scoring flow)
- `wolf/task.py`: no direct tests for `TaskGenerator.from_gitea_issues()`, `from_spec()`, or `assign_tasks()` in this repo state
- `wolf/leaderboard.py`: no direct tests for persistence, ranking, or the serverless-ready threshold logic (`record_score`, `get_rankings`, `serverless_ready`)

### Coverage Gaps — Existing Tests

- `test_evaluator.py`: no tests for `PromptEvaluator._get_model_client()`, `_run_single()` with a real model call, or `evaluate_and_serialize()` summary statistics; model calls are mocked only, with no integration test
- `test_config.py`: no test for missing config, env var overrides, or logging setup

Important drift note:
- the older timmy-home genome artifact claimed only `test_config.py` and `test_evaluator.py` existed
- the current repo also includes `tests/test_models.py`, `tests/test_gitea.py`, and `tests/test_runner.py`
## CI / Verification Surface
Current CI contracts observed directly:
- `.gitea/workflows/smoke.yml`
  - checkout
  - setup Python 3.11
  - install `pytest` and `pyyaml`
  - install `requirements.txt` if present
  - run `pytest tests/`
- `.github/workflows/smoke-test.yml`
  - YAML parse check
  - JSON parse check
  - Python compile check
  - shell syntax check
  - secret scan
This means the real repo contract is broader than unit tests alone: syntax, parseability, and secret hygiene are part of the shipped smoke lane.
## Dependencies

Direct dependency files:
- `requirements.txt`: only `requests` (the HTTP client used for all provider and Gitea calls)
- README install instructions: `pip install requests pyyaml`

Observed dependency tension:
- `wolf/config.py` imports `yaml` when available and falls back to a simple parser if PyYAML is absent
- CI installs `pyyaml`
- `requirements.txt` does not list `pyyaml`

So PyYAML is operationally expected in normal use and CI, but it is not formally pinned in `requirements.txt`.
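The fallback described above is the usual optional-import pattern; a sketch of the shape it likely takes (the actual fallback parser in `wolf/config.py` is not reproduced here):

```python
# Use PyYAML when installed, otherwise fall back to a minimal parser.
# This mirrors the behavior described above; the real fallback in
# wolf/config.py is more involved.
try:
    import yaml
except ImportError:
    yaml = None

def load_config(path: str) -> dict:
    with open(path) as f:
        text = f.read()
    if yaml is not None:
        return yaml.safe_load(text)
    # Naive fallback: top-level "key: value" pairs only.
    config = {}
    for line in text.splitlines():
        if ":" in line and not line.lstrip().startswith("#"):
            key, _, value = line.partition(":")
            config[key.strip()] = value.strip().strip('"')
    return config
```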
---
## Security Considerations
1. **Plaintext secrets in config**: model API keys and Gitea tokens are expected via `wolf-config.yaml`. Never commit it to version control; keeping it at `~/.hermes/wolf-config.yaml` with restricted permissions is the recommended pattern.
2. **Arbitrary base URLs**: provider configs can point to arbitrary endpoints, and there is no input validation on `base_url` fields. Useful for sovereignty, but all model and Gitea calls are outbound HTTP, so this widens the trust boundary.
3. **PR automation blast radius**: `AgentRunner.execute_task()` can create branches, files, and PRs. Bad prompts or weak issue filtering could create noisy or unsafe PRs.
4. **Gitea token scope**: `GiteaClient` uses a single token for all operations. Scoped tokens (read-only for evaluation, write for task execution) would reduce blast radius.
5. **Prompt-injection exposure**: model prompts and issue bodies are passed through with limited sanitization. `ResponseScorer` detects injection patterns in *model output*, but Wolf itself is vulnerable to injection via `expected_keywords` or `system_prompt` fields.
6. **Leaderboard persistence without locking**: `leaderboard.json` is read and written without locking, so concurrent evaluations could corrupt the file.
7. **No TLS verification override**: `requests.post()` uses default SSL verification. If self-signed certificates are used for local providers (Ollama), those calls will fail, and there is no override to relax verification.
## Repository Notes
Notable current-repo facts that the host-repo genome should preserve:
- Wolf already ships its own `GENOME.md` at repo root
- the timmy-home deliverable for issue #683 is therefore a host-repo genome artifact that mirrors / tracks the current wolf repo, not the first genome ever written for wolf
- current smoke workflows exist in both `.gitea/` and `.github/`
## File Index
Observed module sizes:
- `wolf/evaluator.py` — 465 lines
- `wolf/runner.py` — 311 lines
- `wolf/models.py` — 120 lines
- `wolf/gitea.py` — 95 lines
- `wolf/cli.py` — 94 lines
- `wolf/leaderboard.py` — 77 lines
- `wolf/task.py` — 63 lines
- `wolf/config.py` — 51 lines
- `wolf/__init__.py` — 12 lines
Aggregate metrics from direct scan:
- 15 Python files total
- 9 module files under `wolf/`
- 6 Python files under `tests/` (including `__init__.py`)
- ~2150 lines of Python total
---
## Verification Commands
Commands used for this update:
- `git clone --depth 1 --single-branch https://.../Timmy_Foundation/wolf.git /tmp/wolf-genome`
- `python3 -m pytest -q tests/test_config.py tests/test_evaluator.py tests/test_gitea.py tests/test_models.py tests/test_runner.py`
- direct file inspection of:
  - `README.md`
  - `wolf/cli.py`
  - `wolf/config.py`
  - `wolf/evaluator.py`
  - `wolf/gitea.py`
  - `wolf/models.py`
  - `wolf/runner.py`
  - `wolf/task.py`
  - `wolf/leaderboard.py`
  - `.gitea/workflows/smoke.yml`
  - `.github/workflows/smoke-test.yml`
## Summary
Wolf is real and useful today, but its current reality is:
- stronger test coverage than the older timmy-home genome recorded
- a still-untested CLI/task/leaderboard control plane
- smoke workflows that now form part of the repo's real contract
- a checked-in root `GENOME.md` that does not remove the need for the host-repo genome issue artifact
---
## Configuration Schema
```yaml
# wolf-config.yaml
gitea:
  base_url: "https://forge.example.com/api/v1"
  token: "gitea_token_here"
  owner: "Timmy_Foundation"
  repo: "eval-repo"

providers:
  openrouter:
    api_key: "sk-or-..."
    base_url: "https://openrouter.ai/api/v1"
  groq:
    api_key: "gsk_..."
  ollama:
    base_url: "http://localhost:11434"

models:
  - model: "anthropic/claude-3.5-sonnet"
    provider: "openrouter"
  - model: "llama3-70b-8192"
    provider: "groq"
  - model: "llama3:70b"
    provider: "ollama"

system_prompt: "You are a helpful assistant."
leaderboard_path: "~/.hermes/wolf/leaderboard.json"
log_dir: "~/.hermes/wolf/logs"
```
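Loading that file through the public API might look like this; `Config.load` is the call exercised by `test_config.py`, but its exact signature, the `setup_logging()` arguments, and the return shape are assumptions here.

```python
from wolf.config import Config, setup_logging

setup_logging()                            # argument list assumed
config = Config.load("wolf-config.yaml")   # exact signature / return shape assumed
```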
---
*Generated by Codebase Genome Pipeline. Review and update manually.*