- Architecture doc exists at `evennia-mind-palace.md`
- The 16 tracked Evennia issues are mapped in the issue-to-layer table inside `evennia-mind-palace.md`
- Milestone 1 is implemented in `evennia_tools/mind_palace.py` with `Hall of Knowledge`, `The Ledger`, `MutableFact`, `BurnCycleSnapshot`, and deterministic room-entry rendering
- The proof comment already exists on the issue as issue comment #56965
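The deterministic room-entry rendering claimed above can be illustrated with a minimal sketch. The class names come from the bullet list; their fields and the packet format here are guesses for illustration only, not the real definitions in `evennia_tools/mind_palace.py`.

```python
from dataclasses import dataclass

# Hypothetical shapes -- field names are assumptions, not the real
# definitions from evennia_tools/mind_palace.py.
@dataclass(frozen=True)
class MutableFact:
    key: str
    value: str

@dataclass(frozen=True)
class BurnCycleSnapshot:
    focus: str
    cycle: int

def render_room_entry(room: str, fact: MutableFact, snap: BurnCycleSnapshot) -> str:
    """Deterministic: identical inputs always yield the identical packet string."""
    return f"ENTER {room} | {fact.key}={fact.value} | focus={snap.focus} (cycle {snap.cycle})"
```

The point of the determinism property is that the proof script can compare rendered output byte-for-byte against an expected packet.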
## Historical trail
- PR #711 attempted the issue and posted the room-entry proof comment
- PR #711 was later closed unmerged, but the requested deliverables are present on `main` today and pass targeted verification from a fresh clone
- The proof script emitted the expected `ENTER Hall of Knowledge` packet with room context, ledger fact, and Timmy burn-cycle focus
## Recommendation
Close issue #567 as already implemented on `main`.
This verification PR exists only to document the evidence trail cleanly and close the stale issue without re-implementing the already-landed architecture.
Generated 2026-04-17 from direct source inspection of `/tmp/wolf-genome` plus live test execution.
## Project Overview
**Wolf** is a sovereign multi-model evaluation engine: it runs prompts against multiple LLM providers, scores responses on relevance, coherence, and safety, and outputs structured JSON results for model selection and ranking.

**Core principle:** agents work, PRs prove it, CI judges it.

Wolf has two real operating modes:
1. Prompt evaluation mode
- runs a set of prompts against multiple model providers
- scores responses on relevance, coherence, and safety
- emits structured JSON results plus a console leaderboard
2. Legacy task / PR mode
- fetches Gitea issues
- assigns them to configured models/providers
- generates output files and opens PRs
- records task scores in a leaderboard
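The prompt-evaluation loop described in mode 1 can be sketched as follows. This is a hedged illustration of the data flow only: the function name, result shape, and score dimensions are assumptions taken from the prose above, not the actual API of `wolf/`.

```python
import json

def run_prompt_mode(prompts, providers, score_fn):
    """Run every prompt against every provider and emit structured JSON results.

    Sketch only -- the real wolf entry points may differ. `providers` maps a
    provider name to a callable that returns a response string; `score_fn`
    returns a dict of score dimensions (e.g. relevance/coherence/safety).
    """
    results = []
    for prompt in prompts:
        for name, call in providers.items():
            response = call(prompt)
            results.append({
                "provider": name,
                "prompt": prompt,
                "scores": score_fn(response),
            })
    return json.dumps(results, indent=2)
```

A console leaderboard, as mode 1 describes, would then just sort these records by an aggregate score before printing.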
**Status:** v1.0.0 — production-ready for prompt evaluation. Legacy PR evaluation module retained for backward compatibility.
Current repo shape observed directly:
- 9 Python modules under `wolf/`
- 5 active test modules under `tests/`
- 63 tests passing across `test_config.py`, `test_evaluator.py`, `test_gitea.py`, `test_models.py`, `test_runner.py`
- two smoke workflows: `.gitea/workflows/smoke.yml` and `.github/workflows/smoke-test.yml`
- no direct tests for the top-level workflow routing
- `wolf/task.py`
  - no direct tests for `from_gitea_issues()`, `from_spec()`, `assign_tasks()` in this repo state
- `wolf/leaderboard.py`
  - no direct tests for persistence / ranking / serverless-ready threshold logic
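A test closing the `wolf/task.py` gap might look like the sketch below. The stand-in `from_gitea_issues()` here is purely illustrative; the real signature and return type must be checked against `wolf/task.py` before adopting anything like this.

```python
# Hypothetical coverage sketch for the untested wolf/task.py surface.
# The stand-in below assumes from_gitea_issues() maps Gitea issue dicts
# to task records with title/body fields -- an assumption, not the real code.
def from_gitea_issues(issues):
    """Stand-in so the sketch is self-contained and runnable."""
    return [{"title": i["title"], "body": i.get("body", "")} for i in issues]

def test_from_gitea_issues_maps_title_and_body():
    tasks = from_gitea_issues([{"title": "fix bug", "body": "details"}])
    assert tasks[0]["title"] == "fix bug"
    assert tasks[0]["body"] == "details"

def test_from_gitea_issues_tolerates_missing_body():
    tasks = from_gitea_issues([{"title": "no body"}])
    assert tasks[0]["body"] == ""
```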
Important drift note:
- the older timmy-home genome artifact claimed only `test_config.py` and `test_evaluator.py` existed
- current repo also includes `tests/test_models.py`, `tests/test_gitea.py`, and `tests/test_runner.py`
## CI / Verification Surface
Current CI contracts observed directly:
- `.gitea/workflows/smoke.yml`
  - checkout
  - setup Python 3.11
  - install `pytest` and `pyyaml`
  - install `requirements.txt` if present
  - run `pytest tests/`
- `.github/workflows/smoke-test.yml`
  - YAML parse check
  - JSON parse check
  - Python compile check
  - shell syntax check
  - secret scan
This means the real repo contract is broader than unit tests alone: syntax, parseability, and secret hygiene are part of the shipped smoke lane.
## Dependencies
| Dependency | Used By | Purpose |
|------------|---------|---------|
| `requests` | models.py, gitea.py | HTTP client for all API calls |
| `pyyaml` (optional) | config.py | YAML config parsing (falls back to line parser) |
Direct dependency files:
- `requirements.txt`
  - only `requests`
- README install instructions
  - `pip install requests pyyaml`
Observed dependency tension:
- `wolf/config.py` imports `yaml` when available and falls back to a simple parser if PyYAML is absent
- CI installs `pyyaml`
- `requirements.txt` does not list `pyyaml`
So PyYAML is operationally expected in normal use and CI, but not formally pinned in `requirements.txt`.
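The import-or-fallback pattern the doc attributes to `wolf/config.py` looks roughly like this sketch. The fallback parser shown here (flat `key: value` lines only) is an assumption; the real fallback in `config.py` may handle more or less.

```python
# Sketch of the PyYAML-optional config loading pattern described above.
try:
    import yaml  # present in CI and normal use, but not pinned in requirements.txt
except ImportError:
    yaml = None

def load_config(text: str) -> dict:
    """Parse config text with PyYAML when available, else a minimal line parser."""
    if yaml is not None:
        return yaml.safe_load(text) or {}
    # Fallback: flat "key: value" pairs only -- no nesting, no type coercion.
    config = {}
    for line in text.splitlines():
        if ":" in line and not line.lstrip().startswith("#"):
            key, _, value = line.partition(":")
            config[key.strip()] = value.strip()
    return config
```

Note the behavioral drift this pattern invites: PyYAML coerces `1` to an int while a naive line parser keeps it a string, which is one more reason to pin `pyyaml` in `requirements.txt`.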
## Security Considerations
1. **API keys in config**: `wolf-config.yaml` stores provider API keys in plaintext. The file should be chmod 600 and excluded from git (already covered by the `~/.hermes/` pattern in `.gitignore`).
2. **Gitea token**: a full-access token is used for branch creation, file commits, and PR creation. Scoped access is recommended.
3. **No input sanitization**: prompts from Gitea issues are passed directly to models without filtering, a prompt-injection risk for automated workflows.
4. **No rate limiting**: model API calls are sequential with no backoff or rate limiting, which could exhaust API quotas.
5. **Legacy code reference**: `evaluator.py` defines an `Evaluator = PREvaluator` alias, and `cli.py` imports `Evaluator` expecting the legacy class. This works but is confusing.
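The missing backoff from item 4 is a small wrapper in practice. This is a sketch of what could be added, not something wolf ships today; the retry counts and delays are arbitrary.

```python
import time

def call_with_backoff(fn, retries=3, base_delay=1.0):
    """Retry a zero-argument API call with exponential backoff.

    Sketch only -- wolf currently makes sequential calls with no such
    wrapper. Delays are base_delay, 2*base_delay, 4*base_delay, ...
    The last failure is re-raised so callers still see the real error.
    """
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

A production version would catch only transient error types (timeouts, HTTP 429/5xx) rather than bare `Exception`.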
1. Plaintext secrets in config
   - model API keys and Gitea tokens are expected via config files
   - this is user-controlled but still a secret-handling risk
2. Arbitrary base URLs
   - provider configs can point to arbitrary endpoints
   - useful for sovereignty, but also expands trust boundaries
3. PR automation blast radius
   - `AgentRunner.execute_task()` can create branches, files, and PRs
   - bad prompts or weak issue filtering could create noisy or unsafe PRs
4. Prompt-injection exposure
   - model prompts and issue bodies are passed through with limited sanitization
5. Leaderboard persistence without locking
   - `leaderboard.json` writes are not protected against concurrent writers
## Repository Notes
Notable current-repo facts that the host-repo genome should preserve:
- Wolf already ships its own `GENOME.md` at repo root
- the timmy-home deliverable for issue #683 is therefore a host-repo genome artifact that mirrors / tracks the current wolf repo, not the first genome ever written for wolf
- current smoke workflows exist in both `.gitea/` and `.github/`
## File Index
| File | LOC | Purpose |
|------|-----|---------|
| `wolf/__init__.py` | 12 | Package init, version |
`assert not missing, f"wolf genome missing current repo facts: {missing}"`
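That assertion fragment suggests a genome self-check along these lines. The helper name and the fact list below are illustrative assumptions; only the assert message comes from the source.

```python
def missing_repo_facts(genome_text, required_facts):
    """Return the current-repo facts that the genome text fails to mention.

    Sketch of a genome self-check -- `required_facts` here is an
    illustrative list, not one taken from the wolf repo.
    """
    return [fact for fact in required_facts if fact not in genome_text]

# Usage sketch: fail loudly if the genome has drifted behind the repo.
# missing = missing_repo_facts(genome_text, ["tests/test_models.py", ...])
# assert not missing, f"wolf genome missing current repo facts: {missing}"
```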