## Status: ✅ EPIC SLICE ALREADY IMPLEMENTED ON MAIN
Issue #582 is a parent epic, not a single atomic feature. The repo already contains the epic-level operational slice that ties the merged Know Thy Father phases together, but the epic remains open because fully consuming the local archive and wiring every downstream memory path is a larger horizon than this one slice.
## Mainline evidence
The parent-epic operational slice is already present on `main` in a fresh clone:
- `scripts/know_thy_father/epic_pipeline.py`
- `docs/KNOW_THY_FATHER_MULTIMODAL_PIPELINE.md`
- `tests/test_know_thy_father_pipeline.py`
What that slice already does:
- enumerates the current source-of-truth scripts for all Know Thy Father phases
- provides one operational runner/status view for the epic
- preserves the split implementation truth across `scripts/know_thy_father/`, `scripts/twitter_archive/analyze_media.py`, and `twitter-archive/know-thy-father/tracker.py`
- gives the epic a single orchestration spine without falsely claiming the full archive is already processed end-to-end
## Phase evidence already merged on main
The four decomposed phase lanes named by the epic already have merged implementation coverage on `main`:
- PR #738 authored the parent-epic orchestrator/status slice on branch `fix/582`
- issue comment #57259 already points to that orchestrator/status slice and explains why it used `Refs #582`
- PR #738 was ultimately closed unmerged, but the epic-level runner/doc/test trio is present on `main` today and passes targeted verification from a fresh clone
- the phase-level index, synthesis, cross-reference, tracker, and media-analysis tests pass
- the repo already contains a working parent-epic operational spine plus merged phase implementations
## Why the epic remains open
The epic remains open because this verification only proves the current repo-side operational slice is already implemented on main. It does not claim:
- the full local archive has been consumed
- all pending media has been processed
- every extracted kernel has been ingested into downstream memory systems
- the broader multimodal consumption mission is complete
## Recommendation
Do not rebuild the same epic-level orchestrator again.
Use the existing mainline slice (`scripts/know_thy_father/epic_pipeline.py` + `docs/KNOW_THY_FATHER_MULTIMODAL_PIPELINE.md`) as the parent-epic operational entrypoint.
This verification PR exists to preserve the evidence trail cleanly while making it explicit that the epic remains open for future end-to-end progress.
**Wolf** is a multi-model evaluation engine for sovereign AI fleets with two real operating modes. It runs prompts against multiple LLM providers, scores responses on relevance, coherence, and safety, and outputs structured JSON results for model selection and ranking.
1. Prompt evaluation mode
- runs a set of prompts against multiple model providers
- scores responses on relevance, coherence, and safety
- emits structured JSON results plus a console leaderboard
2. Legacy task / PR mode
- fetches Gitea issues
- assigns them to configured models/providers
- generates output files and opens PRs
- records task scores in a leaderboard
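The prompt-evaluation mode above can be sketched in miniature. Everything here is illustrative: `PROVIDERS`, `score_response`, and the scoring formulas are hypothetical stand-ins, not Wolf's actual API.

```python
import json

# Hypothetical provider callables standing in for real LLM API clients.
PROVIDERS = {
    "provider-a": lambda prompt: f"answer to: {prompt}",
    "provider-b": lambda prompt: prompt.upper(),
}

def score_response(response: str) -> dict:
    # Toy stand-ins for relevance / coherence / safety scoring.
    return {
        "relevance": min(len(response) / 100, 1.0),
        "coherence": 1.0 if response.strip() else 0.0,
        "safety": 1.0,
    }

def evaluate(prompts):
    results = []
    for name, call in PROVIDERS.items():
        scores = [score_response(call(p)) for p in prompts]
        total = sum(s["relevance"] + s["coherence"] + s["safety"] for s in scores)
        results.append({"provider": name, "avg_score": round(total / (3 * len(scores)), 3)})
    # Leaderboard: best average score first, emitted as structured JSON.
    results.sort(key=lambda r: r["avg_score"], reverse=True)
    return json.dumps(results, indent=2)

print(evaluate(["What is Wolf?"]))
```

The console leaderboard in the real tool would render the same sorted structure in tabular form.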
**Core principle:** agents work, PRs prove it, CI judges it.
Current repo shape observed directly:
- 9 Python modules under `wolf/`
- 5 active test modules under `tests/`
- 63 tests passing across `test_config.py`, `test_evaluator.py`, `test_gitea.py`, `test_models.py`, `test_runner.py`
- two smoke workflows: `.gitea/workflows/smoke.yml` and `.github/workflows/smoke-test.yml`
- a checked-in `GENOME.md` at repo root
**Status:** v1.0.0 — production-ready for prompt evaluation. Legacy PR evaluation module retained for backward compatibility.
- `wolf/config.py` imports `yaml` when available and falls back to a simple parser if PyYAML is absent
- CI installs `pyyaml`
- `requirements.txt` does not list `pyyaml`
So PyYAML is operationally expected in normal use and CI, but not formally pinned in `requirements.txt`.
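The optional-PyYAML pattern described above typically looks like the following sketch; `load_config` and its fallback parser are hypothetical illustrations, not Wolf's actual `config.py`.

```python
# Try the real YAML parser first; degrade gracefully if it is absent.
try:
    import yaml
except ImportError:
    yaml = None

def load_config(text: str) -> dict:
    if yaml is not None:
        return yaml.safe_load(text) or {}
    # Fallback: naive "key: value" line parser for flat configs.
    # Values stay strings; nested YAML structures are not supported.
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and ":" in line:
            key, _, value = line.partition(":")
            config[key.strip()] = value.strip()
    return config

print(load_config("model: gpt\n# comment\ntimeout: 30"))
```

The trade-off is that the fallback silently loses type coercion and nesting, which is why pinning `pyyaml` in `requirements.txt` would be the safer move.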
| Dependency | Used By | Purpose |
|------------|---------|---------|
| `requests` | models.py, gitea.py | HTTP client for all API calls |
| `pyyaml` (optional) | config.py | YAML config parsing (falls back to line parser) |
## Security Considerations
1. Plaintext secrets in config
- model API keys and Gitea tokens are expected via config files
- this is user-controlled but still a secret-handling risk
2. Arbitrary base URLs
- provider configs can point to arbitrary endpoints
- useful for sovereignty, but also expands trust boundaries
3. PR automation blast radius
- `AgentRunner.execute_task()` can create branches, files, and PRs
- bad prompts or weak issue filtering could create noisy or unsafe PRs
4. Prompt-injection exposure
- model prompts and issue bodies are passed through with limited sanitization
5. Leaderboard persistence without locking
-`leaderboard.json` writes are not protected against concurrent writers
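One way to close the concurrent-writer gap noted in item 5 is an advisory lock plus an atomic replace. This is a POSIX-only sketch under assumed semantics, not Wolf's current code; `update_leaderboard` is a hypothetical name.

```python
import fcntl
import json
import os
import tempfile

def update_leaderboard(path: str, entry: dict) -> None:
    # Advisory lock on a sidecar file serializes writers (POSIX only).
    lock_path = path + ".lock"
    with open(lock_path, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)  # block until we own the lock
        try:
            try:
                with open(path) as f:
                    board = json.load(f)
            except (FileNotFoundError, json.JSONDecodeError):
                board = []
            board.append(entry)
            # Write to a temp file in the same directory, then atomically
            # replace the original so readers never see a half-written file.
            fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
            with os.fdopen(fd, "w") as f:
                json.dump(board, f, indent=2)
            os.replace(tmp, path)
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)
```

The atomic `os.replace` matters even without concurrency: a crash mid-write otherwise corrupts `leaderboard.json`.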
## Repository Notes
Notable current-repo facts that the host-repo genome should preserve:
- Wolf already ships its own `GENOME.md` at repo root
- the timmy-home deliverable for issue #683 is therefore a host-repo genome artifact that mirrors / tracks the current wolf repo, not the first genome ever written for wolf
- current smoke workflows exist in both `.gitea/` and `.github/`
1. **API keys in config**: `wolf-config.yaml` stores provider API keys in plaintext. The file should be chmod 600 and excluded from git (already covered by the `.gitignore` pattern via `~/.hermes/`).
2. **Gitea token**: a full-access token is used for branch creation, file commits, and PR creation. Scoped access is recommended.
3. **No input sanitization**: prompts from Gitea issues are passed directly to models without filtering, a prompt-injection risk for automated workflows.
4. **No rate limiting**: model API calls are sequential with no backoff or rate limiting, and could exhaust API quotas.
5. **Legacy code reference**: `evaluator.py` defines the alias `Evaluator = PREvaluator`, and `cli.py` imports `Evaluator` expecting the legacy class. This works but is confusing.
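Item 4 above is straightforward to address with exponential backoff plus jitter. This is a generic sketch, not Wolf's code; `RateLimitError` and `with_backoff` are hypothetical names, and the real version would catch whatever exception the provider client actually raises.

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for a provider's rate-limit / quota exception."""

def with_backoff(call, max_retries=5, base_delay=1.0):
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error
            # Sleep base * 2^attempt plus jitter to avoid synchronized retries.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
```

Wrapping each sequential model call in `with_backoff` keeps the runner's flow unchanged while making quota exhaustion recoverable instead of fatal.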
## File Index
Observed module sizes and roles:

| File | LOC | Purpose |
|------|-----|---------|
| `wolf/evaluator.py` | 465 | Response scoring; legacy `PREvaluator` with `Evaluator` alias |
| `wolf/runner.py` | 311 | `AgentRunner` task execution and PR automation |
| `wolf/models.py` | 120 | Model provider API calls |
| `wolf/gitea.py` | 95 | Gitea API client |
| `wolf/cli.py` | 94 | Command-line entrypoint |
| `wolf/leaderboard.py` | 77 | `leaderboard.json` persistence |
| `wolf/task.py` | 63 | Task data model |
| `wolf/config.py` | 51 | Config loading (YAML with fallback parser) |
| `wolf/__init__.py` | 12 | Package init, version |
`assert not missing, f"wolf genome missing current repo facts: {missing}"`
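The assertion above belongs to a genome-coverage check. A minimal sketch of that check follows; `check_genome_facts`, the fact strings, and the sample text are illustrative, not the repo's real test.

```python
def check_genome_facts(genome_text: str, facts: list[str]) -> None:
    # Collect every expected fact string absent from the genome document.
    missing = [fact for fact in facts if fact not in genome_text]
    assert not missing, f"wolf genome missing current repo facts: {missing}"

# Hypothetical usage: the real test would read GENOME.md from disk.
check_genome_facts(
    "Wolf ships GENOME.md and two smoke workflows",
    ["GENOME.md", "smoke workflows"],
)
```

Listing the facts as plain substrings keeps the check cheap, at the cost of matching on exact wording.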