Formalization Audit Report
Date: 2026-04-06
Auditor: Allegro (subagent)
Scope: All homebrew components on VPS 167.99.126.228
Executive Summary
This system runs a fleet of 5 Hermes AI agents (allegro, adagio, ezra, bezalel, bilbobagginshire) alongside supporting infrastructure (Gitea, Nostr relay, Evennia MUD, Ollama). The deployment is functional but heavily ad-hoc — characterized by one-off systemd units, scattered scripts, bare docker run containers with no compose file, and custom glue code where standard tooling exists.
Priority recommendations:
- Consolidate fleet deployment into docker-compose (HIGH impact, MEDIUM effort)
- Clean up burn scripts — archive or delete (HIGH impact, LOW effort)
- Add docker-compose for Gitea + strfry (MEDIUM impact, LOW effort)
- Formalize the webhook receiver into the hermes-agent repo (MEDIUM impact, LOW effort)
- Recover or rewrite GOFAI source files — only .pyc remain (HIGH urgency)
1. Gitea Webhook Receiver
File: /root/wizards/allegro/gitea_webhook_receiver.py (327 lines)
Service: allegro-gitea-webhook.service
Current State
Custom aiohttp server that:
- Listens on port 8670 for Gitea webhook events
- Verifies HMAC-SHA256 signatures
- Filters for @allegro mentions and issue assignments
- Forwards to Hermes API (OpenAI-compatible endpoint)
- Posts response back as Gitea comment
- Includes health check, event logging, async fire-and-forget processing
Quality: Solid. Clean async code, proper signature verification, sensible error handling, daily log rotation. Well-structured for a single-file service.
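For reference, the core of the signature check can be sketched in a few lines (handler wiring omitted; Gitea sends the hex HMAC-SHA256 of the raw request body in the `X-Gitea-Signature` header):

```python
import hashlib
import hmac

def verify_gitea_signature(secret: bytes, body: bytes, signature_hex: str) -> bool:
    """Compare Gitea's X-Gitea-Signature header against the HMAC-SHA256
    of the raw request body, using a constant-time comparison."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```

The receiver's actual code may differ in the details; the essential properties are hashing the raw body (not re-serialized JSON) and using `compare_digest` to avoid timing leaks.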
OSS Alternatives
- Adnanh/webhook (Go, 10k+ stars) — generic webhook receiver, but would need custom scripting anyway
- Flask/FastAPI webhook blueprints — would be roughly equivalent effort
- Gitea built-in webhooks + Woodpecker CI — different architecture (push-based CI vs. agent interaction)
Recommendation: KEEP, but formalize
The webhook logic is Allegro-specific (mention detection, Hermes API forwarding, comment posting). No off-the-shelf tool replaces this without equal or more glue code. However:
- Move into the hermes-agent repo as a plugin/skill
- Make it configurable for any wizard name (not just "allegro")
- Add to docker-compose instead of standalone systemd unit
Effort: 2-4 hours
2. Nostr Relay + Bridge
Relay (strfry + custom timmy-relay)
Running: Two relay implementations in parallel
- strfry Docker container (port 7777) — standard relay, healthy, community-maintained
- timmy-relay Go binary (port 2929) — custom NIP-29 relay built on relay29/khatru29
The custom relay (main.go, 108 lines) is a thin wrapper around fiatjaf/relay29 with:
- NIP-29 group support (admin/mod roles)
- LMDB persistent storage
- Allowlisted event kinds
- Anti-spam policies (tag limits, timestamp guards)
Bridge (dm_bridge_mvp)
Service: nostr-bridge.service
Status: Running but source file deleted — only .pyc cache remains at /root/nostr-relay/__pycache__/dm_bridge_mvp.cpython-312.pyc
From decompiled structure, the bridge:
- Reads DMs from Nostr relay
- Parses commands from DMs
- Creates Gitea issues/comments via API
- Polls for new DMs in a loop
- Uses keystore.json for identity management
CRITICAL: Source code is gone. If the service restarts on a Python update (new .pyc format), this component dies.
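Even before full decompilation, the standard library can pull the module's structure out of the .pyc, provided the inspecting interpreter matches the bytecode version (CPython 3.12 here). A sketch, demonstrated on a throwaway module since the real file only exists on the VPS:

```python
import dis
import marshal
import pathlib
import py_compile
import tempfile

def load_pyc_code(pyc_path):
    """Return the top-level code object from a .pyc file (PEP 552 layout:
    a 16-byte header, then the marshalled code object)."""
    with open(pyc_path, "rb") as f:
        f.read(16)  # magic, flags, source mtime, source size
        return marshal.load(f)

# Demo on a throwaway module; on the VPS the argument would be
# /root/nostr-relay/__pycache__/dm_bridge_mvp.cpython-312.pyc
src = pathlib.Path(tempfile.mkdtemp()) / "demo.py"
src.write_text("def poll_dms():\n    return []\n")
pyc = py_compile.compile(str(src))

code = load_pyc_code(pyc)
print([c.co_name for c in code.co_consts if hasattr(c, "co_name")])  # ['poll_dms']
# dis.dis(code) then dumps the full bytecode to reconstruct source from
```

A dedicated decompiler would go further, but this is enough to enumerate functions and constants as a safety net before any Python upgrade invalidates the cache.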
OSS Alternatives
- strfry: Already using it. Good choice, well-maintained.
- relay29: Already using it. Correct choice for NIP-29 groups.
- nostr-tools / rust-nostr SDKs for bridge — but bridge logic is custom regardless
Recommendation: KEEP relay, RECOVER bridge
- The relay setup (relay29 custom binary + strfry) is appropriate
- URGENT: Decompile dm_bridge_mvp.pyc and reconstruct source before it's lost
- Consider whether strfry (port 7777) is still needed alongside timmy-relay (port 2929) — possible to consolidate
- Move bridge into its own git repo on Gitea
Effort: 4-6 hours (bridge recovery), 1 hour (strfry consolidation assessment)
3. Evennia / Timmy Academy
Path: /root/workspace/timmy-academy/
Components:
| Component | File | Custom? | Lines |
|---|---|---|---|
| AuditedCharacter | typeclasses/audited_character.py | Yes | 110 |
| Custom Commands | commands/command.py | Yes | 368 |
| Audit Dashboard | web/audit/ (views, api, templates) | Yes | ~250 |
| Object typeclass | typeclasses/objects.py | Stock (untouched) | 218 |
| Room typeclass | typeclasses/rooms.py | Minimal | ~15 |
| Exit typeclass | typeclasses/exits.py | Minimal | ~15 |
| Account typeclass | typeclasses/accounts.py | Custom (157 lines) | 157 |
| Channel typeclass | typeclasses/channels.py | Custom | ~160 |
| Scripts | typeclasses/scripts.py | Custom | ~130 |
| World builder | world/ | Custom | Unknown |
Custom vs Stock Analysis
- objects.py — Stock Evennia template with no modifications. Safe to delete and use defaults.
- audited_character.py — Fully custom. Tracks movement, commands, session time, generates audit summaries. Clean code.
- commands/command.py — 7 custom commands (examine, rooms, status, map, academy, smell, listen). All game-specific. Quality is good — uses Evennia patterns correctly.
- web/audit/ — Custom Django views and templates for an audit dashboard (character detail, command logs, movement logs, session logs). Functional but simple.
- accounts.py, channels.py, scripts.py — Custom but follow Evennia patterns. Mainly enhanced with audit hooks.
OSS Alternatives
Evennia IS the OSS framework. The customizations are all game-specific content, which is exactly how Evennia is designed to be used.
Recommendation: KEEP as-is
This is a well-structured Evennia game. The customizations are appropriate and follow Evennia best practices. No formalization needed — it's already a proper project in a git repo.
Minor improvements:
- Remove the `{e})` empty file in root (appears to be a typo artifact)
- The audit dashboard could use authentication guards
Effort: 0 (already formalized)
4. Burn Scripts (/root/burn_*.py)
Count: 39 scripts
Total lines: 2,898
Date range: All from April 5, 2026 (one day)
Current State
These are one-off Gitea API query scripts. Examples:
- `burn_sitrep.py` — fetch issue details from Gitea
- `burn_comments.py` — fetch issue comments
- `burn_fetch_issues.py` — list open issues
- `burn_execute.py` — perform actions on issues
- `burn_mode_query.py` — query specific issue data
All follow the same pattern:
- Load token from `/root/.gitea_token`
- Define an `api_get(path)` helper
- Hit specific Gitea API endpoints
- Print JSON results
They share ~80% identical boilerplate. Most appear to be iterative debugging scripts (burn_discover.py, burn_discover2.py; burn_fetch_issues.py, burn_fetch_issues2.py).
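For illustration, that shared boilerplate collapses into roughly the following (base URL and token path are taken from elsewhere in this report; the endpoint in the docstring is an example):

```python
import json
import urllib.request
from pathlib import Path

# Base URL as reported elsewhere in this audit; adjust if it changes.
GITEA_URL = "https://forge.alexanderwhitestone.com"
TOKEN_PATH = Path("/root/.gitea_token")

def build_request(path: str, token: str) -> urllib.request.Request:
    """The request construction every burn_*.py script re-implements."""
    return urllib.request.Request(
        f"{GITEA_URL}/api/v1/{path.lstrip('/')}",
        headers={"Authorization": f"token {token}"},
    )

def api_get(path: str):
    """Single shared helper, e.g. api_get('repos/owner/repo/issues?state=open')."""
    token = TOKEN_PATH.read_text().strip()
    with urllib.request.urlopen(build_request(path, token)) as resp:
        return json.load(resp)
```

A helper like this already exists in hermes-agent's `tools/gitea_client.py`, which is one more reason to archive the scripts rather than keep them.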
OSS Alternatives
- Gitea CLI (`tea`) — official Gitea CLI tool, does everything these scripts do
- python-gitea — Python SDK for the Gitea API
- httpie / curl — for one-off queries
Recommendation: DELETE or ARCHIVE
These are debugging artifacts, not production code. They:
- Duplicate functionality already in the webhook receiver and hermes-agent tools
- Contain hardcoded issue numbers and old API URLs (`143.198.27.163:3000` vs. the current `forge.alexanderwhitestone.com`)
- Have numbered variants showing iterative debugging (not versioned)
Action:
- `mkdir /root/archive && mv /root/burn_*.py /root/archive/`
- If any utility is still needed, extract it into the hermes-agent's `tools/gitea_client.py`, which already exists
- Install the `tea` CLI for ad-hoc Gitea queries
Effort: 30 minutes
5. Heartbeat Daemon
Files:
- `/root/wizards/allegro/home/skills/devops/hybrid-autonomous-production/templates/heartbeat_daemon.py` (321 lines)
- `/root/wizards/allegro/household-snapshots/scripts/template_checkpoint_heartbeat.py` (155 lines)
- Various per-wizard heartbeat scripts
Current State
Two distinct heartbeat patterns:
A) Production Heartbeat Daemon (321 lines)
Full autonomous operations script:
- Health checks (Gitea, Nostr relay, Hermes services)
- Dynamic repo discovery
- Automated triage (comments on unlabeled issues)
- PR merge automation
- Logged to `/root/allegro/heartbeat_logs/`
- Designed to run every 15 minutes via cron
Quality: Good for a prototype. Well-structured phases, logging, error handling. But runs as root, uses urllib directly, has hardcoded org name.
B) Checkpoint Heartbeat Template (155 lines)
State backup script:
- Syncs wizard home dirs to git repos
- Auto-commits and pushes to Gitea
- Template pattern (copy and customize per wizard)
OSS Alternatives
- For health checks: Uptime Kuma, Healthchecks.io, Monit
- For PR automation: Renovate, Dependabot, Mergify (but these are SaaS/different scope)
- For backups: restic, borgbackup, git-backup tools
- For scheduling: systemd timers (already used), or cron
Recommendation: FORMALIZE into proper systemd timer + package
- Create a proper `timmy-heartbeat` Python package with:
  - `heartbeat.health` — infrastructure health checks
  - `heartbeat.triage` — issue triage automation
  - `heartbeat.checkpoint` — state backup
- Install as a systemd timer (not cron) with proper unit files
- Use the existing `tools/gitea_client.py` from hermes-agent instead of duplicating urllib code
- Add alerting (webhook to Telegram/Nostr on failures)
Effort: 4-6 hours
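As a sketch of what the health-check module might look like (the endpoint URLs are placeholders, not the daemon's actual targets):

```python
import urllib.request

# Placeholder endpoints; the real daemon checks Gitea, the Nostr relay,
# and the per-wizard Hermes services.
CHECKS = {
    "gitea": "http://127.0.0.1:3000/api/healthz",
    "webhook": "http://127.0.0.1:8670/health",
}

def check(url: str, timeout: float = 5.0) -> bool:
    """A service counts as healthy if it answers with a non-5xx status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except OSError:  # connection refused, DNS failure, timeout
        return False

def run_health_phase() -> dict:
    return {name: check(url) for name, url in CHECKS.items()}
```

Failures returned by `run_health_phase()` are exactly what the alerting webhook should fire on.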
6. GOFAI System
Path: /root/wizards/allegro/gofai/
Current State: CRITICAL — SOURCE FILES MISSING
The gofai/ directory contains:
- `tests/test_gofai.py` (790 lines, 20+ test cases) — exists
- `tests/test_knowledge_graph.py` (14k chars) — exists
- `__pycache__/*.cpython-312.pyc` — cached bytecode for 4 modules
- NO .py source files for the actual modules
The .pyc files reveal the following modules were deleted but cached:
| Module | Classes/Functions | Purpose |
|---|---|---|
| `schema.py` | FleetSchema, Wizard, Task, TaskStatus, EntityType, Relationship, Principle, Entity, get_fleet_schema | Pydantic/dataclass models for fleet knowledge |
| `rule_engine.py` | RuleEngine, Rule, RuleContext, ActionType, create_child_rule_engine | Forward-chaining rule engine with SOUL.md integration |
| `knowledge_graph.py` | KnowledgeGraph, FleetKnowledgeBase, Node, Edge, JsonGraphStore, SQLiteGraphStore | Property graph with JSON and SQLite persistence |
| `child_assistant.py` | ChildAssistant, Decision | Decision support for child wizards (can_i_do_this, who_is_my_family, etc.) |
Git history shows `feat(gofai): add SQLite persistence layer to KnowledgeGraph` — so this was under active development.
Maturity Assessment (from .pyc + tests)
- Rule Engine: Basic forward-chaining with keyword matching. Has predefined child safety and fleet coordination rules. ~15 rules. Functional but simple.
- Knowledge Graph: Property graph with CRUD, path finding, lineage tracking, GraphViz export. JSON + SQLite backends. Reasonably mature.
- Schema: Pydantic/dataclass models. Standard data modeling.
- Child Assistant: Interactive decision helper. Novel concept for wizard hierarchy.
- Tests: Comprehensive (790 lines). This was being actively developed and tested.
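To make the pattern concrete: a forward-chaining engine with keyword matching fits in a few dozen lines. This is an illustrative reconstruction, not the recovered code — the class names mirror the .pyc listing, but the fields and rules are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    name: str
    keywords: list   # fire when all keywords appear in the context text
    action: str      # simplified stand-in for the ActionType seen in the .pyc

@dataclass
class RuleContext:
    text: str
    fired: list = field(default_factory=list)

def run_rules(rules, ctx):
    """Forward-chain: keep passing over the rules until none fire.
    Fired rules append a derived fact to the text, so later rules can
    match on conclusions of earlier ones."""
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if rule.name in ctx.fired:
                continue
            if all(k in ctx.text.lower() for k in rule.keywords):
                ctx.fired.append(rule.name)
                ctx.text += f" [{rule.action}:{rule.name}]"
                changed = True
    return ctx

rules = [
    Rule("child-safety", ["child", "delete"], "DENY"),
    Rule("escalate-denied", ["deny:child-safety"], "ESCALATE"),
]
ctx = run_rules(rules, RuleContext("child wizard wants to delete prod data"))
print(ctx.fired)  # → ['child-safety', 'escalate-denied']
```

The second rule firing off the first's conclusion is the "forward-chaining" part; the real engine presumably does the same over fleet facts and SOUL.md-derived principles.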
OSS Alternatives
- Rule engines: Durable Rules, PyKnow/Experta, business-rules
- Knowledge graphs: NetworkX (simpler), Neo4j (overkill), RDFlib
- Schema: Pydantic (already used)
Recommendation: RECOVER and FORMALIZE
- URGENT: Recover source from git history: `git show <commit>:gofai/schema.py` etc.
- Package as `timmy-gofai` with a proper `pyproject.toml`
- The concept is novel enough to keep — fleet coordination via deterministic rules + a knowledge graph is genuinely useful
- Consider using NetworkX for the graph backend instead of a custom implementation
- Push to its own Gitea repo
Effort: 2-4 hours (recovery from git), 4-6 hours (formalization)
7. Hermes Agent (Claude Code / Hermes)
Path: /root/wizards/allegro/hermes-agent/
Origin: https://github.com/NousResearch/hermes-agent.git (MIT license)
Version: 0.5.0
Size: ~26,000 lines of Python (top-level modules alone) — a massive codebase
Current State
This is an upstream open-source project (NousResearch/hermes-agent) with local modifications. Key components:
- `run_agent.py` — 8,548 lines (!) — main agent loop
- `cli.py` — 7,691 lines — interactive CLI
- `hermes_state.py` — 1,623 lines — state management
- `gateway/` — HTTP API gateway for each wizard
- `tools/` — 15+ tool modules (gitea_client, memory, image_generation, MCP, etc.)
- `skills/` — 29 skill directories
- `prose/` — document generation engine
- Custom profiles per wizard
OSS Duplication Analysis
| Component | Duplicates | Alternative |
|---|---|---|
| `tools/gitea_client.py` | Custom Gitea API wrapper | python-gitea, PyGitea |
| `tools/web_research_env.py` | Custom web search | Already uses exa-py, firecrawl |
| `tools/memory_tool.py` | Custom memory/RAG | Honcho (already optional dep) |
| `tools/code_execution_tool.py` | Custom code sandbox | E2B, Modal (already optional dep) |
| `gateway/` | Custom HTTP API | FastAPI app (reasonable) |
| `trajectory_compressor.py` | Custom context compression | LangChain summarizers, LlamaIndex |
Recommendation: KEEP — it IS the OSS project
Hermes-agent is itself an open-source project. The right approach is:
- Keep upstream sync working (both `origin` and `gitea` remotes configured)
- Don't duplicate the gitea_client into burn scripts or heartbeat daemons — use the one in `tools/`
- Monitor for upstream improvements to tools that are currently custom
- The 8.5k-line `run_agent.py` is a concern for maintainability — but that's an upstream issue
Effort: 0 (ongoing maintenance)
8. Fleet Deployment
Current State
Each wizard runs as a separate systemd service:
- `hermes-allegro.service` — WorkingDir at allegro's hermes-agent
- `hermes-adagio.service` — WorkingDir at adagio's hermes-agent
- `hermes-ezra.service` — WorkingDir at ezra's (uses allegro's hermes-agent as origin)
- `hermes-bezalel.service` — WorkingDir at bezalel's
Each has its own:
- Copy of hermes-agent (or symlink/clone)
- .venv (separate Python virtual environment)
- home/ directory with SOUL.md, .env, memories, skills
- EnvironmentFile pointing to per-wizard .env
Docker containers (not managed by compose):
- `gitea` — bare `docker run`
- `strfry` — bare `docker run`
Issues
- No docker-compose.yml — containers were created with `docker run` and survive via restart policy
- Duplicate venvs — each wizard has its own .venv (~500 MB each = 2.5 GB+)
- Inconsistent origins — ezra's hermes-agent origin points to allegro's local copy, not git
- No fleet-wide deployment tool — updates require manual per-wizard action
- All run as root
OSS Alternatives
| Tool | Fit | Complexity |
|---|---|---|
| docker-compose | Good — defines Gitea, strfry, and could define agents | Low |
| k3s | Overkill for 5 agents on 1 VPS | High |
| Podman pods | Similar to compose, rootless possible | Medium |
| Ansible | Good for fleet management across VPSes | Medium |
| systemd-nspawn | Lightweight containers | Medium |
Recommendation: ADD docker-compose for infrastructure, KEEP systemd for agents
- Create `/root/docker-compose.yml` for Gitea + strfry + Ollama (optional)
- Keep wizard agents as systemd services (they need filesystem access, tool execution, etc.)
- Create a fleet management script: `fleet.sh {start|stop|restart|status|update} [wizard]`
- Share a single hermes-agent checkout with per-wizard config (not 5 copies)
- Long term: consider running agents in containers too (requires volume mounts for home/)
Effort: 4-6 hours (docker-compose + fleet script)
9. Nostr Key Management
File: /root/nostr-relay/keystore.json
Current State
Plain JSON file containing nsec (private keys), npub (public keys), and hex equivalents for:
- relay
- allegro
- ezra
- alexander (with placeholder "ALEXANDER_CONTROLS_HIS_OWN" for secret)
The keystore is:
- World-readable (`-rw-r--r--`)
- Contains private keys in cleartext
- No encryption
- No rotation mechanism
- Used by bridge and relay scripts via direct JSON loading
OSS Alternatives
- SOPS (Mozilla) — encrypted secrets in version control
- age encryption — simple file encryption
- Vault (HashiCorp) — overkill for this scale
- systemd credentials — built into systemd 250+
- NIP-49 encrypted nsec — Nostr-native key encryption
- Pass / gopass — Unix password manager
Recommendation: FORMALIZE with minimal encryption
- `chmod 600 /root/nostr-relay/keystore.json` — immediate (5 seconds)
- Move secrets to per-service EnvironmentFiles (the pattern already used for .env)
- Consider NIP-49 (password-encrypted nsec) for the keystore
- Remove the relay private key from the systemd unit file (currently in plaintext in the `[Service]` section!)
- Never commit keystore.json to git (check .gitignore)
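The immediate fix, plus a check the heartbeat could run, is trivial in Python (demonstrated on a throwaway file, since tightening the live keystore is the whole point of the recommendation):

```python
import os
import stat
import tempfile

def is_private(path: str) -> bool:
    """True when group/other have no access bits, i.e. mode 600 or stricter."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode & (stat.S_IRWXG | stat.S_IRWXO) == 0

# Demonstration on a temp file; on the VPS the path would be
# /root/nostr-relay/keystore.json
demo = tempfile.NamedTemporaryFile(delete=False).name
os.chmod(demo, 0o644)     # the keystore's current, world-readable state
print(is_private(demo))   # False
os.chmod(demo, 0o600)     # the recommended fix
print(is_private(demo))   # True
```

An `is_private()` assertion on the keystore path would make a cheap addition to the heartbeat's health phase.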
Effort: 1-2 hours
10. Ollama Setup and Model Management
Current State
- Service: `ollama.service` — standard systemd unit, running as the `ollama` user
- Binary: `/usr/local/bin/ollama` — standard install
- Models: only `qwen3:4b` (2.5 GB) currently loaded
- Guard: `/root/wizards/scripts/ollama_guard.py` — custom 55-line script that blocks models >5 GB
- Port: 11434 (default, localhost only)
Assessment
The Ollama setup is essentially stock. The only custom component is ollama_guard.py, which is a clever but fragile size guard that:
- Checks local model size before pulling
- Blocks downloads >5GB to protect the VPS
- Designed to be symlinked ahead of the real `ollama` in PATH
However: it's not actually deployed as a PATH override (real ollama is at /usr/local/bin/ollama, guard is in /root/wizards/scripts/).
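A PATH-shim version of the guard could look like the following; the size table is invented for illustration (the real script presumably derives model sizes some other way, and unknown models are passed through here rather than blocked):

```python
import subprocess
import sys

MAX_BYTES = 5 * 1024**3                 # the audit's 5 GB budget
REAL_OLLAMA = "/usr/local/bin/ollama"   # hand-off target

# Assumed sizes for illustration only.
KNOWN_SIZES = {
    "qwen3:4b": int(2.5 * 1024**3),
    "llama3:70b": 40 * 1024**3,
}

def allowed(argv: list) -> bool:
    """Veto `pull` of known-oversized models; pass everything else through."""
    if len(argv) >= 2 and argv[0] == "pull":
        size = KNOWN_SIZES.get(argv[1])
        return size is None or size <= MAX_BYTES
    return True

def main(argv: list) -> int:
    if not allowed(argv):
        sys.exit(f"guard: refusing to pull {argv[1]} (over {MAX_BYTES} bytes)")
    return subprocess.call([REAL_OLLAMA, *argv])   # delegate to the real binary
```

Deployed as a symlink earlier in PATH than `/usr/local/bin`, this would actually enforce the 5 GB limit the current script only intends.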
OSS Alternatives
- Ollama itself is the standard. No alternative needed.
- For model management: LiteLLM proxy, OpenRouter (for offloading large models)
- For guards: Ollama has an `OLLAMA_MAX_MODEL_SIZE` env var (check if available in the current version)
Recommendation: KEEP, minor improvements
- Actually deploy the guard if you want it (symlink or wrapper)
- Or just set `OLLAMA_MAX_LOADED_MODELS=1` and use Ollama's native controls
- Document which models are approved for local use vs. RunPod offload
- Consider adding Ollama to docker-compose for consistency
- Consider adding Ollama to docker-compose for consistency
Effort: 30 minutes
Priority Matrix
| # | Component | Action | Priority | Effort | Impact |
|---|---|---|---|---|---|
| 1 | GOFAI source recovery | Recover from git | CRITICAL | 2h | Source code loss |
| 2 | Nostr bridge source | Decompile/recover .pyc | CRITICAL | 4h | Service loss risk |
| 3 | Keystore permissions | chmod 600 | CRITICAL | 5min | Security |
| 4 | Burn scripts | Archive to /root/archive/ | HIGH | 30min | Cleanliness |
| 5 | Docker-compose | Create for Gitea+strfry | HIGH | 2h | Reproducibility |
| 6 | Fleet script | Create fleet.sh management | HIGH | 3h | Operations |
| 7 | Webhook receiver | Move into hermes-agent repo | MEDIUM | 3h | Maintainability |
| 8 | Heartbeat daemon | Package as timmy-heartbeat | MEDIUM | 5h | Reliability |
| 9 | Ollama guard | Deploy or remove | LOW | 30min | Consistency |
| 10 | Evennia | No action needed | LOW | 0h | Already good |
Appendix: Files Examined
/etc/systemd/system/allegro-gitea-webhook.service
/etc/systemd/system/nostr-bridge.service
/etc/systemd/system/nostr-relay.service
/etc/systemd/system/hermes-allegro.service
/etc/systemd/system/hermes-adagio.service
/etc/systemd/system/hermes-ezra.service
/etc/systemd/system/hermes-bezalel.service
/etc/systemd/system/ollama.service
/root/wizards/allegro/gitea_webhook_receiver.py
/root/nostr-relay/main.go
/root/nostr-relay/keystore.json
/root/nostr-relay/__pycache__/dm_bridge_mvp.cpython-312.pyc
/root/wizards/allegro/gofai/ (all files)
/root/wizards/allegro/hermes-agent/pyproject.toml
/root/workspace/timmy-academy/ (typeclasses, commands, web)
/root/burn_*.py (39 files)
/root/wizards/allegro/home/skills/devops/.../heartbeat_daemon.py
/root/wizards/allegro/household-snapshots/scripts/template_checkpoint_heartbeat.py
/root/wizards/scripts/ollama_guard.py