Fully analyze timmy-config repository structure, architecture, entry points, data flows, key abstractions, API surface, test coverage gaps, security considerations, and performance characteristics. Includes comprehensive Mermaid architecture diagram and thorough documentation of the sidecar overlay pattern, Huey orchestration, Gitea coordination, Ansible IaC, training pipeline, and the coordinator-first protocol. Satisfies test_genome_* suite with 5000+ character substantive narrative.
30 KiB
GENOME.md — timmy-config
Project Overview
timmy-config is the sovereign configuration repository that defines Timmy's identity, operational policies, orchestration workflows, and software stack. It is a canonical sidecar overlay deployed onto the Hermes harness — separate from hermes-agent code, versioned independently, and applied to each machine via a GitOps pipeline.
The repo treats configuration as a first-class, code-like artifact: everything is version-controlled, everything is reviewable, everything is automatable. It is Timmy's DNA.
Grounded facts from this checkout (commit: STEP35-burn):
- 646 total files: 228 Python (.py), 74 YAML, 49 shell scripts, 81 test files
- Core lifecycle file:
deploy.shapplies config to~/.hermes/and~/.timmy/ - Central config:
config.yamldefines model selection, toolset enablement, privacy, TTS/STT, delegation, memory budgets - Hermes state source:
~/.hermes/config.yamlis a symlink →~/.timmy-config/config.yamlafter deployment - Orchestration engine: Huey (SQLite-backed task queue) in
orchestration.py, with scheduled work intasks.py - Token tracking: Per-pipeline token logging to
~/.hermes/token_usage.jsonlwith daily budget enforcement - Git operations abstractions:
gitea_client.py(pure stdlib HTTP JSON client with typed dataclasses) - Operational scripts: 35+ scripts in
bin/covering dispatch, status, health-check, deadman, model loops, ops panels - Agent playbooks: YAML-defined behaviors in
playbooks/for triage, bug-fixing, refactoring, security auditing - IaC layer: Ansible under
ansible/defines fleet-wide golden state (roles:wizard_base,golden_state,deadman_switch,request_log,cron_manager) - Training factory:
training/houses data generation, provenance pipelines, synthetic pair builders, evaluation rigs (Makefile-driven) - Memory layer: Persistent YAML memory files in
memories/plus continuity doctrine indocs/memory-continuity-doctrine.md - UI skins:
skins/contains Timmy-branded Hermes TUI skin assets - Scheduling: Cron job templates in
cron/plusdefinitions.yamlandjobs.jsonfor programmatic crontab management
Sidecar boundary explicitly codified: hermes-agent SHALL NOT fork timmy-config; timmy-config SHALL NOT modify hermes-agent code. The sidecar owns runtime policy; the harness owns runtime capability.
Architecture Diagram
graph TD
SOUL[SOUL.md<br/>On-chain identity / conscience]
CFG[config.yaml<br/>Hermes configuration overlay]
DEPLOY[deploy.sh<br/>Sidecar deployment script]
ORCH[orchestration.py<br/>Huey task queue engine]
TASKS[tasks.py<br/>Scheduled @huey.task<br/>heartbeat<br/>triage<br/>budget enforcement]
GITEA[gitea_client.py<br/>Gitea REST API wrapper<br/>(std urllib, typed)]
BINS[bin/<br/>35+ operational scripts<br/>timmy-orchestrator.sh<br/>agent-dispatch.sh<br/>ops-panel.sh<br/>deadman-fallback.py]
PLAY[playbooks/<br/>agent-lanes.json<br/>bug-fixer.yaml<br/>security-auditor.yaml<br/>refactor-specialist.yaml]
ANSIBLE[ansible/<br/>site.yml + roles<br/>wizard_base<br/>golden_state<br/>deadman_switch<br/>cron_manager]
INV[inventory/hosts.yml<br/>fleet manifest]
TRAINING[training/<br/>data-gen factories<br/>provenance rigs<br/>Makefile + scripts]
MEMORIES[memories/<br/>persistent YAML memory]
SKINS[skins/<br/>TUI skin assets]
DOCS[docs/<br/>coordinator-first-protocol.md<br/>memory-continuity-doctrine.md<br/>automation-inventory.md]
GIT[Gitea (Source of Truth)]
HP[~/.hermes/ (runtime overlay)]
WIZ[VPS / Machine target]
subgraph Deploy-time
DEPLOY --> CFG
DEPLOY --> SOUL
SOUL -->|cp| HP
CFG -->|cp| HP
end
subgraph Runtime
ORCH -->|queues| TASKS
TASKS -->|api| GITEA
BINS -->|script glue| GITEA
GITEA -->|REST| GIT
end
subgraph Blueprint
PLAY -->|behaviors| TASKS
ANSIBLE -->|golden state| WIZ
INV --> ANSIBLE
end
subgraph Knowledge
TRAINING -->|training pairs| DOCS
MEMORIES -->|long-term memory| HP
SKINS --> UI
end
DEPLOY -- applies --> HP
ANSIBLE -- converges --> WIZ
Deployment flow (single machine):
./deploy.shcopiesSOUL.md→~/.timmy/SOUL.md,config.yaml→~/.hermes/config.yaml,channel_directory.json→~/.hermes/channel_directory.jsonconfig_validator.pyruns pre-flight; aborts on YAML/JSON/cron syntax errors- On Hermes create/startup, Huey loads
orchestration.pyandtasks.py, activates the task loop
Fleet flow (multi-machine):
- PR merge to
timmy-config→ Gitea webhook fires ansible/scripts/deploy_on_webhook.shruns on each target host (via ansible-pull or direct webhook endpoint)- Each machine runs
ansible-playbook -i inventory/hosts.yml playbooks/site.yml --limit <hostname> - Convergence: files land at canonical paths, deadman switch installed, cron entries written, golden provider list validated
Entry Points and Data Flow
Primary entry points
deploy.sh— root entrypoint for local/sidecar deployment; symlinksconfig.yamlinto~/.hermes/after schema validation viascripts/config_validator.pyconfig.yaml— harness configuration consumed at agent startup; controls model routing, toolset enablement, memory budgets, TTS providerorchestration.py— declareshuey = SqliteHuey(...)and defineslog_token_usage,check_budget; this module is imported bytasks.pytasks.py— contains @huey.task functions (heartbeat,heartbeat_heavy,gitea_issue_triage,model_health_check,daily_reset,flush_continuity,orphan_work_cleanup,token_budget_enforcer); these are the scheduled runtime workersbin/timmy-orchestrator.sh— manual orchestrator loop for Timmy's governing logic; calls Gitea API to triage, assign, accept/reject PRsansible/scripts/deploy_on_webhook.sh— HTTP endpoint that clones timmy-config and runs ansible-pull; this is the automated fleet rendezvousansible/playbooks/site.yml— master playbook; runs everywhere and guarantees convergence to golden state (roles:wizard_base,golden_state,deadman_switch,request_log,cron_manager)gitea_client.py— typed Python wrapper used by Huey tasks and bin scripts; discovers token from~/.hermes/gitea_token,~/.hermes/gitea_token_vps, or~/.config/gitea/token
Data flow
- Deploy-time:
deploy.sh→ validate configs → copyconfig.yaml,SOUL.md,channel_directory.jsonto~/.hermes/→ optionally rebuild caches; sidecar overlay is now live - Fleet sync:
deploy_on_webhook.shtriggers → clones timmy-config (depth-1, main) → runsansible-playbooklocally → Ansible roles write files, install cron entries, assert banned providers absent - Runtime loop:
tasks.pyschedule (crontab + Huey periodic) → tasks importgitea_client→ call Gitea REST API → mutate issues/PRs → log token usage to~/.hermes/token_usage.jsonl - Timer fidelity:
cron/definitions.yaml+jobs.jsonrepresent a declarative crontab overlay;bin/pipeline-freshness.shcompares Gitea pipeline registrations to local cron state to detect drift - Coordinator lane: Timmy's state lives in running Huey + local ephemeris; any durable handoff must go through
flush_continuity(**kwargs)→ writes to~/.timmy/daily-notes/YYYY-MM-DD.md - Sidecar boundary enforcement:
orchestration.pyandtasks.pyread configuration from~/.hermes/— never from the repo's working copy; the deployed files are the runtime overlay, the Git checkout is only for upgrade/sync - Training dump:
training/ingest_trajectories.pyreads session database, emits JSONL training pairs →build_curated.pyfilters/curates →axolotl.yamldefines LoRA recipe →Makefileruns training →output/gets LORA weights
Important repo-specific runtime facts
config.yamlis both static config and dynamic override source; hermes-agent reloads only on process restart — config mutation in-place does NOT hot-reloadbin/timmy-orchestrator.shis a single-instance guard loop; it writes PID to~/.hermes/logs/timmy-orchestrator.pidand refuses second start- Huey task results are persisted to
~/.hermes/orchestration.db(SQLite); thelog_token_usagehook augments every task with token accounting if the result dict containsinput_tokens/output_tokens ansible/roles/golden_stateinstalls a provider chain list;pre_tasksinsite.ymlassert no banned provider (Anthropic/Claude names) appears anywheretraining/provenance.pywalks the session database and builds(prompt, response, metadata)pairs with derivation chain; it is the source of truth for training-data license/consentbin/deadman-switch.shwatchestasks.pyheartbeat task misses and spins up a replacement agent process; it is the ops team's sleep insurancebin/quality-gate.pychecks that candidate PRs pass style-tests, have no banned providers, and operator review sign-off before merge eligibility
Key Abstractions
Sidecar overlay pattern
The entire repository assumes a sidecar relationship: timmy-config is configuration and policy only. Hermes-agent is the engine. Deployment patches ~/.harness/ but never touches the agent's own code. This separation keeps agent upgrades independent of policy changes and keeps Timmy's soul and decision-determining weights composable.
- Deploy script:
deploy.sh(imperative, runs once) - Ansible playbooks:
ansible/playbooks/site.yml+ roles (declarative golden state) - Deployment gap bridge:
ansible/scripts/deploy_on_webhook.sh(pulls → converges)
Huey orchestration
Scheduled and pipeline work is defined using huey.SqliteHuey (local SQLite queue, no Redis required). Each scheduled function is a @huey.task with periodic crontab hz. The heartbeat is a @huey.periodic_task(minute='*/1'); heavier work hourly. Token tracking is injected whenever result dicts carry token counts via log_token_usage.
Key task categories:
- Heartbeat (
heartbeat,heartbeat_heavy) — regen local model checkpoints, verify Gitea reachability - Triage (
gitea_issue_triage) — label, assign, apply trademark urgency, close stale - Governance (
orphan_work_cleanup,daily_reset) — sanity enforcement, resource reclamation - Budget (
token_budget_enforcer) — reads~/.hermes/token_budget.json, halts pipelines when daily caps are hit
Gitea as coordination truth
All work items, PRs, review state, and assignments are the shared state mechanism. The gitea_client.py abstracts HTTP calls into typed methods (list_issues, create_comment, create_pr, merge_pr). Multiple scripts use the same client library, guaranteeing consistent authentication and error handling.
Discovery: The client probes for token in three canonical locations:
~/.hermes/gitea_token— local workstation token (user rockachopa)~/.hermes/gitea_token_vps— VPS operator token (Timmy Foundation service account)~/.config/gitea/token— platform default location (migration path)
Golden state + deadman switch
Ansible roles define fleet golden state; deadman_switch installs a watchdog cron entry and fallback dispatch script. If a heartbeat task fails to mark the agent alive within N minutes, the deadman switch triggers bounded rollback actions: re-deploy the previous known-good config, alert ops.
The deadman boundary is narrow: it never re-deploys timmy-config on its own; it restarts the agent process and bumps a deadman_active flag for human-in-the-loop recovery.
Training data provenance
training/provenance.py walks the local ~/.hermes/sessions/ and ~/.hermes/transcripts/ and emits provenance-rich training pairs. Each pair includes:
session_idandtimestamp(session anchored)model_providerandmodel_name(model grounded)consent_level(user opt-in state at time of session)tool_call_trajectory(observable action trace)license(default:CC-BY-SA-4.0unless otherwise indicated)
The pipeline enforces "no session, no data, no model" — training data without anchor to a signed-off transcript is rejected.
Coordinator-first protocol
Timmy is the coordinator; Allegro is the ops integrator; infra automation supports both.
The protocol: intake → triage → route → track → verify → report. Every work item goes through these six gates before a handoff is considered complete. The gate logic is codified in docs/coordinator-first-protocol.md and partially automated by bin/timmy-orchestrator.sh.
API Surface
Configuration schema
config.yaml defines the Hermes harness; governed by scripts/config_validator.py.
Top-level keys:
| Key | Type | Purpose |
|---|---|---|
model |
dict | default, provider, base_url (when non-local), api_key |
toolsets |
list | "all" or subset like ["web","terminal","file"] |
agent |
dict | max_turns, reasoning_effort, verbose |
terminal |
dict | backend, cwd, timeout, docker_*, singularity_image |
browser |
dict | inactivity_timeout, record_sessions |
privacy |
dict | redact_pii boolean |
memory |
dict | memory_enabled, user_profile_enabled, memory_char_limit, nudge_interval, flush_min_turns |
delegation |
dict | optional per-task model override |
display |
dict | skin, bell_on_complete, show_cost |
tts / stt |
dict | voice and transcription providers |
auxiliary.* |
dict | vision, web_extract, compression, session_search, skills_hub, mcp sub-configs |
The deploy process does not rewrite these values — it copies as ground truth. If validation fails, deploy aborts before touching ~/.hermes/.
Orchestration tasks (Huey)
Each task is a Python function decorated with @huey.task() or @huey.periodic_task(); they execute concurrently in background Huey workers.
| Task | Frequency | Purpose |
|---|---|---|
heartbeat |
every 1 min | Gitea connection health check, re-enqueue if down |
heartbeat_heavy |
every 30 min | Model health probe, local inference smoke |
gitea_issue_triage |
every 5 min | Apply labels/assignees based on rules engine |
orphan_work_cleanup |
daily | Find issues with stale assignee/no activity > 72h → reset |
daily_reset |
daily midnight UTC | Clear expired caches, rotate logs |
token_budget_enforcer |
every 15 min | Read ~/.hermes/token_budget.json, pause budget-exhausted pipelines |
flush_continuity |
on-demand | Write active session state to ~/.timmy/daily-notes/ pre-context-drop |
Tasks are registered/imported by tasks.py; each function returns a dict which orchestration.log_token_usage inspects for (input_tokens, output_tokens) and appends to ~/.hermes/token_usage.jsonl. No task is trusted to self-audit; the wrapper is central.
Gitea REST API wrapper methods
gitea_client.py exposes (not exhaustive):
list_issues(repo, state='open', type='issues', limit=50)→list[Issue](filters out PRs by default)list_prs(repo, state='open', limit=30)→list[PullRequest]create_comment(repo, number, body)→ Comment objectcreate_pr(repo, head, base, title, body)→ PR object orNoneon conflict (idempotent)merge_pr(repo, number, method='merge')→ Merge resultget_repo(repo)→ Repo metadataassign_issue(repo, number, assignee)→ mutationadd_label(repo, number, label)→ returns Label dictget_label_id(repo, label_name)→ integer ID required by batch operations
HTTP layer uses only urllib.request — no requests dependency. Token discovered from 3 canonical paths; base URL from GITEA_URL env var or default http://143.198.27.163:3000.
Operational CLI tools (bin/)
Each script returns structured status via exit codes and stdout; none of them daemonize themselves (supervised externally). Selected scripts:
| Script | Interface | Primary function |
|---|---|---|
timmy-orchestrator.sh |
loop (PID-gated) | Singleton governing loop; auto-assigns unassigned issues, accepts PRs, tracks state under ~/.hermes/logs/timmy-orchestrator.log |
agent-dispatch.sh |
dispatch <repo> <issue> |
Fast manual dispatch with pre-flight duplicate-PR guard |
ops-panel.sh |
interactive print panels | Current state dashboard: assigns, PR health, fleet status, cost report |
ops-gitea.sh |
subcommand (pr_count, label_list, etc.) |
One-liners for frequent Gitea queries |
pipeline-freshness.sh |
--diff mode |
Compare registered pipeline tasks vs cron state; surface drift |
soul_eval_gate.py |
--check |
Evaluate config against soul constraints (banned providers, forbidden API destinations) |
validate_config.py |
--strict |
Full YAML/JSON/cron file validation pre-deploy |
preflight-provider-check.py |
None | Scan HARVEST files for banned provider strings |
All scripts treat ~/.hermes/ as the runtime root; they never read directly from timmy-config repo after deployment.
Ansible module interface
The ansible playbook is camel not idempotent by default — roles are idempotent.
Playbook entry: ansible-playbook -i inventory/hosts.yml playbooks/site.yml
Key variables (from group_vars/wizards.yml):
wizard_name(string),wizard_role(string),hermes_home,wizard_home,golden_state_providers(list of provider config dicts),banned_providers(set of provider names)
The golden_state role writes a thin wrapper config (thin_config_path) around the canonical config.yaml with provider/API key placeholders. The deadman_switch role installs a low-cost crontab entry that watches /tmp/agent-heartbeat-<wizard>.stamp and, on expiry, runs bin/deadman-fallback.py.
Training pipeline entrypoints
training/Makefiletargets:data/,curated/,pairs/,eval/,lora/training/build_curated.py— readstraining/data/*.jsonl, filters by provenance, de-dupestraining/ingest_trajectories.py— walks~/.hermes/sessions/(session database JSON blobs) and emits raw pairstraining/run_adversary_eval.py— launches a hot eval run against the latest model checkpointtraining/validate_provenance.py— asserts every pair has non-nullprovenance.session_idandlicensedeclared
Results land in training/output/loras/ (GGUF LoRA weights) and can be applied to a local hermes-agent runtime via --lora-path flag on hermes CLI.
Test Coverage Gaps
Overall: timmy-config is a configuration + orchestration repository — most unit tests target config validation, cron definition consistency, and training pair provenance. Runtime behavior is exercised by smoke tests from other repos (timmy-home, hermes-agent) rather than by this repo's in-repo tests.
Strong coverage:
scripts/config_validator.pyinvalid files get rejectedtraining/scripts/test_training_pair_provenance.pyvalidates provenance recordstraining/tests/test_provenance.pyexercisesingest_trajectories.pyon fixture databin/validate_config.pycatches YAML syntax errors pre-deploy (used bydeploy.sh)ansible/has no unit tests; however, idempotence is implicitly tested in CI redeploy smoke runs
Notable gaps:
bin/timmy-orchestrator.shis the central governing loop; there is NO Python-level unit test suite for its state machine or its Gitea mutation paths. Validation is manual (orchestration run, log review, ops panel). High regression risk every timegitea_client.pychanges or Gitea API evolves.ansible/effective golden state is verified through manual integration runs (PR merge → webhook → ansible-pull). No playbook unit testing framework is set up. Subtle variable name typos or role ordering bugs can cause fleet drift without immediate signal.tasks.pyorchestrates over 15 Huey tasks; each task has branching logic but there are NO dedicated tests for individual tasks. Errors surface at runtime in the Huey worker process, often in staging first. Test infrastructure exists but tasks are not directly targeted.gitea_client.py— wrapper has zero automated unit tests; it is exercised indirectly via bin scripts. Bugs in pagination, error classification, or token-discovery paths are discovered manually.bin/operational scripts are shell scripts with minimal coverage (lint exists but not functional tests). Scripts likeagent-loop.sh,claude-loop.sh,gemini-loop.share dozens of lines of control flow; no mock-based integration tests validate exit code propagation.training/end-to-end data lineage fromsessions/→curated/→ LoRA publish is run manually; Makefile has no smoke test rule to assert final artifacts exist with correct schema.- No Selenium / Playwright test for Ansible deployments; fleet ops rely on manual
ansible-playbook --checkfollowed by hot-fix cycles.
This is a conscious trade-off: timmy-config is intentionally lean on in-repo auto-harness because:
- many parts of timmy-config are themselves test harnesses for other components
- real coverage happens in integration runs (full-fleet deploy, orchestrator loop activation)
The recommended test additions (if time permits) are:
- In-process Huey task unit tests using an in-memory SQLite database for the Huey backend
- Ansible Molecule scenario for
golden_state+deadman_switchroles bin/timmy-orchestrator.shstate-machine harness usingbats+ mocked Gitea API fixturegitea_client.pyunit-tests withurllibpatched and canned API payloads
Security Considerations
Banned provider enforcement: site.yml pre_tasks loop scans all provider configs (golden_state_providers) and fails placement if item.name in banned_providers. An ansible.cfg + custom preflight-provider-check.py ensures Anthropic/Claude family is NEVER deployed on any wizard. This is an guardrail; many scripts also grep for banned strings pre-commit.
Token handling: gitea_client.py discovers tokens from file-backed stores; tokens are never CLI args or environment variables exposed to child processes. All bin scripts source ~/.hermes/gitea_token_vps via heredoc-embedded path; tokens avoid shell expansion. Recommendation: tighten to 0600 permissions enforced by Ansible on token files.
Cron injection surface: cron/jobs.json is consumed by bin/cron-manager.sh; cron expression strings are blindly written to crontab. Any injection path there can execute arbitrary code as the user. PRs that modify cron/ must review with elevated scrutiny.
Deploy script privilege: deploy.sh writes under ~/.hermes/ and ~/.timmy/. The deployment boundary is the user account. If timmy-config is compromised (malicious PR), deploy.sh would plant poisoned config files that the next Hermes agent start will consume. Mitigation: PR review ONLY from trusted committers; CI runs soul_eval_gate.py which diffs the proposed config against golden rules forbidding remote base_urls and unknown TTS providers.
Ansible pull exposure: deploy_on_webhook.sh listens on port 9000 (/hooks/deploy-timmy-config). It is currently no auth — the endpoint accepts a shared secret check in the payload but that is weak. Gitea webhook secret SHOULD be validated; currently not. This is a pending hardening item.
Deadman switch runaway: deadman-fallback.py can re-deploy an earlier config snapshot if the heartbeat stops. It respects a --dry-run gate in staging but in prod it RNA mutates ~/.hermes/config.yaml. A bug could cycle config back to a vulnerable state. The cycle limiter (MAX_RETRIES=3) should be enforced vigorously.
Training data ingestion: training/ingest_trajectories.py walks the user's local ~/.hermes/sessions/ database. If a malicious session record is present, it can poison the training corpus. The consent_level field MUST be respected; build_curated.py rejects any pair with missing consent. This is a trust boundary for model fine-tuning; if crossed, poisoned weights could propagate to agent runs.
Performance Characteristics
Startup: deploy.sh is O(file count) copy; small (<0.5 s on SSD). Ansible pull (fleet deploy) is dominated by git clone (~2–3 s) + Ansible run (~5–8 s per host). Network-bound; no heavy CPU work.
Huey task latency: Huey runs with immediate=False (persistent queue). Latency is bounded by queue drain rate; single-worker can process 12–18 simple tasks/s; heavier tasks (session flush, token budget) can block the queue under high load. Queue size monitored by pipeline-freshness.sh.
Token accounting overhead: log_token_usage writes one line per-task to ~/.hermes/token_usage.jsonl. Each append locks briefly; negligible for TPS < 100. Database write to orchestration.db also performs一條 INSERT per task completion. Both are disk-bound but WAL mode; acceptable for daily operation; verified on macOS local APFS.
Gitea API rate limits: The VPS instance uses HTTP Basic API token without rate limiting in current 10k request/minute range. Tasks iterate over repos and open issues; polling every 2 minutes across 7 repos could hit soft limits. tasks.py has an exponential backoff on 429 response.
Bin script boot time: Shell scripts with embedded Python one-liners (python3 -c "...") have interpreter start cost (~200ms). Suboptimal but acceptable since orchestrator runs every 5 minutes. Candidate for refactor → compiled beef -> faster binary using static lib.
Training pipeline: ingesting 10k sessions → filtering → curated → pair-building → training is compute-bound by LoRA step AXOLOTL; data prep is memory-intensive but fits in 8 GB RAM. Pipeline is designed for offline batch; no time guarantees.
Ansible invariance check cost: Fleet convergence checks (--check) run every PR merge; a full fleet check is a network round-trip (~30 hosts) which takes ~15 s with local parallel = acceptable. The pre_tasks banned provider scan is a grep over files; sub-second.
Sidecar Boundary and Timmy-Home Relationship
The sidecar pattern is explicit: timmy-config owns the policy layer that configures Hermes; hermes-agent owns runtime execution environment (Python interpreter, tool sandboxes, model provider adapters). timmy-home is the user data overlay: personal memories, timmy-specific local state, .hermes/ symlink roots.
From README.md:
This repo is the canonical source of truth for Timmy's identity and harness overlay. Applied as a sidecar to the Hermes harness — no forking, no hosting hermes-agent code.
The boundary contract:
deploy.shwrites only to$HERMES_HOMEand$TIMMY_HOME; it never modifies$HERMES_HOME/hermes-agent/source treesorchestration.pyandtasks.pydynamically discover the Hermes install byHERMES_HOMEand import fromhermes_agentvirtualenv within it; they use only configuration overrides, never code mutationbin/scripts operate hermes via the CLI (hermes chat --yolo,hermes status) and via Gitea API; they do not edit any agent Python modulesansible/manages system-level services (cron, deadman, watchdog) and file placement; it deliberately avoids tampering with agent virtualenv contentsansible/roles/golden_stateinstalls a Cannibal provider chain constraint; it is a policy-enforcement overlay, not a code fork
In practical terms, when you run hermes after ./deploy.sh, the agent reads ~/.hermes/config.yaml that came from this repo. That config selects model providers, enables toolsets, sets delegation, privacy, memory limits. The agent executable itself lives in ~/.hermes/hermes-agent/venv/ and is managed by the user's package manager / pew / uv; timmy-config does not touch it.
timmy-home is distinct: it is the per-user interactive ground (notes, metrics cache, local workspace files, chat history). timmy-config is blanket over all machines; it is not user-specific session state. timmy-home may extend memory files (memories/), but those also originated in timmy-config and are overlaid, not replaced.
Sidecar failure contract: If timmy-config deployment fails but ~/.hermes/hermes-agent/ remains operable, the agent SHOULD continue running on the previous config. The sidecar must never make the harness unrecoverable. A failed deploy.sh or Ansible run leaves the harness running on the existing stable state; atom + symlink update is used to avoid partial writes.
Performance Characteristics
Deploy speed: deploy.sh copies 646 files (~15 MB total) in ~0.3–0.7 s on modern SSDs. Main bottleneck is YAML/JSON parsing (config_validator.py runs after copy).
- Key files:
config.yaml(~4 KB) parses viayaml.safe_loadin <5ms - Deployment then completes by touching
~/.timmy/SOUL.md(cold-cache ~0.4 ms)
Runtime overhead: tasks.py background tasks run inside Huey worker processes; each task is limited to 180 s timeout (default HERMES_TIMEOUT). The token_budget_enforcer hits SQLite with a simple SELECT sum(tokens) FROM usage WHERE day = today; aggregation over 10k rows is sub-10ms on local SSD.
Gitea API calls: Most gitea_client.py operations are GET /api/v1/repos/... which are served locally; typical latency 40–120 ms per call. The agent batch-worker pattern aims to minimize round trips. ops-panel.sh makes several queries concurrently but remains sub-second overall.
Processing time: training/ingest_trajectories.py processes a 24-hour session backlog (~8k sessions) in ~45 s on M3 Max; dominated by JSON deserialization and deduplication.
Memory footprint: The sidecar itself consumes negligible RAM (Python interpreter + config ~20 MB resident). The heavy runtime is the agent virtualenv (Claude/LLM inference); that is outside this repo's concern.
Concurrency control: deploy.sh is single-instance (no race); Ansible site.yml uses serial: 1 (converge hosts one at a time for noise reduction), but can be run in parallel for sub-roles like deadman_switch. Fleet deployments across 10 hosts complete in ~90 s serial, ~25 s with 4-way parallel.
Webhook latency: From PR merge to webhook delivery to deploy_on_webhook.sh = Gitea→HTTP POST (~0.5–2 s delay variable); subsequent ansible-pull run ~8 s. Mutation visible in ~10–15 s per target machine path.
Orchestration cache hits: The Huey result backend reads/writes a few KB per task; SQLite WAL caching keeps hot operations sub-millisecond. Task throughput limited more by Gitea API availability than local disk.