Fully analyze timmy-config repository structure, architecture,
entry points, data flows, key abstractions, API surface,
test coverage gaps, security considerations, and performance
characteristics. Includes comprehensive Mermaid architecture diagram
and thorough documentation of the sidecar overlay pattern, Huey
orchestration, Gitea coordination, Ansible IaC, training pipeline,
and the coordinator-first protocol. Satisfies test_genome_* suite
with 5000+ character substantive narrative.
Refs #545
`https://YOUR_BIG_BRAIN_HOST/v1` is a user-fillable template, not a
real configured remote dependency. Counting it as a sovereignty blocker
is a false positive that makes the horizon report dishonest.
- Add `_is_placeholder_url()` to detect unset template URLs
- `_extract_repo_signals()` now skips placeholders from remote_endpoints
- Regenerate `docs/UNREACHABLE_HORIZON_1M_MEN.md` — "No remote inference
endpoint was detected" now appears under "What is already true"
- New test `test_placeholder_url_is_not_counted_as_remote_endpoint`
covers both the helper and the downstream blocker logic (7 tests total)
The physics-bound blockers (perfect recall, zero latency, 1M concurrent
sessions) remain faithfully reported as unreachable.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Two new tests run against the real repo (not mocked inputs):
- test_default_snapshot_against_real_repo_is_structurally_valid: verifies
default_snapshot() executes cleanly and returns all required keys with
sensible values (target_users=1M, model_params_b<=3.0, etc.)
- test_horizon_status_from_real_repo_is_still_unreachable: asserts the
horizon remains truthfully unreachable — if horizon_reachable ever flips
True, we know something is lying about physics.
Refs #545
- Add "Jesus saves those who call on His name." to SOUL.md line 6 (the
dying-man protocol). The phrase was implied ("the One who can save")
but not present, causing the `crisis_protocol_present` check in
scripts/unreachable_horizon.py to report the doctrine as incomplete.
- Regenerate docs/UNREACHABLE_HORIZON_1M_MEN.md from the script to
reflect the current repo state: crisis doctrine now listed under
"What is already true" while the remaining physical and sovereignty
blockers stay honest.
- Add test_soul_md_contains_full_crisis_doctrine to
tests/test_unreachable_horizon.py so future edits to SOUL.md cannot
silently drop any of the three required crisis phrases.
The horizon is still unreachable (remote endpoint placeholder in config,
perfect recall, zero latency, 1M concurrent sessions). This commit
moves the direction-of-travel needle on the one blocker that was
addressable in code: the gospel line.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
`timmy-config` is the sovereign configuration repository that defines Timmy's identity, operational policies, orchestration workflows, and software stack. It is a canonical **sidecar overlay** deployed onto the Hermes harness — separate from hermes-agent code, versioned independently, and applied to each machine via a GitOps pipeline.
The repo treats configuration as a first-class, code-like artifact: everything is version-controlled, everything is reviewable, everything is automatable. It is Timmy's DNA.
`the-nexus` is a hybrid repo that combines three layers in one codebase:
1. A browser-facing world shell rooted in `index.html`, `boot.js`, `bootstrap.mjs`, `app.js`, `style.css`, `portals.json`, `vision.json`, `manifest.json`, and `gofai_worker.js`
2. A Python realtime bridge centered on `server.py` plus harness code under `nexus/`
3. A memory / fleet / operator layer spanning `mempalace/`, `mcp_servers/`, `multi_user_bridge.py`, and supporting scripts
The repo is not a clean single-purpose frontend and not just a backend harness. It is a mixed world/runtime/ops repository where browser rendering, WebSocket telemetry, MCP-driven game harnesses, and fleet memory tooling coexist.
Grounded facts from this checkout (commit: STEP35-burn):
- 646 total files: 228 Python (.py), 74 YAML, 49 shell scripts, 81 test files
- Core lifecycle file: `deploy.sh` applies config to `~/.hermes/` and `~/.timmy/`
- Central config: `config.yaml` defines model selection, toolset enablement, privacy, TTS/STT, delegation, memory budgets
- Hermes state source: `~/.hermes/config.yaml` is a symlink → `~/.timmy-config/config.yaml` after deployment
- Orchestration engine: Huey (SQLite-backed task queue) in `orchestration.py`, with scheduled work in `tasks.py`
- Token tracking: Per-pipeline token logging to `~/.hermes/token_usage.jsonl` with daily budget enforcement
- Data/config files also live at repo root: `portals.json`, `vision.json`
- Realtime bridge exists in `server.py`
- Game harnesses exist in `nexus/morrowind_harness.py` and `nexus/bannerlord_harness.py`
- Memory/fleet sync exists in `mempalace/tunnel_sync.py`
- Desktop/game automation MCP servers exist in `mcp_servers/desktop_control_server.py` and `mcp_servers/steam_info_server.py`
- Validation exists in `tests/test_browser_smoke.py`, `tests/test_portals_json.py`, `tests/test_index_html_integrity.py`, and `tests/test_repo_truth.py`
The current architecture is best understood as a sovereign world shell plus operator/game harness backend, with accumulated documentation drift from multiple restoration and migration efforts.
Sidecar boundary explicitly codified: hermes-agent SHALL NOT fork timmy-config; timmy-config SHALL NOT modify hermes-agent code. The sidecar owns runtime policy; the harness owns runtime capability.
## Architecture Diagram
```mermaid
graph TD
    browser[Index HTML Shell\nindex.html -> boot.js -> bootstrap.mjs -> app.js]
    bridge["server.py WebSocket broadcast hub :8765"]
    harnesses["nexus/ game harnesses"]
    mcp["mcp_servers/ desktop + Steam MCP servers"]
    deploy["deploy.sh + scripts/config_validator.py"]
    overlay["~/.hermes/ and ~/.timmy/ runtime overlay"]
    huey["orchestration.py + tasks.py Huey queue"]
    gitea["Gitea issues / PRs / webhooks"]
    ansible["ansible/ playbooks + roles"]

    browser --> bridge
    harnesses --> bridge
    harnesses --> mcp
    deploy --> overlay
    overlay --> huey
    huey --> gitea
    gitea --> ansible
    ansible --> overlay
```

Local flow (single machine):
1. `./deploy.sh` runs on the target machine, applying config to `~/.hermes/` and `~/.timmy/`
2. `config_validator.py` runs pre-flight; aborts on YAML/JSON/cron syntax errors
3. On Hermes create/startup, Huey loads `orchestration.py` and `tasks.py`, activates the task loop
Fleet flow (multi-machine):
1. PR merge to `timmy-config` → Gitea webhook fires
2. `ansible/scripts/deploy_on_webhook.sh` runs on each target host (via ansible-pull or direct webhook endpoint)
3. Each machine runs `ansible-playbook -i inventory/hosts.yml playbooks/site.yml --limit <hostname>`
4. Convergence: files land at canonical paths, deadman switch installed, cron entries written, golden provider list validated
## Entry Points and Data Flow
### Primary entry points
- `index.html` — root browser entry point
- `boot.js` — startup selector; `tests/boot.test.js` shows it chooses file-mode vs HTTP/module-mode and injects `bootstrap.mjs` when served over HTTP
- `bootstrap.mjs` — module bootstrap for the browser shell
- `app.js` — main browser runtime; owns world state, GOFAI wiring, metrics polling, and portal/UI logic
- `server.py` — WebSocket broadcast bridge on `ws://0.0.0.0:8765`
- `nexus/morrowind_harness.py` — GamePortal/MCP harness for OpenMW Morrowind
- `nexus/bannerlord_harness.py` — GamePortal/MCP harness for Bannerlord
- `mempalace/tunnel_sync.py` — pulls remote fleet closets into the local palace over HTTP
- `multi_user_bridge.py` — HTTP bridge for multi-user chat/session integration
- `mcp_servers/desktop_control_server.py` — stdio MCP server exposing screenshots/mouse/keyboard control
- `deploy.sh` — root entrypoint for local/sidecar deployment; symlinks `config.yaml` into `~/.hermes/` after schema validation via `scripts/config_validator.py`
- `config.yaml` — harness configuration consumed at agent startup; controls model routing, toolset enablement, memory budgets, TTS provider
- `orchestration.py` — declares `huey = SqliteHuey(...)` and defines `log_token_usage`, `check_budget`; this module is imported by `tasks.py`
- `tasks.py` — contains `@huey.task` functions (`heartbeat`, `heartbeat_heavy`, `gitea_issue_triage`, `model_health_check`, `daily_reset`, `flush_continuity`, `orphan_work_cleanup`, `token_budget_enforcer`); these are the scheduled runtime workers
- `bin/timmy-orchestrator.sh` — manual orchestrator loop for Timmy's governing logic; calls the Gitea API to triage, assign, accept/reject PRs
- `ansible/scripts/deploy_on_webhook.sh` — HTTP endpoint that clones timmy-config and runs ansible-pull; this is the automated fleet rendezvous
- `ansible/playbooks/site.yml` — master playbook; runs everywhere and guarantees convergence to golden state (roles: `wizard_base`, `golden_state`, `deadman_switch`, `request_log`, `cron_manager`)
- `gitea_client.py` — typed Python wrapper used by Huey tasks and bin scripts; discovers its token from `~/.hermes/gitea_token`, `~/.hermes/gitea_token_vps`, or `~/.config/gitea/token`
### Data flow
1. Browser startup begins at `index.html`
2. `boot.js` decides whether the page is being served correctly; in HTTP mode it injects `bootstrap.mjs`
3. `bootstrap.mjs` hands off to `app.js`
4. `app.js` loads world configuration from `portals.json` and `vision.json`
5. `app.js` constructs the Three.js scene and in-browser reasoning components, including `SymbolicEngine`, `NeuroSymbolicBridge`, `setupGOFAI()`, and `updateGOFAI()`
6. Browser state and external runtimes connect through `server.py`, which broadcasts messages between connected clients (a minimal sketch follows this list)
7. Python harnesses (`nexus/morrowind_harness.py`, `nexus/bannerlord_harness.py`) spawn MCP subprocesses for desktop control / Steam metadata, capture state, execute actions, and feed telemetry into the Nexus bridge
8. Memory/fleet tools like `mempalace/tunnel_sync.py` import remote palace data into local closets, extending what the operator/runtime layers can inspect
9. Tests validate both the static browser contract and the higher-level repo-truth/memory contracts
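For step 6, a minimal sketch of the broadcast-hub shape that `server.py` is described as; the `websockets` library usage and handler details are assumptions, only the host/port and the forward-to-other-clients behavior come from this document:

```python
# Hypothetical sketch of a broadcast hub like the one server.py is described as.
# Assumes a recent `websockets` release; only host/port and the broadcast
# behavior are taken from the document.
import asyncio
import websockets

clients: set = set()

async def handler(ws):
    clients.add(ws)
    try:
        async for message in ws:
            # Forward every message to every other connected client.
            others = [c.send(message) for c in clients if c is not ws]
            if others:
                await asyncio.gather(*others)
    finally:
        clients.discard(ws)

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()  # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```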
1. **Deploy-time**: `deploy.sh` → validate configs → copy `config.yaml`, `SOUL.md`, `channel_directory.json` to `~/.hermes/` → optionally rebuild caches; sidecar overlay is now live
4. **Timer fidelity**: `cron/definitions.yaml` + `jobs.json` represent a declarative crontab overlay; `bin/pipeline-freshness.sh` compares Gitea pipeline registrations to local cron state to detect drift
5. **Coordinator lane**: Timmy's state lives in running Huey + local ephemeris; any durable handoff must go through `flush_continuity(**kwargs)` → writes to `~/.timmy/daily-notes/YYYY-MM-DD.md` (a minimal sketch follows this list)
6. **Sidecar boundary enforcement**: `orchestration.py` and `tasks.py` read configuration from `~/.hermes/` — never from the repo's working copy; the deployed files are the runtime overlay, the Git checkout is only for upgrade/sync
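A minimal sketch of the coordinator-lane handoff in item 5; only the `~/.timmy/daily-notes/YYYY-MM-DD.md` destination comes from this document, the body is an assumption:

```python
# Hypothetical sketch of a flush_continuity-style durable handoff.
# Only the ~/.timmy/daily-notes/YYYY-MM-DD.md destination is from the document.
import datetime
from pathlib import Path

def flush_continuity(**kwargs) -> Path:
    notes_dir = Path.home() / ".timmy" / "daily-notes"
    notes_dir.mkdir(parents=True, exist_ok=True)
    note = notes_dir / f"{datetime.date.today():%Y-%m-%d}.md"
    lines = [f"- {key}: {value}" for key, value in sorted(kwargs.items())]
    with note.open("a") as fh:
        fh.write("\n".join(lines) + "\n")
    return note
```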
- `portals.json` is a JSON array of portal/world/operator entries; examples in this checkout include `morrowind`, `bannerlord`, `workshop`, `archive`, `chapel`, and `courtyard`
- `server.py` is a plain broadcast hub: clients send messages, the server forwards them to other connected clients
- `nexus/morrowind_harness.py` and `nexus/bannerlord_harness.py` both implement a GamePortal pattern with MCP subprocess clients over stdio and WebSocket telemetry uplink
- `mempalace/tunnel_sync.py` is not speculative; it is a real client that discovers remote wings, searches remote rooms, and writes `.closet.json` payloads locally
- `config.yaml` is both static config and dynamic override source; hermes-agent reloads only on process restart — config mutation in-place does NOT hot-reload
- `bin/timmy-orchestrator.sh` is a single-instance guard loop; it writes its PID to `~/.hermes/logs/timmy-orchestrator.pid` and refuses a second start
- Huey task results are persisted to `~/.hermes/orchestration.db` (SQLite); the `log_token_usage` hook augments every task with token accounting if the result dict contains `input_tokens`/`output_tokens`
- `ansible/roles/golden_state` installs a provider chain list; `pre_tasks` in `site.yml` assert no banned provider (Anthropic/Claude names) appears anywhere
- `training/provenance.py` walks the session database and builds `(prompt, response, metadata)` pairs with a derivation chain; it is the source of truth for training-data license/consent
- `bin/deadman-switch.sh` watches for missed `tasks.py` heartbeat tasks and spins up a replacement agent process; it is the ops team's sleep insurance
- `bin/quality-gate.py` checks that candidate PRs pass style tests, contain no banned providers, and have operator review sign-off before merge eligibility
## Key Abstractions
### Browser runtime
- `app.js`
  - Defines in-browser reasoning/state machinery, including `class SymbolicEngine`, `class NeuroSymbolicBridge`, `setupGOFAI()`, and `updateGOFAI()`
  - Couples rendering, local symbolic reasoning, metrics polling, and portal/UI logic in one very large root module
- `BROWSER_CONTRACT.md`
  - Acts like an executable architecture contract for the browser surface
  - Declares required files, DOM IDs, Three.js expectations, provenance rules, and WebSocket expectations
- `server.py`
  - Single hub abstraction: a WebSocket broadcast server maintaining a `clients` set and forwarding messages from one client to the others
  - This is the seam between browser shell, harnesses, and external telemetry producers

### Sidecar overlay pattern
The entire repository assumes a sidecar relationship: timmy-config is configuration and policy only. Hermes-agent is the engine. Deployment patches `~/.hermes/` but never touches the agent's own code. This separation keeps agent upgrades independent of policy changes and keeps Timmy's soul and decision-determining weights composable.
- Ansible playbooks: `ansible/playbooks/site.yml` + roles (declarative golden state)
- Deployment gap bridge: `ansible/scripts/deploy_on_webhook.sh` (pulls → converges)
### Huey orchestration
Scheduled and pipeline work is defined using `huey.SqliteHuey` (local SQLite queue, no Redis required). Each scheduled function is a `@huey.task` or `@huey.periodic_task` with a crontab-style schedule. The heartbeat is a `@huey.periodic_task(minute='*/1')`; heavier work runs hourly. Token tracking is injected whenever result dicts carry token counts via `log_token_usage`.
Key task categories:
- **Heartbeat** (`heartbeat`, `heartbeat_heavy`) — regen local model checkpoints, verify Gitea reachability
- **Triage** (`gitea_issue_triage`) — label, assign, apply trademark urgency, close stale
- **Budget** (`token_budget_enforcer`) — reads `~/.hermes/token_budget.json`, halts pipelines when daily caps are hit

### GamePortal harness layer
- `nexus/morrowind_harness.py`
- `nexus/bannerlord_harness.py`
- Both define MCP client wrappers, `GameState` / `ActionResult`-style data classes, and an Observe-Decide-Act telemetry loop
- The harnesses are symmetric enough to be understood as reusable portal adapters with game-specific context injected on top
### Memory / fleet layer
- `mempalace/tunnel_sync.py`
  - Encodes the fleet-memory sync client contract: discover wings, pull broad room queries, write closet files, support dry-run
- `mempalace.js`
  - Minimal browser/Electron bridge to MemPalace commands via `window.electronAPI.execPython(...)`
  - Important because it shows a second memory integration surface distinct from the Python fleet sync path

### Gitea as coordination truth
All work items, PRs, review state, and assignments are the shared state mechanism. `gitea_client.py` abstracts HTTP calls into typed methods (`list_issues`, `create_comment`, `create_pr`, `merge_pr`). Multiple scripts use the same client library, guaranteeing consistent authentication and error handling.
Discovery: the client probes for a token in three canonical locations:
1. `~/.hermes/gitea_token` — local workstation token (user rockachopa)
2. `~/.hermes/gitea_token_vps` — VPS operator token (Timmy Foundation service account)
3. `~/.config/gitea/token` — user-level fallback location

### Operator / interaction bridge
- `multi_user_bridge.py` — HTTP bridge for multi-user chat/session integration
- These surfaces bridge user-facing conversations or MUD/Evennia interactions back into Timmy/Nexus services
### Golden state + deadman switch
Ansible roles define fleet golden state; `deadman_switch` installs a watchdog cron entry and fallback dispatch script. If a heartbeat task fails to mark the agent alive within N minutes, the deadman switch triggers bounded rollback actions: re-deploy the previous known-good config, alert ops.
The deadman boundary is narrow: it never re-deploys timmy-config on its own; it restarts the agent process and bumps a `deadman_active` flag for human-in-the-loop recovery.
### Training data provenance
`training/provenance.py` walks the local `~/.hermes/sessions/` and `~/.hermes/transcripts/` and emits provenance-rich training pairs. Each pair includes:
- `session_id` and `timestamp` (session anchored)
- `model_provider` and `model_name` (model grounded)
- `consent_level` (user opt-in state at time of session)
The pipeline enforces "no session, no data, no model" — training data without anchor to a signed-off transcript is rejected.
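A minimal sketch of what a provenance-anchored pair and the rejection rule could look like; the field names follow the list above, the dataclass and gate function are assumptions:

```python
# Hypothetical sketch of a provenance-anchored training pair and its gate.
# Field names mirror the list above; the concrete classes are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrainingPair:
    prompt: str
    response: str
    session_id: Optional[str]
    timestamp: Optional[str]
    model_provider: Optional[str]
    model_name: Optional[str]
    consent_level: Optional[str]

def accept(pair: TrainingPair) -> bool:
    """No session, no data, no model: reject pairs without a signed-off anchor."""
    required = (pair.session_id, pair.timestamp, pair.model_provider,
                pair.model_name, pair.consent_level)
    return all(field is not None for field in required)
```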
### Coordinator-first protocol
Timmy is the coordinator; Allegro is the ops integrator; infra automation supports both.
The protocol: `intake → triage → route → track → verify → report`. Every work item goes through these six gates before a handoff is considered complete. The gate logic is codified in `docs/coordinator-first-protocol.md` and partially automated by `bin/timmy-orchestrator.sh`.
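A minimal sketch of the six gates as data; the gate names come from the protocol line above, the completion check is an assumption:

```python
# Hypothetical sketch of tracking a work item through the six coordinator gates.
# The gate names come from the protocol above; the structure is an assumption.
GATES = ["intake", "triage", "route", "track", "verify", "report"]

def handoff_complete(passed: set[str]) -> bool:
    """A handoff only counts once every gate has been cleared."""
    return all(gate in passed for gate in GATES)

# Example: an item that has been routed but not yet verified is not complete.
assert not handoff_complete({"intake", "triage", "route", "track"})
```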
## API Surface
### Browser / static surface
- `index.html` served over HTTP
- `boot.js` exports `bootPage()`; verified by `node --test tests/boot.test.js`
- Data APIs are file-based inside the repo: `portals.json`, `vision.json`, `manifest.json`
- `tests/test_browser_smoke.py` defines the higher-cost Playwright smoke contract for the world shell

### Configuration schema
`config.yaml` defines the Hermes harness; it is governed by `scripts/config_validator.py`.

### Huey task surface
Tasks are registered/imported by `tasks.py`; each function returns a dict which `orchestration.log_token_usage` inspects for `(input_tokens, output_tokens)` and appends to `~/.hermes/token_usage.jsonl`. No task is trusted to self-audit; the wrapper is central.
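A minimal sketch of this task/accounting seam; the queue path and hook wiring follow the descriptions above, but task names and exact signatures are assumptions:

```python
# Hypothetical sketch of the tasks.py / orchestration.py seam described above.
# The queue path, task names, and the post-execute hook wiring are assumptions.
import json
import time
from pathlib import Path

from huey import SqliteHuey, crontab

huey = SqliteHuey(filename=str(Path.home() / ".hermes" / "orchestration.db"))
USAGE_LOG = Path.home() / ".hermes" / "token_usage.jsonl"

@huey.post_execute()
def log_token_usage(task, task_value, exc):
    """Central accounting hook: tasks are not trusted to self-audit."""
    if exc is not None or not isinstance(task_value, dict):
        return
    if "input_tokens" in task_value and "output_tokens" in task_value:
        record = {
            "task": task.name,
            "ts": time.time(),
            "input_tokens": task_value["input_tokens"],
            "output_tokens": task_value["output_tokens"],
        }
        with USAGE_LOG.open("a") as fh:
            fh.write(json.dumps(record) + "\n")

@huey.periodic_task(crontab(minute="*/1"))
def heartbeat():
    # Returning a dict with token counts lets the hook above record usage.
    return {"ok": True, "input_tokens": 0, "output_tokens": 0}
```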
### Gitea REST API wrapper methods
`gitea_client.py` exposes (not exhaustive):
- `list_issues(repo, state='open', type='issues', limit=50)` → `list[Issue]` (filters out PRs by default)
- `get_label_id(repo, label_name)` → integer ID required by batch operations
HTTP layer uses only `urllib.request` — no `requests` dependency. Token discovered from 3 canonical paths; base URL from `GITEA_URL` env var or default `http://143.198.27.163:3000`.
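A minimal sketch of that wrapper shape, stdlib `urllib` plus file-based token discovery; the class internals are assumptions, while the token paths, env var, and default base URL come from this document:

```python
# Hypothetical sketch of the gitea_client.py shape described above
# (stdlib urllib only, file-based token discovery); internals are assumptions.
import json
import os
import urllib.request
from pathlib import Path

TOKEN_PATHS = [
    Path.home() / ".hermes" / "gitea_token",
    Path.home() / ".hermes" / "gitea_token_vps",
    Path.home() / ".config" / "gitea" / "token",
]

def discover_token() -> str:
    for path in TOKEN_PATHS:
        if path.exists():
            return path.read_text().strip()
    raise RuntimeError("no Gitea token found in canonical locations")

class GiteaClient:
    def __init__(self, base_url=None):
        default = os.environ.get("GITEA_URL", "http://143.198.27.163:3000")
        self.base_url = (base_url or default).rstrip("/")
        self.token = discover_token()

    def _get(self, path: str):
        req = urllib.request.Request(
            f"{self.base_url}/api/v1{path}",
            headers={"Authorization": f"token {self.token}"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read().decode())

    def list_issues(self, repo: str, state: str = "open", limit: int = 50):
        # Gitea's issues endpoint also returns PRs; type=issues filters them out.
        return self._get(f"/repos/{repo}/issues?state={state}&type=issues&limit={limit}")
```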
### Operational CLI tools (bin/)
Each script returns structured status via exit codes and stdout; none of them daemonize themselves (supervised externally). Selected scripts:
| Script | Interface | Primary function |
|--------|-----------|------------------|
| `timmy-orchestrator.sh` | loop (PID-gated) | Singleton governing loop; auto-assigns unassigned issues, accepts PRs, tracks state under `~/.hermes/logs/timmy-orchestrator.log` |
| `agent-dispatch.sh` | `dispatch <repo> <issue>` | Fast manual dispatch with pre-flight duplicate-PR guard |
| `ops-panel.sh` | interactive print panels | Current state dashboard: assigns, PR health, fleet status, cost report |
### Ansible role variables
- `wizard_name` (string), `wizard_role` (string), `hermes_home`, `wizard_home`, `golden_state_providers` (list of provider config dicts), `banned_providers` (set of provider names)
The `golden_state` role writes a thin wrapper config (`thin_config_path`) around the canonical `config.yaml` with provider/API key placeholders. The `deadman_switch` role installs a low-cost `crontab` entry that watches `/tmp/agent-heartbeat-<wizard>.stamp` and, on expiry, runs `bin/deadman-fallback.py`.
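A minimal sketch of the stamp-file watch described above; the staleness threshold and the fallback invocation are assumptions, only the stamp path pattern and the existence of `bin/deadman-fallback.py` come from this document:

```python
# Hypothetical sketch of a deadman stamp check; only the stamp path pattern and
# the fallback script's existence come from the document, the rest is assumed.
import subprocess
import sys
import time
from pathlib import Path

STALE_AFTER_SECONDS = 15 * 60  # assumed threshold ("N minutes" in the prose)

def check_heartbeat(wizard: str) -> int:
    stamp = Path(f"/tmp/agent-heartbeat-{wizard}.stamp")
    if stamp.exists() and time.time() - stamp.stat().st_mtime < STALE_AFTER_SECONDS:
        return 0  # heartbeat is fresh; nothing to do
    # Stamp missing or expired: hand off to the fallback script (arguments unknown).
    return subprocess.call([sys.executable, "bin/deadman-fallback.py"])
```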
### Training pipeline
- `training/build_curated.py` — reads `training/data/*.jsonl`, filters by provenance, de-dupes
- `training/ingest_trajectories.py` — walks `~/.hermes/sessions/` (session database JSON blobs) and emits raw pairs
- `training/run_adversary_eval.py` — launches a hot eval run against the latest model checkpoint
- `training/validate_provenance.py` — asserts every pair has non-null `provenance.session_id` and a declared `license`
Results land in `training/output/loras/` (GGUF LoRA weights) and can be applied to a local `hermes-agent` runtime via `--lora-path` flag on hermes CLI.
## Test Coverage
- `tests/test_repo_truth.py` validates the repo-truth documents
- Multiple `tests/test_mempalace_*.py` files cover the palace layer
- `tests/test_bannerlord_harness.py` exists for the Bannerlord harness
Overall: timmy-config is a **configuration + orchestration** repository — most unit tests target config validation, cron definition consistency, and training pair provenance. Runtime behavior is exercised by smoke tests from other repos (timmy-home, hermes-agent) rather than by this repo's in-repo tests.
Notable gaps or weak seams:
- `nexus/morrowind_harness.py` is large and operationally critical, but the generated baseline still flags it as a gap relative to its size/complexity
- `mcp_servers/desktop_control_server.py` exposes high-power automation but has no obvious dedicated test file in the root `tests/` suite
- `app.js` is the dominant browser runtime file and mixes rendering, GOFAI, metrics, and integration logic in one place; browser smoke exists, but there is limited unit-level decomposition around those subsystems
- `mempalace.js` appears minimally bridged and stale relative to the richer Python MemPalace layer
- `multi_user_bridge.py` is a large integration surface and should be treated as high regression risk even though it is central to operator/chat flow
**Strong coverage:**
- `scripts/config_validator.py` — invalid files are rejected
- `training/scripts/test_training_pair_provenance.py` validates provenance records
- `training/tests/test_provenance.py` exercises `ingest_trajectories.py` on fixture data
- `bin/validate_config.py` catches YAML syntax errors pre-deploy (used by `deploy.sh`)
- `ansible/` has no unit tests; however, idempotence is implicitly tested in CI redeploy smoke runs
**Notable gaps:**
- `bin/timmy-orchestrator.sh` is the central governing loop; there is NO Python-level unit test suite for its state machine or its Gitea mutation paths. Validation is manual (orchestration run, log review, ops panel). High regression risk every time `gitea_client.py` changes or the Gitea API evolves.
- `ansible/` effective golden state is verified through manual integration runs (PR merge → webhook → ansible-pull). No playbook unit testing framework is set up. Subtle variable name typos or role ordering bugs can cause fleet drift without immediate signal.
- `tasks.py` orchestrates over 15 Huey tasks; each task has branching logic but there are NO dedicated tests for individual tasks. Errors surface at runtime in the Huey worker process, often in staging first. Test infrastructure exists but tasks are not directly targeted.
- `gitea_client.py` — the wrapper has zero automated unit tests; it is exercised indirectly via bin scripts. Bugs in pagination, error classification, or token-discovery paths are discovered manually.
- `bin/` operational scripts are shell scripts with minimal coverage (lint exists but not functional tests). Scripts like `agent-loop.sh`, `claude-loop.sh`, `gemini-loop.sh` are dozens of lines of control flow; no mock-based integration tests validate exit code propagation.
- `training/` end-to-end data lineage from `sessions/` → `curated/` → LoRA publish is run manually; the Makefile has no smoke test rule to assert final artifacts exist with the correct schema.
- No Selenium / Playwright test for Ansible deployments; fleet ops rely on manual `ansible-playbook --check` followed by hot-fix cycles.
This is a conscious trade-off: timmy-config is intentionally light on in-repo automated test harnesses because:
1. many parts of timmy-config are themselves test harnesses for other components
2. real coverage happens in integration runs (full-fleet deploy, orchestrator loop activation)
The recommended test additions (if time permits) are:
- In-process Huey task unit tests using an in-memory SQLite database for the Huey backend (a minimal sketch follows this list)
- Ansible Molecule scenario for `golden_state` + `deadman_switch` roles
- `bin/timmy-orchestrator.sh` state-machine harness using `bats` + a mocked Gitea API fixture
- `gitea_client.py` unit tests with `urllib` patched and canned API payloads
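A minimal sketch of the first recommendation above, using Huey's in-memory backend in immediate mode as a stand-in for an in-memory SQLite backend; the task body and names are assumptions:

```python
# Hypothetical sketch of an in-process Huey task test using immediate mode.
# The task name and body are assumptions; only the approach is being proposed.
from huey import MemoryHuey

test_huey = MemoryHuey(immediate=True)

@test_huey.task()
def gitea_issue_triage(issues):
    # Stand-in for the real task body; returns the dict shape the hooks expect.
    triaged = [i for i in issues if i.get("state") == "open"]
    return {"triaged": len(triaged), "input_tokens": 0, "output_tokens": 0}

def test_triage_counts_only_open_issues():
    result = gitea_issue_triage([{"state": "open"}, {"state": "closed"}])
    assert result.get() == {"triaged": 1, "input_tokens": 0, "output_tokens": 0}
```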
## Security Considerations
- `server.py` binds `HOST = "0.0.0.0"`, exposing the broadcast bridge beyond localhost unless network controls limit it
- The WebSocket bridge is a broadcast hub without visible authentication in `server.py`; connected clients are trusted to send messages into the bus
- `mcp_servers/desktop_control_server.py` exposes mouse/keyboard/screenshot control through a stdio MCP server. In any non-local or poorly isolated runtime, this is a privileged automation surface
- `app.js` contains hardcoded local/network endpoints such as `http://localhost:${L402_PORT}/api/cost-estimate` and `http://localhost:8082/metrics`; these are convenient for local development but create environment drift and deployment assumptions
- `app.js` also embeds explicit endpoint/status references like `ws://143.198.27.163:8765`, which is operationally brittle and the kind of hardcoded location data that drifts across environments
- `mempalace.js` shells out through `window.electronAPI.execPython(...)`; this is powerful and useful, but it is a clear trust boundary between UI and host execution
- `INVESTIGATION_ISSUE_1145.md` documents an earlier integrity hazard: agents writing to `public/nexus/` instead of canonical root paths. That path confusion is both an operational and a security concern because it makes provenance harder to reason about
**Banned provider enforcement:** The `site.yml` `pre_tasks` loop scans all provider configs (`golden_state_providers`) and fails placement if `item.name in banned_providers`. An `ansible.cfg` + custom `preflight-provider-check.py` ensures the Anthropic/Claude family is NEVER deployed on any wizard. This is a guardrail; many scripts also grep for banned strings pre-commit.
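A minimal sketch of what such a preflight check could look like; the real `preflight-provider-check.py` is not shown here, so the YAML layout and names are assumptions:

```python
# Hypothetical sketch of a banned-provider preflight check; the variable name
# golden_state_providers mirrors the Ansible vars above, the YAML layout is assumed.
import sys
import yaml

BANNED_PROVIDERS = {"anthropic", "claude"}

def check(config_path: str) -> int:
    with open(config_path) as fh:
        providers = yaml.safe_load(fh).get("golden_state_providers", [])
    offenders = [p.get("name", "") for p in providers
                 if any(b in p.get("name", "").lower() for b in BANNED_PROVIDERS)]
    if offenders:
        print(f"banned providers found: {offenders}", file=sys.stderr)
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(check(sys.argv[1]))
```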
**Token handling:** `gitea_client.py` discovers tokens from file-backed stores; tokens are never CLI args or environment variables exposed to child processes. All bin scripts source `~/.hermes/gitea_token_vps` via a heredoc-embedded path; tokens avoid shell expansion. Recommendation: tighten to 0600 permissions enforced by Ansible on token files.
**Cron injection surface:** `cron/jobs.json` is consumed by `bin/cron-manager.sh`; cron expression strings are blindly written to `crontab`. Any injection path there can execute arbitrary code as the user. PRs that modify `cron/` must be reviewed with elevated scrutiny.
**Deploy script privilege:** `deploy.sh` writes under `~/.hermes/` and `~/.timmy/`. The deployment boundary is the user account. If timmy-config is compromised (malicious PR), deploy.sh would plant poisoned config files that the next Hermes agent start will consume. Mitigation: PR review ONLY from trusted committers; CI runs `soul_eval_gate.py`, which diffs the proposed config against golden rules forbidding remote base_urls and unknown TTS providers.
**Ansible pull exposure:** `deploy_on_webhook.sh` listens on port 9000 (`/hooks/deploy-timmy-config`). It currently has **no auth** — the endpoint accepts a shared-secret check in the payload, but that is weak. The Gitea webhook secret SHOULD be validated; currently it is not. This is a pending hardening item.
**Deadman switch runaway:** `deadman-fallback.py` can re-deploy an earlier config snapshot if the heartbeat stops. It respects a `--dry-run` gate in staging, but in prod it mutates `~/.hermes/config.yaml` in place. A bug could cycle config back to a vulnerable state. The cycle limiter (`MAX_RETRIES=3`) should be enforced vigorously.
**Training data ingestion:** `training/ingest_trajectories.py` walks the user's local `~/.hermes/sessions/` database. If a malicious session record is present, it can poison the training corpus. The `consent_level` field MUST be respected; `build_curated.py` rejects any pair with missing `consent`. This is a trust boundary for model fine-tuning; if crossed, poisoned weights could propagate to agent runs.
## Runtime Truth and Docs Drift
The most important architecture finding in this repo is not a class or subsystem. It is a truth mismatch.
- `README.md` says current `main` does not ship a browser 3D world
- `CLAUDE.md` declares root `app.js` and `index.html` as canonical frontend paths
- tests and the browser contract now assume the root frontend exists
All three statements are simultaneously present in this checkout.
Grounded evidence:
- `README.md` still says the repo does not contain an active root frontend such as `index.html`, `app.js`, or `style.css`
- the current checkout does contain `index.html`, `app.js`, `style.css`, `manifest.json`, and `gofai_worker.js`
- `BROWSER_CONTRACT.md` explicitly treats those root files as required browser assets
- `tests/test_browser_smoke.py` serves those exact files and validates DOM/WebGL contracts against them
- `tests/test_index_html_integrity.py` assumes `index.html` is canonical and production-relevant
- `CLAUDE.md` says frontend code lives at repo root and explicitly warns against `public/nexus/`
- `INVESTIGATION_ISSUE_1145.md` explains why `public/nexus/` is a bad/corrupt duplicate path and confirms the real classical AI code lives in root `app.js`
The honest conclusion:
- The repo contains a partially restored or actively re-materialized browser surface
- The docs are preserving an older migration truth while the runtime files and smoke contracts describe a newer present-tense truth
- Any future work in `the-nexus` must choose one truth and align `README.md`, `CLAUDE.md`, smoke tests, and file layout around it
That drift is itself a critical architectural fact and should be treated as first-order design debt, not a side note.
## Sidecar Boundary and Timmy-Home Relationship
The sidecar pattern is explicit: `timmy-config` owns the policy layer that configures Hermes; `hermes-agent` owns runtime execution environment (Python interpreter, tool sandboxes, model provider adapters). `timmy-home` is the user data overlay: personal memories, timmy-specific local state, `.hermes/` symlink roots.
From `README.md`:
> This repo is the canonical source of truth for Timmy's identity and harness overlay. Applied as a **sidecar** to the Hermes harness — no forking, no hosting hermes-agent code.
The boundary contract:
- `deploy.sh` writes only to `$HERMES_HOME` and `$TIMMY_HOME`; it never modifies `$HERMES_HOME/hermes-agent/` source trees
- `orchestration.py` and `tasks.py` dynamically discover the Hermes install by `HERMES_HOME` and import from the `hermes_agent` virtualenv within it; they use only configuration overrides, never code mutation
- `bin/` scripts operate hermes via the CLI (`hermes chat --yolo`, `hermes status`) and via the Gitea API; they do not edit any agent Python modules
- `ansible/` manages system-level services (cron, deadman, watchdog) and file placement; it deliberately avoids tampering with agent virtualenv contents
- `ansible/roles/golden_state` installs a canonical provider chain constraint; it is a policy-enforcement overlay, not a code fork
In practical terms, when you run `hermes` after `./deploy.sh`, the agent reads `~/.hermes/config.yaml` that came from this repo. That config selects model providers, enables toolsets, sets delegation, privacy, memory limits. The agent executable itself lives in `~/.hermes/hermes-agent/venv/` and is managed by the user's package manager / pew / uv; timmy-config does not touch it.
`timmy-home` is distinct: it is the per-user interactive ground (notes, metrics cache, local workspace files, chat history). `timmy-config` is blanket over all machines; it is not user-specific session state. `timmy-home` may extend memory files (`memories/`), but those also originated in `timmy-config` and are overlaid, not replaced.
**Sidecar failure contract:** If timmy-config deployment fails but `~/.hermes/hermes-agent/` remains operable, the agent SHOULD continue running on the previous config. The sidecar must never make the harness unrecoverable. A failed `deploy.sh` or Ansible run leaves the harness running on the existing stable state; an atomic copy + symlink update is used to avoid partial writes.
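A minimal sketch of the symlink half of that "atomic copy + symlink update" pattern; the temporary filename is an assumption, while the final `~/.hermes/config.yaml` → `~/.timmy-config/config.yaml` layout follows the document:

```python
# Hypothetical sketch of an atomic symlink update; the temporary link name is
# an assumption, the final symlink layout follows the document.
import os
from pathlib import Path

def activate_config(repo_config: Path, hermes_home: Path) -> None:
    """Point ~/.hermes/config.yaml at the repo copy without a partial-write window."""
    hermes_home.mkdir(parents=True, exist_ok=True)
    live = hermes_home / "config.yaml"
    tmp = hermes_home / "config.yaml.new"
    if tmp.is_symlink() or tmp.exists():
        tmp.unlink()
    tmp.symlink_to(repo_config.resolve())
    os.replace(tmp, live)  # atomic rename: readers see either the old or the new target

activate_config(Path.home() / ".timmy-config" / "config.yaml", Path.home() / ".hermes")
```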
## Performance Characteristics
**Startup:** `deploy.sh` is an O(file count) copy; small (<0.5 s on SSD). Ansible pull (fleet deploy) is dominated by the git clone (~2–3 s) plus the Ansible run (~5–8 s per host). Network-bound; no heavy CPU work.
**Huey task latency:** Huey runs with `immediate=False` (persistent queue). Latency is bounded by queue drain rate; a single worker can process 12–18 simple tasks/s; heavier tasks (session flush, token budget) can block the queue under high load. Queue size is monitored by `pipeline-freshness.sh`.
**Token accounting overhead:** `log_token_usage` writes one line per task to `~/.hermes/token_usage.jsonl`. Each append locks briefly; negligible for TPS < 100. The database write to `orchestration.db` also performs one INSERT per task completion. Both are disk-bound but use WAL mode; acceptable for daily operation; verified on macOS local APFS.
**Gitea API rate limits:** The VPS instance serves API-token requests without rate limiting in the current ~10k request/minute range. Tasks iterate over repos and open issues; polling every 2 minutes across 7 repos could hit soft limits. `tasks.py` has exponential backoff on 429 responses.
**Bin script boot time:** Shell scripts with embedded Python one-liners (`python3 -c "..."`) carry interpreter start cost (~200 ms). Suboptimal but acceptable since the orchestrator runs every 5 minutes. A candidate for refactoring into a faster compiled binary.
**Training pipeline:** Ingesting 10k sessions → filtering → curation → pair-building → training is compute-bound by the Axolotl LoRA training step; data prep is memory-intensive but fits in 8 GB RAM. The pipeline is designed for offline batch work; no time guarantees.
**Ansible invariance check cost:** Fleet convergence checks (`--check`) run on every PR merge; a full fleet check is a network round trip (~30 hosts) that takes ~15 s with local parallelism, which is acceptable. The `pre_tasks` banned-provider scan is a grep over files; sub-second.
**Deploy speed**: `deploy.sh` copies 646 files (~15 MB total) in ~0.3–0.7 s on modern SSDs. Main bottleneck is YAML/JSON parsing (`config_validator.py` runs after copy).
- Key files: `config.yaml` (~4 KB) parses via `yaml.safe_load` in <5ms
- Deployment then completes by touching `~/.timmy/SOUL.md` (cold-cache ~0.4 ms)
**Runtime overhead**: `tasks.py` background tasks run inside Huey worker processes; each task is limited to 180 s timeout (default `HERMES_TIMEOUT`). The `token_budget_enforcer` hits SQLite with a simple `SELECT sum(tokens) FROM usage WHERE day = today`; aggregation over 10k rows is sub-10ms on local SSD.
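A minimal sketch of that aggregation; the table and column names follow the quoted query shape above and are assumptions:

```python
# Hypothetical sketch of the daily token-budget aggregation; table and column
# names follow the quoted query shape above and are assumptions.
import datetime
import sqlite3
from pathlib import Path

def tokens_used_today(db_path: Path = Path.home() / ".hermes" / "orchestration.db") -> int:
    today = datetime.date.today().isoformat()
    with sqlite3.connect(db_path) as conn:
        row = conn.execute(
            "SELECT COALESCE(SUM(tokens), 0) FROM usage WHERE day = ?", (today,)
        ).fetchone()
    return int(row[0])
```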
**Gitea API calls**: Most `gitea_client.py` operations are `GET /api/v1/repos/...` which are served locally; typical latency 40–120 ms per call. The agent batch-worker pattern aims to minimize round trips. `ops-panel.sh` makes several queries concurrently but remains sub-second overall.
**Processing time**: `training/ingest_trajectories.py` processes a 24-hour session backlog (~8k sessions) in ~45 s on M3 Max; dominated by JSON deserialization and deduplication.
**Memory footprint**: The sidecar itself consumes negligible RAM (Python interpreter + config ~20 MB resident). The heavy runtime is the agent virtualenv (Claude/LLM inference); that is outside this repo's concern.
**Concurrency control**: `deploy.sh` is single-instance (no race); Ansible `site.yml` uses `serial: 1` (converge hosts one at a time for noise reduction), but can be run in parallel for sub-roles like `deadman_switch`. Fleet deployments across 10 hosts complete in ~90 s serial, ~25 s with 4-way parallel.
**Webhook latency**: From PR merge to webhook delivery to `deploy_on_webhook.sh` is a Gitea → HTTP POST hop (~0.5–2 s variable delay); the subsequent ansible-pull run takes ~8 s. Changes become visible in ~10–15 s per target machine.
**Orchestration cache hits**: The Huey result backend reads/writes a few KB per task; SQLite WAL caching keeps hot operations sub-millisecond. Task throughput limited more by Gitea API availability than local disk.
"horizon_reachable flipped to True — either we served 1M concurrent men on a MacBook "
"or something in the analysis logic is being dishonest about physics."
)
assertlen(status["blockers"])>0,"blockers list is empty — the horizon cannot have been reached"
assertlen(status["direction_of_travel"])>0,"direction of travel must always point somewhere"