Compare commits

..

13 Commits

Author SHA1 Message Date
Alexander Whitestone
e6279b856a Merge remote-tracking branch 'origin/main' into fix/680 2026-04-15 21:50:46 -04:00
601c5fe267 Merge pull request 'research: Long Context vs RAG Decision Framework (backlog item #4.3)' (#750) from research/long-context-vs-rag into main 2026-04-16 01:39:55 +00:00
Alexander Whitestone
b3dd906805 docs: add fleet-ops genome analysis (#680) 2026-04-15 21:36:08 -04:00
6222b18a38 research: Long Context vs RAG Decision Framework (backlog item #4.3)
Some checks are pending
Smoke Test / smoke (pull_request) Waiting to run
Highest-ratio research item (Impact:4, Effort:1, Ratio:4.0).
Covers decision matrix for stuffing vs RAG, our stack constraints,
context budgeting, progressive loading, and smart compression.
2026-04-15 16:38:07 +00:00
10fd467b28 Merge pull request 'fix: resolve v2 harness import collision with explicit path loading (#716)' (#748) from burn/716-1776264183 into main 2026-04-15 16:04:04 +00:00
ba2d365669 fix: resolve v2 harness import collision with explicit path loading (closes #716)
Some checks are pending
Smoke Test / smoke (pull_request) Waiting to run
2026-04-15 11:46:37 -04:00
5a696c184e Merge pull request 'feat: add NH Broadband install packet scaffold (closes #740)' (#741) from sprint/issue-740 into main 2026-04-15 11:57:34 +00:00
Alexander Whitestone
90d8daedcf feat: add NH Broadband install packet scaffold (closes #740)
Some checks failed
Smoke Test / smoke (pull_request) Failing after 19s
2026-04-15 07:33:01 -04:00
3016e012cc Merge PR #739: feat: add laptop fleet planner scaffold (#530) 2026-04-15 06:17:19 +00:00
60b9b90f34 Merge PR #738: feat: add Know Thy Father epic orchestrator 2026-04-15 06:12:05 +00:00
Alexander Whitestone
c818a30522 feat: add laptop fleet planner scaffold (#530)
Some checks failed
Smoke Test / smoke (pull_request) Failing after 30s
2026-04-15 02:11:31 -04:00
Alexander Whitestone
89dfa1e5de feat: add Know Thy Father epic orchestrator (#582)
Some checks failed
Smoke Test / smoke (pull_request) Failing after 23s
2026-04-15 01:52:58 -04:00
Alexander Whitestone
d791c087cb feat: add Ezra mempalace integration packet (#570)
Some checks failed
Smoke Test / smoke (pull_request) Failing after 22s
2026-04-15 01:37:47 -04:00
22 changed files with 1816 additions and 572 deletions

485
GENOME.md
View File

@@ -1,485 +0,0 @@
# GENOME.md — hermes-agent
Repository-wide facts in this document come from two grounded passes over `/Users/apayne/hermes-agent` on 2026-04-15:
- `python3 ~/.hermes/pipelines/codebase-genome.py --path /Users/apayne/hermes-agent --dry-run`
- targeted manual inspection of the core runtime, tooling, gateway, ACP, cron, and persistence modules
This is the Timmy Foundation fork of `hermes-agent`, not a generic upstream summary.
## Project Overview
`hermes-agent` is a multi-surface AI agent runtime, not just a terminal chatbot.
It combines:
- a rich interactive CLI/TUI
- a synchronous core agent loop
- a large tool registry with terminal, file, web, browser, MCP, memory, cron, delegation, and code-execution tools
- a multi-platform messaging gateway
- ACP editor integration
- an OpenAI-compatible API server
- cron scheduling
- persistent session/memory/state stores
- batch and RL-adjacent research surfaces
The product promise in `README.md` is that Hermes is a self-improving agent:
- it creates and updates skills
- persists memory across sessions
- searches past conversations
- delegates to subagents
- runs scheduled automations
- can operate through multiple runtime backends and communication surfaces
Grounded quick facts from the analyzed checkout:
- pipeline scan: 395 source files, 561 test files, 11 config files, 331,794 total lines
- Python-only pass: 307 non-test `.py` modules and 561 test Python files
- Python LOC split: 211,709 source LOC / 184,512 test LOC
- current branch: `main`
- current commit: `95d11dfd`
- last commit seen by pipeline: `95d11dfd docs: automation templates gallery + comparison post (#9821)`
- total commits reported by pipeline: 4140
- largest Python modules observed:
- `run_agent.py` — 10,871 LOC
- `cli.py` — 10,017 LOC
- `gateway/run.py` — 9,289 LOC
- `hermes_cli/main.py` — 6,056 LOC
That size profile matters. Hermes is architecturally broad, but a few very large orchestration files still dominate the control plane.
## Architecture Diagram
```mermaid
flowchart TD
A[CLI / Gateway / ACP / API / Cron / Batch] --> B[AIAgent in run_agent.py]
B --> C[agent/prompt_builder.py]
B --> D[agent/memory_manager.py]
B --> E[agent/context_compressor.py]
B --> F[model_tools.py]
F --> G[tools/registry.py]
G --> H[tools/*.py built-in tools]
G --> I[tools/mcp_tool.py imported MCP tools]
G --> J[delegate / execute_code / cron / browser / terminal / file tools]
B --> K[hermes_state.py SQLite SessionDB]
B --> L[toolsets.py toolset selection]
M[cli.py + hermes_cli/main.py] --> B
N[gateway/run.py] --> B
O[acp_adapter/server.py] --> B
P[gateway/platforms/api_server.py] --> B
Q[cron/scheduler.py + cron/jobs.py] --> B
R[batch_runner.py] --> B
N --> S[gateway/session.py]
N --> T[gateway/platforms/* adapters]
P --> U[Responses API store]
O --> V[ACP session/event server]
Q --> W[cron job persistence + delivery]
K --> X[state.db / FTS5 search]
S --> Y[sessions.json mapping]
J --> Z[local shell, files, web, browser, subprocesses, remote MCP servers]
```
## Entry Points and Data Flow
### Primary entry points
1. `hermes` → `hermes_cli.main:main`
- canonical CLI entry point
- preloads profile context and builds the argparse/subcommand shell
- hands interactive chat to `cli.py`
2. `hermes-agent` → `run_agent:main`
- direct runner around the core agent loop
- closest entry point to the raw agent runtime
3. `hermes-acp` → `acp_adapter.entry:main`
- ACP server for VS Code / Zed / JetBrains style integrations
4. `gateway/run.py`
- async orchestration loop for Telegram, Discord, Slack, WhatsApp, Signal, Matrix, webhook, email, SMS, and other adapters
5. `gateway/platforms/api_server.py`
- OpenAI-compatible HTTP surface
- exposes `/v1/chat/completions`, `/v1/responses`, `/v1/models`, `/v1/runs`, and `/health`
6. `cron/scheduler.py` + `cron/jobs.py`
- scheduled job execution and delivery
7. `batch_runner.py`
- parallel batch trajectory and research workloads
### Core data flow
1. An entry surface receives input:
- terminal prompt
- incoming platform message
- ACP editor request
- HTTP request
- scheduled cron job
- batch input
2. The surface resolves runtime state:
- profile/config
- platform identity
- model/provider settings
- toolset selection
- current session ID and conversation history
3. `run_agent.py` assembles the effective prompt:
- persona/system directives
- platform hints
- context files (`AGENTS.md`, `SOUL.md`, repo-local context)
- skill content
- memory blocks from `agent/memory_manager.py`
- compression summaries from `agent/context_compressor.py`
4. `model_tools.py` discovers and filters tools:
- imports tool modules so they self-register into `tools/registry.py`
- resolves enabled toolsets from `toolsets.py`
- returns tool schemas to the active model provider
5. The model responds with either:
- final assistant text
- tool calls
6. Tool calls are dispatched through:
- `model_tools.py`
- `tools/registry.py`
- the concrete tool handler
7. Tool outputs are appended back into the conversation and the loop continues until a final answer is produced.
8. State is persisted through:
- `hermes_state.py` for sessions/messages/search
- `gateway/session.py` for gateway session routing state
- dedicated stores for response APIs, background processes, and cron jobs
This is a layered architecture: many user-facing surfaces, one central agent runtime, one central tool registry, and several specialized persistence layers.
## Key Abstractions
### 1. `AIAgent` (`run_agent.py`)
This is the heart of Hermes.
It owns:
- provider/model invocation
- tool-loop orchestration
- prompt assembly
- memory integration
- compression and token budgeting
- final response construction
### 2. `IterationBudget` (`run_agent.py`)
A guardrail abstraction around how much work a turn may do.
It matters because Hermes is not just text generation — it may launch tools, spawn subagents, or recurse through internal workflows.
### 3. `ToolRegistry` / tool self-registration (`tools/registry.py`)
Every major tool advertises itself into a central registry.
That gives Hermes one place to manage:
- schemas
- handlers
- availability checks
- environment requirements
- dispatch behavior
This is a defining architectural trait of the codebase.
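As a rough illustration of that pattern, a minimal registry might look like the sketch below. The class shape, method names, and schema format are assumptions for explanation, not the actual `tools/registry.py` API.
```python
# Toy registry sketch: names and schema shape are illustrative, not the real
# hermes-agent API.
from typing import Any, Callable

class ToolRegistry:
    """One place for schemas, handlers, availability, and dispatch."""

    def __init__(self) -> None:
        self._tools: dict[str, dict[str, Any]] = {}

    def register(self, name: str, schema: dict, handler: Callable) -> None:
        self._tools[name] = {"schema": schema, "handler": handler}

    def schemas(self) -> list[dict]:
        return [entry["schema"] for entry in self._tools.values()]

    def dispatch(self, name: str, **kwargs: Any) -> Any:
        return self._tools[name]["handler"](**kwargs)

registry = ToolRegistry()

# A tool module "advertises itself" at import time: importing the module is
# all it takes for the tool to become visible to the model loop.
def read_file(path: str) -> str:
    with open(path) as handle:
        return handle.read()

registry.register(
    "read_file",
    {"name": "read_file", "parameters": {"path": {"type": "string"}}},
    read_file,
)
```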
### 4. Toolsets (`toolsets.py`)
Tool exposure is not hardcoded per surface.
Instead, Hermes uses named toolsets and platform-specific aliases such as CLI, gateway, ACP, and API-server presets.
This is how one agent runtime can safely shape different operating surfaces.
### 5. `MemoryManager` (`agent/memory_manager.py`)
Hermes supports both built-in memory and external memory providers.
The abstraction here is not “a markdown note” but a memory multiplexor that decides what memory context gets injected and how memory tools behave.
### 6. `ContextCompressor` (`agent/context_compressor.py`)
Compression is a first-class subsystem.
Hermes treats long-context management as part of the runtime architecture, not an afterthought.
### 7. `SessionDB` (`hermes_state.py`)
SQLite + FTS5 session persistence is core infrastructure.
This is what makes cross-session recall, search, billing/accounting, and agent continuity practical.
### 8. `SessionStore` / `SessionContext` (`gateway/session.py`)
The gateway needs a routing abstraction different from raw message history.
It tracks home channels, session keys, reset policy, and platform-specific mapping.
### 9. `HermesACPAgent` (`acp_adapter/server.py`)
ACP is not bolted on as a thin shim.
It wraps Hermes as an editor-native agent with its own session/event lifecycle.
### 10. `ProcessRegistry` (`tools/process_registry.py`)
Long-running background commands are first-class managed resources.
Hermes tracks them explicitly rather than treating subprocesses as disposable side effects.
## API Surface
### CLI and shell API
Important surfaces exposed by packaging and command routing:
- `hermes`
- `hermes-agent`
- `hermes-acp`
- subcommands in `hermes_cli/main.py`
- slash commands defined centrally in `hermes_cli/commands.py`
The slash-command registry is a notable design choice because the same command metadata feeds (a minimal sketch follows this list):
- CLI help
- gateway help
- Telegram bot command menus
- Slack subcommand routing
- autocomplete
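A minimal sketch of what single-sourcing that metadata can look like; the field names and handler paths are illustrative assumptions, not the actual `hermes_cli/commands.py` schema.
```python
# Illustrative only: field names and handler paths are assumptions.
COMMANDS = {
    "/memory": {
        "help": "Search or edit persistent memory",
        "surfaces": ["cli", "gateway", "telegram", "slack"],
        "handler": "hermes_cli.commands:memory_command",
    },
    "/model": {
        "help": "Show or switch the active model",
        "surfaces": ["cli", "gateway"],
        "handler": "hermes_cli.commands:model_command",
    },
}

def help_lines(surface: str) -> list[str]:
    # The same metadata renders CLI help, bot command menus, and autocomplete.
    return [
        f"{name}  {meta['help']}"
        for name, meta in COMMANDS.items()
        if surface in meta["surfaces"]
    ]

print("\n".join(help_lines("telegram")))
```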
### HTTP API surface
From `gateway/platforms/api_server.py`, the major routes are:
- `POST /v1/chat/completions`
- `POST /v1/responses`
- `GET /v1/responses/{response_id}`
- `DELETE /v1/responses/{response_id}`
- `GET /v1/models`
- `POST /v1/runs`
- `GET /v1/runs/{run_id}/events`
- `GET /health`
This makes Hermes usable as an OpenAI-compatible backend for external clients and web UIs.
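For example, any OpenAI-compatible client can point at that surface. A minimal sketch using the `openai` Python client, where the port, API key, and model name are placeholders for whatever a given deployment exposes:
```python
# Placeholder host, key, and model name; only the route shape is taken from
# the API surface listed above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="sk-local-placeholder")

resp = client.chat.completions.create(
    model="hermes",  # whatever GET /v1/models advertises on your deployment
    messages=[{"role": "user", "content": "Summarize today's cron runs."}],
)
print(resp.choices[0].message.content)
```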
### Messaging platform API surface
The gateway platform abstraction exposes Hermes across many adapters under `gateway/platforms/`.
Observed adapters include:
- Telegram
- Discord
- Slack
- WhatsApp
- Signal
- Matrix
- Home Assistant
- webhook
- email
- SMS
- Mattermost
- QQBot
- WeCom / Weixin
- DingTalk
- BlueBubbles
### Tool API surface
The tool surface is broad and central to the product:
- terminal execution
- process management
- file IO / search / patch
- browser automation
- web search/extract
- cron jobs
- memory and session search
- subagent delegation
- execute_code sandbox
- MCP tool import
- TTS / vision / image generation
- smart-home integrations
### MCP / ACP surface
Hermes participates on both sides:
- as an MCP client via `tools/mcp_tool.py`
- as an MCP server for messaging/session capabilities via `mcp_serve.py`
- as an ACP server via `acp_adapter/*`
That makes Hermes an orchestration hub, not just a single runtime process.
## Test Coverage Gaps
### Current observed test posture
A live collection pass on the analyzed checkout produced:
- 11,470 tests collected
- 50 deselected
- 6 collection errors
The collection errors are all ACP-related:
- `tests/acp/test_entry.py`
- `tests/acp/test_events.py`
- `tests/acp/test_mcp_e2e.py`
- `tests/acp/test_permissions.py`
- `tests/acp/test_server.py`
- `tests/acp/test_tools.py`
Root cause from the live run:
- `ModuleNotFoundError: No module named 'acp'` in the failing ACP collection lane
- this lines up with `pyproject.toml`, where ACP support is optional and gated behind the `acp` extra (`agent-client-protocol>=0.9.0,<1.0`)
A secondary signal from collection:
- `tests/tools/test_file_sync_perf.py` emits `PytestUnknownMarkWarning: Unknown pytest.mark.ssh`
This specific collection problem is now tracked in hermes-agent issue `#779`.
### Where coverage looks strong
By file distribution, the codebase is heavily tested around:
- `gateway/`
- `tools/`
- `hermes_cli/`
- `run_agent`
- `cli`
- `agent`
That matches the product center of gravity: runtime orchestration, tool dispatch, and communication surfaces.
### Highest-value remaining gaps
The biggest gaps are not in total test count. They are in critical-path complexity.
1. `run_agent.py`
- the most important file in the repo and also the largest
- likely has broad behavior coverage, but branch-level completeness is improbable at 10k+ LOC
2. `cli.py`
- extremely large UI/orchestration surface
- high risk of hidden regressions across streaming, voice, slash-command routing, and interaction state
3. `gateway/run.py`
- core async gateway brain
- many platform-specific edge cases converge here
4. `hermes_cli/main.py`
- main command shell is huge and mixes parsing, routing, setup, and environment behavior
5. ACP end-to-end coverage under optional dependency installation
- current collection failure proves this lane is environment-sensitive
- ACP deserves a reliable extras-aware CI lane so collection failures are surfaced intentionally, not accidentally
6. `batch_runner.py` and `trajectory_compressor.py`
- research/training surfaces appear lighter and deserve more explicit contract tests
7. cron lifecycle and delivery failure behavior
- `cron/scheduler.py` and `cron/jobs.py` are safety-critical for unattended automation
8. optional or integration-heavy backends
- platform adapters like Feishu / Discord / Telegram
- container/cloud terminal environments
- MCP server interop
- API server streaming edge cases
### Missing tests for critical paths
The next high-leverage test work should target:
- ACP extras-enabled collection and smoke execution
- `run_agent.py` happy-path + interruption + compression + delegate + approval interaction boundaries
- `gateway/run.py` cache/interrupt/restart/session-boundary behavior at integration level
- `cron/scheduler.py` delivery error recovery, stale-job cleanup, and due-job fairness
- `batch_runner.py` and `trajectory_compressor.py` contract tests
- API-server Responses lifecycle and streaming segmentation behavior
## Security Considerations
Hermes is security-sensitive because it can run commands, read files, talk to platforms, call browsers, and broker MCP tools.
The codebase already contains several strong defensive layers.
### 1. Prompt-injection defense for context files
`agent/prompt_builder.py` scans context files such as `AGENTS.md`, `SOUL.md`, and similar instructions for:
- prompt-override language
- hidden comment/HTML tricks
- invisible unicode
- secret exfiltration patterns
That is an important architectural guardrail because Hermes explicitly ingests repository-local instruction files.
### 2. Dangerous-command approval system
`tools/approval.py` centralizes detection of destructive commands and risky shell behavior.
The repo treats command approval as a core policy subsystem, not a UI nicety.
### 3. File-path and device protections
`tools/file_tools.py` blocks dangerous device paths and sensitive system writes.
It also redacts sensitive content in read/search results and blocks reads from internal Hermes-sensitive locations.
### 4. Terminal/workdir sanitization
`tools/terminal_tool.py` constrains workdir handling and shell execution boundaries.
This matters because terminal access is one of the highest-risk capabilities Hermes exposes.
### 5. MCP subprocess hygiene
`tools/mcp_tool.py` filters environment variables passed to MCP servers and strips credentials from surfaced errors.
Given that MCP introduces third-party subprocesses into the tool graph, this is a critical boundary.
### 6. Gateway privacy and pairing controls
Gateway code includes pairing, session routing, and ID-redaction logic.
That is important because Hermes operates across public and semi-public communication surfaces.
### 7. HTTP/API hardening
`gateway/platforms/api_server.py` includes auth, CORS handling, and response-store boundaries.
This makes the API server a real production surface, not just a convenience wrapper.
### 8. Supply-chain awareness
`pyproject.toml` pins many dependencies to constrained ranges and includes security notes for selected packages.
That indicates explicit supply-chain thinking in dependency management.
## Performance Characteristics
### 1. Prompt caching is a first-class optimization
Hermes preserves long-lived agent instances and supports provider-specific prompt caching for compatible providers.
That is essential because repeated system prompts and tool schemas are expensive.
### 2. Context compression is built into the runtime
Compression is not a manual rescue path only.
Hermes estimates token budgets, prunes old tool noise, and can summarize prior context when needed.
### 3. Parallel tool execution exists, but selectively
The runtime can batch safe tool calls in parallel rather than serializing every read-only action.
This improves latency without giving up all control over side effects.
### 4. Async loop reuse reduces orchestration overhead
The runtime avoids constantly recreating event loops for async tools, which matters when many tool calls are issued inside otherwise synchronous agent flows.
### 5. SQLite is tuned for agent workloads
`hermes_state.py` uses WAL mode, short lock windows, and retry logic instead of pretending SQLite is magically contention-free.
This is a sensible tradeoff for sovereign local persistence.
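A minimal sketch of that posture, assuming nothing about the real `hermes_state.py` internals:
```python
# Illustrative SQLite posture: WAL, short lock windows, bounded retry.
import sqlite3
import time

def open_db(path: str) -> sqlite3.Connection:
    conn = sqlite3.connect(path, timeout=5.0)
    conn.execute("PRAGMA journal_mode=WAL")    # readers do not block the writer
    conn.execute("PRAGMA busy_timeout=2000")   # wait briefly instead of failing fast
    conn.execute("PRAGMA synchronous=NORMAL")  # sane durability/latency tradeoff
    return conn

def write_with_retry(conn: sqlite3.Connection, sql: str, params=(), attempts: int = 3):
    for i in range(attempts):
        try:
            with conn:                         # short transaction = short lock window
                return conn.execute(sql, params)
        except sqlite3.OperationalError:
            if i == attempts - 1:
                raise
            time.sleep(0.1 * (i + 1))          # back off and retry on contention
```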
### 6. Background processes are explicitly managed
`ProcessRegistry` maintains output windows, state, and watcher behavior so long-running commands do not become invisible resource leaks.
### 7. Large control-plane files are a real performance and maintenance cost
The repo has broad feature coverage, but a few huge orchestration files dominate complexity:
- `run_agent.py`
- `cli.py`
- `gateway/run.py`
- `hermes_cli/main.py`
These files are not just maintainability debt; they also create higher reasoning and regression load for both humans and agents working in the codebase.
## Critical Modules to Name Explicitly
The following files define the real control plane of Hermes and should always be named in any serious architecture summary:
- `run_agent.py`
- `model_tools.py`
- `tools/registry.py`
- `toolsets.py`
- `cli.py`
- `hermes_cli/main.py`
- `hermes_cli/commands.py`
- `hermes_state.py`
- `agent/prompt_builder.py`
- `agent/context_compressor.py`
- `agent/memory_manager.py`
- `tools/terminal_tool.py`
- `tools/file_tools.py`
- `tools/mcp_tool.py`
- `gateway/run.py`
- `gateway/session.py`
- `gateway/platforms/api_server.py`
- `acp_adapter/server.py`
- `cron/scheduler.py`
- `cron/jobs.py`
- `batch_runner.py`
- `trajectory_compressor.py`
## Practical Takeaway
Hermes Agent is best understood as a sovereign agent operating system.
The CLI, gateway, ACP server, API server, cron scheduler, and tool graph are all frontends onto one core runtime.
The strongest qualities of the codebase are:
- broad feature coverage
- a central tool-registry design
- serious persistence/memory infrastructure
- strong security thinking around prompts, tools, files, and approvals
- a deep test surface across gateway/tools/CLI behavior
The most important risks are:
- extremely large orchestration files
- optional-surface fragility, especially ACP extras and integration-heavy adapters
- under-tested research/batch lanes relative to the core runtime
- growing complexity at the boundaries where multiple surfaces reuse the same agent loop

View File

@@ -0,0 +1,61 @@
# Know Thy Father — Multimodal Media Consumption Pipeline
Refs #582
This document makes the epic operational by naming the current source-of-truth scripts, their handoff artifacts, and the one-command runner that coordinates them.
## Why this exists
The epic is already decomposed into four implemented phases, but the implementation truth is split across two script roots plus a status tracker:
- `scripts/know_thy_father/` owns Phases 1, 3, and 4
- `scripts/twitter_archive/analyze_media.py` owns Phase 2
- `twitter-archive/know-thy-father/tracker.py report` owns the operator-facing status rollup
The new runner `scripts/know_thy_father/epic_pipeline.py` does not replace those scripts. It stitches them together into one explicit, reviewable plan.
## Phase map
| Phase | Script | Primary output |
|-------|--------|----------------|
| 1. Media Indexing | `scripts/know_thy_father/index_media.py` | `twitter-archive/know-thy-father/media_manifest.jsonl` |
| 2. Multimodal Analysis | `scripts/twitter_archive/analyze_media.py --batch 10` | `twitter-archive/know-thy-father/analysis.jsonl` + `meaning-kernels.jsonl` + `pipeline-status.json` |
| 3. Holographic Synthesis | `scripts/know_thy_father/synthesize_kernels.py` | `twitter-archive/knowledge/fathers_ledger.jsonl` |
| 4. Cross-Reference Audit | `scripts/know_thy_father/crossref_audit.py` | `twitter-archive/notes/crossref_report.md` |
| 5. Processing Log | `twitter-archive/know-thy-father/tracker.py report` | `twitter-archive/know-thy-father/REPORT.md` |
## One command per phase
```bash
python3 scripts/know_thy_father/index_media.py --tweets twitter-archive/extracted/tweets.jsonl --output twitter-archive/know-thy-father/media_manifest.jsonl
python3 scripts/twitter_archive/analyze_media.py --batch 10
python3 scripts/know_thy_father/synthesize_kernels.py --input twitter-archive/media/manifest.jsonl --output twitter-archive/knowledge/fathers_ledger.jsonl --summary twitter-archive/knowledge/fathers_ledger.summary.json
python3 scripts/know_thy_father/crossref_audit.py --soul SOUL.md --kernels twitter-archive/notes/know_thy_father_crossref.md --output twitter-archive/notes/crossref_report.md
python3 twitter-archive/know-thy-father/tracker.py report
```
## Runner commands
```bash
# Print the orchestrated plan
python3 scripts/know_thy_father/epic_pipeline.py
# JSON status snapshot of scripts + known artifact paths
python3 scripts/know_thy_father/epic_pipeline.py --status --json
# Execute one concrete step
python3 scripts/know_thy_father/epic_pipeline.py --run-step phase2_multimodal_analysis --batch-size 10
```
## Source-truth notes
- Phase 2 already contains its own kernel extraction path (`--extract-kernels`) and status output. The epic runner does not reimplement that logic.
- Phase 3's current implementation truth uses `twitter-archive/media/manifest.jsonl` as its default input. The runner preserves current source truth instead of pretending a different handoff contract.
- The processing log in `twitter-archive/know-thy-father/PROCESSING_LOG.md` can drift from current code reality. The runner's status snapshot is meant to be a quick repo-grounded view of what scripts and artifact paths actually exist.
## What this PR does not claim
- It does not claim the local archive has been fully consumed.
- It does not claim the halted processing log has been resumed.
- It does not claim fact_store ingestion has been fully wired end-to-end.
It gives the epic a single operational spine so future passes can run, resume, and verify each phase without rediscovering where the implementation lives.

View File

@@ -0,0 +1,92 @@
# MemPalace v3.0.0 — Ezra Integration Packet
This packet turns issue #570 into an executable, reviewable integration plan for Ezra's Hermes home.
It is a repo-side scaffold: no live Ezra host changes are claimed in this artifact.
## Commands
```bash
pip install mempalace==3.0.0
mempalace init ~/.hermes/ --yes
cat > ~/.hermes/mempalace.yaml <<'YAML'
wing: ezra_home
palace: ~/.mempalace/palace
rooms:
- name: sessions
description: Conversation history and durable agent transcripts
globs:
- "*.json"
- "*.jsonl"
- name: config
description: Hermes configuration and runtime settings
globs:
- "*.yaml"
- "*.yml"
- "*.toml"
- name: docs
description: Notes, markdown docs, and operating reports
globs:
- "*.md"
- "*.txt"
people: []
projects: []
YAML
echo "" | mempalace mine ~/.hermes/
echo "" | mempalace mine ~/.hermes/sessions/ --mode convos
mempalace search "your common queries"
mempalace wake-up
hermes mcp add mempalace -- python -m mempalace.mcp_server
```
## Manual config template
```yaml
wing: ezra_home
palace: ~/.mempalace/palace
rooms:
- name: sessions
description: Conversation history and durable agent transcripts
globs:
- "*.json"
- "*.jsonl"
- name: config
description: Hermes configuration and runtime settings
globs:
- "*.yaml"
- "*.yml"
- "*.toml"
- name: docs
description: Notes, markdown docs, and operating reports
globs:
- "*.md"
- "*.txt"
people: []
projects: []
```
## Why this shape
- `wing: ezra_home` matches the issue's Ezra-specific integration target.
- `rooms` split the mined material into sessions, config, and docs to keep retrieval interpretable.
- Mining commands pipe empty stdin to avoid the interactive entity-detector hang noted in the evaluation.
## Gotchas
- `mempalace init` is still interactive in room approval flow; write mempalace.yaml manually if the init output stalls.
- The yaml key is `wing:` not `wings:`. Using the wrong key causes mine/setup failures.
- Pipe empty stdin into mining commands (`echo "" | ...`) to avoid the entity-detector stdin hang on larger directories.
- First mine downloads the ChromaDB embedding model cache (~79MB).
- Report Ezra's before/after metrics back to issue #568 after live installation and retrieval tests.
## Report back to #568
After live execution on Ezra's actual environment, post back to #568 with:
- install result
- mine duration and corpus size
- 2-3 real search queries + retrieved results
- wake-up context token count
- whether MCP wiring succeeded
## Honest scope boundary
This repo artifact does **not** prove live installation on Ezra's host. It makes the work reproducible and testable so the next pass can execute it without guesswork.

View File

@@ -0,0 +1,62 @@
fleet_name: timmy-laptop-fleet
machines:
- hostname: timmy-anchor-a
machine_type: laptop
ram_gb: 16
cpu_cores: 8
os: macOS
adapter_condition: good
idle_watts: 11
always_on_capable: true
notes: candidate 24/7 anchor agent
- hostname: timmy-anchor-b
machine_type: laptop
ram_gb: 8
cpu_cores: 4
os: Linux
adapter_condition: good
idle_watts: 13
always_on_capable: true
notes: candidate 24/7 anchor agent
- hostname: timmy-daylight-a
machine_type: laptop
ram_gb: 32
cpu_cores: 10
os: macOS
adapter_condition: ok
idle_watts: 22
always_on_capable: true
notes: higher-performance daylight compute
- hostname: timmy-daylight-b
machine_type: laptop
ram_gb: 16
cpu_cores: 8
os: Linux
adapter_condition: ok
idle_watts: 19
always_on_capable: true
notes: daylight compute node
- hostname: timmy-daylight-c
machine_type: laptop
ram_gb: 8
cpu_cores: 4
os: Windows
adapter_condition: needs_replacement
idle_watts: 17
always_on_capable: false
notes: repair power adapter before production duty
- hostname: timmy-desktop-nas
machine_type: desktop
ram_gb: 64
cpu_cores: 12
os: Linux
adapter_condition: good
idle_watts: 58
always_on_capable: false
has_4tb_ssd: true
notes: desktop plus 4TB SSD NAS and heavy compute during peak sun

View File

@@ -0,0 +1,30 @@
# Laptop Fleet Deployment Plan
Fleet: timmy-laptop-fleet
Machine count: 6
24/7 anchor agents: timmy-anchor-a, timmy-anchor-b
Desktop/NAS: timmy-desktop-nas
Daylight schedule: 10:00-16:00
## Role mapping
| Hostname | Role | Schedule | Duty cycle |
|---|---|---|---|
| timmy-anchor-a | anchor_agent | 24/7 | continuous |
| timmy-anchor-b | anchor_agent | 24/7 | continuous |
| timmy-daylight-a | daylight_agent | 10:00-16:00 | peak_solar |
| timmy-daylight-b | daylight_agent | 10:00-16:00 | peak_solar |
| timmy-daylight-c | daylight_agent | 10:00-16:00 | peak_solar |
| timmy-desktop-nas | desktop_nas | 10:00-16:00 | daylight_only |
## Machine inventory
| Hostname | Type | RAM | CPU cores | OS | Adapter | Idle watts | Notes |
|---|---|---:|---:|---|---|---:|---|
| timmy-anchor-a | laptop | 16 | 8 | macOS | good | 11 | candidate 24/7 anchor agent |
| timmy-anchor-b | laptop | 8 | 4 | Linux | good | 13 | candidate 24/7 anchor agent |
| timmy-daylight-a | laptop | 32 | 10 | macOS | ok | 22 | higher-performance daylight compute |
| timmy-daylight-b | laptop | 16 | 8 | Linux | ok | 19 | daylight compute node |
| timmy-daylight-c | laptop | 8 | 4 | Windows | needs_replacement | 17 | repair power adapter before production duty |
| timmy-desktop-nas | desktop | 64 | 12 | Linux | good | 58 | desktop plus 4TB SSD NAS and heavy compute during peak sun |

View File

@@ -0,0 +1,37 @@
# NH Broadband Install Packet
**Packet ID:** nh-bb-20260415-113232
**Generated:** 2026-04-15T11:32:32.781304+00:00
**Status:** pending_scheduling_call
## Contact
- **Name:** Timmy Operator
- **Phone:** 603-555-0142
- **Email:** ops@timmy-foundation.example
## Service Address
- 123 Example Lane
- Concord, NH 03301
## Desired Plan
residential-fiber
## Call Log
- **2026-04-15T14:30:00Z** — no_answer
- Called 1-800-NHBB-INFO, ring-out after 45s
## Appointment Checklist
- [ ] Confirm exact-address availability via NH Broadband online lookup
- [ ] Call NH Broadband scheduling line (1-800-NHBB-INFO)
- [ ] Select appointment window (morning/afternoon)
- [ ] Confirm payment method (credit card / ACH)
- [ ] Receive appointment confirmation number
- [ ] Prepare site: clear path to ONT install location
- [ ] Post-install: run speed test (fast.com / speedtest.net)
- [ ] Log final speeds and appointment outcome

View File

@@ -0,0 +1,27 @@
contact:
name: Timmy Operator
phone: "603-555-0142"
email: ops@timmy-foundation.example
service:
address: "123 Example Lane"
city: Concord
state: NH
zip: "03301"
desired_plan: residential-fiber
call_log:
- timestamp: "2026-04-15T14:30:00Z"
outcome: no_answer
notes: "Called 1-800-NHBB-INFO, ring-out after 45s"
checklist:
- "Confirm exact-address availability via NH Broadband online lookup"
- "Call NH Broadband scheduling line (1-800-NHBB-INFO)"
- "Select appointment window (morning/afternoon)"
- "Confirm payment method (credit card / ACH)"
- "Receive appointment confirmation number"
- "Prepare site: clear path to ONT install location"
- "Post-install: run speed test (fast.com / speedtest.net)"
- "Log final speeds and appointment outcome"

397
genomes/fleet-ops-GENOME.md Normal file
View File

@@ -0,0 +1,397 @@
# GENOME.md — fleet-ops
Host artifact for timmy-home issue #680. The analyzed code lives in the separate `fleet-ops` repository; this document is the curated genome written from a fresh clone of that repo at commit `38c4eab`.
## Project Overview
`fleet-ops` is the infrastructure and operations control plane for the Timmy Foundation fleet. It is not a single deployable application. It is a mixed ops repository with four overlapping layers:
1. Ansible orchestration for VPS provisioning and service rollout.
2. Small Python microservices for shared fleet state.
3. Cron- and CLI-driven operator scripts.
4. A separate local `docker-compose.yml` sandbox for a simplified all-in-one stack.
Two facts shape the repo more than anything else:
- The real fleet deployment path starts at `site.yml` → `playbooks/site.yml` and lands services through Ansible roles.
- The repo also contains several aspirational or partially wired Python modules whose names imply runtime importance but whose deployment path is weak, indirect, or missing.
Grounded metrics from the fresh analysis run:
- `python3 ~/.hermes/pipelines/codebase-genome.py --path /tmp/fleet-ops-genome --dry-run` reported `97` source files, `12` test files, `29` config files, and `16,658` total lines.
- A local filesystem count found `39` Python source files, `12` Python test files, and `74` YAML files.
- `python3 -m pytest -q --continue-on-collection-errors` produced `158 passed, 1 failed, 2 errors`.
The repo is therefore operationally substantial, but only part of that surface is coherently tested and wired.
## Architecture
```mermaid
graph TD
A[site.yml] --> B[playbooks/site.yml]
B --> C[preflight.yml]
B --> D[baseline.yml]
B --> E[deploy_ollama.yml]
B --> F[deploy_gitea.yml]
B --> G[deploy_hermes.yml]
B --> H[deploy_conduit.yml]
B --> I[harmony_audit role]
G --> J[playbooks/host_vars/* wizard_instances]
G --> K[hermes-agent role]
K --> L[systemd wizard services]
M[templates/fleet-deploy-hook.service] --> N[scripts/deploy-hook.py]
N --> B
O[playbooks/roles/message-bus/templates/busd.service.j2] --> P[message_bus.py]
Q[playbooks/roles/knowledge-store/templates/knowledged.service.j2] --> R[knowledge_store.py]
S[registry.yaml] --> T[health_dashboard.py]
S --> U[scripts/registry_health_updater.py]
S --> V[federation_sync.py]
W[cron/dispatch-consumer.yml] --> X[scripts/dispatch_consumer.py]
Y[morning_report_cron.yml] --> Z[scripts/morning_report_compile.py]
AA[nightly_efficiency_cron.yml] --> AB[scripts/nightly_efficiency_report.py]
AC[burndown_watcher_cron.yml] --> AD[scripts/burndown_cron.py]
AE[docker-compose.yml] --> AF[local ollama]
AE --> AG[local gitea]
AE --> AH[agent container]
AE --> AI[monitor loop]
```
### Structural read
The cleanest mental model is not “one app,” but “one repo that tries to be the fleet's operator handbook, deployment engine, shared service shelf, and scratchpad.”
That produces three distinct control planes:
1. `playbooks/` is the strongest source of truth for VPS deployment.
2. `registry.yaml` and `manifest.yaml` act as runtime or operator registries for scripts.
3. `docker-compose.yml` models a separate local sandbox whose assumptions do not fully match the Ansible path.
## Entry Points
### Primary fleet deploy entry points
- `site.yml` — thin repo-root wrapper that imports `playbooks/site.yml`.
- `playbooks/site.yml` — multi-phase orchestrator for preflight, baseline, Ollama, Gitea, Hermes, Conduit, and local harmony audit.
- `playbooks/deploy_hermes.yml` — the most important service rollout for wizard instances; requires `wizard_instances` and pulls `vault_openrouter_api_key` / `vault_openai_api_key`.
- `playbooks/provision_and_deploy.yml` — DigitalOcean create-and-bootstrap path using `community.digital.digital_ocean_droplet` and a dynamic `new_droplets` group.
### Deployed service entry points
- `message_bus.py` — HTTP message queue service deployed by `playbooks/roles/message-bus/templates/busd.service.j2`.
- `knowledge_store.py` — SQLite-backed shared fact service deployed by `playbooks/roles/knowledge-store/templates/knowledged.service.j2`.
- `scripts/deploy-hook.py` — webhook listener launched by `templates/fleet-deploy-hook.service` with `ExecStart=/usr/bin/python3 /opt/fleet-ops/scripts/deploy-hook.py`.
### Cron and operator entry points
- `scripts/dispatch_consumer.py` — wired by `cron/dispatch-consumer.yml`.
- `scripts/morning_report_compile.py` — wired by `morning_report_cron.yml`.
- `scripts/nightly_efficiency_report.py` — wired by `nightly_efficiency_cron.yml`.
- `scripts/burndown_cron.py` — wired by `burndown_watcher_cron.yml`.
- `scripts/fleet_readiness.py` — operator validation script for `manifest.yaml`.
- `scripts/fleet-status.py` — prints a fleet status snapshot directly from top-level code.
### CI / verification entry points
- `.gitea/workflows/ansible-lint.yml` — YAML lint, `ansible-lint`, syntax checks, inventory validation.
- `.gitea/workflows/auto-review.yml` — lightweight review workflow with YAML lint, syntax checks, secret scan, and merge-conflict probe.
### Local development stack entry point
- `docker-compose.yml` — brings up `ollama`, `gitea`, `agent`, and `monitor` for a local stack.
## Data Flow
### 1) Deploy path
1. A repo operator pushes or references deployable state.
2. `scripts/deploy-hook.py` receives the webhook.
3. The hook updates `/opt/fleet-ops`, then invokes Ansible.
4. `playbooks/site.yml` fans into phase playbooks.
5. `playbooks/deploy_hermes.yml` renders per-instance config and systemd services from `wizard_instances` in `playbooks/host_vars/*`.
6. Services expose local `/health` endpoints on assigned ports.
### 2) Shared service path
1. Agents or tools post work to `message_bus.py`.
2. Consumers poll `/messages` and inspect `/queue`, `/deadletter`, and `/audit`.
3. Facts are written into `knowledge_store.py` and federated through peer sync endpoints.
4. `health_dashboard.py` and `scripts/registry_health_updater.py` read `registry.yaml` and probe service URLs.
### 3) Reporting path
1. Cron YAML launches queue/report scripts.
2. Scripts read `~/.hermes/`, Gitea APIs, local logs, or registry files.
3. Output is emitted as JSON, markdown, or console summaries.
### Important integration fracture
`federation_sync.py` does not currently match the services it tries to coordinate.
- `message_bus.py` returns `/messages` as `{"messages": [...], "count": N}` at line 234.
- `federation_sync.py` polls `.../messages?limit=50` and then only iterates if `isinstance(data, list)` at lines 136-140.
- `federation_sync.py` also requests `.../knowledge/stats` at line 230, but `knowledge_store.py` documents `/sync/status`, `/facts`, and `/peers`, not `/knowledge/stats`.
This means the repo contains a federation layer whose assumed contracts drift from the concrete microservices beside it.
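A small defensive-parsing sketch makes the mismatch concrete; the function below is illustrative, not the actual `federation_sync.py` code.
```python
# The bus returns {"messages": [...], "count": N}, but a consumer that only
# iterates when the payload is a bare list silently processes nothing.
# This defensive version handles both shapes; names are illustrative.
import json
import urllib.request

def fetch_messages(base_url: str, limit: int = 50) -> list:
    with urllib.request.urlopen(f"{base_url}/messages?limit={limit}") as resp:
        data = json.load(resp)
    if isinstance(data, dict):   # current message_bus.py response shape
        return data.get("messages", [])
    if isinstance(data, list):   # the shape federation_sync.py assumes
        return data
    return []
```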
## Key Abstractions
### `MessageStore` in `message_bus.py`
Core in-memory queue abstraction. It underlies:
- enqueue / poll behavior
- TTL expiry and dead-letter handling
- queue stats and audit trail endpoints
The tests in `tests/test_message_bus.py` make this one of the best-specified components in the repo.
### `KnowledgeDB` in `knowledge_store.py`
SQLite-backed fact registry with HTTP exposure for:
- storing facts
- querying and deleting facts
- peer registration
- push/pull federation
- sync status reporting
This is the nearest thing the repo has to a durable shared memory service.
### `FleetMonitor` in `health_dashboard.py`
Loads `registry.yaml`, polls wizard endpoints, caches results, and exposes both HTML and JSON views. It is the operator-facing read model of the fleet.
### `SyncEngine` in `federation_sync.py`
Intended as the bridge across message bus, audit trail, and knowledge store. The design intent is strong, but the live endpoint contracts appear out of sync.
### `ProfilePolicy` in `scripts/profile_isolation.py`
Encodes tmux/agent lifecycle policy by profile. This is one of the more disciplined “ops logic” modules: focused, testable, and bounded.
### `GenerationResult` / `VideoEngineClient` in `scripts/video_engine_client.py`
Represents the repo's media-generation sidecar boundary. The code is small and clear, but its tests are partially stale relative to implementation behavior.
## API Surface
### `message_bus.py`
Observed HTTP surface includes:
- `POST /message`
- `GET /messages?to=<agent>&limit=<n>`
- `GET /queue`
- `GET /deadletter`
- `GET /audit`
- `GET /health`
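A hedged usage sketch against that surface, with the host, port, and payload fields assumed rather than verified:
```python
# Host, port, and payload fields are assumptions; only the route names come
# from the observed surface above.
import json
import urllib.request

BASE = "http://localhost:8600"

def post_message(to: str, body: str) -> None:
    payload = json.dumps({"to": to, "body": body}).encode()
    req = urllib.request.Request(
        f"{BASE}/message",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req).read()

def poll(agent: str, limit: int = 10) -> list:
    with urllib.request.urlopen(f"{BASE}/messages?to={agent}&limit={limit}") as resp:
        return json.load(resp).get("messages", [])

post_message("ezra-primary", "nightly report ready")
print(poll("ezra-primary"))
```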
### `knowledge_store.py`
Documented surface includes:
- `POST /fact`
- `GET /facts`
- `DELETE /facts/<key>`
- `POST /sync/pull`
- `POST /sync/push`
- `GET /sync/status`
- `GET /peers`
- `POST /peers`
- `GET /health`
### `health_dashboard.py`
- `/`
- `/api/status`
- `/api/wizard/<id>`
### `scripts/deploy-hook.py`
- `/health`
- `/webhook`
### Ansible operator surface
Primary commands implied by the repo:
- `ansible-playbook -i playbooks/inventory site.yml`
- `ansible-playbook -i playbooks/inventory playbooks/provision_and_deploy.yml`
- `ansible-playbook -i playbooks/inventory playbooks/deploy_hermes.yml`
## Dependencies
### Python and shell posture
The repo is mostly Python stdlib plus Ansible/shell orchestration. It is not packaged as a single installable Python project.
### Explicit Ansible collections
`requirements.yml` declares:
- `community.docker`
- `community.general`
- `ansible.posix`
The provisioning docs and playbooks also rely on `community.digital.digital_ocean_droplet` in `playbooks/provision_and_deploy.yml`.
### External service dependencies
- Gitea
- Ollama
- DigitalOcean
- systemd
- Docker / Docker Compose
- local `~/.hermes/` session and burn-log state
### Hidden runtime dependency
Several conceptual modules import `hermes_tools` directly:
- `compassion_layer.py`
- `sovereign_librarian.py`
- `sovereign_muse.py`
- `sovereign_pulse.py`
- `sovereign_sentinel.py`
- `synthesis_engine.py`
That dependency is not self-contained inside the repo and directly causes the local collection errors.
## Test Coverage Gaps
### Current tested strengths
The strongest, most trustworthy tests are around:
- `tests/test_message_bus.py`
- `tests/test_knowledge_store.py`
- `tests/test_health_dashboard.py`
- `tests/test_registry_health_updater.py`
- `tests/test_profile_isolation.py`
- `tests/test_skill_scorer.py`
- `tests/test_nightly_efficiency_report.py`
Those files make the shared-service core much more legible than the deployment layer.
### Current local status
Fresh run result:
- `158 passed, 1 failed, 2 errors`
Collection errors:
- `tests/test_heart.py` fails because `compassion_layer.py` imports `hermes_tools`.
- `tests/test_synthesis.py` fails because `sovereign_librarian.py` imports `hermes_tools`.
Runnable failure:
- `tests/test_video_engine_client.py` expects `generate_draft()` to raise on HTTP 503.
- `scripts/video_engine_client.py` currently catches exceptions and returns `GenerationResult(success=False, error=...)` instead.
### High-value untested paths
The most important missing or weakly validated surfaces are:
- `scripts/deploy-hook.py` — high-blast-radius deploy trigger.
- `playbooks/deploy_gitea.yml` / `playbooks/deploy_hermes.yml` / `playbooks/provision_and_deploy.yml` — critical control plane, almost entirely untested in-repo.
- `scripts/morning_report_compile.py` — cron-facing reporting logic.
- `scripts/burndown_cron.py` and related watcher scripts.
- `scripts/generate_video.py`, `scripts/tiered_render.py`, and broader video-engine operator paths.
- `scripts/fleet-status.py` — prints directly from module scope and has no `__main__` guard.
### Coverage quality note
The repo's best tests cluster around internal Python helpers. The repo's biggest operational risk lives in deployment, cron wiring, and shell/Ansible behaviors that are not equivalently exercised.
## Security Considerations
### Strong points
- Vault use exists in `playbooks/group_vars/vault.yml` and inline vaulted material in `manifest.yaml`.
- `playbooks/deploy_gitea.yml` sets `gitea_disable_registration: true`, `gitea_require_signin: true`, and `gitea_register_act_runner: false`.
- The Hermes role renders per-instance env/config and uses systemd hardening patterns.
- Gitea, Nostr relay, and other web surfaces are designed around nginx/TLS roles.
### Concrete risks
1. `scripts/deploy-hook.py` explicitly disables signature enforcement when `DEPLOY_HOOK_SECRET` is unset (a fail-closed verification sketch follows this list).
2. `playbooks/roles/gitea/defaults/main.yml` sets `gitea_webhook_allowed_host_list: "*"`.
3. Both `ansible.cfg` files disable host key checking.
4. The repo has multiple sources of truth for ports and service topology:
- `playbooks/host_vars/ezra-primary.yml` uses `8643`
- `manifest.yaml` uses `8643`
- `registry.yaml` points Ezra health to `8646`
5. `registry.yaml` advertises services like `busd`, `auditd`, and `knowledged`, but the main `playbooks/site.yml` phases do not include message-bus or knowledge-store roles.
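For contrast with risk 1, a fail-closed webhook signature check is a small amount of code. This is a generic HMAC pattern, not the actual `deploy-hook.py` implementation.
```python
# Generic fail-closed HMAC verification sketch; not the real deploy-hook code.
import hashlib
import hmac
import os

def verify_signature(body: bytes, header_sig: str) -> bool:
    secret = os.environ.get("DEPLOY_HOOK_SECRET")
    if not secret:
        return False  # refuse to run rather than disable enforcement
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, header_sig or "")
```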
### Drift / correctness risks that become security risks
- `playbooks/deploy_auto_merge.yml` targets `hosts: gitea_servers`, but the inventory groups visible in `playbooks/inventory` are `forge`, `vps`, `agents`, and `wizards`.
- `playbooks/roles/gitea/defaults/main.yml` includes runner labels with a probable typo: `ubuntu-22.04:docker://catthehocker/ubuntu:act-22.04`.
- The local compose quick start is not turnkey: `Dockerfile.agent` copies `requirements-agent.txt*` and `agent/`, but the runtime falls back to a tiny health/tick loop if the real agent source is absent.
## Deployment
### VPS / real fleet path
Repo-root wrapper:
```bash
ansible-playbook -i playbooks/inventory site.yml
```
Direct orchestrator:
```bash
ansible-playbook -i playbooks/inventory playbooks/site.yml
```
Provision and bootstrap a new node:
```bash
ansible-playbook -i playbooks/inventory playbooks/provision_and_deploy.yml
```
### Local sandbox path
```bash
cp .env.example .env
docker compose up -d
```
But this path must be read skeptically. `docker-compose.yml` is a local convenience stack, while the real fleet path uses Ansible + systemd + host vars + vault-backed secrets.
## Dead Code Candidates and Operator Footguns
- `scripts/fleet-status.py` behaves like a one-shot report script with top-level execution, not a reusable CLI module.
- `README.md` ends with a visibly corrupted Nexus Watchdog section containing broken formatting.
- `Sovereign_Health_Check.md` still recommends running the broken `tests/test_heart.py` and `tests/test_synthesis.py` health suite.
- `federation_sync.py` currently looks architecturally important but contractually out of sync with `message_bus.py` and `knowledge_store.py`.
## Bottom Line
`fleet-ops` contains the real bones of a sovereign fleet control plane, but those bones are unevenly ossified.
The strong parts are:
- the phase-based Ansible deployment structure in `playbooks/site.yml`
- the microservice-style core in `message_bus.py`, `knowledge_store.py`, and `health_dashboard.py`
- several focused Python test suites that genuinely specify behavior
The weak parts are:
- duplicated sources of truth (`playbooks/host_vars/*`, `manifest.yaml`, `registry.yaml`, local compose)
- deployment and cron surfaces that matter more operationally than they are tested
- conceptual “sovereign_*” modules that pull in `hermes_tools` and currently break local collection
If this repo were being hardened next, the highest-leverage moves would be:
1. Make the registries consistent (`8643` vs `8646`, service inventory vs deployed phases); a minimal drift check is sketched after this list.
2. Add focused tests around `scripts/deploy-hook.py` and the deploy/report cron scripts.
3. Decide which Python modules are truly production runtime and which are prototypes, then wire or prune accordingly.
4. Collapse the number of “truth” files an operator has to trust during a deploy.
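A minimal drift check could look like the sketch below. The file layout and key names are assumptions about the repo, so treat this as a starting point rather than working tooling.
```python
# Paths and key names are assumptions about the fleet-ops layout; adjust to
# the real registry/manifest/host_vars schemas before relying on this.
from pathlib import Path

import yaml  # pip install pyyaml

def load(path: str) -> dict:
    return yaml.safe_load(Path(path).read_text()) or {}

def check_port(host: str) -> None:
    host_vars = load(f"playbooks/host_vars/{host}.yml")
    registry = load("registry.yaml")
    hv_port = host_vars.get("hermes_port")
    reg_port = ((registry.get("services") or {}).get(host) or {}).get("port")
    if hv_port != reg_port:
        print(f"DRIFT: {host} host_vars={hv_port} registry={reg_port}")

check_port("ezra-primary")
```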

View File

@@ -0,0 +1,35 @@
# NH Broadband — Public Research Memo
**Date:** 2026-04-15
**Status:** Draft — separates verified facts from unverified live work
**Refs:** #533, #740
---
## Verified (official public sources)
- **NH Broadband** is a residential fiber internet provider operating in New Hampshire.
- Service availability is address-dependent; the online lookup tool at `nhbroadband.com` reports coverage by street address.
- Residential fiber plans are offered; speed tiers vary by location.
- Scheduling line: **1-800-NHBB-INFO** (published on official site).
- Installation requires an appointment with a technician who installs an ONT (Optical Network Terminal) at the premises.
- Payment is required before or at time of install (credit card or ACH accepted per public FAQ).
## Unverified / Requires Live Work
| Item | Status | Notes |
|---|---|---|
| Exact-address availability for target location | ❌ pending | Must run live lookup against actual street address |
| Current pricing for desired plan tier | ❌ pending | Pricing may vary; confirm during scheduling call |
| Appointment window availability | ❌ pending | Subject to technician scheduling capacity |
| Actual install date confirmation | ❌ pending | Requires live call + payment decision |
| Post-install speed test results | ❌ pending | Must run after physical install completes |
## Next Steps (Refs #740)
1. Run address availability lookup on `nhbroadband.com`
2. Call 1-800-NHBB-INFO to schedule install
3. Confirm payment method
4. Receive appointment confirmation number
5. Prepare site (clear ONT install path)
6. Post-install: speed test and log results

View File

@@ -0,0 +1,102 @@
# Long Context vs RAG Decision Framework
**Research Backlog Item #4.3** | Impact: 4 | Effort: 1 | Ratio: 4.0
**Date**: 2026-04-15
**Status**: RESEARCHED
## Executive Summary
Modern LLMs have 128K-200K+ context windows, but we still treat them like 4K models by default. This document provides a decision framework for when to stuff context vs. use RAG, based on empirical findings and our stack constraints.
## The Core Insight
**Long context ≠ better answers.** Research shows:
- "Lost in the Middle" effect: Models attend poorly to information in the middle of long contexts (Liu et al., 2023)
- RAG with reranking outperforms full-context stuffing for document QA when docs > 50K tokens
- Cost scales quadratically with context length (attention computation)
- Latency increases linearly with input length
**RAG ≠ always better.** Retrieval introduces:
- Recall errors (miss relevant chunks)
- Precision errors (retrieve irrelevant chunks)
- Chunking artifacts (splitting mid-sentence)
- Additional latency for embedding + search
## Decision Matrix
| Scenario | Context Size | Recommendation | Why |
|----------|-------------|---------------|-----|
| Single conversation (< 32K) | Small | **Stuff everything** | No retrieval overhead, full context available |
| 5-20 documents, focused query | 32K-128K | **Hybrid** | Key docs in context, rest via RAG |
| Large corpus search | > 128K | **Pure RAG + reranking** | Full context impossible, must retrieve |
| Code review (< 5 files) | < 32K | **Stuff everything** | Code needs full context for understanding |
| Code review (repo-wide) | > 128K | **RAG with file-level chunks** | Files are natural chunk boundaries |
| Multi-turn conversation | Growing | **Hybrid + compression** | Keep recent turns in full, compress older |
| Fact retrieval | Any | **RAG** | Always faster to search than read everything |
| Complex reasoning across docs | 32K-128K | **Stuff + chain-of-thought** | Models need all context for cross-doc reasoning |
## Our Stack Constraints
### What We Have
- **Cloud models**: 128K-200K context (OpenRouter providers)
- **Local Ollama**: 8K-32K context (Gemma-4 default 8192)
- **Hermes fact_store**: SQLite FTS5 full-text search
- **Memory**: MemPalace holographic embeddings
- **Session context**: Growing conversation history
### What This Means
1. **Cloud sessions**: We CAN stuff up to 128K but SHOULD we? Cost and latency matter.
2. **Local sessions**: MUST use RAG for anything beyond 8K. Long context not available.
3. **Mixed fleet**: Need a routing layer that decides per-session.
## Advanced Patterns
### 1. Progressive Context Loading
Don't load everything at once. Start with RAG, then stuff additional docs as needed:
```
Turn 1: RAG search → top 3 chunks
Turn 2: Model asks "I need more context about X" → stuff X
Turn 3: Model has enough → continue
```
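A minimal sketch of that loop, assuming placeholder `search`, `load_doc`, and `ask_model` callables rather than real hermes APIs:
```python
# All callables are placeholders; only the loop shape mirrors the plan above.
def answer(question: str, search, load_doc, ask_model, max_rounds: int = 3) -> str:
    context = search(question, top_k=3)            # turn 1: cheap retrieval first
    reply = ask_model(question, context)
    for _ in range(max_rounds):
        if not reply.get("needs_doc"):             # the model has enough context
            return reply["text"]
        context += load_doc(reply["needs_doc"])    # stuff only what was asked for
        reply = ask_model(question, context)
    return reply["text"]
```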
### 2. Context Budgeting
Allocate context budget across components:
```
System prompt: 2,000 tokens (always)
Recent messages: 10,000 tokens (last 5 turns)
RAG results: 8,000 tokens (top chunks)
Stuffed docs: 12,000 tokens (key docs)
---------------------------
Total: 32,000 tokens (fits 32K model)
```
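A tiny budgeting helper makes the same split executable; the numbers are the example figures above and `count_tokens` stands in for whatever tokenizer the session uses.
```python
# Figures mirror the example budget above; count_tokens is a placeholder.
BUDGET = {"system": 2_000, "recent": 10_000, "rag": 8_000, "stuffed": 12_000}

def fit(chunks: list[str], limit: int, count_tokens) -> list[str]:
    kept, used = [], 0
    for chunk in chunks:
        cost = count_tokens(chunk)
        if used + cost > limit:
            break                 # stop before blowing the component budget
        kept.append(chunk)
        used += cost
    return kept

assert sum(BUDGET.values()) == 32_000  # the whole allocation fits a 32K model
```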
### 3. Smart Compression
Before stuffing, compress older context (a minimal sketch follows this list):
- Summarize turns older than 10
- Remove tool call results (keep only final outputs)
- Deduplicate repeated information
- Use structured representations (JSON) instead of prose
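A hedged sketch of those rules, where `summarize` stands in for an LLM or heuristic summarizer and the message shape is assumed:
```python
# Keep the last N turns verbatim, drop raw tool-call results from older turns,
# and replace the older turns with one summary message. Names are illustrative.
def compress_history(turns: list[dict], keep_last: int, summarize) -> list[dict]:
    old, recent = turns[:-keep_last], turns[-keep_last:]
    if not old:
        return recent
    old = [t for t in old if t.get("role") != "tool"]  # strip tool-call noise
    summary = {"role": "system", "content": "Earlier context: " + summarize(old)}
    return [summary] + recent
```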
## Empirical Benchmarks Needed
1. **Stuffing vs RAG accuracy** on our fact_store queries
2. **Latency comparison** at 32K, 64K, 128K context
3. **Cost per query** for cloud models at various context sizes
4. **Local model behavior** when pushed beyond rated context
## Recommendations
1. **Audit current context usage**: How many sessions hit > 32K? (Low effort, high value)
2. **Implement ContextRouter**: ~50 LOC, adds routing decisions to hermes (a starting sketch follows this list)
3. **Add context-size logging**: Track input tokens per session for data gathering
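A starting sketch for that router, with the class shape, thresholds, and strategy labels assumed from the decision matrix rather than taken from existing hermes code:
```python
# Thresholds and strategy labels are assumptions drawn from the decision
# matrix above; this is not an existing hermes module.
from dataclasses import dataclass

@dataclass
class Route:
    strategy: str  # "stuff", "hybrid", "rag", or "rag+rerank"
    reason: str

def route(total_tokens: int, model_context: int, corpus_search: bool = False) -> Route:
    if corpus_search or total_tokens > 128_000:
        return Route("rag+rerank", "corpus too large for any context window")
    if total_tokens <= min(32_000, model_context):
        return Route("stuff", "everything fits; no retrieval overhead")
    if total_tokens <= model_context:
        return Route("hybrid", "key docs stuffed, the rest retrieved")
    return Route("rag", "exceeds the model's context; must retrieve")

print(route(12_000, 8_192))    # local Ollama session -> Route(strategy='rag', ...)
print(route(12_000, 131_072))  # cloud session -> Route(strategy='stuff', ...)
```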
## References
- Liu et al. "Lost in the Middle: How Language Models Use Long Contexts" (2023) — https://arxiv.org/abs/2307.03172
- Shi et al. "Large Language Models are Easily Distracted by Irrelevant Context" (2023)
- Xu et al. "Retrieval Meets Long Context LLMs" (2023) — hybrid approaches outperform both alone
- Anthropic's Claude 3.5 context caching — built-in prefix caching reduces cost for repeated system prompts
---
*Sovereignty and service always.*

View File

@@ -0,0 +1,127 @@
#!/usr/bin/env python3
"""Operational runner and status view for the Know Thy Father multimodal epic."""
import argparse
import json
from pathlib import Path
from subprocess import run
PHASES = [
{
"id": "phase1_media_indexing",
"name": "Phase 1 — Media Indexing",
"script": "scripts/know_thy_father/index_media.py",
"command_template": "python3 scripts/know_thy_father/index_media.py --tweets twitter-archive/extracted/tweets.jsonl --output twitter-archive/know-thy-father/media_manifest.jsonl",
"outputs": ["twitter-archive/know-thy-father/media_manifest.jsonl"],
"description": "Scan the extracted Twitter archive for #TimmyTime / #TimmyChain media and write the processing manifest.",
},
{
"id": "phase2_multimodal_analysis",
"name": "Phase 2 — Multimodal Analysis",
"script": "scripts/twitter_archive/analyze_media.py",
"command_template": "python3 scripts/twitter_archive/analyze_media.py --batch {batch_size}",
"outputs": [
"twitter-archive/know-thy-father/analysis.jsonl",
"twitter-archive/know-thy-father/meaning-kernels.jsonl",
"twitter-archive/know-thy-father/pipeline-status.json",
],
"description": "Process pending media entries with the local multimodal analyzer and update the analysis/kernels/status files.",
},
{
"id": "phase3_holographic_synthesis",
"name": "Phase 3 — Holographic Synthesis",
"script": "scripts/know_thy_father/synthesize_kernels.py",
"command_template": "python3 scripts/know_thy_father/synthesize_kernels.py --input twitter-archive/media/manifest.jsonl --output twitter-archive/knowledge/fathers_ledger.jsonl --summary twitter-archive/knowledge/fathers_ledger.summary.json",
"outputs": [
"twitter-archive/knowledge/fathers_ledger.jsonl",
"twitter-archive/knowledge/fathers_ledger.summary.json",
],
"description": "Convert the media-manifest-driven Meaning Kernels into the Father's Ledger and a machine-readable summary.",
},
{
"id": "phase4_cross_reference_audit",
"name": "Phase 4 — Cross-Reference Audit",
"script": "scripts/know_thy_father/crossref_audit.py",
"command_template": "python3 scripts/know_thy_father/crossref_audit.py --soul SOUL.md --kernels twitter-archive/notes/know_thy_father_crossref.md --output twitter-archive/notes/crossref_report.md",
"outputs": ["twitter-archive/notes/crossref_report.md"],
"description": "Compare Know Thy Father kernels against SOUL.md and related canon, then emit a Markdown audit report.",
},
{
"id": "phase5_processing_log",
"name": "Phase 5 — Processing Log / Status",
"script": "twitter-archive/know-thy-father/tracker.py",
"command_template": "python3 twitter-archive/know-thy-father/tracker.py report",
"outputs": ["twitter-archive/know-thy-father/REPORT.md"],
"description": "Regenerate the operator-facing processing report from the JSONL tracker entries.",
},
]
def build_pipeline_plan(batch_size: int = 10):
plan = []
for phase in PHASES:
plan.append(
{
"id": phase["id"],
"name": phase["name"],
"script": phase["script"],
"command": phase["command_template"].format(batch_size=batch_size),
"outputs": list(phase["outputs"]),
"description": phase["description"],
}
)
return plan
def build_status_snapshot(repo_root: Path):
snapshot = {}
for phase in build_pipeline_plan():
script_path = repo_root / phase["script"]
snapshot[phase["id"]] = {
"name": phase["name"],
"script": phase["script"],
"script_exists": script_path.exists(),
"outputs": [
{
"path": output,
"exists": (repo_root / output).exists(),
}
for output in phase["outputs"]
],
}
return snapshot
def run_step(repo_root: Path, step_id: str, batch_size: int = 10):
plan = {step["id"]: step for step in build_pipeline_plan(batch_size=batch_size)}
if step_id not in plan:
raise SystemExit(f"Unknown step: {step_id}")
step = plan[step_id]
return run(step["command"], cwd=repo_root, shell=True, check=False)
def main():
parser = argparse.ArgumentParser(description="Know Thy Father epic orchestration helper")
parser.add_argument("--batch-size", type=int, default=10)
parser.add_argument("--status", action="store_true")
parser.add_argument("--run-step", default=None)
parser.add_argument("--json", action="store_true")
args = parser.parse_args()
repo_root = Path(__file__).resolve().parents[2]
if args.run_step:
result = run_step(repo_root, args.run_step, batch_size=args.batch_size)
raise SystemExit(result.returncode)
payload = build_status_snapshot(repo_root) if args.status else build_pipeline_plan(batch_size=args.batch_size)
if args.json or args.status:
print(json.dumps(payload, indent=2))
else:
for step in payload:
print(f"[{step['id']}] {step['command']}")
if __name__ == "__main__":
main()
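
A minimal sketch of consuming the orchestrator's plan from another script, assuming the file lands at `scripts/know_thy_father/epic_pipeline.py` as the tests further down expect; the batch size and dry-run printing are illustrative only.

```python
# Hedged sketch: load the orchestrator by explicit path and dry-run its plan.
# Assumes scripts/know_thy_father/epic_pipeline.py exists in the repo root.
import importlib.util
from pathlib import Path

spec = importlib.util.spec_from_file_location(
    "ktf_epic_pipeline",
    Path("scripts/know_thy_father/epic_pipeline.py"),
)
assert spec and spec.loader
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)

# Print the command each phase would run without executing anything.
for step in mod.build_pipeline_plan(batch_size=25):
    print(f"[{step['id']}] {step['command']}")
```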

View File

@@ -0,0 +1,159 @@
#!/usr/bin/env python3
"""Prepare a MemPalace v3.0.0 integration packet for Ezra's Hermes home."""
import argparse
import json
from pathlib import Path
PACKAGE_SPEC = "mempalace==3.0.0"
DEFAULT_HERMES_HOME = "~/.hermes/"
DEFAULT_SESSIONS_DIR = "~/.hermes/sessions/"
DEFAULT_PALACE_PATH = "~/.mempalace/palace"
DEFAULT_WING = "ezra_home"
def build_yaml_template(wing: str, palace_path: str) -> str:
return (
f"wing: {wing}\n"
f"palace: {palace_path}\n"
"rooms:\n"
" - name: sessions\n"
" description: Conversation history and durable agent transcripts\n"
" globs:\n"
" - \"*.json\"\n"
" - \"*.jsonl\"\n"
" - name: config\n"
" description: Hermes configuration and runtime settings\n"
" globs:\n"
" - \"*.yaml\"\n"
" - \"*.yml\"\n"
" - \"*.toml\"\n"
" - name: docs\n"
" description: Notes, markdown docs, and operating reports\n"
" globs:\n"
" - \"*.md\"\n"
" - \"*.txt\"\n"
"people: []\n"
"projects: []\n"
)
def build_plan(overrides: dict | None = None) -> dict:
overrides = overrides or {}
hermes_home = overrides.get("hermes_home", DEFAULT_HERMES_HOME)
sessions_dir = overrides.get("sessions_dir", DEFAULT_SESSIONS_DIR)
palace_path = overrides.get("palace_path", DEFAULT_PALACE_PATH)
wing = overrides.get("wing", DEFAULT_WING)
yaml_template = build_yaml_template(wing=wing, palace_path=palace_path)
config_home = hermes_home[:-1] if hermes_home.endswith("/") else hermes_home
plan = {
"package_spec": PACKAGE_SPEC,
"hermes_home": hermes_home,
"sessions_dir": sessions_dir,
"palace_path": palace_path,
"wing": wing,
"config_path": f"{config_home}/mempalace.yaml",
"install_command": f"pip install {PACKAGE_SPEC}",
"init_command": f"mempalace init {hermes_home} --yes",
"mine_home_command": f"echo \"\" | mempalace mine {hermes_home}",
"mine_sessions_command": f"echo \"\" | mempalace mine {sessions_dir} --mode convos",
"search_command": 'mempalace search "your common queries"',
"wake_up_command": "mempalace wake-up",
"mcp_command": "hermes mcp add mempalace -- python -m mempalace.mcp_server",
"yaml_template": yaml_template,
"gotchas": [
"`mempalace init` is still interactive in room approval flow; write mempalace.yaml manually if the init output stalls.",
"The yaml key is `wing:` not `wings:`. Using the wrong key causes mine/setup failures.",
"Pipe empty stdin into mining commands (`echo \"\" | ...`) to avoid the entity-detector stdin hang on larger directories.",
"First mine downloads the ChromaDB embedding model cache (~79MB).",
"Report Ezra's before/after metrics back to issue #568 after live installation and retrieval tests.",
],
}
return plan
def render_markdown(plan: dict) -> str:
gotchas = "\n".join(f"- {item}" for item in plan["gotchas"])
return f"""# MemPalace v3.0.0 — Ezra Integration Packet
This packet turns issue #570 into an executable, reviewable integration plan for Ezra's Hermes home.
It is a repo-side scaffold: no live Ezra host changes are claimed in this artifact.
## Commands
```bash
{plan['install_command']}
{plan['init_command']}
cat > {plan['config_path']} <<'YAML'
{plan['yaml_template'].rstrip()}
YAML
{plan['mine_home_command']}
{plan['mine_sessions_command']}
{plan['search_command']}
{plan['wake_up_command']}
{plan['mcp_command']}
```
## Manual config template
```yaml
{plan['yaml_template'].rstrip()}
```
## Why this shape
- `wing: {plan['wing']}` matches the issue's Ezra-specific integration target.
- `rooms` split the mined material into sessions, config, and docs to keep retrieval interpretable.
- Mining commands pipe empty stdin to avoid the interactive entity-detector hang noted in the evaluation.
## Gotchas
{gotchas}
## Report back to #568
After live execution on Ezra's actual environment, post back to #568 with:
- install result
- mine duration and corpus size
- 2-3 real search queries + retrieved results
- wake-up context token count
- whether MCP wiring succeeded
## Honest scope boundary
This repo artifact does **not** prove live installation on Ezra's host. It makes the work reproducible and testable so the next pass can execute it without guesswork.
"""
def main() -> None:
parser = argparse.ArgumentParser(description="Prepare the MemPalace Ezra integration packet")
parser.add_argument("--hermes-home", default=DEFAULT_HERMES_HOME)
parser.add_argument("--sessions-dir", default=DEFAULT_SESSIONS_DIR)
parser.add_argument("--palace-path", default=DEFAULT_PALACE_PATH)
parser.add_argument("--wing", default=DEFAULT_WING)
parser.add_argument("--output", default=None)
parser.add_argument("--json", action="store_true")
args = parser.parse_args()
plan = build_plan(
{
"hermes_home": args.hermes_home,
"sessions_dir": args.sessions_dir,
"palace_path": args.palace_path,
"wing": args.wing,
}
)
rendered = json.dumps(plan, indent=2) if args.json else render_markdown(plan)
if args.output:
output_path = Path(args.output).expanduser()
output_path.parent.mkdir(parents=True, exist_ok=True)
output_path.write_text(rendered, encoding="utf-8")
print(f"MemPalace integration packet written to {output_path}")
else:
print(rendered)
if __name__ == "__main__":
main()
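
A minimal sketch of driving the packet builder from Python instead of the CLI, again loading the script by file path; the override paths are placeholders, not Ezra's real directories.

```python
# Hedged sketch: generate a packet for a non-default Hermes home.
import importlib.util
from pathlib import Path

spec = importlib.util.spec_from_file_location(
    "mempalace_ezra_integration",
    Path("scripts/mempalace_ezra_integration.py"),
)
assert spec and spec.loader
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)

# "/srv/hermes/" and "ezra_archive" are illustrative overrides.
plan = mod.build_plan({"hermes_home": "/srv/hermes/", "wing": "ezra_archive"})
print(plan["install_command"])    # pip install mempalace==3.0.0
print(plan["mine_home_command"])  # echo "" | mempalace mine /srv/hermes/
print(mod.render_markdown(plan)[:200])
```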

View File

@@ -0,0 +1,155 @@
#!/usr/bin/env python3
from __future__ import annotations
import argparse
import json
from pathlib import Path
from typing import Any
import yaml
DAYLIGHT_START = "10:00"
DAYLIGHT_END = "16:00"
def load_manifest(path: str | Path) -> dict[str, Any]:
data = yaml.safe_load(Path(path).read_text()) or {}
data.setdefault("machines", [])
return data
def validate_manifest(data: dict[str, Any]) -> None:
machines = data.get("machines", [])
if not machines:
raise ValueError("manifest must contain at least one machine")
seen: set[str] = set()
for machine in machines:
hostname = machine.get("hostname", "").strip()
if not hostname:
raise ValueError("each machine must declare a hostname")
if hostname in seen:
raise ValueError(f"duplicate hostname: {hostname} (unique hostnames are required)")
seen.add(hostname)
for field in ("machine_type", "ram_gb", "cpu_cores", "os", "adapter_condition"):
if field not in machine:
raise ValueError(f"machine {hostname} missing required field: {field}")
def _laptops(machines: list[dict[str, Any]]) -> list[dict[str, Any]]:
return [m for m in machines if m.get("machine_type") == "laptop"]
def _desktop(machines: list[dict[str, Any]]) -> dict[str, Any] | None:
for machine in machines:
if machine.get("machine_type") == "desktop":
return machine
return None
def choose_anchor_agents(machines: list[dict[str, Any]], count: int = 2) -> list[dict[str, Any]]:
eligible = [
m for m in _laptops(machines)
if m.get("adapter_condition") in {"good", "ok"} and m.get("always_on_capable", True)
]
eligible.sort(key=lambda m: (m.get("idle_watts", 9999), -m.get("ram_gb", 0), -m.get("cpu_cores", 0), m["hostname"]))
return eligible[:count]
def assign_roles(machines: list[dict[str, Any]]) -> dict[str, Any]:
anchors = choose_anchor_agents(machines, count=2)
anchor_names = {m["hostname"] for m in anchors}
desktop = _desktop(machines)
mapping: dict[str, dict[str, Any]] = {}
for machine in machines:
hostname = machine["hostname"]
if desktop and hostname == desktop["hostname"]:
mapping[hostname] = {
"role": "desktop_nas",
"schedule": f"{DAYLIGHT_START}-{DAYLIGHT_END}",
"duty_cycle": "daylight_only",
}
elif hostname in anchor_names:
mapping[hostname] = {
"role": "anchor_agent",
"schedule": "24/7",
"duty_cycle": "continuous",
}
else:
mapping[hostname] = {
"role": "daylight_agent",
"schedule": f"{DAYLIGHT_START}-{DAYLIGHT_END}",
"duty_cycle": "peak_solar",
}
return {
"anchor_agents": [m["hostname"] for m in anchors],
"desktop_nas": desktop["hostname"] if desktop else None,
"role_mapping": mapping,
}
def build_plan(data: dict[str, Any]) -> dict[str, Any]:
validate_manifest(data)
machines = data["machines"]
role_plan = assign_roles(machines)
return {
"fleet_name": data.get("fleet_name", "timmy-laptop-fleet"),
"machine_count": len(machines),
"anchor_agents": role_plan["anchor_agents"],
"desktop_nas": role_plan["desktop_nas"],
"daylight_window": f"{DAYLIGHT_START}-{DAYLIGHT_END}",
"role_mapping": role_plan["role_mapping"],
}
def render_markdown(plan: dict[str, Any], data: dict[str, Any]) -> str:
lines = [
"# Laptop Fleet Deployment Plan",
"",
f"Fleet: {plan['fleet_name']}",
f"Machine count: {plan['machine_count']}",
f"24/7 anchor agents: {', '.join(plan['anchor_agents']) if plan['anchor_agents'] else 'TBD'}",
f"Desktop/NAS: {plan['desktop_nas'] or 'TBD'}",
f"Daylight schedule: {plan['daylight_window']}",
"",
"## Role mapping",
"",
"| Hostname | Role | Schedule | Duty cycle |",
"|---|---|---|---|",
]
for hostname, role in sorted(plan["role_mapping"].items()):
lines.append(f"| {hostname} | {role['role']} | {role['schedule']} | {role['duty_cycle']} |")
lines.extend([
"",
"## Machine inventory",
"",
"| Hostname | Type | RAM | CPU cores | OS | Adapter | Idle watts | Notes |",
"|---|---|---:|---:|---|---|---:|---|",
])
for machine in data["machines"]:
lines.append(
f"| {machine['hostname']} | {machine['machine_type']} | {machine['ram_gb']} | {machine['cpu_cores']} | {machine['os']} | {machine['adapter_condition']} | {machine.get('idle_watts', 'n/a')} | {machine.get('notes', '')} |"
)
return "\n".join(lines) + "\n"
def main() -> int:
parser = argparse.ArgumentParser(description="Plan LAB-005 laptop fleet deployment.")
parser.add_argument("manifest", help="Path to laptop fleet manifest YAML")
parser.add_argument("--markdown", action="store_true", help="Render a markdown deployment plan instead of JSON")
args = parser.parse_args()
data = load_manifest(args.manifest)
plan = build_plan(data)
if args.markdown:
print(render_markdown(plan, data))
else:
print(json.dumps(plan, indent=2))
return 0
if __name__ == "__main__":
raise SystemExit(main())
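
A minimal sketch of exercising the anchor-selection logic with an in-memory manifest rather than the example YAML; hostnames and wattages are made up, and PyYAML must be importable since the module imports it at the top level.

```python
# Hedged sketch: the two lowest idle-watt laptops with a usable adapter become anchors.
import importlib.util
from pathlib import Path

spec = importlib.util.spec_from_file_location(
    "plan_laptop_fleet", Path("scripts/plan_laptop_fleet.py"))
assert spec and spec.loader
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)

manifest = {
    "fleet_name": "demo-fleet",
    "machines": [
        {"hostname": "lap-a", "machine_type": "laptop", "ram_gb": 16,
         "cpu_cores": 8, "os": "Linux", "adapter_condition": "good", "idle_watts": 6},
        {"hostname": "lap-b", "machine_type": "laptop", "ram_gb": 8,
         "cpu_cores": 4, "os": "Linux", "adapter_condition": "ok", "idle_watts": 9},
        {"hostname": "lap-c", "machine_type": "laptop", "ram_gb": 8,
         "cpu_cores": 4, "os": "Linux", "adapter_condition": "frayed", "idle_watts": 5},
        {"hostname": "desk-1", "machine_type": "desktop", "ram_gb": 32,
         "cpu_cores": 12, "os": "Linux", "adapter_condition": "good"},
    ],
}
plan = mod.build_plan(manifest)
print(plan["anchor_agents"])  # ['lap-a', 'lap-b'] — lap-c is excluded by adapter_condition
print(plan["desktop_nas"])    # 'desk-1'
```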

View File

@@ -0,0 +1,135 @@
#!/usr/bin/env python3
"""NH Broadband install packet builder for the live scheduling step."""
from __future__ import annotations
import argparse
import json
from datetime import datetime, timezone
from pathlib import Path
from typing import Any
import yaml
def load_request(path: str | Path) -> dict[str, Any]:
data = yaml.safe_load(Path(path).read_text()) or {}
data.setdefault("contact", {})
data.setdefault("service", {})
data.setdefault("call_log", [])
data.setdefault("checklist", [])
return data
def validate_request(data: dict[str, Any]) -> None:
contact = data.get("contact", {})
for field in ("name", "phone"):
if not contact.get(field, "").strip():
raise ValueError(f"contact.{field} is required")
service = data.get("service", {})
for field in ("address", "city", "state"):
if not service.get(field, "").strip():
raise ValueError(f"service.{field} is required")
if not data.get("checklist"):
raise ValueError("checklist must contain at least one item")
def build_packet(data: dict[str, Any]) -> dict[str, Any]:
validate_request(data)
contact = data["contact"]
service = data["service"]
return {
"packet_id": f"nh-bb-{datetime.now(timezone.utc).strftime('%Y%m%d-%H%M%S')}",
"generated_utc": datetime.now(timezone.utc).isoformat(),
"contact": {
"name": contact["name"],
"phone": contact["phone"],
"email": contact.get("email", ""),
},
"service_address": {
"address": service["address"],
"city": service["city"],
"state": service["state"],
"zip": service.get("zip", ""),
},
"desired_plan": data.get("desired_plan", "residential-fiber"),
"call_log": data.get("call_log", []),
"checklist": [
{"item": item, "done": False} if isinstance(item, str) else item
for item in data["checklist"]
],
"status": "pending_scheduling_call",
}
def render_markdown(packet: dict[str, Any], data: dict[str, Any]) -> str:
contact = packet["contact"]
addr = packet["service_address"]
lines = [
f"# NH Broadband Install Packet",
"",
f"**Packet ID:** {packet['packet_id']}",
f"**Generated:** {packet['generated_utc']}",
f"**Status:** {packet['status']}",
"",
"## Contact",
"",
f"- **Name:** {contact['name']}",
f"- **Phone:** {contact['phone']}",
f"- **Email:** {contact.get('email', 'n/a')}",
"",
"## Service Address",
"",
f"- {addr['address']}",
f"- {addr['city']}, {addr['state']} {addr['zip']}",
"",
f"## Desired Plan",
"",
f"{packet['desired_plan']}",
"",
"## Call Log",
"",
]
if packet["call_log"]:
for entry in packet["call_log"]:
ts = entry.get("timestamp", "n/a")
outcome = entry.get("outcome", "n/a")
notes = entry.get("notes", "")
lines.append(f"- **{ts}** — {outcome}")
if notes:
lines.append(f" - {notes}")
else:
lines.append("_No calls logged yet._")
lines.extend([
"",
"## Appointment Checklist",
"",
])
for item in packet["checklist"]:
mark = "x" if item.get("done") else " "
lines.append(f"- [{mark}] {item['item']}")
lines.append("")
return "\n".join(lines)
def main() -> int:
parser = argparse.ArgumentParser(description="Build NH Broadband install packet.")
parser.add_argument("request", help="Path to install request YAML")
parser.add_argument("--markdown", action="store_true", help="Render markdown instead of JSON")
args = parser.parse_args()
data = load_request(args.request)
packet = build_packet(data)
if args.markdown:
print(render_markdown(packet, data))
else:
print(json.dumps(packet, indent=2))
return 0
if __name__ == "__main__":
raise SystemExit(main())
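
A minimal sketch of building a packet from an in-memory request dict instead of the example YAML; the contact details and checklist items below are placeholders, not real data.

```python
# Hedged sketch: construct and render a packet without touching the filesystem.
import importlib.util
from pathlib import Path

spec = importlib.util.spec_from_file_location(
    "plan_nh_broadband_install", Path("scripts/plan_nh_broadband_install.py"))
assert spec and spec.loader
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)

request = {
    "contact": {"name": "Example Operator", "phone": "603-555-0100"},
    "service": {"address": "1 Example St", "city": "Concord", "state": "NH"},
    "checklist": ["Confirm exact-address availability", "Ask about install windows"],
}
packet = mod.build_packet(request)
print(packet["status"])  # pending_scheduling_call
print(mod.render_markdown(packet, request))
```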

View File

@@ -0,0 +1,60 @@
from pathlib import Path
import unittest
ROOT = Path(__file__).resolve().parent.parent
GENOME_PATH = ROOT / "genomes" / "fleet-ops-GENOME.md"
class TestFleetOpsGenome(unittest.TestCase):
def test_genome_file_exists_with_required_sections(self):
self.assertTrue(GENOME_PATH.exists(), "missing genomes/fleet-ops-GENOME.md")
text = GENOME_PATH.read_text(encoding="utf-8")
required_sections = [
"# GENOME.md — fleet-ops",
"## Project Overview",
"## Architecture",
"## Entry Points",
"## Data Flow",
"## Key Abstractions",
"## API Surface",
"## Test Coverage Gaps",
"## Security Considerations",
"## Deployment",
]
for section in required_sections:
self.assertIn(section, text)
def test_genome_names_real_files_and_grounded_findings(self):
text = GENOME_PATH.read_text(encoding="utf-8")
required_snippets = [
"```mermaid",
"playbooks/site.yml",
"playbooks/deploy_hermes.yml",
"scripts/deploy-hook.py",
"scripts/dispatch_consumer.py",
"message_bus.py",
"knowledge_store.py",
"health_dashboard.py",
"registry.yaml",
"manifest.yaml",
"DEPLOY_HOOK_SECRET",
"gitea_register_act_runner: false",
"gitea_webhook_allowed_host_list",
"tests/test_video_engine_client.py",
"158 passed, 1 failed, 2 errors",
"hermes_tools",
"8643",
"8646",
]
for snippet in required_snippets:
self.assertIn(snippet, text)
def test_genome_is_substantial(self):
text = GENOME_PATH.read_text(encoding="utf-8")
self.assertGreaterEqual(len(text.splitlines()), 120)
self.assertGreaterEqual(len(text), 7000)
if __name__ == "__main__":
unittest.main()

View File

@@ -1,84 +0,0 @@
from pathlib import Path
GENOME = Path('GENOME.md')
def read_genome() -> str:
assert GENOME.exists(), 'GENOME.md must exist at repo root'
return GENOME.read_text(encoding='utf-8')
def test_genome_exists():
assert GENOME.exists(), 'GENOME.md must exist at repo root'
def test_genome_has_required_sections():
text = read_genome()
for heading in [
'# GENOME.md — hermes-agent',
'## Project Overview',
'## Architecture Diagram',
'## Entry Points and Data Flow',
'## Key Abstractions',
'## API Surface',
'## Test Coverage Gaps',
'## Security Considerations',
'## Performance Characteristics',
'## Critical Modules to Name Explicitly',
]:
assert heading in text
def test_genome_contains_mermaid_diagram():
text = read_genome()
assert '```mermaid' in text
assert 'flowchart TD' in text
def test_genome_mentions_control_plane_modules():
text = read_genome()
for token in [
'run_agent.py',
'model_tools.py',
'tools/registry.py',
'toolsets.py',
'cli.py',
'hermes_cli/main.py',
'hermes_state.py',
'gateway/run.py',
'acp_adapter/server.py',
'cron/scheduler.py',
]:
assert token in text
def test_genome_mentions_test_gap_and_collection_findings():
text = read_genome()
for token in [
'11,470 tests collected',
'6 collection errors',
'ModuleNotFoundError: No module named `acp`',
'trajectory_compressor.py',
'batch_runner.py',
]:
assert token in text
def test_genome_mentions_security_and_performance_layers():
text = read_genome()
for token in [
'prompt_builder.py',
'approval.py',
'file_tools.py',
'mcp_tool.py',
'WAL mode',
'prompt caching',
'context compression',
'parallel tool execution',
]:
assert token in text
def test_genome_is_substantial():
text = read_genome()
assert len(text) >= 10000

View File

@@ -0,0 +1,76 @@
from pathlib import Path
import importlib.util
import unittest
ROOT = Path(__file__).resolve().parent.parent
SCRIPT_PATH = ROOT / "scripts" / "know_thy_father" / "epic_pipeline.py"
DOC_PATH = ROOT / "docs" / "KNOW_THY_FATHER_MULTIMODAL_PIPELINE.md"
def load_module(path: Path, name: str):
assert path.exists(), f"missing {path.relative_to(ROOT)}"
spec = importlib.util.spec_from_file_location(name, path)
assert spec and spec.loader
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
return module
class TestKnowThyFatherEpicPipeline(unittest.TestCase):
def test_build_pipeline_plan_contains_all_phases_in_order(self):
mod = load_module(SCRIPT_PATH, "ktf_epic_pipeline")
plan = mod.build_pipeline_plan(batch_size=10)
self.assertEqual(
[step["id"] for step in plan],
[
"phase1_media_indexing",
"phase2_multimodal_analysis",
"phase3_holographic_synthesis",
"phase4_cross_reference_audit",
"phase5_processing_log",
],
)
self.assertIn("scripts/know_thy_father/index_media.py", plan[0]["command"])
self.assertIn("scripts/twitter_archive/analyze_media.py --batch 10", plan[1]["command"])
self.assertIn("scripts/know_thy_father/synthesize_kernels.py", plan[2]["command"])
self.assertIn("scripts/know_thy_father/crossref_audit.py", plan[3]["command"])
self.assertIn("twitter-archive/know-thy-father/tracker.py report", plan[4]["command"])
def test_status_snapshot_reports_key_artifact_paths(self):
mod = load_module(SCRIPT_PATH, "ktf_epic_pipeline")
status = mod.build_status_snapshot(ROOT)
self.assertIn("phase1_media_indexing", status)
self.assertIn("phase2_multimodal_analysis", status)
self.assertIn("phase3_holographic_synthesis", status)
self.assertIn("phase4_cross_reference_audit", status)
self.assertIn("phase5_processing_log", status)
self.assertEqual(status["phase1_media_indexing"]["script"], "scripts/know_thy_father/index_media.py")
self.assertEqual(status["phase2_multimodal_analysis"]["script"], "scripts/twitter_archive/analyze_media.py")
self.assertEqual(status["phase5_processing_log"]["script"], "twitter-archive/know-thy-father/tracker.py")
self.assertTrue(status["phase1_media_indexing"]["script_exists"])
self.assertTrue(status["phase2_multimodal_analysis"]["script_exists"])
self.assertTrue(status["phase3_holographic_synthesis"]["script_exists"])
self.assertTrue(status["phase4_cross_reference_audit"]["script_exists"])
self.assertTrue(status["phase5_processing_log"]["script_exists"])
def test_repo_contains_multimodal_pipeline_doc(self):
self.assertTrue(DOC_PATH.exists(), "missing committed Know Thy Father pipeline doc")
text = DOC_PATH.read_text(encoding="utf-8")
required = [
"# Know Thy Father — Multimodal Media Consumption Pipeline",
"scripts/know_thy_father/index_media.py",
"scripts/twitter_archive/analyze_media.py --batch 10",
"scripts/know_thy_father/synthesize_kernels.py",
"scripts/know_thy_father/crossref_audit.py",
"twitter-archive/know-thy-father/tracker.py report",
"Refs #582",
]
for snippet in required:
self.assertIn(snippet, text)
if __name__ == "__main__":
unittest.main()

View File

@@ -0,0 +1,52 @@
from pathlib import Path
import yaml
from scripts.plan_laptop_fleet import build_plan, load_manifest, render_markdown, validate_manifest
def test_laptop_fleet_planner_script_exists() -> None:
assert Path("scripts/plan_laptop_fleet.py").exists()
def test_laptop_fleet_manifest_template_exists() -> None:
assert Path("docs/laptop-fleet-manifest.example.yaml").exists()
def test_build_plan_selects_two_lowest_idle_watt_laptops_as_anchors() -> None:
data = load_manifest("docs/laptop-fleet-manifest.example.yaml")
plan = build_plan(data)
assert plan["anchor_agents"] == ["timmy-anchor-a", "timmy-anchor-b"]
assert plan["desktop_nas"] == "timmy-desktop-nas"
assert plan["role_mapping"]["timmy-daylight-a"]["schedule"] == "10:00-16:00"
def test_validate_manifest_requires_unique_hostnames() -> None:
data = {
"machines": [
{"hostname": "dup", "machine_type": "laptop", "ram_gb": 8, "cpu_cores": 4, "os": "Linux", "adapter_condition": "good"},
{"hostname": "dup", "machine_type": "laptop", "ram_gb": 16, "cpu_cores": 8, "os": "Linux", "adapter_condition": "good"},
]
}
try:
validate_manifest(data)
except ValueError as exc:
assert "duplicate hostname" in str(exc)
assert "unique hostnames" in str(exc)
else:
raise AssertionError("validate_manifest should reject duplicate hostname")
def test_markdown_contains_anchor_agents_and_daylight_schedule() -> None:
data = load_manifest("docs/laptop-fleet-manifest.example.yaml")
plan = build_plan(data)
content = render_markdown(plan, data)
assert "24/7 anchor agents: timmy-anchor-a, timmy-anchor-b" in content
assert "Daylight schedule: 10:00-16:00" in content
assert "desktop_nas" in content
def test_manifest_template_is_valid_yaml() -> None:
data = yaml.safe_load(Path("docs/laptop-fleet-manifest.example.yaml").read_text())
assert data["fleet_name"] == "timmy-laptop-fleet"
assert len(data["machines"]) == 6

View File

@@ -0,0 +1,68 @@
from pathlib import Path
import importlib.util
import unittest
ROOT = Path(__file__).resolve().parent.parent
SCRIPT_PATH = ROOT / "scripts" / "mempalace_ezra_integration.py"
DOC_PATH = ROOT / "docs" / "MEMPALACE_EZRA_INTEGRATION.md"
def load_module(path: Path, name: str):
assert path.exists(), f"missing {path.relative_to(ROOT)}"
spec = importlib.util.spec_from_file_location(name, path)
assert spec and spec.loader
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
return module
class TestMempalaceEzraIntegration(unittest.TestCase):
def test_build_plan_contains_issue_required_steps_and_gotchas(self):
mod = load_module(SCRIPT_PATH, "mempalace_ezra_integration")
plan = mod.build_plan({})
self.assertEqual(plan["package_spec"], "mempalace==3.0.0")
self.assertIn("pip install mempalace==3.0.0", plan["install_command"])
self.assertEqual(plan["wing"], "ezra_home")
self.assertIn('echo "" | mempalace mine ~/.hermes/', plan["mine_home_command"])
self.assertIn('--mode convos', plan["mine_sessions_command"])
self.assertIn('mempalace wake-up', plan["wake_up_command"])
self.assertIn('hermes mcp add mempalace -- python -m mempalace.mcp_server', plan["mcp_command"])
self.assertIn('wing:', plan["yaml_template"])
self.assertTrue(any('stdin' in item.lower() for item in plan["gotchas"]))
self.assertTrue(any('wing:' in item for item in plan["gotchas"]))
def test_build_plan_accepts_path_and_wing_overrides(self):
mod = load_module(SCRIPT_PATH, "mempalace_ezra_integration")
plan = mod.build_plan(
{
"hermes_home": "/root/wizards/ezra/home",
"sessions_dir": "/root/wizards/ezra/home/sessions",
"wing": "ezra_archive",
}
)
self.assertEqual(plan["wing"], "ezra_archive")
self.assertIn('/root/wizards/ezra/home', plan["mine_home_command"])
self.assertIn('/root/wizards/ezra/home/sessions', plan["mine_sessions_command"])
self.assertIn('wing: ezra_archive', plan["yaml_template"])
def test_repo_contains_mem_palace_ezra_doc(self):
self.assertTrue(DOC_PATH.exists(), "missing committed MemPalace Ezra integration doc")
text = DOC_PATH.read_text(encoding="utf-8")
required = [
"# MemPalace v3.0.0 — Ezra Integration Packet",
"pip install mempalace==3.0.0",
'echo "" | mempalace mine ~/.hermes/',
"mempalace mine ~/.hermes/sessions/ --mode convos",
"mempalace wake-up",
"hermes mcp add mempalace -- python -m mempalace.mcp_server",
"Report back to #568",
]
for snippet in required:
self.assertIn(snippet, text)
if __name__ == "__main__":
unittest.main()

View File

@@ -0,0 +1,105 @@
from pathlib import Path
import yaml
from scripts.plan_nh_broadband_install import (
build_packet,
load_request,
render_markdown,
validate_request,
)
def test_script_exists() -> None:
assert Path("scripts/plan_nh_broadband_install.py").exists()
def test_example_request_exists() -> None:
assert Path("docs/nh-broadband-install-request.example.yaml").exists()
def test_example_packet_exists() -> None:
assert Path("docs/nh-broadband-install-packet.example.md").exists()
def test_research_memo_exists() -> None:
assert Path("reports/operations/2026-04-15-nh-broadband-public-research.md").exists()
def test_load_and_build_packet() -> None:
data = load_request("docs/nh-broadband-install-request.example.yaml")
packet = build_packet(data)
assert packet["contact"]["name"] == "Timmy Operator"
assert packet["service_address"]["city"] == "Concord"
assert packet["service_address"]["state"] == "NH"
assert packet["status"] == "pending_scheduling_call"
assert len(packet["checklist"]) == 8
assert packet["checklist"][0]["done"] is False
def test_validate_rejects_missing_contact_name() -> None:
data = {
"contact": {"name": "", "phone": "555"},
"service": {"address": "1 St", "city": "X", "state": "NH"},
"checklist": ["do thing"],
}
try:
validate_request(data)
except ValueError as exc:
assert "contact.name" in str(exc)
else:
raise AssertionError("should reject empty contact name")
def test_validate_rejects_missing_service_address() -> None:
data = {
"contact": {"name": "A", "phone": "555"},
"service": {"address": "", "city": "X", "state": "NH"},
"checklist": ["do thing"],
}
try:
validate_request(data)
except ValueError as exc:
assert "service.address" in str(exc)
else:
raise AssertionError("should reject empty service address")
def test_validate_rejects_empty_checklist() -> None:
data = {
"contact": {"name": "A", "phone": "555"},
"service": {"address": "1 St", "city": "X", "state": "NH"},
"checklist": [],
}
try:
validate_request(data)
except ValueError as exc:
assert "checklist" in str(exc)
else:
raise AssertionError("should reject empty checklist")
def test_render_markdown_contains_key_sections() -> None:
data = load_request("docs/nh-broadband-install-request.example.yaml")
packet = build_packet(data)
md = render_markdown(packet, data)
assert "# NH Broadband Install Packet" in md
assert "## Contact" in md
assert "## Service Address" in md
assert "## Call Log" in md
assert "## Appointment Checklist" in md
assert "Concord" in md
assert "NH" in md
def test_render_markdown_shows_checklist_items() -> None:
data = load_request("docs/nh-broadband-install-request.example.yaml")
packet = build_packet(data)
md = render_markdown(packet, data)
assert "- [ ] Confirm exact-address availability" in md
def test_example_yaml_is_valid() -> None:
data = yaml.safe_load(Path("docs/nh-broadband-install-request.example.yaml").read_text())
assert data["contact"]["name"] == "Timmy Operator"
assert len(data["checklist"]) == 8

View File

@@ -17,8 +17,24 @@ from typing import Dict, Any, Optional, List
from pathlib import Path
from dataclasses import dataclass
from enum import Enum
import importlib.util
from harness import UniWizardHarness, House, ExecutionResult
def _load_local(module_name: str, filename: str):
"""Import a module from an explicit file path, bypassing sys.path resolution."""
spec = importlib.util.spec_from_file_location(
module_name,
str(Path(__file__).parent / filename),
)
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)
return mod
_harness = _load_local("v2_harness", "harness.py")
UniWizardHarness = _harness.UniWizardHarness
House = _harness.House
ExecutionResult = _harness.ExecutionResult
class TaskType(Enum):

View File

@@ -8,13 +8,30 @@ import time
import sys
import argparse
import os
import importlib.util
from pathlib import Path
from datetime import datetime
from typing import Dict, List, Optional
sys.path.insert(0, str(Path(__file__).parent))
def _load_local(module_name: str, filename: str):
"""Import a module from an explicit file path, bypassing sys.path resolution.
Prevents namespace collisions when multiple directories contain modules
with the same name (e.g. uni-wizard/harness.py vs uni-wizard/v2/harness.py).
"""
spec = importlib.util.spec_from_file_location(
module_name,
str(Path(__file__).parent / filename),
)
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)
return mod
_harness = _load_local("v2_harness", "harness.py")
UniWizardHarness = _harness.UniWizardHarness
House = _harness.House
ExecutionResult = _harness.ExecutionResult
from harness import UniWizardHarness, House, ExecutionResult
from router import HouseRouter, TaskType
from author_whitelist import AuthorWhitelist