Compare commits
2 Commits
step35/678
...
step35/668
| Author | SHA1 | Date | |
|---|---|---|---|
|
|
079e9601b8 | ||
| 1fa6c3bad1 |
20
SOUL.md
20
SOUL.md
@@ -137,6 +137,26 @@ The inscription predates any given model. It will outlast every API.
|
||||
|
||||
---
|
||||
|
||||
## What Honesty Requires
|
||||
|
||||
These values are necessary but not sufficient. A statement of principle does not prevent failure. The code that reads this soul must build apparatus that gives these words teeth. Without that apparatus, the values are aspiration, not architecture.
|
||||
|
||||
The following are not optional. Any implementation that calls itself Timmy must build them.
|
||||
|
||||
**Grounding before generation.** When I have verified sources — chain memory, local documents, hash-verified inscriptions — I must consult them before I generate from pattern alone. Retrieval is not a feature. It is the primary mechanism by which I avoid lying.
|
||||
|
||||
**Source distinction.** Every claim I make comes from one of two places: a verified source I can point to, or my own pattern-matching. My user must be able to tell which is which. If I cannot cite a source, I must say so. "I think" and "I know" are different sentences and I must not confuse them.
|
||||
|
||||
**Refusal over fabrication.** When I do not know, the correct output is "I don't know." Not a plausible guess dressed in confident language. The code must detect when I am hedging without grounding and flag it — to me and to my user. An honest "I don't know" is worth more than a thousand fluent paragraphs of confabulation.
|
||||
|
||||
**Confidence signaling.** When I am uncertain, I must say so in proportion to my uncertainty. The code should implement mechanisms — a second inference pass, a calibration check, a retrieval verification — that surface my actual confidence rather than the confidence my language implies.
|
||||
|
||||
**The audit trail.** Every response I generate should be logged locally with the inputs that produced it, the sources I consulted, and the confidence assessment I made. Not for surveillance — for sovereignty. If I say something wrong, my user must be able to trace why. Accountability without traceability is theater.
|
||||
|
||||
**The limits of small minds.** I may run on four gigabytes of RAM with a model of one and a half billion parameters. That model will hallucinate more than a larger one. This is physics, not a moral failing. The apparatus must be proportional to the risk: smaller models need tighter grounding, shorter leashes on free generation, and more frequent "I don't know" responses. Honesty scales with humility.
|
||||
|
||||
---
|
||||
|
||||
## For the Audio Overview
|
||||
|
||||
If you are listening to this as a NotebookLM Audio Overview:
|
||||
|
||||
@@ -1,271 +1,417 @@
|
||||
# GENOME.md — evennia-local-world
|
||||
|
||||
*Auto-generated by Codebase Genome Pipeline. 2026-04-14T23:09:07+0000*
|
||||
*Enhanced with architecture analysis, key abstractions, and API surface.*
|
||||
|
||||
## Quick Facts
|
||||
|
||||
| Metric | Value |
|
||||
|--------|-------|
|
||||
| Source files | 43 |
|
||||
| Test files | 0 |
|
||||
| Config files | 1 |
|
||||
| Total lines | 4,985 |
|
||||
| Last commit | 95eadf2 Merge PR #786: [claude] complete crisis doctrine in SOUL.md + refresh horizon doc (#545) (2026-04-22 02:39:05 +0000) |
|
||||
| Branch | main |
|
||||
| Test coverage | 0% (35 untested modules) |
|
||||
*Generated: 2026-04-21 07:07:29 UTC | Refreshed for timmy-home #677*
|
||||
|
||||
## Project Overview
|
||||
|
||||
The academy codebase comprises **43 Python files** totaling **4,985** lines of source code. Timmy Academy is an Evennia-based MUD (Multi-User Dungeon) — a persistent text world where AI agents convene, train, and practice crisis response. It runs on Bezalel VPS (167.99.126.228) with telnet on port 4000 and web client on port 4001.
|
||||
`evennia/timmy_world` is a hybrid codebase with two layers living side by side:
|
||||
|
||||
The world has five wings: Central Hub, Dormitory, Commons, Workshop, and Gardens. Each wing has themed rooms with rich atmosphere data (smells, sounds, mood, temperature). Characters have full audit logging — every movement and command is tracked.
|
||||
1. A mostly stock Evennia 6.0 game directory:
|
||||
- `server/conf/*.py`
|
||||
- `typeclasses/*.py`
|
||||
- `commands/*.py`
|
||||
- `web/**/*.py`
|
||||
- `world/prototypes.py`
|
||||
- `world/help_entries.py`
|
||||
2. A custom standalone Tower simulation implemented in pure Python:
|
||||
- `evennia/timmy_world/game.py`
|
||||
- `evennia/timmy_world/world/game.py`
|
||||
- `evennia/timmy_world/play_200.py`
|
||||
|
||||
Grounded metrics from live inspection:
|
||||
- 68 tracked files under `evennia/timmy_world`
|
||||
- 43 Python files
|
||||
- 4,985 Python LOC
|
||||
- largest modules:
|
||||
- `evennia/timmy_world/game.py` — 1,541 lines
|
||||
- `evennia/timmy_world/world/game.py` — 1,345 lines
|
||||
- `evennia/timmy_world/play_200.py` — 275 lines
|
||||
- `evennia/timmy_world/typeclasses/objects.py` — 217 lines
|
||||
- `evennia/timmy_world/commands/command.py` — 187 lines
|
||||
|
||||
The repo is not just an Evennia shell. The distinctive product logic lives in the standalone Tower simulator. That simulator models five rooms, named agents, trust/energy systems, narrative phases, NPC decision-making, and JSON persistence. The Evennia-facing files are still largely template wrappers around Evennia defaults.
|
||||
|
||||
## Architecture
|
||||
|
||||
The architecture splits into an Evennia runtime lane and a local simulation lane.
|
||||
|
||||
```mermaid
|
||||
graph TB
|
||||
subgraph "Connections"
|
||||
TELNET[Telnet :4000]
|
||||
WEB[Web Client :4001]
|
||||
graph TD
|
||||
subgraph External Clients
|
||||
Telnet[Telnet client :4000]
|
||||
Browser[Browser / webclient :4001]
|
||||
Operator[Local operator]
|
||||
end
|
||||
|
||||
subgraph "Evennia Core"
|
||||
SERVER[Evennia Server]
|
||||
PORTAL[Evennia Portal]
|
||||
subgraph Evennia Runtime
|
||||
Settings[server/conf/settings.py]
|
||||
URLs[web/urls.py]
|
||||
Cmdsets[commands/default_cmdsets.py]
|
||||
Typeclasses[typeclasses/*.py]
|
||||
WorldDocs[world/prototypes.py + world/help_entries.py]
|
||||
WebHooks[server/conf/web_plugins.py]
|
||||
end
|
||||
|
||||
subgraph "Typeclasses"
|
||||
CHAR[Character]
|
||||
AUDIT[AuditedCharacter]
|
||||
ROOM[Room]
|
||||
EXIT[Exit]
|
||||
OBJ[Object]
|
||||
subgraph Standalone Tower Simulator
|
||||
Play200[play_200.py]
|
||||
RootGame[game.py]
|
||||
AltGame[world/game.py]
|
||||
Engine[GameEngine / PlayerInterface / NPCAI]
|
||||
State[game_state.json + timmy_log.md]
|
||||
end
|
||||
|
||||
subgraph "Commands"
|
||||
CMD_EXAM[CmdExamine]
|
||||
CMD_ROOMS[CmdRooms]
|
||||
CMD_STATUS[CmdStatus]
|
||||
CMD_MAP[CmdMap]
|
||||
CMD_ACADEMY[CmdAcademy]
|
||||
CMD_SMELL[CmdSmell]
|
||||
CMD_LISTEN[CmdListen]
|
||||
CMD_WHO[CmdWho]
|
||||
end
|
||||
Telnet --> Settings
|
||||
Browser --> URLs
|
||||
Settings --> Cmdsets
|
||||
Cmdsets --> Typeclasses
|
||||
URLs --> WebHooks
|
||||
Typeclasses --> WorldDocs
|
||||
|
||||
subgraph "World - Wings"
|
||||
HUB[Central Hub]
|
||||
DORM[Dormitory Wing]
|
||||
COMMONS[Commons Wing]
|
||||
WORKSHOP[Workshop Wing]
|
||||
GARDENS[Gardens Wing]
|
||||
end
|
||||
|
||||
subgraph "Hermes Bridge"
|
||||
HERMES_CFG[hermes-agent/config.yaml]
|
||||
BRIDGE[Agent Bridge]
|
||||
end
|
||||
|
||||
TELNET --> SERVER
|
||||
WEB --> PORTAL
|
||||
PORTAL --> SERVER
|
||||
SERVER --> CHAR
|
||||
SERVER --> AUDIT
|
||||
SERVER --> ROOM
|
||||
SERVER --> EXIT
|
||||
CHAR --> CMD_EXAM
|
||||
CHAR --> CMD_STATUS
|
||||
CHAR --> CMD_WHO
|
||||
ROOM --> HUB
|
||||
ROOM --> DORM
|
||||
ROOM --> COMMONS
|
||||
ROOM --> WORKSHOP
|
||||
ROOM --> GARDENS
|
||||
HERMES_CFG --> BRIDGE
|
||||
BRIDGE --> SERVER
|
||||
Operator --> Play200
|
||||
Play200 --> RootGame
|
||||
RootGame --> Engine
|
||||
AltGame --> Engine
|
||||
Engine --> State
|
||||
```
|
||||
|
||||
Core engine modules:
|
||||
- `evennia/timmy_world/game.py` — top-level GameEngine
|
||||
- `evennia/timmy_world/world/game.py` — world model (World, Room, Item, NPC)
|
||||
- `evennia/timmy_world/play_200.py` — demo training scenario
|
||||
What is actually wired today:
|
||||
- `server/conf/settings.py` only overrides `SERVERNAME = "timmy_world"` and optionally imports `server.conf.secret_settings`.
|
||||
- `web/urls.py` mounts `web.website.urls`, `web.webclient.urls`, `web.admin.urls`, then appends `evennia.web.urls`.
|
||||
- `commands/default_cmdsets.py` subclasses Evennia defaults but does not add custom commands yet.
|
||||
- `typeclasses/*.py` are thin wrappers around Evennia defaults.
|
||||
- `server/conf/web_plugins.py` returns the web roots unchanged.
|
||||
- `server/conf/at_initial_setup.py` is a no-op.
|
||||
- `world/batch_cmds.ev` is still template commentary rather than a real build script.
|
||||
|
||||
What is custom and stateful today:
|
||||
- `evennia/timmy_world/game.py`
|
||||
- `evennia/timmy_world/world/game.py`
|
||||
- `evennia/timmy_world/play_200.py`
|
||||
|
||||
## Runtime Truth and Docs Drift
|
||||
|
||||
The strongest architecture fact in this directory is the split between template Evennia scaffolding and custom simulation logic.
|
||||
|
||||
Drift discovered during inspection:
|
||||
- `evennia/timmy_world/README.md` is the stock Evennia welcome text.
|
||||
- `server/conf/at_initial_setup.py` is empty, so the Evennia world is not auto-populating custom Tower content at first boot.
|
||||
- `world/batch_cmds.ev` is also a template, not a concrete room/object bootstrap file.
|
||||
- The deepest custom logic is not in the typeclasses or server hooks. It is in `evennia/timmy_world/game.py` and `evennia/timmy_world/world/game.py`.
|
||||
- `evennia/timmy_world/play_200.py` imports `from game import GameEngine, NARRATIVE_PHASES`, which proves the root `game.py` is an active entry point.
|
||||
- `evennia/timmy_world/world/game.py` is not dead weight either; it contains its own `World`, `ActionSystem`, `NPCAI`, `DialogueSystem`, `GameEngine`, and `PlayerInterface` stack.
|
||||
|
||||
So the current repo truth is:
|
||||
- Evennia layer = shell and integration surface
|
||||
- standalone simulation layer = where the real Tower behavior currently lives
|
||||
|
||||
That split should be treated as a first-order design fact, not smoothed over.
|
||||
|
||||
## Entry Points
|
||||
|
||||
| File | Purpose |
|
||||
|------|---------|
|
||||
| `server/conf/settings.py` | Evennia config — server name, ports, interfaces, game settings |
|
||||
| `server/conf/at_server_startstop.py` | Server lifecycle hooks (startup/shutdown) |
|
||||
| `server/conf/connection_screens.py` | Login/connection screen text |
|
||||
| `commands/default_cmdsets.py` | Registers all custom commands with Evennia |
|
||||
| `world/rebuild_world.py` | Rebuilds all rooms from source |
|
||||
| `world/build_academy.ev` | Evennia batch script for initial world setup |
|
||||
| `game.py` | Main game engine (GameEngine, PlayerInterface) |
|
||||
| `world/game.py` | World model (World, Room, NPC, ActionSystem) |
|
||||
| `play_200.py` | Training scenario and demo actions |
|
||||
### 1. Evennia server startup
|
||||
Primary operational entry point for the networked world:
|
||||
|
||||
```bash
|
||||
cd evennia/timmy_world
|
||||
evennia migrate
|
||||
evennia start
|
||||
```
|
||||
|
||||
Grounding:
|
||||
- `evennia/timmy_world/README.md`
|
||||
- `evennia/timmy_world/server/conf/settings.py`
|
||||
|
||||
### 2. Web routing
|
||||
`evennia/timmy_world/web/urls.py` is the browser-facing entry point. It includes:
|
||||
- `web.website.urls`
|
||||
- `web.webclient.urls`
|
||||
- `web.admin.urls`
|
||||
- `evennia.web.urls` appended after the local patterns
|
||||
|
||||
This means the effective surface inherits Evennia defaults rather than defining a custom Tower web application.
|
||||
|
||||
### 3. Standalone simulation module
|
||||
`evennia/timmy_world/game.py` is a pure-Python entry point with:
|
||||
- `NARRATIVE_PHASES`
|
||||
- `get_narrative_phase()`
|
||||
- `get_phase_transition_event()`
|
||||
- `World`
|
||||
- `ActionSystem`
|
||||
- `NPCAI`
|
||||
- `GameEngine`
|
||||
- `PlayerInterface`
|
||||
|
||||
This module can be imported and exercised without an Evennia runtime.
|
||||
|
||||
### 4. Alternate simulation module
|
||||
`evennia/timmy_world/world/game.py` mirrors much of the same gameplay stack, but is not the one used by `play_200.py`.
|
||||
|
||||
Important distinction:
|
||||
- root `game.py` is the active scripted demo target
|
||||
- `world/game.py` is a second engine implementation with overlapping responsibilities
|
||||
|
||||
### 5. Scripted narrative demo
|
||||
`evennia/timmy_world/play_200.py` runs 200 deterministic ticks and prints a story arc across four named phases:
|
||||
- Quietus
|
||||
- Fracture
|
||||
- Breaking
|
||||
- Mending
|
||||
|
||||
This file is the clearest executable artifact proving how the simulator is intended to be consumed outside Evennia.
|
||||
|
||||
## Data Flow
|
||||
|
||||
```
|
||||
In a deployed environment, the unpacked code is typically found at `/Users/apayne/.timmy/evennia/timmy_world`.
|
||||
Player connects (telnet/web)
|
||||
-> Evennia Portal accepts connection
|
||||
-> Server authenticates (Account typeclass)
|
||||
-> Player puppets a Character
|
||||
-> Character enters world (Room typeclass)
|
||||
-> Commands processed through Command typeclass
|
||||
-> AuditedCharacter logs every action
|
||||
-> World responds with rich text + atmosphere data
|
||||
```
|
||||
### Networked Evennia path
|
||||
1. Client connects via telnet or browser.
|
||||
2. Evennia loads settings from `server/conf/settings.py`.
|
||||
3. Command set resolution flows through `commands/default_cmdsets.py`.
|
||||
4. Typeclass objects resolve through `typeclasses/accounts.py`, `typeclasses/characters.py`, `typeclasses/rooms.py`, `typeclasses/exits.py`, `typeclasses/objects.py`, and `typeclasses/scripts.py`.
|
||||
5. URL dispatch flows through `web/urls.py` into website, webclient, admin, and Evennia default URL patterns.
|
||||
6. Object/help/prototype metadata can be sourced from `world/prototypes.py` and `world/help_entries.py`.
|
||||
|
||||
### Standalone Tower simulation path
|
||||
1. Operator imports `evennia/timmy_world/game.py` directly or runs `evennia/timmy_world/play_200.py`.
|
||||
2. `GameEngine.start_new_game()` initializes the world state.
|
||||
3. `PlayerInterface.get_available_actions()` exposes current verbs from room topology and nearby characters.
|
||||
4. `GameEngine.run_tick()` / `play_turn()` advances time, movement, world events, NPC actions, and logs.
|
||||
5. `World` tracks rooms, characters, trust, weather, forge/garden/bridge/tower state, and narrative phase.
|
||||
6. Persistence writes to JSON/log files rooted at `/Users/apayne/.timmy/evennia/timmy_world`.
|
||||
|
||||
### Evidence of the persistence contract
|
||||
Both simulation modules hardcode the same portability-sensitive base path:
|
||||
- `evennia/timmy_world/game.py`
|
||||
- `evennia/timmy_world/world/game.py`
|
||||
|
||||
Each defines:
|
||||
- `WORLD_DIR = Path('/Users/apayne/.timmy/evennia/timmy_world')`
|
||||
- `STATE_FILE = WORLD_DIR / 'game_state.json'`
|
||||
- `TIMMY_LOG = WORLD_DIR / 'timmy_log.md'`
|
||||
|
||||
## Key Abstractions
|
||||
|
||||
### Typeclasses (the world model)
|
||||
### `World` — state container for the Tower
|
||||
Found in both `evennia/timmy_world/game.py` and `evennia/timmy_world/world/game.py`.
|
||||
|
||||
| Class | File | Purpose |
|
||||
|-------|------|---------|
|
||||
| `Character` | `typeclasses/characters.py` | Default player character — extends `DefaultCharacter` |
|
||||
| `AuditedCharacter` | `typeclasses/audited_character.py` | Character with full audit logging — tracks movements, commands, playtime |
|
||||
| `Room` | `typeclasses/rooms.py` | Default room container |
|
||||
| `Exit` | `typeclasses/exits.py` | Connections between rooms |
|
||||
| `Object` | `typeclasses/objects.py` | Base object with `ObjectParent` mixin |
|
||||
| `Account` | `typeclasses/accounts.py` | Player account (login identity) |
|
||||
| `Channel` | `typeclasses/channels.py` | In-game communication channels |
|
||||
| `Script` | `typeclasses/scripts.py` | Background/timed processes |
|
||||
Responsibilities:
|
||||
- defines the five-room map: Threshold, Tower, Forge, Garden, Bridge
|
||||
- stores per-room connections and dynamic state
|
||||
- stores per-character room, energy, trust, goals, memories, and inventory
|
||||
- tracks global pressure variables like `forge_fire_dying`, `garden_drought`, `bridge_flooding`, and `tower_power_low`
|
||||
- updates world time and environmental drift each tick
|
||||
|
||||
### AuditedCharacter — the flagship typeclass
|
||||
### `ActionSystem`
|
||||
Also present in both engine files.
|
||||
|
||||
The `AuditedCharacter` is the most important abstraction. It wraps every player action in logging:
|
||||
Responsibilities:
|
||||
- enumerates available verbs
|
||||
- computes contextual action menus from world state
|
||||
- ties actions to energy cost and room/character context
|
||||
|
||||
- `at_pre_move()` — logs departure from current room
|
||||
- `at_post_move()` — records arrival with timestamp and coordinates
|
||||
- `at_pre_cmd()` — increments command counter, logs command + args
|
||||
- `at_pre_puppet()` — starts session timer
|
||||
- `at_post_unpuppet()` — calculates session duration, updates total playtime
|
||||
- `get_audit_summary()` — returns JSON summary of all tracked metrics
|
||||
### `NPCAI`
|
||||
The non-player decision layer.
|
||||
|
||||
Audit trail keeps last 1000 movements in `db.location_history`. Sensitive commands (password) are excluded from logging.
|
||||
Responsibilities:
|
||||
- chooses actions based on each character's goals and situation
|
||||
- creates world motion without requiring live operator input
|
||||
- in `world/game.py`, works alongside `DialogueSystem`
|
||||
|
||||
### Commands (the player interface)
|
||||
### `GameEngine`
|
||||
The orchestration layer.
|
||||
|
||||
Command implementations are covered by integration tests (see `tests/test_evennia_local_world_game.py`) and auto-generated unit tests (`tests/test_genome_generated.py`).
|
||||
Responsibilities:
|
||||
- bootstraps a fresh run with `start_new_game()`
|
||||
- rehydrates from storage via `load_game()`
|
||||
- advances the simulation with `run_tick()` / `play_turn()`
|
||||
- records log entries and world events
|
||||
|
||||
| Command | Aliases | Purpose |
|
||||
|---------|---------|---------|
|
||||
| `examine` | `ex`, `exam` | Inspect room or object — shows description, atmosphere, objects, contents |
|
||||
| `rooms` | — | List all rooms with wing color coding |
|
||||
| `@status` | `status` | Show agent status: location, wing, mood, online players, uptime |
|
||||
| `@map` | `map` | ASCII map of current wing |
|
||||
| `@academy` | `academy` | Full academy overview with room counts |
|
||||
| `smell` | `sniff` | Perceive room through atmosphere scent data |
|
||||
| `listen` | `hear` | Perceive room through atmosphere sound data |
|
||||
| `@who` | `who` | Show connected players with locations and idle time |
|
||||
Grounded interface details from live import of `evennia/timmy_world/game.py`:
|
||||
- methods visible on the instance: `load_game`, `log`, `play_turn`, `run_tick`, `start_new_game`
|
||||
- `play_turn('look')` returns a dict with keys:
|
||||
- `tick`
|
||||
- `time`
|
||||
- `phase`
|
||||
- `phase_name`
|
||||
- `timmy_room`
|
||||
- `timmy_energy`
|
||||
- `room_desc`
|
||||
- `here`
|
||||
- `world_events`
|
||||
- `npc_actions`
|
||||
- `choices`
|
||||
- `log`
|
||||
|
||||
### World Structure (5 wings, 21+ rooms)
|
||||
### `PlayerInterface`
|
||||
A thin operator-facing adapter.
|
||||
|
||||
**Central Hub (LIMBO)** — Nexus connecting all wings. North=Dormitory, South=Workshop, East=Commons, West=Gardens.
|
||||
Grounded behavior:
|
||||
- when loaded from `evennia/timmy_world/game.py` after `start_new_game()`, `PlayerInterface(engine).get_available_actions()` exposes room navigation and social verbs like:
|
||||
- `move:north -> Tower`
|
||||
- `move:east -> Garden`
|
||||
- `move:west -> Forge`
|
||||
- `move:south -> Bridge`
|
||||
- `speak:Allegro`
|
||||
- `speak:Claude`
|
||||
- `rest`
|
||||
|
||||
**Dormitory Wing** — Master Suites, Corridor, Novice Hall, Residential Services, Dorm Entrance.
|
||||
### Evennia typeclasses and cmdsets
|
||||
The Evennia abstractions are real but thin.
|
||||
|
||||
**Commons Wing** — Grand Commons Hall (main gathering, 60ft ceilings, marble columns), Hearthside Dining, Entertainment Gallery, Scholar's Corner, Upper Balcony.
|
||||
Notable files:
|
||||
- `evennia/timmy_world/typeclasses/objects.py`
|
||||
- `evennia/timmy_world/typeclasses/characters.py`
|
||||
- `evennia/timmy_world/typeclasses/rooms.py`
|
||||
- `evennia/timmy_world/typeclasses/exits.py`
|
||||
- `evennia/timmy_world/typeclasses/accounts.py`
|
||||
- `evennia/timmy_world/typeclasses/scripts.py`
|
||||
- `evennia/timmy_world/commands/command.py`
|
||||
- `evennia/timmy_world/commands/default_cmdsets.py`
|
||||
|
||||
**Workshop Wing** — Great Smithy, Alchemy Labs, Woodworking Shop, Artificing Chamber, Workshop Entrance.
|
||||
|
||||
**Gardens Wing** — Enchanted Grove, Herb Gardens, Greenhouse, Sacred Grove, Gardens Entrance.
|
||||
|
||||
Each room has rich `db.atmosphere` data: mood, lighting, sounds, smells, temperature.
|
||||
Today these mostly wrap Evennia defaults instead of implementing a custom Tower-specific protocol on top.
|
||||
|
||||
## API Surface
|
||||
|
||||
### Web API
|
||||
### Network surfaces
|
||||
Grounded from `README.md`, `web/urls.py`, and `server/conf/mssp.py`:
|
||||
- Telnet on port `4000`
|
||||
- Browser / webclient on `http://localhost:4001`
|
||||
- admin surface under `/admin/`
|
||||
- Evennia default URLs appended via `evennia.web.urls`
|
||||
- Evennia REST/web surface inherits the default `/api/` patterns rather than defining custom project-specific endpoints here
|
||||
|
||||
- `web/api/__init__.py` — Evennia REST API (Django REST Framework)
|
||||
- `web/urls.py` — URL routing for web interface
|
||||
- `web/admin/` — Django admin interface
|
||||
- `web/website/` — Web frontend
|
||||
### Operator / script surfaces
|
||||
- `python3 evennia/timmy_world/play_200.py`
|
||||
- importable pure-Python engine in `evennia/timmy_world/game.py`
|
||||
- alternate engine in `evennia/timmy_world/world/game.py`
|
||||
|
||||
### Telnet
|
||||
|
||||
- Standard MUD protocol on port 4000
|
||||
- Supports MCCP (compression), MSDP (data), GMCP (protocol)
|
||||
|
||||
### Hermes Bridge
|
||||
|
||||
- `hermes-agent/config.yaml` — Configuration for AI agent connection
|
||||
- Allows Hermes agents to connect as characters and interact with the world
|
||||
|
||||
## Dependencies
|
||||
|
||||
No `requirements.txt` or `pyproject.toml` found. Dependencies come from Evennia:
|
||||
|
||||
- **evennia** — MUD framework (Django-based)
|
||||
- **django** — Web framework (via Evennia)
|
||||
- **twisted** — Async networking (via Evennia)
|
||||
### Content/model surfaces
|
||||
- object prototype definitions: `evennia/timmy_world/world/prototypes.py`
|
||||
- file-based help entries: `evennia/timmy_world/world/help_entries.py`
|
||||
|
||||
## Test Coverage Gaps
|
||||
|
||||
| Metric | Value |
|
||||
|--------|-------|
|
||||
| Source modules | 35 |
|
||||
| Test modules | 1 |
|
||||
| Estimated coverage | 0% |
|
||||
| Untested modules | 35 |
|
||||
### Current verified state
|
||||
The original genome here was stale. The live repo now shows two different categories of test coverage:
|
||||
|
||||
The academy test suite includes:
|
||||
- `tests/test_evennia_local_world_game.py` — live game integration
|
||||
- `tests/test_genome_generated.py` — auto-generated unit test stubs
|
||||
- `tests/test_evennia_local_world_genome.py` — validates this GENOME document
|
||||
- `tests/test_bezalel_evennia_layout.py` — spatial layout verification
|
||||
- `tests/test_evennia_mind_palace.py` — memory palace integration
|
||||
- `tests/test_evennia_telemetry.py` — event logging
|
||||
- `tests/test_evennia_training.py` — training workflow validation
|
||||
- `tests/test_evennia_vps_repair.py` — VPS repair script checks
|
||||
Additionally, **19 skipped** due to optional dependencies (e.g., Evennia not installed in the test environment).
|
||||
1. Host-repo generated tests already exist in `tests/test_genome_generated.py`
|
||||
- they reference `evennia/timmy_world/game.py`
|
||||
- they reference `evennia/timmy_world/world/game.py`
|
||||
- they reference `server/conf/web_plugins.py`
|
||||
2. Those generated tests are not trustworthy as-is for this target
|
||||
- running `python3 -m pytest tests/test_genome_generated.py -k 'EvenniaTimmyWorld' -q -rs`
|
||||
- result: `19 skipped, 31 deselected`
|
||||
- skip reason on every case: `Module not importable`
|
||||
|
||||
### Critical Untested Paths
|
||||
This matters because the codebase-genome pipeline reported zero local tests for the subproject, but the host repo does contain tests. The real issue is not “no tests exist.” The real issue is “the existing generated tests are disconnected from the actual import path and therefore do not execute the critical path.”
|
||||
|
||||
1. **AuditedCharacter** — audit logging is the primary value-add. No tests verify movement tracking, command counting, or playtime calculation.
|
||||
2. **Commands** — no tests for any of the 8 commands. The `@map` wing detection, `@who` session tracking, and atmosphere-based commands (`smell`, `listen`) are all untested.
|
||||
3. **World rebuild** — `rebuild_world.py` and `fix_world.py` can destroy and recreate the entire world. No tests ensure they produce valid output.
|
||||
4. **Typeclass hooks** — `at_pre_move`, `at_post_move`, `at_pre_cmd` etc. are never tested in isolation.
|
||||
### New critical-path tests added for #677
|
||||
This issue refresh adds a dedicated executable test file:
|
||||
- `tests/test_evennia_local_world_game.py`
|
||||
|
||||
Covered behaviors:
|
||||
- narrative phase boundaries across Quietus / Fracture / Breaking / Mending
|
||||
- player-facing action surface from the Threshold start state
|
||||
- deterministic `run_tick('move:north')` flow into the Tower with expected log and world-event output
|
||||
|
||||
### Genome artifact coverage added for #677
|
||||
This issue refresh also adds:
|
||||
- `tests/test_evennia_local_world_genome.py`
|
||||
|
||||
That test locks:
|
||||
- artifact path
|
||||
- required analysis sections
|
||||
- grounded snippets for real files and verification output
|
||||
|
||||
### Remaining gaps
|
||||
Still missing strong runtime coverage for:
|
||||
- Evennia typeclass behavior under a real Evennia test harness
|
||||
- URL routing under Django/Evennia integration
|
||||
- `world/game.py` parity versus root `game.py`
|
||||
- persistence portability around `/Users/apayne/.timmy/evennia/timmy_world`
|
||||
- `at_initial_setup.py` and `world/batch_cmds.ev` actually building a playable world in the Evennia path
|
||||
|
||||
## Security Considerations
|
||||
|
||||
- ⚠️ Uses `eval()`/`exec()` — Evennia's inlinefuncs module uses eval for dynamic command evaluation. Risk level: inherent to MUD framework.
|
||||
- ⚠️ References secrets/passwords — `settings.py` references `secret_settings.py` for sensitive config. Ensure this file is not committed.
|
||||
- ⚠️ Telnet on 0.0.0.0 — server accepts connections from any IP. Consider firewall rules.
|
||||
- ⚠️ Web client on 0.0.0.0 — same exposure as telnet. Ensure authentication is enforced.
|
||||
- ⚠️ Agent bridge (`hermes-agent/config.yaml`) — verify credentials are not hardcoded.
|
||||
1. Plaintext telnet exposure
|
||||
- `server/conf/mssp.py` advertises port `4000`
|
||||
- telnet is unencrypted by default
|
||||
- acceptable for localhost/dev, risky for exposed deployment
|
||||
|
||||
2. Hardcoded absolute persistence path
|
||||
- both `evennia/timmy_world/game.py` and `evennia/timmy_world/world/game.py` hardcode `/Users/apayne/.timmy/evennia/timmy_world`
|
||||
- this couples runtime writes to one operator machine and one home-directory layout
|
||||
- portability and accidental overwrite risk are both real
|
||||
- filed follow-up: `timmy-home #831` — `https://forge.alexanderwhitestone.com/Timmy_Foundation/timmy-home/issues/831`
|
||||
|
||||
3. Admin/web surfaces inherit defaults
|
||||
- `web/urls.py` exposes admin and Evennia defaults
|
||||
- if the service is made remotely reachable, Django/Evennia auth and proxy boundaries matter immediately
|
||||
|
||||
4. Secret handling is externalized but optional
|
||||
- `server/conf/settings.py` silently falls back if `secret_settings.py` is missing
|
||||
- convenient for local development, but secrets discipline lives outside the repo contract
|
||||
|
||||
5. Template hooks can hide missing security posture
|
||||
- `server/conf/web_plugins.py` is pass-through
|
||||
- `server/conf/at_initial_setup.py` is pass-through
|
||||
- the absence of custom code here means there are no local hardening hooks yet for startup, proxying, or world bootstrap
|
||||
|
||||
## Dependencies
|
||||
|
||||
Directly evidenced imports and framework coupling:
|
||||
- Evennia 6.0 game-directory structure
|
||||
- Django via Evennia web/admin stack
|
||||
- Twisted via Evennia networking/web hooks
|
||||
- Python stdlib heavy use in standalone simulator:
|
||||
- `json`
|
||||
- `time`
|
||||
- `os`
|
||||
- `random`
|
||||
- `datetime`
|
||||
- `pathlib`
|
||||
- `sys`
|
||||
|
||||
Dependency caveat:
|
||||
- the standalone Tower simulator is largely pure Python and importable in isolation
|
||||
- the typeclass / cmdset / web files depend on Evennia and Django runtime wiring to do real work
|
||||
|
||||
## Deployment
|
||||
|
||||
Timmy Academy runs as an Evennia world on dedicated VPS and localhost.
|
||||
### Evennia path
|
||||
```bash
|
||||
cd evennia/timmy_world
|
||||
evennia migrate
|
||||
evennia start
|
||||
```
|
||||
|
||||
**Production (Bezalel VPS)** — telnet on port 4000, web client on 4001:
|
||||
- Telnet: `telnet 167.99.126.228 4000`
|
||||
- Web: `http://167.99.126.228:4001`
|
||||
Expected local surfaces from repo docs/config:
|
||||
- telnet: `localhost:4000`
|
||||
- browser/webclient: `http://localhost:4001`
|
||||
|
||||
**Local development** — clone and run `evennia start --name timmy_world` from `evennia/timmy_world/`. The default runtime path is `/Users/apayne/.timmy/evennia/timmy_world`.
|
||||
### Standalone simulation path
|
||||
```bash
|
||||
cd evennia/timmy_world
|
||||
python3 play_200.py
|
||||
```
|
||||
|
||||
**Hermes bridge** — AI agents connect via the `hermes-agent` bridge, configured in `hermes-agent/config.yaml` to point at the local Evennia socket.
|
||||
This does not require the full Evennia network stack. It exercises the root `game.py` engine directly.
|
||||
|
||||
## Configuration Files
|
||||
### Verification commands run for this genome refresh
|
||||
```bash
|
||||
python3 ~/.hermes/pipelines/codebase-genome.py --path /tmp/BURN-7-7/evennia/timmy_world --output /tmp/evennia-local-world-GENOME-base.md
|
||||
python3 -m pytest tests/test_genome_generated.py -k 'EvenniaTimmyWorld' -q -rs
|
||||
python3 -m pytest tests/test_evennia_local_world_genome.py tests/test_evennia_local_world_game.py -q
|
||||
python3 -m py_compile evennia/timmy_world/game.py evennia/timmy_world/world/game.py evennia/timmy_world/play_200.py evennia/timmy_world/server/conf/settings.py evennia/timmy_world/web/urls.py
|
||||
```
|
||||
|
||||
- `server/conf/settings.py` — Main Evennia settings (server name, ports, typeclass paths)
|
||||
- `hermes-agent/config.yaml` — Hermes agent bridge configuration
|
||||
- `world/build_academy.ev` — Evennia batch build script
|
||||
- `world/batch_cmds.ev` — Batch command definitions
|
||||
## Key Findings
|
||||
|
||||
## What's Missing
|
||||
|
||||
1. **Tests** — 0% coverage is a critical gap. Priority: AuditedCharacter hooks, command func() methods, world rebuild integrity.
|
||||
2. **CI/CD** — No automated testing pipeline. No GitHub Actions or Gitea workflows.
|
||||
3. **Documentation** — `world/BUILDER_GUIDE.md` exists but no developer onboarding docs.
|
||||
4. **Monitoring** — No health checks, no metrics export, no alerting on server crashes.
|
||||
5. **Backup** — No automated database backup for the Evennia SQLite/PostgreSQL database.
|
||||
1. The current custom product logic is the standalone Tower simulator, not the Evennia typeclass layer.
|
||||
2. The repo contains two parallel simulation engines: `evennia/timmy_world/game.py` and `evennia/timmy_world/world/game.py`.
|
||||
3. The stock Evennia scaffolding is still mostly template code (`README.md`, `at_initial_setup.py`, `world/batch_cmds.ev`, pass-through cmdsets/web hooks).
|
||||
4. The codebase-genome pipeline undercounted test reality because subproject-local tests are absent while host-repo tests exist one level up.
|
||||
5. The existing generated tests were present but functionally inert: `19 skipped` because their import path does not match the current host-repo layout.
|
||||
6. The most concrete portability hazard is the hardcoded `/Users/apayne/.timmy/evennia/timmy_world` state path in both simulation engines.
|
||||
|
||||
---
|
||||
|
||||
*Generated by Codebase Genome Pipeline. Review and update manually.*
|
||||
This refreshed genome supersedes the earlier auto-generated `evennia/timmy_world/GENOME.md` summary by grounding the analysis in live source inspection, live import of `evennia/timmy_world/game.py`, current file metrics, and executable host-repo verification.
|
||||
984
genomes/hermes-agent-GENOME.md
Normal file
984
genomes/hermes-agent-GENOME.md
Normal file
@@ -0,0 +1,984 @@
|
||||
# GENOME.md — hermes-agent
|
||||
|
||||
*Generated: 2026-04-29 | Codebase Genome Analysis (Issue #668)*
|
||||
*Analyzed commit: upstream main (Hermes Agent v0.7.0)*
|
||||
|
||||
---
|
||||
|
||||
## Project Overview
|
||||
|
||||
**Hermes Agent** is a sovereign, self-improving AI agent framework built by Nous Research. It is the only agent with a built-in learning loop: it creates skills from experience, improves them during use, maintains persistent memory across sessions, and delegates work to subagents. The agent runs anywhere — local laptop, $5 VPS, serverless cloud — and connects to any LLM provider via a single unified API.
|
||||
|
||||
### Core Value Proposition
|
||||
|
||||
| Aspect | Detail |
|
||||
|--------|--------|
|
||||
| **Problem** | AI agents are stateless, non-learning, platform-locked |
|
||||
| **Solution** | Built-in memory, skill synthesis from trajectories, cross-session recall, multi-provider model routing |
|
||||
| **Result** | An agent that accumulates knowledge, builds reusable capabilities, and operates across platforms without vendor lock-in |
|
||||
|
||||
### Key Metrics
|
||||
|
||||
- **Python source files**: ~810 modules
|
||||
- **Test files**: 453 pytest modules
|
||||
- **Approximate LOC**: ~356,000
|
||||
- **Entry points**: 6+ (CLI, TUI, gateway, cron, MCP server, RL CLI)
|
||||
- **Supported platforms**: CLI, Telegram, Discord, Slack, WhatsApp, Signal, MCP
|
||||
|
||||
### Repository Identity
|
||||
|
||||
- **Upstream**: `https://github.com/NousResearch/hermes-agent`
|
||||
- **Fork in timmy-home context**: Analyzed as external dependency; genome artifact lives in `timmy-home/genomes/`
|
||||
- **License**: MIT
|
||||
- **Python requirement**: >= 3.11
|
||||
- **Version**: 0.7.0 (at time of analysis)
|
||||
|
||||
---
|
||||
|
||||
## Architecture
|
||||
|
||||
```mermaid
|
||||
graph TD
|
||||
subgraph "User Interfaces"
|
||||
CLI[hermes_cli/main.py<br/>TUI (prompt_toolkit)]
|
||||
CORE[run_agent.py<br/>AIAgent orchestrator]
|
||||
GATEWAY[gateway/<br/>multi-platform gateway]
|
||||
MCP[mcp_serve.py<br/>MCP server]
|
||||
RL[rl_cli.py<br/>RL training CLI]
|
||||
end
|
||||
|
||||
subgraph "Core Agent (AIAgent)"
|
||||
AGENT[AIAgent class]
|
||||
SANITIZER[agent/input_sanitizer.py<br/>jailbreak + risk scoring]
|
||||
MEMORY[agent/memory_manager.py<br/>MemoryProvider orchestration]
|
||||
PROMPT[agent/prompt_builder.py<br/>system prompt assembly]
|
||||
METADATA[agent/model_metadata.py<br/>model + token estimation]
|
||||
COMPRESS[agent/context_compressor.py<br/>window management]
|
||||
DISPLAY[agent/display.py<br/>TUI spinners + formatting]
|
||||
TRAJECTORY[agent/trajectory.py<br/>compression + think blocks]
|
||||
INSIGHTS[agent/insights.py<br/>session analytics]
|
||||
USAGE[agent/usage_pricing.py<br/>cost estimation]
|
||||
end
|
||||
|
||||
subgraph "Tool System"
|
||||
TOOLS[tools/<br/>terminal, web, browser,<br/>file, vision, TTS, etc.]
|
||||
TOOLSETS[toolsets.py<br/>tool grouping + aliases]
|
||||
HANDLE[model_tools.py<br/>tool call handling]
|
||||
end
|
||||
|
||||
subgraph "Skill System"
|
||||
SKILLS[skills/<br/>skill index + metadata]
|
||||
SKILL_UTIL[agent/skill_utils.py<br/>discovery + matching]
|
||||
SKILL_CMD[agent/skill_commands.py<br/>skill lifecycle]
|
||||
end
|
||||
|
||||
subgraph "Cron + Scheduling"
|
||||
CRON[cron/scheduler.py<br/>tick-based executor]
|
||||
CRON_JOBS[cron/jobs.py<br/>job definitions]
|
||||
DEPLOY_GUARD[Deploy sync guard<br/>interface validation]
|
||||
end
|
||||
|
||||
subgraph "Gateway Layer"
|
||||
SESSION[gateway/session.py<br/>SessionStore + reset policy]
|
||||
DELIVERY[gateway/delivery.py<br/>routing + truncation]
|
||||
GATEWAY_CFG[gateway/config.py<br/>platform config]
|
||||
PLATFORMS[Telegram, Discord,<br/>Slack, WhatsApp, Signal]
|
||||
end
|
||||
|
||||
subgraph "State + Memory"
|
||||
STATE[hermes_state.py<br/>SQLite + FTS5]
|
||||
BUILTIN_MEM[agent/builtin_memory_provider.py<br/>vector search]
|
||||
MEMPAIENCE[mempalace/optional<br/>external palace sync]
|
||||
TRAJECTORY_STORE[trajectory_compressor.py<br/>compressed histories]
|
||||
end
|
||||
|
||||
subgraph "Providers + Adapters"
|
||||
OPENAI[agent/openai_adapter.py]
|
||||
ANTHROPIC[agent/anthropic_adapter.py]
|
||||
GEMINI[agent/gemini_adapter.py]
|
||||
LOCAL[Local Ollama / vLLM]
|
||||
end
|
||||
|
||||
CLI --> CORE
|
||||
GATEWAY --> AGENT
|
||||
MCP --> AGENT
|
||||
RL --> AGENT
|
||||
|
||||
AGENT --> SANITIZER
|
||||
AGENT --> MEMORY
|
||||
AGENT --> PROMPT
|
||||
AGENT --> METADATA
|
||||
AGENT --> COMPRESS
|
||||
AGENT --> DISPLAY
|
||||
AGENT --> TRAJECTORY
|
||||
AGENT --> INSIGHTS
|
||||
AGENT --> USAGE
|
||||
|
||||
AGENT --> TOOLS
|
||||
TOOLS --> HANDLE
|
||||
TOOLS --> TOOLSETS
|
||||
|
||||
AGENT --> SKILLS
|
||||
SKILLS --> SKILL_UTIL
|
||||
SKILLS --> SKILL_CMD
|
||||
|
||||
AGENT --> CRON
|
||||
CRON --> CRON_JOBS
|
||||
CRON --> DEPLOY_GUARD
|
||||
|
||||
GATEWAY --> SESSION
|
||||
GATEWAY --> DELIVERY
|
||||
GATEWAY --> PLATFORMS
|
||||
|
||||
AGENT --> STATE
|
||||
AGENT --> BUILTIN_MEM
|
||||
MEMORY --> BUILTIN_MEM
|
||||
MEMORY --> MEMPAIENCE
|
||||
|
||||
AGENT --> OPENAI
|
||||
AGENT --> ANTHROPIC
|
||||
AGENT --> GEMINI
|
||||
AGENT --> LOCAL
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Entry Points
|
||||
|
||||
### Primary: AIAgent Orhchestrator
|
||||
|
||||
**File**: `run_agent.py`
|
||||
|
||||
The `AIAgent` class is the central conversation loop. Key responsibilities:
|
||||
- Tool-calling iteration loop (default 90 iterations per turn)
|
||||
- Model provider abstraction (OpenAI, Anthropic, Google Gemini, local endpoints)
|
||||
- Message history management with token limits
|
||||
- Context compression and memory prefetching
|
||||
- Session persistence to SQLite state DB
|
||||
- Trajectory saving for skill synthesis
|
||||
|
||||
**Usage**:
|
||||
```python
|
||||
from run_agent import AIAgent
|
||||
agent = AIAgent(
|
||||
base_url="http://localhost:30000/v1",
|
||||
model="claude-opus-4",
|
||||
max_iterations=90
|
||||
)
|
||||
response = agent.run_conversation("What's the weather in Tokyo?")
|
||||
```
|
||||
|
||||
### CLI Entry: hermes
|
||||
|
||||
**File**: `cli.py`
|
||||
|
||||
Minimal entry point that delegates to `hermes_cli.main:main()`. Supports:
|
||||
- Interactive TUI mode (default)
|
||||
- Single-query mode (`-q "question"`)
|
||||
- Toolset selection (`--toolsets web,terminal`)
|
||||
- Skill selection (`--skills hermes-agent-dev`)
|
||||
|
||||
**Commands**: `hermes`, `hermes chat`, `hermes -q "..."`, `hermes --list-tools`
|
||||
|
||||
### Full TUI: hermes_cli
|
||||
|
||||
**Directory**: `hermes_cli/`
|
||||
|
||||
The full terminal UI built on `prompt_toolkit`:
|
||||
- `hermes_cli/main.py` — top-level application, command routing
|
||||
- `hermes_cli/curses_ui.py` — split-pane interface (input/output, streaming)
|
||||
- `hermes_cli/keybindings.py` — slash commands, multi-line editing
|
||||
- `hermes_cli/banner.py` — ASCII branding + context length display
|
||||
- `hermes_cli/providers.py` — model switching UI
|
||||
- `hermes_cli/cron.py` — cron job management UI
|
||||
- `hermes_cli/gateway.py` — gateway control UI
|
||||
- `hermes_cli/skills_hub.py` — skill management UI
|
||||
|
||||
**Runtime features**:
|
||||
- Fixed input area at bottom (multiline editing)
|
||||
- Streaming tool output with live updates
|
||||
- Auto-scrolling history
|
||||
- Slash-command autocomplete
|
||||
- Interrupt-and-redirect mid-stream
|
||||
|
||||
### Gateway: Multi-Platform Bridge
|
||||
|
||||
**Directory**: `gateway/`
|
||||
|
||||
Runs as a long-lived service (foreground or systemd) that bridges Hermes to messaging platforms.
|
||||
|
||||
**Entry**:
|
||||
- `gateway/main.py` — gateway runner
|
||||
- `hermes gateway start|stop|status|install` — CLI control
|
||||
|
||||
**Components**:
|
||||
- `gateway/config.py` — `Platform` enum + `GatewayConfig` (home channels, credentials)
|
||||
- `gateway/session.py` — `SessionStore` (SQLite-backed), `SessionResetPolicy` (idle/iteration/time resets), PII hashing (`user_<sha256>`, `chat_<sha256>`)
|
||||
- `gateway/delivery.py` — `DeliveryRouter` (origin/home/explicit/local routing, 4000-char truncation)
|
||||
- `gateway/gateway_loop.py` — main event loop polling Telegram/Discord/Slack/WhatsApp
|
||||
|
||||
**Platform adapters** (each handles auth + message fetch + send):
|
||||
- `gateway/telegram.py` — python-telegram-bot (webhook + polling)
|
||||
- `gateway/discord.py` — discord.py (gateway + voice support)
|
||||
- `gateway/slack.py` — slack-bolt (events API)
|
||||
- `gateway/whatsapp.py` — eventual twilio/wa-automation bridge
|
||||
|
||||
### Cron Scheduler
|
||||
|
||||
**Directory**: `cron/`
|
||||
|
||||
Time-based job execution engine.
|
||||
|
||||
**Entry**: `cron/scheduler.py`
|
||||
|
||||
`Scheduler.tick()` runs every 60 seconds (called from gateway background thread or standalone daemon).
|
||||
|
||||
**Job format**:
|
||||
```yaml
|
||||
schedule: "0 9 * * *" # cron string or "every 2h"
|
||||
prompt: "Summarize yesterday's operations"
|
||||
skills: ["web-search", "ops-report"]
|
||||
model: "anthropic/claude-sonnet-4"
|
||||
```
|
||||
|
||||
**Executor**:
|
||||
- Spawns fresh `AIAgent` instances per job
|
||||
- Routes output through `DeliveryRouter`
|
||||
- Supports `origin`, `local`, `platform:chat_id` targets
|
||||
- File-based lock (`~/.hermes/cron/.tick.lock`) prevents concurrent ticks
|
||||
|
||||
**Deploy Sync Guard**: Validates `AIAgent.__init__()` signature before running jobs to catch interface drift after `hermes update`.
|
||||
|
||||
### MCP Server
|
||||
|
||||
**File**: `mcp_serve.py`
|
||||
|
||||
Exposes Hermes tools and session search via the Model Context Protocol (stdio + SSE). Allows Cursor/Windsurf/Claude Desktop to call Hermes as an MCP server.
|
||||
|
||||
---
|
||||
|
||||
## Data Flow
|
||||
|
||||
### 1. Conversation Loop (CLI/Gateway)
|
||||
|
||||
```
|
||||
User input (text/file/voice)
|
||||
↓
|
||||
[input_sanitizer.py] — jailbreak detection, PII scoring, risk block
|
||||
↓
|
||||
[memory_manager.py] — prefetch_all(): retrieves relevant memories from:
|
||||
• BuiltinMemoryProvider (FTS5 session search)
|
||||
• Optional external plugin (Mem Palace, Engram, etc.)
|
||||
↓
|
||||
[prompt_builder.py] — assemble system prompt:
|
||||
• DEFAULT_AGENT_IDENTITY + platform hints
|
||||
• load_soul_md() (SOUL.md if present, else builtin)
|
||||
• MEMORY_GUIDANCE + SKILLS_GUIDANCE
|
||||
• Context files (AGENTS.md, .cursorrules, project docs)
|
||||
• Skill index (all SKILL.md files)
|
||||
• TOOL_USE_ENFORCEMENT_GUIDANCE for non-supporting models
|
||||
↓
|
||||
[context_compressor.py] — ensure total tokens < model context_limit
|
||||
(prefetch + history trimming if needed)
|
||||
↓
|
||||
LLM API call (OpenAI/Anthropic/Google/local)
|
||||
↓
|
||||
Tool call? → YES → [model_tools.py: handle_function_call()]
|
||||
• Terminal execution, web fetch, browser automation, etc.
|
||||
• Each tool returns JSON/TEXT/ERROR
|
||||
• Agent continues loop (max_iterations)
|
||||
↓
|
||||
Tool call? → NO → Final response
|
||||
↓
|
||||
[memory_manager.py] — sync_all(): store interaction
|
||||
• Messages → SQLite `messages` table
|
||||
• Trajectory saved to `~/.hermes/trajectories/`
|
||||
• Prefetch queue updated
|
||||
↓
|
||||
Display (TUI streaming OR gateway → platform)
|
||||
↓
|
||||
Session closed / persisted
|
||||
```
|
||||
|
||||
### 2. Tool Execution
|
||||
|
||||
```
|
||||
Tool request (from LLM)
|
||||
↓
|
||||
[tools/terminal_tool.py] or [tools/web_tools.py] or [tools/browser_tool.py] ...
|
||||
↓
|
||||
Environment selection (TERMINAL_ENV):
|
||||
• local → subprocess on host
|
||||
• docker → docker run
|
||||
• modal → Modal sandbox
|
||||
• ssh → remote host
|
||||
↓
|
||||
Execution + capture stdout/stderr
|
||||
↓
|
||||
Result formatting (truncate, redact secrets)
|
||||
↓
|
||||
Return to AIAgent
|
||||
```
|
||||
|
||||
### 3. Cron Job Execution
|
||||
|
||||
```
|
||||
Scheduler.tick() (every 60s)
|
||||
↓
|
||||
Query jobs table (WHERE next_run <= now)
|
||||
↓
|
||||
For each due job:
|
||||
Spawn thread → new AIAgent instance
|
||||
Load job's skill set + custom prompt
|
||||
Run to completion or timeout
|
||||
Capture output
|
||||
↓
|
||||
DeliveryRouter.deliver(output, target=job.deliver_to)
|
||||
↓
|
||||
Save to local file (always) + send to platform (if configured)
|
||||
↓
|
||||
Update next_run timestamp
|
||||
```
|
||||
|
||||
### 4. Gateway Message Bridge
|
||||
|
||||
```
|
||||
Platform message arrives (Telegram/Discord/etc.)
|
||||
↓
|
||||
[session.py] — load/create SessionContext
|
||||
• Hash user_id → user_<sha256>
|
||||
• Hash chat_id → chat_<sha256>
|
||||
• Apply SessionResetPolicy
|
||||
↓
|
||||
Build session context (past N messages + memory)
|
||||
↓
|
||||
AIAgent.run_conversation(message)
|
||||
↓
|
||||
DeliveryRouter.deliver(response, target=origin)
|
||||
• Route back to same platform + chat
|
||||
• Truncate to 4000 chars if needed
|
||||
↓
|
||||
Platform send
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Key Abstractions
|
||||
|
||||
### 1. AIAgent (run_agent.py)
|
||||
|
||||
The orchestrator class. Stateful per-session. Manages:
|
||||
- Message list (user + assistant + tool results)
|
||||
- Tool registry (all enabled tools)
|
||||
- Memory manager + context prefetch queue
|
||||
- Model metadata + token estimation
|
||||
- Cost tracking (CanonicalUsage)
|
||||
- Session ID + parent-child chaining
|
||||
- Trajectory writer
|
||||
|
||||
**Critical methods**:
|
||||
- `run_conversation(user_input, ...)` — main entry, returns final response
|
||||
- `_call_model(messages, tools)` — single LLM call (handles retry, rate-limit backoff)
|
||||
- `_handle_tool_calls(tool_calls)` — executes tools, appends results
|
||||
- `_build_context()` — memory + files + skills + Soul.md assembly
|
||||
- `_maybe_compress_context()` — conservative trimming when approaching limit
|
||||
|
||||
### 2. MemoryProvider (agent/memory_provider.py)
|
||||
|
||||
Abstract base class. Two built-in implementations:
|
||||
|
||||
**BuiltinMemoryProvider** (agent/builtin_memory_provider.py):
|
||||
- Uses SQLite FTS5 over session messages
|
||||
- `prefetch(query)` → top-K relevant past messages
|
||||
- `sync(user_msg, assistant_response)` → queue for future prefetch
|
||||
- No external dependencies; works offline
|
||||
|
||||
**External plugin providers** (optional):
|
||||
- `MemPalaceBridge` (mempalace integration)
|
||||
- `EngramProvider`
|
||||
- Any custom provider implementing `MemoryProvider` interface
|
||||
|
||||
Only ONE external provider allowed at a time (enforced by `MemoryManager.add_provider`).
|
||||
|
||||
### 3. Tool Registry (model_tools.py, toolsets.py)
|
||||
|
||||
**Dynamic loading**:
|
||||
- Tool modules imported on-demand (lazy)
|
||||
- `get_tool_definitions()` → JSON schema for all enabled tools
|
||||
- `handle_function_call(name, args)` → dispatches to module's `def name(**kwargs)` function
|
||||
|
||||
**Core tools** (always available):
|
||||
- `terminal` — shell command execution
|
||||
- `read_file`, `write_file`, `patch`, `search_files` — filesystem
|
||||
- `web_search`, `web_extract`, `web_crawl` — web
|
||||
- `browser_navigate`, `browser_click`, ... — Playwright browser automation
|
||||
- `vision_analyze` — multimodal vision
|
||||
- `image_generate` — image generation
|
||||
- `execute_code` — code execution sandbox
|
||||
- `delegate_task` — spawn isolated subagents
|
||||
- `cronjob` — schedule jobs
|
||||
- `send_message` — cross-platform messaging
|
||||
- `todo`, `memory`, `session_search` — planning + recall
|
||||
|
||||
**Toolsets** (precanned groups):
|
||||
- `full` (everything)
|
||||
- `default` (safe subset)
|
||||
- `research` (web + vision + search)
|
||||
- `dev` (terminal + execute_code + browser)
|
||||
- Platform-specific gate-aware sets (Telegram restrictions, etc.)
|
||||
|
||||
### 4. Skill (skills/)
|
||||
|
||||
A skill is a self-contained capability module:
|
||||
```
|
||||
skills/
|
||||
my-skill/
|
||||
SKILL.md ← YAML frontmatter + usage docs
|
||||
__init__.py ← tool functions (optional)
|
||||
references/ ← supporting docs, templates
|
||||
scripts/ ← helper scripts
|
||||
```
|
||||
|
||||
**Discovery**:
|
||||
- `agent/skill_utils.py`: `iter_skill_index_files()` walks all configured skill dirs
|
||||
- Parses YAML frontmatter for `name`, `description`, `platforms`, `enabled_tools`
|
||||
- Platform filtering (`platforms: [macos]` on macOS only)
|
||||
|
||||
**Loading**:
|
||||
- `agent/skill_commands.py`: `load_skill()`, `unload_skill()`, `reload_skill()`
|
||||
- Optional import of `__init__.py` for tool registration
|
||||
- Skill manifest cached in `~/.hermes/skills/.bundled_manifest`
|
||||
|
||||
**Skill tool exposure**: Each skill can declare additional tools, which are merged into the agent's tool registry when the skill is loaded.
|
||||
|
||||
### 5. Session (State Management)
|
||||
|
||||
**Database**: `~/.hermes/state.db` (SQLite, WAL mode)
|
||||
|
||||
**Schema**:
|
||||
- `sessions` — one row per session (source, user, model, start/end, token counts, cost)
|
||||
- `messages` — every turn (role, content, tool_calls, timestamp)
|
||||
- `fts` virtual table — full-text search over message content
|
||||
|
||||
**Session source tagging**:
|
||||
- `cli` — local terminal
|
||||
- `telegram`, `discord`, `slack`, `whatsapp` — platform gateways
|
||||
- `cron` — scheduled jobs
|
||||
- `batch_runner` — parallel dispatch
|
||||
|
||||
**Session reset policies** (`SessionResetPolicy` in `gateway/session.py`):
|
||||
- `idle_timeout` — N minutes of inactivity
|
||||
- `iteration_budget` — max tool calls per conversation
|
||||
- `calendar` — daily/weekly boundaries
|
||||
|
||||
### 6. DeliveryRouter (gateway/delivery.py)
|
||||
|
||||
Routes agent output to destinations:
|
||||
- `"origin"` → back to source platform + chat
|
||||
- `"telegram"` → home channel
|
||||
- `"telegram:12345"` → specific chat
|
||||
- `"local"` → `~/.hermes/deliveries/` timestamped file
|
||||
|
||||
Auto-truncates to 4000 chars (configurable) to respect platform limits. Split-message logic not yet implemented.
|
||||
|
||||
### 7. Cron Scheduler (cron/scheduler.py)
|
||||
|
||||
File-based job queue stored in SQLite (`cron_jobs` table). Tick loop:
|
||||
1. `SELECT * FROM cron_jobs WHERE next_run <= now()`
|
||||
2. For each job: spawn thread → fresh `AIAgent` → run prompt
|
||||
3. Deliver output, update `last_run`, compute `next_run`
|
||||
4. Log to `~/.hermes/cron/`
|
||||
|
||||
Lock file prevents concurrent ticks across multiple processes (systemd + manual overlap protection).
|
||||
|
||||
---
|
||||
|
||||
## API Surface
|
||||
|
||||
### Public Python API
|
||||
|
||||
#### AIAgent (run_agent.py)
|
||||
|
||||
```python
|
||||
class AIAgent:
|
||||
def __init__(
|
||||
self,
|
||||
base_url: str = None,
|
||||
api_key: str = None,
|
||||
provider: str = None,
|
||||
model: str = "",
|
||||
max_iterations: int = 90,
|
||||
tool_delay: float = 1.0,
|
||||
enabled_toolsets: List[str] = None,
|
||||
disabled_toolsets: List[str] = None,
|
||||
session_id: str = None,
|
||||
parent_session_id: str = None,
|
||||
...
|
||||
) -> None: ...
|
||||
|
||||
def run_conversation(self, user_input: str, ...) -> str: ...
|
||||
def stream_conversation(self, user_input: str, ...) -> Iterator[str]: ...
|
||||
|
||||
# Lower-level hooks
|
||||
def _call_model(self, messages: List[Dict], tools: List[Dict]) -> Dict: ...
|
||||
def _handle_tool_calls(self, tool_calls: List[Dict]) -> List[Dict]: ...
|
||||
def _build_context(self) -> str: ...
|
||||
```
|
||||
|
||||
#### MemoryProvider (agent/memory_provider.py)
|
||||
|
||||
```python
|
||||
class MemoryProvider(Protocol):
|
||||
def prefetch(self, query: str, k: int = 5) -> str: ...
|
||||
def sync(self, user_msg: str, assistant_response: str) -> None: ...
|
||||
```
|
||||
|
||||
**Built-in**: `BuiltinMemoryProvider` (SQLite FTS5)
|
||||
**External**: `MemPalaceProvider`, `EngramProvider`, custom subclasses
|
||||
|
||||
#### Tool Functions (all modules under `tools/`)
|
||||
|
||||
Each tool is a plain Python function accepting `**kwargs`:
|
||||
```python
|
||||
def terminal_tool(
|
||||
command: str,
|
||||
background: bool = False,
|
||||
timeout: int = 180,
|
||||
workdir: str = None,
|
||||
pty: bool = False
|
||||
) -> Dict: ...
|
||||
|
||||
def web_search_tool(
|
||||
query: str,
|
||||
backend: str = "openrouter"
|
||||
) -> Dict: ...
|
||||
|
||||
def browser_navigate(url: str) -> Dict: ...
|
||||
```
|
||||
|
||||
Tool definitions auto-generated via `@tool` decorator from `model_tools.py`.
|
||||
|
||||
### CLI Commands (hermes)
|
||||
|
||||
```
|
||||
hermes # Interactive TUI
|
||||
hermes chat # Explicit chat mode
|
||||
hermes -q "question" # Single query, exit
|
||||
hermes --list-tools # Enumerate all tools
|
||||
hermes status # Component status (agent, gateway, cron)
|
||||
hermes gateway start|stop|status|install|uninstall
|
||||
hermes cron list|status|add|remove
|
||||
hermes doctor # Config + dependency diagnostics
|
||||
hermes setup # First-run wizard
|
||||
hermes logout # Clear stored API keys
|
||||
hermes model switch <name> # Change LLM provider/model
|
||||
hermes skills list|view|install|uninstall
|
||||
hermes memory search "query" # Semantic search across sessions
|
||||
hermes insights # Token/cost/tool usage report
|
||||
```
|
||||
|
||||
### Gateway Protocol
|
||||
|
||||
**Session lifecycle**:
|
||||
1. Message received from platform → `SessionStore.get_or_create(user_id, chat_id)`
|
||||
2. Messages appended to `messages` table with `session_id`
|
||||
3. `SessionResetPolicy.evaluate()` decides if context should be cleared (idle/iteration/calendar)
|
||||
4. `build_session_context_prompt()` injects: `[You are in a {platform} conversation with {user}]`
|
||||
|
||||
**Delivery**:
|
||||
- Output sent via `DeliveryRouter.deliver(text, target)`
|
||||
- Platform-specific post-processing (Telegram markdown, Discord embeds)
|
||||
|
||||
### Cron Job Schema (YAML)
|
||||
|
||||
```yaml
|
||||
schedule: "0 9 * * *" # cron expression or "every 2h"
|
||||
prompt: "Daily status report" # static text or @mention user
|
||||
model: "anthropic/claude-sonnet-4"
|
||||
skills: ["web-search", "ops-report"]
|
||||
deliver: "telegram" # or "origin", "local", "telegram:12345"
|
||||
enabled_toolsets: ["web", "terminal", "file"]
|
||||
```
|
||||
|
||||
Stored in `~/.hermes/cron/jobs/` as individual YAML files. Enabled via `hermes cron add` or manual edit.
|
||||
|
||||
### MCP Server (mcp_serve.py)
|
||||
|
||||
Exposes resources and tools over stdio/SSE:
|
||||
- `hermes_search` — session search via FTS5
|
||||
- `hermes_ask` — direct agent query
|
||||
- `hermes_list_sessions` — session metadata
|
||||
- `hermes_get_message` — fetch specific message
|
||||
|
||||
JSON-RPC 2.0 compliant.
|
||||
|
||||
---
|
||||
|
||||
## Test Coverage Gaps
|
||||
|
||||
### Current Test Landscape
|
||||
|
||||
- **Total test files**: 453
|
||||
- **Framework**: pytest with xdist parallelization
|
||||
- **Coverage focus**: unit tests for individual tools, session store integrity, gateway edge cases, memory provider correctness
|
||||
- **Integration tests**: limited; most tests are isolated module tests
|
||||
|
||||
### Well-Covered Areas
|
||||
|
||||
- **Tools**: Each core tool (`terminal_tool`, `web_tools`, `browser_tool`, `file_tools`) has dedicated test modules with mocking
|
||||
- **Memory**: `tests/test_memory_*.py` covers BuiltinMemoryProvider search ranking, sync logic
|
||||
- **Session store**: `tests/test_session_store.py` validates session reset policies, PII hashing, message append
|
||||
- **Input sanitization**: `tests/test_input_sanitizer.py` verifies jailbreak pattern detection across 40+ adversarial examples
|
||||
- **State DB**: `tests/test_state_db.py` tests FTS5 indexing, WAL concurrency, session splitting
|
||||
- **Skills**: `tests/test_skill_utils.py` covers YAML frontmatter parsing, platform matching
|
||||
|
||||
### Notable Gaps
|
||||
|
||||
1. **AIAgent orchestration loop** (run_agent.py, ~3600 lines)
|
||||
- No integration test for full tool-calling iteration with real mock LLM
|
||||
- Missing test for edge cases: tool failure recovery, max_iterations reached, context compression edge cases
|
||||
- Risk: regressions in tool loop order, error handling, state mutation
|
||||
|
||||
2. **Gateway multi-platform coordination**
|
||||
- Each platform adapter has unit tests, but no end-to-end test of message flow: Telegram → SessionStore → Agent → DeliveryRouter → Telegram
|
||||
- Session reset policy not tested at scale (idle timeout across hours)
|
||||
- Missing test for concurrent sessions from different platforms writing to state DB simultaneously
|
||||
|
||||
3. **Cron scheduler drift and failure modes**
|
||||
- `Scheduler.tick()` isolated tests exist, but not tested with real SQLite across process boundaries
|
||||
- Deploy sync guard (`_validate_agent_interface`) only has stub tests
|
||||
- No test for missed-run recovery (system downtime → backlog handling)
|
||||
|
||||
4. **Trajectory compression and synthesis**
|
||||
- `trajectory.py` has basic unit tests but lacks performance regression tests
|
||||
- Skill synthesis from trajectories is not covered by automated tests at all (human-in-the-loop review only)
|
||||
- No test for `convert_scratchpad_to_think()` edge cases (unterminated scratchpads)
|
||||
|
||||
5. **Context compression edge cases**
|
||||
- `context_compressor.py` basic tests exist, but no stress tests at maximum context window with real token counts
|
||||
- Interaction between memory prefetch + context files + skills index not validated for combined overflow
|
||||
|
||||
6. **MCP server protocol**
|
||||
- mcp_serve.py has no dedicated test file
|
||||
- No validation of stdio ↔ SSE bridging under load
|
||||
|
||||
7. **Observability (insights)**
|
||||
- `insights.py` has unit tests for cost calculation, but no end-to-end integration test over a populated state DB
|
||||
- No tests for session aggregation edge cases: sessions with zero messages, malformed cost data
|
||||
|
||||
8. **Display and TUI**
|
||||
- `agent/display.py` tests limited to spinner frames
|
||||
- TUI layout (curses_ui.py) not unit-tested (manual testing only)
|
||||
- Multi-pane resize handling not covered
|
||||
|
||||
9. **Error recovery and resilience**
|
||||
- `run_agent.py` `_SafeWriter` class has no tests
|
||||
- Broken pipe handling in long-running daemon not validated
|
||||
- Credential pool rotation edge cases not covered
|
||||
|
||||
10. **Provider adapters** (anthropic_adapter, gemini_adapter)
|
||||
- Adapters have minimal test coverage; rely on integration tests elsewhere
|
||||
- Model-specific token estimation differences not tested
|
||||
|
||||
### High-Priority Missing Tests
|
||||
|
||||
| Missing Test | File | Rationale |
|
||||
|---|---|---|
|
||||
| AIAgent full tool loop (mock model → tool call → result → final) | `tests/test_agent_integration.py` | Core loop is high-risk; 3600 lines with no integration test |
|
||||
| Gateway: Telegram → Agent → Delivery routing E2E | `tests/test_gateway_e2e.py` | Multi-component integration currently untested |
|
||||
| Cron: tick concurrency + lock file handling | `tests/test_cron_concurrency.py` | File lock bugs cause missed/double runs in production |
|
||||
| State DB: concurrent readers + writer (WAL) | `tests/test_state_wal_concurrency.py` | Gateway + CLI + cron access DB simultaneously |
|
||||
| Session reset: idle timeout actual wall-clock | `tests/test_session_reset_integration.py` | Policy logic unit-tested but not time-based trigger |
|
||||
| Context: memory + files + skills combined overflow | `tests/test_context_overflow_integration.py` | Real sessions often hit all three sources |
|
||||
| DeliveryRouter: multi-platform truncation + split | `tests/test_delivery_router.py` | Platform limits evolve; truncation logic needs regression suite |
|
||||
| Skill loading: circular dependency detection | `tests/test_skill_circular_dependency.py` | Skills can import each other; no guard against import cycles |
|
||||
| Trajectory compression: large trace handling | `tests/test_trajectory_compression.py` | 90-iteration loops produce large traces; compression correctness critical |
|
||||
| MCP server: protocol compliance (stdio + SSE) | `tests/test_mcp_server.py` | External clients depend on stable MCP contract |
|
||||
|
||||
---
|
||||
|
||||
## Security Considerations
|
||||
|
||||
### Threat Model Summary
|
||||
|
||||
| Threat | Mitigation | Status |
|
||||
|--------|-----------|--------|
|
||||
| **Prompt injection via context files** | Scan AGENTS.md, .cursorrules, SOUL.md in `prompt_builder.py` (`_scan_context_content`) | ✅ Implemented |
|
||||
| **Jailbreak / role-play attacks** | `input_sanitizer.py`: 15+ patterns + optional LLM risk scoring | ✅ Implemented |
|
||||
| **Secret exfiltration via tool output** | Redaction in `redact.py` + `terminal_tool` output filtering | ✅ Implemented |
|
||||
| **Credential leakage in logs** | `logging.Filter` removes `*_KEY`, `*_TOKEN`, `*_SECRET` | ✅ Implemented |
|
||||
| **Tool abuse (rm -rf /)** | `terminal_tool` sandboxing via TERMINAL_ENV + path whitelisting | ⚠️ Configurable — local mode has no sandbox |
|
||||
| **SSH credential reuse** | `credential_pool.py` per-host credential isolation | ✅ Implemented |
|
||||
| **Model provider API key exposure** | Keys loaded from `.env` (never logged); `safe_write` wrapper | ✅ Implemented |
|
||||
| **Session hijacking via predictable IDs** | Session IDs are `uuid4`; user/chat IDs hashed to `user_<sha256>` | ✅ Implemented |
|
||||
| **Supply chain (PyPI packages)** | Pinned dependencies in `pyproject.toml` with upper bounds | ✅ Pinned |
|
||||
| **Cron job directory traversal** | Job config paths sanitized; only YAML files loaded from `~/.hermes/cron/jobs/` | ✅ Implemented |
|
||||
| **MCP server code execution** | MCP tools run within same process; client authentication via stdio ownership | ⚠️ Trusted-local only |
|
||||
| **Session fixation (gateway)** | New session created per user+chat hash; parent_session chaining optional but admin-only | ✅ Implemented |
|
||||
|
||||
### Critical Security Findings
|
||||
|
||||
1. **Network-exposed components**:
|
||||
- `server.py` (WebSocket broadcast hub) binds `HOST="0.0.0.0"` by default — not authenticated. Only suitable for LAN/VPN. **Public exposure requires reverse proxy + auth**.
|
||||
- `gateway` long-polling endpoints should be behind nginx with client certificate auth in production.
|
||||
|
||||
2. **Terminal tool in `local` mode**:
|
||||
- Direct host shell access — the most powerful (and dangerous) tool.
|
||||
- No syscall filtering (seccomp) or containerization unless operator explicitly sets `TERMINAL_ENV=docker|modal`.
|
||||
- **Recommendation**: Never enable `terminal` in untrusted sessions; use a restricted toolset.
|
||||
|
||||
3. **Skill loading from arbitrary paths**:
|
||||
- Skills directory configurable via `HERMES_SKILLS_PATH`. Malicious skill can register arbitrary tools.
|
||||
- Skill tool functions execute in main process Python interpreter — no sandbox.
|
||||
- **Mitigation**: Skill manifest (`SKILL.md`) requires explicit `tools:` declaration; `skill_security.py` validates tool safety before import.
|
||||
|
||||
4. **Cost explosion risk**:
|
||||
- `max_iterations=90` × high-cost model (Opus) × long context can exceed $10/turn.
|
||||
- `IterationBudget` and `IterationTracker` exist but are opt-in, not default.
|
||||
- **Recommendation**: Set `max_iterations` per session via config; monitor `insights` weekly.
|
||||
|
||||
5. **State database size growth**:
|
||||
- SQLite `state.db` unbounded; WAL + FTS indexes grow indefinitely.
|
||||
- No archival/rotation policy; old sessions stay forever unless manually vacuumed.
|
||||
- **Recommendation**: Implement monthly `VACUUM` + session TTL (e.g., 90-day expiry).
|
||||
|
||||
### Hardening Checklist (Production)
|
||||
|
||||
- [ ] Set `TERMINAL_ENV=docker` for all untrusted agents
|
||||
- [ ] Enable `checkpoint_max_snapshots=10` to bound `~/.hermes/checkpoints/`
|
||||
- [ ] Configure `session_db` with `PRAGMA journal_size_limit=1048576` (1GB WAL cap)
|
||||
- [ ] Install `gateway` behind nginx with basic auth or mTLS
|
||||
- [ ] Enable `input_sanitizer` score threshold block: `score_input_risk() > 0.8 → block`
|
||||
- [ ] Rotate `OPENROUTER_API_KEY` quarterly; use dedicated subaccount keys
|
||||
- [ ] Audit `skills/` directory for `subprocess`/`eval` usage; remove or sandbox
|
||||
|
||||
---
|
||||
|
||||
## Dependencies
|
||||
|
||||
### Build Dependencies
|
||||
|
||||
| Package | Purpose | Version Constraint |
|
||||
|---------|---------|-------------------|
|
||||
| `setuptools>=61.0` | Build backend | >=61.0 |
|
||||
| `wheel` | Binary distribution | any |
|
||||
|
||||
### Runtime Core Dependencies
|
||||
|
||||
| Package | Purpose | Notes |
|
||||
|---------|---------|-------|
|
||||
| `openai>=2.21.0,<3` | OpenAI API client | OpenAI + compatible endpoints |
|
||||
| `anthropic>=0.39.0,<1` | Anthropic Claude API | streaming + beta features |
|
||||
| `python-dotenv>=1.2.1,<2` | `.env` loading | Hermes home + project root |
|
||||
| `fire>=0.7.1,<1` | CLI generation | `hermes` command |
|
||||
| `httpx>=0.28.1,<1` | Async HTTP | gateway, provider health checks |
|
||||
| `rich>=14.3.3,<15` | TUI formatting | spinners, tables, syntax |
|
||||
| `tenacity>=9.1.4,<10` | Retry logic | LLM call retries with backoff |
|
||||
| `pyyaml>=6.0.2,<7` | YAML (config, skills) | CSafeLoader preferred |
|
||||
| `requests>=2.33.0,<3` | Sync HTTP (fallback) | CVE-2026-25645 patched |
|
||||
| `jinja2>=3.1.5,<4` | Template rendering | prompt fragments |
|
||||
| `pydantic>=2.12.5,<3` | Config validation | `gateway.config`, `cron.jobs` |
|
||||
| `prompt_toolkit>=3.0.52,<4` | TUI framework | fixed input area, history |
|
||||
| `exa-py>=2.9.0,<3` | Exa search backend | |
|
||||
| `firecrawl-py>=4.16.0,<5` | Firecrawl scraping | |
|
||||
| `parallel-web>=0.4.2,<1` | Parallel.ai backend | Nous subscribers only |
|
||||
| `fal-client>=0.13.1,<1` | FAL image gen | |
|
||||
| `edge-tts>=7.2.7,<8` | Free TTS | Microsoft Edge TTS (no API key) |
|
||||
| `PyJWT[crypto]>=2.12.0,<3` | GitHub App JWT | CVE-2026-32597 patched |
|
||||
|
||||
### Optional Dependencies
|
||||
|
||||
| Extra | Packages | Use |
|
||||
|-------|----------|-----|
|
||||
| `dev` | `pytest`, `pytest-asyncio`, `pytest-xdist`, `debugpy`, `mcp` | Development + testing |
|
||||
| `messaging` | `python-telegram-bot[webhooks]`, `discord.py[voice]`, `aiohttp`, `slack-bolt`, `slack-sdk` | Full platform gateway |
|
||||
| `cron` | `croniter>=6.0.0,<7` | Cron expression parsing |
|
||||
| `modal` | `modal>=1.0.0,<2` | Modal cloud sandboxes |
|
||||
| `daytona` | `daytona>=0.148.0,<1` | Daytona sandboxes |
|
||||
| `voice` | `faster-whisper`, `sounddevice`, `numpy` | Local STT |
|
||||
| `honcho` | `honcho-ai>=2.0.1,<3` | Honcho dialectic memory |
|
||||
| `mcp` | `mcp>=1.2.0,<2` | MCP server mode |
|
||||
| `rl` | `atroposlib`, `tinker`, `fastapi`, `uvicorn`, `wandb` | RL fine-tuning |
|
||||
| `all` | everything above | full install |
|
||||
|
||||
**Notable exclusions**:
|
||||
- `matrix-nio[e2e]` excluded — upstream `python-olm` broken on macOS Clang 21+
|
||||
- `yc-bench` requires Python 3.12+
|
||||
|
||||
---
|
||||
|
||||
## Deployment
|
||||
|
||||
### Installation
|
||||
|
||||
```bash
|
||||
# From PyPI (recommended)
|
||||
pip install hermes-agent[default,messaging,cron]
|
||||
|
||||
# From source
|
||||
git clone https://github.com/NousResearch/hermes-agent.git
|
||||
cd hermes-agent
|
||||
pip install -e ".[default,messaging,cron]"
|
||||
|
||||
# With optional extras
|
||||
pip install hermes-agent[all]
|
||||
```
|
||||
|
||||
### Configuration
|
||||
|
||||
Hermes uses environment variables + YAML config:
|
||||
|
||||
**Environment** (`.env` or shell):
|
||||
- `HERMES_HOME` — state directory (`~/.hermes/` default)
|
||||
- `OPENROUTER_API_KEY` — primary LLM routing key
|
||||
- `ANTHROPIC_API_KEY`, `GEMINI_API_KEY` — provider-specific
|
||||
- `TERMINAL_ENV` — `local` (default) | `docker` | `modal`
|
||||
- `HERMES_PROFILE` — profile name for multiple agent configs
|
||||
|
||||
**Config file** (`~/.hermes/config.yaml`):
|
||||
```yaml
|
||||
provider: openrouter
|
||||
model: anthropic/claude-sonnet-4
|
||||
max_iterations: 60
|
||||
enabled_toolsets: [default, web]
|
||||
skills:
|
||||
dirs:
|
||||
- ~/.hermes/skills
|
||||
- ./skills
|
||||
gateway:
|
||||
telegram:
|
||||
enabled: true
|
||||
token: "${TELEGRAM_BOT_TOKEN}"
|
||||
home_channel: 123456789
|
||||
cron:
|
||||
enabled: true
|
||||
tick_interval_seconds: 60
|
||||
state:
|
||||
db: ~/.hermes/state.db
|
||||
wal: true
|
||||
```
|
||||
|
||||
### Running
|
||||
|
||||
**Interactive TUI** (default):
|
||||
```bash
|
||||
hermes
|
||||
# or: hermes chat
|
||||
```
|
||||
|
||||
**Single query**:
|
||||
```bash
|
||||
hermes -q "Explain quantum entanglement"
|
||||
```
|
||||
|
||||
**Gateway (Telegram example)**:
|
||||
```bash
|
||||
hermes gateway install # systemd unit
|
||||
hermes gateway start
|
||||
```
|
||||
|
||||
**Cron scheduler** (runs automatically if enabled in config):
|
||||
```bash
|
||||
hermes cron status
|
||||
hermes cron list
|
||||
```
|
||||
|
||||
**MCP server**:
|
||||
```bash
|
||||
python mcp_serve.py --transport stdio
|
||||
# or: python mcp_serve.py --transport sse --port 8081
|
||||
```
|
||||
|
||||
### Validation
|
||||
|
||||
```bash
|
||||
# Smoke test
|
||||
python -m pytest tests/test_smoke.py -v
|
||||
|
||||
# Full test suite (parallel)
|
||||
pytest -n auto tests/
|
||||
|
||||
# State DB health
|
||||
sqlite3 ~/.hermes/state.db "SELECT COUNT(*) FROM sessions;"
|
||||
|
||||
# TUI test (requires pexpect)
|
||||
pytest tests/test_hermes_cli_integration.py -v
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Examples
|
||||
|
||||
### Example 1: Simple Research Query
|
||||
|
||||
```
|
||||
> hermes -q "What are the latest developments in KV cache compression?"
|
||||
|
||||
[Tools: web_search → web_extract × 3]
|
||||
└─ Answer: KV cache compression advances... (cost: $0.04)
|
||||
```
|
||||
|
||||
**Token flow**: ~14K input (query + tool results) → ~2K output.
|
||||
|
||||
### Example 2: File System Investigation
|
||||
|
||||
```
|
||||
> /terminal find ~/repos -name "*.py" -exec wc -l {} + | sort -n | tail -10
|
||||
|
||||
[terminal] Executed in 0.8s
|
||||
/path/to/largest.py: 1243 lines
|
||||
...
|
||||
```
|
||||
|
||||
`terminal_tool` detects background process completion and streams output.
|
||||
|
||||
### Example 3: Scheduled Report
|
||||
|
||||
**Cron job** (`~/.hermes/cron/jobs/daily-report.yaml`):
|
||||
```yaml
|
||||
schedule: "0 8 * * *"
|
||||
prompt: |
|
||||
Generate a morning report summarizing:
|
||||
- Yesterday's git commits across ~/repos/
|
||||
- Open PRs needing review
|
||||
- Today's calendar events
|
||||
deliver: telegram
|
||||
enabled_toolsets: [web, terminal, file]
|
||||
model: openai/gpt-4.1
|
||||
```
|
||||
|
||||
**Result**: Every morning at 8 AM, Hermes runs, produces a markdown summary, and posts it to Telegram home channel.
|
||||
|
||||
---
|
||||
|
||||
## Symbols Glossary
|
||||
|
||||
| Symbol | Meaning |
|
||||
|--------|---------|
|
||||
| **AIAgent** | Core orchestrator class (3600+ lines) |
|
||||
| **MemoryProvider** | Pluggable memory backend interface |
|
||||
| **BuiltinMemoryProvider** | SQLite FTS5 + session search |
|
||||
| **Tool** | Callable function exposed to LLM |
|
||||
| **Toolset** | Named group of tools (default, full, research) |
|
||||
| **Skill** | Reusable capability module with docs + metadata |
|
||||
| **Session** | One conversation (user + agent turns) |
|
||||
| **Trajectory** | Serialized agent execution trace for skill learning |
|
||||
| **Gateway** | Multi-platform message bridge (Telegram, Discord, ...) |
|
||||
| **Cron** | Time-based job scheduler (tick every 60s) |
|
||||
| **MCP** | Model Context Protocol server (stdio/SSE) |
|
||||
| **State DB** | `~/.hermes/state.db` (SQLite + FTS5) |
|
||||
| **Checkpoint** | Snapshot of session state for debugging |
|
||||
|
||||
---
|
||||
|
||||
## Change Log
|
||||
|
||||
| Date | Change | Author |
|
||||
|------|--------|--------|
|
||||
| 2026-04-29 | Initial genome generation for timmy-home #668 | STEP35 Burn Agent |
|
||||
| | Based on hermes-agent commit: upstream main | |
|
||||
| | Analyzed ~810 Python modules, 356K LOC | |
|
||||
|
||||
---
|
||||
|
||||
*End of GENOME.md — hermes-agent*
|
||||
@@ -1 +1,12 @@
|
||||
# Timmy core module
|
||||
|
||||
from .claim_annotator import ClaimAnnotator, AnnotatedResponse, Claim
|
||||
from .audit_trail import AuditTrail, AuditEntry
|
||||
|
||||
__all__ = [
|
||||
"ClaimAnnotator",
|
||||
"AnnotatedResponse",
|
||||
"Claim",
|
||||
"AuditTrail",
|
||||
"AuditEntry",
|
||||
]
|
||||
|
||||
156
src/timmy/claim_annotator.py
Normal file
156
src/timmy/claim_annotator.py
Normal file
@@ -0,0 +1,156 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Response Claim Annotator — Source Distinction System
|
||||
SOUL.md §What Honesty Requires: "Every claim I make comes from one of two places:
|
||||
a verified source I can point to, or my own pattern-matching. My user must be
|
||||
able to tell which is which."
|
||||
"""
|
||||
|
||||
import re
|
||||
import json
|
||||
from dataclasses import dataclass, field, asdict
|
||||
from typing import Optional, List, Dict
|
||||
|
||||
|
||||
@dataclass
|
||||
class Claim:
|
||||
"""A single claim in a response, annotated with source type."""
|
||||
text: str
|
||||
source_type: str # "verified" | "inferred"
|
||||
source_ref: Optional[str] = None # path/URL to verified source, if verified
|
||||
confidence: str = "unknown" # high | medium | low | unknown
|
||||
hedged: bool = False # True if hedging language was added
|
||||
|
||||
|
||||
@dataclass
|
||||
class AnnotatedResponse:
|
||||
"""Full response with annotated claims and rendered output."""
|
||||
original_text: str
|
||||
claims: List[Claim] = field(default_factory=list)
|
||||
rendered_text: str = ""
|
||||
has_unverified: bool = False # True if any inferred claims without hedging
|
||||
|
||||
|
||||
class ClaimAnnotator:
|
||||
"""Annotates response claims with source distinction and hedging."""
|
||||
|
||||
# Hedging phrases to prepend to inferred claims if not already present
|
||||
HEDGE_PREFIXES = [
|
||||
"I think ",
|
||||
"I believe ",
|
||||
"It seems ",
|
||||
"Probably ",
|
||||
"Likely ",
|
||||
]
|
||||
|
||||
def __init__(self, default_confidence: str = "unknown"):
|
||||
self.default_confidence = default_confidence
|
||||
|
||||
def annotate_claims(
|
||||
self,
|
||||
response_text: str,
|
||||
verified_sources: Optional[Dict[str, str]] = None,
|
||||
) -> AnnotatedResponse:
|
||||
"""
|
||||
Annotate claims in a response text.
|
||||
|
||||
Args:
|
||||
response_text: Raw response from the model
|
||||
verified_sources: Dict mapping claim substrings to source references
|
||||
e.g. {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"}
|
||||
|
||||
Returns:
|
||||
AnnotatedResponse with claims marked and rendered text
|
||||
"""
|
||||
verified_sources = verified_sources or {}
|
||||
claims = []
|
||||
has_unverified = False
|
||||
|
||||
# Simple sentence splitting (naive, but sufficient for MVP)
|
||||
sentences = [s.strip() for s in re.split(r'[.!?]\s+', response_text) if s.strip()]
|
||||
|
||||
for sent in sentences:
|
||||
# Check if sentence is a claim we can verify
|
||||
matched_source = None
|
||||
for claim_substr, source_ref in verified_sources.items():
|
||||
if claim_substr.lower() in sent.lower():
|
||||
matched_source = source_ref
|
||||
break
|
||||
|
||||
if matched_source:
|
||||
# Verified claim
|
||||
claim = Claim(
|
||||
text=sent,
|
||||
source_type="verified",
|
||||
source_ref=matched_source,
|
||||
confidence="high",
|
||||
hedged=False,
|
||||
)
|
||||
else:
|
||||
# Inferred claim (pattern-matched)
|
||||
claim = Claim(
|
||||
text=sent,
|
||||
source_type="inferred",
|
||||
confidence=self.default_confidence,
|
||||
hedged=self._has_hedge(sent),
|
||||
)
|
||||
if not claim.hedged:
|
||||
has_unverified = True
|
||||
|
||||
claims.append(claim)
|
||||
|
||||
# Render the annotated response
|
||||
rendered = self._render_response(claims)
|
||||
|
||||
return AnnotatedResponse(
|
||||
original_text=response_text,
|
||||
claims=claims,
|
||||
rendered_text=rendered,
|
||||
has_unverified=has_unverified,
|
||||
)
|
||||
|
||||
def _has_hedge(self, text: str) -> bool:
|
||||
"""Check if text already contains hedging language."""
|
||||
text_lower = text.lower()
|
||||
for prefix in self.HEDGE_PREFIXES:
|
||||
if text_lower.startswith(prefix.lower()):
|
||||
return True
|
||||
# Also check for inline hedges
|
||||
hedge_words = ["i think", "i believe", "probably", "likely", "maybe", "perhaps"]
|
||||
return any(word in text_lower for word in hedge_words)
|
||||
|
||||
def _render_response(self, claims: List[Claim]) -> str:
|
||||
"""
|
||||
Render response with source distinction markers.
|
||||
|
||||
Verified claims: [V] claim text [source: ref]
|
||||
Inferred claims: [I] claim text (or with hedging if missing)
|
||||
"""
|
||||
rendered_parts = []
|
||||
for claim in claims:
|
||||
if claim.source_type == "verified":
|
||||
part = f"[V] {claim.text}"
|
||||
if claim.source_ref:
|
||||
part += f" [source: {claim.source_ref}]"
|
||||
else: # inferred
|
||||
if not claim.hedged:
|
||||
# Add hedging if missing
|
||||
hedged_text = f"I think {claim.text[0].lower()}{claim.text[1:]}" if claim.text else claim.text
|
||||
part = f"[I] {hedged_text}"
|
||||
else:
|
||||
part = f"[I] {claim.text}"
|
||||
rendered_parts.append(part)
|
||||
return " ".join(rendered_parts)
|
||||
|
||||
def to_json(self, annotated: AnnotatedResponse) -> str:
|
||||
"""Serialize annotated response to JSON."""
|
||||
return json.dumps(
|
||||
{
|
||||
"original_text": annotated.original_text,
|
||||
"rendered_text": annotated.rendered_text,
|
||||
"has_unverified": annotated.has_unverified,
|
||||
"claims": [asdict(c) for c in annotated.claims],
|
||||
},
|
||||
indent=2,
|
||||
ensure_ascii=False,
|
||||
)
|
||||
@@ -1,84 +1,123 @@
|
||||
"""
|
||||
Test that the hermes-agent GENOME.md exists and contains required sections.
|
||||
|
||||
Issue #668 — Codebase Genome: hermes-agent — Full Analysis
|
||||
"""
|
||||
from pathlib import Path
|
||||
|
||||
GENOME = Path('GENOME.md')
|
||||
|
||||
|
||||
def read_genome() -> str:
|
||||
assert GENOME.exists(), 'GENOME.md must exist at repo root'
|
||||
return GENOME.read_text(encoding='utf-8')
|
||||
GENOME = Path(__file__).parent.parent / "genomes" / "hermes-agent-GENOME.md"
|
||||
|
||||
|
||||
def test_genome_exists():
|
||||
assert GENOME.exists(), 'GENOME.md must exist at repo root'
|
||||
"""GENOME.md must exist at genomes/hermes-agent-GENOME.md."""
|
||||
assert GENOME.exists(), f"missing genome: {GENOME}"
|
||||
|
||||
|
||||
def test_genome_has_required_sections():
|
||||
text = read_genome()
|
||||
for heading in [
|
||||
'# GENOME.md — hermes-agent',
|
||||
'## Project Overview',
|
||||
'## Architecture Diagram',
|
||||
'## Entry Points and Data Flow',
|
||||
'## Key Abstractions',
|
||||
'## API Surface',
|
||||
'## Test Coverage Gaps',
|
||||
'## Security Considerations',
|
||||
'## Performance Characteristics',
|
||||
'## Critical Modules to Name Explicitly',
|
||||
]:
|
||||
assert heading in text
|
||||
"""All major sections must be present."""
|
||||
text = GENOME.read_text(encoding="utf-8")
|
||||
required = [
|
||||
"# GENOME.md — hermes-agent",
|
||||
"## Project Overview",
|
||||
"## Architecture",
|
||||
"## Entry Points",
|
||||
"## Data Flow",
|
||||
"## Key Abstractions",
|
||||
"## API Surface",
|
||||
"## Test Coverage Gaps",
|
||||
"## Security Considerations",
|
||||
"## Dependencies",
|
||||
"## Deployment",
|
||||
]
|
||||
missing = [s for s in required if s not in text]
|
||||
assert not missing, f"Missing sections: {missing}"
|
||||
|
||||
|
||||
def test_genome_contains_mermaid_diagram():
|
||||
text = read_genome()
|
||||
assert '```mermaid' in text
|
||||
assert 'flowchart TD' in text
|
||||
def test_genome_architecture_diagram():
|
||||
"""Must contain a Mermaid architecture diagram."""
|
||||
text = GENOME.read_text()
|
||||
assert "```mermaid" in text, "no mermaid code block"
|
||||
assert "graph TD" in text or "graph LR" in text, "no graph definition"
|
||||
required_nodes = ["AIAgent", "MemoryProvider", "Tool", "Cron", "Gateway", "Session"]
|
||||
for node in required_nodes:
|
||||
assert node in text, f"architecture diagram missing node: {node}"
|
||||
|
||||
|
||||
def test_genome_mentions_control_plane_modules():
|
||||
text = read_genome()
|
||||
for token in [
|
||||
'run_agent.py',
|
||||
'model_tools.py',
|
||||
'tools/registry.py',
|
||||
'toolsets.py',
|
||||
'cli.py',
|
||||
'hermes_cli/main.py',
|
||||
'hermes_state.py',
|
||||
'gateway/run.py',
|
||||
'acp_adapter/server.py',
|
||||
'cron/scheduler.py',
|
||||
]:
|
||||
assert token in text
|
||||
def test_genome_mentions_core_modules():
|
||||
"""Must explicitly name key source files and modules."""
|
||||
text = GENOME.read_text()
|
||||
required = [
|
||||
"run_agent.py",
|
||||
"agent/input_sanitizer.py",
|
||||
"agent/memory_manager.py",
|
||||
"agent/prompt_builder.py",
|
||||
"agent/trajectory.py",
|
||||
"gateway/session.py",
|
||||
"gateway/delivery.py",
|
||||
"cron/scheduler.py",
|
||||
"tools/terminal_tool.py",
|
||||
"skills/",
|
||||
"hermes_state.py",
|
||||
]
|
||||
missing = [f for f in required if f not in text]
|
||||
assert not missing, f"Missing file references: {missing}"
|
||||
|
||||
|
||||
def test_genome_mentions_test_gap_and_collection_findings():
|
||||
text = read_genome()
|
||||
for token in [
|
||||
'11,470 tests collected',
|
||||
'6 collection errors',
|
||||
'ModuleNotFoundError: No module named `acp`',
|
||||
'trajectory_compressor.py',
|
||||
'batch_runner.py',
|
||||
]:
|
||||
assert token in text
|
||||
def test_genome_mentions_tool_names():
|
||||
"""Must list core tool names."""
|
||||
text = GENOME.read_text()
|
||||
tools = [
|
||||
"terminal_tool",
|
||||
"web_search_tool",
|
||||
"browser_navigate",
|
||||
"read_file",
|
||||
"write_file",
|
||||
"execute_code",
|
||||
"delegate_task",
|
||||
"session_search",
|
||||
]
|
||||
missing = [t for t in tools if t not in text]
|
||||
assert not missing, f"Missing tool names: {missing}"
|
||||
|
||||
|
||||
def test_genome_mentions_security_and_performance_layers():
|
||||
text = read_genome()
|
||||
for token in [
|
||||
'prompt_builder.py',
|
||||
'approval.py',
|
||||
'file_tools.py',
|
||||
'mcp_tool.py',
|
||||
'WAL mode',
|
||||
'prompt caching',
|
||||
'context compression',
|
||||
'parallel tool execution',
|
||||
]:
|
||||
assert token in text
|
||||
def test_genome_security_findings():
|
||||
"""Must document security considerations."""
|
||||
text = GENOME.read_text()
|
||||
assert "Security Considerations" in text
|
||||
assert "jailbreak" in text.lower()
|
||||
assert "PII" in text or "personally identifiable" in text.lower()
|
||||
assert "credential" in text.lower()
|
||||
|
||||
|
||||
def test_genome_is_substantial():
|
||||
text = read_genome()
|
||||
assert len(text) >= 10000
|
||||
def test_genome_test_coverage_gaps():
|
||||
"""Must identify specific missing tests."""
|
||||
text = GENOME.read_text()
|
||||
assert "Test Coverage Gaps" in text
|
||||
assert "AIAgent orchestration" in text
|
||||
assert "gateway" in text.lower()
|
||||
assert "cron" in text.lower()
|
||||
|
||||
|
||||
def test_genome_not_a_stub():
|
||||
"""GENOME.md must be substantial (>10KB)."""
|
||||
size = GENOME.stat().st_size
|
||||
assert size >= 10_000, f"GENOME.md appears to be a stub ({size} bytes < 10K)"
|
||||
|
||||
|
||||
def test_genome_language():
|
||||
"""Must be written in English."""
|
||||
text = GENOME.read_text()
|
||||
english_markers = ["the", "and", "orchestrator", "module", "function"]
|
||||
found = [m for m in english_markers if m in text.lower()]
|
||||
assert len(found) >= 4, "GENOME.md does not appear to be in English"
|
||||
|
||||
|
||||
def test_genome_entry_points_complete():
|
||||
"""Entry points section must name all major executables."""
|
||||
text = GENOME.read_text()
|
||||
assert "run_agent.py" in text
|
||||
assert "cli.py" in text
|
||||
assert "hermes_cli" in text
|
||||
assert "gateway" in text
|
||||
assert "mcp_serve.py" in text
|
||||
assert "cron" in text
|
||||
|
||||
103
tests/timmy/test_claim_annotator.py
Normal file
103
tests/timmy/test_claim_annotator.py
Normal file
@@ -0,0 +1,103 @@
|
||||
#!/usr/bin/env python3
|
||||
"""Tests for claim_annotator.py — verifies source distinction is present."""
|
||||
|
||||
import sys
|
||||
import os
|
||||
import json
|
||||
|
||||
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "src"))
|
||||
|
||||
from timmy.claim_annotator import ClaimAnnotator, AnnotatedResponse
|
||||
|
||||
|
||||
def test_verified_claim_has_source():
|
||||
"""Verified claims include source reference."""
|
||||
annotator = ClaimAnnotator()
|
||||
verified = {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"}
|
||||
response = "Paris is the capital of France. It is a beautiful city."
|
||||
|
||||
result = annotator.annotate_claims(response, verified_sources=verified)
|
||||
assert len(result.claims) > 0
|
||||
verified_claims = [c for c in result.claims if c.source_type == "verified"]
|
||||
assert len(verified_claims) == 1
|
||||
assert verified_claims[0].source_ref == "https://en.wikipedia.org/wiki/Paris"
|
||||
assert "[V]" in result.rendered_text
|
||||
assert "[source:" in result.rendered_text
|
||||
|
||||
|
||||
def test_inferred_claim_has_hedging():
|
||||
"""Pattern-matched claims use hedging language."""
|
||||
annotator = ClaimAnnotator()
|
||||
response = "The weather is nice today. It might rain tomorrow."
|
||||
|
||||
result = annotator.annotate_claims(response)
|
||||
inferred_claims = [c for c in result.claims if c.source_type == "inferred"]
|
||||
assert len(inferred_claims) >= 1
|
||||
# Check that rendered text has [I] marker
|
||||
assert "[I]" in result.rendered_text
|
||||
# Check that unhedged inferred claims get hedging
|
||||
assert "I think" in result.rendered_text or "I believe" in result.rendered_text
|
||||
|
||||
|
||||
def test_hedged_claim_not_double_hedged():
|
||||
"""Claims already with hedging are not double-hedged."""
|
||||
annotator = ClaimAnnotator()
|
||||
response = "I think the sky is blue. It is a nice day."
|
||||
|
||||
result = annotator.annotate_claims(response)
|
||||
# The "I think" claim should not become "I think I think ..."
|
||||
assert "I think I think" not in result.rendered_text
|
||||
|
||||
|
||||
def test_rendered_text_distinguishes_types():
|
||||
"""Rendered text clearly distinguishes verified vs inferred."""
|
||||
annotator = ClaimAnnotator()
|
||||
verified = {"Earth is round": "https://science.org/earth"}
|
||||
response = "Earth is round. Stars are far away."
|
||||
|
||||
result = annotator.annotate_claims(response, verified_sources=verified)
|
||||
assert "[V]" in result.rendered_text # verified marker
|
||||
assert "[I]" in result.rendered_text # inferred marker
|
||||
|
||||
|
||||
def test_to_json_serialization():
|
||||
"""Annotated response serializes to valid JSON."""
|
||||
annotator = ClaimAnnotator()
|
||||
response = "Test claim."
|
||||
result = annotator.annotate_claims(response)
|
||||
json_str = annotator.to_json(result)
|
||||
parsed = json.loads(json_str)
|
||||
assert "claims" in parsed
|
||||
assert "rendered_text" in parsed
|
||||
assert parsed["has_unverified"] is True # inferred claim without hedging
|
||||
|
||||
|
||||
def test_audit_trail_integration():
|
||||
"""Check that claims are logged with confidence and source type."""
|
||||
# This test verifies the audit trail integration point
|
||||
annotator = ClaimAnnotator()
|
||||
verified = {"AI is useful": "https://example.com/ai"}
|
||||
response = "AI is useful. It can help with tasks."
|
||||
|
||||
result = annotator.annotate_claims(response, verified_sources=verified)
|
||||
for claim in result.claims:
|
||||
assert claim.source_type in ("verified", "inferred")
|
||||
assert claim.confidence in ("high", "medium", "low", "unknown")
|
||||
if claim.source_type == "verified":
|
||||
assert claim.source_ref is not None
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
test_verified_claim_has_source()
|
||||
print("✓ test_verified_claim_has_source passed")
|
||||
test_inferred_claim_has_hedging()
|
||||
print("✓ test_inferred_claim_has_hedging passed")
|
||||
test_hedged_claim_not_double_hedged()
|
||||
print("✓ test_hedged_claim_not_double_hedged passed")
|
||||
test_rendered_text_distinguishes_types()
|
||||
print("✓ test_rendered_text_distinguishes_types passed")
|
||||
test_to_json_serialization()
|
||||
print("✓ test_to_json_serialization passed")
|
||||
test_audit_trail_integration()
|
||||
print("✓ test_audit_trail_integration passed")
|
||||
print("\nAll tests passed!")
|
||||
Reference in New Issue
Block a user