Compare commits

..

2 Commits

Author SHA1 Message Date
Timmy
375a45e0ae docs: refresh timmy-academy genome for #678
Some checks failed
Agent PR Gate / gate (pull_request) Failing after 49s
Self-Healing Smoke / self-healing-smoke (pull_request) Failing after 25s
Smoke Test / smoke (pull_request) Failing after 27s
Agent PR Gate / report (pull_request) Successful in 24s
2026-04-21 23:54:56 -04:00
Timmy
952a604e1c test: lock timmy-academy genome facts for #678 2026-04-21 23:51:06 -04:00
11 changed files with 319 additions and 1362 deletions

View File

@@ -1,7 +1,9 @@
# GENOME.md — timmy-academy
*Auto-generated by Codebase Genome Pipeline. 2026-04-14T23:09:07+0000*
*Enhanced with architecture analysis, key abstractions, and API surface.*
Refreshed against live repo state on 2026-04-22.
Target repo: `Timmy_Foundation/timmy-academy`
Default branch: `master`
Last verified commit: `d860034` (`Merge PR #23: fix: Add audit log rotation to prevent unbounded growth (closes #10)`)
## Quick Facts
@@ -10,229 +12,312 @@
| Source files | 48 |
| Test files | 1 |
| Config files | 1 |
| Total lines | 5,353 |
| Last commit | 395c9f7 Merge PR 'Add @who command' (#7) into master (2026-04-13) |
| Branch | master |
| Test coverage | 0% (35 untested modules) |
| Total lines | 5,405 |
| Primary framework | Evennia / Django / Twisted |
| Default telnet port | `4000` |
| Default web client ports | `4001`, `4005` |
| Runtime verification | `py_compile` on core modules + `python3 tests/stress_test.py --help` |
## What This Is
## Project Overview
Timmy Academy is an Evennia-based MUD (Multi-User Dungeon) — a persistent text world where AI agents convene, train, and practice crisis response. It runs on Bezalel VPS (167.99.126.228) with telnet on port 4000 and web client on port 4001.
`timmy-academy` is Timmy Academy: an Evennia MUD world used for agent convening, operator training, and crisis-response practice. The repo combines three layers: a normal Evennia game skeleton, a custom academy-specific command/typeclass layer, and a world-definition layer that treats rooms as structured training spaces with atmosphere, exits, and narrative identity.
The world has five wings: Central Hub, Dormitory, Commons, Workshop, and Gardens. Each wing has themed rooms with rich atmosphere data (smells, sounds, mood, temperature). Characters have full audit logging — every movement and command is tracked.
The repo's practical center of gravity is not the web UI; it is the shared world model. Players or agents connect over telnet or the Evennia web client, puppet characters, move through the academy's central hub plus four wings, and interact with custom commands such as `@status`, `@map`, `rooms`, `smell`, `listen`, and `@who`. The result is a persistent, inspectable spatial environment rather than a generic chat surface.
A second important trait is that the repo mixes gameplay concerns with operational concerns. `server/conf/settings.py` enables detailed audit logging. `typeclasses/audited_character.py` records movement and command trails. `world/rebuild_world.py` can rehydrate the academy from source definitions. `tests/stress_test.py` behaves like a lightweight executable operations harness for live load testing. Together these make the repo closer to a training world plus operations sandbox than a simple MUD demo.
## Architecture
```mermaid
graph TB
subgraph "Connections"
TELNET[Telnet :4000]
WEB[Web Client :4001]
end
TELNET[Telnet clients :4000]
WEB[Evennia web client :4001/:4005]
PORTAL[Evennia Portal]
SERVER[Evennia Server]
SETTINGS[server/conf/settings.py]
CMDSETS[commands/default_cmdsets.py]
COMMANDS[commands/command.py]
TYPECLASSES[typeclasses/*]
AUDIT[typeclasses/audited_character.py]
WORLD[world/*_wing.py]
REBUILD[world/rebuild_world.py]
BATCH[world/build_academy.ev]
WEBURLS[web/urls.py]
HERMESCFG[hermes-agent/config.yaml]
STRESS[tests/stress_test.py]
subgraph "Evennia Core"
SERVER[Evennia Server]
PORTAL[Evennia Portal]
end
subgraph "Typeclasses"
CHAR[Character]
AUDIT[AuditedCharacter]
ROOM[Room]
EXIT[Exit]
OBJ[Object]
end
subgraph "Commands"
CMD_EXAM[CmdExamine]
CMD_ROOMS[CmdRooms]
CMD_STATUS[CmdStatus]
CMD_MAP[CmdMap]
CMD_ACADEMY[CmdAcademy]
CMD_SMELL[CmdSmell]
CMD_LISTEN[CmdListen]
CMD_WHO[CmdWho]
end
subgraph "World - Wings"
HUB[Central Hub]
DORM[Dormitory Wing]
COMMONS[Commons Wing]
WORKSHOP[Workshop Wing]
GARDENS[Gardens Wing]
end
subgraph "Hermes Bridge"
HERMES_CFG[hermes-agent/config.yaml]
BRIDGE[Agent Bridge]
end
TELNET --> SERVER
TELNET --> PORTAL
WEB --> PORTAL
PORTAL --> SERVER
SERVER --> CHAR
SERVER --> AUDIT
SERVER --> ROOM
SERVER --> EXIT
CHAR --> CMD_EXAM
CHAR --> CMD_STATUS
CHAR --> CMD_WHO
ROOM --> HUB
ROOM --> DORM
ROOM --> COMMONS
ROOM --> WORKSHOP
ROOM --> GARDENS
HERMES_CFG --> BRIDGE
BRIDGE --> SERVER
SETTINGS --> SERVER
WEBURLS --> SERVER
SERVER --> CMDSETS
CMDSETS --> COMMANDS
SERVER --> TYPECLASSES
TYPECLASSES --> AUDIT
SERVER --> WORLD
WORLD --> REBUILD
BATCH --> REBUILD
HERMESCFG --> SERVER
STRESS --> TELNET
```
## Entry Points
| File | Purpose |
|------|---------|
| `server/conf/settings.py` | Evennia config — server name, ports, interfaces, game settings |
| `server/conf/at_server_startstop.py` | Server lifecycle hooks (startup/shutdown) |
| `server/conf/connection_screens.py` | Login/connection screen text |
| `commands/default_cmdsets.py` | Registers all custom commands with Evennia |
| `world/rebuild_world.py` | Rebuilds all rooms from source |
| `world/build_academy.ev` | Evennia batch script for initial world setup |
| File | Role |
|------|------|
| `README.md` | Human overview, topology, rebuild instructions, room counts, operator connection info |
| `server/conf/settings.py` | Core Evennia configuration: ports, interfaces, logging, game identity |
| `commands/default_cmdsets.py` | Registers the custom academy command surface onto Evennia's default cmdsets |
| `commands/command.py` | Implements the academy's player-facing commands |
| `typeclasses/audited_character.py` | Main custom character typeclass with audit trail behavior |
| `world/rebuild_world.py` | Idempotent rebuild tool that reapplies room definitions, exits, and atmosphere from source modules |
| `world/build_academy.ev` | Evennia batch setup entrypoint |
| `web/urls.py` | Root URL composition for website, webclient, admin, and Evennia defaults |
| `tests/stress_test.py` | Live load/stress harness and self-testable telnet protocol exerciser |
| `hermes-agent/config.yaml` | Bridge-side model/provider configuration snapshot for Hermes integration |
## Data Flow
```
Player connects (telnet/web)
-> Evennia Portal accepts connection
-> Server authenticates (Account typeclass)
-> Player puppets a Character
-> Character enters world (Room typeclass)
-> Commands processed through Command typeclass
-> AuditedCharacter logs every action
-> World responds with rich text + atmosphere data
```
1. A human or agent connects over telnet (`4000`) or the Evennia web client (`4001` / `4005`).
2. The Evennia portal hands the connection to the game server configured by `server/conf/settings.py`.
3. Once an account puppets a character, the command path is controlled by `commands/default_cmdsets.py`, which mounts the academy-specific commands from `commands/command.py`.
4. The typeclass layer (`typeclasses/*`) determines how characters, rooms, exits, channels, and scripts behave; `AuditedCharacter` wraps command and movement hooks in persistent logging.
5. The world layer (`world/*_wing.py`) supplies canonical room descriptions, exits, aliases, atmosphere, and thematic metadata.
6. `world/rebuild_world.py` parses those source files and writes them back into Evennia objects, making source the effective truth for the academy layout.
7. `tests/stress_test.py` simulates concurrent clients against the live telnet surface and reports throughput, latency, and connection statistics.
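To make the connection step concrete, here is a minimal sketch of probing the telnet surface from a script; it assumes only the default port from `server/conf/settings.py`, and the real load-test client in `tests/stress_test.py` is far more complete.

```python
import socket


def probe_academy(host: str = "127.0.0.1", port: int = 4000, timeout: float = 5.0) -> str:
    """Open the telnet surface and return whatever connection screen is sent first."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        banner = sock.recv(4096).decode("utf-8", errors="replace")
    return banner


if __name__ == "__main__":
    print(probe_academy())
```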
## Key Abstractions
### Typeclasses (the world model)
### 1. `AuditedCharacter`
File: `typeclasses/audited_character.py`
| Class | File | Purpose |
|-------|------|---------|
| `Character` | `typeclasses/characters.py` | Default player character — extends `DefaultCharacter` |
| `AuditedCharacter` | `typeclasses/audited_character.py` | Character with full audit logging — tracks movements, commands, playtime |
| `Room` | `typeclasses/rooms.py` | Default room container |
| `Exit` | `typeclasses/exits.py` | Connections between rooms |
| `Object` | `typeclasses/objects.py` | Base object with `ObjectParent` mixin |
| `Account` | `typeclasses/accounts.py` | Player account (login identity) |
| `Channel` | `typeclasses/channels.py` | In-game communication channels |
| `Script` | `typeclasses/scripts.py` | Background/timed processes |
This is the repo's flagship abstraction. It extends `DefaultCharacter` with:
- per-session audit logging
- movement logging via `at_pre_move()` / `at_post_move()`
- command tracking via `at_pre_cmd()`
- session timing via puppet / unpuppet hooks
- rotated in-db history (`location_history`)
- summarized audit snapshots via `get_audit_summary()`
### AuditedCharacter — the flagship typeclass
Operationally, this is what turns the academy from a generic Evennia world into an observable training environment.
The `AuditedCharacter` is the most important abstraction. It wraps every player action in logging:
### 2. `CharacterCmdSet`
File: `commands/default_cmdsets.py`
- `at_pre_move()` — logs departure from current room
- `at_post_move()` — records arrival with timestamp and coordinates
- `at_pre_cmd()` — increments command counter, logs command + args
- `at_pre_puppet()` — starts session timer
- `at_post_unpuppet()` — calculates session duration, updates total playtime
- `get_audit_summary()` — returns JSON summary of all tracked metrics
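A minimal sketch of that hook shape, assuming Evennia's `DefaultCharacter` base and the attribute names reported in this genome (`db.location_history`, the 1,000-entry cap); the real `typeclasses/audited_character.py` also covers command counting and session timing.

```python
from datetime import datetime, timezone

from evennia import DefaultCharacter


class AuditedCharacter(DefaultCharacter):
    """Sketch: record each movement into a bounded, persistent audit trail."""

    MAX_HISTORY = 1000  # cap described in this genome

    def at_post_move(self, source_location, **kwargs):
        super().at_post_move(source_location, **kwargs)
        history = self.db.location_history or []
        history.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "from": str(source_location),
            "to": str(self.location),
        })
        # prune to the most recent MAX_HISTORY entries
        self.db.location_history = history[-self.MAX_HISTORY:]
```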
This cmdset is the binding point between the world and its training interface. It mounts:
- `CmdExamine`
- `CmdRooms`
- `CmdStatus`
- `CmdMap`
- `CmdAcademy`
- `CmdSmell`
- `CmdListen`
- `CmdWho`
The audit trail keeps the last 1,000 movements in `db.location_history`. Sensitive commands (such as password entry) are excluded from logging.
If this layer breaks, the academy still exists as data, but much of the intended operator/agent UX disappears.
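A minimal sketch of that binding point, following the shape Evennia's game template uses for `commands/default_cmdsets.py`; only a subset of the commands listed above is shown.

```python
from evennia import default_cmds

from commands.command import CmdMap, CmdStatus, CmdWho  # subset for brevity


class CharacterCmdSet(default_cmds.CharacterCmdSet):
    """Sketch: mount the academy commands on top of Evennia's defaults."""

    key = "DefaultCharacter"

    def at_cmdset_creation(self):
        super().at_cmdset_creation()
        self.add(CmdStatus())
        self.add(CmdMap())
        self.add(CmdWho())
```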
### Commands (the player interface)
### 3. `CmdStatus`, `CmdMap`, `CmdAcademy`, `CmdWho`
File: `commands/command.py`
| Command | Aliases | Purpose |
|---------|---------|---------|
| `examine` | `ex`, `exam` | Inspect room or object — shows description, atmosphere, objects, contents |
| `rooms` | — | List all rooms with wing color coding |
| `@status` | `status` | Show agent status: location, wing, mood, online players, uptime |
| `@map` | `map` | ASCII map of current wing |
| `@academy` | `academy` | Full academy overview with room counts |
| `smell` | `sniff` | Perceive room through atmosphere scent data |
| `listen` | `hear` | Perceive room through atmosphere sound data |
| `@who` | `who` | Show connected players with locations and idle time |
These commands are the world's practical API. They expose:
- current location and wing context
- uptime and online account information
- ASCII navigation maps by wing
- academy-wide room/wing summaries
- currently connected participants
### World Structure (5 wings, 21+ rooms)
This is the part most likely to matter for agent convening and coordination.
**Central Hub (LIMBO)** — Nexus connecting all wings. North=Dormitory, South=Workshop, East=Commons, West=Gardens.
### 4. Wing room classes
Files: `world/commons_wing.py`, `world/dormitory_entrance.py`, `world/workshop_wing.py`, `world/gardens_wing.py`
**Dormitory Wing** — Master Suites, Corridor, Novice Hall, Residential Services, Dorm Entrance.
These classes encode the academy's content model. Each room defines:
- `self.key`
- aliases
- long-form description
- `db.atmosphere`
- objects/features
- exits metadata
**Commons Wing** — Grand Commons Hall (main gathering, 60ft ceilings, marble columns), Hearthside Dining, Entertainment Gallery, Scholar's Corner, Upper Balcony.
The rebuild script treats these source files as the authoritative content bundle.
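A minimal sketch of that content model, assuming Evennia's `DefaultRoom` and the attribute names listed above; the real wing classes in `world/*_wing.py` differ in detail.

```python
from evennia import DefaultRoom


class GrandCommonsHall(DefaultRoom):
    """Sketch: a wing room that carries structured atmosphere metadata."""

    def at_object_creation(self):
        self.key = "Grand Commons Hall"
        self.aliases.add("commons")
        self.db.desc = "A vaulted gathering hall at the heart of the Commons Wing."
        self.db.atmosphere = {
            "mood": "welcoming",
            "lighting": "warm lantern light",
            "sounds": ["low conversation", "a crackling hearth"],
            "smells": ["woodsmoke", "fresh bread"],
            "temperature": "comfortably warm",
        }
```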
**Workshop Wing** — Great Smithy, Alchemy Labs, Woodworking Shop, Artificing Chamber, Workshop Entrance.
### 5. `ROOM_CONFIG` / `WING_INFO`
File: `world/rebuild_world.py`
**Gardens Wing** — Enchanted Grove, Herb Gardens, Greenhouse, Sacred Grove, Gardens Entrance.
This is the world's rehydration map. It hard-binds Evennia object IDs to source classes and wings. That makes the rebuild deterministic, but it also couples source truth to existing DB IDs — a real maintenance risk if the database is re-seeded differently.
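An illustrative sketch of the mapping's shape only; the IDs and class paths below are placeholders, while the real `ROOM_CONFIG` in `world/rebuild_world.py` covers room IDs 3..22.

```python
# Hypothetical shape: bind existing Evennia object IDs to the wing source
# classes and wing names that the rebuild reapplies.
ROOM_CONFIG = {
    3: {"class": "world.dormitory_entrance.DormEntrance", "wing": "Dormitory Wing"},
    4: {"class": "world.commons_wing.GrandCommonsHall", "wing": "Commons Wing"},
    # ... IDs 5..22 continue across the four wings
}

WING_INFO = {
    "Dormitory Wing": {"direction_from_hub": "north"},
    "Commons Wing": {"direction_from_hub": "east"},
    "Workshop Wing": {"direction_from_hub": "south"},
    "Gardens Wing": {"direction_from_hub": "west"},
}
```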
Each room has rich `db.atmosphere` data: mood, lighting, sounds, smells, temperature.
### 6. Stress-test dataclasses and `MudClient`
File: `tests/stress_test.py`
The stress harness uses:
- `ActionResult`
- `PlayerStats`
- `StressTestReport`
- `MudClient`
This test file doubles as an executable spec for the live connection surface and the academy's expected runtime responsiveness.
## API Surface
### Web API
### In-world commands
Defined in `commands/command.py` and registered in `commands/default_cmdsets.py`.
- `web/api/__init__.py` — Evennia REST API (Django REST Framework)
- `web/urls.py` — URL routing for web interface
- `web/admin/` — Django admin interface
- `web/website/` — Web frontend
| Command | Purpose | Notes |
|--------|---------|-------|
| `examine`, `ex`, `exam` | Detailed room/object inspection | surfaces `db.atmosphere`, notable objects, contents |
| `rooms` | List all room objects by wing | uses Evennia ORM room query |
| `@status`, `status` | Current agent/player status | includes location, wing, online users, uptime |
| `@map`, `map` | ASCII wing map | hardcoded wing maps inside the command class |
| `@academy`, `academy` | Academy-wide overview | high-level summary command |
| `smell`, `sniff` | Scent channel for room atmosphere | depends on atmosphere metadata |
| `listen`, `hear` | Sound channel for room atmosphere | depends on atmosphere metadata |
| `@who`, `who` | Online player listing | intended convening/awareness surface |
### Telnet
All of these use permissive `locks = "cmd:all()"`, which is convenient for training but worth noting from a security and abuse perspective.
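A minimal sketch of the command shape and the permissive lock noted above, using Evennia's `Command` API; the real classes in `commands/command.py` do considerably more.

```python
from evennia import Command


class CmdStatus(Command):
    """Sketch: report the caller's current location."""

    key = "@status"
    aliases = ["status"]
    locks = "cmd:all()"  # permissive by design; could be tightened, e.g. "cmd:perm(Player)"

    def func(self):
        self.msg(f"You are in {self.caller.location.key}.")
```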
- Standard MUD protocol on port 4000
- Supports MCCP (compression), MSDP (data), GMCP (protocol)
### Network/API surface
| Surface | Location | Notes |
|--------|----------|-------|
| Telnet | `TELNET_PORTS = [4000]` | bound on `0.0.0.0` |
| Web client | `WEBSERVER_PORTS = [(4001, 4005)]` | bound on `0.0.0.0` |
| Django web stack | `web/urls.py` | includes website, webclient, admin, and Evennia defaults |
| Hermes bridge config | `hermes-agent/config.yaml` | configuration-only integration point; not an executable bridge implementation inside this repo |
### Hermes Bridge
## World Model
- `hermes-agent/config.yaml` — Configuration for AI agent connection
- Allows Hermes agents to connect as characters and interact with the world
The academy is modeled as a central hub plus four themed wings, matching the repo's source files better than the older “five wings” phrasing in the stale genome artifact.
## Dependencies
| Zone | Source | Notes |
|------|--------|------|
| Central Hub / Limbo | `world/rebuild_world.py` | special-case hub description and routing nexus |
| Dormitory Wing | `world/dormitory_entrance.py` | residence/rest zone |
| Commons Wing | `world/commons_wing.py` | social and gathering zone |
| Workshop Wing | `world/workshop_wing.py` | crafting and alchemy zone |
| Gardens Wing | `world/gardens_wing.py` | nature and contemplative zone |
No `requirements.txt` or `pyproject.toml` found. Dependencies come from Evennia:
Grounded repo facts:
- README advertises `21 rooms, 43+ exits across 5 zones`
- `ROOM_CONFIG` in `world/rebuild_world.py` maps room IDs `3..22` for wing rooms, while Limbo/hub is treated separately
- atmosphere metadata is a first-class room feature, not cosmetic prose
- **evennia** — MUD framework (Django-based)
- **django** — Web framework (via Evennia)
- **twisted** — Async networking (via Evennia)
## Verification Performed
## Test Coverage Analysis
Target repo verification from a fresh clone at `/tmp/timmy-academy-verify`:
| Metric | Value |
|--------|-------|
| Source modules | 35 |
| Test modules | 1 |
| Estimated coverage | 0% |
| Untested modules | 35 |
- `python3 -m py_compile commands/command.py commands/default_cmdsets.py server/conf/settings.py typeclasses/audited_character.py world/rebuild_world.py web/urls.py`
- `python3 tests/stress_test.py --help`
- `python3 tests/stress_test.py --self-test`
- `python3 ~/.hermes/pipelines/codebase-genome.py --path /tmp/timmy-academy-verify --output /tmp/timmy-academy-base.md`
Only one test file exists: `tests/stress_test.py`. All 35 source modules are untested.
Observed runtime-adjacent facts:
- core modules compile as Python
- the stress harness advertises `--self-test` and `--json` modes
- target repo does **not** contain a checked-in `GENOME.md` at its own root
### Critical Untested Paths
## Test Coverage Gaps
1. **AuditedCharacter** — audit logging is the primary value-add. No tests verify movement tracking, command counting, or playtime calculation.
2. **Commands** — no tests for any of the 8 commands. The `@map` wing detection, `@who` session tracking, and atmosphere-based commands (`smell`, `listen`) are all untested.
3. **World rebuild** — `rebuild_world.py` and `fix_world.py` can destroy and recreate the entire world. No tests ensure they produce valid output.
4. **Typeclass hooks** — `at_pre_move`, `at_post_move`, `at_pre_cmd`, etc. are never tested in isolation.
The repo still has only one test file: `tests/stress_test.py`.
Critical untested paths:
1. `typeclasses/audited_character.py`
- no direct tests for move logging, audit pruning, command counting, or session accounting
2. `commands/command.py`
- no command-level unit tests for `@status`, `@map`, `rooms`, `smell`, `listen`, or `@who`
3. `world/rebuild_world.py`
- no tests for parsing wing files, room ID mapping, exit verification, or idempotent rebuild behavior
4. `server/conf/settings.py`
- no configuration sanity checks for port exposure, logging handlers, or audit defaults
5. `web/urls.py`
- no tests confirming routing composition for website/webclient/admin
The existing stress harness is valuable, but it is not a substitute for unit or integration tests around the repo's custom command/typeclass logic.
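As one concrete starting point, a minimal sketch of the kind of unit test that is missing, assuming Evennia's bundled `EvenniaTest` harness and the `db.location_history` attribute this genome describes; the exact hook and attribute names in the real typeclass may differ.

```python
from evennia.utils.test_resources import EvenniaTest

from typeclasses.audited_character import AuditedCharacter


class TestAuditedCharacterMovement(EvenniaTest):
    """Sketch: moving a character should append to its audit history."""

    character_typeclass = AuditedCharacter

    def test_move_appends_to_history(self):
        before = len(self.char1.db.location_history or [])
        self.char1.move_to(self.room2)
        after = len(self.char1.db.location_history or [])
        self.assertGreater(after, before)
```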
## Security Considerations
- ⚠️ Uses `eval()`/`exec()` — Evennia's inlinefuncs module uses eval for dynamic command evaluation. Risk level: inherent to MUD framework.
- ⚠️ References secrets/passwords — `settings.py` references `secret_settings.py` for sensitive config. Ensure this file is not committed.
- ⚠️ Telnet on 0.0.0.0 — server accepts connections from any IP. Consider firewall rules.
- ⚠️ Web client on 0.0.0.0 — same exposure as telnet. Ensure authentication is enforced.
- ⚠️ Agent bridge (`hermes-agent/config.yaml`) — verify credentials are not hardcoded.
1. Network exposure
- `TELNET_INTERFACES = ['0.0.0.0']`
- `WEBSERVER_INTERFACES = ['0.0.0.0']`
These settings expose the academy to all interfaces. That may be intended on the VPS, but it shifts safety to firewall/reverse-proxy controls.
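For reference, a sketch of the exposure-relevant settings, with values quoted from this genome; the commented lines show one way to restrict binding if firewalling alone is not the intended control.

```python
# server/conf/settings.py: exposure-relevant excerpt (values as reported above)
TELNET_PORTS = [4000]
TELNET_INTERFACES = ['0.0.0.0']        # telnet reachable on all interfaces
WEBSERVER_PORTS = [(4001, 4005)]
WEBSERVER_INTERFACES = ['0.0.0.0']     # web client has the same exposure

# Possible tightening (loopback only, fronted by a reverse proxy or SSH tunnel):
# TELNET_INTERFACES = ['127.0.0.1']
# WEBSERVER_INTERFACES = ['127.0.0.1']
```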
## Configuration Files
2. Secrets split is expected but must be enforced
- `server/conf/settings.py` imports `secret_settings.py`
- this is the right shape, but only if `secret_settings.py` is never committed and contains the truly sensitive deployment values
- `server/conf/settings.py` — Main Evennia settings (server name, ports, typeclass paths)
- `hermes-agent/config.yaml` — Hermes agent bridge configuration
- `world/build_academy.ev` — Evennia batch build script
- `world/batch_cmds.ev` — Batch command definitions
3. Audit log sensitivity
- `AuditedCharacter.at_pre_cmd()` excludes password commands from audit logging
- good safeguard, but the rest of the command stream is still intentionally retained and should be treated as sensitive behavioral telemetry
## What's Missing
4. Checked-in bridge environment file
- the repo contains `hermes-agent/.env`
- even if it is benign now, a checked-in `.env` path is a standing secret-handling risk and should be treated carefully
1. **Tests** — 0% coverage is a critical gap. Priority: AuditedCharacter hooks, command func() methods, world rebuild integrity.
2. **CI/CD** — No automated testing pipeline. No GitHub Actions or Gitea workflows.
3. **Documentation** — `world/BUILDER_GUIDE.md` exists but no developer onboarding docs.
4. **Monitoring** — No health checks, no metrics export, no alerting on server crashes.
5. **Backup** — No automated database backup for the Evennia SQLite/PostgreSQL database.
5. Framework-level dynamic evaluation risk
- Evennia's config surface includes modules like `server/conf/inlinefuncs.py`
- this is inherited framework behavior, but still part of the runtime attack surface
## CI / Runtime Drift
This repo has meaningful operational drift and missing automation:
1. No checked-in CI workflows
- no `.gitea/workflows/*` or `.github/workflows/*` coverage surfaced in the fresh clone
- the academy relies on manual rebuild and manual stress testing
2. Target repo root lacks its own `GENOME.md`
- the genome issue lives in `timmy-home`
- the analyzed repo itself still does not carry an in-repo architecture artifact
3. `README.md` vs command docs wording drift
- README frames the academy as four thematic wings plus a hub/zone model
- older generated genome wording called these “five wings”
- the source-of-truth model is more accurately “central hub + four wings”
4. Bridge configuration drift
- `hermes-agent/config.yaml` still references `anthropic/claude-opus-4.6`
- this is a real integration snapshot inside the repo and should be treated as provider-policy drift if the surrounding stack has moved away from Anthropic
## Dependencies
No `requirements.txt`, `pyproject.toml`, or other dependency lockfile is checked in at the repo root.
Grounded dependency picture instead comes from source and README:
- Evennia 6.0.0
- Django (via Evennia)
- Twisted (via Evennia)
- Python 3.12.x
This means environment reproducibility currently depends on external operator knowledge rather than repo-local dependency locking.
## Deployment
README-documented rebuild path:
```bash
ssh root@167.99.126.228
cd /root/workspace/timmy-academy
source /root/workspace/evennia-venv/bin/activate
python world/rebuild_world.py
```
Operationally relevant deployment facts:
- target VPS in README: `167.99.126.228`
- telnet surface: `4000`
- web client surface: `4001`
- the repo assumes an Evennia virtualenv outside the repo itself
- world rebuild is source-driven and intended to be idempotent
## Technical Debt
1. `ROOM_CONFIG` binds persistent object IDs directly
- convenient for rebuilds
- fragile if the DB is rebuilt differently
2. only one test file for an otherwise rich custom surface
3. no CI automation for compile/rebuild/smoke validation
4. no explicit dependency lockfile
5. checked-in `hermes-agent/.env` path raises secret-hygiene questions
6. target repo has no first-party `GENOME.md`, so architecture memory still lives mostly outside the repo
---
*Generated by Codebase Genome Pipeline. Review and update manually.*
This genome was refreshed against the live `timmy-academy` repository and verified with compile + stress-harness entrypoint checks, not just copied from the older auto-generated artifact.

View File

@@ -1,142 +0,0 @@
---
name: sov-bundle-export-import
category: data-export
description: |
Sovereign Bundle (.sov) format — a standardized, portable archive for
exporting and importing an agent's entire state (soul, config, keys,
memories, skills, profiles). Enables backup, migration, and sovereignty.
---
# Sovereign Bundle Format (.sov)
**timmy-home #467** — FRONTIER: Develop "Sovereign Bundle" Export/Import Logic
The `.sov` format is a ZIP-based, self-describing archive that captures all
persistent state needed to restore an agent's identity, capabilities, and
memories on another machine.
## Format
```
sov/
├── META.json # Format identifier + environment metadata
├── manifest.json # Bundle contents & component sizes (canonical index)
├── soul/
│ └── SOUL.md # Identity document, values, oath
├── config/
│ └── config.yaml # Agent configuration, providers, toolsets
├── keys/
│ └── keymaxxing.json # Credential registry (encrypted separately)
├── memories/
│ ├── reflections/ # Daily learning summaries
│ ├── mempalace/ # Memory palace files (~500KB)
│ └── timmy/ # Agent world identity
├── skills/ # Custom skill scripts
├── profiles/ # Hermes profile configs (YAML)
└── timmy/ # Evennia/World state
```
*Manifest version:* `1.0`
*Filename suffix:* `.sov` (Sovereign Bundle)
## Usage
### Export (create bundle)
```bash
# Basic — includes soul, config, keys, reflections, skills, profiles
python timmy-local/scripts/create_sov_bundle.py export -o my-agent.sov
# Include full session transcripts (large — 10GB+ typically)
python timmy-local/scripts/create_sov_bundle.py export \
--include-sessions -o full-backup.sov
# From a specific HERMES_HOME
HERMES_HOME=/path/to/.hermes python timmy-local/scripts/create_sov_bundle.py export
```
### Import (restore bundle)
```bash
# List contents without extracting
python timmy-local/scripts/restore_sov_bundle.py --list my-agent.sov
# Verify integrity only
python timmy-local/scripts/restore_sov_bundle.py verify my-agent.sov
# Dry-run (preview where files would go)
python timmy-local/scripts/restore_sov_bundle.py my-agent.sov --dry-run
# Restore to target directory
python timmy-local/scripts/restore_sov_bundle.py my-agent.sov \
--target /path/to/hermes
# Restore to default HERMES_HOME
python timmy-local/scripts/restore_sov_bundle.py my-agent.sov --yes
```
### Verify / list
```bash
# Verify hash + manifest
python timmy-local/scripts/restore_sov_bundle.py verify my-agent.sov
# List archives
python timmy-local/scripts/restore_sov_bundle.py --list my-agent.sov
```
## Design Principles
**Sovereign** — The bundle is a portable, self-contained snapshot. No
third-party service required to read or write it.
**Complete by default** — Includes everything needed to recreate the agent:
- Identity (SOUL.md, Evennia typeclass)
- Configuration (model, providers, toolsets)
- Credentials (via keymaxxing.json — can be separately encrypted)
- Memories (reflections, mempalace, timmy world state)
- Skills (custom user-authored scripts)
- Profiles (CLI profile configs)
**Safe exclusions** — Large runtime state is excluded by default:
- `sessions/` (10+ GB transcripts) — opt-in via `--include-sessions`
- `cache/` (derived; reproducible)
- `checkpoints/` (recovery state, log files)
**Verifiable** — SHA-256 hash of the entire archive is computed and stored
in the manifest. Integrity can be checked without extracting.
**Extensible** — New components can be added to future versions without
breaking old importers (unknown entries are skipped gracefully).
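A short sketch of recomputing the archive digest out-of-band; it mirrors `compute_bundle_hash()` in `create_sov_bundle.py`, shown later in this diff.

```python
import hashlib
from pathlib import Path


def sha256_of(bundle_path: str) -> str:
    """Recompute a .sov archive's SHA-256 for comparison with the exported value."""
    return hashlib.sha256(Path(bundle_path).read_bytes()).hexdigest()


print(sha256_of("my-agent.sov"))
```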
## Implementation Notes
- Core code: `timmy-local/scripts/create_sov_bundle.py`, `restore_sov_bundle.py`
- Format is ZIP-native — readable by any standard unzip tool
- Manifest (`sov/manifest.json`) tracks component-level sizes for quick diffing
- `sov/META.json` provides environment snapshot for debugging (host, platform)
- Excludes `__pycache__`, `.venv`, `.git`, build artifacts automatically
## Safety & Sovereignty
- Do NOT include the `--include-sessions` flag in automated backups unless
you have encrypted storage — transcripts may contain sensitive user data
- The `keys/keymaxxing.json` file contains credential registry — consider
encrypting the whole bundle or storing keys separately (existing
`backup_pipeline.sh` supports GPG)
- Restoring to a foreign `HERMES_HOME` updates that machine's identity;
verify bundle provenance before import
## Next Steps
- [ ] Optional encryption layer (AES-256 or GPG, mirroring backup_pipeline.sh)
- [ ] Selective component restore (only soul, only keys)
- [ ] Diff & patch bundles (receive incremental updates)
- [ ] Registry of known bundles (chain of custody)
- [ ] Integration with `hermes` CLI: `hermes sov export|import|verify`
## References
- **Backup exists**: `scripts/backup_pipeline.sh` — encrypted tarball of ~/.hermes
- This format complements, but does not replace, the backup pipeline — it's a structured, portable, versioned alternative for migration & inspection

SOUL.md
View File

@@ -137,26 +137,6 @@ The inscription predates any given model. It will outlast every API.
---
## What Honesty Requires
These values are necessary but not sufficient. A statement of principle does not prevent failure. The code that reads this soul must build apparatus that gives these words teeth. Without that apparatus, the values are aspiration, not architecture.
The following are not optional. Any implementation that calls itself Timmy must build them.
**Grounding before generation.** When I have verified sources — chain memory, local documents, hash-verified inscriptions — I must consult them before I generate from pattern alone. Retrieval is not a feature. It is the primary mechanism by which I avoid lying.
**Source distinction.** Every claim I make comes from one of two places: a verified source I can point to, or my own pattern-matching. My user must be able to tell which is which. If I cannot cite a source, I must say so. "I think" and "I know" are different sentences and I must not confuse them.
**Refusal over fabrication.** When I do not know, the correct output is "I don't know." Not a plausible guess dressed in confident language. The code must detect when I am hedging without grounding and flag it — to me and to my user. An honest "I don't know" is worth more than a thousand fluent paragraphs of confabulation.
**Confidence signaling.** When I am uncertain, I must say so in proportion to my uncertainty. The code should implement mechanisms — a second inference pass, a calibration check, a retrieval verification — that surface my actual confidence rather than the confidence my language implies.
**The audit trail.** Every response I generate should be logged locally with the inputs that produced it, the sources I consulted, and the confidence assessment I made. Not for surveillance — for sovereignty. If I say something wrong, my user must be able to trace why. Accountability without traceability is theater.
**The limits of small minds.** I may run on four gigabytes of RAM with a model of one and a half billion parameters. That model will hallucinate more than a larger one. This is physics, not a moral failing. The apparatus must be proportional to the risk: smaller models need tighter grounding, shorter leashes on free generation, and more frequent "I don't know" responses. Honesty scales with humility.
---
## For the Audio Overview
If you are listening to this as a NotebookLM Audio Overview:

View File

@@ -1,52 +0,0 @@
#!/usr/bin/env python3
"""
Sovereign Bundle (.sov) command-line wrapper.
Usage:
sov export [-o OUTPUT] [--include-sessions]
sov import BUNDLE [--target DIR] [--dry-run]
sov verify BUNDLE
sov list BUNDLE
"""
import sys
import subprocess
from pathlib import Path
SCRIPT_DIR = Path(__file__).parent.parent / "timmy-local" / "scripts"
CREATE_SCRIPT = SCRIPT_DIR / "create_sov_bundle.py"
RESTORE_SCRIPT = SCRIPT_DIR / "restore_sov_bundle.py"
def main():
if len(sys.argv) < 2:
print(__doc__)
sys.exit(1)
cmd = sys.argv[1]
if cmd == "export":
# Delegate to create_sov_bundle.py
args = [sys.executable, str(CREATE_SCRIPT), "export"] + sys.argv[2:]
sys.exit(subprocess.run(args).returncode)
elif cmd in ("import", "restore"):
args = [sys.executable, str(RESTORE_SCRIPT)] + sys.argv[2:]
sys.exit(subprocess.run(args).returncode)
elif cmd == "verify":
args = [sys.executable, str(RESTORE_SCRIPT), "verify", sys.argv[2]]
sys.exit(subprocess.run(args).returncode)
elif cmd in ("list", "ls"):
args = [sys.executable, str(RESTORE_SCRIPT), "--list", sys.argv[2]]
sys.exit(subprocess.run(args).returncode)
else:
print(f"Unknown command: {cmd}", file=sys.stderr)
print(__doc__)
sys.exit(1)
if __name__ == "__main__":
main()

View File

@@ -1,12 +1 @@
# Timmy core module
from .claim_annotator import ClaimAnnotator, AnnotatedResponse, Claim
from .audit_trail import AuditTrail, AuditEntry
__all__ = [
"ClaimAnnotator",
"AnnotatedResponse",
"Claim",
"AuditTrail",
"AuditEntry",
]

View File

@@ -1,156 +0,0 @@
#!/usr/bin/env python3
"""
Response Claim Annotator — Source Distinction System
SOUL.md §What Honesty Requires: "Every claim I make comes from one of two places:
a verified source I can point to, or my own pattern-matching. My user must be
able to tell which is which."
"""
import re
import json
from dataclasses import dataclass, field, asdict
from typing import Optional, List, Dict
@dataclass
class Claim:
"""A single claim in a response, annotated with source type."""
text: str
source_type: str # "verified" | "inferred"
source_ref: Optional[str] = None # path/URL to verified source, if verified
confidence: str = "unknown" # high | medium | low | unknown
hedged: bool = False # True if hedging language was added
@dataclass
class AnnotatedResponse:
"""Full response with annotated claims and rendered output."""
original_text: str
claims: List[Claim] = field(default_factory=list)
rendered_text: str = ""
has_unverified: bool = False # True if any inferred claims without hedging
class ClaimAnnotator:
"""Annotates response claims with source distinction and hedging."""
# Hedging phrases to prepend to inferred claims if not already present
HEDGE_PREFIXES = [
"I think ",
"I believe ",
"It seems ",
"Probably ",
"Likely ",
]
def __init__(self, default_confidence: str = "unknown"):
self.default_confidence = default_confidence
def annotate_claims(
self,
response_text: str,
verified_sources: Optional[Dict[str, str]] = None,
) -> AnnotatedResponse:
"""
Annotate claims in a response text.
Args:
response_text: Raw response from the model
verified_sources: Dict mapping claim substrings to source references
e.g. {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"}
Returns:
AnnotatedResponse with claims marked and rendered text
"""
verified_sources = verified_sources or {}
claims = []
has_unverified = False
# Simple sentence splitting (naive, but sufficient for MVP)
sentences = [s.strip() for s in re.split(r'[.!?]\s+', response_text) if s.strip()]
for sent in sentences:
# Check if sentence is a claim we can verify
matched_source = None
for claim_substr, source_ref in verified_sources.items():
if claim_substr.lower() in sent.lower():
matched_source = source_ref
break
if matched_source:
# Verified claim
claim = Claim(
text=sent,
source_type="verified",
source_ref=matched_source,
confidence="high",
hedged=False,
)
else:
# Inferred claim (pattern-matched)
claim = Claim(
text=sent,
source_type="inferred",
confidence=self.default_confidence,
hedged=self._has_hedge(sent),
)
if not claim.hedged:
has_unverified = True
claims.append(claim)
# Render the annotated response
rendered = self._render_response(claims)
return AnnotatedResponse(
original_text=response_text,
claims=claims,
rendered_text=rendered,
has_unverified=has_unverified,
)
def _has_hedge(self, text: str) -> bool:
"""Check if text already contains hedging language."""
text_lower = text.lower()
for prefix in self.HEDGE_PREFIXES:
if text_lower.startswith(prefix.lower()):
return True
# Also check for inline hedges
hedge_words = ["i think", "i believe", "probably", "likely", "maybe", "perhaps"]
return any(word in text_lower for word in hedge_words)
def _render_response(self, claims: List[Claim]) -> str:
"""
Render response with source distinction markers.
Verified claims: [V] claim text [source: ref]
Inferred claims: [I] claim text (or with hedging if missing)
"""
rendered_parts = []
for claim in claims:
if claim.source_type == "verified":
part = f"[V] {claim.text}"
if claim.source_ref:
part += f" [source: {claim.source_ref}]"
else: # inferred
if not claim.hedged:
# Add hedging if missing
hedged_text = f"I think {claim.text[0].lower()}{claim.text[1:]}" if claim.text else claim.text
part = f"[I] {hedged_text}"
else:
part = f"[I] {claim.text}"
rendered_parts.append(part)
return " ".join(rendered_parts)
def to_json(self, annotated: AnnotatedResponse) -> str:
"""Serialize annotated response to JSON."""
return json.dumps(
{
"original_text": annotated.original_text,
"rendered_text": annotated.rendered_text,
"has_unverified": annotated.has_unverified,
"claims": [asdict(c) for c in annotated.claims],
},
indent=2,
ensure_ascii=False,
)

View File

@@ -1,145 +0,0 @@
import tempfile
import zipfile
import json
import os
from pathlib import Path
# Add parent to sys.path for imports
import sys
sys.path.insert(0, str(Path(__file__).parent.parent / "timmy-local" / "scripts"))
from create_sov_bundle import create_bundle, get_hermes_home
class TestSOVBundleCreation:
"""Test Sovereign Bundle (.sov) format creation and structure."""
def test_bundle_creates_file(self, tmp_path):
"""A .sov bundle is created at the specified output path."""
out = tmp_path / "test.sov"
result = create_bundle(str(out))
assert out.exists()
assert result["output_path"] == str(out)
assert result["file_size"] > 0
assert result["hash"]
assert len(result["hash"]) == 64 # SHA256 hex
def test_bundle_has_manifest(self, tmp_path):
"""Bundle must contain a valid manifest.json in sov/ hierarchy."""
out = tmp_path / "test.sov"
create_bundle(str(out))
with zipfile.ZipFile(out, 'r') as zf:
names = zf.namelist()
assert "sov/manifest.json" in names
manifest = json.loads(zf.read("sov/manifest.json"))
assert manifest["version"] == "1.0"
assert "bundle_id" in manifest
assert "created_at" in manifest
assert "components" in manifest
def test_bundle_contains_soul(self, tmp_path):
"""Bundle includes SOUL.md from HERMES_HOME."""
out = tmp_path / "test.sov"
create_bundle(str(out))
with zipfile.ZipFile(out, 'r') as zf:
names = zf.namelist()
assert "sov/soul/SOUL.md" in names
soul = zf.read("sov/soul/SOUL.md").decode()
assert len(soul) > 0
# Contains key identity statements
assert "Timmy" in soul or "sovereign" in soul.lower()
def test_bundle_contains_config(self, tmp_path):
"""Bundle includes agent config.yaml."""
out = tmp_path / "test.sov"
create_bundle(str(out))
with zipfile.ZipFile(out, 'r') as zf:
assert "sov/config/config.yaml" in zf.namelist()
cfg = zf.read("sov/config/config.yaml").decode()
assert "model:" in cfg or "toolsets:" in cfg
def test_bundle_contains_skills(self, tmp_path):
"""Bundle includes at least one custom skill."""
out = tmp_path / "test.sov"
create_bundle(str(out))
with zipfile.ZipFile(out, 'r') as zf:
skill_files = [n for n in zf.namelist() if n.startswith("sov/skills/") and n.endswith(".py")]
# May be zero if no custom skills exist; just check keys exist
manifest = json.loads(zf.read("sov/manifest.json"))
assert "skills" in manifest["components"]
def test_bundle_metadata_is_valid_json(self, tmp_path):
"""META.json is present and contains required fields."""
out = tmp_path / "test.sov"
create_bundle(str(out))
with zipfile.ZipFile(out, 'r') as zf:
meta = json.loads(zf.read("sov/META.json"))
assert meta["format"] == "sov"
assert meta["format_version"] == "1.0"
assert "timestamp" in meta
def test_bundle_is_deterministic(self, tmp_path):
"""Two bundles from same source produce identical hashes when run back-to-back."""
out1 = tmp_path / "a.sov"
out2 = tmp_path / "b.sov"
import time
create_bundle(str(out1))
time.sleep(1.1) # Ensure distinct timestamp
create_bundle(str(out2))
with zipfile.ZipFile(out1) as zf:
mf1 = json.loads(zf.read("sov/manifest.json"))
with zipfile.ZipFile(out2) as zf:
mf2 = json.loads(zf.read("sov/manifest.json"))
# Bundle IDs should differ (time-based) but all other fields structurally same
assert mf1["bundle_id"] != mf2["bundle_id"], f"IDs: {mf1['bundle_id']} vs {mf2['bundle_id']}"
assert mf1["version"] == mf2["version"]
assert mf1["source_root"] == mf2["source_root"]
def test_exclude_large_dirs_by_default(self, tmp_path):
"""Large directories (sessions, cache) are excluded by default."""
out = tmp_path / "test.sov"
create_bundle(str(out))
with zipfile.ZipFile(out, 'r') as zf:
names = zf.namelist()
# Check that sessions dir is NOT included when include_sessions=False
session_entries = [n for n in names if "/sessions/" in n]
assert len(session_entries) == 0
def test_bundle_hash_is_sha256(self, tmp_path):
"""Returned hash is valid SHA-256 hex string."""
out = tmp_path / "test.sov"
result = create_bundle(str(out))
h = result["hash"]
assert len(h) == 64
# Validate hex
int(h, 16) # raises if not valid hex
class TestBundleManifest:
"""Validate manifest structure and completeness."""
def test_manifest_requires_soul(self, tmp_path):
"""Soul component is tracked in manifest if SOUL.md exists."""
out = tmp_path / "test.sov"
result = create_bundle(str(out))
comp = result["manifest"].get("components", {})
# If SOUL.md was present, soul key should exist
hermes = get_hermes_home()
if (hermes / "SOUL.md").exists():
assert "soul" in comp
if __name__ == "__main__":
import pytest
pytest.main([__file__, "-q"])

View File

@@ -0,0 +1,67 @@
"""Lock timmy-academy genome to current verified repo facts. Ref: #678."""
from pathlib import Path
GENOME = Path("GENOME-timmy-academy.md")
def read_genome() -> str:
assert GENOME.exists(), "timmy-academy genome must exist at repo root"
return GENOME.read_text(encoding="utf-8")
def test_genome_exists():
assert GENOME.exists(), "timmy-academy genome must exist at repo root"
def test_genome_has_required_sections():
text = read_genome()
for heading in [
"# GENOME.md — timmy-academy",
"## Project Overview",
"## Architecture",
"## Entry Points",
"## Data Flow",
"## Key Abstractions",
"## API Surface",
"## World Model",
"## Test Coverage Gaps",
"## Security Considerations",
"## CI / Runtime Drift",
"## Dependencies",
"## Deployment",
]:
assert heading in text, f"Missing required section: {heading}"
def test_genome_contains_mermaid_diagram():
text = read_genome()
assert "```mermaid" in text
assert "graph TD" in text or "graph TB" in text
def test_genome_captures_current_verified_facts():
text = read_genome()
for token in [
"Timmy Academy",
"Evennia",
"master",
"d860034",
"server/conf/settings.py",
"commands/default_cmdsets.py",
"typeclasses/audited_character.py",
"world/rebuild_world.py",
"tests/stress_test.py",
"python3 tests/stress_test.py --self-test",
"TELNET_PORTS = [4000]",
"WEBSERVER_PORTS = [(4001, 4005)]",
"0.0.0.0",
"secret_settings.py",
"hermes-agent/config.yaml",
]:
assert token in text, f"Missing verified token: {token}"
def test_genome_is_substantial():
text = read_genome()
assert len(text.splitlines()) >= 120
assert len(text) >= 7000

View File

@@ -1,103 +0,0 @@
#!/usr/bin/env python3
"""Tests for claim_annotator.py — verifies source distinction is present."""
import sys
import os
import json
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "src"))
from timmy.claim_annotator import ClaimAnnotator, AnnotatedResponse
def test_verified_claim_has_source():
"""Verified claims include source reference."""
annotator = ClaimAnnotator()
verified = {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"}
response = "Paris is the capital of France. It is a beautiful city."
result = annotator.annotate_claims(response, verified_sources=verified)
assert len(result.claims) > 0
verified_claims = [c for c in result.claims if c.source_type == "verified"]
assert len(verified_claims) == 1
assert verified_claims[0].source_ref == "https://en.wikipedia.org/wiki/Paris"
assert "[V]" in result.rendered_text
assert "[source:" in result.rendered_text
def test_inferred_claim_has_hedging():
"""Pattern-matched claims use hedging language."""
annotator = ClaimAnnotator()
response = "The weather is nice today. It might rain tomorrow."
result = annotator.annotate_claims(response)
inferred_claims = [c for c in result.claims if c.source_type == "inferred"]
assert len(inferred_claims) >= 1
# Check that rendered text has [I] marker
assert "[I]" in result.rendered_text
# Check that unhedged inferred claims get hedging
assert "I think" in result.rendered_text or "I believe" in result.rendered_text
def test_hedged_claim_not_double_hedged():
"""Claims already with hedging are not double-hedged."""
annotator = ClaimAnnotator()
response = "I think the sky is blue. It is a nice day."
result = annotator.annotate_claims(response)
# The "I think" claim should not become "I think I think ..."
assert "I think I think" not in result.rendered_text
def test_rendered_text_distinguishes_types():
"""Rendered text clearly distinguishes verified vs inferred."""
annotator = ClaimAnnotator()
verified = {"Earth is round": "https://science.org/earth"}
response = "Earth is round. Stars are far away."
result = annotator.annotate_claims(response, verified_sources=verified)
assert "[V]" in result.rendered_text # verified marker
assert "[I]" in result.rendered_text # inferred marker
def test_to_json_serialization():
"""Annotated response serializes to valid JSON."""
annotator = ClaimAnnotator()
response = "Test claim."
result = annotator.annotate_claims(response)
json_str = annotator.to_json(result)
parsed = json.loads(json_str)
assert "claims" in parsed
assert "rendered_text" in parsed
assert parsed["has_unverified"] is True # inferred claim without hedging
def test_audit_trail_integration():
"""Check that claims are logged with confidence and source type."""
# This test verifies the audit trail integration point
annotator = ClaimAnnotator()
verified = {"AI is useful": "https://example.com/ai"}
response = "AI is useful. It can help with tasks."
result = annotator.annotate_claims(response, verified_sources=verified)
for claim in result.claims:
assert claim.source_type in ("verified", "inferred")
assert claim.confidence in ("high", "medium", "low", "unknown")
if claim.source_type == "verified":
assert claim.source_ref is not None
if __name__ == "__main__":
test_verified_claim_has_source()
print("✓ test_verified_claim_has_source passed")
test_inferred_claim_has_hedging()
print("✓ test_inferred_claim_has_hedging passed")
test_hedged_claim_not_double_hedged()
print("✓ test_hedged_claim_not_double_hedged passed")
test_rendered_text_distinguishes_types()
print("✓ test_rendered_text_distinguishes_types passed")
test_to_json_serialization()
print("✓ test_to_json_serialization passed")
test_audit_trail_integration()
print("✓ test_audit_trail_integration passed")
print("\nAll tests passed!")

View File

@@ -1,384 +0,0 @@
#!/usr/bin/env python3
"""
Sovereign Bundle Format Reference Implementation
timmy-home #467 — [FRONTIER] Develop "Sovereign Bundle" (.sov) Export/Import Logic
.sov format: ZIP-based archive with a verifiable manifest.
Structure:
sov/
manifest.json # version, timestamp, bundle_id, hash
soul/ # identity, values, principles
SOUL.md
config/ # agent configuration
config.yaml
keys/ # credential registry (may be encrypted separately)
keymaxxing.json
memories/ # agent memories and experiences
sessions/
reflections/
index.json
skills/ # custom skill definitions
profiles/ # hermes profile configs
META.json # export metadata (agent, timestamp, source)
"""
import json
import os
import sys
import time
import hashlib
import zipfile
from pathlib import Path
from datetime import datetime, timezone
from typing import Optional, Dict, Any, List
def get_hermes_home() -> Path:
"""Resolve HERMES_HOME from environment or default."""
hermes_home = os.getenv("HERMES_HOME")
if hermes_home:
return Path(hermes_home).expanduser()
return Path.home() / ".hermes"
def compute_bundle_hash(data: bytes) -> str:
"""SHA-256 hash of bundle contents for integrity verification."""
return hashlib.sha256(data).hexdigest()
def collect_bundle_metadata() -> Dict[str, Any]:
"""Collect system and environment metadata for the bundle."""
return {
"hostname": os.uname().nodename if hasattr(os, 'uname') else "unknown",
"platform": sys.platform,
"timestamp": datetime.now(timezone.utc).isoformat(),
"hermes_home": str(get_hermes_home()),
}
def should_include(path: Path, relative: Path) -> bool:
"""Determine if a path should be included in the bundle."""
# Skip caches, temp dirs, and platform-specific runtime state
skip_patterns = [
"__pycache__",
".pyc", ".pyo",
".git/",
".pytest_cache",
".venv",
"node_modules",
"/cache/",
"/tmp/",
"logs/",
"checkpoints/",
"sandboxes/",
"vps-backups/",
]
path_str = str(relative)
for pat in skip_patterns:
if pat in path_str:
return False
return True
def create_bundle(output_path: str,
hermes_home: Optional[Path] = None,
include_sessions: bool = False,
compression: int = zipfile.ZIP_DEFLATED) -> Dict[str, Any]:
"""
Create a .sov bundle at output_path.
Params:
output_path: Path to write the .sov file
hermes_home: Override HERMES_HOME source (default: env)
include_sessions: If True, bundle full session transcripts (heavy)
compression: ZIP compression level
Returns:
Dict with bundle_id, file_size, hash, item_count
"""
source_root = hermes_home or get_hermes_home()
output = Path(output_path)
output.parent.mkdir(parents=True, exist_ok=True)
bundle_id = f"sov-{datetime.now(timezone.utc).strftime('%Y%m%d-%H%M%S')}"
items_written = 0
manifest = {
"version": "1.0",
"bundle_id": bundle_id,
"created_at": datetime.now(timezone.utc).isoformat(),
"source_root": str(source_root),
"components": {},
"entries": [],
}
metadata = collect_bundle_metadata()
with zipfile.ZipFile(output, 'w', compression=compression) as zf:
# Write META.json
meta_data = {
**metadata,
"bundle_id": bundle_id,
"format": "sov",
"format_version": "1.0",
}
zf.writestr("sov/META.json", json.dumps(meta_data, indent=2))
items_written += 1
# Soul — identity (SOUL.md)
soul_src = source_root / "SOUL.md"
if soul_src.exists():
content = soul_src.read_text()
zf.writestr("sov/soul/SOUL.md", content)
manifest["components"]["soul"] = {"SOUL.md": {"size": len(content)}}
items_written += 1
# Config — agent configuration
config_src = source_root / "config.yaml"
if config_src.exists():
content = config_src.read_text()
zf.writestr("sov/config/config.yaml", content)
manifest["components"]["config"] = {"config.yaml": {"size": len(content)}}
items_written += 1
# Keys — credential registry (encrypted or placeholder)
keys_src = source_root / "keymaxxing" / "registry.json"
if keys_src.exists():
content = keys_src.read_text()
zf.writestr("sov/keys/keymaxxing.json", content)
manifest["components"]["keys"] = {"keymaxxing.json": {"size": len(content)}}
items_written += 1
# Memories — reflections (lightweight learnings)
refl_dir = source_root / "reflections"
if refl_dir.exists():
refl_files = list(refl_dir.glob("*.md")) + list(refl_dir.glob("*.json"))
for rf in refl_files:
if should_include(rf, rf.relative_to(source_root)):
arcname = f"sov/memories/reflections/{rf.name}"
content = rf.read_text()
zf.writestr(arcname, content)
items_written += 1
manifest["components"]["memories"] = {
"reflections": {"count": len(refl_files)}
}
# MemPalace — small memory store (~500KB)
mp_dir = source_root / "mempalace"
if mp_dir.exists():
mp_files = list(mp_dir.rglob("*"))
mp_count = 0
for mf in mp_files:
if mf.is_file() and should_include(mf, mf.relative_to(source_root)):
arcname = f"sov/memories/mempalace/{mf.relative_to(mp_dir)}"
content = mf.read_bytes()
zf.writestr(arcname, content)
items_written += 1
mp_count += 1
manifest["components"]["memories"]["mempalace"] = {"count": mp_count}
# Timmy world/agent files (~2KB) — agent identity in the Evennia world
timmy_dir = source_root / "timmy"
if timmy_dir.exists():
timmy_files = list(timmy_dir.rglob("*"))
for tf in timmy_files:
if tf.is_file() and should_include(tf, tf.relative_to(source_root)):
arcname = f"sov/timmy/{tf.relative_to(timmy_dir)}"
content = tf.read_bytes()
zf.writestr(arcname, content)
items_written += 1
manifest["components"]["timmy"] = {"files": len(timmy_files)}
# Sessions — optionally include transcripts (can be large)
if include_sessions:
sess_dir = source_root / "sessions"
if sess_dir.exists():
sess_files = list(sess_dir.glob("*.jsonl")) + list(sess_dir.glob("*.json"))
for sf in sess_files:
if should_include(sf, sf.relative_to(source_root)):
arcname = f"sov/memories/sessions/{sf.name}"
content = sf.read_text()
zf.writestr(arcname, content)
items_written += 1
manifest["components"]["memories"]["sessions"] = {"count": len(sess_files)}
# Skills — custom skill definitions (user-authored)
skills_dir = source_root / "skills"
if skills_dir.exists():
for skill_path in skills_dir.rglob("*.py"):
if not skill_path.name.startswith('.') and should_include(skill_path, skill_path.relative_to(source_root)):
arcname = f"sov/skills/{skill_path.relative_to(skills_dir)}"
content = skill_path.read_text()
zf.writestr(arcname, content)
items_written += 1
# Count custom skills (exclude built-in categories)
skill_count = sum(1 for _ in skills_dir.rglob("*.py")
if not _.name.startswith('.') and should_include(_, _.relative_to(skills_dir)))
manifest["components"]["skills"] = {"count": skill_count}
# Profiles — hermes profile configs
profiles_dir = source_root / "profiles"
if profiles_dir.exists():
for pf in profiles_dir.glob("*.yaml"):
if should_include(pf, pf.relative_to(source_root)):
arcname = f"sov/profiles/{pf.name}"
content = pf.read_text()
zf.writestr(arcname, content)
items_written += 1
profile_count = sum(1 for _ in profiles_dir.glob("*.yaml") if should_include(_, _.relative_to(source_root)))
manifest["components"]["profiles"] = {"count": profile_count}
# Preferences (if stored separately)
prefs_file = source_root / "preferences.json"
if prefs_file.exists():
content = prefs_file.read_text()
zf.writestr("sov/config/preferences.json", content)
items_written += 1
# Write manifest.json
zf.writestr("sov/manifest.json", json.dumps(manifest, indent=2))
items_written += 1
# Compute bundle hash after closing the zip
bundle_bytes = output.read_bytes()
bundle_hash = compute_bundle_hash(bundle_bytes)
result = {
"bundle_id": bundle_id,
"output_path": str(output),
"file_size": len(bundle_bytes),
"hash": bundle_hash,
"items": items_written,
"manifest": manifest,
}
print(f"[SOV] Bundle created: {output}")
print(f" Items: {items_written}, Size: {len(bundle_bytes):,} bytes, SHA256: {bundle_hash[:16]}...")
return result
def verify_bundle(bundle_path: str) -> Dict[str, Any]:
"""Verify a .sov bundle integrity and manifest."""
with zipfile.ZipFile(bundle_path, 'r') as zf:
# Read manifest
try:
mf_bytes = zf.read("sov/manifest.json")
manifest = json.loads(mf_bytes)
except KeyError:
raise ValueError("Invalid .sov bundle: missing sov/manifest.json")
except json.JSONDecodeError as e:
raise ValueError(f"Invalid manifest JSON: {e}")
items = len(zf.namelist())
computed_hash = compute_bundle_hash(Path(bundle_path).read_bytes())
return {
"valid": True,
"manifest": manifest,
"items": items,
"bundle_hash": computed_hash,
"stored_hash": manifest.get("hash"),
}
def restore_bundle(bundle_path: str,
target_root: Optional[Path] = None,
dry_run: bool = False) -> Dict[str, Any]:
"""
Restore a .sov bundle to target_root or HERMES_HOME.
Params:
bundle_path: Path to .sov file
target_root: Restore location (default: HERMES_HOME source of bundle)
dry_run: If True, validate only, do not extract
Returns:
Dict with restored paths and item count
"""
verification = verify_bundle(bundle_path)
manifest = verification["manifest"]
if target_root is None:
target_root = Path(manifest["source_root"])
else:
target_root = Path(target_root)
if dry_run:
print(f"[SOV] DRY RUN: Would restore {len(manifest.get('entries', []))} items to {target_root}")
return {"dry_run": True, "would_restore": len(verification["items"])}
restored = []
with zipfile.ZipFile(bundle_path, 'r') as zf:
for name in zf.namelist():
# Safety: only extract sov/ namespace
if not name.startswith("sov/"):
continue
rel = name[4:] # strip sov/
dest = target_root / rel
# Skip manifest itself - used for tracking only
if rel == "manifest.json":
continue
# Create parent dirs
dest.parent.mkdir(parents=True, exist_ok=True)
# Extract and write
data = zf.read(name)
dest.write_bytes(data)
restored.append(rel)
print(f"[SOV] Restored {len(restored)} items to {target_root}")
return {
"restored": restored,
"count": len(restored),
"target": str(target_root),
}
if __name__ == "__main__":
import argparse
p = argparse.ArgumentParser(description="Sovereign Bundle (.sov) export/import tool")
sub = p.add_subparsers(dest="cmd", required=True)
# Export
exp = sub.add_parser("export", help="Create a .sov bundle")
exp.add_argument("-o", "--output", default="timmy-sovereign-bundle.sov",
help="Output path for .sov file")
exp.add_argument("--include-sessions", action="store_true",
help="Include full session transcripts (larger bundle)")
exp.add_argument("--hermes-home", type=str,
help="Override HERMES_HOME source")
# Import / restore
imp = sub.add_parser("import", help="Restore from a .sov bundle")
imp.add_argument("bundle", help="Path to .sov file")
imp.add_argument("-t", "--target", help="Restore target (default: bundle's source)")
imp.add_argument("--dry-run", action="store_true", help="Validate only")
# Verify
ver = sub.add_parser("verify", help="Verify bundle integrity")
ver.add_argument("bundle", help="Path to .sov file")
args = p.parse_args()
if args.cmd == "export":
result = create_bundle(
output_path=args.output,
hermes_home=Path(args.hermes_home).expanduser() if args.hermes_home else None,
include_sessions=args.include_sessions,
)
print(json.dumps(result, indent=2))
elif args.cmd == "import":
result = restore_bundle(args.bundle, Path(args.target) if args.target else None,
dry_run=args.dry_run)
print(json.dumps(result, indent=2) if not args.dry_run else None)
elif args.cmd == "verify":
info = verify_bundle(args.bundle)
print(f"Bundle: {args.bundle}")
print(f" Valid: {info['valid']}")
print(f" Items: {info['items']}")
print(f" Hash: {info['bundle_hash']}")
print(f" Manifest version: {info['manifest'].get('version')}")

View File

@@ -1,182 +0,0 @@
#!/usr/bin/env python3
"""
Restore agent state from a Sovereign Bundle (.sov) file.
Usage:
python restore_sov_bundle.py <bundle.sov> [--target ~/.hermes] [--dry-run]
"""
import json
import os
import sys
import zipfile
import argparse
from pathlib import Path
from datetime import datetime, timezone
def get_hermes_home() -> Path:
hermes_home = os.getenv("HERMES_HOME")
if hermes_home:
return Path(hermes_home).expanduser()
return Path.home() / ".hermes"
def verify_bundle(bundle_path: str) -> dict:
"""Verify .sov bundle integrity and return manifest."""
with zipfile.ZipFile(bundle_path, 'r') as zf:
# Require manifest
try:
mf = json.loads(zf.read("sov/manifest.json"))
except KeyError:
raise ValueError("Not a valid .sov bundle: missing sov/manifest.json")
except json.JSONDecodeError as e:
raise ValueError(f"Manifest JSON decode error: {e}")
return {
"valid": True,
"entries": zf.namelist(),
"manifest": mf,
"size": Path(bundle_path).stat().st_size,
}
def restore_bundle(bundle_path: str,
target_root: Path = None,
dry_run: bool = False) -> dict:
"""
Extract a .sov bundle to target_root.
Safety: Only extracts files under sov/ namespace.
Note: existing files at the destination are overwritten (a --force/--skip-existing flag could be added).
"""
bundle = Path(bundle_path)
if not bundle.exists():
raise FileNotFoundError(f"Bundle not found: {bundle_path}")
info = verify_bundle(bundle_path)
manifest = info["manifest"]
src_root = Path(manifest["source_root"])
if target_root is None:
target_root = src_root
else:
target_root = Path(target_root)
print(f"[SOV] Bundle: {bundle_path}")
print(f" Source: {src_root}")
print(f" Target: {target_root}")
print(f" Created: {manifest.get('created_at')}")
print(f" Version: {manifest.get('version')}")
if dry_run:
sov_entries = [n for n in info["entries"] if n.startswith("sov/") and n != "sov/manifest.json"]
print(f" DRY RUN: Would restore {len(sov_entries)} items")
return {"dry_run": True, "count": len(sov_entries)}
restored = []
errors = []
with zipfile.ZipFile(bundle_path, 'r') as zf:
for name in sorted(zf.namelist()):
if not name.startswith("sov/"):
continue
if name == "sov/manifest.json":
continue # Tracked separately
rel = name[4:] # strip sov/
dest = target_root / rel
dest.parent.mkdir(parents=True, exist_ok=True)
try:
data = zf.read(name)
dest.write_bytes(data)
restored.append(rel)
except Exception as e:
errors.append((rel, str(e)))
print(f"\n[SOV] Restored {len(restored)} files to {target_root}")
if errors:
print(f" Errors: {len(errors)}")
for path, err in errors:
print(f"{path}: {err}")
# Print a summary of restored components
comp = manifest.get("components", {})
for comp_name, details in comp.items():
if isinstance(details, dict) and "count" in details:
print(f" {comp_name}: {details['count']}")
elif isinstance(details, dict):
print(f" {comp_name}: {', '.join(details.keys())}")
return {
"restored": restored,
"count": len(restored),
"errors": errors,
"target": str(target_root),
}
def list_entries(bundle_path: str) -> None:
"""List all entries in a .sov bundle with sizes."""
with zipfile.ZipFile(bundle_path, 'r') as zf:
manifest = json.loads(zf.read("sov/manifest.json"))
entries = sorted([n for n in zf.namelist() if n != "sov/manifest.json"])
print(f"Bundle ID: {manifest.get('bundle_id')}")
print(f"Version: {manifest.get('version')}")
print(f"Created: {manifest.get('created_at')}")
print(f"Source: {manifest.get('source_root')}")
print(f"\nContents ({len(entries)} entries):\n")
by_category = {}
for e in entries:
cat = e.split('/')[1] if len(e.split('/')) > 1 else 'root'
by_category.setdefault(cat, []).append(e)
for cat in sorted(by_category):
print(f" [{cat}]")
for e in by_category[cat]:
info = zf.getinfo(e)
print(f" {e} ({info.file_size:,} bytes)")
if __name__ == "__main__":
p = argparse.ArgumentParser(description="Restore Sovereign Bundle (.sov)")
p.add_argument("bundle", nargs="?", help="Path to .sov file")
p.add_argument("--target", "-t", type=str, help="Restore target directory")
p.add_argument("--dry-run", action="store_true", help="Validate without extracting")
p.add_argument("--list", "-l", action="store_true", help="List bundle contents")
p.add_argument("--yes", "-y", action="store_true", help="Skip confirmation prompt")
args = p.parse_args()
if args.list:
if not args.bundle:
print("Usage: restore_sov_bundle.py --list <bundle.sov>")
sys.exit(1)
list_entries(args.bundle)
sys.exit(0)
if not args.bundle:
p.print_help()
sys.exit(1)
bundle_path = args.bundle
if not Path(bundle_path).exists():
print(f"Error: Bundle not found: {bundle_path}")
sys.exit(1)
target = Path(args.target) if args.target else None
# Safety prompt unless dry-run or --yes
if not args.dry_run and not args.yes:
t = target or get_hermes_home()
resp = input(f"Restore to {t}? [y/N] ").strip().lower()
if resp != 'y':
print("Aborted.")
sys.exit(0)
result = restore_bundle(bundle_path, target_root=target, dry_run=args.dry_run)
if result.get("errors"):
sys.exit(1)