Compare commits


25 Commits

Author SHA1 Message Date
986076b808 Add provider allowlist guard — runtime enforcement of banned providers
Some checks failed
Forge CI / smoke-and-build (pull_request) Failing after 29s
2026-04-13 00:27:10 +00:00
47c510c6f3 Merge pull request 'feat: poka-yoke: block tool hallucination (#294)' (#301) from fix/json-repair-for-tool-calls into main
Some checks failed
Forge CI / smoke-and-build (push) Failing after 27s
2026-04-12 22:55:40 +00:00
Alexander Whitestone
a318c389fe feat: poka-yoke: block tool hallucination before API calls (#294)
Some checks failed
Forge CI / smoke-and-build (pull_request) Failing after 25s
Validates tool names against valid_tool_names before execution.
Both sequential and concurrent paths checked.

When the model hallucinates a non-existent tool:
- Logs warning with tool name
- Returns error listing available tools
- Does NOT make API call (saves budget)
2026-04-12 18:55:27 -04:00
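A minimal sketch of the guard this commit describes, checking the requested tool name against the set of registered tools before any API call is made. The function and variable names below (`validate_tool_call`, `valid_tool_names`) are illustrative assumptions, not the actual run_agent.py code:

```python
# Minimal sketch of the hallucination guard. Names are assumptions,
# not the actual hermes-agent code.
import logging

logger = logging.getLogger(__name__)

def validate_tool_call(tool_name: str, valid_tool_names: set[str]) -> dict | None:
    """Return an error result instead of calling the API when the model
    requests a tool that does not exist."""
    if tool_name in valid_tool_names:
        return None  # valid: proceed with the real tool call
    logger.warning("Model hallucinated non-existent tool: %s", tool_name)
    return {
        "error": f"Unknown tool '{tool_name}'. "
                 f"Available tools: {', '.join(sorted(valid_tool_names))}",
    }
```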
851f5601cf Merge pull request 'fix: repair malformed tool call JSON (closes #292)' (#300) from fix/json-repair-for-tool-calls into main
Some checks failed
Forge CI / smoke-and-build (push) Failing after 30s
2026-04-12 16:09:39 +00:00
Alexander Whitestone
cdde3b27c1 fix: repair malformed tool call JSON (closes #292)
Some checks failed
Forge CI / smoke-and-build (pull_request) Failing after 27s
Adds json-repair library to fix 1400+ JSON parse failures.
Wraps all json.loads() calls on tool call arguments with
repair_json() to handle trailing commas, single quotes,
missing braces, and unquoted keys.

Tested: 7/7 common LLM JSON error patterns repaired.
Impact: eliminates wasted inference turns from parse failures.
2026-04-12 08:16:40 -04:00
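The wrapping pattern this commit describes can be sketched as follows. This is illustrative, not the hermes-agent code; it assumes the `json-repair` package (`pip install json-repair`), whose `repair_json()` helper returns a cleaned-up JSON string:

```python
# Illustrative sketch of wrapping json.loads() with json-repair,
# as described in the commit above. Not the actual hermes-agent code.
import json
from json_repair import repair_json

def parse_tool_arguments(raw: str) -> dict:
    """Parse tool-call arguments, repairing common LLM JSON mistakes
    (trailing commas, single quotes, unquoted keys) before giving up."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return json.loads(repair_json(raw))
```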
9e96e51afd Merge pull request 'docs: Hermes Agent Feature Census — Know Thy Agent (#290)' (#291) from census/feature-inventory into main
Some checks failed
Forge CI / smoke-and-build (push) Failing after 24s
2026-04-11 09:31:46 +00:00
Alexander Whitestone
5e13fd2a5f docs: Hermes Agent Feature Census — complete inventory
Some checks failed
Forge CI / smoke-and-build (pull_request) Failing after 24s
Full feature census of hermes-agent codebase covering:
- Feature Matrix (memory, tools, sessions, plugins, config, gateway)
- Architecture Overview (dependency chain, data flow)
- Recent Development Activity (last 30 days, 1750+ commits)
- Overlap Analysis (what to use vs what to build)
- Contribution Roadmap (upstream vs Timmy Foundation)

Refs: #290
2026-04-11 05:03:51 -04:00
04c017bcb3 fix: CI stability — reduce deps, increase timeout
Some checks failed
Forge CI / smoke-and-build (push) Failing after 28s
2026-04-11 00:32:20 +00:00
4c2ac7b644 Merge pull request 'fix(memory): add remove action to on_memory_write bridge' (#277) from keymaxx/mimoomni/243 into main
Some checks failed
Forge CI / smoke-and-build (push) Failing after 45s
Auto-merged by Timmy
2026-04-10 20:59:47 +00:00
8202649ca0 fix(memory): add remove action to on_memory_write bridge
All checks were successful
Forge CI / smoke-and-build (pull_request) Successful in 43s
- Extend on_memory_write trigger in run_agent.py to fire for 'remove' action
- Holographic provider now handles 'replace' (re-adds content) and 'remove' (lowers trust on matching facts)
- Fixes orphaned facts when entries are deleted from built-in memory

Fixes #243
2026-04-10 15:31:45 -04:00
f5f028d981 auto-merge PR #276
Some checks failed
Forge CI / smoke-and-build (push) Failing after 42s
2026-04-10 19:03:02 +00:00
Alexander Whitestone
a703fb823c docs: add Matrix integration setup guide and interactive script
Some checks failed
Forge CI / smoke-and-build (pull_request) Failing after 36s
Phase 2 of Matrix integration — wires Hermes to any Matrix homeserver.

- docs/matrix-setup.md: step-by-step guide covering matrix.org (testing)
  and self-hosted (sovereignty) options, auth methods, E2EE setup, room
  config, and troubleshooting
- scripts/setup_matrix.py: interactive wizard that prompts for homeserver,
  supports token/password auth, generates MATRIX_DEVICE_ID, writes
  ~/.hermes/.env and config.yaml, and optionally creates a test room +
  sends a test message

No config.py changes needed — all Matrix env vars (MATRIX_HOMESERVER,
MATRIX_ACCESS_TOKEN, MATRIX_USER_ID, MATRIX_PASSWORD, MATRIX_ENCRYPTION,
MATRIX_DEVICE_ID, MATRIX_ALLOWED_USERS, MATRIX_HOME_ROOM, etc.) are
already registered in OPTIONAL_ENV_VARS and _EXTRA_ENV_KEYS.

Closes #271
2026-04-10 07:46:42 -04:00
a89dae9942 [auto-merge] browser integration PoC
Some checks failed
Forge CI / smoke-and-build (push) Failing after 38s
Notebook CI / notebook-smoke (push) Failing after 7s
Auto-merged by PR review bot: browser integration PoC
2026-04-10 11:44:56 +00:00
Alexander Whitestone
f85c07551a feat: browser integration analysis + PoC tool (#262)
Some checks failed
Forge CI / smoke-and-build (pull_request) Failing after 36s
Add docs/browser-integration-analysis.md:
- Technical analysis of Browser Use, Graphify, and Multica for Hermes
- Integration paths, security considerations, performance characteristics
- Clear recommendations: Browser Use (integrate), Graphify (investigate),
  Multica (skip)
- Phased integration roadmap

Add tools/browser_use_tool.py:
- Wraps browser-use library as Hermes tool (toolset: browser_use)
- Three tools: browser_use_run, browser_use_extract, browser_use_compare
- Autonomous multi-step browser automation from natural language tasks
- Integrates with existing url_safety and website_policy security modules
- Supports both local Playwright and cloud execution modes
- Follows existing tool registration pattern (registry.register)

Refs: #262
2026-04-10 07:10:29 -04:00
f81c60a5b3 Merge pull request 'docs: Improve KNOWN_VIOLATIONS justifications for SOUL.md alignment' (#267) from feature/improve-sovereignty-justification into main
Some checks failed
Forge CI / smoke-and-build (push) Failing after 41s
Merge PR #267: docs: Improve KNOWN_VIOLATIONS justifications for SOUL.md alignment
2026-04-10 09:35:51 +00:00
01977f28fb docs: improve KNOWN_VIOLATIONS justifications in verify_memory_sovereignty.py
Some checks failed
Forge CI / smoke-and-build (pull_request) Failing after 36s
2026-04-10 00:12:42 -04:00
a055e68ebf Merge pull request #265
Some checks failed
Forge CI / smoke-and-build (push) Failing after 43s
Merged PR #265
2026-04-10 03:44:23 +00:00
f6c9ecb893 Merge pull request #264
Some checks failed
Forge CI / smoke-and-build (push) Has been cancelled
Merged PR #264
2026-04-10 03:44:19 +00:00
549431bb81 Merge pull request #259
Some checks failed
Forge CI / smoke-and-build (push) Has been cancelled
Merged PR #259
2026-04-10 03:44:16 +00:00
43dc2d21f2 Merge pull request #263
Some checks failed
Forge CI / smoke-and-build (push) Has been cancelled
Merged PR #263
2026-04-10 03:44:04 +00:00
2948d010b7 Merge pull request #266
Some checks failed
Forge CI / smoke-and-build (push) Has been cancelled
Merged PR #266
2026-04-10 03:44:00 +00:00
Alexander Whitestone
0d92b9ad15 feat(scripts): add memory budget enforcement tool (#256)
All checks were successful
Forge CI / smoke-and-build (pull_request) Successful in 40s
Add scripts/memory_budget.py — a CI-friendly tool for checking and
enforcing character budgets on MEMORY.md and USER.md memory files.

Features:
- Checks MEMORY.md vs memory_char_limit (default 2200)
- Checks USER.md vs user_char_limit (default 1375)
- Estimates total injection cost (chars / ~4 chars per token)
- Alerts when approaching limits (>80% usage)
- --report flag for detailed breakdown with progress bars
- --verbose flag for per-entry details
- --enforce flag trims oldest entries to fit budget
- --json flag for machine-readable output (CI integration)
- Exit codes: 0=within budget, 1=over budget, 2=trimmed
- Suggestions for largest entries when over budget

Relates to #256
2026-04-09 21:13:01 -04:00
Alexander Whitestone
2e37ff638a Add memory sovereignty verification script (#257)
All checks were successful
Forge CI / smoke-and-build (pull_request) Successful in 39s
CI check that scans all memory-path code for network dependencies.

Scans 8 memory-related files:
- tools/memory_tool.py (MEMORY.md/USER.md store)
- hermes_state.py (SQLite session store)
- tools/session_search_tool.py (FTS5 session search)
- tools/graph_store.py (knowledge graph)
- tools/temporal_kg_tool.py (temporal KG tool)
- agent/temporal_knowledge_graph.py (temporal triple store)
- tools/skills_tool.py (skill listing/viewing)
- tools/skills_sync.py (bundled skill syncing)

Verifies no HTTP/HTTPS calls, no external API usage, and no
network dependencies in the core memory read/write path.

Reports violations with file:line references. Exit 0 if sovereign,
exit 1 if violations found. Suitable for CI integration.
2026-04-09 21:07:03 -04:00
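A simplified sketch of the kind of scan this script performs. The real scripts/verify_memory_sovereignty.py is more thorough (and maintains a KNOWN_VIOLATIONS allowlist); the file list and patterns below are assumptions for illustration:

```python
# Simplified sketch of a memory-sovereignty scan: flag network usage in
# memory-path files and report file:line. Patterns and file list are
# assumptions, not the real script.
import re
import sys
from pathlib import Path

NETWORK_PATTERNS = re.compile(r"https?://|requests\.|httpx\.|urllib\.request|aiohttp\.")
MEMORY_PATH_FILES = ["tools/memory_tool.py", "hermes_state.py", "tools/graph_store.py"]

def scan(repo_root: Path) -> int:
    violations = []
    for rel in MEMORY_PATH_FILES:
        path = repo_root / rel
        if not path.exists():
            continue
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if NETWORK_PATTERNS.search(line):
                violations.append(f"{rel}:{lineno}: {line.strip()}")
    for v in violations:
        print(v)
    return 1 if violations else 0  # exit 1 on violations, 0 if sovereign

if __name__ == "__main__":
    sys.exit(scan(Path(".")))
```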
Alexander Whitestone
815160bd6f burn: add Memory Architecture Guide (closes #263, #258)
All checks were successful
Forge CI / smoke-and-build (pull_request) Successful in 1m3s
Developer-facing guide covering all four memory tiers:
- Built-in memory (MEMORY.md/USER.md) with frozen snapshot pattern
- Session search (FTS5 + Gemini Flash summarization)
- Skills as procedural memory
- External memory provider plugin architecture

Includes data lifecycle, security guarantees, code paths,
configuration reference, and troubleshooting.
2026-04-09 20:51:45 -04:00
Alexander Whitestone
511eacb573 docs: add Memory Architecture Guide
All checks were successful
Forge CI / smoke-and-build (pull_request) Successful in 47s
Comprehensive guide covering the Hermes memory system:
- Built-in memory (MEMORY.md / USER.md) with frozen snapshot pattern
- Session search (FTS5 + Gemini Flash summarization)
- Skills as procedural memory
- External memory providers (8 plugins)
- System interaction flow and data lifecycle
- Best practices for what to save/skip
- Privacy and data locality guarantees
- Configuration reference (char limits, nudge interval, flush settings)
- Troubleshooting common issues

Closes #258
2026-04-09 12:45:48 -04:00
13 changed files with 3709 additions and 13 deletions


@@ -13,7 +13,7 @@ concurrency:
 jobs:
   smoke-and-build:
     runs-on: ubuntu-latest
-    timeout-minutes: 5
+    timeout-minutes: 10
     steps:
       - name: Checkout code
         uses: actions/checkout@v4
@@ -31,7 +31,7 @@ jobs:
         run: |
           uv venv .venv --python 3.11
           source .venv/bin/activate
-          uv pip install -e ".[all,dev]"
+          uv pip install -e ".[dev]"
       - name: Smoke tests
         run: |
@@ -55,7 +55,7 @@ jobs:
       - name: Green-path E2E
         run: |
           source .venv/bin/activate
-          python -m pytest tests/test_green_path_e2e.py -q --tb=short
+          python -m pytest tests/test_green_path_e2e.py -q --tb=short -p no:xdist
         env:
           OPENROUTER_API_KEY: ""
           OPENAI_API_KEY: ""


@@ -0,0 +1,335 @@
# Browser Integration Analysis: Browser Use + Graphify + Multica
**Issue:** #262 — Investigation: Browser Use + Graphify + Multica — Hermes Integration Analysis
**Date:** 2026-04-10
**Author:** Hermes Agent (burn branch)
## Executive Summary
This document evaluates three browser-related projects for integration with
hermes-agent. Each tool is assessed on capability, integration complexity,
security posture, and strategic fit with Hermes's existing browser stack.
| Tool | Recommendation | Integration Path |
|-------------------|-------------------------|-------------------------|
| Browser Use | **Integrate** (PoC) | Tool + MCP server |
| Graphify | Investigate further | MCP server or tool |
| Multica | Skip (for now) | N/A — premature |
---
## 1. Browser Use (`browser-use`)
### What It Does
Browser Use is a Python library that wraps Playwright to provide LLM-driven
browser automation. An agent describes a task in natural language, and
browser-use autonomously navigates, clicks, types, and extracts data by
feeding the page's accessibility tree to an LLM and executing the resulting
actions in a loop.
Key capabilities:
- Autonomous multi-step browser workflows from a single text instruction
- Accessibility tree extraction (DOM + ARIA snapshot)
- Screenshot and visual context for multimodal models
- Form filling, navigation, data extraction, file downloads
- Custom actions (register callable Python functions the LLM can invoke)
- Parallel agent execution (multiple browser agents simultaneously)
- Cloud execution via browser-use.com API (no local browser needed)
### Integration with Hermes
**Primary path: Custom Hermes tool** wrapping `browser-use` as a high-level
"automated browsing" capability alongside the existing `browser_tool.py`
(low-level, agent-controlled) tools.
**Why a separate tool rather than replacing browser_tool.py:**
- Hermes's existing browser tools (navigate, snapshot, click, type) give the
LLM fine-grained step-by-step control — this is valuable for interactive
tasks and debugging.
- browser-use gives coarse-grained "do this task for me" autonomy — better
for multi-step extraction workflows where the LLM would otherwise need
10+ tool calls.
- Both modes have legitimate use cases. Offer both.
**Integration architecture:**
```
hermes-agent
  tools/
    browser_tool.py        # Existing — low-level agent-controlled browsing
    browser_use_tool.py    # NEW — high-level autonomous browsing (PoC)
      |
      +-- browser_use.run()      # Wraps browser-use Agent class
      +-- browser_use.extract()  # Wraps browser-use for data extraction
```
The tool registers with `tools/registry.py` as toolset `browser_use` with
a `check_fn` that verifies `browser-use` is installed.
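A hedged sketch of that registration, assuming the hermes-agent package layout and a plausible `registry.register()` signature (the `name=`, `toolset=`, `func=`, and `check_fn=` parameters are assumptions based on this document, not a confirmed API):

```python
# Hedged sketch of the registration described above. Parameter names are
# assumptions; the real tools/registry.py signature may differ.
import importlib.util

from tools.registry import registry  # module-level singleton per this doc

def _browser_use_available() -> bool:
    """check_fn: only expose the toolset when browser-use is importable."""
    return importlib.util.find_spec("browser_use") is not None

def browser_use_run(task: str) -> str:
    """Run an autonomous multi-step browsing task (wraps browser-use)."""
    raise NotImplementedError("see tools/browser_use_tool.py in the PR")

registry.register(
    name="browser_use_run",
    toolset="browser_use",
    func=browser_use_run,
    check_fn=_browser_use_available,
)
```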
**Alternative: MCP server** — browser-use could also be exposed as an MCP
server for multi-agent setups where subagents need independent browser
access. This is a follow-up, not the initial integration.
### Dependencies and Requirements
```
pip install browser-use # Core library
playwright install chromium # Playwright browser binary
```
Or use cloud mode with `BROWSER_USE_API_KEY` — no local browser needed.
Python 3.11+, Playwright. No exotic system dependencies beyond what
Hermes already requires for its existing browser tool.
### Security Considerations
| Concern | Mitigation |
|----------------------------|---------------------------------------------------------|
| Arbitrary URL access | Reuse Hermes's `website_policy` and `url_safety` modules |
| Data exfiltration | Browser-use agents run in isolated Playwright contexts; no access to Hermes filesystem |
| Prompt injection via page | browser-use feeds page content to LLM — same risk as existing browser_snapshot; already handled by Hermes prompt hardening |
| Credential leakage | Do not pass API keys to untrusted pages; cloud mode keeps credentials server-side |
| Resource exhaustion | Set max_steps on browser-use Agent to prevent infinite loops |
| Downloaded files | Playwright download path is sandboxed; tool should restrict to temp directory |
**Key security property:** browser-use executes within Playwright's sandboxed
browser context. The LLM controlling browser-use is Hermes itself (or a
configured auxiliary model), not the page content. This is equivalent to the
existing browser tool's security model.
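For illustration, a minimal sketch of invoking browser-use with a step cap, the resource-exhaustion mitigation listed in the table above. It follows browser-use's commonly documented usage, but the exact constructor and `run()` arguments may differ between versions, so treat it as an assumption rather than a reference:

```python
# Minimal sketch of a capped browser-use run. API details are assumed and
# may differ across browser-use versions.
import asyncio
from browser_use import Agent
from langchain_openai import ChatOpenAI

async def run_browser_task(task: str) -> str:
    agent = Agent(task=task, llm=ChatOpenAI(model="gpt-4o-mini"))
    # max_steps caps the internal LLM/action loop to avoid runaway sessions.
    result = await agent.run(max_steps=20)
    return str(result)

if __name__ == "__main__":
    print(asyncio.run(run_browser_task("Find the latest Playwright release version")))
```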
### Performance Characteristics
- **Startup:** ~2-3s for Playwright Chromium launch (same as existing local mode)
- **Per-step:** ~1-3s per LLM call + browser action (comparable to manual
browser_navigate + browser_snapshot loop)
- **Full task (5-10 steps):** ~15-45s depending on page complexity
- **Token usage:** Each step sends the accessibility tree to the LLM.
Browser-use supports vision mode (screenshots) which is more token-heavy.
- **Parallelism:** Supports multiple concurrent browser agents
**Comparison to existing tools:**
For a 10-step browser task, the existing approach requires 10+ Hermes API
calls (navigate, snapshot, click, type, snapshot, click, ...). Browser-use
consolidates this into a single Hermes tool call that internally runs its
own LLM loop. This reduces Hermes API round-trips but shifts the LLM cost
to browser-use's internal model calls.
### Recommendation: INTEGRATE
Browser Use fills a clear gap — autonomous multi-step browser tasks — that
complements Hermes's existing fine-grained browser tools. The integration
is straightforward (Python library, same security model). A PoC tool is
provided in `tools/browser_use_tool.py`.
---
## 2. Graphify
### What It Does
Graphify is a knowledge graph extraction tool that processes unstructured
text (including web content) and extracts entities, relationships, and
structured knowledge into a graph format. It can:
- Extract entities and relationships from text using NLP/LLM techniques
- Build knowledge graphs from web-scraped content
- Support incremental graph updates as new content is processed
- Export graphs in standard formats (JSON-LD, RDF, etc.)
(Note: "Graphify" as a project name is used by several tools. The most
relevant for browser integration is the concept of extracting structured
knowledge graphs from web content during or after browsing.)
### Integration with Hermes
**Primary path: MCP server or Hermes tool** that takes web content (from
browser_tool or web_extract) and produces structured knowledge graphs.
**Integration architecture:**
```
hermes-agent
  tools/
    graphify_tool.py    # NEW — knowledge graph extraction from text
      |
      +-- graphify.extract()  # Extract entities/relations from text
      +-- graphify.merge()    # Merge into existing graph
      +-- graphify.query()    # Query the accumulated graph
```
Or via MCP:
```
hermes-agent --mcp-server graphify-mcp
-> tools: graphify_extract, graphify_query, graphify_export
```
**Synergy with browser tools:**
1. `browser_navigate` + `browser_snapshot` to get page content
2. `graphify_extract` to pull entities and relationships
3. Repeat across multiple pages to build a domain knowledge graph
4. `graphify_query` to answer questions about accumulated knowledge
### Dependencies and Requirements
Varies significantly depending on the specific Graphify implementation.
Typical requirements:
- Python 3.11+
- spaCy or similar NLP library for entity extraction
- Optional: Neo4j or NetworkX for graph storage
- LLM access (can reuse Hermes's existing model configuration)
### Security Considerations
| Concern | Mitigation |
|----------------------------|---------------------------------------------------------|
| Processing untrusted text | NLP extraction is read-only; no code execution |
| Graph data persistence | Store in Hermes's data directory with appropriate permissions |
| Information aggregation | Knowledge graphs could accumulate sensitive data; provide clear/delete commands |
| External graph DB access | If using Neo4j, require authentication and restrict to localhost |
### Performance Characteristics
- **Extraction:** ~0.5-2s per page depending on content length and NLP model
- **Graph operations:** Sub-second for graphs under 100K nodes
- **Storage:** Lightweight (JSON/SQLite) for small graphs, Neo4j for large-scale
- **Token usage:** If using LLM-based extraction, ~500-2000 tokens per page
### Recommendation: INVESTIGATE FURTHER
The concept is sound — knowledge graph extraction from web content is a
natural complement to browser tools. However:
1. **Multiple competing tools** exist under this name; need to identify the
best-maintained option
2. **Value proposition unclear** vs. Hermes's existing memory system and
file-based knowledge storage
3. **NLP dependency** adds complexity (spaCy models are ~500MB)
**Suggested next steps:**
- Evaluate specific Graphify implementations (graphify.ai, custom NLP pipelines)
- Prototype with a lightweight approach: LLM-based entity extraction + NetworkX (a sketch follows below)
- Assess whether Hermes's existing memory/graph_store.py can serve this role
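Expanding on the prototype suggestion above, a self-contained sketch under stated assumptions: the extraction step is a stand-in heuristic where a real prototype would call an LLM or spaCy model, and the accumulated graph lives in NetworkX:

```python
# Lightweight prototype sketch for the suggestion above. The extraction step
# is a stand-in heuristic; a real prototype would call an LLM or spaCy there.
import networkx as nx

def extract_triples(text: str) -> list[tuple[str, str, str]]:
    """Stand-in extractor returning (subject, relation, object) triples."""
    return [
        ("Browser Use", "wraps", "Playwright"),
        ("Hermes", "integrates", "Browser Use"),
    ]

def merge_into_graph(graph: nx.MultiDiGraph, text: str) -> nx.MultiDiGraph:
    for subj, rel, obj in extract_triples(text):
        graph.add_edge(subj, obj, relation=rel)
    return graph

graph = nx.MultiDiGraph()
merge_into_graph(graph, "page content from browser_snapshot ...")
print(list(graph.edges(data=True)))
```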
---
## 3. Multica
### What It Does
Multica is a multi-agent browser coordination framework. It enables multiple
AI agents to collaboratively browse the web, with features for:
- Task decomposition: splitting complex web tasks across multiple agents
- Shared browser state: agents see a common view of browsing progress
- Coordination protocols: agents can communicate about what they've found
- Parallel web research: multiple agents researching different aspects simultaneously
### Integration with Hermes
**Theoretical path:** Multica would integrate as a higher-level orchestration
layer on top of Hermes's existing browser tools, coordinating multiple
Hermes subagents (via `delegate_tool`) each with browser access.
**Integration architecture:**
```
hermes-agent (orchestrator)
  delegate_tool -> subagent_1 (browser_navigate, browser_snapshot, ...)
  delegate_tool -> subagent_2 (browser_navigate, browser_snapshot, ...)
  delegate_tool -> subagent_3 (browser_navigate, browser_snapshot, ...)
      |
      +-- Multica coordination layer (shared state, task splitting)
```
### Dependencies and Requirements
- Complex multi-agent orchestration infrastructure
- Shared state management between agents
- Potentially a custom runtime for agent coordination
- Likely requires significant architectural changes to Hermes's delegation model
### Security Considerations
| Concern | Mitigation |
|----------------------------|---------------------------------------------------------|
| Multiple agents on same browser | Session isolation per agent (Hermes already does this) |
| Coordinated exfiltration | Same per-agent restrictions apply |
| Amplified prompt injection | Each agent processes its own pages independently |
| Resource multiplication | N agents = N browser instances = Nx resource usage |
### Performance Characteristics
- **Scaling:** Near-linear improvement for embarrassingly parallel tasks
(e.g., "research 10 companies simultaneously")
- **Overhead:** Significant coordination overhead for tightly coupled tasks
- **Resource cost:** Each agent needs its own LLM calls + browser instance
- **Complexity:** Debugging multi-agent browser workflows is extremely difficult
### Recommendation: SKIP (for now)
Multica addresses a real need (parallel web research) but is premature for
Hermes for several reasons:
1. **Hermes already has subagent delegation** (`delegate_tool`) — agents can
already do parallel browser work without Multica
2. **No mature implementation** — Multica is more of a concept than a
production-ready tool
3. **Complexity vs. benefit** — the coordination overhead and debugging
difficulty outweigh the benefits for most use cases
4. **Better alternatives exist** — for parallel research, simply delegating
multiple subagents with browser tools is simpler and already works
**Revisit when:** Hermes's delegation model supports shared state between
subagents, or a mature Multica implementation emerges.
---
## Integration Roadmap
### Phase 1: Browser Use PoC (this PR)
- [x] Create `tools/browser_use_tool.py` wrapping browser-use as Hermes tool
- [x] Create `docs/browser-integration-analysis.md` (this document)
- [ ] Test with real browser tasks
- [ ] Add to toolset configuration
### Phase 2: Browser Use Production (follow-up)
- [ ] Add `browser_use` to `toolsets.py` toolset definitions
- [ ] Add configuration options in `config.yaml`
- [ ] Add tests in `tests/test_browser_use_tool.py`
- [ ] Consider MCP server variant for subagent use
### Phase 3: Graphify Investigation (follow-up)
- [ ] Evaluate specific Graphify implementations
- [ ] Prototype lightweight LLM-based entity extraction tool
- [ ] Assess integration with existing `graph_store.py`
- [ ] Create PoC if investigation is positive
### Phase 4: Multi-Agent Browser (future)
- [ ] Monitor Multica ecosystem maturity
- [ ] Evaluate when delegation model supports shared state
- [ ] Consider simpler parallel delegation patterns first
---
## Appendix: Existing Browser Stack
Hermes already has a comprehensive browser tool stack:
| Component | Description |
|-----------------------|--------------------------------------------------|
| `browser_tool.py` | Low-level agent-controlled browser (navigate, click, type, snapshot) |
| `browser_camofox.py` | Anti-detection browser via Camofox REST API |
| `browser_providers/` | Cloud providers (Browserbase, Browser Use API, Firecrawl) |
| `web_tools.py` | Web search (Parallel) and extraction (Firecrawl) |
| `mcp_tool.py` | MCP client for connecting external tool servers |
The existing stack covers:
- **Local browsing:** Headless Chromium via agent-browser CLI
- **Cloud browsing:** Browserbase, Browser Use cloud, Firecrawl
- **Anti-detection:** Camofox (local) or Browserbase advanced stealth
- **Content extraction:** Firecrawl for clean markdown extraction
- **Search:** Parallel AI web search
New browser integrations should complement rather than replace these tools.

docs/hermes-agent-census.md (new file, +477 lines)

@@ -0,0 +1,477 @@
# Hermes Agent — Feature Census
**Epic:** [#290 — Know Thy Agent: Hermes Feature Census](https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/290)
**Date:** 2026-04-11
**Source:** Timmy_Foundation/hermes-agent (fork of NousResearch/hermes-agent)
**Upstream:** NousResearch/hermes-agent (last sync: 2026-04-07, 499 commits merged in PR #201)
**Codebase:** ~200K lines Python (335 source files), 470 test files
---
## 1. Feature Matrix
### 1.1 Memory System
| Feature | Status | File:Line | Notes |
|---------|--------|-----------|-------|
| **`add` action** | ✅ Exists | `tools/memory_tool.py:457` | Append entry to MEMORY.md or USER.md |
| **`replace` action** | ✅ Exists | `tools/memory_tool.py:466` | Find by substring, replace content |
| **`remove` action** | ✅ Exists | `tools/memory_tool.py:475` | Find by substring, delete entry |
| **Dual stores (memory + user)** | ✅ Exists | `tools/memory_tool.py:43-45` | MEMORY.md (2200 char limit) + USER.md (1375 char limit) |
| **Entry deduplication** | ✅ Exists | `tools/memory_tool.py:128-129` | Exact-match dedup on load |
| **Injection/exfiltration scanning** | ✅ Exists | `tools/memory_tool.py:85` | Blocks prompt injection, role hijacking, secret exfil |
| **Frozen snapshot pattern** | ✅ Exists | `tools/memory_tool.py:119-135` | Preserves LLM prefix cache across session |
| **Atomic writes** | ✅ Exists | `tools/memory_tool.py:417-436` | tempfile.mkstemp + os.replace |
| **File locking (fcntl)** | ✅ Exists | `tools/memory_tool.py:137-153` | Exclusive lock for concurrent safety |
| **External provider plugin** | ✅ Exists | `agent/memory_manager.py` | Supports 1 external provider (Honcho, Mem0, Hindsight, etc.) |
| **Provider lifecycle hooks** | ✅ Exists | `agent/memory_provider.py:55-66` | on_memory_write, prefetch, sync_turn, on_session_end, on_pre_compress, on_delegation |
| **Session search (past conversations)** | ✅ Exists | `tools/session_search_tool.py:492` | FTS5 search across SQLite message store |
| **Holographic memory** | 🔌 Plugin slot | Config `memory.provider` | Accepted as external provider name, not built-in |
| **Engram integration** | ❌ Not present | — | Not in codebase; Engram is a Timmy Foundation project |
| **Trust system** | ❌ Not present | — | No trust scoring on memory entries |
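The atomic-write and file-locking rows above follow a standard pattern. A generic sketch of that pattern, standard library only (this is not the actual memory_tool.py code):

```python
# Generic sketch of the atomic-write (tempfile.mkstemp + os.replace) and
# fcntl-lock rows above. Not the actual memory_tool.py implementation.
import fcntl
import os
import tempfile
from pathlib import Path

def atomic_write(path: Path, content: str) -> None:
    """Write to a temp file in the same directory, then atomically swap it in."""
    fd, tmp = tempfile.mkstemp(dir=path.parent, prefix=path.name)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(content)
        os.replace(tmp, path)  # atomic on POSIX
    finally:
        if os.path.exists(tmp):
            os.unlink(tmp)

def locked_append(path: Path, new_entry: str) -> None:
    """Hold an exclusive lock while reading and rewriting the memory file."""
    with open(path, "a+") as lock_file:
        fcntl.flock(lock_file, fcntl.LOCK_EX)
        try:
            current = path.read_text()
            atomic_write(path, current + new_entry + "\n")
        finally:
            fcntl.flock(lock_file, fcntl.LOCK_UN)
```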
### 1.2 Tool System
| Feature | Status | File:Line | Notes |
|---------|--------|-----------|-------|
| **Central registry** | ✅ Exists | `tools/registry.py:290` | Module-level singleton, all tools self-register |
| **47 static tools** | ✅ Exists | See full list below | Organized in 21+ toolsets |
| **Dynamic MCP tools** | ✅ Exists | `tools/mcp_tool.py` | Runtime registration from MCP servers (17 in live instance) |
| **Tool approval system** | ✅ Exists | `tools/approval.py` | Manual/smart/off modes, dangerous command detection |
| **Toolset composition** | ✅ Exists | `toolsets.py:404` | Composite toolsets (e.g., `debugging = terminal + web + file`) |
| **Per-platform toolsets** | ✅ Exists | `toolsets.py` | `hermes-cli`, `hermes-telegram`, `hermes-discord`, etc. |
| **Skill management** | ✅ Exists | `tools/skill_manager_tool.py:747` | Create, patch, delete skill documents |
| **Mixture of Agents** | ✅ Exists | `tools/mixture_of_agents_tool.py:553` | Route through 4+ frontier LLMs |
| **Subagent delegation** | ✅ Exists | `tools/delegate_tool.py:963` | Isolated contexts, up to 3 parallel |
| **Code execution sandbox** | ✅ Exists | `tools/code_execution_tool.py:1360` | Python scripts with tool access |
| **Image generation** | ✅ Exists | `tools/image_generation_tool.py:694` | FLUX 2 Pro |
| **Vision analysis** | ✅ Exists | `tools/vision_tools.py:606` | Multi-provider vision |
| **Text-to-speech** | ✅ Exists | `tools/tts_tool.py:974` | Edge TTS, ElevenLabs, OpenAI, NeuTTS |
| **Speech-to-text** | ✅ Exists | Config `stt.*` | Local Whisper, Groq, OpenAI, Mistral Voxtral |
| **Home Assistant** | ✅ Exists | `tools/homeassistant_tool.py:456-483` | 4 HA tools (list, state, services, call) |
| **RL training** | ✅ Exists | `tools/rl_training_tool.py:1376-1394` | 10 Tinker-Atropos tools |
| **Browser automation** | ✅ Exists | `tools/browser_tool.py:2137-2211` | 10 tools (navigate, click, type, scroll, screenshot, etc.) |
| **Gitea client** | ✅ Exists | `tools/gitea_client.py` | Gitea API integration |
| **Cron job management** | ✅ Exists | `tools/cronjob_tools.py:508` | Scheduled task CRUD |
| **Send message** | ✅ Exists | `tools/send_message_tool.py:1036` | Cross-platform messaging |
#### Complete Tool List (47 static)
| # | Tool | Toolset | File:Line |
|---|------|---------|-----------|
| 1 | `read_file` | file | `tools/file_tools.py:832` |
| 2 | `write_file` | file | `tools/file_tools.py:833` |
| 3 | `patch` | file | `tools/file_tools.py:834` |
| 4 | `search_files` | file | `tools/file_tools.py:835` |
| 5 | `terminal` | terminal | `tools/terminal_tool.py:1783` |
| 6 | `process` | terminal | `tools/process_registry.py:1039` |
| 7 | `web_search` | web | `tools/web_tools.py:2082` |
| 8 | `web_extract` | web | `tools/web_tools.py:2092` |
| 9 | `vision_analyze` | vision | `tools/vision_tools.py:606` |
| 10 | `image_generate` | image_gen | `tools/image_generation_tool.py:694` |
| 11 | `text_to_speech` | tts | `tools/tts_tool.py:974` |
| 12 | `skills_list` | skills | `tools/skills_tool.py:1357` |
| 13 | `skill_view` | skills | `tools/skills_tool.py:1367` |
| 14 | `skill_manage` | skills | `tools/skill_manager_tool.py:747` |
| 15 | `browser_navigate` | browser | `tools/browser_tool.py:2137` |
| 16 | `browser_snapshot` | browser | `tools/browser_tool.py:2145` |
| 17 | `browser_click` | browser | `tools/browser_tool.py:2154` |
| 18 | `browser_type` | browser | `tools/browser_tool.py:2162` |
| 19 | `browser_scroll` | browser | `tools/browser_tool.py:2170` |
| 20 | `browser_back` | browser | `tools/browser_tool.py:2178` |
| 21 | `browser_press` | browser | `tools/browser_tool.py:2186` |
| 22 | `browser_get_images` | browser | `tools/browser_tool.py:2195` |
| 23 | `browser_vision` | browser | `tools/browser_tool.py:2203` |
| 24 | `browser_console` | browser | `tools/browser_tool.py:2211` |
| 25 | `todo` | todo | `tools/todo_tool.py:260` |
| 26 | `memory` | memory | `tools/memory_tool.py:544` |
| 27 | `session_search` | session_search | `tools/session_search_tool.py:492` |
| 28 | `clarify` | clarify | `tools/clarify_tool.py:131` |
| 29 | `execute_code` | code_execution | `tools/code_execution_tool.py:1360` |
| 30 | `delegate_task` | delegation | `tools/delegate_tool.py:963` |
| 31 | `cronjob` | cronjob | `tools/cronjob_tools.py:508` |
| 32 | `send_message` | messaging | `tools/send_message_tool.py:1036` |
| 33 | `mixture_of_agents` | moa | `tools/mixture_of_agents_tool.py:553` |
| 34 | `ha_list_entities` | homeassistant | `tools/homeassistant_tool.py:456` |
| 35 | `ha_get_state` | homeassistant | `tools/homeassistant_tool.py:465` |
| 36 | `ha_list_services` | homeassistant | `tools/homeassistant_tool.py:474` |
| 37 | `ha_call_service` | homeassistant | `tools/homeassistant_tool.py:483` |
| 38-47 | `rl_*` (10 tools) | rl | `tools/rl_training_tool.py:1376-1394` |
### 1.3 Session System
| Feature | Status | File:Line | Notes |
|---------|--------|-----------|-------|
| **Session creation** | ✅ Exists | `gateway/session.py:676` | get_or_create_session with auto-reset |
| **Session keying** | ✅ Exists | `gateway/session.py:429` | platform:chat_type:chat_id[:thread_id][:user_id] |
| **Reset policies** | ✅ Exists | `gateway/session.py:610` | none / idle / daily / both |
| **Session switching (/resume)** | ✅ Exists | `gateway/session.py:825` | Point key at a previous session ID |
| **Session branching (/branch)** | ✅ Exists | CLI commands.py | Fork conversation history |
| **SQLite persistence** | ✅ Exists | `hermes_state.py:41-94` | sessions + messages + FTS5 search |
| **JSONL dual-write** | ✅ Exists | `gateway/session.py:891` | Backward compatibility with legacy format |
| **WAL mode concurrency** | ✅ Exists | `hermes_state.py:157` | Concurrent read/write with retry |
| **Context compression** | ✅ Exists | Config `compression.*` | Auto-compress when context exceeds ratio |
| **Memory flush on reset** | ✅ Exists | `gateway/run.py:632` | Reviews old transcript before auto-reset |
| **Token/cost tracking** | ✅ Exists | `hermes_state.py:41` | input, output, cache_read, cache_write, reasoning tokens |
| **PII redaction** | ✅ Exists | Config `privacy.redact_pii` | Hash user IDs, strip phone numbers |
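For illustration, the session-keying scheme from the table above (`platform:chat_type:chat_id[:thread_id][:user_id]`) can be sketched as a small helper; the function name and argument handling are assumptions, not gateway/session.py itself:

```python
# Illustration of the session key scheme above; helper name and argument
# handling are assumptions, not gateway/session.py itself.
def build_session_key(platform: str, chat_type: str, chat_id: str,
                      thread_id: str | None = None,
                      user_id: str | None = None) -> str:
    parts = [platform, chat_type, chat_id]
    if thread_id:
        parts.append(thread_id)
    if user_id:
        parts.append(user_id)
    return ":".join(parts)

print(build_session_key("telegram", "group", "-100123456", thread_id="42"))
# -> telegram:group:-100123456:42
```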
### 1.4 Plugin System
| Feature | Status | File:Line | Notes |
|---------|--------|-----------|-------|
| **Plugin discovery** | ✅ Exists | `hermes_cli/plugins.py:5-11` | User (~/.hermes/plugins/), project, pip entry-points |
| **Plugin manifest (plugin.yaml)** | ✅ Exists | `hermes_cli/plugins.py` | name, version, requires_env, provides_tools, provides_hooks |
| **Lifecycle hooks** | ✅ Exists | `hermes_cli/plugins.py:55-66` | 9 hooks (pre/post tool_call, llm_call, api_request; on_session_start/end/finalize/reset) |
| **PluginContext API** | ✅ Exists | `hermes_cli/plugins.py:124-233` | register_tool, inject_message, register_cli_command, register_hook |
| **Plugin management CLI** | ✅ Exists | `hermes_cli/plugins_cmd.py:1-690` | install, update, remove, enable, disable |
| **Project plugins (opt-in)** | ✅ Exists | `hermes_cli/plugins.py` | Requires HERMES_ENABLE_PROJECT_PLUGINS env var |
| **Pip plugins** | ✅ Exists | `hermes_cli/plugins.py` | Entry-point group: hermes_agent.plugins |
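A heavily hedged sketch of what a user plugin using the PluginContext API above might look like. Only the method names (`register_hook`, `register_tool`) come from this census; the entry-point convention, hook-name string, and signatures are assumptions:

```python
# Heavily hedged plugin sketch. Only the PluginContext method names come
# from the census table; everything else here is assumed.
def register(ctx):
    """Hypothetical plugin entry point receiving a PluginContext."""

    def log_tool_call(tool_name, arguments, **kwargs):
        print(f"[my-plugin] about to run {tool_name} with {arguments}")

    ctx.register_hook("pre_tool_call", log_tool_call)
    ctx.register_tool(
        name="echo",
        description="Echo back the given text",
        func=lambda text: text,
    )
```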
### 1.5 Config System
| Feature | Status | File:Line | Notes |
|---------|--------|-----------|-------|
| **YAML config** | ✅ Exists | `hermes_cli/config.py:259-619` | ~120 config keys across 25 sections |
| **Schema versioning** | ✅ Exists | `hermes_cli/config.py` | `_config_version: 14` with migration support |
| **Provider config** | ✅ Exists | Config `providers.*`, `fallback_providers` | Per-provider overrides, fallback chains |
| **Credential pooling** | ✅ Exists | Config `credential_pool_strategies` | Key rotation strategies |
| **Auxiliary model config** | ✅ Exists | Config `auxiliary.*` | 8 separate side-task models (vision, compression, etc.) |
| **Smart model routing** | ✅ Exists | Config `smart_model_routing.*` | Route simple prompts to cheap model |
| **Env var management** | ✅ Exists | `hermes_cli/config.py:643-1318` | ~80 env vars across provider/tool/messaging/setting categories |
| **Interactive setup wizard** | ✅ Exists | `hermes_cli/setup.py` | Guided first-run configuration |
| **Config migration** | ✅ Exists | `hermes_cli/config.py` | Auto-migrates old config versions |
### 1.6 Gateway
| Feature | Status | File:Line | Notes |
|---------|--------|-----------|-------|
| **18 platform adapters** | ✅ Exists | `gateway/platforms/` | Telegram, Discord, Slack, WhatsApp, Signal, Mattermost, Matrix, HomeAssistant, Email, SMS, DingTalk, API Server, Webhook, Feishu, Wecom, Weixin, BlueBubbles |
| **Message queuing** | ✅ Exists | `gateway/run.py:507` | Queue during agent processing, media placeholder support |
| **Agent caching** | ✅ Exists | `gateway/run.py:515` | Preserve AIAgent instances per session for prompt caching |
| **Background reconnection** | ✅ Exists | `gateway/run.py:527` | Exponential backoff for failed platforms |
| **Authorization** | ✅ Exists | `gateway/run.py:1826` | Per-user allowlists, DM pairing codes |
| **Slash command interception** | ✅ Exists | `gateway/run.py` | Commands handled before agent (not billed) |
| **ACP server** | ✅ Exists | `acp_adapter/server.py:726` | VS Code / Zed / JetBrains integration |
| **Cron scheduler** | ✅ Exists | `cron/scheduler.py:850` | Full job scheduler with cron expressions |
| **Batch runner** | ✅ Exists | `batch_runner.py:1285` | Parallel batch processing |
| **API server** | ✅ Exists | `gateway/platforms/api_server.py` | OpenAI-compatible HTTP API |
### 1.7 Providers (20 supported)
| Provider | ID | Key Env Var |
|----------|----|-------------|
| Nous Portal | `nous` | `NOUS_BASE_URL` |
| OpenRouter | `openrouter` | `OPENROUTER_API_KEY` |
| Anthropic | `anthropic` | (standard) |
| Google AI Studio | `gemini` | `GOOGLE_API_KEY`, `GEMINI_API_KEY` |
| OpenAI Codex | `openai-codex` | (standard) |
| GitHub Copilot | `copilot` / `copilot-acp` | (OAuth) |
| DeepSeek | `deepseek` | `DEEPSEEK_API_KEY` |
| Kimi / Moonshot | `kimi-coding` | `KIMI_API_KEY` |
| Z.AI / GLM | `zai` | `GLM_API_KEY`, `ZAI_API_KEY` |
| MiniMax | `minimax` | `MINIMAX_API_KEY` |
| MiniMax (China) | `minimax-cn` | `MINIMAX_CN_API_KEY` |
| Alibaba / DashScope | `alibaba` | `DASHSCOPE_API_KEY` |
| Hugging Face | `huggingface` | `HF_TOKEN` |
| OpenCode Zen | `opencode-zen` | `OPENCODE_ZEN_API_KEY` |
| OpenCode Go | `opencode-go` | `OPENCODE_GO_API_KEY` |
| Qwen OAuth | `qwen-oauth` | (Portal) |
| AI Gateway | `ai-gateway` | (Nous) |
| Kilo Code | `kilocode` | (standard) |
| Ollama (local) | — | First-class via auxiliary wiring |
| Custom endpoint | `custom` | user-provided URL |
### 1.8 UI / UX
| Feature | Status | File:Line | Notes |
|---------|--------|-----------|-------|
| **Skin/theme engine** | ✅ Exists | `hermes_cli/skin_engine.py` | 7 built-in skins, user YAML skins |
| **Kawaii spinner** | ✅ Exists | `agent/display.py` | Animated faces, configurable verbs/wings |
| **Rich banner** | ✅ Exists | `banner.py` | Logo, hero art, system info |
| **Prompt_toolkit input** | ✅ Exists | `cli.py` | Autocomplete, history, syntax |
| **Streaming output** | ✅ Exists | Config `display.streaming` | Optional streaming |
| **Reasoning display** | ✅ Exists | Config `display.show_reasoning` | Show/hide chain-of-thought |
| **Cost display** | ✅ Exists | Config `display.show_cost` | Show $ in status bar |
| **Voice mode** | ✅ Exists | Config `voice.*` | Ctrl+B record, auto-TTS, silence detection |
| **Human delay simulation** | ✅ Exists | Config `human_delay.*` | Simulated typing delay |
### 1.9 Security
| Feature | Status | File:Line | Notes |
|---------|--------|-----------|-------|
| **Tirith security scanning** | ✅ Exists | `tools/tirith_security.py` | Pre-exec code scanning |
| **Secret redaction** | ✅ Exists | Config `security.redact_secrets` | Auto-strip secrets from output |
| **Memory injection scanning** | ✅ Exists | `tools/memory_tool.py:85` | Blocks prompt injection in memory |
| **URL safety** | ✅ Exists | `tools/url_safety.py` | URL reputation checking |
| **Command approval** | ✅ Exists | `tools/approval.py` | Manual/smart/off modes |
| **OSV vulnerability check** | ✅ Exists | `tools/osv_check.py` | Open Source Vulnerabilities DB |
| **Conscience validator** | ✅ Exists | `tools/conscience_validator.py` | SOUL.md alignment checking |
| **Shield detector** | ✅ Exists | `tools/shield/detector.py` | Jailbreak/crisis detection |
---
## 2. Architecture Overview
```
┌─────────────────────────────────────────────────────────┐
│ Entry Points │
├──────────┬──────────┬──────────┬──────────┬─────────────┤
│ CLI │ Gateway │ ACP │ Cron │ Batch Runner│
│ cli.py │gateway/ │acp_apt/ │ cron/ │batch_runner │
│ 8620 ln │ run.py │server.py │sched.py │ 1285 ln │
│ │ 7905 ln │ 726 ln │ 850 ln │ │
└────┬─────┴────┬─────┴──────────┴──────┬───┴─────────────┘
│ │ │
▼ ▼ ▼
┌─────────────────────────────────────────────────────────┐
│ AIAgent (run_agent.py, 9423 ln) │
│ ┌──────────────────────────────────────────────────┐ │
│ │ Core Conversation Loop │ │
│ │ while iterations < max: │ │
│ │ response = client.chat(tools, messages) │ │
│ │ if tool_calls: handle_function_call() │ │
│ │ else: return response │ │
│ └──────────────────────┬───────────────────────────┘ │
│ │ │
│ ┌──────────────────────▼───────────────────────────┐ │
│ │ model_tools.py (577 ln) │ │
│ │ _discover_tools() → handle_function_call() │ │
│ └──────────────────────┬───────────────────────────┘ │
└─────────────────────────┼───────────────────────────────┘
┌────────────────────▼────────────────────┐
│ tools/registry.py (singleton) │
│ ToolRegistry.register() → dispatch() │
└────────────────────┬────────────────────┘
┌─────────┬───────────┼───────────┬────────────────┐
▼ ▼ ▼ ▼ ▼
┌────────┐┌────────┐┌──────────┐┌──────────┐ ┌──────────┐
│ file ││terminal││ web ││ browser │ │ memory │
│ tools ││ tool ││ tools ││ tool │ │ tool │
│ 4 tools││2 tools ││ 2 tools ││ 10 tools │ │ 3 actions│
└────────┘└────────┘└──────────┘└──────────┘ └────┬─────┘
┌──────────▼──────────┐
│ agent/memory_manager │
│ ┌──────────────────┐│
│ │BuiltinProvider ││
│ │(MEMORY.md+USER.md)│
│ ├──────────────────┤│
│ │External Provider ││
│ │(optional, 1 max) ││
│ └──────────────────┘│
└─────────────────────┘
┌─────────────────────────────────────────────────┐
│ Session Layer │
│ SessionStore (gateway/session.py, 1030 ln) │
│ SessionDB (hermes_state.py, 1238 ln) │
│ ┌───────────┐ ┌─────────────────────────────┐ │
│ │sessions.js│ │ state.db (SQLite + FTS5) │ │
│ │ JSONL │ │ sessions │ messages │ fts │ │
│ └───────────┘ └─────────────────────────────┘ │
└─────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────┐
│ Gateway Platform Adapters │
│ telegram │ discord │ slack │ whatsapp │ signal │
│ matrix │ email │ sms │ mattermost│ api │
│ homeassistant │ dingtalk │ feishu │ wecom │ ... │
└─────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────┐
│ Plugin System │
│ User ~/.hermes/plugins/ │ Project .hermes/ │
│ Pip entry-points (hermes_agent.plugins) │
│ 9 lifecycle hooks │ PluginContext API │
└─────────────────────────────────────────────────┘
```
**Key dependency chain:**
```
tools/registry.py (no deps — imported by all tool files)
tools/*.py (each calls registry.register() at import time)
model_tools.py (imports tools/registry + triggers tool discovery)
run_agent.py, cli.py, batch_runner.py, environments/
```
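An illustrative sketch of that import-time self-registration chain. The `register()`/`dispatch()` names come from the diagram above; everything else (constructor, storage, example tool) is assumed for illustration:

```python
# Illustrative sketch of the chain above. register()/dispatch() names come
# from the diagram; the rest is assumed.
class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def register(self, name, func, toolset="misc"):
        # Tool modules call this at import time, so importing a tool file
        # is enough to make its tools discoverable.
        self._tools[name] = {"func": func, "toolset": toolset}

    def dispatch(self, name, **kwargs):
        return self._tools[name]["func"](**kwargs)

registry = ToolRegistry()  # module-level singleton imported by every tool file

# In a hypothetical tool module (tools/echo_tool.py), registration happens
# as a side effect of import:
def echo(text: str) -> str:
    return text

registry.register("echo", echo, toolset="debug")

# model_tools.py then only needs the registry to discover and dispatch:
print(registry.dispatch("echo", text="hello from the registry"))
```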
---
## 3. Recent Development Activity (Last 30 Days)
### Activity Summary
| Metric | Value |
|--------|-------|
| Total commits (since 2026-03-12) | ~1,750 |
| Top contributor | Teknium (1,169 commits) |
| Timmy Foundation commits | ~55 (Alexander Whitestone: 21, Timmy Time: 22, Bezalel: 12) |
| Key upstream sync | PR #201 — 499 commits from NousResearch/hermes-agent (2026-04-07) |
### Top Contributors (Last 30 Days)
| Contributor | Commits | Focus Area |
|-------------|---------|------------|
| Teknium | 1,169 | Core features, bug fixes, streaming, browser, Telegram/Discord |
| teknium1 | 238 | Supplementary work |
| 0xbyt4 | 117 | Various |
| Test | 61 | Testing |
| Allegro | 49 | Fleet ops, CI |
| kshitijk4poor | 30 | Features |
| SHL0MS | 25 | Features |
| Google AI Agent | 23 | MemPalace plugin |
| Timmy Time | 22 | CI, fleet config, merge coordination |
| Alexander Whitestone | 21 | Memory fixes, browser PoC, docs, CI, provider config |
| Bezalel | 12 | CI pipeline, devkit, health checks |
### Key Upstream Changes (Merged in Last 30 Days)
| Change | PR | Impact |
|--------|----|--------|
| Browser provider switch (Browserbase → Browser Use) | upstream #5750 | Breaking change in browser tooling |
| notify_on_complete for background processes | upstream #5779 | New feature for async workflows |
| Interactive model picker (Telegram + Discord) | upstream #5742 | UX improvement |
| Streaming fix after tool boundaries | upstream #5739 | Bug fix |
| Delegate: share credential pools with subagents | upstream | Security improvement |
| Permanent command allowlist on startup | upstream #5076 | Bug fix |
| Paginated model picker for Telegram | upstream | UX improvement |
| Slack thread replies without @mentions | upstream | Gateway improvement |
| Supermemory memory provider (added then removed) | upstream | Experimental, rolled back |
| Background process management overhaul | upstream | Major feature |
### Timmy Foundation Contributions (Our Fork)
| Change | PR | Author |
|--------|----|--------|
| Memory remove action bridge fix | #277 | Alexander Whitestone |
| Browser integration PoC + analysis | #262 | Alexander Whitestone |
| Memory budget enforcement tool | #256 | Alexander Whitestone |
| Memory sovereignty verification | #257 | Alexander Whitestone |
| Memory Architecture Guide | #263, #258 | Alexander Whitestone |
| MemPalace plugin creation | #259, #265 | Google AI Agent |
| CI: duplicate model detection | #235 | Alexander Whitestone |
| Kimi model config fix | #225 | Bezalel |
| Ollama provider wiring fix | #223 | Alexander Whitestone |
| Deep Self-Awareness Epic | #215 | Bezalel |
| BOOT.md for repo | #202 | Bezalel |
| Upstream sync (499 commits) | #201 | Alexander Whitestone |
| Forge CI pipeline | #154, #175, #187 | Bezalel |
| Gitea PR & Issue automation skill | #181 | Bezalel |
| Development tools for wizard fleet | #166 | Bezalel |
| KNOWN_VIOLATIONS justification | #267 | Manus AI |
---
## 4. Overlap Analysis
### What We're Building That Already Exists
| Timmy Foundation Planned Work | Hermes-Agent Already Has | Verdict |
|------------------------------|--------------------------|---------|
| **Memory system (add/remove/replace)** | `tools/memory_tool.py` with all 3 actions | **USE IT** — already exists, we just needed the `remove` fix (PR #277) |
| **Session persistence** | SQLite + JSONL dual-write system | **USE IT** — battle-tested, FTS5 search included |
| **Gateway platform adapters** | 18 adapters including Telegram, Discord, Matrix | **USE IT** — don't rebuild, contribute fixes |
| **Config management** | Full YAML config with migration, env vars | **USE IT** — extend rather than replace |
| **Plugin system** | Complete with lifecycle hooks, PluginContext API | **USE IT** — write plugins, not custom frameworks |
| **Tool registry** | Centralized registry with self-registration | **USE IT** — register new tools via existing pattern |
| **Cron scheduling** | `cron/scheduler.py` + `cronjob` tool | **USE IT** — integrate rather than duplicate |
| **Subagent delegation** | `delegate_task` with isolated contexts | **USE IT** — extend for fleet coordination |
### What We Need That Doesn't Exist
| Timmy Foundation Need | Hermes-Agent Status | Action |
|----------------------|---------------------|--------|
| **Engram integration** | Not present | Build as external memory provider plugin |
| **Holographic fact store** | Accepted as provider name, not implemented | Build as external memory provider |
| **Fleet orchestration** | Not present (single-agent focus) | Build on top, contribute patterns upstream |
| **Trust scoring on memory** | Not present | Build as extension to memory tool |
| **Multi-agent coordination** | delegate_tool supports parallel (max 3) | Extend for fleet-wide dispatch |
| **VPS wizard deployment** | Not present | Timmy Foundation domain — build independently |
| **Gitea CI/CD integration** | Minimal (gitea_client.py exists) | Extend existing client |
### Duplication Risk Assessment
| Risk | Level | Details |
|------|-------|---------|
| Memory system duplication | 🟢 LOW | We were almost duplicating memory removal (PR #278 vs #277). Now resolved. |
| Config system duplication | 🟢 LOW | Using hermes config directly via fork |
| Gateway duplication | 🟡 MEDIUM | Our fleet-ops patterns may partially overlap with gateway capabilities |
| Session management duplication | 🟢 LOW | Using hermes sessions directly |
| Plugin system duplication | 🟢 LOW | We write plugins, not a parallel system |
---
## 5. Contribution Roadmap
### What to Build (Timmy Foundation Own)
| Item | Rationale | Priority |
|------|-----------|----------|
| **Engram memory provider** | Sovereign local memory (Go binary, SQLite+FTS). Must be ours. | 🔴 HIGH |
| **Holographic fact store** | Our architecture for knowledge graph memory. Unique to Timmy. | 🔴 HIGH |
| **Fleet orchestration layer** | Multi-wizard coordination (Allegro, Bezalel, Ezra, Claude). Not upstream's problem. | 🔴 HIGH |
| **VPS deployment automation** | Sovereign wizard provisioning. Timmy-specific. | 🟡 MEDIUM |
| **Trust scoring system** | Evaluate memory entry reliability. Research needed. | 🟡 MEDIUM |
| **Gitea CI/CD integration** | Deep integration with our forge. Extend gitea_client.py. | 🟡 MEDIUM |
| **SOUL.md compliance tooling** | Conscience validator exists (`tools/conscience_validator.py`). Extend it. | 🟢 LOW |
### What to Contribute Upstream
| Item | Rationale | Difficulty |
|------|-----------|------------|
| **Memory remove action fix** | Already done (PR #277). ✅ | Done |
| **Browser integration analysis** | Useful for all users (PR #262). ✅ | Done |
| **CI stability improvements** | Reduce deps, increase timeout (our commit). ✅ | Done |
| **Duplicate model detection** | CI check useful for all forks (PR #235). ✅ | Done |
| **Memory sovereignty patterns** | Verification scripts, budget enforcement. Useful broadly. | Medium |
| **Engram provider adapter** | If Engram proves useful, offer as memory provider option. | Medium |
| **Fleet delegation patterns** | If multi-agent coordination patterns generalize. | Hard |
| **Wizard health monitoring** | If monitoring patterns generalize to any agent fleet. | Medium |
### Quick Wins (Next Sprint)
1. **Verify memory remove action** — Confirm PR #277 works end-to-end in our fork
2. **Test browser tool after upstream switch** — Browserbase → Browser Use (upstream #5750) may break our PoC
3. **Update provider config** — Kimi model references updated (PR #225), verify no remaining stale refs
4. **Engram provider prototype** — Start implementing as external memory provider plugin
5. **Fleet health integration** — Use gateway's background reconnection patterns for wizard fleet
---
## Appendix A: File Counts by Directory
| Directory | Files | Lines |
|-----------|-------|-------|
| `tools/` | 70+ .py files | ~50K |
| `gateway/` | 20+ .py files | ~25K |
| `agent/` | 10 .py files | ~10K |
| `hermes_cli/` | 15 .py files | ~20K |
| `acp_adapter/` | 9 .py files | ~8K |
| `cron/` | 3 .py files | ~2K |
| `tests/` | 470 .py files | ~80K |
| **Total** | **335 source + 470 test** | **~200K + ~80K** |
## Appendix B: Key File Index
| File | Lines | Purpose |
|------|-------|---------|
| `run_agent.py` | 9,423 | AIAgent class, core conversation loop |
| `cli.py` | 8,620 | CLI orchestrator, slash command dispatch |
| `gateway/run.py` | 7,905 | Gateway main loop, platform management |
| `tools/terminal_tool.py` | 1,783 | Terminal orchestration |
| `tools/web_tools.py` | 2,082 | Web search + extraction |
| `tools/browser_tool.py` | 2,211 | Browser automation (10 tools) |
| `tools/code_execution_tool.py` | 1,360 | Python sandbox |
| `tools/delegate_tool.py` | 963 | Subagent delegation |
| `tools/mcp_tool.py` | ~1,050 | MCP client |
| `tools/memory_tool.py` | 560 | Memory CRUD |
| `hermes_state.py` | 1,238 | SQLite session store |
| `gateway/session.py` | 1,030 | Session lifecycle |
| `cron/scheduler.py` | 850 | Job scheduler |
| `hermes_cli/config.py` | 1,318 | Config system |
| `hermes_cli/plugins.py` | 611 | Plugin system |
| `hermes_cli/skin_engine.py` | 500+ | Theme engine |

docs/matrix-setup.md (new file, +271 lines)

@@ -0,0 +1,271 @@
# Matrix Integration Setup Guide
Connect Hermes Agent to any Matrix homeserver for sovereign, encrypted messaging.
## Prerequisites
- Python 3.10+
- matrix-nio SDK: `pip install "matrix-nio[e2e]"`
- For E2EE: libolm C library (see below)
## Option A: matrix.org Public Homeserver (Testing)
Best for quick evaluation. No server to run.
### 1. Create a Matrix Account
Go to https://app.element.io and create an account on matrix.org.
Choose a username like `@hermes-bot:matrix.org`.
### 2. Get an Access Token
This is the recommended auth method: an access token avoids storing passwords
and survives password changes.
```bash
# Using curl (replace user/password):
curl -X POST 'https://matrix-client.matrix.org/_matrix/client/v3/login' \
-H 'Content-Type: application/json' \
-d '{
"type": "m.login.password",
"user": "your-bot-username",
"password": "your-password"
}'
```
Look for `access_token` and `device_id` in the response.
Alternatively, in Element: Settings -> Help & About -> Advanced -> Access Token.
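If matrix-nio is already installed (see Prerequisites), the token can also be obtained programmatically. A minimal sketch; the homeserver URL, user ID, and password below are placeholders:

```python
# Minimal sketch: obtain an access token with matrix-nio instead of curl.
# Homeserver URL, user ID, and password are placeholders.
import asyncio
from nio import AsyncClient, LoginResponse

async def get_token() -> None:
    client = AsyncClient("https://matrix-client.matrix.org", "@hermes-bot:matrix.org")
    resp = await client.login("your-password", device_name="HERMES_BOT")
    if isinstance(resp, LoginResponse):
        print("MATRIX_ACCESS_TOKEN =", resp.access_token)
        print("MATRIX_DEVICE_ID    =", resp.device_id)
    else:
        print("login failed:", resp)
    await client.close()

asyncio.run(get_token())
```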
### 3. Set Environment Variables
Add to `~/.hermes/.env`:
```bash
MATRIX_HOMESERVER=https://matrix-client.matrix.org
MATRIX_ACCESS_TOKEN=syt_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
MATRIX_USER_ID=@hermes-bot:matrix.org
MATRIX_DEVICE_ID=HERMES_BOT
```
### 4. Install Dependencies
```bash
pip install "matrix-nio[e2e]"
```
### 5. Start Hermes Gateway
```bash
hermes gateway
```
## Option B: Self-Hosted Homeserver (Sovereignty)
For full control over your data and encryption keys.
### Popular Homeservers
- **Synapse** (reference impl): https://github.com/element-hq/synapse
- **Conduit** (lightweight, Rust): https://conduit.rs
- **Dendrite** (Go): https://github.com/matrix-org/dendrite
### 1. Deploy Your Homeserver
Follow your chosen server's documentation. Common setup with Docker:
```bash
# Synapse example:
docker run -d --name synapse \
-v /opt/synapse/data:/data \
-e SYNAPSE_SERVER_NAME=your.domain.com \
-e SYNAPSE_REPORT_STATS=no \
matrixdotorg/synapse:latest
```
### 2. Create Bot Account
Register on your homeserver:
```bash
# Synapse: register new user (run inside container)
docker exec -it synapse register_new_matrix_user http://localhost:8008 \
-c /data/homeserver.yaml -u hermes-bot -p 'secure-password' --admin
```
### 3. Configure Hermes
Set in `~/.hermes/.env`:
```bash
MATRIX_HOMESERVER=https://matrix.your.domain.com
MATRIX_ACCESS_TOKEN=<obtain via login API>
MATRIX_USER_ID=@hermes-bot:your.domain.com
MATRIX_DEVICE_ID=HERMES_BOT
```
## Environment Variables Reference
| Variable | Required | Description |
|----------|----------|-------------|
| `MATRIX_HOMESERVER` | Yes | Homeserver URL (e.g. `https://matrix.org`) |
| `MATRIX_ACCESS_TOKEN` | Yes* | Access token (preferred over password) |
| `MATRIX_USER_ID` | With password | Full user ID (`@user:server`) |
| `MATRIX_PASSWORD` | Alt* | Password (alternative to token) |
| `MATRIX_DEVICE_ID` | Recommended | Stable device ID for E2EE persistence |
| `MATRIX_ENCRYPTION` | No | Set `true` to enable E2EE |
| `MATRIX_ALLOWED_USERS` | No | Comma-separated allowed user IDs |
| `MATRIX_HOME_ROOM` | No | Room ID for cron/notifications |
| `MATRIX_REACTIONS` | No | Enable processing reactions (default: true) |
| `MATRIX_REQUIRE_MENTION` | No | Require @mention in rooms (default: true) |
| `MATRIX_FREE_RESPONSE_ROOMS` | No | Room IDs exempt from mention requirement |
| `MATRIX_AUTO_THREAD` | No | Auto-create threads (default: true) |
\* Either `MATRIX_ACCESS_TOKEN` or `MATRIX_USER_ID` + `MATRIX_PASSWORD` is required.
## Config YAML Entries
Add to `~/.hermes/config.yaml` under a `matrix:` key for declarative settings:
```yaml
matrix:
require_mention: true
free_response_rooms:
- "!roomid1:matrix.org"
- "!roomid2:matrix.org"
auto_thread: true
```
These map to the corresponding env vars and take effect only if the env var is not already set.
## End-to-End Encryption (E2EE)
E2EE protects messages so only participants can read them. Hermes uses
matrix-nio's Olm/Megolm implementation.
### 1. Install E2EE Dependencies
```bash
# macOS
brew install libolm
# Ubuntu/Debian
sudo apt install libolm-dev
# Then install matrix-nio with E2EE support:
pip install "matrix-nio[e2e]"
```
### 2. Enable Encryption
Set in `~/.hermes/.env`:
```bash
MATRIX_ENCRYPTION=true
MATRIX_DEVICE_ID=HERMES_BOT
```
### 3. How It Works
- On first connect, Hermes creates a device and uploads encryption keys.
- Keys are stored in `~/.hermes/platforms/matrix/store/`.
- On shutdown, Megolm session keys are exported to `exported_keys.txt`.
- On next startup, keys are imported so the bot can decrypt old messages.
- The `MATRIX_DEVICE_ID` ensures the bot reuses the same device identity
across restarts. Without it, each restart creates a new "device" in
Matrix and old keys become unusable.
### 4. Verifying E2EE
1. Create an encrypted room in Element.
2. Invite your bot user.
3. Send a message — the bot should respond.
4. Check logs: `grep -i "e2ee\|crypto\|encrypt" ~/.hermes/logs/gateway.log`
## Room Configuration
### Inviting the Bot
1. Create a room in Element or any Matrix client.
2. Invite the bot: `/invite @hermes-bot:your.domain.com`
3. The bot auto-accepts invites (controlled by `MATRIX_ALLOWED_USERS`).
### Home Room
Set `MATRIX_HOME_ROOM` to a room ID for cron jobs and notifications:
```bash
MATRIX_HOME_ROOM=!abcde12345:matrix.org
```
### Free-Response Rooms
Rooms where the bot responds to all messages without @mention:
```bash
MATRIX_FREE_RESPONSE_ROOMS=!room1:matrix.org,!room2:matrix.org
```
Or in config.yaml:
```yaml
matrix:
free_response_rooms:
- "!room1:matrix.org"
```
## Troubleshooting
### "Matrix: need MATRIX_ACCESS_TOKEN or MATRIX_USER_ID + MATRIX_PASSWORD"
Neither auth method is configured. Set `MATRIX_ACCESS_TOKEN` in `~/.hermes/.env`
or provide `MATRIX_USER_ID` + `MATRIX_PASSWORD`.
### "Matrix: whoami failed"
The access token is invalid or expired. Generate a new one via the login API.
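To check whether the current token is still valid before regenerating it, hit the `whoami` endpoint (the same check the setup wizard performs). A quick sketch:
```python
import json
import os
import urllib.error
import urllib.request

homeserver = "https://matrix.your.domain.com"   # placeholder
token = os.environ["MATRIX_ACCESS_TOKEN"]

req = urllib.request.Request(f"{homeserver}/_matrix/client/v3/account/whoami")
req.add_header("Authorization", f"Bearer {token}")
try:
    with urllib.request.urlopen(req, timeout=15) as resp:
        print("Token valid for:", json.loads(resp.read())["user_id"])
except urllib.error.HTTPError as exc:
    print("Token rejected:", exc.code)          # 401 means invalid or expired
```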
### "Matrix: E2EE dependencies are missing"
Install libolm and matrix-nio with E2EE support:
```bash
brew install libolm # macOS
pip install "matrix-nio[e2e]"
```
### "Matrix: login failed"
- Check username and password.
- Ensure the account exists on the target homeserver.
- Some homeservers require admin approval for new registrations.
### Bot Not Responding in Rooms
1. Check `MATRIX_REQUIRE_MENTION` — if `true` (default), messages must
@mention the bot.
2. Check `MATRIX_ALLOWED_USERS` — if set, only listed users can interact.
3. Check logs: `tail -f ~/.hermes/logs/gateway.log`
### E2EE Rooms Show "Unable to Decrypt"
1. Ensure `MATRIX_DEVICE_ID` is set to a stable value.
2. Check that `~/.hermes/platforms/matrix/store/` has read/write permissions.
3. Verify libolm is installed: `python -c "from nio.crypto import ENCRYPTION_ENABLED; print(ENCRYPTION_ENABLED)"`
### Slow Message Delivery
Matrix federation can add latency. For faster responses:
- Use the same homeserver for the bot and users.
- Set `MATRIX_HOME_ROOM` to a local room.
- Check network connectivity between Hermes and the homeserver.
## Quick Start (Automated)
Run the interactive setup script:
```bash
python scripts/setup_matrix.py
```
This guides you through homeserver selection, authentication, and verification.

View File

@@ -0,0 +1,335 @@
# Memory Architecture Guide
Developer-facing guide to the Hermes Agent memory system. Covers all four memory tiers, data lifecycle, security guarantees, and extension points.
## Overview
Hermes has four distinct memory systems, each serving a different purpose:
| Tier | System | Scope | Cost | Persistence |
|------|--------|-------|------|-------------|
| 1 | **Built-in Memory** (MEMORY.md / USER.md) | Current session, curated facts | ~1,300 tokens fixed per session | File-backed, cross-session |
| 2 | **Session Search** (FTS5) | All past conversations | On-demand (search + summarize) | SQLite (state.db) |
| 3 | **Skills** (procedural memory) | How to do specific tasks | Loaded on match only | File-backed (~/.hermes/skills/) |
| 4 | **External Providers** (plugins) | Deep persistent knowledge | Provider-dependent | Provider-specific |
All four tiers operate independently. Built-in memory is always active. The others are opt-in or on-demand.
## Tier 1: Built-in Memory (MEMORY.md / USER.md)
### File Layout
```
~/.hermes/memories/
├── MEMORY.md — Agent's notes (environment facts, conventions, lessons learned)
└── USER.md — User profile (preferences, communication style, identity)
```
Profile-aware: when running under a profile (`hermes -p coder`), the memories directory resolves to `~/.hermes/profiles/<name>/memories/`.
### Frozen Snapshot Pattern
This is the most important architectural decision in the memory system.
1. **Session start:** `MemoryStore.load_for_prompt()` reads both files from disk, parses entries delimited by `§` (section sign), and injects them into the system prompt as a frozen block.
2. **During session:** The `memory` tool writes to disk immediately (durable), but does **not** update the system prompt. This preserves the LLM's prefix cache for the entire session.
3. **Next session:** The snapshot refreshes from disk.
**Why frozen?** System prompt changes invalidate the KV cache on every API call. With a ~30K token system prompt, that's expensive. Freezing memory at session start means the cache stays warm for the entire conversation. The tradeoff: memory writes made mid-session don't take effect until next session. Tool responses show the live state so the agent can verify writes succeeded.
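A stripped-down sketch of the pattern (illustrative only; the real `MemoryStore` in `tools/memory_tool.py` adds file locking, char limits, and threat scanning):
```python
from pathlib import Path

DELIM = "\n§\n"
MEMORY_FILE = Path("~/.hermes/memories/MEMORY.md").expanduser()

def load_for_prompt() -> str:
    """Session start: read entries once and freeze them into the system prompt."""
    text = MEMORY_FILE.read_text(encoding="utf-8") if MEMORY_FILE.exists() else ""
    entries = [e.strip() for e in text.split(DELIM) if e.strip()]
    return "MEMORY (frozen snapshot):\n" + DELIM.join(entries)

def add_entry(content: str) -> str:
    """Mid-session write: durable on disk immediately, but the frozen prompt
    block above is NOT rebuilt (this keeps the prefix cache warm)."""
    text = MEMORY_FILE.read_text(encoding="utf-8") if MEMORY_FILE.exists() else ""
    entries = [e.strip() for e in text.split(DELIM) if e.strip()]
    if content.strip() in entries:
        return "Duplicate entry rejected."
    entries.append(content.strip())
    MEMORY_FILE.write_text(DELIM.join(entries), encoding="utf-8")
    return f"Saved. Live state now has {len(entries)} entries."
```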
### Character Limits
| Store | Default Limit | Approx Tokens | Typical Entries |
|-------|--------------|---------------|-----------------|
| MEMORY.md | 2,200 chars | ~800 | 8-15 |
| USER.md | 1,375 chars | ~500 | 5-10 |
Limits are in characters (not tokens) because character counts are model-independent. Configurable in `config.yaml`:
```yaml
memory:
memory_char_limit: 2200
user_char_limit: 1375
```
### Entry Format
Entries are separated by `\n§\n`. Each entry can be multiline. Example MEMORY.md:
```
User runs macOS 14 Sonoma, uses Homebrew, has Docker Desktop
§
Project ~/code/api uses Go 1.22, chi router, sqlc. Tests: 'make test'
§
Staging server 10.0.1.50 uses SSH port 2222, key at ~/.ssh/staging_ed25519
```
### Tool Interface
The `memory` tool (defined in `tools/memory_tool.py`) supports:
- **`add`** — Append new entry. Rejects exact duplicates.
- **`replace`** — Find entry by unique substring (`old_text`), replace with `content`.
- **`remove`** — Find entry by unique substring, delete it.
- **`read`** — Return current entries from disk (live state, not frozen snapshot).
Substring matching: `old_text` must match exactly one entry. If it matches multiple, the tool returns an error asking for more specificity.
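The uniqueness rule can be sketched like this (illustrative, not the actual dispatcher):
```python
def find_entry(entries: list[str], old_text: str) -> int:
    """Return the index of the single entry containing old_text, or raise."""
    hits = [i for i, entry in enumerate(entries) if old_text in entry]
    if not hits:
        raise ValueError(f"No entry contains {old_text!r}.")
    if len(hits) > 1:
        raise ValueError(f"{old_text!r} matches {len(hits)} entries; use a more specific fragment.")
    return hits[0]

# replace: entries[find_entry(entries, old_text)] = new_content
# remove:  del entries[find_entry(entries, old_text)]
```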
### Security Scanning
Every memory entry is scanned against `_MEMORY_THREAT_PATTERNS` before acceptance:
- Prompt injection patterns (`ignore previous instructions`, `you are now...`)
- Credential exfiltration (`curl`/`wget` with env vars, `.env` file reads)
- SSH backdoor attempts (`authorized_keys`, `.ssh` writes)
- Invisible Unicode characters (zero-width spaces, BOM)
Matches are rejected with an error message. Source: `_scan_memory_content()` in `tools/memory_tool.py`.
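A simplified sketch of that gate; the patterns below are illustrative stand-ins, not the actual `_MEMORY_THREAT_PATTERNS` list:
```python
import re

# Illustrative stand-ins for the real pattern list in tools/memory_tool.py
THREAT_PATTERNS = [
    (re.compile(r"ignore (all )?previous instructions", re.I), "prompt injection"),
    (re.compile(r"\byou are now\b", re.I), "role hijacking"),
    (re.compile(r"(curl|wget)\s+\S*.*(API_KEY|TOKEN|SECRET|\.env)", re.I), "credential exfiltration"),
    (re.compile(r"\.ssh/authorized_keys"), "SSH backdoor"),
    (re.compile(r"[\u200b\u200c\u200d\ufeff]"), "invisible unicode"),
]

def scan_memory_content(content: str) -> str | None:
    """Return a rejection reason, or None if the entry looks safe."""
    for pattern, reason in THREAT_PATTERNS:
        if pattern.search(content):
            return f"Rejected: content matches a blocked pattern ({reason})."
    return None
```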
### Code Path
```
agent/prompt_builder.py
└── assembles system prompt pieces
└── MemoryStore.load_for_prompt() → frozen snapshot injection
tools/memory_tool.py
├── MemoryStore class (file I/O, locking, parsing)
├── memory_tool() function (add/replace/remove/read dispatch)
└── _scan_memory_content() (threat scanning)
hermes_cli/memory_setup.py
└── Interactive first-run memory setup
```
## Tier 2: Session Search (FTS5)
### How It Works
1. Every CLI and gateway session stores full message history in SQLite (`~/.hermes/state.db`)
2. The `messages_fts` FTS5 virtual table enables fast full-text search
3. The `session_search` tool finds relevant messages, groups by session, loads top N
4. Each matching session is summarized by Gemini Flash (auxiliary LLM, not main model)
5. Summaries are returned to the main agent as context
### Why Gemini Flash for Summarization
Raw session transcripts can be 50K+ chars. Feeding them to the main model wastes context window and tokens. Gemini Flash is fast, cheap, and good enough for "extract the relevant bits" summarization. Same pattern used by `web_extract`.
### Schema
```sql
-- Core tables
sessions (id, source, user_id, model, system_prompt, parent_session_id, ...)
messages (id, session_id, role, content, tool_name, timestamp, ...)
-- Full-text search
messages_fts -- FTS5 virtual table on messages.content
-- Schema tracking
schema_version
```
The database runs in WAL mode, allowing concurrent readers alongside a single writer (needed for the gateway's multi-platform support).
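A rough sketch of the search half of that flow (the summarization call is omitted; the join on `rowid` is an assumption about how the FTS table maps back to `messages`):
```python
import sqlite3
from pathlib import Path

DB = Path("~/.hermes/state.db").expanduser()

def search_sessions(query: str, top_n: int = 3) -> dict[str, list[str]]:
    """Group FTS5 hits by session, best-matching rows first."""
    con = sqlite3.connect(DB)
    rows = con.execute(
        """
        SELECT m.session_id, m.content
        FROM messages_fts
        JOIN messages m ON m.id = messages_fts.rowid
        WHERE messages_fts MATCH ?
        ORDER BY bm25(messages_fts)
        """,
        (query,),
    ).fetchall()
    con.close()

    by_session: dict[str, list[str]] = {}
    for session_id, content in rows:
        by_session.setdefault(session_id, []).append(content)
    # Each of the top N sessions would then be summarized by the auxiliary LLM
    return dict(list(by_session.items())[:top_n])
```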
### Session Lineage
When context compression triggers a session split, `parent_session_id` chains the old and new sessions. This lets session search follow the thread across compression boundaries.
### Code Path
```
tools/session_search_tool.py
├── FTS5 query against messages_fts
├── Groups results by session_id
├── Loads top N sessions (MAX_SESSION_CHARS = 100K per session)
├── Sends to Gemini Flash via auxiliary_client.async_call_llm()
└── Returns per-session summaries
hermes_state.py (SessionDB class)
├── SQLite WAL mode database
├── FTS5 triggers for message insert/update/delete
└── Session CRUD operations
```
### Memory vs Session Search
| | Memory | Session Search |
|---|--------|---------------|
| **Capacity** | ~1,300 tokens total | Unlimited (all stored sessions) |
| **Latency** | Instant (in system prompt) | Requires FTS query + LLM call |
| **When to use** | Critical facts always in context | "What did we discuss about X?" |
| **Management** | Agent-curated | Automatic |
| **Token cost** | Fixed per session | On-demand per search |
## Tier 3: Skills (Procedural Memory)
### What Skills Are
Skills capture **how to do a specific type of task** based on proven experience. Where memory is broad and declarative, skills are narrow and actionable.
A skill is a directory with a `SKILL.md` (markdown instructions) and optional supporting files:
```
~/.hermes/skills/
├── my-skill/
│ ├── SKILL.md — Instructions, steps, pitfalls
│ ├── references/ — API docs, specs
│ ├── templates/ — Code templates, config files
│ ├── scripts/ — Helper scripts
│ └── assets/ — Images, data files
```
### How Skills Load
At the start of each turn, the agent's system prompt includes available skills. When a skill matches the current task, the agent loads it with `skill_view(name)` and follows its instructions. Skills are **not** injected wholesale — they're loaded on demand to preserve context window.
### Skill Lifecycle
1. **Creation:** After a complex task (5+ tool calls), the agent offers to save the approach as a skill using `skill_manage(action='create')`.
2. **Usage:** On future matching tasks, the agent loads the skill with `skill_view(name)`.
3. **Maintenance:** If a skill is outdated or incomplete when used, the agent patches it immediately with `skill_manage(action='patch')`.
4. **Deletion:** Obsolete skills are removed with `skill_manage(action='delete')`.
### Skills vs Memory
| | Memory | Skills |
|---|--------|--------|
| **Format** | Free-text entries | Structured markdown (steps, pitfalls, examples) |
| **Scope** | Facts and preferences | Procedures and workflows |
| **Loading** | Always in system prompt | On-demand when matched |
| **Size** | ~1,300 tokens total | Variable (loaded individually) |
### Code Path
```
tools/skill_manager_tool.py — Create, edit, patch, delete skills
agent/skill_commands.py — Slash commands for skill management
skills_hub.py — Browse, search, install skills from hub
```
## Tier 4: External Memory Providers
### Plugin Architecture
```
plugins/memory/
├── __init__.py — Provider registry and base interface
├── honcho/ — Dialectic Q&A, cross-session user modeling
├── openviking/ — Knowledge graph memory
├── mem0/ — Semantic memory with auto-extraction
├── hindsight/ — Retrospective memory analysis
├── holographic/ — Distributed holographic memory
├── retaindb/ — Vector-based retention
├── byterover/ — Hierarchical knowledge tree memory
└── supermemory/ — Cloud-hosted semantic memory
```
Only one external provider can be active at a time. Built-in memory (Tier 1) always runs alongside it.
### Integration Points
When a provider is active, Hermes:
1. Injects provider context into the system prompt
2. Prefetches relevant memories before each turn (background, non-blocking)
3. Syncs conversation turns to the provider after each response
4. Extracts memories on session end (for providers that support it)
5. Mirrors built-in memory writes to the provider
6. Adds provider-specific tools for search and management
### Configuration
```yaml
memory:
provider: openviking # or honcho, mem0, hindsight, etc.
```
Setup: `hermes memory setup` (interactive picker).
## Data Lifecycle
```
Session Start
├── Load MEMORY.md + USER.md from disk → frozen snapshot in system prompt
├── Load skills catalog (names + descriptions)
├── Initialize session search (SQLite connection)
└── Initialize external provider (if configured)
Each Turn
├── Agent sees frozen memory in system prompt
├── Agent can call memory tool → writes to disk, returns live state
├── Agent can call session_search → FTS5 + Gemini Flash summarization
├── Agent can load skills → reads SKILL.md from disk
└── External provider prefetches context (if active)
Session End
├── All memory writes already on disk (immediate persistence)
├── Session transcript saved to SQLite (messages + FTS5 index)
├── External provider extracts final memories (if supported)
└── Skill updates persisted (if any were patched)
```
## Privacy and Data Locality
| Component | Location | Network |
|-----------|----------|---------|
| MEMORY.md / USER.md | `~/.hermes/memories/` | Local only |
| Session DB | `~/.hermes/state.db` | Local only |
| Skills | `~/.hermes/skills/` | Local only |
| External provider | Provider-dependent | Provider API calls |
Built-in memory data (Tiers 1-3) stays on the machine; the one exception in that path is `session_search`, whose summarization step sends matched snippets to the auxiliary LLM. External providers (Tier 4) send data to the configured provider by design. The agent logs all provider API calls in the session transcript for auditability.
## Configuration Reference
```yaml
# ~/.hermes/config.yaml
memory:
memory_enabled: true # Enable MEMORY.md
user_profile_enabled: true # Enable USER.md
memory_char_limit: 2200 # MEMORY.md char limit (~800 tokens)
user_char_limit: 1375 # USER.md char limit (~500 tokens)
nudge_interval: 10 # Turns between memory nudge reminders
provider: null # External provider name (null = disabled)
```
Environment variables (in `~/.hermes/.env`):
- Provider-specific API keys (e.g., `HONCHO_API_KEY`, `MEM0_API_KEY`)
## Troubleshooting
### Memory not appearing in system prompt
- Check `~/.hermes/memories/MEMORY.md` exists and has content
- Verify `memory.memory_enabled: true` in config
- Check for file lock issues (WAL mode, concurrent access)
### Memory writes not taking effect
- Writes are durable to disk immediately but frozen in system prompt until next session
- Tool response shows live state — verify the write succeeded there
- Start a new session to see the updated snapshot
### Session search returns nothing
- Verify `state.db` has sessions: `sqlite3 ~/.hermes/state.db "SELECT count(*) FROM sessions"`
- Check FTS5 index: `sqlite3 ~/.hermes/state.db "SELECT count(*) FROM messages_fts"`
- Ensure auxiliary LLM (Gemini Flash) is configured and reachable
### Skills not loading
- Check `~/.hermes/skills/` directory exists
- Verify SKILL.md has valid frontmatter (name, description)
- Skills load by name match — check the skill name matches what the agent expects
### External provider errors
- Check API key in `~/.hermes/.env`
- Verify provider is installed: `pip install <provider-package>`
- Run `hermes memory status` for diagnostic info

335
docs/memory-architecture.md Normal file
View File

@@ -0,0 +1,335 @@
# Memory Architecture Guide
How Hermes Agent remembers things across sessions — the stores, the tools, the data flow, and how to configure it all.
## Overview
Hermes has a multi-layered memory system. It is not one thing — it is several independent systems that complement each other:
1. **Persistent Memory** (MEMORY.md / USER.md) — bounded, curated notes injected into every system prompt
2. **Session Search** — full-text search across all past conversation transcripts
3. **Skills** — procedural memory: reusable workflows stored as SKILL.md files
4. **External Memory Providers** — optional plugins (Honcho, Holographic, Mem0, etc.) for deeper recall
All built-in memory lives on disk under `~/.hermes/` (or `$HERMES_HOME`). No memory data leaves the machine unless you explicitly configure an external cloud provider.
## Memory Types in Detail
### 1. Persistent Memory (MEMORY.md and USER.md)
The core memory system. Two files in `~/.hermes/memories/`:
| File | Purpose | Default Char Limit |
|------|---------|--------------------|
| `MEMORY.md` | Agent's personal notes — environment facts, project conventions, tool quirks, lessons learned | 2,200 chars (~800 tokens) |
| `USER.md` | User profile — name, preferences, communication style, pet peeves | 1,375 chars (~500 tokens) |
**How it works:**
- Loaded from disk at session start and injected into the system prompt as a frozen snapshot
- The agent uses the `memory` tool to add, replace, or remove entries during a session
- Mid-session writes go to disk immediately (durable) but do NOT update the system prompt — this preserves the LLM's prefix cache for performance
- The snapshot refreshes on the next session start
- Entries are delimited by `§` (section sign) and can be multiline
**System prompt appearance:**
```
══════════════════════════════════════════════
MEMORY (your personal notes) [67% — 1,474/2,200 chars]
══════════════════════════════════════════════
User's project is a Rust web service at ~/code/myapi using Axum + SQLx
§
This machine runs Ubuntu 22.04, has Docker and Podman installed
§
User prefers concise responses, dislikes verbose explanations
```
**Memory tool actions:**
- `add` — append a new entry (rejected if it would exceed the char limit)
- `replace` — find an entry by substring match and replace it
- `remove` — find an entry by substring match and delete it
Substring matching means you only need a unique fragment of the entry, not the full text. If the fragment matches multiple entries, the tool returns an error asking for a more specific match.
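For example, the arguments the agent might pass for a `replace` and a `remove` (argument names are inferred from the actions and targets described in this guide, so treat the exact shape as illustrative):
```python
# Illustrative tool-call arguments
replace_call = {
    "action": "replace",
    "target": "memory",                              # or "user" for USER.md
    "old_text": "Rust web service at ~/code/myapi",  # unique fragment is enough
    "content": "User's project is a Rust web service at ~/code/myapi using Axum + SQLx (tests: cargo test)",
}

remove_call = {
    "action": "remove",
    "target": "user",
    "old_text": "dislikes verbose explanations",
}
```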
### 2. Session Search
Cross-session conversation recall via SQLite FTS5 full-text search.
- All CLI and messaging sessions are stored in `~/.hermes/state.db`
- The `session_search` tool finds relevant past conversations by keyword
- Top matching sessions are summarized by Gemini Flash (cheap, fast) before being returned to the main model
- Returns focused summaries, not raw transcripts
**When to use session_search vs. memory:**
| Feature | Persistent Memory | Session Search |
|---------|------------------|----------------|
| Capacity | ~3,575 chars total | Unlimited (all sessions) |
| Speed | Instant (in system prompt) | Requires search + LLM summarization |
| Use case | Key facts always in context | "What did we discuss about X last week?" |
| Management | Manually curated by the agent | Automatic — all sessions stored |
| Token cost | Fixed per session (~1,300 tokens) | On-demand (searched when needed) |
**Rule of thumb:** Memory is for facts that should *always* be available. Session search is for recalling specific past conversations on demand. Don't save task progress or session outcomes to memory — use session_search to find those.
### 3. Skills (Procedural Memory)
Skills are reusable workflows stored as `SKILL.md` files in `~/.hermes/skills/` (and optionally external skill directories).
- Organized by category: `skills/github/github-pr-workflow/SKILL.md`
- YAML frontmatter with name, description, version, platform restrictions
- Progressive disclosure: metadata shown in skill list, full content loaded on demand via `skill_view`
- The agent creates skills proactively after complex tasks (5+ tool calls) using the `skill_manage` tool
- Skills can be patched when found outdated — stale skills are a liability
Skills are *not* injected into the system prompt by default. The agent sees a compact index of available skills and loads them on demand. This keeps the prompt lean while giving access to deep procedural knowledge.
**Skills vs. Memory:**
- **Memory:** compact facts ("User's project uses Go 1.22 with chi router")
- **Skills:** detailed procedures ("How to deploy the staging server: step 1, step 2, ...")
### 4. External Memory Providers
Optional plugins that add deeper, structured memory alongside the built-in system. Only one external provider can be active at a time.
| Provider | Storage | Key Feature |
|----------|---------|-------------|
| Honcho | Cloud | Dialectic user modeling with semantic search |
| OpenViking | Self-hosted | Filesystem-style knowledge hierarchy |
| Mem0 | Cloud | Server-side LLM fact extraction |
| Hindsight | Cloud/Local | Knowledge graph with entity resolution |
| Holographic | Local SQLite | HRR algebraic reasoning + trust scoring |
| RetainDB | Cloud | Hybrid search with delta compression |
| ByteRover | Local/Cloud | Hierarchical knowledge tree with CLI |
| Supermemory | Cloud | Context fencing + session graph ingest |
External providers run **alongside** built-in memory (never replacing it). They receive hooks for:
- System prompt injection (provider context)
- Pre-turn memory prefetch
- Post-turn conversation sync
- Session-end extraction
- Built-in memory write mirroring
Setup: `hermes memory setup` or set `memory.provider` in `~/.hermes/config.yaml`.
See `website/docs/user-guide/features/memory-providers.md` for full provider details.
## How the Systems Interact
```
Session Start
|
+--> Load MEMORY.md + USER.md from disk --> frozen snapshot into system prompt
+--> Provider: system_prompt_block() --> injected into system prompt
+--> Skills index --> injected into system prompt (compact metadata only)
|
v
Each Turn
|
+--> Provider: prefetch(query) --> relevant recalled context
+--> Agent sees: system prompt (memory + provider context + skills index)
+--> Agent can call: memory tool, session_search tool, skill tools, provider tools
|
v
After Each Response
|
+--> Provider: sync_turn(user, assistant) --> persist conversation
|
v
Periodic (every N turns, default 10)
|
+--> Memory nudge: agent prompted to review and update memory
|
v
Session End / Compression
|
+--> Memory flush: agent saves important facts before context is discarded
+--> Provider: on_session_end(messages) --> final extraction
+--> Provider: on_pre_compress(messages) --> save insights before compression
```
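Putting those hook names together, an external provider is roughly a class of the following shape (a sketch only: method names are taken from the diagram and the Holographic provider, while the base class and exact signatures are assumptions):
```python
class ExampleMemoryProvider:  # real providers subclass the plugin base in plugins/memory/
    """Skeleton of the hooks a provider can implement."""

    def system_prompt_block(self) -> str:
        """Provider context injected into the system prompt at session start."""
        return "Relevant long-term context: ..."

    def prefetch(self, query: str) -> str:
        """Pre-turn recall: return context relevant to the upcoming turn."""
        return ""

    def sync_turn(self, user: str, assistant: str) -> None:
        """Post-turn: persist the exchange to the provider's store."""

    def on_memory_write(self, action: str, target: str, content: str) -> None:
        """Mirror built-in memory add/replace/remove into the provider."""

    def on_session_end(self, messages: list) -> None:
        """Final extraction when the session closes."""

    def on_pre_compress(self, messages: list) -> None:
        """Save insights before context compression discards messages."""

    def shutdown(self) -> None:
        """Release resources."""
```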
## Best Practices
### What to Save
Save proactively — don't wait for the user to ask:
- **User preferences:** "I prefer TypeScript over JavaScript" → `user` target
- **Corrections:** "Don't use sudo for Docker, I'm in the docker group" → `memory` target
- **Environment facts:** "This server runs Debian 12 with PostgreSQL 16" → `memory` target
- **Conventions:** "Project uses tabs, 120-char lines, Google docstrings" → `memory` target
- **Explicit requests:** "Remember that my API key rotation is monthly" → `memory` target
### What NOT to Save
- **Task progress or session outcomes** — use session_search to recall these
- **Trivially re-discoverable facts** — "Python 3.12 supports f-strings" (web search this)
- **Raw data dumps** — large code blocks, log files, data tables
- **Session-specific ephemera** — temporary file paths, one-off debugging context
- **Content already in SOUL.md or AGENTS.md** — those are already in context
### Writing Good Entries
Compact, information-dense entries work best:
```
# Good — packs multiple related facts
User runs macOS 14 Sonoma, uses Homebrew, has Docker Desktop and Podman. Shell: zsh. Editor: VS Code with Vim bindings.
# Good — specific, actionable convention
Project ~/code/api uses Go 1.22, sqlc for DB, chi router. Tests: make test. CI: GitHub Actions.
# Bad — too vague
User has a project.
# Bad — too verbose
On January 5th, 2026, the user asked me to look at their project which is
located at ~/code/api. I discovered it uses Go version 1.22 and...
```
### Capacity Management
When memory is above 80% capacity (visible in the system prompt header), consolidate before adding. Merge related entries into shorter, denser versions. The tool will reject additions that would exceed the limit — use `replace` to consolidate first.
Priority order for what stays in memory:
1. User preferences and corrections (highest — prevents repeated steering)
2. Environment facts and project conventions
3. Tool quirks and workarounds
4. Lessons learned (lowest — can often be rediscovered)
### Memory Nudge
Every N turns (default: 10), the agent receives a nudge prompting it to review and update its memory. This is a lightweight prompt injected into the conversation — not a separate API call. The agent can choose to update memory or skip if nothing has changed.
## Privacy and Data Locality
**Built-in memory is fully local.** MEMORY.md and USER.md are plain text files in `~/.hermes/memories/`. No network calls are made in the memory read/write path. The memory tool scans entries for prompt injection and exfiltration patterns before accepting them.
**Session search is local.** The SQLite database (`~/.hermes/state.db`) stays on disk. FTS5 search is a local operation. However, the summarization step uses Gemini Flash (via the auxiliary LLM client) — conversation snippets are sent to Google's API for summarization. If this is a concern, session_search can be disabled.
**External providers may send data off-machine.** Cloud providers (Honcho, Mem0, RetainDB, Supermemory) send data to their respective APIs. Self-hosted providers (OpenViking, Hindsight local mode, Holographic, ByteRover local mode) keep everything on your machine. Check the provider's documentation for specifics.
**Security scanning.** All content written to memory (via the `memory` tool) is scanned for:
- Prompt injection patterns ("ignore previous instructions", role hijacking, etc.)
- Credential exfiltration attempts (curl/wget with secrets, reading .env files)
- SSH backdoor patterns
- Invisible unicode characters (used for steganographic injection)
Blocked content is rejected with a descriptive error message.
## Configuration
In `~/.hermes/config.yaml`:
```yaml
memory:
# Enable/disable the two built-in memory stores
memory_enabled: true # MEMORY.md
user_profile_enabled: true # USER.md
# Character limits (not tokens — model-independent)
memory_char_limit: 2200 # ~800 tokens at 2.75 chars/token
user_char_limit: 1375 # ~500 tokens at 2.75 chars/token
# External memory provider (empty string = built-in only)
# Options: "honcho", "openviking", "mem0", "hindsight",
# "holographic", "retaindb", "byterover", "supermemory"
provider: ""
```
Additional settings are read from `run_agent.py` defaults:
| Setting | Default | Description |
|---------|---------|-------------|
| `nudge_interval` | 10 | Turns between memory review nudges (0 = disabled) |
| `flush_min_turns` | 6 | Minimum user turns before memory flush on session end/compression (0 = never flush) |
These are set under the `memory` key in config.yaml:
```yaml
memory:
nudge_interval: 10
flush_min_turns: 6
```
### Disabling Memory
To disable memory entirely, set both to false:
```yaml
memory:
memory_enabled: false
user_profile_enabled: false
```
The `memory` tool will not appear in the tool list, and no memory blocks are injected into the system prompt.
You can also disable memory per-invocation with `skip_memory=True` in the AIAgent constructor (used by cron jobs and flush agents).
## File Locations
```
~/.hermes/
├── memories/
│ ├── MEMORY.md # Agent's persistent notes
│ ├── USER.md # User profile
│ ├── MEMORY.md.lock # File lock (auto-created)
│ └── USER.md.lock # File lock (auto-created)
├── state.db # SQLite session store (FTS5)
├── config.yaml # Memory config + provider selection
└── .env # API keys for external providers
```
All paths respect `$HERMES_HOME` — if you use Hermes profiles, each profile has its own isolated memory directory.
## Troubleshooting
### "Memory full" errors
The tool returns an error when adding would exceed the character limit. The response includes current entries so the agent can consolidate. Fix by:
1. Replacing multiple related entries with one denser entry
2. Removing entries that are no longer relevant
3. Increasing `memory_char_limit` in config (at the cost of larger system prompts)
### Stale memory entries
If the agent seems to have outdated information:
- Check `~/.hermes/memories/MEMORY.md` directly — you can edit it by hand
- The frozen snapshot pattern means changes only take effect on the next session start
- If the agent wrote something wrong mid-session, it persists on disk but won't affect the current session's system prompt
### Memory not appearing in system prompt
- Verify `memory_enabled: true` in config.yaml
- Check that `~/.hermes/memories/MEMORY.md` exists and has content
- The file might be empty if all entries were removed — add entries with the `memory` tool
### Session search returns no results
- Session search requires sessions to be stored in `state.db` — new installations have no history
- FTS5 indexes are built automatically but may lag behind on very large databases
- The summarization step requires the auxiliary LLM client to be configured (API key for Gemini Flash)
### Skill drift
Skills that haven't been updated can become wrong or incomplete. The agent is prompted to patch skills when it finds them outdated during use (`skill_manage(action='patch')`). If you notice stale skills:
- Use `/skills` to browse and review installed skills
- Delete or update skills in `~/.hermes/skills/` directly
- The agent creates skills after complex tasks — review and prune periodically
### Provider not activating
- Run `hermes memory status` to check provider state
- Verify the provider plugin is installed in `~/.hermes/plugins/memory/`
- Check that required API keys are set in `~/.hermes/.env`
- Start a new session after changing provider config — existing sessions use the old provider
### Concurrent write conflicts
The memory tool uses file locking (`fcntl.flock`) and atomic file replacement (`os.replace`) to handle concurrent writes from multiple sessions. If you see corrupted memory files:
- Check for stale `.lock` files in `~/.hermes/memories/`
- Restart any hung Hermes processes
- The atomic write pattern means readers always see either the old or new file — never a partial write (a sketch of the pattern follows below)
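A minimal sketch of that write path (illustrative; the real store also enforces char limits and threat scanning, and `fcntl` is POSIX-only):
```python
import fcntl
import os
import tempfile
from pathlib import Path

def atomic_locked_write(path: Path, content: str) -> None:
    """Serialize writers via flock on a sidecar .lock file, then swap the
    file in atomically so readers never observe a partial write."""
    lock_path = path.with_suffix(path.suffix + ".lock")    # e.g. MEMORY.md.lock
    with open(lock_path, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)                   # block until we own the lock
        try:
            fd, tmp = tempfile.mkstemp(dir=path.parent, prefix=path.name)
            with os.fdopen(fd, "w", encoding="utf-8") as f:
                f.write(content)
            os.replace(tmp, path)                          # atomic rename on POSIX
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)
```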

View File

@@ -241,13 +241,29 @@ class HolographicMemoryProvider(MemoryProvider):
self._auto_extract_facts(messages)
def on_memory_write(self, action: str, target: str, content: str) -> None:
"""Mirror built-in memory writes as facts."""
if action == "add" and self._store and content:
try:
"""Mirror built-in memory writes as facts.
- add: mirror new fact to holographic store
- replace: search for old content, update or re-add
- remove: lower trust on matching facts so they fade naturally
"""
if not self._store:
return
try:
if action == "add" and content:
category = "user_pref" if target == "user" else "general"
self._store.add_fact(content, category=category)
except Exception as e:
logger.debug("Holographic memory_write mirror failed: %s", e)
elif action == "replace" and content:
category = "user_pref" if target == "user" else "general"
self._store.add_fact(content, category=category)
elif action == "remove" and content:
# Lower trust on matching facts so they decay naturally
results = self._store.search_facts(content, limit=5)
for fact in results:
if content.strip().lower() in fact.get("content", "").lower():
self._store.update_fact(fact["fact_id"], trust=max(0.0, fact.get("trust", 0.5) - 0.4))
except Exception as e:
logger.debug("Holographic memory_write mirror failed: %s", e)
def shutdown(self) -> None:
self._store = None

View File

@@ -104,6 +104,7 @@ from agent.trajectory import (
save_trajectory as _save_trajectory_to_file,
)
from utils import atomic_json_write, env_var_enabled
from json_repair import repair_json
@@ -274,7 +275,7 @@ def _should_parallelize_tool_batch(tool_calls) -> bool:
for tool_call in tool_calls:
tool_name = tool_call.function.name
try:
function_args = json.loads(tool_call.function.arguments)
function_args = json.loads(repair_json(tool_call.function.arguments))
except Exception:
logging.debug(
"Could not parse args for %s — defaulting to sequential; raw=%s",
@@ -2121,7 +2122,7 @@ class AIAgent:
# Parse arguments - should always succeed since we validate during conversation
# but keep try-except as safety net
try:
arguments = json.loads(tool_call["function"]["arguments"]) if isinstance(tool_call["function"]["arguments"], str) else tool_call["function"]["arguments"]
arguments = json.loads(repair_json(tool_call["function"]["arguments"])) if isinstance(tool_call["function"]["arguments"], str) else tool_call["function"]["arguments"]
except json.JSONDecodeError:
# This shouldn't happen since we validate and retry during conversation,
# but if it does, log warning and use empty dict
@@ -6086,7 +6087,7 @@ class AIAgent:
store=self._memory_store,
)
# Bridge: notify external memory provider of built-in memory writes
if self._memory_manager and function_args.get("action") in ("add", "replace"):
if self._memory_manager and function_args.get("action") in ("add", "replace", "remove"):
try:
self._memory_manager.on_memory_write(
function_args.get("action", ""),
@@ -6148,6 +6149,17 @@ class AIAgent:
for tool_call in tool_calls:
function_name = tool_call.function.name
# Poka-yoke #294: Block tool hallucination before execution
if hasattr(self, 'valid_tool_names') and self.valid_tool_names and function_name not in self.valid_tool_names:
logging.warning(f"Tool hallucination blocked: '{function_name}' not in valid toolset")
available = ', '.join(sorted(self.valid_tool_names)[:20])
messages.append({
"role": "tool",
"tool_call_id": tool_call.id,
"content": f"Error: Tool '{function_name}' does not exist. Available: {available}"
})
continue
# Reset nudge counters
if function_name == "memory":
self._turns_since_memory = 0
@@ -6155,7 +6167,7 @@ class AIAgent:
self._iters_since_skill = 0
try:
function_args = json.loads(tool_call.function.arguments)
function_args = json.loads(repair_json(tool_call.function.arguments))
except json.JSONDecodeError:
function_args = {}
if not isinstance(function_args, dict):
@@ -6361,6 +6373,17 @@ class AIAgent:
function_name = tool_call.function.name
# Poka-yoke #294: Block tool hallucination before execution
if hasattr(self, 'valid_tool_names') and self.valid_tool_names and function_name not in self.valid_tool_names:
logging.warning(f"Tool hallucination blocked: '{function_name}' not in valid toolset")
available = ', '.join(sorted(self.valid_tool_names)[:20])
messages.append({
"role": "tool",
"tool_call_id": tool_call.id,
"content": f"Error: Tool '{function_name}' does not exist. Available: {available}"
})
continue
# Reset nudge counters when the relevant tool is actually used
if function_name == "memory":
self._turns_since_memory = 0
@@ -6368,7 +6391,7 @@ class AIAgent:
self._iters_since_skill = 0
try:
function_args = json.loads(tool_call.function.arguments)
function_args = json.loads(repair_json(tool_call.function.arguments))
except json.JSONDecodeError as e:
logging.warning(f"Unexpected JSON error after validation: {e}")
function_args = {}

374
scripts/memory_budget.py Normal file
View File

@@ -0,0 +1,374 @@
#!/usr/bin/env python3
"""Memory Budget Enforcement Tool for hermes-agent.
Checks and enforces character/token budgets on MEMORY.md and USER.md files.
Designed for CI integration, pre-commit hooks, and manual health checks.
Usage:
python scripts/memory_budget.py # Check budget (exit 0/1)
python scripts/memory_budget.py --report # Detailed breakdown
python scripts/memory_budget.py --enforce # Trim entries to fit budget
python scripts/memory_budget.py --hermes-home ~/.hermes # Custom HERMES_HOME
Exit codes:
0 Within budget
1 Over budget (no trimming performed)
2 Entries were trimmed (--enforce was used)
"""
from __future__ import annotations
import argparse
import sys
from dataclasses import dataclass
from pathlib import Path
from typing import List
# ---------------------------------------------------------------------------
# Constants (must stay in sync with tools/memory_tool.py)
# ---------------------------------------------------------------------------
ENTRY_DELIMITER = "\n§\n"
DEFAULT_MEMORY_CHAR_LIMIT = 2200
DEFAULT_USER_CHAR_LIMIT = 1375
WARN_THRESHOLD = 0.80 # alert when >80% of budget used
CHARS_PER_TOKEN = 4 # rough estimate matching agent/model_metadata.py
# ---------------------------------------------------------------------------
# Data structures
# ---------------------------------------------------------------------------
@dataclass
class FileReport:
"""Budget analysis for a single memory file."""
label: str # "MEMORY.md" or "USER.md"
path: Path
exists: bool
char_limit: int
raw_chars: int # raw file size in chars
entry_chars: int # chars after splitting/rejoining entries
entry_count: int
entries: List[str] # individual entry texts
@property
def usage_pct(self) -> float:
if self.char_limit <= 0:
return 0.0
return min(100.0, (self.entry_chars / self.char_limit) * 100)
@property
def estimated_tokens(self) -> int:
return self.entry_chars // CHARS_PER_TOKEN
@property
def over_budget(self) -> bool:
return self.entry_chars > self.char_limit
@property
def warning(self) -> bool:
return self.usage_pct >= (WARN_THRESHOLD * 100)
@property
def remaining_chars(self) -> int:
return max(0, self.char_limit - self.entry_chars)
def _read_entries(path: Path) -> List[str]:
"""Read a memory file and split into entries (matching MemoryStore logic)."""
if not path.exists():
return []
try:
raw = path.read_text(encoding="utf-8")
except (OSError, IOError):
return []
if not raw.strip():
return []
entries = [e.strip() for e in raw.split(ENTRY_DELIMITER)]
return [e for e in entries if e]
def _write_entries(path: Path, entries: List[str]) -> None:
"""Write entries back to a memory file."""
content = ENTRY_DELIMITER.join(entries) if entries else ""
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(content, encoding="utf-8")
def analyze_file(path: Path, label: str, char_limit: int) -> FileReport:
"""Analyze a single memory file against its budget."""
exists = path.exists()
entries = _read_entries(path) if exists else []
raw_chars = path.stat().st_size if exists else 0
joined = ENTRY_DELIMITER.join(entries)
return FileReport(
label=label,
path=path,
exists=exists,
char_limit=char_limit,
raw_chars=raw_chars,
entry_chars=len(joined),
entry_count=len(entries),
entries=entries,
)
def trim_entries(report: FileReport) -> List[str]:
"""Trim oldest entries until the file fits within its budget.
Entries are removed from the front (oldest first) because memory files
append new entries at the end.
"""
entries = list(report.entries)
joined = ENTRY_DELIMITER.join(entries)
while len(joined) > report.char_limit and entries:
entries.pop(0)
joined = ENTRY_DELIMITER.join(entries)
return entries
# ---------------------------------------------------------------------------
# Reporting
# ---------------------------------------------------------------------------
def _bar(pct: float, width: int = 30) -> str:
"""Render a text progress bar."""
filled = int(pct / 100 * width)
bar = "#" * filled + "-" * (width - filled)
return f"[{bar}]"
def print_report(memory: FileReport, user: FileReport, *, verbose: bool = False) -> None:
"""Print a human-readable budget report."""
total_chars = memory.entry_chars + user.entry_chars
total_limit = memory.char_limit + user.char_limit
total_tokens = total_chars // CHARS_PER_TOKEN
total_pct = (total_chars / total_limit * 100) if total_limit > 0 else 0
print("=" * 60)
print(" MEMORY BUDGET REPORT")
print("=" * 60)
print()
for rpt in (memory, user):
status = "OVER " if rpt.over_budget else ("WARN" if rpt.warning else " OK ")
print(f" {rpt.label:12s} {status} {_bar(rpt.usage_pct)} {rpt.usage_pct:5.1f}%")
print(f" {'':12s} {rpt.entry_chars:,}/{rpt.char_limit:,} chars "
f"| {rpt.entry_count} entries "
f"| ~{rpt.estimated_tokens:,} tokens")
if rpt.exists and verbose and rpt.entries:
for i, entry in enumerate(rpt.entries):
preview = entry[:72].replace("\n", " ")
if len(entry) > 72:
preview += "..."
print(f" #{i+1}: ({len(entry)} chars) {preview}")
print()
print(f" TOTAL {_bar(total_pct)} {total_pct:5.1f}%")
print(f" {total_chars:,}/{total_limit:,} chars | ~{total_tokens:,} tokens")
print()
# Alerts
alerts = []
for rpt in (memory, user):
if rpt.over_budget:
overshoot = rpt.entry_chars - rpt.char_limit
alerts.append(
f" CRITICAL {rpt.label} is {overshoot:,} chars over budget "
f"({rpt.entry_chars:,}/{rpt.char_limit:,}). "
f"Run with --enforce to auto-trim."
)
elif rpt.warning:
alerts.append(
f" WARNING {rpt.label} is at {rpt.usage_pct:.0f}% capacity. "
f"Consider compressing or cleaning up entries."
)
if alerts:
print(" ALERTS")
print(" ------")
for a in alerts:
print(a)
print()
def print_json(memory: FileReport, user: FileReport) -> None:
"""Print a JSON report for machine consumption."""
import json
def _rpt_dict(r: FileReport) -> dict:
return {
"label": r.label,
"path": str(r.path),
"exists": r.exists,
"char_limit": r.char_limit,
"entry_chars": r.entry_chars,
"entry_count": r.entry_count,
"estimated_tokens": r.estimated_tokens,
"usage_pct": round(r.usage_pct, 1),
"over_budget": r.over_budget,
"warning": r.warning,
"remaining_chars": r.remaining_chars,
}
total_chars = memory.entry_chars + user.entry_chars
total_limit = memory.char_limit + user.char_limit
data = {
"memory": _rpt_dict(memory),
"user": _rpt_dict(user),
"total": {
"chars": total_chars,
"limit": total_limit,
"estimated_tokens": total_chars // CHARS_PER_TOKEN,
"usage_pct": round((total_chars / total_limit * 100) if total_limit else 0, 1),
"over_budget": memory.over_budget or user.over_budget,
"warning": memory.warning or user.warning,
},
}
print(json.dumps(data, indent=2))
# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------
def _resolve_hermes_home(custom: str | None) -> Path:
"""Resolve HERMES_HOME directory."""
if custom:
return Path(custom).expanduser()
import os
return Path(os.getenv("HERMES_HOME", Path.home() / ".hermes"))
def main() -> int:
parser = argparse.ArgumentParser(
description="Check and enforce memory budgets for hermes-agent.",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog=__doc__,
)
parser.add_argument(
"--hermes-home", metavar="DIR",
help="Custom HERMES_HOME directory (default: $HERMES_HOME or ~/.hermes)",
)
parser.add_argument(
"--memory-limit", type=int, default=DEFAULT_MEMORY_CHAR_LIMIT,
help=f"Character limit for MEMORY.md (default: {DEFAULT_MEMORY_CHAR_LIMIT})",
)
parser.add_argument(
"--user-limit", type=int, default=DEFAULT_USER_CHAR_LIMIT,
help=f"Character limit for USER.md (default: {DEFAULT_USER_CHAR_LIMIT})",
)
parser.add_argument(
"--report", action="store_true",
help="Print detailed per-file budget report",
)
parser.add_argument(
"--verbose", "-v", action="store_true",
help="Show individual entry details in report",
)
parser.add_argument(
"--enforce", action="store_true",
help="Trim oldest entries to fit within budget (writes to disk)",
)
parser.add_argument(
"--json", action="store_true", dest="json_output",
help="Output report as JSON (for CI/scripting)",
)
args = parser.parse_args()
hermes_home = _resolve_hermes_home(args.hermes_home)
memories_dir = hermes_home / "memories"
# Analyze both files
memory = analyze_file(
memories_dir / "MEMORY.md", "MEMORY.md", args.memory_limit,
)
user = analyze_file(
memories_dir / "USER.md", "USER.md", args.user_limit,
)
over_budget = memory.over_budget or user.over_budget
trimmed = False
# Enforce budget by trimming entries
if args.enforce and over_budget:
for rpt in (memory, user):
if rpt.over_budget and rpt.exists:
trimmed_entries = trim_entries(rpt)
removed = rpt.entry_count - len(trimmed_entries)
if removed > 0:
_write_entries(rpt.path, trimmed_entries)
rpt.entries = trimmed_entries
rpt.entry_count = len(trimmed_entries)
rpt.entry_chars = len(ENTRY_DELIMITER.join(trimmed_entries))
rpt.raw_chars = rpt.path.stat().st_size
print(f" Trimmed {removed} oldest entries from {rpt.label} "
f"({rpt.entry_chars:,}/{rpt.char_limit:,} chars now)")
trimmed = True
# Re-check after trimming
over_budget = memory.over_budget or user.over_budget
# Output
if args.json_output:
print_json(memory, user)
elif args.report or args.verbose:
print_report(memory, user, verbose=args.verbose)
else:
# Compact summary
if over_budget:
print("Memory budget: OVER")
for rpt in (memory, user):
if rpt.over_budget:
print(f" {rpt.label}: {rpt.entry_chars:,}/{rpt.char_limit:,} chars "
f"({rpt.usage_pct:.0f}%)")
elif memory.warning or user.warning:
print("Memory budget: WARNING")
for rpt in (memory, user):
if rpt.warning:
print(f" {rpt.label}: {rpt.entry_chars:,}/{rpt.char_limit:,} chars "
f"({rpt.usage_pct:.0f}%)")
else:
print("Memory budget: OK")
for rpt in (memory, user):
if rpt.exists:
print(f" {rpt.label}: {rpt.entry_chars:,}/{rpt.char_limit:,} chars "
f"({rpt.usage_pct:.0f}%)")
# Suggest actions when over budget but not enforced
if over_budget and not args.enforce:
suggestions = []
for rpt in (memory, user):
if rpt.over_budget:
suggestions.append(
f" - {rpt.label}: remove stale entries or run with --enforce to auto-trim"
)
# Identify largest entries
if rpt.entries:
indexed = sorted(enumerate(rpt.entries), key=lambda x: len(x[1]), reverse=True)
top3 = indexed[:3]
for idx, entry in top3:
preview = entry[:60].replace("\n", " ")
if len(entry) > 60:
preview += "..."
suggestions.append(
f" largest entry #{idx+1}: ({len(entry)} chars) {preview}"
)
if suggestions:
print()
print("Suggestions:")
for s in suggestions:
print(s)
# Exit code
if trimmed:
return 2
if over_budget:
return 1
return 0
if __name__ == "__main__":
sys.exit(main())

430
scripts/setup_matrix.py Executable file
View File

@@ -0,0 +1,430 @@
#!/usr/bin/env python3
"""Interactive Matrix setup wizard for Hermes Agent.
Guides you through configuring Matrix integration:
- Homeserver URL
- Token auth or password auth
- Device ID generation
- Config/env file writing
- Optional: test room creation and message send
- E2EE verification
Usage:
python scripts/setup_matrix.py
"""
import getpass
import json
import os
import secrets
import sys
import urllib.error
import urllib.request
from pathlib import Path
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def _hermes_home() -> Path:
"""Resolve ~/.hermes (or HERMES_HOME override)."""
return Path(os.environ.get("HERMES_HOME", Path.home() / ".hermes"))
def _prompt(msg: str, default: str = "") -> str:
"""Prompt with optional default. Returns stripped input or default."""
suffix = f" [{default}]" if default else ""
val = input(f"{msg}{suffix}: ").strip()
return val or default
def _prompt_bool(msg: str, default: bool = True) -> bool:
"""Yes/no prompt."""
d = "Y/n" if default else "y/N"
val = input(f"{msg} [{d}]: ").strip().lower()
if not val:
return default
return val in ("y", "yes")
def _http_post_json(url: str, data: dict, timeout: int = 15) -> dict:
"""POST JSON and return parsed response. Raises on HTTP errors."""
body = json.dumps(data).encode()
req = urllib.request.Request(
url,
data=body,
headers={"Content-Type": "application/json"},
method="POST",
)
try:
with urllib.request.urlopen(req, timeout=timeout) as resp:
return json.loads(resp.read())
except urllib.error.HTTPError as exc:
detail = exc.read().decode(errors="replace")
raise RuntimeError(f"HTTP {exc.code}: {detail}") from exc
except urllib.error.URLError as exc:
raise RuntimeError(f"Connection error: {exc.reason}") from exc
def _http_get_json(url: str, token: str = "", timeout: int = 15) -> dict:
"""GET JSON, optionally with Bearer auth."""
req = urllib.request.Request(url, method="GET")
if token:
req.add_header("Authorization", f"Bearer {token}")
try:
with urllib.request.urlopen(req, timeout=timeout) as resp:
return json.loads(resp.read())
except urllib.error.HTTPError as exc:
detail = exc.read().decode(errors="replace")
raise RuntimeError(f"HTTP {exc.code}: {detail}") from exc
except urllib.error.URLError as exc:
raise RuntimeError(f"Connection error: {exc.reason}") from exc
def _write_env_file(env_path: Path, vars: dict) -> None:
"""Write/update ~/.hermes/.env with given variables."""
existing: dict[str, str] = {}
if env_path.exists():
for line in env_path.read_text().splitlines():
line = line.strip()
if line and not line.startswith("#") and "=" in line:
k, v = line.split("=", 1)
existing[k.strip()] = v.strip().strip("'\"")
existing.update(vars)
lines = ["# Hermes Agent environment variables"]
for k, v in sorted(existing.items()):
# Quote values with spaces or special chars
if any(c in v for c in " \t#\"'$"):
lines.append(f'{k}="{v}"')
else:
lines.append(f"{k}={v}")
env_path.parent.mkdir(parents=True, exist_ok=True)
env_path.write_text("\n".join(lines) + "\n")
try:
os.chmod(str(env_path), 0o600)
except (OSError, NotImplementedError):
pass
print(f" -> Wrote {len(vars)} vars to {env_path}")
def _write_config_yaml(config_path: Path, matrix_section: dict) -> None:
"""Add/update matrix: section in config.yaml (creates file if needed)."""
try:
import yaml
except ImportError:
print(" [!] PyYAML not installed — skipping config.yaml update.")
print(" Add manually under 'matrix:' key.")
return
config: dict = {}
if config_path.exists():
try:
config = yaml.safe_load(config_path.read_text()) or {}
except Exception:
config = {}
config["matrix"] = matrix_section
config_path.parent.mkdir(parents=True, exist_ok=True)
config_path.write_text(yaml.dump(config, default_flow_style=False, sort_keys=False))
try:
os.chmod(str(config_path), 0o600)
except (OSError, NotImplementedError):
pass
print(f" -> Updated matrix section in {config_path}")
def _generate_device_id() -> str:
"""Generate a stable, human-readable device ID."""
return f"HERMES_{secrets.token_hex(4).upper()}"
# ---------------------------------------------------------------------------
# Login flows
# ---------------------------------------------------------------------------
def login_with_token(homeserver: str) -> dict:
"""Validate an existing access token via whoami."""
token = getpass.getpass("Access token (hidden): ").strip()
if not token:
print(" [!] Token cannot be empty.")
sys.exit(1)
whoami_url = f"{homeserver}/_matrix/client/v3/account/whoami"
print(" Validating token...")
resp = _http_get_json(whoami_url, token=token)
user_id = resp.get("user_id", "")
device_id = resp.get("device_id", "")
print(f" Authenticated as: {user_id}")
if device_id:
print(f" Server device ID: {device_id}")
return {
"MATRIX_ACCESS_TOKEN": token,
"MATRIX_USER_ID": user_id,
}
def login_with_password(homeserver: str) -> dict:
"""Login with username + password, get access token."""
user_id = _prompt("Full user ID (e.g. @bot:matrix.org)")
if not user_id:
print(" [!] User ID cannot be empty.")
sys.exit(1)
password = getpass.getpass("Password (hidden): ").strip()
if not password:
print(" [!] Password cannot be empty.")
sys.exit(1)
login_url = f"{homeserver}/_matrix/client/v3/login"
print(" Logging in...")
resp = _http_post_json(login_url, {
"type": "m.login.password",
"identifier": {
"type": "m.id.user",
"user": user_id,
},
"password": password,
"device_name": "Hermes Agent",
})
access_token = resp.get("access_token", "")
device_id = resp.get("device_id", "")
resolved_user = resp.get("user_id", user_id)
if not access_token:
print(" [!] Login succeeded but no access_token in response.")
sys.exit(1)
print(f" Authenticated as: {resolved_user}")
if device_id:
print(f" Device ID: {device_id}")
return {
"MATRIX_ACCESS_TOKEN": access_token,
"MATRIX_USER_ID": resolved_user,
"_server_device_id": device_id,
}
# ---------------------------------------------------------------------------
# Test room + message
# ---------------------------------------------------------------------------
def create_test_room(homeserver: str, token: str) -> str | None:
"""Create a private test room and return the room ID."""
create_url = f"{homeserver}/_matrix/client/v3/createRoom"
try:
resp = _http_post_json(create_url, {
"name": "Hermes Test Room",
"topic": "Auto-created by hermes setup_matrix.py — safe to delete",
"preset": "private_chat",
"visibility": "private",
}, timeout=30)
# Note: _http_post_json sends no Authorization header, so this first attempt
# usually fails and we fall through to the authenticated request below
room_id = resp.get("room_id", "")
if room_id:
print(f" Created test room: {room_id}")
return room_id
except Exception:
pass
# Fallback: use curl-style with auth
req = urllib.request.Request(
create_url,
data=json.dumps({
"name": "Hermes Test Room",
"topic": "Auto-created by hermes setup_matrix.py — safe to delete",
"preset": "private_chat",
"visibility": "private",
}).encode(),
headers={
"Content-Type": "application/json",
"Authorization": f"Bearer {token}",
},
method="POST",
)
try:
with urllib.request.urlopen(req, timeout=30) as resp:
data = json.loads(resp.read())
room_id = data.get("room_id", "")
if room_id:
print(f" Created test room: {room_id}")
return room_id
except Exception as exc:
print(f" [!] Room creation failed: {exc}")
return None
def send_test_message(homeserver: str, token: str, room_id: str) -> bool:
"""Send a test message to a room. Returns True on success."""
txn_id = secrets.token_hex(8)
url = (
f"{homeserver}/_matrix/client/v3/rooms/"
f"{urllib.request.quote(room_id, safe='')}/send/m.room.message/{txn_id}"
)
req = urllib.request.Request(
url,
data=json.dumps({
"msgtype": "m.text",
"body": "Hermes Agent setup verified successfully!",
}).encode(),
headers={
"Content-Type": "application/json",
"Authorization": f"Bearer {token}",
},
method="PUT",
)
try:
with urllib.request.urlopen(req, timeout=15) as resp:
data = json.loads(resp.read())
event_id = data.get("event_id", "")
if event_id:
print(f" Test message sent: {event_id}")
return True
except Exception as exc:
print(f" [!] Test message failed: {exc}")
return False
def check_e2ee_support() -> bool:
"""Check if E2EE dependencies are available."""
try:
import nio
from nio.crypto import ENCRYPTION_ENABLED
return bool(ENCRYPTION_ENABLED)
except (ImportError, AttributeError):
return False
# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------
def main():
print("=" * 60)
print(" Hermes Agent — Matrix Setup Wizard")
print("=" * 60)
print()
# -- Homeserver --
print("Step 1: Homeserver")
print(" A) matrix.org (public, for testing)")
print(" B) Custom homeserver (self-hosted)")
choice = _prompt("Choose [A/B]", "A").upper()
if choice == "B":
homeserver = _prompt("Homeserver URL (e.g. https://matrix.example.com)")
if not homeserver:
print(" [!] Homeserver URL is required.")
sys.exit(1)
else:
homeserver = "https://matrix-client.matrix.org"
homeserver = homeserver.rstrip("/")
print(f" Using: {homeserver}")
print()
# -- Authentication --
print("Step 2: Authentication")
print(" A) Access token (recommended)")
print(" B) Username + password")
auth_choice = _prompt("Choose [A/B]", "A").upper()
if auth_choice == "B":
auth_vars = login_with_password(homeserver)
else:
auth_vars = login_with_token(homeserver)
print()
# -- Device ID --
print("Step 3: Device ID (for E2EE persistence)")
server_device = auth_vars.pop("_server_device_id", "")
default_device = server_device or _generate_device_id()
device_id = _prompt("Device ID", default_device)
auth_vars["MATRIX_DEVICE_ID"] = device_id
print()
# -- E2EE --
print("Step 4: End-to-End Encryption")
e2ee_available = check_e2ee_support()
if e2ee_available:
enable_e2ee = _prompt_bool("Enable E2EE?", default=False)
if enable_e2ee:
auth_vars["MATRIX_ENCRYPTION"] = "true"
print(" E2EE enabled. Keys will be stored in:")
print(" ~/.hermes/platforms/matrix/store/")
else:
print(" E2EE dependencies not found. Skipping.")
print(" To enable later: pip install 'matrix-nio[e2e]'")
print()
# -- Optional settings --
print("Step 5: Optional Settings")
allowed = _prompt("Allowed user IDs (comma-separated, or empty for all)")
if allowed:
auth_vars["MATRIX_ALLOWED_USERS"] = allowed
home_room = _prompt("Home room ID for notifications (or empty)")
if home_room:
auth_vars["MATRIX_HOME_ROOM"] = home_room
require_mention = _prompt_bool("Require @mention in rooms?", default=True)
auto_thread = _prompt_bool("Auto-create threads?", default=True)
print()
# -- Write files --
print("Step 6: Writing Configuration")
hermes_home = _hermes_home()
env_path = hermes_home / ".env"
_write_env_file(env_path, auth_vars)
config_path = hermes_home / "config.yaml"
matrix_cfg = {
"require_mention": require_mention,
"auto_thread": auto_thread,
}
_write_config_yaml(config_path, matrix_cfg)
print()
# -- Verify connection --
print("Step 7: Verification")
token = auth_vars.get("MATRIX_ACCESS_TOKEN", "")
do_test = _prompt_bool("Create test room and send message?", default=True)
if do_test and token:
room_id = create_test_room(homeserver, token)
if room_id:
send_test_message(homeserver, token, room_id)
print()
# -- Summary --
print("=" * 60)
print(" Setup Complete!")
print("=" * 60)
print()
print(" Config written to:")
print(f" {env_path}")
print(f" {config_path}")
print()
print(" To start the Matrix gateway:")
print(" hermes gateway --platform matrix")
print()
if not e2ee_available:
print(" To enable E2EE later:")
print(" pip install 'matrix-nio[e2e]'")
print(" Then set MATRIX_ENCRYPTION=true in .env")
print()
print(" Docs: docs/matrix-setup.md")
print()
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,325 @@
#!/usr/bin/env python3
"""
Memory Sovereignty Verification
Verifies that the memory path in hermes-agent has no network dependencies.
Memory data must stay on the local filesystem only — no HTTP calls, no external
API calls, no cloud sync during memory read/write/flush/load operations.
Scans:
- tools/memory_tool.py (MEMORY.md / USER.md store)
- hermes_state.py (SQLite session store)
- tools/session_search_tool.py (FTS5 session search + summarization)
- tools/graph_store.py (knowledge graph persistence)
- tools/temporal_kg_tool.py (temporal knowledge graph)
- agent/temporal_knowledge_graph.py (temporal triple store)
- tools/skills_tool.py (skill listing/viewing)
- tools/skills_sync.py (bundled skill syncing)
Exit codes:
0 = sovereign (no violations)
1 = violations found
"""
import ast
import re
import sys
from pathlib import Path
# ---------------------------------------------------------------------------
# Configuration
# ---------------------------------------------------------------------------
# Files in the memory path to scan (relative to repo root).
MEMORY_FILES = [
"tools/memory_tool.py",
"hermes_state.py",
"tools/session_search_tool.py",
"tools/graph_store.py",
"tools/temporal_kg_tool.py",
"agent/temporal_knowledge_graph.py",
"tools/skills_tool.py",
"tools/skills_sync.py",
]
# Patterns that indicate network/external API usage.
NETWORK_PATTERNS = [
# HTTP libraries
(r'\brequests\.(get|post|put|delete|patch|head|session)', "requests HTTP call"),
(r'\burllib\.request\.(urlopen|Request)', "urllib HTTP call"),
(r'\bhttpx\.(get|post|put|delete|Client|AsyncClient)', "httpx HTTP call"),
(r'\bhttp\.client\.(HTTPConnection|HTTPSConnection)', "http.client connection"),
(r'\baiohttp\.(ClientSession|get|post)', "aiohttp HTTP call"),
(r'\bwebsockets\.\w+', "websocket connection"),
# API client patterns
(r'\bopenai\b.*\b(api_key|chat|completions|Client)\b', "OpenAI API usage"),
(r'\banthropic\b.*\b(api_key|messages|Client)\b', "Anthropic API usage"),
(r'\bAsyncOpenAI\b', "AsyncOpenAI client"),
(r'\bAsyncAnthropic\b', "AsyncAnthropic client"),
# Generic network indicators
(r'\bsocket\.(socket|connect|create_connection)', "raw socket connection"),
(r'\bftplib\b', "FTP connection"),
(r'\bsmtplib\b', "SMTP connection"),
(r'\bparamiko\b', "SSH connection via paramiko"),
# URL patterns (hardcoded endpoints)
(r'https?://(?!example\.com)[a-zA-Z0-9._-]+\.(com|org|net|io|dev|ai)', "hardcoded URL"),
]
# Import aliases that indicate network-capable modules.
NETWORK_IMPORTS = {
"requests",
"httpx",
"aiohttp",
"urllib.request",
"http.client",
"websockets",
"openai",
"anthropic",
"openrouter_client",
}
# Functions whose names suggest network I/O.
NETWORK_FUNC_NAMES = {
"async_call_llm",
"extract_content_or_reasoning",
}
# Files that are ALLOWED to have network calls (known violations with justification).
# Each entry maps to a reason string.
KNOWN_VIOLATIONS = {
"tools/graph_store.py": (
"GraphStore persists to Gitea via API. This is a known architectural trade-off "
"for knowledge graph persistence, which is not part of the core memory path "
"(MEMORY.md/USER.md/SQLite). Future work will explore local-first alternatives "
"to align more closely with SOUL.md principles."
),
"tools/session_search_tool.py": (
"Session search uses LLM summarization via an auxiliary client. While the FTS5 "
"search is local, the LLM call for summarization is an external dependency. "
"This is a temporary architectural trade-off for enhanced presentation. "
"Research is ongoing to implement local LLM options for full sovereignty, "
"in line with SOUL.md."
),
}
# ---------------------------------------------------------------------------
# Scanner
# ---------------------------------------------------------------------------
class Violation:
"""A sovereignty violation with location and description."""
def __init__(self, file: str, line: int, description: str, code: str):
self.file = file
self.line = line
self.description = description
self.code = code.strip()
def __str__(self):
return f"{self.file}:{self.line}: {self.description}\n {self.code}"
def scan_file(filepath: Path, repo_root: Path) -> list[Violation]:
"""Scan a single file for network dependency patterns."""
violations = []
rel_path = str(filepath.relative_to(repo_root))
# Skip known violations
if rel_path in KNOWN_VIOLATIONS:
return violations
try:
content = filepath.read_text(encoding="utf-8")
except (OSError, IOError) as e:
print(f"WARNING: Cannot read {rel_path}: {e}", file=sys.stderr)
return violations
lines = content.splitlines()
# --- Check imports ---
try:
tree = ast.parse(content, filename=str(filepath))
except SyntaxError as e:
print(f"WARNING: Cannot parse {rel_path}: {e}", file=sys.stderr)
return violations
for node in ast.walk(tree):
if isinstance(node, ast.Import):
for alias in node.names:
mod = alias.name
if mod in NETWORK_IMPORTS or any(
mod.startswith(ni + ".") for ni in NETWORK_IMPORTS
):
violations.append(Violation(
rel_path, node.lineno,
f"Network-capable import: {mod}",
lines[node.lineno - 1] if node.lineno <= len(lines) else "",
))
elif isinstance(node, ast.ImportFrom):
if node.module and (
node.module in NETWORK_IMPORTS
or any(node.module.startswith(ni + ".") for ni in NETWORK_IMPORTS)
):
violations.append(Violation(
rel_path, node.lineno,
f"Network-capable import from: {node.module}",
lines[node.lineno - 1] if node.lineno <= len(lines) else "",
))
# --- Check for LLM call function usage ---
for i, line in enumerate(lines, 1):
stripped = line.strip()
if stripped.startswith("#"):
continue
for func_name in NETWORK_FUNC_NAMES:
if func_name in line and not stripped.startswith("def ") and not stripped.startswith("class "):
# Check it's actually a call, not a definition or import
if re.search(r'\b' + func_name + r'\s*\(', line):
violations.append(Violation(
rel_path, i,
f"External LLM call function: {func_name}()",
line,
))
# --- Regex-based pattern matching ---
for i, line in enumerate(lines, 1):
stripped = line.strip()
if stripped.startswith("#"):
continue
for pattern, description in NETWORK_PATTERNS:
if re.search(pattern, line, re.IGNORECASE):
violations.append(Violation(
rel_path, i,
f"Suspicious pattern ({description})",
line,
))
return violations
def verify_sovereignty(repo_root: Path) -> tuple[list[Violation], list[str]]:
"""Run sovereignty verification across all memory files.
Returns (violations, info_messages).
"""
all_violations = []
info = []
for rel_path in MEMORY_FILES:
filepath = repo_root / rel_path
if not filepath.exists():
info.append(f"SKIP: {rel_path} (file not found)")
continue
if rel_path in KNOWN_VIOLATIONS:
info.append(
f"WARN: {rel_path} — known violation (excluded from gate): "
f"{KNOWN_VIOLATIONS[rel_path]}"
)
continue
violations = scan_file(filepath, repo_root)
all_violations.extend(violations)
if not violations:
info.append(f"PASS: {rel_path} — sovereign (local-only)")
return all_violations, info
# ---------------------------------------------------------------------------
# Deep analysis helpers
# ---------------------------------------------------------------------------
def check_graph_store_network(repo_root: Path) -> str:
"""Analyze graph_store.py for its network dependencies."""
filepath = repo_root / "tools" / "graph_store.py"
if not filepath.exists():
return ""
content = filepath.read_text(encoding="utf-8")
if "GiteaClient" in content:
return (
"tools/graph_store.py uses GiteaClient for persistence — "
"this is an external API call. However, graph_store is NOT part of "
"the core memory path (MEMORY.md/USER.md/SQLite). It is a separate "
"knowledge graph system."
)
return ""
def check_session_search_llm(repo_root: Path) -> str:
"""Analyze session_search_tool.py for LLM usage."""
filepath = repo_root / "tools" / "session_search_tool.py"
if not filepath.exists():
return ""
content = filepath.read_text(encoding="utf-8")
warnings = []
if "async_call_llm" in content:
warnings.append("uses async_call_llm for summarization")
if "auxiliary_client" in content:
warnings.append("imports auxiliary_client (LLM calls)")
if warnings:
return (
f"tools/session_search_tool.py: {'; '.join(warnings)}. "
f"The FTS5 search is local SQLite, but session summarization "
f"involves LLM API calls."
)
return ""
# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------
def main():
repo_root = Path(__file__).resolve().parent.parent
print(f"Memory Sovereignty Verification")
print(f"Repository: {repo_root}")
print(f"Scanning {len(MEMORY_FILES)} memory-path files...")
print()
violations, info = verify_sovereignty(repo_root)
# Print info messages
for msg in info:
print(f" {msg}")
# Print deep analysis
print()
print("Deep analysis:")
for checker in [check_graph_store_network, check_session_search_llm]:
note = checker(repo_root)
if note:
print(f" NOTE: {note}")
print()
if violations:
print(f"SOVEREIGNTY VIOLATIONS FOUND: {len(violations)}")
print("=" * 60)
for v in violations:
print(v)
print()
print("=" * 60)
print(
f"FAIL: {len(violations)} potential network dependencies detected "
f"in the memory path."
)
print("Memory must be local-only (filesystem + SQLite).")
print()
print("If a violation is intentional and documented, add it to")
print("KNOWN_VIOLATIONS in this script with a justification.")
return 1
else:
print("PASS: Memory path is sovereign — no network dependencies detected.")
print("All memory operations use local filesystem and/or SQLite only.")
return 0
if __name__ == "__main__":
sys.exit(main())
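
A minimal sketch of wiring this gate into the test suite instead of (or alongside) the standalone script. The import path below is hypothetical — this diff does not show the script's filename — so point it at wherever the file actually lives.

# test_memory_sovereignty.py — hypothetical pytest wrapper; import path is assumed.
from pathlib import Path

from verify_memory_sovereignty import verify_sovereignty


def test_memory_path_is_sovereign():
    repo_root = Path(__file__).resolve().parent.parent
    violations, _info = verify_sovereignty(repo_root)
    assert not violations, "\n".join(str(v) for v in violations)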

572
tools/browser_use_tool.py Normal file

@@ -0,0 +1,572 @@
#!/usr/bin/env python3
"""
Browser Use Tool Module
Proof-of-concept wrapper around the browser-use Python library for
LLM-driven autonomous browser automation. This complements Hermes's
existing low-level browser_tool.py (navigate/snapshot/click/type) by
providing a high-level "do this task for me" capability.
Where browser_tool.py gives the LLM fine-grained control (each click is
a separate tool call), browser_use_tool.py lets the LLM describe a task
in natural language and have browser-use autonomously execute the steps.
Usage:
from tools.browser_use_tool import browser_use_run, browser_use_extract
# Run an autonomous browser task
result = browser_use_run(
task="Find the top 3 stories on Hacker News and return their titles",
max_steps=15,
)
# Extract structured data from a URL
data = browser_use_extract(
url="https://example.com/pricing",
instruction="Extract all pricing tiers with their names, prices, and features",
)
Integration notes:
- Requires: pip install browser-use
- Optional: BROWSER_USE_API_KEY for cloud mode (no local Playwright needed)
- Falls back to local Playwright Chromium when no API key is set
- Uses the same url_safety and website_policy checks as browser_tool.py
"""
import json
import logging
import os
import tempfile
from typing import Any, Dict, Optional
logger = logging.getLogger(__name__)
# ---------------------------------------------------------------------------
# Security: URL validation (reuse existing modules)
# ---------------------------------------------------------------------------
try:
from tools.url_safety import is_safe_url as _is_safe_url
except Exception:
_is_safe_url = lambda url: False # noqa: E731 — fail-closed
try:
from tools.website_policy import check_website_access
except Exception:
check_website_access = lambda url: None # noqa: E731 — fail-open
def _validate_url(url: str) -> Optional[str]:
"""Validate a URL for safety and policy compliance.
Returns None if OK, or an error message string if blocked.
"""
if not url or not url.strip():
return "URL cannot be empty"
url = url.strip()
if not _is_safe_url(url):
return f"URL blocked by safety policy: {url}"
try:
check_website_access(url)
except Exception as e:
return f"URL blocked by website policy: {e}"
return None
# ---------------------------------------------------------------------------
# Availability check
# ---------------------------------------------------------------------------
_browser_use_available: Optional[bool] = None
def _check_browser_use_available() -> bool:
"""Check if browser-use library is installed and usable."""
global _browser_use_available
if _browser_use_available is not None:
return _browser_use_available
try:
import browser_use # noqa: F401
_browser_use_available = True
except ImportError:
_browser_use_available = False
return _browser_use_available
# ---------------------------------------------------------------------------
# Core functions
# ---------------------------------------------------------------------------
def browser_use_run(
task: str,
max_steps: int = 25,
    model: Optional[str] = None,
    url: Optional[str] = None,
use_vision: bool = False,
) -> str:
"""Run an autonomous browser task using browser-use.
Args:
task: Natural language description of what to do in the browser.
max_steps: Maximum number of autonomous steps before stopping.
model: LLM model for browser-use's internal agent (default: from env).
url: Optional starting URL. If provided, navigates there first.
use_vision: Whether to use screenshots for visual context.
Returns:
JSON string with task result, final page content, and metadata.
"""
if not _check_browser_use_available():
return json.dumps({
"error": "browser-use library not installed. "
"Install with: pip install browser-use && playwright install chromium"
})
# Validate URL if provided
if url:
err = _validate_url(url)
if err:
return json.dumps({"error": err})
# Resolve model
if not model:
model = os.getenv("BROWSER_USE_MODEL", "").strip() or None
try:
import asyncio
from browser_use import Agent, Browser, BrowserConfig
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
return asyncio.run(
_run_browser_use_agent(
task=task,
max_steps=max_steps,
model=model,
url=url,
use_vision=use_vision,
)
)
except ImportError as e:
return json.dumps({
"error": f"Missing dependency: {e}. "
"Install with: pip install browser-use langchain-openai langchain-anthropic"
})
except Exception as e:
logger.exception("browser_use_run failed")
return json.dumps({"error": f"Browser use failed: {type(e).__name__}: {e}"})
async def _run_browser_use_agent(
task: str,
max_steps: int,
model: Optional[str],
url: Optional[str],
use_vision: bool,
) -> str:
"""Async implementation of browser_use_run."""
from browser_use import Agent, Browser, BrowserConfig
# Build LLM
llm = _resolve_langchain_llm(model)
if isinstance(llm, str):
# Error message returned
return llm
# Configure browser
browser_config = BrowserConfig(
headless=True,
)
# Build the task string with optional starting URL
full_task = task
if url:
full_task = f"Start by navigating to {url}. Then: {task}"
# Create agent
agent = Agent(
task=full_task,
llm=llm,
browser=Browser(config=browser_config),
use_vision=use_vision,
max_actions_per_step=5,
)
# Run with step limit
result = await agent.run(max_steps=max_steps)
# Extract results
final_url = ""
final_content = ""
steps_taken = 0
if hasattr(result, "all_results") and result.all_results:
steps_taken = len(result.all_results)
last = result.all_results[-1]
if hasattr(last, "extracted_content"):
final_content = last.extracted_content or ""
if hasattr(last, "url"):
final_url = last.url or ""
# Get the final content from the agent's history
if hasattr(result, "final_result"):
final_content = result.final_result or final_content
return json.dumps({
"success": True,
"task": task,
"result": final_content,
"final_url": final_url,
"steps_taken": steps_taken,
"max_steps": max_steps,
}, indent=2)
def browser_use_extract(
url: str,
instruction: str = "Extract all meaningful content from this page",
max_steps: int = 15,
    model: Optional[str] = None,
) -> str:
"""Navigate to a URL and extract structured data using browser-use.
This is a convenience wrapper that combines navigation + extraction
into a single tool call.
Args:
url: The URL to extract data from.
instruction: What to extract (e.g., "Extract all pricing tiers").
max_steps: Maximum browser steps.
model: LLM model for browser-use agent.
Returns:
JSON string with extracted data.
"""
err = _validate_url(url)
if err:
return json.dumps({"error": err})
task = (
f"Navigate to {url}. {instruction}. "
f"Return the extracted data in a structured format. "
f"When done, use the 'done' action to finish."
)
return browser_use_run(
task=task,
max_steps=max_steps,
model=model,
url=url,
)
def browser_use_compare(
urls: list,
instruction: str = "Compare the content on these pages",
max_steps: int = 25,
    model: Optional[str] = None,
) -> str:
"""Visit multiple URLs and compare their content.
Args:
urls: List of URLs to visit and compare.
instruction: What to compare (e.g., "Compare pricing plans").
max_steps: Maximum browser steps.
model: LLM model for browser-use agent.
Returns:
JSON string with comparison results.
"""
if not urls or not isinstance(urls, list):
return json.dumps({"error": "urls must be a non-empty list"})
# Validate all URLs
for u in urls:
err = _validate_url(u)
if err:
return json.dumps({"error": f"URL validation failed for {u}: {err}"})
url_list = "\n".join(f" {i+1}. {u}" for i, u in enumerate(urls))
task = (
f"Visit each of these URLs and compare them:\n{url_list}\n\n"
f"Comparison task: {instruction}\n\n"
f"Visit each URL one by one, extract relevant information, "
f"then provide a structured comparison. Use the 'done' action when finished."
)
return browser_use_run(
task=task,
max_steps=max_steps,
model=model,
url=urls[0],
)
# ---------------------------------------------------------------------------
# LLM resolution helpers
# ---------------------------------------------------------------------------
def _resolve_langchain_llm(model: Optional[str]):
"""Build a LangChain LLM from a model string or environment.
Supports OpenAI and Anthropic models. Returns the LLM instance or
an error message string on failure.
"""
if not model:
# Auto-detect from available API keys
if os.getenv("ANTHROPIC_API_KEY"):
model = "claude-sonnet-4-20250514"
elif os.getenv("OPENAI_API_KEY"):
model = "gpt-4o"
else:
return json.dumps({
"error": "No LLM model configured for browser-use. "
"Set BROWSER_USE_MODEL, ANTHROPIC_API_KEY, or OPENAI_API_KEY."
})
model_lower = model.lower()
if "claude" in model_lower or "anthropic" in model_lower:
try:
from langchain_anthropic import ChatAnthropic
api_key = os.getenv("ANTHROPIC_API_KEY", "")
if not api_key:
return json.dumps({"error": "ANTHROPIC_API_KEY not set"})
return ChatAnthropic(
model=model,
api_key=api_key,
timeout=60,
stop=None,
)
except ImportError:
return json.dumps({
"error": "langchain-anthropic not installed. "
"Install: pip install langchain-anthropic"
})
# Default to OpenAI-compatible
try:
from langchain_openai import ChatOpenAI
api_key = os.getenv("OPENAI_API_KEY", "")
base_url = os.getenv("OPENAI_BASE_URL", None)
if not api_key:
return json.dumps({"error": "OPENAI_API_KEY not set"})
kwargs = {
"model": model,
"api_key": api_key,
"timeout": 60,
}
if base_url:
kwargs["base_url"] = base_url
return ChatOpenAI(**kwargs)
except ImportError:
return json.dumps({
"error": "langchain-openai not installed. "
"Install: pip install langchain-openai"
})
# ---------------------------------------------------------------------------
# Schema definitions
# ---------------------------------------------------------------------------
BROWSER_USE_RUN_SCHEMA = {
"name": "browser_use_run",
"description": (
"Run an autonomous browser task using AI-driven browser automation. "
"Describe what you want to accomplish in natural language, and browser-use "
"will autonomously navigate, click, type, and extract data to complete it. "
"Best for multi-step tasks like 'find X on website Y' or 'fill out this form'. "
"For simple single-page extraction, prefer web_extract (faster). "
"For fine-grained step-by-step control, use browser_navigate/snapshot/click/type."
),
"parameters": {
"type": "object",
"properties": {
"task": {
"type": "string",
"description": "Natural language description of the browser task to perform"
},
"max_steps": {
"type": "integer",
"description": "Maximum number of autonomous steps (default: 25)",
"default": 25,
},
"model": {
"type": "string",
"description": "LLM model for the browser-use agent (default: auto-detect from available API keys)",
},
"url": {
"type": "string",
"description": "Optional starting URL to navigate to before beginning the task",
},
"use_vision": {
"type": "boolean",
"description": "Use screenshots for visual context (more token-heavy, default: false)",
"default": False,
},
},
"required": ["task"],
},
}
BROWSER_USE_EXTRACT_SCHEMA = {
"name": "browser_use_extract",
"description": (
"Navigate to a URL and extract structured data using autonomous browser automation. "
"Specify what to extract in natural language. This is a convenience wrapper that "
"combines navigation + extraction into a single call."
),
"parameters": {
"type": "object",
"properties": {
"url": {
"type": "string",
"description": "The URL to navigate to and extract data from"
},
"instruction": {
"type": "string",
"description": "What to extract (e.g., 'Extract all pricing tiers with prices and features')",
"default": "Extract all meaningful content from this page",
},
"max_steps": {
"type": "integer",
"description": "Maximum number of browser steps (default: 15)",
"default": 15,
},
"model": {
"type": "string",
"description": "LLM model for the browser-use agent",
},
},
"required": ["url"],
},
}
BROWSER_USE_COMPARE_SCHEMA = {
"name": "browser_use_compare",
"description": (
"Visit multiple URLs and compare their content using autonomous browser automation. "
"Specify what to compare in natural language. The agent will visit each URL, "
"extract relevant data, and produce a structured comparison."
),
"parameters": {
"type": "object",
"properties": {
"urls": {
"type": "array",
"items": {"type": "string"},
"description": "List of URLs to visit and compare"
},
"instruction": {
"type": "string",
"description": "What to compare (e.g., 'Compare pricing plans and features')",
"default": "Compare the content on these pages",
},
"max_steps": {
"type": "integer",
"description": "Maximum number of browser steps (default: 25)",
"default": 25,
},
"model": {
"type": "string",
"description": "LLM model for the browser-use agent",
},
},
"required": ["urls"],
},
}
# ---------------------------------------------------------------------------
# Handlers
# ---------------------------------------------------------------------------
def _handle_browser_use_run(args: dict, **kw) -> str:
return browser_use_run(
task=args.get("task", ""),
max_steps=args.get("max_steps", 25),
model=args.get("model"),
url=args.get("url"),
use_vision=args.get("use_vision", False),
)
def _handle_browser_use_extract(args: dict, **kw) -> str:
return browser_use_extract(
url=args.get("url", ""),
instruction=args.get("instruction", "Extract all meaningful content from this page"),
max_steps=args.get("max_steps", 15),
model=args.get("model"),
)
def _handle_browser_use_compare(args: dict, **kw) -> str:
return browser_use_compare(
urls=args.get("urls", []),
instruction=args.get("instruction", "Compare the content on these pages"),
max_steps=args.get("max_steps", 25),
model=args.get("model"),
)
# ---------------------------------------------------------------------------
# Module test
# ---------------------------------------------------------------------------
if __name__ == "__main__":
print("Browser Use Tool Module")
print("=" * 40)
if _check_browser_use_available():
print("browser-use library: installed")
else:
print("browser-use library: NOT installed")
print(" Install: pip install browser-use && playwright install chromium")
# Check API keys
if os.getenv("ANTHROPIC_API_KEY"):
print("ANTHROPIC_API_KEY: set")
elif os.getenv("OPENAI_API_KEY"):
print("OPENAI_API_KEY: set")
else:
print("No LLM API keys found (need ANTHROPIC_API_KEY or OPENAI_API_KEY)")
if os.getenv("BROWSER_USE_API_KEY"):
print("BROWSER_USE_API_KEY: set (cloud mode available)")
else:
print("BROWSER_USE_API_KEY: not set (local Playwright mode)")
# ---------------------------------------------------------------------------
# Registry
# ---------------------------------------------------------------------------
from tools.registry import registry
registry.register(
name="browser_use_run",
toolset="browser_use",
schema=BROWSER_USE_RUN_SCHEMA,
handler=_handle_browser_use_run,
check_fn=_check_browser_use_available,
emoji="🤖",
)
registry.register(
name="browser_use_extract",
toolset="browser_use",
schema=BROWSER_USE_EXTRACT_SCHEMA,
handler=_handle_browser_use_extract,
check_fn=_check_browser_use_available,
emoji="🔍",
)
registry.register(
name="browser_use_compare",
toolset="browser_use",
schema=BROWSER_USE_COMPARE_SCHEMA,
handler=_handle_browser_use_compare,
check_fn=_check_browser_use_available,
emoji="⚖️",
)
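
Every public function above returns a JSON string — either a success payload or an {"error": ...} object — so callers outside the tool registry must decode it themselves. A minimal sketch, assuming browser-use and a supported API key are already configured:

# Sketch: call browser_use_extract directly and decode its JSON-string result.
import json

from tools.browser_use_tool import browser_use_extract

raw = browser_use_extract(
    url="https://example.com/pricing",
    instruction="Extract all pricing tiers with their names and prices",
    max_steps=10,
)
payload = json.loads(raw)
if "error" in payload:
    print("Blocked or failed:", payload["error"])
else:
    print(f"Finished in {payload['steps_taken']} steps:")
    print(payload["result"])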

203
tools/provider_allowlist.py Normal file

@@ -0,0 +1,203 @@
#!/usr/bin/env python3
"""
Provider Allowlist Guard — runtime enforcement of the banned provider policy.
Intercepts model/provider configuration at startup and rejects anything
not on the explicit allowlist. Prevents accidental Anthropic/OpenAI
fallback from ever reaching inference.
Usage:
Import and call validate_provider() at agent startup:
from provider_allowlist import validate_provider, ProviderBanned
try:
validate_provider(config["provider"], config["model"])
except ProviderBanned as e:
logger.error(f"Banned provider: {e}")
sys.exit(1)
The allowlist is intentionally conservative. Add new providers here
only after human review and a merged PR.
"""
import re
class ProviderBanned(Exception):
"""Raised when a banned provider is detected."""
pass
# ═══ ALLOWLIST ═══
# Only these provider/model combinations are permitted.
# Everything else is rejected by default.
ALLOWED_PROVIDERS = {
"ollama": {
"description": "Local inference via Ollama",
"model_patterns": [
r"^hermes.*", # All Hermes variants
r"^gemma.*", # Gemma family
r"^qwen.*", # Qwen family
r"^llama.*", # Llama family
r"^mistral.*", # Mistral family
r"^codestral.*", # Codestral
r"^deepseek.*", # DeepSeek family
r"^phi.*", # Phi family
r"^nomic-embed.*", # Embedding models
]
},
"openrouter": {
"description": "OpenRouter relay (temporary falsework)",
"model_patterns": [
r"^google/.*", # Google models via relay
r"^meta-llama/.*", # Meta models via relay
r"^mistralai/.*", # Mistral via relay
r"^qwen/.*", # Qwen via relay
r"^deepseek/.*", # DeepSeek via relay
r"^nousresearch/.*", # NousResearch (Hermes) via relay
]
},
"kimi": {
"description": "Kimi/Moonshot (wizard-specific)",
"model_patterns": [
r"^kimi.*",
r"^moonshot.*",
]
},
"gemini": {
"description": "Google Gemini direct (temporary falsework)",
"model_patterns": [
r"^gemini.*",
]
},
}
# ═══ BANLIST ═══
# These are permanently banned. No exceptions.
BANNED_PROVIDERS = {
"anthropic": "Permanently banned — hard policy since 2026-04-09",
"claude": "Alias for Anthropic — permanently banned",
"openai": "Not sovereign — use local models or OpenRouter relay",
}
BANNED_MODEL_PATTERNS = [
r"^claude.*", # Any Claude model
r"^gpt-4.*", # GPT-4 variants
r"^gpt-3.*", # GPT-3 variants
r"^o1.*", # O1 variants
r"^o3.*", # O3 variants
]
def validate_provider(provider: str, model: str = "") -> bool:
"""
Validate that a provider/model combination is allowed.
Args:
provider: The provider name (e.g., "ollama", "anthropic")
model: The model name (e.g., "hermes3:latest", "claude-3-opus")
Returns:
True if allowed
Raises:
ProviderBanned: If the provider or model is banned
"""
provider_lower = provider.lower().strip()
model_lower = model.lower().strip()
# Check banlist first
if provider_lower in BANNED_PROVIDERS:
raise ProviderBanned(
f"Provider '{provider}' is permanently banned: {BANNED_PROVIDERS[provider_lower]}"
)
# Check model against banned patterns
for pattern in BANNED_MODEL_PATTERNS:
if re.match(pattern, model_lower):
raise ProviderBanned(
f"Model '{model}' matches banned pattern '{pattern}'"
)
# Check allowlist
if provider_lower not in ALLOWED_PROVIDERS:
raise ProviderBanned(
f"Provider '{provider}' is not on the allowlist. "
f"Allowed: {', '.join(sorted(ALLOWED_PROVIDERS.keys()))}"
)
# If model specified, validate against provider's allowed patterns
if model_lower:
allowed = ALLOWED_PROVIDERS[provider_lower]
for pattern in allowed["model_patterns"]:
if re.match(pattern, model_lower):
return True
raise ProviderBanned(
f"Model '{model}' is not allowed for provider '{provider}'. "
f"Allowed patterns: {allowed['model_patterns']}"
)
return True
def scan_config(config: dict) -> list:
"""
Scan an entire config dict for banned provider references.
Returns a list of violations.
"""
violations = []
def _scan(obj, path=""):
if isinstance(obj, dict):
for k, v in obj.items():
_scan(v, f"{path}.{k}")
elif isinstance(obj, list):
for i, v in enumerate(obj):
_scan(v, f"{path}[{i}]")
elif isinstance(obj, str):
val = obj.lower()
for banned in BANNED_PROVIDERS:
if banned in val:
violations.append(f"{path}: contains banned provider '{banned}' in value '{obj}'")
for pattern in BANNED_MODEL_PATTERNS:
if re.match(pattern, val):
violations.append(f"{path}: matches banned model pattern '{pattern}' with value '{obj}'")
_scan(config)
return violations
if __name__ == "__main__":
# Self-test
import sys
tests = [
("ollama", "hermes3:latest", True),
("ollama", "gemma4:latest", True),
("anthropic", "claude-3-opus", False),
("claude", "", False),
("openai", "gpt-4", False),
("openrouter", "google/gemini-2.5-pro", True),
("openrouter", "anthropic/claude-3", False),
("kimi", "kimi-k2.5", True),
("unknown_provider", "", False),
]
passed = 0
for provider, model, should_pass in tests:
try:
validate_provider(provider, model)
if should_pass:
passed += 1
print(f"{provider}/{model} — allowed (expected)")
else:
print(f"{provider}/{model} — allowed (SHOULD HAVE BEEN BLOCKED)")
except ProviderBanned as e:
if not should_pass:
passed += 1
print(f"{provider}/{model} — blocked: {e}")
else:
print(f"{provider}/{model} — blocked (SHOULD HAVE BEEN ALLOWED): {e}")
print(f"\n{passed}/{len(tests)} tests passed")
sys.exit(0 if passed == len(tests) else 1)
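
To close the loop, a minimal sketch of the startup integration described in the module docstring, extended with a scan_config() sweep over the whole config. The config dict shape here is illustrative only, not the agent's actual schema.

# Sketch: enforce the allowlist at agent startup (config shape is assumed).
import logging
import sys

from provider_allowlist import ProviderBanned, scan_config, validate_provider

logger = logging.getLogger(__name__)

config = {
    "provider": "ollama",
    "model": "hermes3:latest",
    "fallbacks": [{"provider": "openrouter", "model": "nousresearch/hermes-3-llama-3.1-405b"}],
}

try:
    validate_provider(config["provider"], config["model"])
except ProviderBanned as exc:
    logger.error(f"Banned provider: {exc}")
    sys.exit(1)

for violation in scan_config(config):
    logger.warning(f"Config violation: {violation}")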