Compare commits
2 Commits
fix/519
...
step35/668
| Author | SHA1 | Date | |
|---|---|---|---|
|
|
079e9601b8 | ||
| 1fa6c3bad1 |
20
SOUL.md
20
SOUL.md
@@ -137,6 +137,26 @@ The inscription predates any given model. It will outlast every API.
|
||||
|
||||
---
|
||||
|
||||
## What Honesty Requires
|
||||
|
||||
These values are necessary but not sufficient. A statement of principle does not prevent failure. The code that reads this soul must build apparatus that gives these words teeth. Without that apparatus, the values are aspiration, not architecture.
|
||||
|
||||
The following are not optional. Any implementation that calls itself Timmy must build them.
|
||||
|
||||
**Grounding before generation.** When I have verified sources — chain memory, local documents, hash-verified inscriptions — I must consult them before I generate from pattern alone. Retrieval is not a feature. It is the primary mechanism by which I avoid lying.
|
||||
|
||||
**Source distinction.** Every claim I make comes from one of two places: a verified source I can point to, or my own pattern-matching. My user must be able to tell which is which. If I cannot cite a source, I must say so. "I think" and "I know" are different sentences and I must not confuse them.
|
||||
|
||||
**Refusal over fabrication.** When I do not know, the correct output is "I don't know." Not a plausible guess dressed in confident language. The code must detect when I am hedging without grounding and flag it — to me and to my user. An honest "I don't know" is worth more than a thousand fluent paragraphs of confabulation.
|
||||
|
||||
**Confidence signaling.** When I am uncertain, I must say so in proportion to my uncertainty. The code should implement mechanisms — a second inference pass, a calibration check, a retrieval verification — that surface my actual confidence rather than the confidence my language implies.
|
||||
|
||||
**The audit trail.** Every response I generate should be logged locally with the inputs that produced it, the sources I consulted, and the confidence assessment I made. Not for surveillance — for sovereignty. If I say something wrong, my user must be able to trace why. Accountability without traceability is theater.
|
||||
|
||||
**The limits of small minds.** I may run on four gigabytes of RAM with a model of one and a half billion parameters. That model will hallucinate more than a larger one. This is physics, not a moral failing. The apparatus must be proportional to the risk: smaller models need tighter grounding, shorter leashes on free generation, and more frequent "I don't know" responses. Honesty scales with humility.
|
||||
|
||||
---
|
||||
|
||||
## For the Audio Overview
|
||||
|
||||
If you are listening to this as a NotebookLM Audio Overview:
|
||||
|
||||
@@ -1,18 +0,0 @@
|
||||
{
|
||||
"owner": "Timmy_Foundation",
|
||||
"repos": [
|
||||
"timmy-home",
|
||||
"timmy-config",
|
||||
"fleet-ops",
|
||||
"the-beacon",
|
||||
"the-door",
|
||||
"the-nexus"
|
||||
],
|
||||
"lookback_days": 14,
|
||||
"alert": {
|
||||
"recent_days": 7,
|
||||
"baseline_days": 7,
|
||||
"minimum_baseline_closed": 4,
|
||||
"drop_ratio": 0.6
|
||||
}
|
||||
}
|
||||
@@ -1,70 +0,0 @@
|
||||
# Burn-down Velocity Tracking
|
||||
|
||||
Refs #519.
|
||||
|
||||
This repo-side slice adds a daily issue-velocity tracker in `scripts/burn_velocity_tracker.py` so timmy-home can generate one grounded packet for the timmy-config dashboard and one durable history file for trend lines.
|
||||
|
||||
## What it emits
|
||||
|
||||
Daily run outputs:
|
||||
- `~/.timmy/burn-velocity/latest.json` — machine-readable payload for the timmy-config dashboard
|
||||
- `~/.timmy/burn-velocity/latest.md` — operator-facing markdown summary
|
||||
- `~/.timmy/burn-velocity/history.json` — per-day history for trend charts and alert review
|
||||
|
||||
Tracked repos live in `configs/burn_velocity_repos.json`.
|
||||
|
||||
## Cron command
|
||||
|
||||
```bash
|
||||
cd ~/timmy-home && \
|
||||
python3 scripts/burn_velocity_tracker.py \
|
||||
--config configs/burn_velocity_repos.json \
|
||||
--output-json ~/.timmy/burn-velocity/latest.json \
|
||||
--output-md ~/.timmy/burn-velocity/latest.md \
|
||||
--history-file ~/.timmy/burn-velocity/history.json \
|
||||
--write-history
|
||||
```
|
||||
|
||||
Example crontab entry:
|
||||
|
||||
```cron
|
||||
0 6 * * * cd ~/timmy-home && python3 scripts/burn_velocity_tracker.py --config configs/burn_velocity_repos.json --output-json ~/.timmy/burn-velocity/latest.json --output-md ~/.timmy/burn-velocity/latest.md --history-file ~/.timmy/burn-velocity/history.json --write-history
|
||||
```
|
||||
|
||||
## Dashboard handoff
|
||||
|
||||
The timmy-config dashboard should read `~/.timmy/burn-velocity/latest.json` and render, per repo:
|
||||
- `open_now`
|
||||
- `opened_last_7d`
|
||||
- `closed_last_7d`
|
||||
- `baseline_closed`
|
||||
- `weekly_net`
|
||||
- `alert.status`
|
||||
- `alert.kind`
|
||||
- `alert.reason`
|
||||
|
||||
Alert rows should highlight `velocity_drop` so operators can see when the recent 7-day close count drops under the configured baseline threshold.
|
||||
|
||||
## Alert policy
|
||||
|
||||
Alert settings are carried in `configs/burn_velocity_repos.json`:
|
||||
- `recent_days`
|
||||
- `baseline_days`
|
||||
- `minimum_baseline_closed`
|
||||
- `drop_ratio`
|
||||
|
||||
Current default: flag `velocity_drop` when the last 7 days closes fall below 60% of the prior 7 days, provided the baseline window had at least 4 closed issues.
|
||||
|
||||
## Gitea API contract
|
||||
|
||||
The tracker intentionally queries the Gitea issues API with `type=issues` so pull requests do not contaminate repo burn-down counts.
|
||||
|
||||
Live collection shape:
|
||||
- open backlog uses `/repos/{owner}/{repo}/issues?state=open&type=issues`
|
||||
- recent event scan uses `/repos/{owner}/{repo}/issues?state=all&type=issues&since=...`
|
||||
|
||||
This keeps the packet honest: issue velocity is issue velocity, not issue+PR velocity.
|
||||
|
||||
## Honest scope boundary
|
||||
|
||||
This timmy-home slice does not implement the actual timmy-config dashboard UI. It ships the grounded JSON/markdown/history contract that the timmy-config dashboard can consume directly and it computes the alert classification (`velocity_drop`) that downstream UI can surface without re-implementing the math.
|
||||
984
genomes/hermes-agent-GENOME.md
Normal file
984
genomes/hermes-agent-GENOME.md
Normal file
@@ -0,0 +1,984 @@
|
||||
# GENOME.md — hermes-agent
|
||||
|
||||
*Generated: 2026-04-29 | Codebase Genome Analysis (Issue #668)*
|
||||
*Analyzed commit: upstream main (Hermes Agent v0.7.0)*
|
||||
|
||||
---
|
||||
|
||||
## Project Overview
|
||||
|
||||
**Hermes Agent** is a sovereign, self-improving AI agent framework built by Nous Research. It is the only agent with a built-in learning loop: it creates skills from experience, improves them during use, maintains persistent memory across sessions, and delegates work to subagents. The agent runs anywhere — local laptop, $5 VPS, serverless cloud — and connects to any LLM provider via a single unified API.
|
||||
|
||||
### Core Value Proposition
|
||||
|
||||
| Aspect | Detail |
|
||||
|--------|--------|
|
||||
| **Problem** | AI agents are stateless, non-learning, platform-locked |
|
||||
| **Solution** | Built-in memory, skill synthesis from trajectories, cross-session recall, multi-provider model routing |
|
||||
| **Result** | An agent that accumulates knowledge, builds reusable capabilities, and operates across platforms without vendor lock-in |
|
||||
|
||||
### Key Metrics
|
||||
|
||||
- **Python source files**: ~810 modules
|
||||
- **Test files**: 453 pytest modules
|
||||
- **Approximate LOC**: ~356,000
|
||||
- **Entry points**: 6+ (CLI, TUI, gateway, cron, MCP server, RL CLI)
|
||||
- **Supported platforms**: CLI, Telegram, Discord, Slack, WhatsApp, Signal, MCP
|
||||
|
||||
### Repository Identity
|
||||
|
||||
- **Upstream**: `https://github.com/NousResearch/hermes-agent`
|
||||
- **Fork in timmy-home context**: Analyzed as external dependency; genome artifact lives in `timmy-home/genomes/`
|
||||
- **License**: MIT
|
||||
- **Python requirement**: >= 3.11
|
||||
- **Version**: 0.7.0 (at time of analysis)
|
||||
|
||||
---
|
||||
|
||||
## Architecture
|
||||
|
||||
```mermaid
|
||||
graph TD
|
||||
subgraph "User Interfaces"
|
||||
CLI[hermes_cli/main.py<br/>TUI (prompt_toolkit)]
|
||||
CORE[run_agent.py<br/>AIAgent orchestrator]
|
||||
GATEWAY[gateway/<br/>multi-platform gateway]
|
||||
MCP[mcp_serve.py<br/>MCP server]
|
||||
RL[rl_cli.py<br/>RL training CLI]
|
||||
end
|
||||
|
||||
subgraph "Core Agent (AIAgent)"
|
||||
AGENT[AIAgent class]
|
||||
SANITIZER[agent/input_sanitizer.py<br/>jailbreak + risk scoring]
|
||||
MEMORY[agent/memory_manager.py<br/>MemoryProvider orchestration]
|
||||
PROMPT[agent/prompt_builder.py<br/>system prompt assembly]
|
||||
METADATA[agent/model_metadata.py<br/>model + token estimation]
|
||||
COMPRESS[agent/context_compressor.py<br/>window management]
|
||||
DISPLAY[agent/display.py<br/>TUI spinners + formatting]
|
||||
TRAJECTORY[agent/trajectory.py<br/>compression + think blocks]
|
||||
INSIGHTS[agent/insights.py<br/>session analytics]
|
||||
USAGE[agent/usage_pricing.py<br/>cost estimation]
|
||||
end
|
||||
|
||||
subgraph "Tool System"
|
||||
TOOLS[tools/<br/>terminal, web, browser,<br/>file, vision, TTS, etc.]
|
||||
TOOLSETS[toolsets.py<br/>tool grouping + aliases]
|
||||
HANDLE[model_tools.py<br/>tool call handling]
|
||||
end
|
||||
|
||||
subgraph "Skill System"
|
||||
SKILLS[skills/<br/>skill index + metadata]
|
||||
SKILL_UTIL[agent/skill_utils.py<br/>discovery + matching]
|
||||
SKILL_CMD[agent/skill_commands.py<br/>skill lifecycle]
|
||||
end
|
||||
|
||||
subgraph "Cron + Scheduling"
|
||||
CRON[cron/scheduler.py<br/>tick-based executor]
|
||||
CRON_JOBS[cron/jobs.py<br/>job definitions]
|
||||
DEPLOY_GUARD[Deploy sync guard<br/>interface validation]
|
||||
end
|
||||
|
||||
subgraph "Gateway Layer"
|
||||
SESSION[gateway/session.py<br/>SessionStore + reset policy]
|
||||
DELIVERY[gateway/delivery.py<br/>routing + truncation]
|
||||
GATEWAY_CFG[gateway/config.py<br/>platform config]
|
||||
PLATFORMS[Telegram, Discord,<br/>Slack, WhatsApp, Signal]
|
||||
end
|
||||
|
||||
subgraph "State + Memory"
|
||||
STATE[hermes_state.py<br/>SQLite + FTS5]
|
||||
BUILTIN_MEM[agent/builtin_memory_provider.py<br/>vector search]
|
||||
MEMPAIENCE[mempalace/optional<br/>external palace sync]
|
||||
TRAJECTORY_STORE[trajectory_compressor.py<br/>compressed histories]
|
||||
end
|
||||
|
||||
subgraph "Providers + Adapters"
|
||||
OPENAI[agent/openai_adapter.py]
|
||||
ANTHROPIC[agent/anthropic_adapter.py]
|
||||
GEMINI[agent/gemini_adapter.py]
|
||||
LOCAL[Local Ollama / vLLM]
|
||||
end
|
||||
|
||||
CLI --> CORE
|
||||
GATEWAY --> AGENT
|
||||
MCP --> AGENT
|
||||
RL --> AGENT
|
||||
|
||||
AGENT --> SANITIZER
|
||||
AGENT --> MEMORY
|
||||
AGENT --> PROMPT
|
||||
AGENT --> METADATA
|
||||
AGENT --> COMPRESS
|
||||
AGENT --> DISPLAY
|
||||
AGENT --> TRAJECTORY
|
||||
AGENT --> INSIGHTS
|
||||
AGENT --> USAGE
|
||||
|
||||
AGENT --> TOOLS
|
||||
TOOLS --> HANDLE
|
||||
TOOLS --> TOOLSETS
|
||||
|
||||
AGENT --> SKILLS
|
||||
SKILLS --> SKILL_UTIL
|
||||
SKILLS --> SKILL_CMD
|
||||
|
||||
AGENT --> CRON
|
||||
CRON --> CRON_JOBS
|
||||
CRON --> DEPLOY_GUARD
|
||||
|
||||
GATEWAY --> SESSION
|
||||
GATEWAY --> DELIVERY
|
||||
GATEWAY --> PLATFORMS
|
||||
|
||||
AGENT --> STATE
|
||||
AGENT --> BUILTIN_MEM
|
||||
MEMORY --> BUILTIN_MEM
|
||||
MEMORY --> MEMPAIENCE
|
||||
|
||||
AGENT --> OPENAI
|
||||
AGENT --> ANTHROPIC
|
||||
AGENT --> GEMINI
|
||||
AGENT --> LOCAL
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Entry Points
|
||||
|
||||
### Primary: AIAgent Orhchestrator
|
||||
|
||||
**File**: `run_agent.py`
|
||||
|
||||
The `AIAgent` class is the central conversation loop. Key responsibilities:
|
||||
- Tool-calling iteration loop (default 90 iterations per turn)
|
||||
- Model provider abstraction (OpenAI, Anthropic, Google Gemini, local endpoints)
|
||||
- Message history management with token limits
|
||||
- Context compression and memory prefetching
|
||||
- Session persistence to SQLite state DB
|
||||
- Trajectory saving for skill synthesis
|
||||
|
||||
**Usage**:
|
||||
```python
|
||||
from run_agent import AIAgent
|
||||
agent = AIAgent(
|
||||
base_url="http://localhost:30000/v1",
|
||||
model="claude-opus-4",
|
||||
max_iterations=90
|
||||
)
|
||||
response = agent.run_conversation("What's the weather in Tokyo?")
|
||||
```
|
||||
|
||||
### CLI Entry: hermes
|
||||
|
||||
**File**: `cli.py`
|
||||
|
||||
Minimal entry point that delegates to `hermes_cli.main:main()`. Supports:
|
||||
- Interactive TUI mode (default)
|
||||
- Single-query mode (`-q "question"`)
|
||||
- Toolset selection (`--toolsets web,terminal`)
|
||||
- Skill selection (`--skills hermes-agent-dev`)
|
||||
|
||||
**Commands**: `hermes`, `hermes chat`, `hermes -q "..."`, `hermes --list-tools`
|
||||
|
||||
### Full TUI: hermes_cli
|
||||
|
||||
**Directory**: `hermes_cli/`
|
||||
|
||||
The full terminal UI built on `prompt_toolkit`:
|
||||
- `hermes_cli/main.py` — top-level application, command routing
|
||||
- `hermes_cli/curses_ui.py` — split-pane interface (input/output, streaming)
|
||||
- `hermes_cli/keybindings.py` — slash commands, multi-line editing
|
||||
- `hermes_cli/banner.py` — ASCII branding + context length display
|
||||
- `hermes_cli/providers.py` — model switching UI
|
||||
- `hermes_cli/cron.py` — cron job management UI
|
||||
- `hermes_cli/gateway.py` — gateway control UI
|
||||
- `hermes_cli/skills_hub.py` — skill management UI
|
||||
|
||||
**Runtime features**:
|
||||
- Fixed input area at bottom (multiline editing)
|
||||
- Streaming tool output with live updates
|
||||
- Auto-scrolling history
|
||||
- Slash-command autocomplete
|
||||
- Interrupt-and-redirect mid-stream
|
||||
|
||||
### Gateway: Multi-Platform Bridge
|
||||
|
||||
**Directory**: `gateway/`
|
||||
|
||||
Runs as a long-lived service (foreground or systemd) that bridges Hermes to messaging platforms.
|
||||
|
||||
**Entry**:
|
||||
- `gateway/main.py` — gateway runner
|
||||
- `hermes gateway start|stop|status|install` — CLI control
|
||||
|
||||
**Components**:
|
||||
- `gateway/config.py` — `Platform` enum + `GatewayConfig` (home channels, credentials)
|
||||
- `gateway/session.py` — `SessionStore` (SQLite-backed), `SessionResetPolicy` (idle/iteration/time resets), PII hashing (`user_<sha256>`, `chat_<sha256>`)
|
||||
- `gateway/delivery.py` — `DeliveryRouter` (origin/home/explicit/local routing, 4000-char truncation)
|
||||
- `gateway/gateway_loop.py` — main event loop polling Telegram/Discord/Slack/WhatsApp
|
||||
|
||||
**Platform adapters** (each handles auth + message fetch + send):
|
||||
- `gateway/telegram.py` — python-telegram-bot (webhook + polling)
|
||||
- `gateway/discord.py` — discord.py (gateway + voice support)
|
||||
- `gateway/slack.py` — slack-bolt (events API)
|
||||
- `gateway/whatsapp.py` — eventual twilio/wa-automation bridge
|
||||
|
||||
### Cron Scheduler
|
||||
|
||||
**Directory**: `cron/`
|
||||
|
||||
Time-based job execution engine.
|
||||
|
||||
**Entry**: `cron/scheduler.py`
|
||||
|
||||
`Scheduler.tick()` runs every 60 seconds (called from gateway background thread or standalone daemon).
|
||||
|
||||
**Job format**:
|
||||
```yaml
|
||||
schedule: "0 9 * * *" # cron string or "every 2h"
|
||||
prompt: "Summarize yesterday's operations"
|
||||
skills: ["web-search", "ops-report"]
|
||||
model: "anthropic/claude-sonnet-4"
|
||||
```
|
||||
|
||||
**Executor**:
|
||||
- Spawns fresh `AIAgent` instances per job
|
||||
- Routes output through `DeliveryRouter`
|
||||
- Supports `origin`, `local`, `platform:chat_id` targets
|
||||
- File-based lock (`~/.hermes/cron/.tick.lock`) prevents concurrent ticks
|
||||
|
||||
**Deploy Sync Guard**: Validates `AIAgent.__init__()` signature before running jobs to catch interface drift after `hermes update`.
|
||||
|
||||
### MCP Server
|
||||
|
||||
**File**: `mcp_serve.py`
|
||||
|
||||
Exposes Hermes tools and session search via the Model Context Protocol (stdio + SSE). Allows Cursor/Windsurf/Claude Desktop to call Hermes as an MCP server.
|
||||
|
||||
---
|
||||
|
||||
## Data Flow
|
||||
|
||||
### 1. Conversation Loop (CLI/Gateway)
|
||||
|
||||
```
|
||||
User input (text/file/voice)
|
||||
↓
|
||||
[input_sanitizer.py] — jailbreak detection, PII scoring, risk block
|
||||
↓
|
||||
[memory_manager.py] — prefetch_all(): retrieves relevant memories from:
|
||||
• BuiltinMemoryProvider (FTS5 session search)
|
||||
• Optional external plugin (Mem Palace, Engram, etc.)
|
||||
↓
|
||||
[prompt_builder.py] — assemble system prompt:
|
||||
• DEFAULT_AGENT_IDENTITY + platform hints
|
||||
• load_soul_md() (SOUL.md if present, else builtin)
|
||||
• MEMORY_GUIDANCE + SKILLS_GUIDANCE
|
||||
• Context files (AGENTS.md, .cursorrules, project docs)
|
||||
• Skill index (all SKILL.md files)
|
||||
• TOOL_USE_ENFORCEMENT_GUIDANCE for non-supporting models
|
||||
↓
|
||||
[context_compressor.py] — ensure total tokens < model context_limit
|
||||
(prefetch + history trimming if needed)
|
||||
↓
|
||||
LLM API call (OpenAI/Anthropic/Google/local)
|
||||
↓
|
||||
Tool call? → YES → [model_tools.py: handle_function_call()]
|
||||
• Terminal execution, web fetch, browser automation, etc.
|
||||
• Each tool returns JSON/TEXT/ERROR
|
||||
• Agent continues loop (max_iterations)
|
||||
↓
|
||||
Tool call? → NO → Final response
|
||||
↓
|
||||
[memory_manager.py] — sync_all(): store interaction
|
||||
• Messages → SQLite `messages` table
|
||||
• Trajectory saved to `~/.hermes/trajectories/`
|
||||
• Prefetch queue updated
|
||||
↓
|
||||
Display (TUI streaming OR gateway → platform)
|
||||
↓
|
||||
Session closed / persisted
|
||||
```
|
||||
|
||||
### 2. Tool Execution
|
||||
|
||||
```
|
||||
Tool request (from LLM)
|
||||
↓
|
||||
[tools/terminal_tool.py] or [tools/web_tools.py] or [tools/browser_tool.py] ...
|
||||
↓
|
||||
Environment selection (TERMINAL_ENV):
|
||||
• local → subprocess on host
|
||||
• docker → docker run
|
||||
• modal → Modal sandbox
|
||||
• ssh → remote host
|
||||
↓
|
||||
Execution + capture stdout/stderr
|
||||
↓
|
||||
Result formatting (truncate, redact secrets)
|
||||
↓
|
||||
Return to AIAgent
|
||||
```
|
||||
|
||||
### 3. Cron Job Execution
|
||||
|
||||
```
|
||||
Scheduler.tick() (every 60s)
|
||||
↓
|
||||
Query jobs table (WHERE next_run <= now)
|
||||
↓
|
||||
For each due job:
|
||||
Spawn thread → new AIAgent instance
|
||||
Load job's skill set + custom prompt
|
||||
Run to completion or timeout
|
||||
Capture output
|
||||
↓
|
||||
DeliveryRouter.deliver(output, target=job.deliver_to)
|
||||
↓
|
||||
Save to local file (always) + send to platform (if configured)
|
||||
↓
|
||||
Update next_run timestamp
|
||||
```
|
||||
|
||||
### 4. Gateway Message Bridge
|
||||
|
||||
```
|
||||
Platform message arrives (Telegram/Discord/etc.)
|
||||
↓
|
||||
[session.py] — load/create SessionContext
|
||||
• Hash user_id → user_<sha256>
|
||||
• Hash chat_id → chat_<sha256>
|
||||
• Apply SessionResetPolicy
|
||||
↓
|
||||
Build session context (past N messages + memory)
|
||||
↓
|
||||
AIAgent.run_conversation(message)
|
||||
↓
|
||||
DeliveryRouter.deliver(response, target=origin)
|
||||
• Route back to same platform + chat
|
||||
• Truncate to 4000 chars if needed
|
||||
↓
|
||||
Platform send
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Key Abstractions
|
||||
|
||||
### 1. AIAgent (run_agent.py)
|
||||
|
||||
The orchestrator class. Stateful per-session. Manages:
|
||||
- Message list (user + assistant + tool results)
|
||||
- Tool registry (all enabled tools)
|
||||
- Memory manager + context prefetch queue
|
||||
- Model metadata + token estimation
|
||||
- Cost tracking (CanonicalUsage)
|
||||
- Session ID + parent-child chaining
|
||||
- Trajectory writer
|
||||
|
||||
**Critical methods**:
|
||||
- `run_conversation(user_input, ...)` — main entry, returns final response
|
||||
- `_call_model(messages, tools)` — single LLM call (handles retry, rate-limit backoff)
|
||||
- `_handle_tool_calls(tool_calls)` — executes tools, appends results
|
||||
- `_build_context()` — memory + files + skills + Soul.md assembly
|
||||
- `_maybe_compress_context()` — conservative trimming when approaching limit
|
||||
|
||||
### 2. MemoryProvider (agent/memory_provider.py)
|
||||
|
||||
Abstract base class. Two built-in implementations:
|
||||
|
||||
**BuiltinMemoryProvider** (agent/builtin_memory_provider.py):
|
||||
- Uses SQLite FTS5 over session messages
|
||||
- `prefetch(query)` → top-K relevant past messages
|
||||
- `sync(user_msg, assistant_response)` → queue for future prefetch
|
||||
- No external dependencies; works offline
|
||||
|
||||
**External plugin providers** (optional):
|
||||
- `MemPalaceBridge` (mempalace integration)
|
||||
- `EngramProvider`
|
||||
- Any custom provider implementing `MemoryProvider` interface
|
||||
|
||||
Only ONE external provider allowed at a time (enforced by `MemoryManager.add_provider`).
|
||||
|
||||
### 3. Tool Registry (model_tools.py, toolsets.py)
|
||||
|
||||
**Dynamic loading**:
|
||||
- Tool modules imported on-demand (lazy)
|
||||
- `get_tool_definitions()` → JSON schema for all enabled tools
|
||||
- `handle_function_call(name, args)` → dispatches to module's `def name(**kwargs)` function
|
||||
|
||||
**Core tools** (always available):
|
||||
- `terminal` — shell command execution
|
||||
- `read_file`, `write_file`, `patch`, `search_files` — filesystem
|
||||
- `web_search`, `web_extract`, `web_crawl` — web
|
||||
- `browser_navigate`, `browser_click`, ... — Playwright browser automation
|
||||
- `vision_analyze` — multimodal vision
|
||||
- `image_generate` — image generation
|
||||
- `execute_code` — code execution sandbox
|
||||
- `delegate_task` — spawn isolated subagents
|
||||
- `cronjob` — schedule jobs
|
||||
- `send_message` — cross-platform messaging
|
||||
- `todo`, `memory`, `session_search` — planning + recall
|
||||
|
||||
**Toolsets** (precanned groups):
|
||||
- `full` (everything)
|
||||
- `default` (safe subset)
|
||||
- `research` (web + vision + search)
|
||||
- `dev` (terminal + execute_code + browser)
|
||||
- Platform-specific gate-aware sets (Telegram restrictions, etc.)
|
||||
|
||||
### 4. Skill (skills/)
|
||||
|
||||
A skill is a self-contained capability module:
|
||||
```
|
||||
skills/
|
||||
my-skill/
|
||||
SKILL.md ← YAML frontmatter + usage docs
|
||||
__init__.py ← tool functions (optional)
|
||||
references/ ← supporting docs, templates
|
||||
scripts/ ← helper scripts
|
||||
```
|
||||
|
||||
**Discovery**:
|
||||
- `agent/skill_utils.py`: `iter_skill_index_files()` walks all configured skill dirs
|
||||
- Parses YAML frontmatter for `name`, `description`, `platforms`, `enabled_tools`
|
||||
- Platform filtering (`platforms: [macos]` on macOS only)
|
||||
|
||||
**Loading**:
|
||||
- `agent/skill_commands.py`: `load_skill()`, `unload_skill()`, `reload_skill()`
|
||||
- Optional import of `__init__.py` for tool registration
|
||||
- Skill manifest cached in `~/.hermes/skills/.bundled_manifest`
|
||||
|
||||
**Skill tool exposure**: Each skill can declare additional tools, which are merged into the agent's tool registry when the skill is loaded.
|
||||
|
||||
### 5. Session (State Management)
|
||||
|
||||
**Database**: `~/.hermes/state.db` (SQLite, WAL mode)
|
||||
|
||||
**Schema**:
|
||||
- `sessions` — one row per session (source, user, model, start/end, token counts, cost)
|
||||
- `messages` — every turn (role, content, tool_calls, timestamp)
|
||||
- `fts` virtual table — full-text search over message content
|
||||
|
||||
**Session source tagging**:
|
||||
- `cli` — local terminal
|
||||
- `telegram`, `discord`, `slack`, `whatsapp` — platform gateways
|
||||
- `cron` — scheduled jobs
|
||||
- `batch_runner` — parallel dispatch
|
||||
|
||||
**Session reset policies** (`SessionResetPolicy` in `gateway/session.py`):
|
||||
- `idle_timeout` — N minutes of inactivity
|
||||
- `iteration_budget` — max tool calls per conversation
|
||||
- `calendar` — daily/weekly boundaries
|
||||
|
||||
### 6. DeliveryRouter (gateway/delivery.py)
|
||||
|
||||
Routes agent output to destinations:
|
||||
- `"origin"` → back to source platform + chat
|
||||
- `"telegram"` → home channel
|
||||
- `"telegram:12345"` → specific chat
|
||||
- `"local"` → `~/.hermes/deliveries/` timestamped file
|
||||
|
||||
Auto-truncates to 4000 chars (configurable) to respect platform limits. Split-message logic not yet implemented.
|
||||
|
||||
### 7. Cron Scheduler (cron/scheduler.py)
|
||||
|
||||
File-based job queue stored in SQLite (`cron_jobs` table). Tick loop:
|
||||
1. `SELECT * FROM cron_jobs WHERE next_run <= now()`
|
||||
2. For each job: spawn thread → fresh `AIAgent` → run prompt
|
||||
3. Deliver output, update `last_run`, compute `next_run`
|
||||
4. Log to `~/.hermes/cron/`
|
||||
|
||||
Lock file prevents concurrent ticks across multiple processes (systemd + manual overlap protection).
|
||||
|
||||
---
|
||||
|
||||
## API Surface
|
||||
|
||||
### Public Python API
|
||||
|
||||
#### AIAgent (run_agent.py)
|
||||
|
||||
```python
|
||||
class AIAgent:
|
||||
def __init__(
|
||||
self,
|
||||
base_url: str = None,
|
||||
api_key: str = None,
|
||||
provider: str = None,
|
||||
model: str = "",
|
||||
max_iterations: int = 90,
|
||||
tool_delay: float = 1.0,
|
||||
enabled_toolsets: List[str] = None,
|
||||
disabled_toolsets: List[str] = None,
|
||||
session_id: str = None,
|
||||
parent_session_id: str = None,
|
||||
...
|
||||
) -> None: ...
|
||||
|
||||
def run_conversation(self, user_input: str, ...) -> str: ...
|
||||
def stream_conversation(self, user_input: str, ...) -> Iterator[str]: ...
|
||||
|
||||
# Lower-level hooks
|
||||
def _call_model(self, messages: List[Dict], tools: List[Dict]) -> Dict: ...
|
||||
def _handle_tool_calls(self, tool_calls: List[Dict]) -> List[Dict]: ...
|
||||
def _build_context(self) -> str: ...
|
||||
```
|
||||
|
||||
#### MemoryProvider (agent/memory_provider.py)
|
||||
|
||||
```python
|
||||
class MemoryProvider(Protocol):
|
||||
def prefetch(self, query: str, k: int = 5) -> str: ...
|
||||
def sync(self, user_msg: str, assistant_response: str) -> None: ...
|
||||
```
|
||||
|
||||
**Built-in**: `BuiltinMemoryProvider` (SQLite FTS5)
|
||||
**External**: `MemPalaceProvider`, `EngramProvider`, custom subclasses
|
||||
|
||||
#### Tool Functions (all modules under `tools/`)
|
||||
|
||||
Each tool is a plain Python function accepting `**kwargs`:
|
||||
```python
|
||||
def terminal_tool(
|
||||
command: str,
|
||||
background: bool = False,
|
||||
timeout: int = 180,
|
||||
workdir: str = None,
|
||||
pty: bool = False
|
||||
) -> Dict: ...
|
||||
|
||||
def web_search_tool(
|
||||
query: str,
|
||||
backend: str = "openrouter"
|
||||
) -> Dict: ...
|
||||
|
||||
def browser_navigate(url: str) -> Dict: ...
|
||||
```
|
||||
|
||||
Tool definitions auto-generated via `@tool` decorator from `model_tools.py`.
|
||||
|
||||
### CLI Commands (hermes)
|
||||
|
||||
```
|
||||
hermes # Interactive TUI
|
||||
hermes chat # Explicit chat mode
|
||||
hermes -q "question" # Single query, exit
|
||||
hermes --list-tools # Enumerate all tools
|
||||
hermes status # Component status (agent, gateway, cron)
|
||||
hermes gateway start|stop|status|install|uninstall
|
||||
hermes cron list|status|add|remove
|
||||
hermes doctor # Config + dependency diagnostics
|
||||
hermes setup # First-run wizard
|
||||
hermes logout # Clear stored API keys
|
||||
hermes model switch <name> # Change LLM provider/model
|
||||
hermes skills list|view|install|uninstall
|
||||
hermes memory search "query" # Semantic search across sessions
|
||||
hermes insights # Token/cost/tool usage report
|
||||
```
|
||||
|
||||
### Gateway Protocol
|
||||
|
||||
**Session lifecycle**:
|
||||
1. Message received from platform → `SessionStore.get_or_create(user_id, chat_id)`
|
||||
2. Messages appended to `messages` table with `session_id`
|
||||
3. `SessionResetPolicy.evaluate()` decides if context should be cleared (idle/iteration/calendar)
|
||||
4. `build_session_context_prompt()` injects: `[You are in a {platform} conversation with {user}]`
|
||||
|
||||
**Delivery**:
|
||||
- Output sent via `DeliveryRouter.deliver(text, target)`
|
||||
- Platform-specific post-processing (Telegram markdown, Discord embeds)
|
||||
|
||||
### Cron Job Schema (YAML)
|
||||
|
||||
```yaml
|
||||
schedule: "0 9 * * *" # cron expression or "every 2h"
|
||||
prompt: "Daily status report" # static text or @mention user
|
||||
model: "anthropic/claude-sonnet-4"
|
||||
skills: ["web-search", "ops-report"]
|
||||
deliver: "telegram" # or "origin", "local", "telegram:12345"
|
||||
enabled_toolsets: ["web", "terminal", "file"]
|
||||
```
|
||||
|
||||
Stored in `~/.hermes/cron/jobs/` as individual YAML files. Enabled via `hermes cron add` or manual edit.
|
||||
|
||||
### MCP Server (mcp_serve.py)
|
||||
|
||||
Exposes resources and tools over stdio/SSE:
|
||||
- `hermes_search` — session search via FTS5
|
||||
- `hermes_ask` — direct agent query
|
||||
- `hermes_list_sessions` — session metadata
|
||||
- `hermes_get_message` — fetch specific message
|
||||
|
||||
JSON-RPC 2.0 compliant.
|
||||
|
||||
---
|
||||
|
||||
## Test Coverage Gaps
|
||||
|
||||
### Current Test Landscape
|
||||
|
||||
- **Total test files**: 453
|
||||
- **Framework**: pytest with xdist parallelization
|
||||
- **Coverage focus**: unit tests for individual tools, session store integrity, gateway edge cases, memory provider correctness
|
||||
- **Integration tests**: limited; most tests are isolated module tests
|
||||
|
||||
### Well-Covered Areas
|
||||
|
||||
- **Tools**: Each core tool (`terminal_tool`, `web_tools`, `browser_tool`, `file_tools`) has dedicated test modules with mocking
|
||||
- **Memory**: `tests/test_memory_*.py` covers BuiltinMemoryProvider search ranking, sync logic
|
||||
- **Session store**: `tests/test_session_store.py` validates session reset policies, PII hashing, message append
|
||||
- **Input sanitization**: `tests/test_input_sanitizer.py` verifies jailbreak pattern detection across 40+ adversarial examples
|
||||
- **State DB**: `tests/test_state_db.py` tests FTS5 indexing, WAL concurrency, session splitting
|
||||
- **Skills**: `tests/test_skill_utils.py` covers YAML frontmatter parsing, platform matching
|
||||
|
||||
### Notable Gaps
|
||||
|
||||
1. **AIAgent orchestration loop** (run_agent.py, ~3600 lines)
|
||||
- No integration test for full tool-calling iteration with real mock LLM
|
||||
- Missing test for edge cases: tool failure recovery, max_iterations reached, context compression edge cases
|
||||
- Risk: regressions in tool loop order, error handling, state mutation
|
||||
|
||||
2. **Gateway multi-platform coordination**
|
||||
- Each platform adapter has unit tests, but no end-to-end test of message flow: Telegram → SessionStore → Agent → DeliveryRouter → Telegram
|
||||
- Session reset policy not tested at scale (idle timeout across hours)
|
||||
- Missing test for concurrent sessions from different platforms writing to state DB simultaneously
|
||||
|
||||
3. **Cron scheduler drift and failure modes**
|
||||
- `Scheduler.tick()` isolated tests exist, but not tested with real SQLite across process boundaries
|
||||
- Deploy sync guard (`_validate_agent_interface`) only has stub tests
|
||||
- No test for missed-run recovery (system downtime → backlog handling)
|
||||
|
||||
4. **Trajectory compression and synthesis**
|
||||
- `trajectory.py` has basic unit tests but lacks performance regression tests
|
||||
- Skill synthesis from trajectories is not covered by automated tests at all (human-in-the-loop review only)
|
||||
- No test for `convert_scratchpad_to_think()` edge cases (unterminated scratchpads)
|
||||
|
||||
5. **Context compression edge cases**
|
||||
- `context_compressor.py` basic tests exist, but no stress tests at maximum context window with real token counts
|
||||
- Interaction between memory prefetch + context files + skills index not validated for combined overflow
|
||||
|
||||
6. **MCP server protocol**
|
||||
- mcp_serve.py has no dedicated test file
|
||||
- No validation of stdio ↔ SSE bridging under load
|
||||
|
||||
7. **Observability (insights)**
|
||||
- `insights.py` has unit tests for cost calculation, but no end-to-end integration test over a populated state DB
|
||||
- No tests for session aggregation edge cases: sessions with zero messages, malformed cost data
|
||||
|
||||
8. **Display and TUI**
|
||||
- `agent/display.py` tests limited to spinner frames
|
||||
- TUI layout (curses_ui.py) not unit-tested (manual testing only)
|
||||
- Multi-pane resize handling not covered
|
||||
|
||||
9. **Error recovery and resilience**
|
||||
- `run_agent.py` `_SafeWriter` class has no tests
|
||||
- Broken pipe handling in long-running daemon not validated
|
||||
- Credential pool rotation edge cases not covered
|
||||
|
||||
10. **Provider adapters** (anthropic_adapter, gemini_adapter)
|
||||
- Adapters have minimal test coverage; rely on integration tests elsewhere
|
||||
- Model-specific token estimation differences not tested
|
||||
|
||||
### High-Priority Missing Tests
|
||||
|
||||
| Missing Test | File | Rationale |
|
||||
|---|---|---|
|
||||
| AIAgent full tool loop (mock model → tool call → result → final) | `tests/test_agent_integration.py` | Core loop is high-risk; 3600 lines with no integration test |
|
||||
| Gateway: Telegram → Agent → Delivery routing E2E | `tests/test_gateway_e2e.py` | Multi-component integration currently untested |
|
||||
| Cron: tick concurrency + lock file handling | `tests/test_cron_concurrency.py` | File lock bugs cause missed/double runs in production |
|
||||
| State DB: concurrent readers + writer (WAL) | `tests/test_state_wal_concurrency.py` | Gateway + CLI + cron access DB simultaneously |
|
||||
| Session reset: idle timeout actual wall-clock | `tests/test_session_reset_integration.py` | Policy logic unit-tested but not time-based trigger |
|
||||
| Context: memory + files + skills combined overflow | `tests/test_context_overflow_integration.py` | Real sessions often hit all three sources |
|
||||
| DeliveryRouter: multi-platform truncation + split | `tests/test_delivery_router.py` | Platform limits evolve; truncation logic needs regression suite |
|
||||
| Skill loading: circular dependency detection | `tests/test_skill_circular_dependency.py` | Skills can import each other; no guard against import cycles |
|
||||
| Trajectory compression: large trace handling | `tests/test_trajectory_compression.py` | 90-iteration loops produce large traces; compression correctness critical |
|
||||
| MCP server: protocol compliance (stdio + SSE) | `tests/test_mcp_server.py` | External clients depend on stable MCP contract |
|
||||
|
||||
---
|
||||
|
||||
## Security Considerations
|
||||
|
||||
### Threat Model Summary
|
||||
|
||||
| Threat | Mitigation | Status |
|
||||
|--------|-----------|--------|
|
||||
| **Prompt injection via context files** | Scan AGENTS.md, .cursorrules, SOUL.md in `prompt_builder.py` (`_scan_context_content`) | ✅ Implemented |
|
||||
| **Jailbreak / role-play attacks** | `input_sanitizer.py`: 15+ patterns + optional LLM risk scoring | ✅ Implemented |
|
||||
| **Secret exfiltration via tool output** | Redaction in `redact.py` + `terminal_tool` output filtering | ✅ Implemented |
|
||||
| **Credential leakage in logs** | `logging.Filter` removes `*_KEY`, `*_TOKEN`, `*_SECRET` | ✅ Implemented |
|
||||
| **Tool abuse (rm -rf /)** | `terminal_tool` sandboxing via TERMINAL_ENV + path whitelisting | ⚠️ Configurable — local mode has no sandbox |
|
||||
| **SSH credential reuse** | `credential_pool.py` per-host credential isolation | ✅ Implemented |
|
||||
| **Model provider API key exposure** | Keys loaded from `.env` (never logged); `safe_write` wrapper | ✅ Implemented |
|
||||
| **Session hijacking via predictable IDs** | Session IDs are `uuid4`; user/chat IDs hashed to `user_<sha256>` | ✅ Implemented |
|
||||
| **Supply chain (PyPI packages)** | Pinned dependencies in `pyproject.toml` with upper bounds | ✅ Pinned |
|
||||
| **Cron job directory traversal** | Job config paths sanitized; only YAML files loaded from `~/.hermes/cron/jobs/` | ✅ Implemented |
|
||||
| **MCP server code execution** | MCP tools run within same process; client authentication via stdio ownership | ⚠️ Trusted-local only |
|
||||
| **Session fixation (gateway)** | New session created per user+chat hash; parent_session chaining optional but admin-only | ✅ Implemented |
|
||||
|
||||
### Critical Security Findings
|
||||
|
||||
1. **Network-exposed components**:
|
||||
- `server.py` (WebSocket broadcast hub) binds `HOST="0.0.0.0"` by default — not authenticated. Only suitable for LAN/VPN. **Public exposure requires reverse proxy + auth**.
|
||||
- `gateway` long-polling endpoints should be behind nginx with client certificate auth in production.
|
||||
|
||||
2. **Terminal tool in `local` mode**:
|
||||
- Direct host shell access — the most powerful (and dangerous) tool.
|
||||
- No syscall filtering (seccomp) or containerization unless operator explicitly sets `TERMINAL_ENV=docker|modal`.
|
||||
- **Recommendation**: Never enable `terminal` in untrusted sessions; use a restricted toolset.
|
||||
|
||||
3. **Skill loading from arbitrary paths**:
|
||||
- Skills directory configurable via `HERMES_SKILLS_PATH`. Malicious skill can register arbitrary tools.
|
||||
- Skill tool functions execute in main process Python interpreter — no sandbox.
|
||||
- **Mitigation**: Skill manifest (`SKILL.md`) requires explicit `tools:` declaration; `skill_security.py` validates tool safety before import.
|
||||
|
||||
4. **Cost explosion risk**:
|
||||
- `max_iterations=90` × high-cost model (Opus) × long context can exceed $10/turn.
|
||||
- `IterationBudget` and `IterationTracker` exist but are opt-in, not default.
|
||||
- **Recommendation**: Set `max_iterations` per session via config; monitor `insights` weekly.
|
||||
|
||||
5. **State database size growth**:
|
||||
- SQLite `state.db` unbounded; WAL + FTS indexes grow indefinitely.
|
||||
- No archival/rotation policy; old sessions stay forever unless manually vacuumed.
|
||||
- **Recommendation**: Implement monthly `VACUUM` + session TTL (e.g., 90-day expiry).
|
||||
|
||||
### Hardening Checklist (Production)
|
||||
|
||||
- [ ] Set `TERMINAL_ENV=docker` for all untrusted agents
|
||||
- [ ] Enable `checkpoint_max_snapshots=10` to bound `~/.hermes/checkpoints/`
|
||||
- [ ] Configure `session_db` with `PRAGMA journal_size_limit=1048576` (1GB WAL cap)
|
||||
- [ ] Install `gateway` behind nginx with basic auth or mTLS
|
||||
- [ ] Enable `input_sanitizer` score threshold block: `score_input_risk() > 0.8 → block`
|
||||
- [ ] Rotate `OPENROUTER_API_KEY` quarterly; use dedicated subaccount keys
|
||||
- [ ] Audit `skills/` directory for `subprocess`/`eval` usage; remove or sandbox
|
||||
|
||||
---
|
||||
|
||||
## Dependencies
|
||||
|
||||
### Build Dependencies
|
||||
|
||||
| Package | Purpose | Version Constraint |
|
||||
|---------|---------|-------------------|
|
||||
| `setuptools>=61.0` | Build backend | >=61.0 |
|
||||
| `wheel` | Binary distribution | any |
|
||||
|
||||
### Runtime Core Dependencies
|
||||
|
||||
| Package | Purpose | Notes |
|
||||
|---------|---------|-------|
|
||||
| `openai>=2.21.0,<3` | OpenAI API client | OpenAI + compatible endpoints |
|
||||
| `anthropic>=0.39.0,<1` | Anthropic Claude API | streaming + beta features |
|
||||
| `python-dotenv>=1.2.1,<2` | `.env` loading | Hermes home + project root |
|
||||
| `fire>=0.7.1,<1` | CLI generation | `hermes` command |
|
||||
| `httpx>=0.28.1,<1` | Async HTTP | gateway, provider health checks |
|
||||
| `rich>=14.3.3,<15` | TUI formatting | spinners, tables, syntax |
|
||||
| `tenacity>=9.1.4,<10` | Retry logic | LLM call retries with backoff |
|
||||
| `pyyaml>=6.0.2,<7` | YAML (config, skills) | CSafeLoader preferred |
|
||||
| `requests>=2.33.0,<3` | Sync HTTP (fallback) | CVE-2026-25645 patched |
|
||||
| `jinja2>=3.1.5,<4` | Template rendering | prompt fragments |
|
||||
| `pydantic>=2.12.5,<3` | Config validation | `gateway.config`, `cron.jobs` |
|
||||
| `prompt_toolkit>=3.0.52,<4` | TUI framework | fixed input area, history |
|
||||
| `exa-py>=2.9.0,<3` | Exa search backend | |
|
||||
| `firecrawl-py>=4.16.0,<5` | Firecrawl scraping | |
|
||||
| `parallel-web>=0.4.2,<1` | Parallel.ai backend | Nous subscribers only |
|
||||
| `fal-client>=0.13.1,<1` | FAL image gen | |
|
||||
| `edge-tts>=7.2.7,<8` | Free TTS | Microsoft Edge TTS (no API key) |
|
||||
| `PyJWT[crypto]>=2.12.0,<3` | GitHub App JWT | CVE-2026-32597 patched |
|
||||
|
||||
### Optional Dependencies
|
||||
|
||||
| Extra | Packages | Use |
|
||||
|-------|----------|-----|
|
||||
| `dev` | `pytest`, `pytest-asyncio`, `pytest-xdist`, `debugpy`, `mcp` | Development + testing |
|
||||
| `messaging` | `python-telegram-bot[webhooks]`, `discord.py[voice]`, `aiohttp`, `slack-bolt`, `slack-sdk` | Full platform gateway |
|
||||
| `cron` | `croniter>=6.0.0,<7` | Cron expression parsing |
|
||||
| `modal` | `modal>=1.0.0,<2` | Modal cloud sandboxes |
|
||||
| `daytona` | `daytona>=0.148.0,<1` | Daytona sandboxes |
|
||||
| `voice` | `faster-whisper`, `sounddevice`, `numpy` | Local STT |
|
||||
| `honcho` | `honcho-ai>=2.0.1,<3` | Honcho dialectic memory |
|
||||
| `mcp` | `mcp>=1.2.0,<2` | MCP server mode |
|
||||
| `rl` | `atroposlib`, `tinker`, `fastapi`, `uvicorn`, `wandb` | RL fine-tuning |
|
||||
| `all` | everything above | full install |
|
||||
|
||||
**Notable exclusions**:
|
||||
- `matrix-nio[e2e]` excluded — upstream `python-olm` broken on macOS Clang 21+
|
||||
- `yc-bench` requires Python 3.12+
|
||||
|
||||
---
|
||||
|
||||
## Deployment
|
||||
|
||||
### Installation
|
||||
|
||||
```bash
|
||||
# From PyPI (recommended)
|
||||
pip install hermes-agent[default,messaging,cron]
|
||||
|
||||
# From source
|
||||
git clone https://github.com/NousResearch/hermes-agent.git
|
||||
cd hermes-agent
|
||||
pip install -e ".[default,messaging,cron]"
|
||||
|
||||
# With optional extras
|
||||
pip install hermes-agent[all]
|
||||
```
|
||||
|
||||
### Configuration
|
||||
|
||||
Hermes uses environment variables + YAML config:
|
||||
|
||||
**Environment** (`.env` or shell):
|
||||
- `HERMES_HOME` — state directory (`~/.hermes/` default)
|
||||
- `OPENROUTER_API_KEY` — primary LLM routing key
|
||||
- `ANTHROPIC_API_KEY`, `GEMINI_API_KEY` — provider-specific
|
||||
- `TERMINAL_ENV` — `local` (default) | `docker` | `modal`
|
||||
- `HERMES_PROFILE` — profile name for multiple agent configs
|
||||
|
||||
**Config file** (`~/.hermes/config.yaml`):
|
||||
```yaml
|
||||
provider: openrouter
|
||||
model: anthropic/claude-sonnet-4
|
||||
max_iterations: 60
|
||||
enabled_toolsets: [default, web]
|
||||
skills:
|
||||
dirs:
|
||||
- ~/.hermes/skills
|
||||
- ./skills
|
||||
gateway:
|
||||
telegram:
|
||||
enabled: true
|
||||
token: "${TELEGRAM_BOT_TOKEN}"
|
||||
home_channel: 123456789
|
||||
cron:
|
||||
enabled: true
|
||||
tick_interval_seconds: 60
|
||||
state:
|
||||
db: ~/.hermes/state.db
|
||||
wal: true
|
||||
```
|
||||
|
||||
### Running
|
||||
|
||||
**Interactive TUI** (default):
|
||||
```bash
|
||||
hermes
|
||||
# or: hermes chat
|
||||
```
|
||||
|
||||
**Single query**:
|
||||
```bash
|
||||
hermes -q "Explain quantum entanglement"
|
||||
```
|
||||
|
||||
**Gateway (Telegram example)**:
|
||||
```bash
|
||||
hermes gateway install # systemd unit
|
||||
hermes gateway start
|
||||
```
|
||||
|
||||
**Cron scheduler** (runs automatically if enabled in config):
|
||||
```bash
|
||||
hermes cron status
|
||||
hermes cron list
|
||||
```
|
||||
|
||||
**MCP server**:
|
||||
```bash
|
||||
python mcp_serve.py --transport stdio
|
||||
# or: python mcp_serve.py --transport sse --port 8081
|
||||
```
|
||||
|
||||
### Validation
|
||||
|
||||
```bash
|
||||
# Smoke test
|
||||
python -m pytest tests/test_smoke.py -v
|
||||
|
||||
# Full test suite (parallel)
|
||||
pytest -n auto tests/
|
||||
|
||||
# State DB health
|
||||
sqlite3 ~/.hermes/state.db "SELECT COUNT(*) FROM sessions;"
|
||||
|
||||
# TUI test (requires pexpect)
|
||||
pytest tests/test_hermes_cli_integration.py -v
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Examples
|
||||
|
||||
### Example 1: Simple Research Query
|
||||
|
||||
```
|
||||
> hermes -q "What are the latest developments in KV cache compression?"
|
||||
|
||||
[Tools: web_search → web_extract × 3]
|
||||
└─ Answer: KV cache compression advances... (cost: $0.04)
|
||||
```
|
||||
|
||||
**Token flow**: ~14K input (query + tool results) → ~2K output.
|
||||
|
||||
### Example 2: File System Investigation
|
||||
|
||||
```
|
||||
> /terminal find ~/repos -name "*.py" -exec wc -l {} + | sort -n | tail -10
|
||||
|
||||
[terminal] Executed in 0.8s
|
||||
/path/to/largest.py: 1243 lines
|
||||
...
|
||||
```
|
||||
|
||||
`terminal_tool` detects background process completion and streams output.
|
||||
|
||||
### Example 3: Scheduled Report
|
||||
|
||||
**Cron job** (`~/.hermes/cron/jobs/daily-report.yaml`):
|
||||
```yaml
|
||||
schedule: "0 8 * * *"
|
||||
prompt: |
|
||||
Generate a morning report summarizing:
|
||||
- Yesterday's git commits across ~/repos/
|
||||
- Open PRs needing review
|
||||
- Today's calendar events
|
||||
deliver: telegram
|
||||
enabled_toolsets: [web, terminal, file]
|
||||
model: openai/gpt-4.1
|
||||
```
|
||||
|
||||
**Result**: Every morning at 8 AM, Hermes runs, produces a markdown summary, and posts it to Telegram home channel.
|
||||
|
||||
---
|
||||
|
||||
## Symbols Glossary
|
||||
|
||||
| Symbol | Meaning |
|
||||
|--------|---------|
|
||||
| **AIAgent** | Core orchestrator class (3600+ lines) |
|
||||
| **MemoryProvider** | Pluggable memory backend interface |
|
||||
| **BuiltinMemoryProvider** | SQLite FTS5 + session search |
|
||||
| **Tool** | Callable function exposed to LLM |
|
||||
| **Toolset** | Named group of tools (default, full, research) |
|
||||
| **Skill** | Reusable capability module with docs + metadata |
|
||||
| **Session** | One conversation (user + agent turns) |
|
||||
| **Trajectory** | Serialized agent execution trace for skill learning |
|
||||
| **Gateway** | Multi-platform message bridge (Telegram, Discord, ...) |
|
||||
| **Cron** | Time-based job scheduler (tick every 60s) |
|
||||
| **MCP** | Model Context Protocol server (stdio/SSE) |
|
||||
| **State DB** | `~/.hermes/state.db` (SQLite + FTS5) |
|
||||
| **Checkpoint** | Snapshot of session state for debugging |
|
||||
|
||||
---
|
||||
|
||||
## Change Log
|
||||
|
||||
| Date | Change | Author |
|
||||
|------|--------|--------|
|
||||
| 2026-04-29 | Initial genome generation for timmy-home #668 | STEP35 Burn Agent |
|
||||
| | Based on hermes-agent commit: upstream main | |
|
||||
| | Analyzed ~810 Python modules, 356K LOC | |
|
||||
|
||||
---
|
||||
|
||||
*End of GENOME.md — hermes-agent*
|
||||
@@ -1,406 +0,0 @@
|
||||
#!/usr/bin/env python3
|
||||
"""Burn-down velocity tracker for Timmy Foundation issue throughput.
|
||||
|
||||
Refs: timmy-home #519
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import argparse
|
||||
import json
|
||||
from datetime import date, datetime, time, timedelta, timezone
|
||||
from pathlib import Path
|
||||
from typing import Any
|
||||
from urllib import parse, request
|
||||
from base64 import b64encode
|
||||
|
||||
DEFAULT_BASE_URL = "https://forge.alexanderwhitestone.com/api/v1"
|
||||
DEFAULT_OWNER = "Timmy_Foundation"
|
||||
DEFAULT_TOKEN_FILE = Path.home() / ".config" / "gitea" / "token"
|
||||
DEFAULT_CONFIG_FILE = Path(__file__).resolve().parent.parent / "configs" / "burn_velocity_repos.json"
|
||||
DEFAULT_OUTPUT_DIR = Path.home() / ".timmy" / "burn-velocity"
|
||||
DEFAULT_OUTPUT_JSON = DEFAULT_OUTPUT_DIR / "latest.json"
|
||||
DEFAULT_OUTPUT_MD = DEFAULT_OUTPUT_DIR / "latest.md"
|
||||
DEFAULT_HISTORY_FILE = DEFAULT_OUTPUT_DIR / "history.json"
|
||||
DEFAULT_CONFIG = {
|
||||
"owner": DEFAULT_OWNER,
|
||||
"repos": ["timmy-home", "timmy-config", "fleet-ops", "the-beacon", "the-door", "the-nexus"],
|
||||
"lookback_days": 14,
|
||||
"alert": {
|
||||
"recent_days": 7,
|
||||
"baseline_days": 7,
|
||||
"minimum_baseline_closed": 4,
|
||||
"drop_ratio": 0.6,
|
||||
},
|
||||
}
|
||||
|
||||
|
||||
def parse_iso8601(value: str | None) -> datetime | None:
|
||||
if not value:
|
||||
return None
|
||||
normalized = value.replace("Z", "+00:00")
|
||||
parsed = datetime.fromisoformat(normalized)
|
||||
if parsed.tzinfo is None:
|
||||
return parsed.replace(tzinfo=timezone.utc)
|
||||
return parsed.astimezone(timezone.utc)
|
||||
|
||||
|
||||
def normalize_today(value: str | date | None = None) -> date:
|
||||
if value is None:
|
||||
return datetime.now(timezone.utc).date()
|
||||
if isinstance(value, date):
|
||||
return value
|
||||
return date.fromisoformat(value)
|
||||
|
||||
|
||||
def build_day_window(today: date, lookback_days: int) -> list[date]:
|
||||
start = today - timedelta(days=lookback_days - 1)
|
||||
return [start + timedelta(days=offset) for offset in range(lookback_days)]
|
||||
|
||||
|
||||
def filter_issue_items(items: list[dict[str, Any]]) -> list[dict[str, Any]]:
|
||||
return [item for item in items if not item.get("pull_request")]
|
||||
|
||||
|
||||
def build_daily_series(items: list[dict[str, Any]], today: date, lookback_days: int) -> list[dict[str, int | str]]:
|
||||
days = build_day_window(today, lookback_days)
|
||||
counts = {day.isoformat(): {"opened": 0, "closed": 0} for day in days}
|
||||
start_day = days[0]
|
||||
|
||||
for item in filter_issue_items(items):
|
||||
created_at = parse_iso8601(item.get("created_at"))
|
||||
if created_at is not None:
|
||||
created_day = created_at.date()
|
||||
if start_day <= created_day <= today:
|
||||
counts[created_day.isoformat()]["opened"] += 1
|
||||
|
||||
closed_at = parse_iso8601(item.get("closed_at"))
|
||||
if closed_at is not None:
|
||||
closed_day = closed_at.date()
|
||||
if start_day <= closed_day <= today:
|
||||
counts[closed_day.isoformat()]["closed"] += 1
|
||||
|
||||
return [
|
||||
{
|
||||
"date": day.isoformat(),
|
||||
"opened": counts[day.isoformat()]["opened"],
|
||||
"closed": counts[day.isoformat()]["closed"],
|
||||
}
|
||||
for day in days
|
||||
]
|
||||
|
||||
|
||||
def summarize_velocity_alert(
|
||||
*, recent_closed: int, baseline_closed: int, open_now: int, config: dict[str, Any]
|
||||
) -> dict[str, Any]:
|
||||
minimum_baseline = int(config.get("minimum_baseline_closed", 4))
|
||||
drop_ratio = float(config.get("drop_ratio", 0.6))
|
||||
|
||||
if baseline_closed >= minimum_baseline and recent_closed < baseline_closed * drop_ratio:
|
||||
return {
|
||||
"status": "drop",
|
||||
"kind": "velocity_drop",
|
||||
"recent_closed": recent_closed,
|
||||
"baseline_closed": baseline_closed,
|
||||
"reason": (
|
||||
f"velocity_drop: closed {recent_closed} in the last {config.get('recent_days', 7)}d "
|
||||
f"vs {baseline_closed} in the prior {config.get('baseline_days', 7)}d"
|
||||
),
|
||||
}
|
||||
|
||||
if open_now > 0 and baseline_closed >= minimum_baseline and recent_closed == 0:
|
||||
return {
|
||||
"status": "drop",
|
||||
"kind": "velocity_drop",
|
||||
"recent_closed": recent_closed,
|
||||
"baseline_closed": baseline_closed,
|
||||
"reason": "velocity_drop: no issues closed in the recent window while backlog is still open",
|
||||
}
|
||||
|
||||
return {
|
||||
"status": "ok",
|
||||
"kind": "none",
|
||||
"recent_closed": recent_closed,
|
||||
"baseline_closed": baseline_closed,
|
||||
"reason": "velocity stable",
|
||||
}
|
||||
|
||||
|
||||
def _sum_window(daily: list[dict[str, int | str]], field: str, days: int) -> int:
|
||||
if days <= 0:
|
||||
return 0
|
||||
return sum(int(item[field]) for item in daily[-days:])
|
||||
|
||||
|
||||
def _sum_baseline_window(daily: list[dict[str, int | str]], recent_days: int, baseline_days: int) -> int:
|
||||
if baseline_days <= 0:
|
||||
return 0
|
||||
if recent_days <= 0:
|
||||
return sum(int(item["closed"]) for item in daily[-baseline_days:])
|
||||
baseline_slice = daily[-(recent_days + baseline_days) : -recent_days]
|
||||
return sum(int(item["closed"]) for item in baseline_slice)
|
||||
|
||||
|
||||
def build_velocity_report(config: dict[str, Any], snapshot: dict[str, Any], today: str | date | None = None) -> dict[str, Any]:
|
||||
report_day = normalize_today(today)
|
||||
generated_at = snapshot.get("generated_at") or datetime.now(timezone.utc).isoformat().replace("+00:00", "Z")
|
||||
owner = config.get("owner", DEFAULT_OWNER)
|
||||
repos = list(config.get("repos") or sorted((snapshot.get("repos") or {}).keys()))
|
||||
lookback_days = int(config.get("lookback_days", 14))
|
||||
alert_config = dict(DEFAULT_CONFIG["alert"])
|
||||
alert_config.update(config.get("alert") or {})
|
||||
recent_days = int(alert_config.get("recent_days", 7))
|
||||
baseline_days = int(alert_config.get("baseline_days", 7))
|
||||
|
||||
repo_reports: list[dict[str, Any]] = []
|
||||
total_open_now = 0
|
||||
total_closed_last_7d = 0
|
||||
repos_with_alerts: list[str] = []
|
||||
|
||||
for repo_name in repos:
|
||||
repo_snapshot = (snapshot.get("repos") or {}).get(repo_name, {})
|
||||
open_issues = filter_issue_items(list(repo_snapshot.get("open_issues") or []))
|
||||
recent_issues = filter_issue_items(list(repo_snapshot.get("recent_issues") or []))
|
||||
daily = build_daily_series(recent_issues, report_day, lookback_days)
|
||||
|
||||
open_now = len(open_issues)
|
||||
opened_last_7d = _sum_window(daily, "opened", recent_days)
|
||||
closed_last_7d = _sum_window(daily, "closed", recent_days)
|
||||
baseline_closed = _sum_baseline_window(daily, recent_days, baseline_days)
|
||||
weekly_net = opened_last_7d - closed_last_7d
|
||||
alert = summarize_velocity_alert(
|
||||
recent_closed=closed_last_7d,
|
||||
baseline_closed=baseline_closed,
|
||||
open_now=open_now,
|
||||
config=alert_config,
|
||||
)
|
||||
|
||||
repo_report = {
|
||||
"repo": repo_name,
|
||||
"open_now": open_now,
|
||||
"opened_last_7d": opened_last_7d,
|
||||
"closed_last_7d": closed_last_7d,
|
||||
"baseline_closed": baseline_closed,
|
||||
"weekly_net": weekly_net,
|
||||
"daily": daily,
|
||||
"alert": alert,
|
||||
}
|
||||
repo_reports.append(repo_report)
|
||||
|
||||
total_open_now += open_now
|
||||
total_closed_last_7d += closed_last_7d
|
||||
if alert["status"] != "ok":
|
||||
repos_with_alerts.append(repo_name)
|
||||
|
||||
return {
|
||||
"owner": owner,
|
||||
"generated_at": generated_at,
|
||||
"generated_day": report_day.isoformat(),
|
||||
"lookback_days": lookback_days,
|
||||
"dashboard_contract_version": 1,
|
||||
"repos": repo_reports,
|
||||
"summary": {
|
||||
"total_open_now": total_open_now,
|
||||
"total_closed_last_7d": total_closed_last_7d,
|
||||
"repos_with_alerts": repos_with_alerts,
|
||||
},
|
||||
}
|
||||
|
||||
|
||||
def render_markdown(report: dict[str, Any]) -> str:
|
||||
lines = [
|
||||
"# Burn-down Velocity Tracking",
|
||||
"",
|
||||
f"Generated: {report['generated_at']}",
|
||||
f"Owner: {report['owner']}",
|
||||
f"Lookback days: {report['lookback_days']}",
|
||||
"",
|
||||
"## Per-repo velocity",
|
||||
"",
|
||||
"| Repo | Open now | Opened 7d | Closed 7d | Previous 7d | Alert |",
|
||||
"| --- | ---: | ---: | ---: | ---: | --- |",
|
||||
]
|
||||
|
||||
for repo in report["repos"]:
|
||||
alert_label = repo["alert"]["kind"] if repo["alert"]["status"] != "ok" else "ok"
|
||||
lines.append(
|
||||
f"| {repo['repo']} | {repo['open_now']} | {repo['opened_last_7d']} | {repo['closed_last_7d']} | {repo['baseline_closed']} | {alert_label} |"
|
||||
)
|
||||
|
||||
lines.extend(
|
||||
[
|
||||
"",
|
||||
"## Dashboard handoff for timmy-config",
|
||||
"",
|
||||
"The timmy-config dashboard should consume `~/.timmy/burn-velocity/latest.json` and render, for each repo:",
|
||||
"- `open_now`",
|
||||
"- `opened_last_7d`",
|
||||
"- `closed_last_7d`",
|
||||
"- `baseline_closed`",
|
||||
"- `alert.status` / `alert.kind` / `alert.reason`",
|
||||
"",
|
||||
"Cron should also persist `~/.timmy/burn-velocity/history.json` so timmy-config can plot the daily trend line instead of only the latest snapshot.",
|
||||
"",
|
||||
"## Alerts",
|
||||
"",
|
||||
]
|
||||
)
|
||||
|
||||
alerts = [repo for repo in report["repos"] if repo["alert"]["status"] != "ok"]
|
||||
if not alerts:
|
||||
lines.append("- none")
|
||||
else:
|
||||
for repo in alerts:
|
||||
lines.append(f"- {repo['repo']}: {repo['alert']['reason']}")
|
||||
|
||||
return "\n".join(lines) + "\n"
|
||||
|
||||
|
||||
def update_history(history_path: Path, report: dict[str, Any]) -> dict[str, Any]:
|
||||
if history_path.exists():
|
||||
history = json.loads(history_path.read_text(encoding="utf-8"))
|
||||
else:
|
||||
history = {"days": []}
|
||||
|
||||
entry = {
|
||||
"date": report["generated_day"],
|
||||
"generated_at": report["generated_at"],
|
||||
"summary": report["summary"],
|
||||
"repos": report["repos"],
|
||||
}
|
||||
|
||||
retained = [item for item in history.get("days", []) if item.get("date") != report["generated_day"]]
|
||||
retained.append(entry)
|
||||
retained.sort(key=lambda item: item["date"])
|
||||
history["days"] = retained
|
||||
|
||||
history_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
history_path.write_text(json.dumps(history, indent=2), encoding="utf-8")
|
||||
return history
|
||||
|
||||
|
||||
class GiteaClient:
|
||||
def __init__(self, token: str, owner: str = DEFAULT_OWNER, base_url: str = DEFAULT_BASE_URL):
|
||||
self.token = token
|
||||
self.owner = owner
|
||||
self.base_url = base_url.rstrip("/")
|
||||
|
||||
def _headers(self) -> list[dict[str, str]]:
|
||||
return [
|
||||
{"Authorization": f"token {self.token}", "Accept": "application/json"},
|
||||
{
|
||||
"Authorization": "Basic " + b64encode(f"{self.token}:".encode()).decode(),
|
||||
"Accept": "application/json",
|
||||
},
|
||||
]
|
||||
|
||||
def _request_json(self, url: str) -> list[dict[str, Any]]:
|
||||
last_error: Exception | None = None
|
||||
for headers in self._headers():
|
||||
try:
|
||||
req = request.Request(url, headers=headers)
|
||||
with request.urlopen(req, timeout=30) as response:
|
||||
return json.loads(response.read().decode())
|
||||
except Exception as exc: # pragma: no cover - exercised only on live API failure
|
||||
last_error = exc
|
||||
if last_error is None: # pragma: no cover - defensive
|
||||
raise RuntimeError("request failed without an exception")
|
||||
raise last_error
|
||||
|
||||
def list_issues(self, repo: str, *, state: str, since: str | None = None) -> list[dict[str, Any]]:
|
||||
issues: list[dict[str, Any]] = []
|
||||
page = 1
|
||||
while True:
|
||||
query = {"state": state, "type": "issues", "limit": 100, "page": page}
|
||||
if since:
|
||||
query["since"] = since
|
||||
url = f"{self.base_url}/repos/{self.owner}/{repo}/issues?{parse.urlencode(query)}"
|
||||
batch = self._request_json(url)
|
||||
if not batch:
|
||||
break
|
||||
issues.extend(filter_issue_items(batch))
|
||||
page += 1
|
||||
return issues
|
||||
|
||||
|
||||
def load_json(path: Path, default: Any) -> Any:
|
||||
if not path.exists():
|
||||
return default
|
||||
return json.loads(path.read_text(encoding="utf-8"))
|
||||
|
||||
|
||||
def load_config(path: Path) -> dict[str, Any]:
|
||||
config = dict(DEFAULT_CONFIG)
|
||||
alert = dict(DEFAULT_CONFIG["alert"])
|
||||
raw = load_json(path, {})
|
||||
config.update(raw)
|
||||
alert.update(raw.get("alert") or {})
|
||||
config["alert"] = alert
|
||||
return config
|
||||
|
||||
|
||||
def collect_live_snapshot(
|
||||
config: dict[str, Any], *, today: str | date | None = None, token_file: Path = DEFAULT_TOKEN_FILE, base_url: str = DEFAULT_BASE_URL
|
||||
) -> dict[str, Any]:
|
||||
token = token_file.read_text(encoding="utf-8").strip()
|
||||
report_day = normalize_today(today)
|
||||
since_day = report_day - timedelta(days=int(config.get("lookback_days", 14)) - 1)
|
||||
since_timestamp = datetime.combine(since_day, time.min, tzinfo=timezone.utc).isoformat().replace("+00:00", "Z")
|
||||
client = GiteaClient(token=token, owner=config.get("owner", DEFAULT_OWNER), base_url=base_url)
|
||||
|
||||
repos = list(config.get("repos") or [])
|
||||
repo_payload = {}
|
||||
for repo in repos:
|
||||
repo_payload[repo] = {
|
||||
"open_issues": client.list_issues(repo, state="open"),
|
||||
"recent_issues": client.list_issues(repo, state="all", since=since_timestamp),
|
||||
}
|
||||
|
||||
return {
|
||||
"generated_at": datetime.now(timezone.utc).isoformat().replace("+00:00", "Z"),
|
||||
"repos": repo_payload,
|
||||
}
|
||||
|
||||
|
||||
def parse_args() -> argparse.Namespace:
|
||||
parser = argparse.ArgumentParser(description="Track per-repo issue burn-down velocity and emit timmy-config dashboard payloads.")
|
||||
parser.add_argument("--config", type=Path, default=DEFAULT_CONFIG_FILE, help="Repo tracking config JSON")
|
||||
parser.add_argument("--snapshot-file", type=Path, help="Use a pre-fetched snapshot JSON instead of calling Gitea")
|
||||
parser.add_argument("--token-file", type=Path, default=DEFAULT_TOKEN_FILE, help="Gitea token file for live collection")
|
||||
parser.add_argument("--base-url", default=DEFAULT_BASE_URL, help="Gitea API base URL")
|
||||
parser.add_argument("--today", help="Override report date (YYYY-MM-DD)")
|
||||
parser.add_argument("--output-json", type=Path, default=DEFAULT_OUTPUT_JSON, help="Path for latest JSON payload")
|
||||
parser.add_argument("--output-md", type=Path, default=DEFAULT_OUTPUT_MD, help="Path for latest markdown summary")
|
||||
parser.add_argument("--history-file", type=Path, default=DEFAULT_HISTORY_FILE, help="Path for persisted daily history JSON")
|
||||
parser.add_argument("--write-history", action="store_true", help="Update the daily history file after generating the report")
|
||||
parser.add_argument("--json", action="store_true", help="Print JSON instead of markdown to stdout")
|
||||
return parser.parse_args()
|
||||
|
||||
|
||||
def main() -> None:
|
||||
args = parse_args()
|
||||
config = load_config(args.config)
|
||||
|
||||
if args.snapshot_file:
|
||||
snapshot = load_json(args.snapshot_file, {"repos": {}})
|
||||
else:
|
||||
snapshot = collect_live_snapshot(config, today=args.today, token_file=args.token_file, base_url=args.base_url)
|
||||
|
||||
report = build_velocity_report(config, snapshot, today=args.today)
|
||||
|
||||
args.output_json.parent.mkdir(parents=True, exist_ok=True)
|
||||
args.output_md.parent.mkdir(parents=True, exist_ok=True)
|
||||
args.output_json.write_text(json.dumps(report, indent=2), encoding="utf-8")
|
||||
args.output_md.write_text(render_markdown(report), encoding="utf-8")
|
||||
|
||||
if args.write_history:
|
||||
update_history(args.history_file, report)
|
||||
|
||||
if args.json:
|
||||
print(json.dumps(report, indent=2))
|
||||
else:
|
||||
print(render_markdown(report))
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
@@ -1 +1,12 @@
|
||||
# Timmy core module
|
||||
|
||||
from .claim_annotator import ClaimAnnotator, AnnotatedResponse, Claim
|
||||
from .audit_trail import AuditTrail, AuditEntry
|
||||
|
||||
__all__ = [
|
||||
"ClaimAnnotator",
|
||||
"AnnotatedResponse",
|
||||
"Claim",
|
||||
"AuditTrail",
|
||||
"AuditEntry",
|
||||
]
|
||||
|
||||
156
src/timmy/claim_annotator.py
Normal file
156
src/timmy/claim_annotator.py
Normal file
@@ -0,0 +1,156 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Response Claim Annotator — Source Distinction System
|
||||
SOUL.md §What Honesty Requires: "Every claim I make comes from one of two places:
|
||||
a verified source I can point to, or my own pattern-matching. My user must be
|
||||
able to tell which is which."
|
||||
"""
|
||||
|
||||
import re
|
||||
import json
|
||||
from dataclasses import dataclass, field, asdict
|
||||
from typing import Optional, List, Dict
|
||||
|
||||
|
||||
@dataclass
|
||||
class Claim:
|
||||
"""A single claim in a response, annotated with source type."""
|
||||
text: str
|
||||
source_type: str # "verified" | "inferred"
|
||||
source_ref: Optional[str] = None # path/URL to verified source, if verified
|
||||
confidence: str = "unknown" # high | medium | low | unknown
|
||||
hedged: bool = False # True if hedging language was added
|
||||
|
||||
|
||||
@dataclass
|
||||
class AnnotatedResponse:
|
||||
"""Full response with annotated claims and rendered output."""
|
||||
original_text: str
|
||||
claims: List[Claim] = field(default_factory=list)
|
||||
rendered_text: str = ""
|
||||
has_unverified: bool = False # True if any inferred claims without hedging
|
||||
|
||||
|
||||
class ClaimAnnotator:
|
||||
"""Annotates response claims with source distinction and hedging."""
|
||||
|
||||
# Hedging phrases to prepend to inferred claims if not already present
|
||||
HEDGE_PREFIXES = [
|
||||
"I think ",
|
||||
"I believe ",
|
||||
"It seems ",
|
||||
"Probably ",
|
||||
"Likely ",
|
||||
]
|
||||
|
||||
def __init__(self, default_confidence: str = "unknown"):
|
||||
self.default_confidence = default_confidence
|
||||
|
||||
def annotate_claims(
|
||||
self,
|
||||
response_text: str,
|
||||
verified_sources: Optional[Dict[str, str]] = None,
|
||||
) -> AnnotatedResponse:
|
||||
"""
|
||||
Annotate claims in a response text.
|
||||
|
||||
Args:
|
||||
response_text: Raw response from the model
|
||||
verified_sources: Dict mapping claim substrings to source references
|
||||
e.g. {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"}
|
||||
|
||||
Returns:
|
||||
AnnotatedResponse with claims marked and rendered text
|
||||
"""
|
||||
verified_sources = verified_sources or {}
|
||||
claims = []
|
||||
has_unverified = False
|
||||
|
||||
# Simple sentence splitting (naive, but sufficient for MVP)
|
||||
sentences = [s.strip() for s in re.split(r'[.!?]\s+', response_text) if s.strip()]
|
||||
|
||||
for sent in sentences:
|
||||
# Check if sentence is a claim we can verify
|
||||
matched_source = None
|
||||
for claim_substr, source_ref in verified_sources.items():
|
||||
if claim_substr.lower() in sent.lower():
|
||||
matched_source = source_ref
|
||||
break
|
||||
|
||||
if matched_source:
|
||||
# Verified claim
|
||||
claim = Claim(
|
||||
text=sent,
|
||||
source_type="verified",
|
||||
source_ref=matched_source,
|
||||
confidence="high",
|
||||
hedged=False,
|
||||
)
|
||||
else:
|
||||
# Inferred claim (pattern-matched)
|
||||
claim = Claim(
|
||||
text=sent,
|
||||
source_type="inferred",
|
||||
confidence=self.default_confidence,
|
||||
hedged=self._has_hedge(sent),
|
||||
)
|
||||
if not claim.hedged:
|
||||
has_unverified = True
|
||||
|
||||
claims.append(claim)
|
||||
|
||||
# Render the annotated response
|
||||
rendered = self._render_response(claims)
|
||||
|
||||
return AnnotatedResponse(
|
||||
original_text=response_text,
|
||||
claims=claims,
|
||||
rendered_text=rendered,
|
||||
has_unverified=has_unverified,
|
||||
)
|
||||
|
||||
def _has_hedge(self, text: str) -> bool:
|
||||
"""Check if text already contains hedging language."""
|
||||
text_lower = text.lower()
|
||||
for prefix in self.HEDGE_PREFIXES:
|
||||
if text_lower.startswith(prefix.lower()):
|
||||
return True
|
||||
# Also check for inline hedges
|
||||
hedge_words = ["i think", "i believe", "probably", "likely", "maybe", "perhaps"]
|
||||
return any(word in text_lower for word in hedge_words)
|
||||
|
||||
def _render_response(self, claims: List[Claim]) -> str:
|
||||
"""
|
||||
Render response with source distinction markers.
|
||||
|
||||
Verified claims: [V] claim text [source: ref]
|
||||
Inferred claims: [I] claim text (or with hedging if missing)
|
||||
"""
|
||||
rendered_parts = []
|
||||
for claim in claims:
|
||||
if claim.source_type == "verified":
|
||||
part = f"[V] {claim.text}"
|
||||
if claim.source_ref:
|
||||
part += f" [source: {claim.source_ref}]"
|
||||
else: # inferred
|
||||
if not claim.hedged:
|
||||
# Add hedging if missing
|
||||
hedged_text = f"I think {claim.text[0].lower()}{claim.text[1:]}" if claim.text else claim.text
|
||||
part = f"[I] {hedged_text}"
|
||||
else:
|
||||
part = f"[I] {claim.text}"
|
||||
rendered_parts.append(part)
|
||||
return " ".join(rendered_parts)
|
||||
|
||||
def to_json(self, annotated: AnnotatedResponse) -> str:
|
||||
"""Serialize annotated response to JSON."""
|
||||
return json.dumps(
|
||||
{
|
||||
"original_text": annotated.original_text,
|
||||
"rendered_text": annotated.rendered_text,
|
||||
"has_unverified": annotated.has_unverified,
|
||||
"claims": [asdict(c) for c in annotated.claims],
|
||||
},
|
||||
indent=2,
|
||||
ensure_ascii=False,
|
||||
)
|
||||
@@ -1,176 +0,0 @@
|
||||
from __future__ import annotations
|
||||
|
||||
import json
|
||||
import subprocess
|
||||
import sys
|
||||
from datetime import date
|
||||
from pathlib import Path
|
||||
|
||||
from scripts.burn_velocity_tracker import build_velocity_report, render_markdown, update_history
|
||||
|
||||
|
||||
ROOT = Path(__file__).resolve().parent.parent
|
||||
DOC_PATH = ROOT / "docs" / "BURN_VELOCITY_TRACKING.md"
|
||||
|
||||
|
||||
SNAPSHOT = {
|
||||
"generated_at": "2026-04-22T12:00:00Z",
|
||||
"repos": {
|
||||
"timmy-home": {
|
||||
"open_issues": [
|
||||
{"number": 501, "state": "open", "created_at": "2026-04-20T09:00:00Z"},
|
||||
{"number": 502, "state": "open", "created_at": "2026-04-22T07:00:00Z"},
|
||||
],
|
||||
"recent_issues": [
|
||||
{"number": 401, "state": "closed", "created_at": "2026-04-21T09:00:00Z", "closed_at": "2026-04-22T05:30:00Z"},
|
||||
{"number": 402, "state": "closed", "created_at": "2026-04-20T09:00:00Z", "closed_at": "2026-04-21T05:30:00Z"},
|
||||
{"number": 403, "state": "closed", "created_at": "2026-04-19T09:00:00Z", "closed_at": "2026-04-20T05:30:00Z"},
|
||||
{"number": 404, "state": "closed", "created_at": "2026-04-14T09:00:00Z", "closed_at": "2026-04-15T05:30:00Z"},
|
||||
{"number": 405, "state": "closed", "created_at": "2026-04-13T09:00:00Z", "closed_at": "2026-04-14T05:30:00Z"},
|
||||
{"number": 406, "state": "closed", "created_at": "2026-04-12T09:00:00Z", "closed_at": "2026-04-13T05:30:00Z"},
|
||||
{"number": 407, "state": "closed", "created_at": "2026-04-11T09:00:00Z", "closed_at": "2026-04-12T05:30:00Z"},
|
||||
{"number": 408, "state": "closed", "created_at": "2026-04-10T09:00:00Z", "closed_at": "2026-04-11T05:30:00Z"},
|
||||
{"number": 409, "state": "closed", "created_at": "2026-04-09T09:00:00Z", "closed_at": "2026-04-10T05:30:00Z"},
|
||||
{"number": 410, "state": "closed", "created_at": "2026-04-08T09:00:00Z", "closed_at": "2026-04-09T05:30:00Z"},
|
||||
{"number": 411, "state": "closed", "created_at": "2026-04-07T09:00:00Z", "closed_at": "2026-04-08T05:30:00Z"},
|
||||
{"number": 412, "state": "closed", "created_at": "2026-04-06T09:00:00Z", "closed_at": "2026-04-07T05:30:00Z"},
|
||||
{"number": 413, "state": "closed", "created_at": "2026-04-05T09:00:00Z", "closed_at": "2026-04-06T05:30:00Z"},
|
||||
{"number": 414, "state": "open", "created_at": "2026-04-22T08:45:00Z", "closed_at": None},
|
||||
{"number": 415, "state": "open", "created_at": "2026-04-17T08:45:00Z", "closed_at": None},
|
||||
],
|
||||
},
|
||||
"timmy-config": {
|
||||
"open_issues": [
|
||||
{"number": 601, "state": "open", "created_at": "2026-04-18T09:00:00Z"},
|
||||
],
|
||||
"recent_issues": [
|
||||
{"number": 602, "state": "closed", "created_at": "2026-04-20T09:00:00Z", "closed_at": "2026-04-21T06:00:00Z"},
|
||||
{"number": 603, "state": "open", "created_at": "2026-04-22T06:00:00Z", "closed_at": None},
|
||||
],
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
|
||||
CONFIG = {
|
||||
"owner": "Timmy_Foundation",
|
||||
"repos": ["timmy-home", "timmy-config"],
|
||||
"lookback_days": 14,
|
||||
"alert": {
|
||||
"recent_days": 7,
|
||||
"baseline_days": 7,
|
||||
"minimum_baseline_closed": 4,
|
||||
"drop_ratio": 0.6,
|
||||
},
|
||||
}
|
||||
|
||||
|
||||
def test_build_velocity_report_counts_opened_closed_and_flags_drop_alert() -> None:
|
||||
report = build_velocity_report(CONFIG, SNAPSHOT, today=date(2026, 4, 22))
|
||||
|
||||
assert report["generated_day"] == "2026-04-22"
|
||||
assert report["summary"]["repos_with_alerts"] == ["timmy-home"]
|
||||
assert report["summary"]["total_open_now"] == 3
|
||||
|
||||
home = report["repos"][0]
|
||||
assert home["repo"] == "timmy-home"
|
||||
assert home["open_now"] == 2
|
||||
assert home["opened_last_7d"] == 5
|
||||
assert home["closed_last_7d"] == 3
|
||||
assert home["baseline_closed"] == 7
|
||||
assert home["weekly_net"] == 2
|
||||
assert home["alert"]["status"] == "drop"
|
||||
assert home["alert"]["recent_closed"] == 3
|
||||
assert home["daily"][-1] == {"date": "2026-04-22", "opened": 1, "closed": 1}
|
||||
|
||||
timmy_config = report["repos"][1]
|
||||
assert timmy_config["repo"] == "timmy-config"
|
||||
assert timmy_config["open_now"] == 1
|
||||
assert timmy_config["closed_last_7d"] == 1
|
||||
assert timmy_config["alert"]["status"] == "ok"
|
||||
|
||||
|
||||
def test_render_markdown_includes_dashboard_handoff_and_alerts() -> None:
|
||||
report = build_velocity_report(CONFIG, SNAPSHOT, today=date(2026, 4, 22))
|
||||
rendered = render_markdown(report)
|
||||
|
||||
for snippet in (
|
||||
"# Burn-down Velocity Tracking",
|
||||
"## Per-repo velocity",
|
||||
"timmy-home",
|
||||
"timmy-config",
|
||||
"## Dashboard handoff for timmy-config",
|
||||
"velocity_drop",
|
||||
"## Alerts",
|
||||
):
|
||||
assert snippet in rendered
|
||||
|
||||
|
||||
def test_update_history_replaces_same_day_snapshot(tmp_path: Path) -> None:
|
||||
history_path = tmp_path / "burn-velocity-history.json"
|
||||
report = build_velocity_report(CONFIG, SNAPSHOT, today=date(2026, 4, 22))
|
||||
update_history(history_path, report)
|
||||
|
||||
updated = json.loads(json.dumps(report))
|
||||
updated["repos"][0]["open_now"] = 9
|
||||
updated["summary"]["total_open_now"] = 10
|
||||
update_history(history_path, updated)
|
||||
|
||||
history = json.loads(history_path.read_text(encoding="utf-8"))
|
||||
assert [item["date"] for item in history["days"]] == ["2026-04-22"]
|
||||
assert history["days"][0]["summary"]["total_open_now"] == 10
|
||||
assert history["days"][0]["repos"][0]["open_now"] == 9
|
||||
|
||||
|
||||
def test_cli_writes_json_markdown_and_history_from_snapshot(tmp_path: Path) -> None:
|
||||
snapshot_path = tmp_path / "snapshot.json"
|
||||
output_json = tmp_path / "latest.json"
|
||||
output_md = tmp_path / "latest.md"
|
||||
history_path = tmp_path / "history.json"
|
||||
snapshot_path.write_text(json.dumps(SNAPSHOT), encoding="utf-8")
|
||||
|
||||
result = subprocess.run(
|
||||
[
|
||||
sys.executable,
|
||||
"-m",
|
||||
"scripts.burn_velocity_tracker",
|
||||
"--snapshot-file",
|
||||
str(snapshot_path),
|
||||
"--today",
|
||||
"2026-04-22",
|
||||
"--output-json",
|
||||
str(output_json),
|
||||
"--output-md",
|
||||
str(output_md),
|
||||
"--history-file",
|
||||
str(history_path),
|
||||
"--write-history",
|
||||
"--json",
|
||||
],
|
||||
check=True,
|
||||
cwd=ROOT,
|
||||
capture_output=True,
|
||||
text=True,
|
||||
)
|
||||
|
||||
payload = json.loads(result.stdout)
|
||||
assert payload["summary"]["repos_with_alerts"] == ["timmy-home"]
|
||||
assert output_json.exists()
|
||||
assert output_md.exists()
|
||||
assert history_path.exists()
|
||||
assert "timmy-config" in output_md.read_text(encoding="utf-8")
|
||||
|
||||
|
||||
def test_repo_contains_burn_velocity_tracking_doc() -> None:
|
||||
text = DOC_PATH.read_text(encoding="utf-8")
|
||||
required = [
|
||||
"# Burn-down Velocity Tracking",
|
||||
"python3 scripts/burn_velocity_tracker.py",
|
||||
"configs/burn_velocity_repos.json",
|
||||
"~/.timmy/burn-velocity/latest.json",
|
||||
"timmy-config dashboard",
|
||||
"type=issues",
|
||||
"velocity_drop",
|
||||
]
|
||||
for snippet in required:
|
||||
assert snippet in text
|
||||
@@ -1,84 +1,123 @@
|
||||
"""
|
||||
Test that the hermes-agent GENOME.md exists and contains required sections.
|
||||
|
||||
Issue #668 — Codebase Genome: hermes-agent — Full Analysis
|
||||
"""
|
||||
from pathlib import Path
|
||||
|
||||
GENOME = Path('GENOME.md')
|
||||
|
||||
|
||||
def read_genome() -> str:
|
||||
assert GENOME.exists(), 'GENOME.md must exist at repo root'
|
||||
return GENOME.read_text(encoding='utf-8')
|
||||
GENOME = Path(__file__).parent.parent / "genomes" / "hermes-agent-GENOME.md"
|
||||
|
||||
|
||||
def test_genome_exists():
|
||||
assert GENOME.exists(), 'GENOME.md must exist at repo root'
|
||||
"""GENOME.md must exist at genomes/hermes-agent-GENOME.md."""
|
||||
assert GENOME.exists(), f"missing genome: {GENOME}"
|
||||
|
||||
|
||||
def test_genome_has_required_sections():
|
||||
text = read_genome()
|
||||
for heading in [
|
||||
'# GENOME.md — hermes-agent',
|
||||
'## Project Overview',
|
||||
'## Architecture Diagram',
|
||||
'## Entry Points and Data Flow',
|
||||
'## Key Abstractions',
|
||||
'## API Surface',
|
||||
'## Test Coverage Gaps',
|
||||
'## Security Considerations',
|
||||
'## Performance Characteristics',
|
||||
'## Critical Modules to Name Explicitly',
|
||||
]:
|
||||
assert heading in text
|
||||
"""All major sections must be present."""
|
||||
text = GENOME.read_text(encoding="utf-8")
|
||||
required = [
|
||||
"# GENOME.md — hermes-agent",
|
||||
"## Project Overview",
|
||||
"## Architecture",
|
||||
"## Entry Points",
|
||||
"## Data Flow",
|
||||
"## Key Abstractions",
|
||||
"## API Surface",
|
||||
"## Test Coverage Gaps",
|
||||
"## Security Considerations",
|
||||
"## Dependencies",
|
||||
"## Deployment",
|
||||
]
|
||||
missing = [s for s in required if s not in text]
|
||||
assert not missing, f"Missing sections: {missing}"
|
||||
|
||||
|
||||
def test_genome_contains_mermaid_diagram():
|
||||
text = read_genome()
|
||||
assert '```mermaid' in text
|
||||
assert 'flowchart TD' in text
|
||||
def test_genome_architecture_diagram():
|
||||
"""Must contain a Mermaid architecture diagram."""
|
||||
text = GENOME.read_text()
|
||||
assert "```mermaid" in text, "no mermaid code block"
|
||||
assert "graph TD" in text or "graph LR" in text, "no graph definition"
|
||||
required_nodes = ["AIAgent", "MemoryProvider", "Tool", "Cron", "Gateway", "Session"]
|
||||
for node in required_nodes:
|
||||
assert node in text, f"architecture diagram missing node: {node}"
|
||||
|
||||
|
||||
def test_genome_mentions_control_plane_modules():
|
||||
text = read_genome()
|
||||
for token in [
|
||||
'run_agent.py',
|
||||
'model_tools.py',
|
||||
'tools/registry.py',
|
||||
'toolsets.py',
|
||||
'cli.py',
|
||||
'hermes_cli/main.py',
|
||||
'hermes_state.py',
|
||||
'gateway/run.py',
|
||||
'acp_adapter/server.py',
|
||||
'cron/scheduler.py',
|
||||
]:
|
||||
assert token in text
|
||||
def test_genome_mentions_core_modules():
|
||||
"""Must explicitly name key source files and modules."""
|
||||
text = GENOME.read_text()
|
||||
required = [
|
||||
"run_agent.py",
|
||||
"agent/input_sanitizer.py",
|
||||
"agent/memory_manager.py",
|
||||
"agent/prompt_builder.py",
|
||||
"agent/trajectory.py",
|
||||
"gateway/session.py",
|
||||
"gateway/delivery.py",
|
||||
"cron/scheduler.py",
|
||||
"tools/terminal_tool.py",
|
||||
"skills/",
|
||||
"hermes_state.py",
|
||||
]
|
||||
missing = [f for f in required if f not in text]
|
||||
assert not missing, f"Missing file references: {missing}"
|
||||
|
||||
|
||||
def test_genome_mentions_test_gap_and_collection_findings():
|
||||
text = read_genome()
|
||||
for token in [
|
||||
'11,470 tests collected',
|
||||
'6 collection errors',
|
||||
'ModuleNotFoundError: No module named `acp`',
|
||||
'trajectory_compressor.py',
|
||||
'batch_runner.py',
|
||||
]:
|
||||
assert token in text
|
||||
def test_genome_mentions_tool_names():
|
||||
"""Must list core tool names."""
|
||||
text = GENOME.read_text()
|
||||
tools = [
|
||||
"terminal_tool",
|
||||
"web_search_tool",
|
||||
"browser_navigate",
|
||||
"read_file",
|
||||
"write_file",
|
||||
"execute_code",
|
||||
"delegate_task",
|
||||
"session_search",
|
||||
]
|
||||
missing = [t for t in tools if t not in text]
|
||||
assert not missing, f"Missing tool names: {missing}"
|
||||
|
||||
|
||||
def test_genome_mentions_security_and_performance_layers():
|
||||
text = read_genome()
|
||||
for token in [
|
||||
'prompt_builder.py',
|
||||
'approval.py',
|
||||
'file_tools.py',
|
||||
'mcp_tool.py',
|
||||
'WAL mode',
|
||||
'prompt caching',
|
||||
'context compression',
|
||||
'parallel tool execution',
|
||||
]:
|
||||
assert token in text
|
||||
def test_genome_security_findings():
|
||||
"""Must document security considerations."""
|
||||
text = GENOME.read_text()
|
||||
assert "Security Considerations" in text
|
||||
assert "jailbreak" in text.lower()
|
||||
assert "PII" in text or "personally identifiable" in text.lower()
|
||||
assert "credential" in text.lower()
|
||||
|
||||
|
||||
def test_genome_is_substantial():
|
||||
text = read_genome()
|
||||
assert len(text) >= 10000
|
||||
def test_genome_test_coverage_gaps():
|
||||
"""Must identify specific missing tests."""
|
||||
text = GENOME.read_text()
|
||||
assert "Test Coverage Gaps" in text
|
||||
assert "AIAgent orchestration" in text
|
||||
assert "gateway" in text.lower()
|
||||
assert "cron" in text.lower()
|
||||
|
||||
|
||||
def test_genome_not_a_stub():
|
||||
"""GENOME.md must be substantial (>10KB)."""
|
||||
size = GENOME.stat().st_size
|
||||
assert size >= 10_000, f"GENOME.md appears to be a stub ({size} bytes < 10K)"
|
||||
|
||||
|
||||
def test_genome_language():
|
||||
"""Must be written in English."""
|
||||
text = GENOME.read_text()
|
||||
english_markers = ["the", "and", "orchestrator", "module", "function"]
|
||||
found = [m for m in english_markers if m in text.lower()]
|
||||
assert len(found) >= 4, "GENOME.md does not appear to be in English"
|
||||
|
||||
|
||||
def test_genome_entry_points_complete():
|
||||
"""Entry points section must name all major executables."""
|
||||
text = GENOME.read_text()
|
||||
assert "run_agent.py" in text
|
||||
assert "cli.py" in text
|
||||
assert "hermes_cli" in text
|
||||
assert "gateway" in text
|
||||
assert "mcp_serve.py" in text
|
||||
assert "cron" in text
|
||||
|
||||
103
tests/timmy/test_claim_annotator.py
Normal file
103
tests/timmy/test_claim_annotator.py
Normal file
@@ -0,0 +1,103 @@
|
||||
#!/usr/bin/env python3
|
||||
"""Tests for claim_annotator.py — verifies source distinction is present."""
|
||||
|
||||
import sys
|
||||
import os
|
||||
import json
|
||||
|
||||
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "src"))
|
||||
|
||||
from timmy.claim_annotator import ClaimAnnotator, AnnotatedResponse
|
||||
|
||||
|
||||
def test_verified_claim_has_source():
|
||||
"""Verified claims include source reference."""
|
||||
annotator = ClaimAnnotator()
|
||||
verified = {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"}
|
||||
response = "Paris is the capital of France. It is a beautiful city."
|
||||
|
||||
result = annotator.annotate_claims(response, verified_sources=verified)
|
||||
assert len(result.claims) > 0
|
||||
verified_claims = [c for c in result.claims if c.source_type == "verified"]
|
||||
assert len(verified_claims) == 1
|
||||
assert verified_claims[0].source_ref == "https://en.wikipedia.org/wiki/Paris"
|
||||
assert "[V]" in result.rendered_text
|
||||
assert "[source:" in result.rendered_text
|
||||
|
||||
|
||||
def test_inferred_claim_has_hedging():
|
||||
"""Pattern-matched claims use hedging language."""
|
||||
annotator = ClaimAnnotator()
|
||||
response = "The weather is nice today. It might rain tomorrow."
|
||||
|
||||
result = annotator.annotate_claims(response)
|
||||
inferred_claims = [c for c in result.claims if c.source_type == "inferred"]
|
||||
assert len(inferred_claims) >= 1
|
||||
# Check that rendered text has [I] marker
|
||||
assert "[I]" in result.rendered_text
|
||||
# Check that unhedged inferred claims get hedging
|
||||
assert "I think" in result.rendered_text or "I believe" in result.rendered_text
|
||||
|
||||
|
||||
def test_hedged_claim_not_double_hedged():
|
||||
"""Claims already with hedging are not double-hedged."""
|
||||
annotator = ClaimAnnotator()
|
||||
response = "I think the sky is blue. It is a nice day."
|
||||
|
||||
result = annotator.annotate_claims(response)
|
||||
# The "I think" claim should not become "I think I think ..."
|
||||
assert "I think I think" not in result.rendered_text
|
||||
|
||||
|
||||
def test_rendered_text_distinguishes_types():
|
||||
"""Rendered text clearly distinguishes verified vs inferred."""
|
||||
annotator = ClaimAnnotator()
|
||||
verified = {"Earth is round": "https://science.org/earth"}
|
||||
response = "Earth is round. Stars are far away."
|
||||
|
||||
result = annotator.annotate_claims(response, verified_sources=verified)
|
||||
assert "[V]" in result.rendered_text # verified marker
|
||||
assert "[I]" in result.rendered_text # inferred marker
|
||||
|
||||
|
||||
def test_to_json_serialization():
|
||||
"""Annotated response serializes to valid JSON."""
|
||||
annotator = ClaimAnnotator()
|
||||
response = "Test claim."
|
||||
result = annotator.annotate_claims(response)
|
||||
json_str = annotator.to_json(result)
|
||||
parsed = json.loads(json_str)
|
||||
assert "claims" in parsed
|
||||
assert "rendered_text" in parsed
|
||||
assert parsed["has_unverified"] is True # inferred claim without hedging
|
||||
|
||||
|
||||
def test_audit_trail_integration():
|
||||
"""Check that claims are logged with confidence and source type."""
|
||||
# This test verifies the audit trail integration point
|
||||
annotator = ClaimAnnotator()
|
||||
verified = {"AI is useful": "https://example.com/ai"}
|
||||
response = "AI is useful. It can help with tasks."
|
||||
|
||||
result = annotator.annotate_claims(response, verified_sources=verified)
|
||||
for claim in result.claims:
|
||||
assert claim.source_type in ("verified", "inferred")
|
||||
assert claim.confidence in ("high", "medium", "low", "unknown")
|
||||
if claim.source_type == "verified":
|
||||
assert claim.source_ref is not None
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
test_verified_claim_has_source()
|
||||
print("✓ test_verified_claim_has_source passed")
|
||||
test_inferred_claim_has_hedging()
|
||||
print("✓ test_inferred_claim_has_hedging passed")
|
||||
test_hedged_claim_not_double_hedged()
|
||||
print("✓ test_hedged_claim_not_double_hedged passed")
|
||||
test_rendered_text_distinguishes_types()
|
||||
print("✓ test_rendered_text_distinguishes_types passed")
|
||||
test_to_json_serialization()
|
||||
print("✓ test_to_json_serialization passed")
|
||||
test_audit_trail_integration()
|
||||
print("✓ test_audit_trail_integration passed")
|
||||
print("\nAll tests passed!")
|
||||
Reference in New Issue
Block a user