---
sidebar_position: 99
title: Honcho Memory
description: AI-native persistent memory via Honcho — dialectic reasoning, multi-agent user modeling, and deep personalization
---
# Honcho Memory
Honcho is an AI-native memory backend that adds dialectic reasoning and deep user modeling on top of Hermes's built-in memory system. Instead of simple key-value storage, Honcho maintains a running model of who the user is — their preferences, communication style, goals, and patterns — by reasoning about conversations after they happen.
:::info Honcho is a Memory Provider Plugin
Honcho is integrated into the Memory Providers system. All features below are available through the unified memory provider interface.
:::
## What Honcho Adds
| Capability | Built-in Memory | Honcho |
|---|---|---|
| Cross-session persistence | ✔ File-based MEMORY.md/USER.md | ✔ Server-side with API |
| User profile | ✔ Manual agent curation | ✔ Automatic dialectic reasoning |
| Multi-agent isolation | — | ✔ Per-peer profile separation |
| Observation modes | — | ✔ Unified or directional observation |
| Conclusions (derived insights) | — | ✔ Server-side reasoning about patterns |
| Search across history | ✔ FTS5 session search | ✔ Semantic search over conclusions |
**Dialectic reasoning:** After each conversation, Honcho analyzes the exchange and derives "conclusions" — insights about the user's preferences, habits, and goals. These conclusions accumulate over time, giving the agent a deepening understanding that goes beyond what the user explicitly stated.

**Multi-agent profiles:** When multiple Hermes instances talk to the same user (e.g., a coding assistant and a personal assistant), Honcho maintains separate "peer" profiles. Each peer sees only its own observations and conclusions, preventing cross-contamination of context.
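The per-peer isolation described above can be sketched in a few lines. This is a hypothetical in-memory model for illustration only — the class, method, and peer names are assumptions, not Honcho's actual API:

```python
from collections import defaultdict
from dataclasses import dataclass, field


@dataclass
class PeerMemory:
    """Illustrative sketch: each Hermes instance (peer) has its own
    pool of conclusions, so agents never see each other's context."""
    conclusions: dict = field(default_factory=lambda: defaultdict(list))

    def conclude(self, peer: str, insight: str) -> None:
        # A conclusion is stored under the observing peer only.
        self.conclusions[peer].append(insight)

    def context_for(self, peer: str) -> list:
        # A peer retrieves only its own conclusions.
        return self.conclusions[peer]


memory = PeerMemory()
memory.conclude("coding-assistant", "User prefers type-annotated Python")
memory.conclude("personal-assistant", "User schedules deep work before noon")

# The coding assistant never sees the personal assistant's conclusions.
coding_context = memory.context_for("coding-assistant")
```

The point of the sketch is the lookup key: because every read and write is scoped by peer, cross-contamination is impossible by construction rather than by filtering.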
## Setup
```bash
hermes memory setup   # select "honcho" from the provider list
```
Or configure manually:
```yaml
# ~/.hermes/config.yaml
memory:
  provider: honcho
```

```bash
echo "HONCHO_API_KEY=your-key" >> ~/.hermes/.env
```
Get an API key at honcho.dev.
## Configuration Options
```yaml
# ~/.hermes/config.yaml
honcho:
  observation: directional   # "unified" (default for new installs) or "directional"
  peer_name: ""              # auto-detected from platform, or set manually
```
Observation modes:
- `unified` — All observations go into a single pool. Simpler, good for single-agent setups.
- `directional` — Observations are tagged with direction (user→agent, agent→user). Enables richer analysis of conversation dynamics.
## Tools
When Honcho is active as the memory provider, four additional tools become available:
| Tool | Purpose |
|---|---|
| `honcho_conclude` | Trigger server-side dialectic reasoning on recent conversations |
| `honcho_context` | Retrieve relevant context from Honcho's memory for the current conversation |
| `honcho_profile` | View or update the user's Honcho profile |
| `honcho_search` | Semantic search across all stored conclusions and observations |
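As an illustration, a model-issued call to `honcho_search` might look like the following. The argument name is an assumption for this sketch; see the full Memory Providers reference for the actual tool schema:

```json
{
  "tool": "honcho_search",
  "arguments": {
    "query": "preferred code review style"
  }
}
```

Because the search is semantic, the query matches conclusions by meaning rather than exact keywords.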
## CLI Commands
```bash
hermes honcho status   # Show connection status and config
hermes honcho peer     # Update peer names for multi-agent setups
```
## Migrating from `hermes honcho`
If you previously used the standalone `hermes honcho` setup:

- Your existing configuration (`honcho.json` or `~/.honcho/config.json`) is preserved
- Your server-side data (memories, conclusions, user profiles) is intact
- Set `memory.provider: honcho` in `config.yaml` to reactivate
No re-login or re-setup needed. Run `hermes memory setup` and select "honcho" — the wizard detects your existing config.
## Full Documentation
See Memory Providers — Honcho for the complete reference.