---
sidebar_position: 4
title: Memory Providers
description: External memory provider plugins — Honcho, OpenViking, Mem0, Hindsight, Holographic, RetainDB, ByteRover, Supermemory
---
# Memory Providers
Hermes Agent ships with 8 external memory provider plugins that give the agent persistent, cross-session knowledge beyond the built-in MEMORY.md and USER.md. Only one external provider can be active at a time — the built-in memory is always active alongside it.
## Quick Start

```bash
hermes memory setup   # interactive picker + configuration
hermes memory status  # check what's active
hermes memory off     # disable external provider
```

Or set manually in `~/.hermes/config.yaml`:

```yaml
memory:
  provider: openviking  # or honcho, mem0, hindsight, holographic, retaindb, byterover, supermemory
```
## How It Works
When a memory provider is active, Hermes automatically:
- Injects provider context into the system prompt (what the provider knows)
- Prefetches relevant memories before each turn (background, non-blocking)
- Syncs conversation turns to the provider after each response
- Extracts memories on session end (for providers that support it)
- Mirrors built-in memory writes to the external provider
- Adds provider-specific tools so the agent can search, store, and manage memories
The built-in memory (MEMORY.md / USER.md) continues to work exactly as before. The external provider is additive.
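The per-turn flow above can be sketched schematically. This is an illustration of the lifecycle only, not the actual Hermes plugin interface; all names here are invented:

```python
# Schematic of the per-turn memory flow: prefetch -> respond -> sync.
# Illustration only; not the real Hermes plugin API.
class ExternalProvider:
    def __init__(self) -> None:
        self.synced: list[tuple[str, str]] = []

    def prefetch(self, query: str) -> list[str]:
        # would fetch relevant memories in the background, non-blocking
        return []

    def sync_turn(self, user: str, assistant: str) -> None:
        # called after each response to persist the turn
        self.synced.append((user, assistant))

def run_turn(provider: ExternalProvider, user_msg: str, respond) -> str:
    context = provider.prefetch(user_msg)   # 1. prefetch relevant memories
    reply = respond(user_msg, context)      # 2. generate the response
    provider.sync_turn(user_msg, reply)     # 3. sync the turn to the provider
    return reply
```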
## Available Providers

### Honcho
AI-native cross-session user modeling with dialectic Q&A, semantic search, and persistent conclusions.
|   |   |
|---|---|
| Best for | Multi-agent systems with cross-session context, user-agent alignment |
| Requires | `pip install honcho-ai` + API key or self-hosted instance |
| Data storage | Honcho Cloud or self-hosted |
| Cost | Honcho pricing (cloud) / free (self-hosted) |
Tools: `honcho_profile` (peer card), `honcho_search` (semantic search), `honcho_context` (LLM-synthesized), `honcho_conclude` (store facts)
Setup wizard:

```bash
hermes honcho setup   # (legacy command)
# or
hermes memory setup   # select "honcho"
```
Config: `$HERMES_HOME/honcho.json` (profile-local) or `~/.honcho/config.json` (global). Resolution order: `$HERMES_HOME/honcho.json` > `~/.hermes/honcho.json` > `~/.honcho/config.json`. See the config reference and the Honcho integration guide.
Key config options:

| Key | Default | Description |
|---|---|---|
| `apiKey` | -- | API key from app.honcho.dev |
| `baseUrl` | -- | Base URL for self-hosted Honcho |
| `peerName` | -- | User peer identity |
| `aiPeer` | host key | AI peer identity (one per profile) |
| `workspace` | host key | Shared workspace ID |
| `recallMode` | `hybrid` | `hybrid` (auto-inject + tools), `context` (inject only), `tools` (tools only) |
| `observation` | all on | Per-peer `observeMe`/`observeOthers` booleans |
| `writeFrequency` | `async` | `async`, `turn`, `session`, or integer N |
| `sessionStrategy` | `per-directory` | `per-directory`, `per-repo`, `per-session`, `global` |
| `dialecticReasoningLevel` | `low` | `minimal`, `low`, `medium`, `high`, `max` |
| `dialecticDynamic` | `true` | Auto-bump reasoning by query length |
| `messageMaxChars` | `25000` | Max chars per message (chunked if exceeded) |
Minimal `honcho.json` (cloud):

```json
{
  "apiKey": "your-key-from-app.honcho.dev",
  "hosts": {
    "hermes": {
      "enabled": true,
      "aiPeer": "hermes",
      "peerName": "your-name",
      "workspace": "hermes"
    }
  }
}
```
Minimal `honcho.json` (self-hosted):

```json
{
  "baseUrl": "http://localhost:8000",
  "hosts": {
    "hermes": {
      "enabled": true,
      "aiPeer": "hermes",
      "peerName": "your-name",
      "workspace": "hermes"
    }
  }
}
```
:::tip Migrating from `hermes honcho`
If you previously used `hermes honcho setup`, your config and all server-side data are intact. Re-enable through the setup wizard, or manually set `memory.provider: honcho`, to reactivate via the new system.
:::
Multi-agent / Profiles:

Each Hermes profile gets its own Honcho AI peer while sharing the same workspace -- all profiles see the same user representation, but each agent builds its own identity and observations.

```bash
hermes profile create coder --clone   # creates honcho peer "coder", inherits config from default
```
What `--clone` does: creates a `hermes.coder` host block in `honcho.json` with `aiPeer: "coder"`, the shared workspace, and inherited `peerName`, `recallMode`, `writeFrequency`, `observation`, etc. The peer is eagerly created in Honcho so it exists before the first message.
For profiles created before Honcho was set up:
```bash
hermes honcho sync   # scans all profiles, creates host blocks for any missing ones
```
This inherits settings from the default `hermes` host block and creates new AI peers for each profile. Idempotent -- skips profiles that already have a host block.
Full `honcho.json` example (multi-profile):

```json
{
  "apiKey": "your-key",
  "workspace": "hermes",
  "peerName": "eri",
  "hosts": {
    "hermes": {
      "enabled": true,
      "aiPeer": "hermes",
      "workspace": "hermes",
      "peerName": "eri",
      "recallMode": "hybrid",
      "writeFrequency": "async",
      "sessionStrategy": "per-directory",
      "observation": {
        "user": { "observeMe": true, "observeOthers": true },
        "ai": { "observeMe": true, "observeOthers": true }
      },
      "dialecticReasoningLevel": "low",
      "dialecticDynamic": true,
      "dialecticMaxChars": 600,
      "messageMaxChars": 25000,
      "saveMessages": true
    },
    "hermes.coder": {
      "enabled": true,
      "aiPeer": "coder",
      "workspace": "hermes",
      "peerName": "eri",
      "recallMode": "tools",
      "observation": {
        "user": { "observeMe": true, "observeOthers": false },
        "ai": { "observeMe": true, "observeOthers": true }
      }
    },
    "hermes.writer": {
      "enabled": true,
      "aiPeer": "writer",
      "workspace": "hermes",
      "peerName": "eri"
    }
  },
  "sessions": {
    "/home/user/myproject": "myproject-main"
  }
}
```
See the config reference and Honcho integration guide.
### OpenViking
Context database by Volcengine (ByteDance) with filesystem-style knowledge hierarchy, tiered retrieval, and automatic memory extraction into 6 categories.
|   |   |
|---|---|
| Best for | Self-hosted knowledge management with structured browsing |
| Requires | `pip install openviking` + running server |
| Data storage | Self-hosted (local or cloud) |
| Cost | Free (open-source, AGPL-3.0) |
Tools: `viking_search` (semantic search), `viking_read` (tiered: abstract/overview/full), `viking_browse` (filesystem navigation), `viking_remember` (store facts), `viking_add_resource` (ingest URLs/docs)
Setup:

```bash
# Start the OpenViking server first
pip install openviking
openviking-server

# Then configure Hermes
hermes memory setup   # select "openviking"

# Or manually:
hermes config set memory.provider openviking
echo "OPENVIKING_ENDPOINT=http://localhost:1933" >> ~/.hermes/.env
```
Key features:
- Tiered context loading: L0 (~100 tokens) → L1 (~2k) → L2 (full)
- Automatic memory extraction on session commit (profile, preferences, entities, events, cases, patterns)
- `viking://` URI scheme for hierarchical knowledge browsing
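Tiered loading can be driven by a simple budget check, as in this sketch. The tier names follow `viking_read`'s abstract/overview/full modes; the thresholds are assumptions, not OpenViking's actual cutoffs:

```python
def pick_tier(budget_tokens: int) -> str:
    # Mirror the L0 (~100 tokens) -> L1 (~2k) -> L2 (full) ladder.
    # Thresholds are illustrative only.
    if budget_tokens < 2_000:
        return "abstract"   # L0: ~100-token summary
    if budget_tokens < 20_000:
        return "overview"   # L1: ~2k-token overview
    return "full"           # L2: full content
```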
### Mem0
Server-side LLM fact extraction with semantic search, reranking, and automatic deduplication.
|   |   |
|---|---|
| Best for | Hands-off memory management — Mem0 handles extraction automatically |
| Requires | `pip install mem0ai` + API key |
| Data storage | Mem0 Cloud |
| Cost | Mem0 pricing |
Tools: `mem0_profile` (all stored memories), `mem0_search` (semantic search + reranking), `mem0_conclude` (store verbatim facts)
Setup:

```bash
hermes memory setup   # select "mem0"

# Or manually:
hermes config set memory.provider mem0
echo "MEM0_API_KEY=your-key" >> ~/.hermes/.env
```
Config: `$HERMES_HOME/mem0.json`

| Key | Default | Description |
|---|---|---|
| `user_id` | `hermes-user` | User identifier |
| `agent_id` | `hermes` | Agent identifier |
### Hindsight

Long-term memory with knowledge graph, entity resolution, and multi-strategy retrieval. The `hindsight_reflect` tool provides cross-memory synthesis that no other provider offers.
|   |   |
|---|---|
| Best for | Knowledge graph-based recall with entity relationships |
| Requires | Cloud: `pip install hindsight-client` + API key. Local: `pip install hindsight` + LLM key |
| Data storage | Hindsight Cloud or local embedded PostgreSQL |
| Cost | Hindsight pricing (cloud) or free (local) |
Tools: `hindsight_retain` (store with entity extraction), `hindsight_recall` (multi-strategy search), `hindsight_reflect` (cross-memory synthesis)
Setup:

```bash
hermes memory setup   # select "hindsight"

# Or manually:
hermes config set memory.provider hindsight
echo "HINDSIGHT_API_KEY=your-key" >> ~/.hermes/.env
```
Config: `$HERMES_HOME/hindsight/config.json`

| Key | Default | Description |
|---|---|---|
| `mode` | `cloud` | `cloud` or `local` |
| `bank_id` | `hermes` | Memory bank identifier |
| `budget` | `mid` | Recall thoroughness: `low` / `mid` / `high` |
### Holographic
Local SQLite fact store with FTS5 full-text search, trust scoring, and HRR (Holographic Reduced Representations) for compositional algebraic queries.
|   |   |
|---|---|
| Best for | Local-only memory with advanced retrieval, no external dependencies |
| Requires | Nothing (SQLite is always available). NumPy optional for HRR algebra. |
| Data storage | Local SQLite |
| Cost | Free |
Tools: `fact_store` (9 actions: add, search, probe, related, reason, contradict, update, remove, list), `fact_feedback` (helpful/unhelpful rating that trains trust scores)
Setup:

```bash
hermes memory setup   # select "holographic"

# Or manually:
hermes config set memory.provider holographic
```
Config: `config.yaml` under `plugins.hermes-memory-store`

| Key | Default | Description |
|---|---|---|
| `db_path` | `$HERMES_HOME/memory_store.db` | SQLite database path |
| `auto_extract` | `false` | Auto-extract facts at session end |
| `default_trust` | `0.5` | Default trust score (0.0–1.0) |
Unique capabilities:

- `probe` — entity-specific algebraic recall (all facts about a person/thing)
- `reason` — compositional AND queries across multiple entities
- `contradict` — automated detection of conflicting facts
- Trust scoring with asymmetric feedback (+0.05 helpful / -0.10 unhelpful)
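The asymmetric trust update can be sketched as follows (the clamping to the documented 0.0–1.0 range is an assumption; the delta values come from the list above):

```python
def update_trust(score: float, helpful: bool) -> float:
    # Asymmetric feedback: +0.05 for helpful, -0.10 for unhelpful.
    # Clamping to the documented 0.0-1.0 trust range is assumed.
    delta = 0.05 if helpful else -0.10
    return max(0.0, min(1.0, score + delta))
```

Starting from the `default_trust` of 0.5, two unhelpful ratings outweigh three helpful ones, so a fact must keep earning positive feedback to hold its score.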
### RetainDB
Cloud memory API with hybrid search (Vector + BM25 + Reranking), 7 memory types, and delta compression.
|   |   |
|---|---|
| Best for | Teams already using RetainDB's infrastructure |
| Requires | RetainDB account + API key |
| Data storage | RetainDB Cloud |
| Cost | $20/month |
Tools: `retaindb_profile` (user profile), `retaindb_search` (semantic search), `retaindb_context` (task-relevant context), `retaindb_remember` (store with type + importance), `retaindb_forget` (delete memories)
Setup:

```bash
hermes memory setup   # select "retaindb"

# Or manually:
hermes config set memory.provider retaindb
echo "RETAINDB_API_KEY=your-key" >> ~/.hermes/.env
```
### ByteRover

Persistent memory via the `brv` CLI — hierarchical knowledge tree with tiered retrieval (fuzzy text → LLM-driven search). Local-first with optional cloud sync.
|   |   |
|---|---|
| Best for | Developers who want portable, local-first memory with a CLI |
| Requires | ByteRover CLI (`npm install -g byterover-cli` or install script) |
| Data storage | Local (default) or ByteRover Cloud (optional sync) |
| Cost | Free (local) or ByteRover pricing (cloud) |
Tools: `brv_query` (search knowledge tree), `brv_curate` (store facts/decisions/patterns), `brv_status` (CLI version + tree stats)
Setup:

```bash
# Install the CLI first
curl -fsSL https://byterover.dev/install.sh | sh

# Then configure Hermes
hermes memory setup   # select "byterover"

# Or manually:
hermes config set memory.provider byterover
```
Key features:

- Automatic pre-compression extraction (saves insights before context compression discards them)
- Knowledge tree stored at `$HERMES_HOME/byterover/` (profile-scoped)
- SOC2 Type II certified cloud sync (optional)
### Supermemory
Semantic long-term memory with profile recall, semantic search, explicit memory tools, and session-end conversation ingest via the Supermemory graph API.
|   |   |
|---|---|
| Best for | Semantic recall with user profiling and session-level graph building |
| Requires | `pip install supermemory` + API key |
| Data storage | Supermemory Cloud |
| Cost | Supermemory pricing |
Tools: `supermemory_store` (save explicit memories), `supermemory_search` (semantic similarity search), `supermemory_forget` (forget by ID or best-match query), `supermemory_profile` (persistent profile + recent context)
Setup:

```bash
hermes memory setup   # select "supermemory"

# Or manually:
hermes config set memory.provider supermemory
echo 'SUPERMEMORY_API_KEY=your-key-here' >> ~/.hermes/.env
```
Config: `$HERMES_HOME/supermemory.json`

| Key | Default | Description |
|---|---|---|
| `container_tag` | `hermes` | Container tag used for search and writes |
| `auto_recall` | `true` | Inject relevant memory context before turns |
| `auto_capture` | `true` | Store cleaned user-assistant turns after each response |
| `max_recall_results` | `10` | Max recalled items to format into context |
| `profile_frequency` | `50` | Include profile facts on first turn and every N turns |
| `capture_mode` | `all` | Skip tiny or trivial turns by default |
| `api_timeout` | `5.0` | Timeout for SDK and ingest requests |
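The `profile_frequency` cadence might look like this sketch (the semantics are inferred from "first turn and every N turns"; the function is hypothetical):

```python
def include_profile(turn: int, profile_frequency: int = 50) -> bool:
    # Profile facts go in on the first turn, then every N turns after that.
    # Assumed interpretation of the profile_frequency setting.
    return turn == 1 or turn % profile_frequency == 0
```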
Key features:
- Automatic context fencing — strips recalled memories from captured turns to prevent recursive memory pollution
- Session-end conversation ingest for richer graph-level knowledge building
- Profile facts injected on first turn and at configurable intervals
- Trivial message filtering (skips "ok", "thanks", etc.)
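Context fencing could work along these lines: injected memories are wrapped in markers, and anything inside the markers is stripped before the turn is captured. The marker tags here are hypothetical, not Supermemory's actual format:

```python
import re

# Recalled memories are wrapped in markers when injected, then stripped
# before capture -- so recalled memories never re-ingest themselves.
FENCE = re.compile(r"<recalled-memories>.*?</recalled-memories>\s*", re.DOTALL)

def strip_recalled_context(turn_text: str) -> str:
    return FENCE.sub("", turn_text)
```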
## Provider Comparison

| Provider | Storage | Cost | Tools | Dependencies | Unique Feature |
|---|---|---|---|---|---|
| Honcho | Cloud | Paid | 4 | `honcho-ai` | Dialectic user modeling |
| OpenViking | Self-hosted | Free | 5 | `openviking` + server | Filesystem hierarchy + tiered loading |
| Mem0 | Cloud | Paid | 3 | `mem0ai` | Server-side LLM extraction |
| Hindsight | Cloud/Local | Free/Paid | 3 | `hindsight-client` | Knowledge graph + reflect synthesis |
| Holographic | Local | Free | 2 | None | HRR algebra + trust scoring |
| RetainDB | Cloud | $20/mo | 5 | `requests` | Delta compression |
| ByteRover | Local/Cloud | Free/Paid | 3 | `brv` CLI | Pre-compression extraction |
| Supermemory | Cloud | Paid | 4 | `supermemory` | Context fencing + session graph ingest |
## Profile Isolation

Each provider's data is isolated per profile:

- Local storage providers (Holographic, ByteRover) use `$HERMES_HOME/` paths, which differ per profile
- Config file providers (Honcho, Mem0, Hindsight, Supermemory) store config in `$HERMES_HOME/`, so each profile has its own credentials
- Cloud providers (RetainDB) auto-derive profile-scoped project names
- Env var providers (OpenViking) are configured via each profile's `.env` file
## Building a Memory Provider
See the Developer Guide: Memory Provider Plugins for how to create your own.