Comparison of Hermes Agent vs. openclaw-honcho — and a porting spec for bringing Hermes patterns into other Honcho integrations.
Two independent Honcho integrations have been built for two different agent runtimes: Hermes Agent (Python, baked into the runner) and openclaw-honcho (TypeScript plugin via hook/tool API). Both use the same Honcho peer paradigm — dual peer model, session.context(), peer.chat() — but they made different tradeoffs at every layer.
This document maps those tradeoffs and defines a porting spec: a set of Hermes-originated patterns, each stated as an integration-agnostic interface, that any Honcho integration can adopt regardless of runtime or language.
Honcho is initialised directly inside AIAgent.__init__. There is no plugin boundary. Session management, context injection, async prefetch, and CLI surface are all first-class concerns of the runner. Context is injected once per session (baked into _cached_system_prompt) and never re-fetched mid-session — this maximises prefix cache hits at the LLM provider.
The plugin registers hooks against OpenClaw's event bus. Context is fetched synchronously inside before_prompt_build on every turn. Message capture happens in agent_end. The multi-agent hierarchy is tracked via subagent_spawned. This model is correct, but every turn pays a blocking Honcho round-trip before the LLM call can begin.
| Dimension | Hermes Agent | openclaw-honcho |
|---|---|---|
| Context injection timing | Once per session (cached). Zero HTTP on response path after turn 1. | Every turn, blocking. Fresh context per turn but adds latency. |
| Prefetch strategy | Daemon threads fire at turn end; consumed next turn from cache. | None. Blocking call at prompt-build time. |
| Dialectic (peer.chat) | Prefetched async; result injected into system prompt next turn. | On-demand via honcho_recall / honcho_analyze tools. |
| Reasoning level | Dynamic: scales with message length. Floor = config default. Cap = "high". | Fixed per tool: recall=minimal, analyze=medium. |
| Memory modes | user_memory_mode / agent_memory_mode: hybrid / honcho / local. | None. Always writes to Honcho. |
| Write frequency | async (background queue), turn, session, N turns. | After every agent_end (no control). |
| AI peer identity | observe_me=True, seed_ai_identity(), get_ai_representation(), SOUL.md → AI peer. | Agent files uploaded to agent peer at setup. No ongoing self-observation seeding. |
| Context scope | User peer + AI peer representation, both injected. | User peer (owner) representation + conversation summary. peerPerspective on context call. |
| Session naming | per-directory / global / manual map / title-based. | Derived from platform session key. |
| Multi-agent | Single-agent only. | Parent observer hierarchy via subagent_spawned. |
| Tool surface | Single query_user_context tool (on-demand dialectic). | 6 tools: session, profile, search, context (fast) + recall, analyze (LLM). |
| Platform metadata | Not stripped. | Explicitly stripped before Honcho storage. |
| Message dedup | None (sends on every save cycle). | lastSavedIndex in session metadata prevents re-sending. |
| CLI surface in prompt | Management commands injected into system prompt. Agent knows its own CLI. | Not injected. |
| AI peer name in identity | Replaces "Hermes Agent" in DEFAULT_AGENT_IDENTITY when configured. | Not implemented. |
| QMD / local file search | Not implemented. | Passthrough tools when QMD backend configured. |
| Workspace metadata | Not implemented. | agentPeerMap in workspace metadata tracks agent→peer ID. |
Six patterns from Hermes are worth adopting in any Honcho integration. They are described below as integration-agnostic interfaces — the implementation will differ per runtime, but the contract is the same.
Calling session.context() and peer.chat() synchronously before each LLM call adds 200–800ms of Honcho round-trip latency to every turn. Users experience this as the agent "thinking slowly."
Fire both calls as non-blocking background work at the end of each turn. Store results in a per-session cache keyed by session ID. At the start of the next turn, pop from cache — the HTTP is already done. First turn is cold (empty cache); all subsequent turns are zero-latency on the response path.
// TypeScript (openclaw / nanobot plugin shape)
interface AsyncPrefetch {
// Fire context + dialectic fetches at turn end. Non-blocking.
firePrefetch(sessionId: string, userMessage: string): void;
// Pop cached results at turn start. Returns empty if cache is cold.
popContextResult(sessionId: string): ContextResult | null;
popDialecticResult(sessionId: string): string | null;
}
type ContextResult = {
representation: string;
card: string[];
aiRepresentation?: string; // AI peer context if enabled
summary?: string; // conversation summary if fetched
};
- Python: `threading.Thread(daemon=True)`. Write to `dict[session_id, result]` — the GIL makes this safe for simple writes.
- TypeScript: `Promise` stored in `Map<string, Promise<ContextResult>>`. Await at pop time. If not resolved yet, skip (return null) — do not block.
- openclaw-honcho port: move `session.context()` from `before_prompt_build` to a post-`agent_end` background task. Store the result in `state.contextCache`. In `before_prompt_build`, read from the cache instead of calling Honcho. If the cache is empty (turn 1), inject nothing — the prompt is still valid without Honcho context on the first turn.
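A minimal sketch of the TypeScript approach: fire the fetch as a Promise at turn end and park the settled value in a plain Map, so the next turn's pop never blocks. The names (`PrefetchCache`, `Fetcher`) and the stand-in fetcher are illustrative, not part of the Honcho SDK.

```typescript
type ContextResult = { representation: string; card: string[] };
// Stand-in for the real session.context() call — an assumption for this sketch.
type Fetcher = (sessionId: string) => Promise<ContextResult>;

class PrefetchCache {
  private settled = new Map<string, ContextResult>();
  constructor(private fetch: Fetcher) {}

  // Fire-and-forget at turn end; the result lands in the cache on resolution.
  firePrefetch(sessionId: string): void {
    this.fetch(sessionId)
      .then((r) => this.settled.set(sessionId, r))
      .catch(() => {}); // a failed prefetch just means a cold pop next turn
  }

  // Non-blocking pop at turn start: null if still in flight or never fired.
  popContextResult(sessionId: string): ContextResult | null {
    const r = this.settled.get(sessionId) ?? null;
    this.settled.delete(sessionId);
    return r;
  }
}
```

The pop deliberately returns null rather than awaiting an in-flight Promise: a slow prefetch degrades to a cold turn instead of blocking the response path.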
Honcho's dialectic endpoint supports reasoning levels from minimal to max. A fixed level per tool wastes budget on simple queries and under-serves complex ones.
Select the reasoning level dynamically based on the user's message. Use the configured default as a floor. Bump by message length. Cap auto-selection at high — never select max automatically.
// Shared helper — identical logic in any language
const LEVELS = ["minimal", "low", "medium", "high", "max"];
function dynamicReasoningLevel(
query: string,
configDefault: string = "low"
): string {
const baseIdx = Math.max(0, LEVELS.indexOf(configDefault));
const n = query.length;
const bump = n < 120 ? 0 : n < 400 ? 1 : 2;
return LEVELS[Math.min(baseIdx + bump, 3)]; // cap at "high" (idx 3)
}
Add a dialecticReasoningLevel config field (string, default "low"). This sets the floor. Users can raise or lower it. The dynamic bump always applies on top.
Apply in honcho_recall and honcho_analyze: replace the fixed reasoningLevel with the dynamic selector. honcho_recall should use floor "minimal" and honcho_analyze floor "medium" — both still bump with message length.
Users want independent control over whether user context and agent context are written locally, to Honcho, or both. A single memoryMode shorthand is not granular enough.
Three modes per peer: hybrid (write both local + Honcho), honcho (Honcho only, disable local files), local (local files only, skip Honcho sync for this peer). Two orthogonal axes: user peer and agent peer.
// ~/.openclaw/openclaw.json (or ~/.nanobot/config.json)
{
"plugins": {
"openclaw-honcho": {
"config": {
"apiKey": "...",
"memoryMode": "hybrid", // shorthand: both peers
"userMemoryMode": "honcho", // override for user peer
"agentMemoryMode": "hybrid" // override for agent peer
}
}
}
}
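The precedence among these fields (per-peer override wins over the `memoryMode` shorthand, with `"hybrid"` as the final default) can be sketched as a small resolver; the field names match the config example above, while `resolveMode` itself is illustrative:

```typescript
type MemoryMode = "hybrid" | "honcho" | "local";

// Hypothetical config shape mirroring the JSON example above.
interface MemoryConfig {
  memoryMode?: MemoryMode;     // shorthand: both peers
  userMemoryMode?: MemoryMode; // per-peer override
  agentMemoryMode?: MemoryMode;
}

// Per-peer override wins, then the shorthand, then the "hybrid" default.
function resolveMode(cfg: MemoryConfig, peer: "user" | "agent"): MemoryMode {
  const override = peer === "user" ? cfg.userMemoryMode : cfg.agentMemoryMode;
  return override ?? cfg.memoryMode ?? "hybrid";
}
```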
Resolution order:

1. Per-peer override (`userMemoryMode` / `agentMemoryMode`) — wins if present.
2. Shorthand `memoryMode` — applies to both peers as default.
3. Default: `"hybrid"`.

Behavior:

- `userMemoryMode=local`: skip adding user peer messages to Honcho.
- `agentMemoryMode=local`: skip adding assistant peer messages to Honcho.
- Both peers local: skip `session.addMessages()` entirely.
- `userMemoryMode=honcho`: disable local USER.md writes.
- `agentMemoryMode=honcho`: disable local MEMORY.md / SOUL.md writes.

Honcho builds the user's representation organically by observing what the user says. The same mechanism exists for the AI peer — but only if observe_me=True is set for the agent peer. Without it, the agent peer accumulates nothing and Honcho's AI-side model never forms.
Additionally, existing persona files (SOUL.md, IDENTITY.md) should seed the AI peer's Honcho representation at first activation, rather than waiting for it to emerge from scratch.
// TypeScript — in session.addPeers() call
await session.addPeers([
[ownerPeer.id, { observeMe: true, observeOthers: false }],
[agentPeer.id, { observeMe: true, observeOthers: true }], // was false
]);
This is a one-line change but foundational. Without it, Honcho's AI peer representation stays empty regardless of what the agent says.
async function seedAiIdentity(
session: HonchoSession,
agentPeer: Peer,
content: string,
source: string
): Promise<boolean> {
const wrapped = [
`<ai_identity_seed>`,
`<source>${source}</source>`,
``,
content.trim(),
`</ai_identity_seed>`,
].join("\n");
await agentPeer.addMessage("assistant", wrapped);
return true;
}
During openclaw honcho setup, upload agent-self files (SOUL.md, IDENTITY.md, AGENTS.md, BOOTSTRAP.md) to the agent peer using seedAiIdentity() instead of session.uploadFile(). This routes the content through Honcho's observation pipeline rather than the file store.
When the agent has a configured name (non-default), inject it into the agent's self-identity prefix. In OpenClaw this means adding to the injected system prompt section:
// In context hook return value
return {
systemPrompt: [
agentName ? `You are ${agentName}.` : "",
"## User Memory Context",
...sections,
].filter(Boolean).join("\n\n")
};
openclaw honcho identity <file> # seed from file
openclaw honcho identity --show # show current AI peer representation
When Honcho is used across multiple projects or directories, a single global session means every project shares the same context. Per-directory sessions provide isolation without requiring users to name sessions manually.
| Strategy | Session key | When to use |
|---|---|---|
| per-directory | basename of CWD | Default. Each project gets its own session. |
| global | fixed string "global" | Single cross-project session. |
| manual map | user-configured per path | sessions config map overrides directory basename. |
| title-based | sanitized session title | When agent supports named sessions; title set mid-conversation. |
{
"sessionStrategy": "per-directory", // "per-directory" | "global"
"sessionPeerPrefix": false, // prepend peer name to session key
"sessions": { // manual overrides
"/home/user/projects/foo": "foo-project"
}
}
openclaw honcho sessions # list all mappings
openclaw honcho map <name> # map cwd to session name
openclaw honcho map # no-arg = list mappings
Resolution order: manual map wins → session title → directory basename → platform key.
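That resolution order can be sketched as a single function, assuming the config shape from the example above; `resolveSessionKey` and its title sanitisation are illustrative, not an existing API:

```typescript
interface SessionConfig {
  sessionStrategy: "per-directory" | "global";
  sessions?: Record<string, string>; // manual path → session-name overrides
}

function resolveSessionKey(
  cfg: SessionConfig,
  cwd: string,
  sessionTitle?: string,
  platformKey?: string
): string {
  // 1. Manual map wins.
  if (cfg.sessions && cfg.sessions[cwd]) return cfg.sessions[cwd];
  // 2. Sanitized session title, when the agent supports named sessions.
  if (sessionTitle) {
    return sessionTitle.toLowerCase().replace(/[^a-z0-9]+/g, "-").replace(/^-+|-+$/g, "");
  }
  // 3. Strategy: fixed key for "global", directory basename otherwise.
  if (cfg.sessionStrategy === "global") return "global";
  const base = cwd.split("/").filter(Boolean).pop();
  // 4. Fall back to the platform session key.
  return base ?? platformKey ?? "global";
}
```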
When a user asks "how do I change my memory settings?" or "what Honcho commands are available?" the agent either hallucinates or says it doesn't know. The agent should know its own management interface.
When Honcho is active, append a compact command reference to the system prompt. The agent can cite these commands directly instead of guessing.
// In context hook, append to systemPrompt
const honchoSection = [
"# Honcho memory integration",
`Active. Session: ${sessionKey}. Mode: ${mode}.`,
"Management commands:",
" openclaw honcho status — show config + connection",
" openclaw honcho mode [hybrid|honcho|local] — show or set memory mode",
" openclaw honcho sessions — list session mappings",
" openclaw honcho map <name> — map directory to session",
" openclaw honcho identity [file] [--show] — seed or show AI identity",
" openclaw honcho setup — full interactive wizard",
].join("\n");
Ordered by impact. Each item maps to a spec section above.
1. Move `session.context()` out of `before_prompt_build` into a post-`agent_end` background Promise. Pop from cache at prompt build. (spec)
2. Enable observe_me in the `session.addPeers()` config for the agent peer. (spec)
3. Add the `dynamicReasoningLevel()` helper; apply in `honcho_recall` and `honcho_analyze`. Add `dialecticReasoningLevel` to the config schema. (spec)
4. Add `userMemoryMode` / `agentMemoryMode` to config; gate Honcho sync and local writes accordingly. (spec)
5. Seed AI identity via `seedAiIdentity()` instead of `session.uploadFile()`. (spec)
6. Add `sessionStrategy`, `sessions` map, `sessionPeerPrefix` to config; implement the resolution function. (spec)
7. Inject the CLI command reference into the `before_prompt_build` return value when Honcho is active. (spec)
8. Add the `openclaw honcho identity` CLI command. (spec)
9. When an `aiPeer` name is configured, prepend it to the injected system prompt. (spec)

nanobot-honcho is a greenfield integration. Start from openclaw-honcho's architecture (hook-based, dual peer) and apply all Hermes patterns from day one rather than retrofitting. Priority order:
1. `observe_me=True`
2. `lastSavedIndex` dedup
3. AI identity seeding (`seedAiIdentity()`)
4. Config: `apiKey`, `workspaceId`, `baseUrl`, `memoryMode`, `userMemoryMode`, `agentMemoryMode`, `dialecticReasoningLevel`, `sessionStrategy`, `sessions`
5. Tools: `honcho_profile`, `honcho_recall`, `honcho_analyze`, `honcho_search`, `honcho_context`
6. CLI: `setup`, `status`, `sessions`, `map`, `mode`, `identity`
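The `lastSavedIndex` dedup pattern can be sketched as a pure helper (the names and shapes here are illustrative; the watermark would be persisted in Honcho session metadata, as openclaw-honcho does):

```typescript
interface SessionMeta {
  lastSavedIndex?: number; // watermark: how many messages are already in Honcho
}

// Return only the messages past the saved watermark, plus the updated
// metadata to persist back after a successful save cycle.
function unsavedMessages<T>(
  all: T[],
  meta: SessionMeta
): { toSave: T[]; nextMeta: SessionMeta } {
  const start = meta.lastSavedIndex ?? 0;
  return { toSave: all.slice(start), nextMeta: { lastSavedIndex: all.length } };
}
```

Writing the new watermark only after the save succeeds keeps the scheme crash-safe: a failed save re-sends the same slice next cycle rather than dropping it.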