When streaming was enabled, two visual feedback mechanisms were
completely suppressed:
1. The thinking spinner (TUI toolbar) was skipped because the entire
spinner block was gated on 'not self._has_stream_consumers()'.
Now the thinking_callback fires in streaming mode too — the
raw KawaiiSpinner is still skipped (would conflict with streamed
tokens) but the TUI toolbar widget works fine alongside streaming.
2. Tool progress lines (the ┊ feed) were invisible because _vprint
was blanket-suppressed when stream consumers existed. But during
tool execution, no tokens are actively streaming, so printing is
safe. Added an _executing_tools flag that _vprint respects to
allow output during tool execution even with stream consumers
registered.
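A minimal sketch of the relaxed gating, with illustrative names around the
_executing_tools flag and the _vprint/_has_stream_consumers methods from this
message:

    class AgentOutput:
        """Sketch only; the real class and method bodies differ."""
        def __init__(self):
            self._stream_consumers = []
            self._executing_tools = False  # new flag from this fix

        def _has_stream_consumers(self) -> bool:
            return bool(self._stream_consumers)

        def _vprint(self, *args, **kwargs):
            # Suppress output while tokens stream; during tool execution no
            # tokens are being written, so the ┊ progress feed is safe.
            if self._has_stream_consumers() and not self._executing_tools:
                return
            print(*args, **kwargs)

        def run_tools(self, calls):
            self._executing_tools = True
            try:
                for call in calls:
                    self._vprint(f"┊ {call}")
            finally:
                self._executing_tools = False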
Support Signal 'Note to Self' messages in single-number setups where
signal-cli is linked as a secondary device on the user's own account.
syncMessage.sentMessage envelopes addressed to the bot's own account
are now promoted to dataMessage for normal processing, while other
sync events (read receipts, typing, etc.) are still filtered.
Echo-back prevention mirrors the WhatsApp bridge pattern:
- Track timestamps of recently sent messages (bounded set of 50)
- When a Note to Self sync arrives, check if its timestamp matches
a recent outbound — skip if so (agent echo-back)
- Only process sync messages that are genuinely user-initiated
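A minimal sketch of that pattern (the bridge internals are assumptions; only
the bounded timestamp set and the match-and-skip check come from this message):

    from collections import deque

    class EchoGuard:
        def __init__(self, maxlen: int = 50):
            # Bounded: the 51st outbound timestamp evicts the oldest.
            self._sent = deque(maxlen=maxlen)

        def record_outbound(self, timestamp: int) -> None:
            self._sent.append(timestamp)

        def is_echo(self, timestamp: int) -> bool:
            # A Note to Self sync matching a recent outbound is the agent's
            # own reply echoed back by Signal; skip it.
            return timestamp in self._sent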
Based on PR #2115 by @Stonelinks with added echo-back protection.
Show complete session IDs in 'hermes sessions list' instead of
truncating to 20 characters. Widens title column from 20→30 chars
and adjusts header widths accordingly.
Fixes #2068. Based on PR #2085 by @Nebula037 with a correction
to preserve the no-titles layout (the original PR accidentally
replaced the Preview/Src header with a duplicate Title/Preview header).
Two issues with /model preventing proper provider switching:
1. Bare provider names not detected: typing '/model nous' treated 'nous'
as a model name instead of triggering a provider switch. Fixed by adding
step 0 in detect_provider_for_model() that checks whether the input matches
a known provider name/alias (excluding 'custom'/'openrouter', which need
explicit model names) and returns that provider's default model (see the
sketch after this list).
2. Custom endpoint details hidden: /model (no args) showed '[custom]' with
just a usage hint but no endpoint URL or model name. Now displays the
configured base_url for custom providers in both CLI and gateway.
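A minimal sketch of step 0 (the registry contents and function shape are
illustrative, not the actual code):

    PROVIDER_DEFAULT_MODEL = {
        # Placeholder values; the real defaults live in the provider registry.
        "nous": "nous-default-model",
        "anthropic": "anthropic-default-model",
    }

    def detect_provider_for_model(name: str):
        key = name.strip().lower()
        # Step 0: a bare provider name/alias triggers a provider switch.
        # 'custom' and 'openrouter' are excluded; they need explicit models.
        if key in PROVIDER_DEFAULT_MODEL and key not in ("custom", "openrouter"):
            return key, PROVIDER_DEFAULT_MODEL[key]
        # ... existing steps: exact model names, prefix heuristics, etc.
        return None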
Note: config base_url and OPENAI_BASE_URL are intentionally NOT cleared on
provider switch — dedicated provider paths (nous, anthropic, codex) have
their own credential resolution that ignores these, and clearing them would
destroy the user's custom endpoint config, preventing switching back.
Co-authored-by: Test <test@test.com>
Previously, Tab only handled dropdown completions. Users seeing gray
ghost text from history-based suggestions had no way to accept them
with Tab - they had to use Right arrow or Ctrl+E.
Now Tab follows priority:
1. Completion menu open → accept selected completion
2. Ghost text suggestion available → accept auto-suggestion
3. Otherwise → start completion menu
This matches user intuition that Tab should 'complete what I see.'
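A minimal sketch of the priority chain, assuming a prompt_toolkit key binding
(the handler body is illustrative):

    from prompt_toolkit.key_binding import KeyBindings

    kb = KeyBindings()

    @kb.add("tab")
    def _(event):
        buf = event.current_buffer
        if buf.complete_state and buf.complete_state.current_completion:
            # 1. Completion menu open -> accept the selected completion
            buf.apply_completion(buf.complete_state.current_completion)
        elif buf.suggestion:
            # 2. Gray ghost text visible -> accept the auto-suggestion
            buf.insert_text(buf.suggestion.text)
        else:
            # 3. Otherwise -> start the completion menu
            buf.start_completion(select_first=False)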
* fix(codex): treat reasoning-only responses as incomplete, not stop
When a Codex Responses API response contains only reasoning items
(encrypted thinking state) with no message text or tool calls, the
_normalize_codex_response method was setting finish_reason='stop'.
This sent the response into the empty-content retry loop, which
burned 3 retries and then failed — exactly the pattern Nester
reported in Discord.
Two fixes:
1. _normalize_codex_response: reasoning-only responses (reasoning_items_raw
non-empty but no final_text) now get finish_reason='incomplete', routing
them to the Codex continuation path instead of the retry loop (sketched below).
2. Incomplete handling: also checks for codex_reasoning_items when deciding
whether to preserve an interim message, so encrypted reasoning state is
not silently dropped when there is no visible reasoning text.
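A minimal sketch of fix 1's decision, with locals standing in for fields
parsed from the Responses API payload:

    def finish_reason_for(final_text: str, tool_calls: list,
                          reasoning_items_raw: list) -> str:
        # Reasoning-only: encrypted thinking state but nothing visible.
        # 'incomplete' routes to the Codex continuation path instead of
        # the empty-content retry loop.
        if reasoning_items_raw and not final_text and not tool_calls:
            return "incomplete"
        return "stop"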
Adds 4 regression tests covering:
- Unit: reasoning-only → incomplete, reasoning+content → stop
- E2E: reasoning-only → continuation → final answer succeeds
- E2E: encrypted reasoning items preserved in interim messages
* fix(codex): ensure reasoning items have required following item in API input
Follow-up to the reasoning-only response fix. Three additional issues
found by tracing the full replay path:
1. _chat_messages_to_responses_input: when a reasoning-only interim
message was converted to Responses API input, the reasoning items
were emitted as the last items with no following item. The Responses
API requires a following item after each reasoning item (otherwise:
'missing_following_item' error, as seen in OpenHands #11406). Now
emits an empty assistant message as the required following item when
content is empty but reasoning items were added (see the sketch after this list).
2. Duplicate detection: two consecutive reasoning-only incomplete
messages with identical empty content/reasoning but different
encrypted codex_reasoning_items were incorrectly treated as
duplicates, silently dropping the second response's reasoning state.
Now includes codex_reasoning_items in the duplicate comparison.
3. Added tests for both the API input conversion path and the duplicate
detection edge case.
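Minimal sketches of fixes 1 and 2 (the item shapes follow the Responses API;
everything else is illustrative):

    def emit_assistant(items: list, reasoning_items: list, content: list) -> None:
        items.extend(reasoning_items)  # encrypted reasoning state first
        if content:
            items.append({"type": "message", "role": "assistant",
                          "content": content})
        elif reasoning_items:
            # The API requires a following item after each reasoning item;
            # an empty assistant message satisfies the constraint.
            items.append({"type": "message", "role": "assistant", "content": []})

    def is_duplicate(a: dict, b: dict) -> bool:
        # Include the encrypted reasoning items so two reasoning-only
        # messages with different state are not collapsed into one.
        return (a.get("content") == b.get("content")
                and a.get("codex_reasoning_items") == b.get("codex_reasoning_items"))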
Research context: verified against OpenCode (uses Vercel AI SDK, no
retry loop so avoids the issue), Clawdbot (drops orphaned reasoning
blocks entirely), and OpenHands (hit the missing_following_item error).
Our approach preserves reasoning continuity while satisfying the API
constraint.
---------
Co-authored-by: Test <test@test.com>
* fix: persist ACP sessions to disk so they survive process restarts
The ACP adapter stored sessions entirely in-memory. When the editor
restarted the ACP subprocess (idle timeout, crash, system sleep/wake,
editor restart), all sessions were lost. The editor's load_session /
resume_session calls would fail to find the session, forcing a new
empty session and losing all conversation history.
Changes:
- SessionManager now persists each session as a JSON file under
~/.hermes/acp_sessions/<session_id>.json
- get_session() transparently restores from disk when not in memory
- update_cwd(), fork_session(), list_sessions() all check disk
- server.py calls save_session() after prompt completion, /reset,
/compact, and model switches
- cleanup() and remove_session() delete disk files too
- Sessions have a 7-day TTL; expired sessions are pruned on startup
- Atomic writes via tempfile + os.replace to prevent corruption
- 11 new tests covering persistence, disk restoration, and TTL expiry
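The atomic write is the standard tempfile + os.replace pattern; a minimal
sketch (the SessionManager internals are assumptions):

    import json
    import os
    import tempfile
    from pathlib import Path

    def save_session(path: Path, session: dict) -> None:
        path.parent.mkdir(parents=True, exist_ok=True)
        # Write to a temp file in the same directory, then atomically swap
        # it in, so readers never see a half-written JSON file.
        fd, tmp = tempfile.mkstemp(dir=path.parent, suffix=".tmp")
        try:
            with os.fdopen(fd, "w") as f:
                json.dump(session, f)
            os.replace(tmp, path)  # atomic rename on the same filesystem
        except BaseException:
            os.unlink(tmp)
            raise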
* refactor: use SessionDB instead of JSON files for ACP session persistence
Replace the standalone JSON file persistence layer with SessionDB
(~/.hermes/state.db) integration. ACP sessions now:
- Share the same DB as CLI and gateway sessions
- Are searchable via session_search (FTS5)
- Get token tracking, cost tracking, and session titles for free
- Follow existing session pruning policies
Key changes:
- _get_db() lazily creates a SessionDB, resolving HERMES_HOME
dynamically (not at import time) for test compatibility (sketched below)
- _persist() creates session record + replaces messages in DB
- _restore() loads from DB with source='acp' filter
- cwd stored in model_config JSON field (no schema migration)
- Model values coerced to str to handle mock agents in tests
- Removed: json files, sessions_dir, ttl_days, _expire logic
- Tests updated: DB-backed persistence, FTS search, tool_call
round-tripping, source filtering
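A minimal sketch of the lazy DB accessor (the SessionDB import path and
constructor are assumptions):

    import os
    from pathlib import Path

    from hermes.state import SessionDB  # illustrative import path

    class SessionManager:
        def __init__(self):
            self._db = None

        def _get_db(self) -> SessionDB:
            # Resolve HERMES_HOME at call time, not import time, so tests
            # that repoint HERMES_HOME get their own state.db.
            if self._db is None:
                home = Path(os.environ.get("HERMES_HOME",
                                           str(Path.home() / ".hermes")))
                self._db = SessionDB(home / "state.db")  # shared with CLI/gateway
            return self._db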
---------
Co-authored-by: Test <test@test.com>
* fix: prevent unavailable tool names from leaking into model schemas
When web_search/web_extract fail check_fn (no API key configured), their
names were still leaking into tool descriptions via two paths:
1. execute_code schema: sandbox_enabled was computed from tools_to_include
(pre-filter) instead of the actual available tools (post-filter), so
the execute_code description listed web_search/web_extract as available
sandbox imports even when they weren't.
2. browser_navigate schema: hardcoded description said 'prefer web_search
or web_extract' regardless of whether those tools existed.
The model saw these references, assumed the tools existed, and tried
calling them directly — triggering 'Unknown tool' errors.
Fix: compute available_tool_names from the filtered result set and use
that for both execute_code sandbox listing and browser_navigate description
patching.
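A minimal sketch of that post-filter pass (the tool dict shapes and wording
are illustrative):

    def patch_cross_tool_references(filtered_tools: list[dict]) -> None:
        # Names must come from the post-filter result, not the pre-filter
        # candidates, or the model is told about tools it cannot call.
        available = {t["name"] for t in filtered_tools}
        web_tools = sorted({"web_search", "web_extract"} & available)
        for tool in filtered_tools:
            if tool["name"] == "execute_code" and web_tools:
                tool["description"] += (
                    f" Available sandbox imports: {', '.join(web_tools)}.")
            elif tool["name"] == "browser_navigate" and web_tools:
                tool["description"] += (
                    f" Prefer {' or '.join(web_tools)} for plain content.")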
* docs: add pitfall about cross-tool references in schema descriptions
---------
Co-authored-by: Test <test@test.com>
Authored by Hanai. Allows overriding the OpenAI TTS endpoint via
tts.openai.base_url in config.yaml for self-hosted or OpenAI-compatible
TTS services. Falls back to api.openai.com when not set.
Authored by Lovre Pešut (rovle). Migrates from deprecated find_one(labels=...)
to get(sandbox_name) with deterministic naming (hermes-{task_id}), plus legacy
fallback via list(labels=...) for pre-migration sandboxes.
When a cron job references a skill that is no longer installed,
_build_job_prompt() now logs a warning and injects a user-visible notice
into the prompt instead of raising RuntimeError. The job continues with
any remaining valid skills and the user prompt.
Adds 4 regression tests for missing skill handling.
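A minimal sketch of the degraded path (names are illustrative):

    import logging

    def build_job_prompt(installed: dict, requested: list[str],
                         user_prompt: str) -> str:
        parts, missing = [], []
        for name in requested:
            if name in installed:
                parts.append(installed[name])
            else:
                logging.warning("cron job references missing skill %r", name)
                missing.append(name)
        if missing:
            # User-visible notice instead of RuntimeError; the job continues.
            parts.append(f"[Note: skill(s) {', '.join(missing)} are no longer "
                         "installed and were skipped.]")
        parts.append(user_prompt)
        return "\n\n".join(parts)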
find_one is being deprecated. Primary lookup now uses get() with a
deterministic sandbox name (hermes-{task_id}). A legacy fallback via
list(labels=...) ensures sandboxes created before this migration are
still resumable.
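A minimal sketch of the lookup order (the client object and its error type
stand in for the sandbox SDK's actual API):

    def find_sandbox(client, task_id: str):
        name = f"hermes-{task_id}"  # deterministic name, primary path
        try:
            return client.get(name)
        except Exception:  # in practice, the SDK's not-found error
            pass
        # Legacy fallback: pre-migration sandboxes only carry labels.
        matches = client.list(labels={"task_id": task_id})
        return matches[0] if matches else None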
Authored by dusterbloom. Closes #1911.
Pre-computes SQL query strings at class definition time in insights.py,
adds identifier quoting for ALTER TABLE DDL in hermes_state.py, and adds
4 regression tests verifying query construction safety.
The merge at e7844e9c re-introduced a line in _build_child_agent() that
references _saved_tool_names — a variable only defined in _run_single_child().
This caused NameError on every delegate_task call, completely breaking
subagent delegation.
Moves the child._delegate_saved_tool_names assignment to _run_single_child()
where _saved_tool_names is actually defined, keeping the save/restore in the
same scope as the try/finally block.
Adds two regression tests from PR #2038 (YanSte).
Also fixes the same issue reported in PR #2048 (Gutslabs).
Co-authored-by: Yannick Stephan <yannick.stephan@gmail.com>
Co-authored-by: Guts <gutslabs@users.noreply.github.com>
Allow users to configure a custom base_url for the OpenAI TTS provider
in ~/.hermes/config.yaml under tts.openai.base_url. Defaults to the
official OpenAI endpoint. Enables use of self-hosted or OpenAI-compatible
TTS services (e.g. http://localhost:8000/v1).
Also adds a TTS configuration example block to cli-config.yaml.example.
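A minimal sketch of the resolution order (the config access paths are
illustrative):

    def tts_base_url(config: dict) -> str:
        return (config.get("tts", {})
                      .get("openai", {})
                      .get("base_url")
                or "https://api.openai.com/v1")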
Closes #1911
- insights.py: Pre-compute SELECT queries as class constants instead of
f-string interpolation at runtime. _SESSION_COLS is now evaluated once
at class definition time.
- hermes_state.py: Add identifier quoting and whitelist validation for
ALTER TABLE column names in schema migrations.
- Add 4 tests verifying no injection vectors in SQL query construction.
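A minimal sketch of the DDL hardening (table and column names are
illustrative):

    import sqlite3

    ALLOWED_NEW_COLUMNS = {"title", "model_config", "source"}  # whitelist

    def add_column(conn: sqlite3.Connection, column: str, decl: str) -> None:
        if column not in ALLOWED_NEW_COLUMNS:
            raise ValueError(f"unexpected migration column: {column!r}")
        quoted = '"' + column.replace('"', '""') + '"'  # identifier quoting
        # decl must come from migration constants, never from user input.
        conn.execute(f"ALTER TABLE sessions ADD COLUMN {quoted} {decl}")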
* fix: detect context length for custom model endpoints via fuzzy matching + config override
Custom model endpoints (non-OpenRouter, non-known-provider) were silently
falling back to 2M tokens when the model name didn't exactly match what the
endpoint's /v1/models reported. This happened because:
1. Endpoint metadata lookup used exact match only — model name mismatches
(e.g. 'qwen3.5:9b' vs 'Qwen3.5-9B-Q4_K_M.gguf') caused a miss
2. Single-model servers (common for local inference) required exact name
match even though only one model was loaded
3. No user escape hatch to manually set context length
Changes:
- Add fuzzy matching for endpoint model metadata: single-model servers
use the only available model regardless of name; multi-model servers
try substring matching in both directions (sketched below)
- Add model.context_length config override (highest priority) so users
can explicitly set their model's context length in config.yaml
- Log an informative message when falling back to 2M probe, telling
users about the config override option
- Thread config_context_length through ContextCompressor and AIAgent init
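A minimal sketch of the fuzzy lookup (`models` maps ids reported by
/v1/models to metadata dicts; the function name is illustrative):

    def find_model_meta(requested: str, models: dict) -> dict | None:
        if requested in models:                # 1. exact match
            return models[requested]
        if len(models) == 1:                   # 2. single-model server: use it
            return next(iter(models.values()))
        req = requested.lower()
        for model_id, meta in models.items():  # 3. substring, both directions
            mid = model_id.lower()
            if req in mid or mid in req:
                return meta
        return None                            # caller logs the 2M-probe hint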
Tests: 6 new tests covering fuzzy match, single-model fallback, config
override (including zero/None edge cases).
* fix: auto-detect local model name and context length for local servers
Cherry-picked from PR #2043 by sudoingX.
- Auto-detect model name from local server's /v1/models when only one
model is loaded (no manual model name config needed)
- Add n_ctx_train and n_ctx to context length detection keys for llama.cpp
- Query llama.cpp /props endpoint for actual allocated context (not just
training context from GGUF metadata); see the sketch below
- Strip .gguf suffix from display in banner and status bar
- _auto_detect_local_model() in runtime_provider.py for CLI init
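A minimal sketch of the /props query (the response layout,
default_generation_settings.n_ctx, is from recent llama.cpp builds and
should be treated as an assumption):

    import httpx

    def detect_allocated_context(base_url: str) -> int | None:
        try:
            r = httpx.get(f"{base_url.rstrip('/')}/props", timeout=5)
            props = r.json()
        except httpx.HTTPError:
            return None
        # n_ctx is what the server actually allocated; n_ctx_train from the
        # GGUF metadata is only the training context.
        return props.get("default_generation_settings", {}).get("n_ctx")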
Co-authored-by: sudo <sudoingx@users.noreply.github.com>
* fix: revert accidental summary_target_tokens change + add docs for context_length config
- Revert summary_target_tokens from 2500 back to 500 (accidental change
during patching)
- Add 'Context Length Detection' section to Custom & Self-Hosted docs
explaining model.context_length config override
---------
Co-authored-by: Test <test@test.com>
Co-authored-by: sudo <sudoingx@users.noreply.github.com>
The gateway approval system previously intercepted bare 'yes'/'no' text
from the user's next message to approve/deny dangerous commands. This was
fragile and dangerous — if the agent asked a clarifying question and the user
said 'yes' to answer it, the gateway would execute the pending dangerous
command instead. (Fixes #1888)
Changes:
- Remove bare text matching ('yes', 'y', 'approve', 'ok', etc.) from
_handle_message approval check
- Add /approve and /deny as gateway-only slash commands in the command
registry
- /approve supports scoping: /approve (one-time), /approve session,
/approve always (permanent)
- Add 5-minute timeout for stale approvals
- Gateway appends structured instructions to the agent response when a
dangerous command is pending, telling the user exactly how to respond
- 9 tests covering approve, deny, timeout, scoping, and verification
that bare 'yes' no longer triggers execution
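A minimal sketch of the scoped approval (the record shape and messages are
illustrative):

    import time
    from dataclasses import dataclass, field

    APPROVAL_TTL = 300.0  # seconds; stale approvals expire after 5 minutes

    @dataclass
    class PendingApproval:
        command: str
        created: float = field(default_factory=time.time)

    def handle_approve(pending: PendingApproval | None,
                       session_allow: set[str],
                       always_allow: set[str],
                       scope: str = "once") -> str:
        if pending is None or time.time() - pending.created > APPROVAL_TTL:
            return "No pending command (or it expired); run it again."
        if scope == "always":
            always_allow.add(pending.command)   # permanent allowlist
        elif scope == "session":
            session_allow.add(pending.command)  # this session only
        return f"approved: {pending.command}"   # caller executes the command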
Credit to @solo386 and @FlyByNight69420 for identifying and reporting
this security issue in PR #1971 and issue #1888.
Co-authored-by: Test <test@test.com>
After #1675 removed ANTHROPIC_BASE_URL env var support, the Anthropic
provider base URL was hardcoded to https://api.anthropic.com. Now reads
model.base_url from config.yaml as an override, falling back to the
default when not set. Also applies to the auxiliary client.
Cherry-picked from PR #1949 by @rivercrab26.
Co-authored-by: rivercrab26 <rivercrab26@users.noreply.github.com>
Three bugs prevented providers like MiniMax from using their
Anthropic-compatible endpoints (e.g. api.minimax.io/anthropic):
1. _VALID_API_MODES was missing 'anthropic_messages', so explicit
api_mode config was silently rejected and defaulted to
chat_completions.
2. API-key provider resolution hardcoded api_mode to 'chat_completions'
without checking model config or detecting Anthropic-compatible URLs.
3. run_agent.py auto-detection only recognized api.anthropic.com, not
third-party endpoints using the /anthropic URL convention.
Fixes:
- Add 'anthropic_messages' to _VALID_API_MODES
- API-key providers now check model config api_mode and auto-detect
URLs ending in /anthropic
- run_agent.py and fallback logic detect /anthropic URL convention
- 5 new tests covering all scenarios
Users can now either:
- Set MINIMAX_BASE_URL=https://api.minimax.io/anthropic (auto-detected)
- Set api_mode: anthropic_messages in model config (explicit)
- Use custom_providers with api_mode: anthropic_messages
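A minimal sketch of the resolution (the function name and mode handling are
illustrative):

    def resolve_api_mode(configured: str | None, base_url: str | None) -> str:
        if configured == "anthropic_messages":  # explicit config wins
            return configured
        url = (base_url or "").rstrip("/")
        # Auto-detect the official endpoint and the /anthropic URL
        # convention used by third parties like api.minimax.io/anthropic.
        if "api.anthropic.com" in url or url.endswith("/anthropic"):
            return "anthropic_messages"
        return "chat_completions"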
Co-authored-by: Test <test@test.com>
When provider: custom is set in config.yaml with base_url and api_key,
those values are now used instead of falling back to OPENAI_BASE_URL and
OPENAI_API_KEY env vars. Also reads the 'api' field as an alternative to
'api_key' for config compatibility.
Cherry-picked from PR #1762 by crazywriter1.
Co-authored-by: crazywriter1 <53251494+crazywriter1@users.noreply.github.com>
_align_boundary_backward only checked messages[idx-1] to decide if
the compress-end boundary splits a tool_call/result group. When an
assistant issues 3+ parallel tool calls, their results span multiple
consecutive messages. If the boundary fell in the middle of that group,
the parent assistant was summarized away and orphaned tool results were
silently deleted by _sanitize_tool_pairs.
Now walks backward through all consecutive tool results to find the
parent assistant, then pulls the boundary before the entire group.
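A minimal sketch of the walk-back, assuming OpenAI-style message dicts (the
helper name matches this message; the body is illustrative):

    def _align_boundary_backward(messages: list[dict], idx: int) -> int:
        # Walk back over the whole consecutive run of tool results, not just
        # messages[idx-1], so 3+ parallel tool calls stay with their parent.
        i = idx
        while i > 0 and messages[i - 1].get("role") == "tool":
            i -= 1
        if i != idx and i > 0 and messages[i - 1].get("tool_calls"):
            return i - 1  # boundary lands before the parent assistant
        return idx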
6 regression tests added in tests/test_compression_boundary.py.
Co-authored-by: Guts <Gutslabs@users.noreply.github.com>
Previously, if an error occurred during response processing in
_process_message_background (e.g. in extract_media or send, or any
uncaught exception from the handler), it was only logged to the server
console and the user was left with radio silence — the typing indicator
stopped but no message ever arrived.
Now the outer except block attempts to send the error type and detail
(truncated to 300 chars) to the user's chat, matching the format
already used by the inner handler in gateway/run.py.
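A minimal sketch of the outer handler (handler and send_to_chat are
illustrative callables):

    import logging

    async def _process_message_background(msg, handler, send_to_chat):
        try:
            await handler(msg)
        except Exception as exc:
            logging.exception("response processing failed")
            detail = str(exc)[:300]  # truncated to 300 chars for chat
            try:
                await send_to_chat(msg.chat_id, f"{type(exc).__name__}: {detail}")
            except Exception:
                logging.exception("could not deliver the error to chat")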
Co-authored-by: Test <test@test.com>
The whatsapp reply_prefix bridging referenced config.platforms before
the config object was constructed, raising a NameError that was silently
swallowed by except Exception: pass.
Fix: fold reply_prefix into the per-platform bridging loop (introduced
in #1919) which correctly writes to gw_data dict pre-construction.
Removes the broken standalone whatsapp bridging block.
Co-authored-by: Test <test@test.com>
Adds model name and provider to the system prompt metadata block,
alongside the existing session ID and timestamp. These are frozen
at session start and don't change mid-conversation, so they won't
break prompt caching.
Update all SOUL.md documentation to reflect that it now occupies
slot #1 in the system prompt, replacing the hardcoded default identity.
Updated pages:
- user-guide/features/personality.md — SOUL.md is primary identity, not just a layer
- developer-guide/prompt-assembly.md — updated prompt layer order, context files list
- guides/use-soul-with-hermes.md — SOUL.md replaces built-in identity
- user-guide/configuration.md — updated context files table and directory tree
Co-authored-by: Test <test@test.com>
SOUL.md now loads in slot #1 of the system prompt, replacing the
hardcoded DEFAULT_AGENT_IDENTITY. This lets users fully customize
the agent's identity and personality by editing ~/.hermes/SOUL.md
without it conflicting with the built-in identity text.
When SOUL.md is loaded as identity, it's excluded from the context
files section to avoid appearing twice. When SOUL.md is missing,
empty, unreadable, or skip_context_files is set, the hardcoded
DEFAULT_AGENT_IDENTITY is used as a fallback.
The default SOUL.md (seeded on first run) already contains the full
Hermes personality, so existing installs are unaffected.
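A minimal sketch of the slot #1 selection (names are illustrative):

    from pathlib import Path

    def load_identity(soul_path: Path, skip_context_files: bool,
                      default_identity: str) -> str:
        if skip_context_files:
            return default_identity
        try:
            text = soul_path.read_text().strip()
        except OSError:  # missing or unreadable
            return default_identity
        return text or default_identity  # empty file -> hardcoded fallback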
Co-authored-by: Test <test@test.com>
Adds the Hugging Face CLI (hf) reference as a built-in skill under
mlops/. Covers downloading/uploading models and datasets, repo
management, SQL queries on datasets, inference endpoints, Spaces,
buckets, and more.
Based on the official HF skill from huggingface/skills.
Add unauthorized_dm_behavior config (pair|ignore) with global default
and per-platform override. WhatsApp can silently drop unknown DMs
instead of sending pairing codes.
Adapted config bridging to work with gw_data dict (pre-construction)
rather than config object. Dropped implementation plan document.
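A minimal sketch of the per-platform override (the gw_data layout is
illustrative):

    def dm_behavior(gw_data: dict, platform: str) -> str:
        # 'pair' sends a pairing code; 'ignore' silently drops the DM.
        plat = gw_data.get("platforms", {}).get(platform, {})
        return plat.get("unauthorized_dm_behavior",
                        gw_data.get("unauthorized_dm_behavior", "pair"))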
Co-authored-by: Frederico Ribeiro <fr@tecompanytea.com>
The previous copilot_model_api_mode() checked the catalog's
supported_endpoints first and picked /chat/completions when a model
supported both endpoints. This is wrong — GPT-5+ models should use
the Responses API even when the catalog lists both.
Replicate opencode's shouldUseCopilotResponsesApi() logic:
- GPT-5+ models (gpt-5.4, gpt-5.3-codex, etc.) → Responses API
- gpt-5-mini → Chat Completions (explicit exception)
- Everything else (gpt-4o, claude, gemini, etc.) → Chat Completions
- Model ID pattern is the primary signal, catalog is secondary
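A minimal sketch of that rule (the regex is illustrative of the pattern
described above, not the exact implementation):

    import re

    def should_use_responses_api(model_id: str) -> bool:
        mid = model_id.lower()
        if mid.startswith("gpt-5-mini"):
            return False  # explicit exception
        # GPT-5+ (gpt-5, gpt-5.4, gpt-5.3-codex, ...) -> Responses API
        return re.match(r"gpt-5(\b|[.\-])", mid) is not None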
The catalog fallback now only matters for non-GPT-5 models that might
exclusively support /v1/messages (e.g. Claude via Copilot).
Models are auto-detected from the live catalog at
api.githubcopilot.com/models — no hardcoded list required for
supported models, only a static fallback for when the API is
unreachable.
Adds /statusbar (alias /sb) to show/hide the bottom status bar that
displays model name, context usage, and session duration.
Uses ConditionalContainer so the bar takes zero space when hidden
rather than leaving a blank line.
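A minimal sketch with prompt_toolkit (the status text and flag handling are
illustrative):

    from prompt_toolkit.filters import Condition
    from prompt_toolkit.layout import ConditionalContainer, Window
    from prompt_toolkit.layout.controls import FormattedTextControl

    statusbar_visible = True  # flipped by /statusbar (alias /sb)

    status_bar = ConditionalContainer(
        content=Window(
            FormattedTextControl(lambda: " model | ctx 42% | 00:12:34 "),
            height=1,
        ),
        # When the filter is False the container takes zero rows instead
        # of leaving a blank line.
        filter=Condition(lambda: statusbar_visible),
    )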