Implements mTLS for securing agent-to-agent communication in the Hermes
fleet. Fixes #806.
Changes:
- scripts/gen_fleet_ca.sh: generate a self-signed Fleet CA (4096-bit RSA,
10-year validity) that signs all agent certificates
- scripts/gen_agent_cert.sh: generate per-agent certs (Timmy, Allegro,
Ezra) signed by the fleet CA with SAN entries and clientAuth/serverAuth
extended key usage
- agent/mtls.py: new module providing:
- build_server_ssl_context() — TLS_SERVER context with CERT_REQUIRED,
enforces client cert against Fleet CA
- build_client_ssl_context() — TLS_CLIENT context for outbound A2A calls
- MTLSMiddleware — ASGI middleware that rejects unauthenticated requests
to A2A routes (/.well-known/agent-card*, /api/agent-card, /a2a/) with
HTTP 403 when mTLS is enabled
- is_mtls_configured() — checks HERMES_MTLS_CERT/KEY/CA env vars
- hermes_cli/web_server.py: wire MTLSMiddleware into the FastAPI app;
pass SSL context to uvicorn when HERMES_MTLS_* env vars are set so
the server runs TLS with mandatory client cert verification
- ansible/roles/hermes_mtls/: Ansible role to distribute Fleet CA cert,
agent cert, and agent key to fleet nodes; writes an env file with
HERMES_MTLS_* vars and restarts the hermes-gateway service
- ansible/fleet_mtls.yml: fleet-wide playbook referencing the role for
Timmy, Allegro, and Ezra nodes
- tests/test_mtls.py: 15 tests covering is_mtls_configured, SSL context
creation with real cryptography-generated certs, and MTLSMiddleware
(unauthorized agent rejected → 403, authorized agent accepted → 200)
mTLS is opt-in: set HERMES_MTLS_CERT, HERMES_MTLS_KEY, and HERMES_MTLS_CA
to enable. When unset, the server behaves exactly as before.
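A minimal sketch of the server-side context, assuming stdlib ssl and the env
vars above (the real code in agent/mtls.py may differ in detail):

    import os
    import ssl

    def build_server_ssl_context() -> ssl.SSLContext:
        # TLS server that only accepts clients presenting a cert signed
        # by the Fleet CA.
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(certfile=os.environ["HERMES_MTLS_CERT"],
                            keyfile=os.environ["HERMES_MTLS_KEY"])
        ctx.load_verify_locations(cafile=os.environ["HERMES_MTLS_CA"])
        ctx.verify_mode = ssl.CERT_REQUIRED  # mandatory client cert verification
        return ctx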
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Closes #885
A 2.33x error cascade factor was detected: after 3 consecutive errors, the
circuit opens and the agent must take corrective action.
Recovery pattern: the terminal is the safety net (2300 recoveries).
Error rate peaks at 18:00 (9.4%) during evening cron batches vs 4.0%
at 09:00 during interactive work. Route cron tasks to stronger models
during off-hours, when no user is present to correct errors.
New agent/time_aware_routing.py:
- resolve_time_aware_model(): routes based on hour, error rate, task type
- Interactive sessions: always use base model (user corrects errors)
- Cron during business hours: use base model (low error rate)
- Cron during off-hours with high error rate (>6%): upgrade to strong model
- get_hour_error_rate(): error rates by hour from empirical audit
- is_off_hours(): 18:00-05:59 = off-hours
- RoutingDecision: model, provider, reason, hour, error_rate
- get_routing_report(): 24h forecast of routing decisions
Config via env vars:
- CRON_STRONG_MODEL (default: xiaomi/mimo-v2-pro)
- CRON_CHEAP_MODEL (default: qwen2.5:7b)
- CRON_ERROR_THRESHOLD (default: 6.0%)
Tests: tests/test_time_aware_routing.py (9 tests)
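A minimal sketch of the routing logic described above, assuming these
signatures (the real module returns a RoutingDecision):

    import os

    _ERROR_RATE_BY_HOUR = {9: 4.0, 18: 9.4}  # two audited points; rest hypothetical

    def get_hour_error_rate(hour: int) -> float:
        return _ERROR_RATE_BY_HOUR.get(hour, 5.0)

    def is_off_hours(hour: int) -> bool:
        return hour >= 18 or hour < 6  # 18:00-05:59

    def resolve_time_aware_model(base_model: str, hour: int, interactive: bool) -> str:
        if interactive:
            return base_model  # user is present to correct errors
        threshold = float(os.environ.get("CRON_ERROR_THRESHOLD", "6.0"))
        if is_off_hours(hour) and get_hour_error_rate(hour) > threshold:
            return os.environ.get("CRON_STRONG_MODEL", "xiaomi/mimo-v2-pro")
        return base_model  # cron during business hours: low error rate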
Closes #889
agent/crisis_resources.py provides all 988 Lifeline contact
methods: phone (988), text (HOME to 988), chat, Spanish line.
Also Crisis Text Line (741741) and 911.
Closes #673
Tool schema descriptions and tool return values contained hardcoded
~/.hermes paths that the model sees and uses. When HERMES_HOME is set
to a custom path (Docker containers, profiles), the agent would still
reference ~/.hermes — looking at the wrong directory.
Fixes 6 locations across 5 files:
- tools/tts_tool.py: output_path schema description
- tools/cronjob_tools.py: script path schema description
- tools/skill_manager_tool.py: skill_manage schema description
- tools/skills_tool.py: two tool return messages
- agent/skill_commands.py: skill config injection text
All now use display_hermes_home() which resolves to the actual
HERMES_HOME path (e.g. /opt/data for Docker, ~/.hermes/profiles/X
for profiles, ~/.hermes for default).
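A sketch of the assumed resolution behavior, matching the examples above:

    import os
    from pathlib import Path

    def display_hermes_home() -> str:
        home = Path(os.environ.get("HERMES_HOME", str(Path.home() / ".hermes")))
        try:
            return "~/" + home.relative_to(Path.home()).as_posix()
        except ValueError:
            return str(home)  # outside $HOME, e.g. /opt/data in Docker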
Reported by: Sandeep Narahari (PrithviDevs)
Four independent fixes:
1. Reset activity timestamp on cached agent reuse (#9051)
When the gateway reuses a cached AIAgent for a new turn, the
_last_activity_ts from the previous turn (possibly hours ago)
carried over. The inactivity timeout handler immediately saw
the agent as idle for hours and killed it.
Fix: reset _last_activity_ts, _last_activity_desc, and
_api_call_count when retrieving an agent from the cache.
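Sketch of the reset on cache hit (attribute names from this fix; the clock
and description string are assumptions):

    import time

    def _on_cache_hit(agent) -> None:
        # New turn: don't inherit the previous turn's idle clock.
        agent._last_activity_ts = time.monotonic()
        agent._last_activity_desc = "resumed from cache"
        agent._api_call_count = 0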
2. Detect uv-managed virtual environments (#8620 sub-issue 1)
The systemd unit generator fell back to sys.executable (uv's
standalone Python) when running under 'uv run', because
sys.prefix == sys.base_prefix. The generated ExecStart pointed
to a Python binary without site-packages.
Fix: check VIRTUAL_ENV env var before falling back to
sys.executable. uv sets VIRTUAL_ENV even when sys.prefix
doesn't reflect the venv.
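Sketch of the resolution order (helper name hypothetical):

    import os
    import sys

    def _resolve_service_python() -> str:
        # uv exports VIRTUAL_ENV even when sys.prefix == sys.base_prefix,
        # so trust it before falling back to sys.executable.
        venv = os.environ.get("VIRTUAL_ENV")
        if venv:
            return os.path.join(venv, "bin", "python")
        return sys.executable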
3. Nudge model to continue after empty post-tool response (#9400)
Weaker models sometimes return empty after tool calls. The agent
silently abandoned the remaining work.
Fix: append assistant('(empty)') + user nudge message and retry
once. Resets after each successful tool round.
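Roughly (the nudge wording here is illustrative):

    def _nudge_messages(messages: list[dict]) -> list[dict]:
        # One retry: record the empty turn, then ask the model to continue.
        return messages + [
            {"role": "assistant", "content": "(empty)"},
            {"role": "user", "content": "Your last reply was empty; continue the remaining work."},
        ]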
4. Compression model fallback on permanent errors (#8620 sub-issue 4)
When the default summary model (gemini-3-flash) returns 503
'model_not_found' on custom proxies, the compressor entered a
600s cooldown, leaving context growing unbounded.
Fix: detect permanent model-not-found errors (503, 404,
'model_not_found', 'no available channel') and fall back to
using the main model for compression instead of entering
cooldown. One-time fallback with immediate retry.
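The detection predicate, roughly (markers quoted from this fix; combining
status and message this way is an assumption):

    _PERMANENT_MARKERS = ("model_not_found", "no available channel")

    def _is_permanent_model_error(status: int, message: str) -> bool:
        msg = message.lower()
        return status in (404, 503) and any(m in msg for m in _PERMANENT_MARKERS)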
Test plan: 40 compressor tests + 97 gateway/CLI tests + 9 venv tests pass
Add 'xai', 'x-ai', 'x.ai', 'grok' to _PROVIDER_PREFIXES so that
colon-prefixed model names (e.g. xai:grok-4.20) are stripped correctly
for context length lookups.
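Sketch of the normalization (only the xAI aliases are new; the full prefix
set is larger):

    _PROVIDER_PREFIXES = {"xai", "x-ai", "x.ai", "grok"}  # new entries shown

    def strip_provider_prefix(model: str) -> str:
        # 'xai:grok-4.20' -> 'grok-4.20' for context length lookups
        prefix, sep, rest = model.partition(":")
        if sep and prefix.lower() in _PROVIDER_PREFIXES:
            return rest
        return model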
Cherry-picked from PR #9184 by @Julientalbot.
- Add glm-5v-turbo to OpenRouter, Nous, and native Z.AI model lists
- Add glm-5v context length entry (200K tokens) to model metadata
- Update Z.AI endpoint probe to try multiple candidate models per
endpoint (glm-5.1, glm-5v-turbo, glm-4.7) — fixes detection for
newer coding plan accounts that lack older models
- Add zai to _PROVIDER_VISION_MODELS so auxiliary vision tasks
(vision_analyze, browser screenshots) route through 5v
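Sketch of the probe change (probe_model() stands in for the real
single-model check):

    _ZAI_CANDIDATES = ("glm-5.1", "glm-5v-turbo", "glm-4.7")

    def probe_model(endpoint: str, model: str) -> bool:
        ...  # real probe issues a minimal request against the endpoint

    def detect_zai_endpoint(endpoint: str) -> bool:
        # Newer coding-plan accounts lack older models, so try each candidate.
        return any(probe_model(endpoint, m) for m in _ZAI_CANDIDATES)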
Fixes #9888
Seed qwen-oauth credentials from resolve_qwen_runtime_credentials() in
_seed_from_singletons(). Users who authenticate via 'qwen auth qwen-oauth'
store tokens in ~/.qwen/oauth_creds.json, which the runtime resolver reads
but the credential pool couldn't detect (the same gap pattern as copilot).
Uses refresh_if_expiring=False to avoid network calls during discovery.
Seed copilot credentials from resolve_copilot_token() in the credential
pool's _seed_from_singletons(), alongside the existing anthropic and
openai-codex seeding logic. This makes copilot appear in the /model
provider picker when the user authenticates solely through gh auth token.
Cherry-picked from PR #9767 by Marvae.
Add ctx.register_skill() API so plugins can ship SKILL.md files under
a 'plugin:skill' namespace, preventing name collisions with built-in
Hermes skills. skill_view() detects the ':' separator and routes to
the plugin registry while bare names continue through the existing
flat-tree scan unchanged.
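The dispatch split, roughly:

    def parse_qualified_name(name: str) -> tuple[str | None, str]:
        # 'myplugin:myskill' -> ('myplugin', 'myskill');
        # bare 'myskill' -> (None, 'myskill') and stays on the flat-tree path.
        namespace, sep, skill = name.partition(":")
        return (namespace, skill) if sep else (None, name)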
Key additions:
- agent/skill_utils: parse_qualified_name(), is_valid_namespace()
- hermes_cli/plugins: PluginContext.register_skill(), PluginManager
skill registry (find/list/remove)
- tools/skills_tool: qualified name dispatch in skill_view(),
_serve_plugin_skill() with full guards (disabled, platform,
injection scan), bundle context banner with sibling listing,
stale registry self-heal
- Hoisted _INJECTION_PATTERNS to module level (dedup)
- Updated skill_view schema description
Based on PR #9334 by N0nb0at. Lean P1 salvage — omits autogen shim
(P2) for a simpler first merge.
Closes #8422
- Rename platform from 'qq' to 'qqbot' across all integration points
(Platform enum, toolset, config keys, import paths, file rename qq.py → qqbot.py)
- Add PLATFORM_HINTS for QQBot in prompt_builder (QQ supports markdown)
- Set SUPPORTS_MESSAGE_EDITING = False to skip streaming on QQ
(prevents duplicate messages from non-editable partial + final sends)
- Add _send_qqbot() standalone send function for cron/send_message tool
- Add interactive _setup_qq() wizard in hermes_cli/setup.py
- Restore missing _setup_signal/email/sms/dingtalk/feishu/wecom/wecom_callback
functions that were lost during the original merge
* Add hermes debug share instructions to all issue templates
- bug_report.yml: Add required Debug Report section with hermes debug share
and /debug instructions, make OS/Python/Hermes version optional (covered
by debug report), demote old logs field to optional supplementary
- setup_help.yml: Replace hermes doctor reference with hermes debug share,
add Debug Report section with fallback chain (debug share -> --local -> doctor)
- feature_request.yml: Add optional Debug Report section for environment context
All templates now guide users to run hermes debug share (or /debug in chat)
and paste the resulting paste.rs links, giving maintainers system info,
config, and recent logs in one step.
* feat: add openrouter/elephant-alpha to curated model lists
- Add to OPENROUTER_MODELS (free, positioned above GPT models)
- Add to _PROVIDER_MODELS["nous"] mirror list
- Add 256K context window fallback in model_metadata.py
The generic 'gpt-5' fallback was set to 128,000 — which is the max
OUTPUT tokens, not the context window. GPT-5 base and most variants
(codex, mini) have 400,000 context. This caused /model to report
128k for models like gpt-5.3-codex when models.dev was unavailable.
Added specific entries for GPT-5 variants with different context sizes:
- gpt-5.4, gpt-5.4-pro: 1,050,000 (1.05M)
- gpt-5.4-mini, gpt-5.4-nano: 400,000
- gpt-5.3-codex-spark: 128,000 (reduced)
- gpt-5.1-chat: 128,000 (chat variant)
- gpt-5 (catch-all): 400,000
Sources: https://developers.openai.com/api/docs/models
Port two improvements inspired by Kilo-Org/kilocode analysis:
1. Error classifier: add context overflow patterns for vLLM, Ollama,
and llama.cpp/llama-server. These local inference servers return
different error formats than cloud providers (e.g., 'exceeds the
max_model_len', 'context length exceeded', 'slot context'). Without
these patterns, context overflow errors from local servers are
misclassified as format errors, causing infinite retries instead
of triggering compression.
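Sketch of the new patterns (phrases quoted above; the classifier plumbing
is hypothetical):

    import re

    _LOCAL_OVERFLOW_PATTERNS = tuple(re.compile(p) for p in (
        r"exceeds the max_model_len",  # vLLM
        r"context length exceeded",    # Ollama
        r"slot context",               # llama.cpp / llama-server
    ))

    def is_local_context_overflow(message: str) -> bool:
        # Route to compression instead of the format-error retry path.
        return any(p.search(message) for p in _LOCAL_OVERFLOW_PATTERNS)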
2. MCP initial connection retry: previously, if the very first
connection attempt to an MCP server failed (e.g., transient DNS
blip at startup), the server was permanently marked as failed with
no retry. Post-connect reconnection had 5 retries with exponential
backoff, but initial connection had zero. Now initial connections
retry up to 3 times with backoff before giving up, matching the
resilience of post-connect reconnection.
(Inspired by Kilo Code's MCP server disappearing fix in v1.3.3)
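Sketch of the initial-connection retry (delays are illustrative):

    import time

    def connect_with_retry(connect, attempts: int = 3, base_delay: float = 1.0):
        # Initial MCP connections now get backoff, like post-connect reconnects.
        for attempt in range(1, attempts + 1):
            try:
                return connect()
            except Exception:  # e.g. transient DNS blip at startup
                if attempt == attempts:
                    raise
                time.sleep(base_delay * 2 ** (attempt - 1))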
Tests: 6 new error classifier tests, 4 new MCP retry tests, 1
updated existing test. All 276 affected tests pass.
Adds Arcee AI as a standard direct provider (ARCEEAI_API_KEY) with
Trinity models: trinity-large-thinking, trinity-large-preview, trinity-mini.
Standard OpenAI-compatible provider checklist: auth.py, config.py,
models.py, main.py, providers.py, doctor.py, model_normalize.py,
model_metadata.py, setup.py, trajectory_compressor.py.
Based on PR #9274 by arthurbr11, simplified to a standard direct
provider without dual-endpoint OpenRouter routing.
- Use isinstance() with try/except import for CopilotACPClient check
in _to_async_client instead of fragile __class__.__name__ string check
- Restore accurate comment: GPT-5.x models *require* (not 'often require')
the Responses API on OpenAI/OpenRouter; ACP is the exception, not a
softening of the requirement
- Add inline comment explaining the ACP exclusion rationale
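The check, roughly (the import path is an assumption):

    def _is_copilot_acp(client) -> bool:
        try:
            from copilot_acp import CopilotACPClient  # assumed import path
        except ImportError:
            return False  # ACP extra not installed: can't be an ACP client
        return isinstance(client, CopilotACPClient)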
Cherry-picked from PR #7637 by hcshen0111.
Adds kimi-coding-cn provider with dedicated KIMI_CN_API_KEY env var
and api.moonshot.cn/v1 endpoint for China-region Moonshot users.
The v11→v12 migration converts custom_providers (list) into providers
(dict), then deletes the list. But all runtime resolvers read from
custom_providers — after migration, named custom endpoints silently stop
resolving and fallback chains fail with AuthError.
Add get_compatible_custom_providers() that reads from both config schemas
(legacy custom_providers list + v12+ providers dict), normalizes entries,
deduplicates, and returns a unified list. Update ALL consumers:
- hermes_cli/runtime_provider.py: _get_named_custom_provider() + key_env
- hermes_cli/auth_commands.py: credential pool provider names
- hermes_cli/main.py: model picker + _model_flow_named_custom()
- agent/auxiliary_client.py: key_env + custom_entry model fallback
- agent/credential_pool.py: _iter_custom_providers()
- cli.py + gateway/run.py: /model switch custom_providers passthrough
- run_agent.py + gateway/run.py: per-model context_length lookup
Also: use config.pop() instead of del for safer migration, fix stale
_config_version assertions in tests, add pool mock to codex test.
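Sketch of the unified read (field names and conflict precedence are
assumptions):

    def get_compatible_custom_providers(config: dict) -> list[dict]:
        merged: dict[str, dict] = {}
        for entry in config.get("custom_providers", []):  # legacy (pre-v12) list
            merged.setdefault(entry["name"], entry)
        for name, entry in (config.get("providers") or {}).items():  # v12+ dict
            merged.setdefault(name, {"name": name, **entry})
        return list(merged.values())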
Co-authored-by: 墨綠BG <s5460703@gmail.com>
Closes #8776, salvaged from PR #8814
resolve_vision_provider_client() computed resolved_api_mode from config
but never passed it to downstream resolve_provider_client() or
_get_cached_client() calls, causing custom providers with
api_mode: anthropic_messages to crash when used for vision tasks.
Also remove the for_vision special case in _normalize_aux_provider()
that incorrectly discarded named custom provider identifiers.
Fixes #8857
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Remove the backward-compat code paths that read compression provider/model
settings from legacy config keys and env vars, which caused silent failures
when auto-detection resolved to incompatible backends.
What changed:
- Remove compression.summary_model, summary_provider, summary_base_url from
DEFAULT_CONFIG and cli.py defaults
- Remove backward-compat block in _resolve_task_provider_model() that read
from the legacy compression section
- Remove _get_auxiliary_provider() and _get_auxiliary_env_override() helper
functions (AUXILIARY_*/CONTEXT_* env var readers)
- Remove env var fallback chain for per-task overrides
- Update hermes config show to read from auxiliary.compression
- Add config migration (v16→17) that moves non-empty legacy values to
auxiliary.compression and strips the old keys
- Update example config and openclaw migration script
- Remove/update tests for deleted code paths
Compression model/provider is now configured exclusively via:
auxiliary.compression.provider / auxiliary.compression.model
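Sketch of the migration step (config shape assumed):

    def _migrate_v16_to_v17(config: dict) -> dict:
        legacy = config.get("compression", {})
        aux = config.setdefault("auxiliary", {}).setdefault("compression", {})
        for old, new in (("summary_provider", "provider"), ("summary_model", "model")):
            value = legacy.pop(old, None)  # pop(): absent keys are fine
            if value:
                aux.setdefault(new, value)
        legacy.pop("summary_base_url", None)  # no equivalent in the new schema
        config["_config_version"] = 17
        return config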
Closes #8923
_query_local_context_length checked model_info.context_length
(the GGUF training max) before num_ctx (the Modelfile runtime override),
the inverse of query_ollama_num_ctx's order. The two helpers therefore
disagreed on the same model:
hermes-brain:qwen3-14b-ctx32k # Modelfile: num_ctx 32768
underlying qwen3:14b GGUF # qwen3.context_length: 40960
query_ollama_num_ctx correctly returned 32768 (the value Ollama will
actually allocate KV cache for). _query_local_context_length returned
40960, which let ContextCompressor grow conversations past 32768 before
triggering compression — at which point Ollama silently truncated the
prefix, corrupting context.
Swap the order so num_ctx is checked first, matching query_ollama_num_ctx.
Adds a parametrized test that seeds both values and asserts num_ctx wins.
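The corrected precedence, roughly (where the two values are read from is
simplified):

    def _query_local_context_length(model_info: dict) -> int | None:
        # num_ctx (Modelfile runtime override) now wins over the GGUF
        # training max, matching query_ollama_num_ctx.
        num_ctx = model_info.get("num_ctx")
        if num_ctx:
            return num_ctx
        return model_info.get("context_length")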
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
auxiliary_client.py had its own regex mirroring _strip_think_blocks
but was missing the <thought> variant. Also adds test coverage for
<thought> paired and orphaned tags.
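Sketch of the shared pattern with the <thought> variant (the other
alternates and the orphan-tag handling are assumptions):

    import re

    _THINK_BLOCK_RE = re.compile(
        r"<(think|thinking|thought)>.*?</\1>", re.DOTALL | re.IGNORECASE
    )

    def _strip_think_blocks(text: str) -> str:
        # Paired blocks only; orphaned tags are stripped separately.
        return _THINK_BLOCK_RE.sub("", text).strip()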