Compare commits


554 Commits

Author SHA1 Message Date
Alexander Whitestone
77be46d9c0 fix: tune rerank scoring for temporal recall (#1011)
All checks were successful
Lint / lint (pull_request) Successful in 9s
- use temporal lane support during rerank
- prefer explicit correction facts when fused scores are close
- preserve top-1 improvements on the holographic prompt matrix benchmark
2026-04-22 11:10:41 -04:00
Alexander Whitestone
9de2e87aaa wip: implement multi-path holographic recall pipeline (#1011)
- add lexical, semantic, graph, and temporal retrieval lanes with RRF fusion
- store retrieval traces on fused searches and expose them through the provider
- add benchmark helpers for prompt-matrix before/after evaluation
2026-04-22 11:07:33 -04:00
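The multi-lane recall pipeline above fuses ranked lists with Reciprocal Rank Fusion and keeps a per-document trace. A minimal sketch of that fusion step — the lane names and the conventional k=60 constant are illustrative assumptions, not taken from the repository:

```python
def rrf_fuse(lanes, k=60):
    """Reciprocal Rank Fusion: score(d) = sum over lanes of 1 / (k + rank).

    `lanes` maps a lane name (e.g. "lexical", "semantic", "graph",
    "temporal") to an ordered list of document ids, best first.
    Returns (doc_id, score) pairs sorted by fused score, plus a trace
    of which lane ranked each document where.
    """
    scores, trace = {}, {}
    for lane, ranking in lanes.items():
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
            trace.setdefault(doc, {})[lane] = rank
    fused = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return fused, trace
```

A document surfaced by several lanes outranks one that any single lane scores highly, which is why the follow-up commit can break near-ties with explicit correction facts.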
Alexander Whitestone
3273f469b7 wip: add failing holographic multi-path recall tests (#1011)
- add prompt matrix fixture for semantic, graph, and temporal recall
- add failing tests for lane traces, benchmark report, and provider trace access
2026-04-22 10:57:49 -04:00
d574690abe Merge pull request 'feat: The Sovereign Accountant — Agent Telemetry' (#1009) from feat/sovereign-accountant-agent-1776866068545 into main
All checks were successful
Lint / lint (pull_request) Successful in 19s
Lint / lint (push) Successful in 28s
2026-04-22 13:55:16 +00:00
e208885de6 feat: wire telemetry hooks into auxiliary client
All checks were successful
Lint / lint (pull_request) Successful in 10s
2026-04-22 13:54:32 +00:00
cd84fa2084 feat: add telemetry logger for token accounting
2026-04-22 13:54:30 +00:00
63babca056 Merge pull request 'docs: poka-yoke integration phase 3 status (#967)' (#976) from fix/967 into main
All checks were successful
Lint / lint (push) Successful in 11s
2026-04-22 13:39:43 +00:00
cab3c82c5c Merge pull request '[claude] Add update/restart action buttons to web dashboard (#961)' (#968) from claude/issue-961 into main
Some checks failed
Lint / lint (push) Has been cancelled
2026-04-22 13:39:36 +00:00
64a8059f9f Merge pull request '[claude] Verify hardcoded-home path guard on burn/921 branch (#962)' (#964) from claude/issue-962 into main
Some checks failed
Lint / lint (push) Has been cancelled
2026-04-22 13:39:32 +00:00
90f6fdef60 Merge pull request 'feat: Autonomous Regression Sentry — verify_impact tool' (#970) from feat/impact-analysis-tool-1776826592325 into main
All checks were successful
Lint / lint (push) Successful in 11s
2026-04-22 13:38:47 +00:00
18e3533a0a Merge pull request 'feat: The Budgetary Sovereign Router — Efficiency Sauce' (#1008) from feat/budgetary-router-1776864510362 into main
Some checks failed
Lint / lint (push) Has been cancelled
2026-04-22 13:38:40 +00:00
60ccd825ec Merge pull request 'feat: The Sovereign Teleport — State Migration Sauce' (#1007) from feat/sovereign-teleport-1776864503956 into main
Some checks failed
Lint / lint (push) Has been cancelled
2026-04-22 13:38:36 +00:00
e7d5a7f2cf Merge pull request 'feat: The Scavenger Fixer — Closing the Autonomous Loop' (#975) from feat/autonomous-scavenger-fix-1776827712502 into main
All checks were successful
Lint / lint (push) Successful in 13s
2026-04-22 13:38:03 +00:00
9aaac192cf Merge pull request 'test(#798): Parallel tool calling — 2+ tools per response' (#988) from fix/798 into main
All checks were successful
Lint / lint (push) Successful in 9s
2026-04-22 13:36:37 +00:00
f3d88ec31d Merge pull request '[claude] Wire Gemma 4 vision into browser_tool for screenshot analysis (#816)' (#947) from claude/issue-816 into main
All checks were successful
Lint / lint (push) Successful in 13s
2026-04-22 13:36:20 +00:00
2f22570622 Merge pull request 'feat(web-console): Self-healing browser CDP + operator cockpit (#394)' (#934) from feat/web-console-394 into main
Some checks failed
Lint / lint (push) Has been cancelled
2026-04-22 13:36:14 +00:00
2022322606 Merge pull request 'feat: Deep Dive Security Integration - Multilayer Defense' (#929) from feat/security-deep-dive-1776732106631 into main
Some checks failed
Lint / lint (push) Has been cancelled
2026-04-22 13:36:08 +00:00
d6ec32fe93 Merge pull request 'feat: implement SHIELD Multilingual Defense & Input Sanitization' (#918) from feat/shield-multilingual-1776700482647 into main
Some checks failed
Lint / lint (push) Has been cancelled
2026-04-22 13:36:05 +00:00
2b284e75f6 Merge pull request 'feat: Multi-Agent Concurrency Guard — "Secret Sauce" for Fleet Scaling' (#969) from feat/fleet-concurrency-guard-1776826501792 into main
All checks were successful
Lint / lint (push) Successful in 16s
2026-04-22 13:29:01 +00:00
efa1fc034e feat: Budgetary Sovereign Router — Complexity-aware steering
All checks were successful
Lint / lint (pull_request) Successful in 25s
2026-04-22 13:28:31 +00:00
99d925d40b feat: Sovereign Teleport — Cross-environment agent migration
All checks were successful
Lint / lint (pull_request) Successful in 28s
2026-04-22 13:28:25 +00:00
Alexander Whitestone
ed250b1ca8 test(#798): Strengthen parallel tool calling tests + fix flaky concurrent tests
All checks were successful
Lint / lint (pull_request) Successful in 10s
- Add TestAIAgentConcurrentExecution with 8 integration tests exercising
  _execute_tool_calls_concurrent through AIAgent for 2/3/4-tool batches,
  pass-rate reporting, and Gemma 4-style read patterns.
- Fix test_malformed_json_args_forces_sequential: use JSON array '[1,2,3]'
  instead of unrepairable garbage now that repair_and_load_json handles
  most malformed input.
- Fix test_concurrent_handles_tool_error: replace racy call_count list
  with deterministic failure based on tool_call_id to eliminate flaky
  failures under ThreadPoolExecutor.

Closes #798
2026-04-22 01:34:24 -04:00
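The concurrent-execution path these tests exercise can be sketched as a thread-pool batch that captures per-call errors instead of raising, so one failing tool never aborts its siblings. The function and dict shapes here are assumptions for illustration, not the repository's actual `_execute_tool_calls_concurrent` signature:

```python
from concurrent.futures import ThreadPoolExecutor

def execute_tool_calls_concurrent(tool_calls, run_tool, max_workers=4):
    """Run a batch of tool calls in parallel, preserving input order.

    `tool_calls` is a list of dicts with at least an "id" key; `run_tool`
    executes one call. Results carry ok/error per call so the caller can
    report a pass rate for the whole batch.
    """
    def safe(call):
        try:
            return {"id": call["id"], "ok": True, "result": run_tool(call)}
        except Exception as exc:
            return {"id": call["id"], "ok": False, "error": str(exc)}

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map preserves input order even though execution is parallel,
        # which keeps results aligned with the model's tool_call ids.
        return list(pool.map(safe, tool_calls))
```

Keying a test double's failure on `call["id"]` (as the flaky-test fix above does) is deterministic under any thread interleaving, unlike a shared call counter.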
Alexander Whitestone
1f5067e94a Merge: bring in prior QA work on path guard (Refs #962)
All checks were successful
Lint / lint (pull_request) Successful in 15s
2026-04-22 00:25:50 -04:00
Alexander Whitestone
798ca3aa06 chore: sync with remote claude/issue-961 branch
All checks were successful
Lint / lint (pull_request) Successful in 22s
Refs #961

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-22 00:04:51 -04:00
Alexander Whitestone
5d3e13ede2 test: add pre-commit path guard hook from burn/921 (Refs #962)
All checks were successful
Lint / lint (pull_request) Successful in 24s
Brings hooks/pre-commit-path-guard.py from burn/921-poka-yoke-hardcoded-paths
to complete QA verification of all guard layers.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-21 23:55:38 -04:00
82a076bf4d docs: poka-yoke integration phase 3 status (#967)
All checks were successful
Lint / lint (pull_request) Successful in 8s
2026-04-22 03:24:26 +00:00
16eab5d503 Merge pull request '[claude] A2A auth — mutual TLS between fleet agents (#806)' (#948) from claude/issue-806 into main
All checks were successful
Lint / lint (push) Successful in 13s
Merge PR #948: A2A auth — mutual TLS between fleet agents (#806)
2026-04-22 03:19:42 +00:00
81f7347bcb feat: Scavenger Fixer — Autonomous tech debt healing
All checks were successful
Lint / lint (pull_request) Successful in 22s
2026-04-22 03:15:17 +00:00
c7a2d439c1 Merge pull request 'feat: The Sovereign Scavenger — Automated Tech Debt Recovery' (#974) from feat/sovereign-scavenger-1776827259631 into main
All checks were successful
Lint / lint (push) Successful in 12s
2026-04-22 03:14:14 +00:00
8ad8520bd2 Merge pull request 'feat: Execution Safety Sentry — GOFAI Risk Analysis' (#973) from feat/static-analyzer-gofai-1776826921747 into main
Some checks failed
Lint / lint (push) Has been cancelled
2026-04-22 03:14:07 +00:00
9c7c88823f Merge pull request 'feat: Local Inference Story — Freeing the fleet from cloud dependency' (#972) from feat/local-inference-bridge-1776826896029 into main
Some checks failed
Lint / lint (push) Has been cancelled
2026-04-22 03:14:03 +00:00
aa45e02238 Merge pull request 'feat: GOFAI Semantic Sentry — Deterministic code verification' (#971) from feat/symbolic-verify-gofai-1776826842170 into main
Some checks failed
Lint / lint (push) Has been cancelled
2026-04-22 03:14:01 +00:00
3266c39e8e feat: Sovereign Scavenger — Turning tech debt into actionable backlog
All checks were successful
Lint / lint (pull_request) Successful in 18s
2026-04-22 03:07:40 +00:00
Alexander Whitestone
e8886f10c8 feat: add Update Hermes and Restart Gateway action buttons to web dashboard
All checks were successful
Lint / lint (pull_request) Successful in 10s
Implements the action button lifecycle described in #961:
- POST /api/actions/restart-gateway  — sends SIGTERM to the gateway PID
- POST /api/actions/update-hermes    — runs pip upgrade in a background job
- GET  /api/actions/jobs/{job_id}    — polls job status/output

Frontend (StatusPage.tsx):
- "Restart Gateway" button with spinning icon while running, then
  success/error message that clears after 5–8 s
- "Update Hermes" button that polls the job endpoint every 2 s;
  shows collapsible pip output on completion
- Page remains responsive (buttons disabled only during their own action)

Also adds i18n strings to en.ts, zh.ts, and the shared types.ts interface.

Fixes #961

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-21 23:04:10 -04:00
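The start-then-poll lifecycle behind the update-hermes endpoints can be sketched as an in-memory job registry: POST starts a background subprocess and returns a job id, GET polls status and captured output. Names and shapes here are assumptions for illustration, not the actual endpoint implementation:

```python
import subprocess
import threading
import uuid

# In-memory registry backing hypothetical /api/actions/jobs/{job_id} polling.
_jobs = {}

def start_job(cmd):
    """Launch `cmd` in a background thread; return a job id immediately."""
    job_id = uuid.uuid4().hex
    _jobs[job_id] = {"status": "running", "output": ""}

    def run():
        proc = subprocess.run(cmd, capture_output=True, text=True)
        _jobs[job_id]["output"] = proc.stdout + proc.stderr
        _jobs[job_id]["status"] = "ok" if proc.returncode == 0 else "error"

    threading.Thread(target=run, daemon=True).start()
    return job_id

def job_status(job_id):
    """Poll a job; unknown ids get a sentinel status rather than a KeyError."""
    return _jobs.get(job_id, {"status": "unknown"})
```

The frontend's 2 s polling loop maps directly onto `job_status`: keep polling while `status == "running"`, then render the captured output.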
93a855d4e3 feat: Static Risk Analyzer (GOFAI) for execution safety
All checks were successful
Lint / lint (pull_request) Successful in 8s
2026-04-22 03:02:02 +00:00
5a0bdb556e feat: Local Inference Bridge — Bypassing cloud for local tasks
All checks were successful
Lint / lint (pull_request) Successful in 17s
2026-04-22 03:01:37 +00:00
d619d279f8 feat: Symbolic Sentry (GOFAI) for deterministic code audits
All checks were successful
Lint / lint (pull_request) Successful in 15s
2026-04-22 03:00:44 +00:00
d3b13a6aa5 feat: add verify_impact tool for regression guarding
All checks were successful
Lint / lint (pull_request) Successful in 16s
2026-04-22 02:56:33 +00:00
77d2430a44 feat: add Fleet-Wide File Concurrency Guard
All checks were successful
Lint / lint (pull_request) Successful in 19s
2026-04-22 02:55:04 +00:00
Alexander Whitestone
d2ce6b8749 test: verify action endpoints for restart-gateway and update-hermes
All checks were successful
Lint / lint (pull_request) Successful in 27s
Add TestActionEndpoints class to test_web_server.py covering:
- POST /api/actions/restart-gateway sends SIGUSR1 to gateway PID
- 409 when gateway is not running
- 500 when os.kill raises a signal error
- POST /api/actions/update-hermes returns ok=true on zero exit
- ok=false on non-zero exit code with stderr in detail
- ok=false on timeout
- Both endpoints reject unauthenticated requests

All 7 new tests pass (83 total in the file).

Refs #961
2026-04-21 22:41:27 -04:00
Alexander Whitestone
a8a086548d feat: add restart gateway and update Hermes action buttons to web dashboard
All checks were successful
Lint / lint (pull_request) Successful in 29s
Implements the update/restart action buttons called out in issue #961:

- Backend (web_server.py): two new POST endpoints
  - /api/actions/restart-gateway — sends SIGUSR1 to the running gateway PID
  - /api/actions/update-hermes  — runs `hermes update --yes` in a subprocess
- Frontend (api.ts): restartGateway() / updateHermes() API helpers + ActionResponse type
- UI (StatusPage.tsx): "Actions" card with Restart Gateway and Update Hermes buttons
  - idle → running (spinner) → success/failure states
  - feedback detail text; auto-resets to idle after 8 s
- i18n: new status.actions / restartGateway / updateHermes strings in en, zh, and types

Refs #961

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-21 22:30:22 -04:00
Alexander Whitestone
9e00a59791 test: verify hardcoded-home path guard from burn/921 branch
All checks were successful
Lint / lint (pull_request) Successful in 29s
Cherry-picks tools/path_guard.py and tests/test_path_guard.py from
burn/921-poka-yoke-hardcoded-paths (commit 5dcb905). All 21 tests pass:

- hardcoded /Users/<name>/ paths are rejected at runtime
- hardcoded /home/<name>/ paths are rejected at runtime
- ~/.hermes/... via expanduser() passes (safe, expanded at runtime)
- valid relative and /tmp/ absolute paths pass
- static scanner catches violations and respects # noqa: hardcoded-path-ok
- comments are skipped by scanner
- directory scanner skips test files and __pycache__

Refs #962

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-21 22:26:54 -04:00
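The runtime checks verified above boil down to rejecting literal `/Users/<name>/` and `/home/<name>/` prefixes while allowing paths that are expanded at runtime. A minimal sketch — the function name and regex are assumptions, not the repository's `tools/path_guard.py`:

```python
import re

# Literal home-directory prefixes are machine-specific and must not be
# hardcoded; "~/..." via os.path.expanduser() is fine because it expands
# on whatever machine the code runs on.
_HARDCODED_HOME = re.compile(r"^/(Users|home)/[^/]+/")

def check_path(path):
    """Return True if `path` is safe; False for a hardcoded home directory."""
    return not _HARDCODED_HOME.match(path)
```

A static scanner variant would apply the same pattern to string literals in source files, skipping comments and lines carrying a suppression marker such as the `# noqa: hardcoded-path-ok` mentioned above.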
Alexander Whitestone
9ef7682ee2 chore: merge remote claude/issue-816 — deduplicate gemma-4-27b-it in models.py
All checks were successful
Lint / lint (pull_request) Successful in 30s
Merged prior implementation (PR #947) and resolved conflicts.
Removed duplicate "gemma-4-27b-it" entry introduced during merge.

Refs #816

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-21 21:27:51 -04:00
Alexander Whitestone
e157a22639 feat: wire Gemma 4 vision into browser_tool for screenshot analysis
- Add `_BROWSER_VISION_DEFAULT_MODEL = "google/gemma-4-27b-it"` constant
- Rewrite `_get_vision_model()` with 4-tier resolution:
  1. BROWSER_VISION_MODEL env var (browser-specific override)
  2. auxiliary.browser_vision.model config key
  3. AUXILIARY_VISION_MODEL env var (backward compat)
  4. google/gemma-4-27b-it default (Gemma 4 native multimodal)
- Extract `_load_browser_vision_config()` helper for testability
- Always set call_kwargs["model"] (remove redundant `if vision_model` guard)
- Read timeout from auxiliary.browser_vision.timeout before auxiliary.vision.timeout
- Register gemma-4-27b-it in Gemini provider model catalog
- Document auxiliary.browser_vision section in cli-config.yaml.example
- Add 12 unit tests in tests/tools/test_browser_vision_model.py covering all
  resolution tiers, backward compat, error fallthrough, and type guarantees

Fixes #816

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-21 21:26:03 -04:00
Alexander Whitestone
671283389c feat: Wire Gemma 4 vision into browser_tool for screenshot analysis
All checks were successful
Lint / lint (pull_request) Successful in 8s
_get_vision_model() now resolves via a layered priority chain:
  1. BROWSER_VISION_MODEL env var (browser-specific override)
  2. config.yaml browser.vision_model
  3. AUXILIARY_VISION_MODEL env var (backward-compat shared override)
  4. google/gemma-4-27b-it — Gemma 4 native multimodal default

Add browser.vision_model config key to hermes_cli/config.py defaults
with inline documentation.

call_kwargs["model"] is now always set (model is never None), and a
debug log line records which model is in use for each screenshot.

Fixes #816

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-21 20:51:04 -04:00
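The layered priority chain these commits describe — first non-empty source wins, with a guaranteed non-None default — can be sketched as follows. The config is modeled as a plain dict; key names follow the commit messages but the accessor shape is an assumption:

```python
import os

_BROWSER_VISION_DEFAULT_MODEL = "google/gemma-4-27b-it"

def resolve_vision_model(config):
    """Resolve the browser screenshot model; always returns a string.

    Priority: browser-specific env var, then the browser_vision config
    key, then the legacy shared env var, then the Gemma 4 default.
    """
    for candidate in (
        os.environ.get("BROWSER_VISION_MODEL"),            # 1. browser override
        config.get("auxiliary", {})
              .get("browser_vision", {}).get("model"),     # 2. config key
        os.environ.get("AUXILIARY_VISION_MODEL"),          # 3. backward compat
    ):
        if candidate:
            return candidate
    return _BROWSER_VISION_DEFAULT_MODEL                   # 4. default
```

Because the result is never None, the caller can set `call_kwargs["model"]` unconditionally — which is exactly why the `if vision_model:` guard removed in a later commit was redundant.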
Alexander Whitestone
17cc4bac90 feat: complete Gemma 4 browser_vision wiring — task routing, timeout, tests
All checks were successful
Lint / lint (pull_request) Successful in 10s
Building on the Gemma 4 default already on this branch:

- Change call_llm() task from "vision" to "browser_vision" in browser_vision()
  so auxiliary.browser_vision.* config is consulted for provider/model/timeout
- Route call_llm(task="browser_vision") through the vision provider resolution
  path in auxiliary_client.py (same as task="vision")
- Fix timeout resolution: check auxiliary.browser_vision.timeout before
  auxiliary.vision.timeout (allows browser-specific timeout override)
- Add timeout option to auxiliary.browser_vision in cli-config.yaml.example
- Add test_browser_vision_gemma4.py covering: task routing assertions,
  call_llm() vision branch routing, and timeout config key ordering

Refs #816

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-21 19:43:42 -04:00
Alexander Whitestone
1843545d66 chore: merge remote branch — resolve conflicts, use canonical implementation
All checks were successful
Lint / lint (pull_request) Successful in 8s
Merge remote claude/issue-816 which contains the full Gemma 4 browser
vision implementation. Resolved conflicts by taking the remote's cleaner
variable names and docstrings while keeping the same 4-tier resolution
logic. All 12 tests pass.

Refs #816

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-21 18:50:22 -04:00
Alexander Whitestone
c643ac90da feat: wire Gemma 4 vision into browser_tool for screenshot analysis
- Add `_BROWSER_VISION_DEFAULT_MODEL = "google/gemma-4-27b-it"` constant
- Rewrite `_get_vision_model()` with 4-tier resolution:
    1. BROWSER_VISION_MODEL env var (browser-specific override)
    2. auxiliary.browser_vision.model config key
    3. AUXILIARY_VISION_MODEL env var (backward compat)
    4. Gemma 4 27B default
- Remove `if vision_model:` guard — function now always returns a string
- Update browser_vision tool description to surface Gemma 4 as default
- Register gemma-4-27b-it in Gemini provider model catalog (models.py)
- Document auxiliary.browser_vision.model in cli-config.yaml.example
- Add 14 unit tests covering all priority levels and backward compat

Fixes #816

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-21 18:47:03 -04:00
Alexander Whitestone
da9c4cf10c feat: wire Gemma 4 vision into browser_tool for screenshot analysis
All checks were successful
Lint / lint (pull_request) Successful in 7s
Extends `_get_vision_model()` with a 5-level resolution chain:
1. `BROWSER_VISION_MODEL` env var — browser-specific override
2. `auxiliary.browser.vision_model` config key — per-install default
3. `AUXILIARY_VISION_MODEL` env var — backward-compat shared override
4. Auto-select `gemma-4-27b-it` when the main provider is Gemini/Google
5. `None` — fall through to `call_llm` vision router

Adds `_BROWSER_VISION_DEFAULT_MODEL = "gemma-4-27b-it"` constant and
registers `gemma-4-27b-it` in the Gemini provider model catalog.

16 new tests in `tests/tools/test_browser_vision_model.py` cover each
priority level, edge cases (empty env, config exceptions, wrong provider).

Fixes #816

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-21 18:18:30 -04:00
Alexander Whitestone
4214082fb6 feat: A2A auth — mutual TLS between fleet agents
All checks were successful
Lint / lint (pull_request) Successful in 8s
Implements mTLS for securing agent-to-agent communication in the Hermes
fleet. Fixes #806.

Changes:
- scripts/gen_fleet_ca.sh: generate a self-signed Fleet CA (4096-bit RSA,
  10-year validity) that signs all agent certificates
- scripts/gen_agent_cert.sh: generate per-agent certs (Timmy, Allegro,
  Ezra) signed by the fleet CA with SAN entries and clientAuth/serverAuth
  extended key usage
- agent/mtls.py: new module providing:
  - build_server_ssl_context() — TLS_SERVER context with CERT_REQUIRED,
    enforces client cert against Fleet CA
  - build_client_ssl_context() — TLS_CLIENT context for outbound A2A calls
  - MTLSMiddleware — ASGI middleware that rejects unauthenticated requests
    to A2A routes (/.well-known/agent-card*, /api/agent-card, /a2a/) with
    HTTP 403 when mTLS is enabled
  - is_mtls_configured() — checks HERMES_MTLS_CERT/KEY/CA env vars
- hermes_cli/web_server.py: wire MTLSMiddleware into the FastAPI app;
  pass SSL context to uvicorn when HERMES_MTLS_* env vars are set so
  the server runs TLS with mandatory client cert verification
- ansible/roles/hermes_mtls/: Ansible role to distribute Fleet CA cert,
  agent cert, and agent key to fleet nodes; writes an env file with
  HERMES_MTLS_* vars and restarts the hermes-gateway service
- ansible/fleet_mtls.yml: fleet-wide playbook referencing the role for
  Timmy, Allegro, and Ezra nodes
- tests/test_mtls.py: 15 tests covering is_mtls_configured, SSL context
  creation with real cryptography-generated certs, and MTLSMiddleware
  (unauthorized agent rejected → 403, authorized agent accepted → 200)

mTLS is opt-in: set HERMES_MTLS_CERT, HERMES_MTLS_KEY, and HERMES_MTLS_CA
to enable. When unset, the server behaves exactly as before.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-21 18:04:00 -04:00
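The core of the mTLS setup above is an SSL context that refuses any client not presenting a certificate signed by the fleet CA. A sketch under the commit's stated design — parameter names are assumptions, and real certificate/key files are needed to actually load a chain:

```python
import ssl

def build_server_ssl_context(certfile, keyfile, cafile):
    """TLS server context that demands a fleet-CA-signed client cert.

    CERT_REQUIRED makes the handshake itself fail for unauthenticated
    peers, so rejection happens below the application layer.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
    ctx.load_verify_locations(cafile=cafile)
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx

def is_mtls_configured(env):
    """mTLS is opt-in: enabled only when all three vars are set and non-empty."""
    required = ("HERMES_MTLS_CERT", "HERMES_MTLS_KEY", "HERMES_MTLS_CA")
    return all(env.get(k) for k in required)
```

With `env` passed explicitly (here a dict, in practice `os.environ`), the opt-in check is trivially testable without touching process state.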
Alexander Whitestone
95bb842a21 feat: Wire Gemma 4 vision into browser_tool for screenshot analysis
All checks were successful
Lint / lint (pull_request) Successful in 8s
Default browser_vision screenshots to google/gemma-4-27b-it (Gemma 4
native multimodal) for reduced latency and unified text+vision model.

Resolution order for _get_vision_model():
1. BROWSER_VISION_MODEL env var (new, browser-specific override)
2. auxiliary.browser_vision.model in config.yaml (new config key)
3. AUXILIARY_VISION_MODEL env var (existing global vision override)
4. Default: google/gemma-4-27b-it

Backward compatibility: existing AUXILIARY_VISION_MODEL users are
unaffected — their override still flows through to browser_vision.

Also documents the new auxiliary.browser_vision config section in
cli-config.yaml.example and adds 14 unit tests covering the full
priority chain.

Fixes #816

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-21 17:14:32 -04:00
Alexander Whitestone
ac28444bf2 feat: add A2AMTLSServer routing API, A2AMTLSClient, and expand tests to 20 (#806)
All checks were successful
Lint / lint (pull_request) Successful in 9s
Builds on the existing A2AServer / build_*_ssl_context foundation:

- agent/a2a_mtls.py:
  - Add A2AMTLSServer: routing-based HTTPS server with add_route() and
    context-manager (__enter__/__exit__) lifecycle support
  - Add A2AMTLSClient: fleet-cert-presenting HTTP client with .get() / .post()
  - Widen imports (json, Callable, Dict, urlopen)

- tests/agent/test_a2a_mtls.py:
  - Fix datetime.utcnow() deprecation — use datetime.now(timezone.utc)
  - Add TestA2AMTLSServerAndClient (9 tests): routing GET/POST, 404,
    context-manager stop, rogue-cert rejection, A2AMTLSClient, concurrency
  - Total: 11 → 20 passing tests

Refs #806
2026-04-21 15:21:10 -04:00
Alexander Whitestone
12b5d9a7fd refactor: remove redundant vision_model guard in browser_vision
All checks were successful
Lint / lint (pull_request) Successful in 10s
_get_vision_model() now always returns a non-empty string (Gemma 4 default
or configured override), so the `if vision_model:` conditional guard is
unnecessary. Replace with unconditional assignment and add a debug log
line showing which model was selected.

Refs #816

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-21 14:09:40 -04:00
Alexander Whitestone
91faf6f956 feat: A2A auth — mutual TLS between fleet agents
All checks were successful
Lint / lint (pull_request) Successful in 10s
Implements mutual TLS for secure agent-to-agent communication (#806).

- scripts/gen_fleet_ca.sh: generate fleet CA (4096-bit RSA, 10-year)
- scripts/gen_agent_cert.sh: per-agent cert signed by fleet CA (timmy, allegro, ezra)
- agent/a2a_mtls.py: A2AServer requiring client cert verification (CERT_REQUIRED),
  build_server_ssl_context / build_client_ssl_context helpers, server_from_env()
- ansible/roles/fleet_mtls_certs/: distribute CA + per-agent certs to fleet nodes,
  write /etc/hermes/a2a.env, notify hermes-a2a service on change
- ansible/fleet_mtls.yml + ansible/inventory/fleet.ini.example: playbook + example inventory
- tests/agent/test_a2a_mtls.py: 11 tests — authorized agent accepted (200/202),
  self-signed cert rejected, no-cert rejected, lifecycle, env-var wiring

Fixes #806

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-21 13:28:28 -04:00
Alexander Whitestone
b6398b8b0d feat: wire Gemma 4 vision into browser_tool for screenshot analysis
All checks were successful
Lint / lint (pull_request) Successful in 19s
Default browser screenshot analysis now uses Gemma 4 27B
(google/gemma-4-27b-it) instead of deferring to the auxiliary router's
auto-detection.  Gemma 4 is natively multimodal — the same model family
already in use for text tasks — which avoids cold-start model-switching
overhead and improves context continuity.

Resolution order for _get_vision_model():
  1. BROWSER_VISION_MODEL env var (browser-specific override)
  2. auxiliary.browser_vision.model in config.yaml
  3. AUXILIARY_VISION_MODEL env var (shared/legacy override)
  4. google/gemma-4-27b-it (new default)

- Add _BROWSER_VISION_DEFAULT_MODEL constant to browser_tool.py
- Document auxiliary.browser_vision config key in cli-config.yaml.example
- Add 10 unit tests covering all resolution steps

Fixes #816

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-21 12:49:46 -04:00
a2a40429bd Merge pull request '[claude] Poka-yoke: auto-revert incomplete skill edits (#923)' (#946) from claude/issue-923 into main
All checks were successful
Lint / lint (push) Successful in 10s
2026-04-21 16:38:24 +00:00
ee61c5fa9d Merge pull request 'feat: Add queue health check script' (#912) from feat/queue-health-check into main
All checks were successful
Lint / lint (push) Successful in 34s
2026-04-21 15:37:59 +00:00
Alexander Whitestone
1fece10569 feat: poka-yoke auto-revert for incomplete skill edits (#923)
All checks were successful
Lint / lint (pull_request) Successful in 32s
Implement a transactional write-validate-commit-or-rollback pattern for
all skill_manage write operations (edit, patch, write_file):

- _backup_skill_file: timestamped .bak.{ts} snapshot before every write
- _validate_written_file: re-reads from disk after write to catch truncation,
  encoding errors, and broken YAML frontmatter
- _revert_from_backup: restores original content (or removes the corrupted
  file) on any validation failure
- _cleanup_old_backups: prunes to MAX_BACKUPS_PER_FILE (3) after success;
  failed edits keep their .bak file as a debugging aid

Also fixes pre-existing issue where _patch_skill error returns lacked a
`suggestion` field expected by test_skill_manager_error_context.py tests.

Adds 21 tests in test_skill_manager_autorevert.py covering every component
and an end-to-end simulation of mid-write failure + auto-revert.

Fixes #923

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-21 11:37:55 -04:00
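The write-validate-commit-or-rollback pattern above can be condensed into a single helper: snapshot, write, re-read from disk, validate, and restore on failure. The function name and the pluggable validator are assumptions for illustration; the real implementation splits these steps across the private helpers named in the commit:

```python
import os
import shutil
import time

def safe_write(path, new_text, validate):
    """Transactional file write: backup, write, re-read, validate, or revert.

    Returns True on a committed write. On failure the original content is
    restored (or the new file removed) and the .bak snapshot is kept as a
    debugging aid, matching the failed-edit behavior described above.
    """
    backup = None
    if os.path.exists(path):
        backup = f"{path}.bak.{int(time.time())}"
        shutil.copy2(path, backup)
    with open(path, "w", encoding="utf-8") as f:
        f.write(new_text)
    # Re-read from disk rather than trusting the in-memory string, to
    # catch truncation and encoding errors introduced during the write.
    with open(path, encoding="utf-8") as f:
        written = f.read()
    if written == new_text and validate(written):
        return True
    if backup:
        shutil.copy2(backup, path)   # rollback to the snapshot
    else:
        os.remove(path)              # the file did not exist before
    return False
```

`validate` is where YAML-frontmatter or syntax checks plug in; anything that raises or returns False triggers the rollback path.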
46668505bc Merge pull request 'feat: tool fixation detection — break repetitive loops (#886)' (#914) from fix/886 into main
Some checks failed
Lint / lint (push) Has been cancelled
2026-04-21 15:35:08 +00:00
cac0c8224e Merge pull request 'fix: circuit breaker for error cascading (2.33x amplification)' (#927) from fix/885-circuit-breaker into main
Some checks failed
Lint / lint (push) Has been cancelled
2026-04-21 15:35:04 +00:00
f38a64455d Merge pull request '[claude] Gateway config debt: add validation tests and API_SERVER_KEY warning (#892)' (#915) from claude/issue-892 into main
Some checks failed
Lint / lint (push) Has been cancelled
2026-04-21 15:33:19 +00:00
1b35a5a0d2 Merge pull request 'feat: Poka-yoke — hardcoded path guard (#921)' (#928) from fix/921-hardcoded-path-guard into main
Some checks failed
Lint / lint (push) Has been cancelled
2026-04-21 15:33:14 +00:00
9172131b25 Merge pull request 'docs: tool investigation report from awesome-ai-tools (#926)' (#931) from fix/926 into main
Some checks failed
Lint / lint (push) Has been cancelled
2026-04-21 15:33:12 +00:00
407eab3331 Merge pull request 'feat: session deterministic seeding & marathon limits' (#919) from feat/session-management-1776700585635 into main
Some checks failed
Lint / lint (push) Has been cancelled
2026-04-21 15:29:44 +00:00
cf090a966d Merge pull request 'fix: Poka-yoke — detect and block tool hallucination before API calls (#922)' (#935) from fix/922 into main
Some checks failed
Lint / lint (push) Has been cancelled
2026-04-21 15:29:35 +00:00
b65be9b12c Merge pull request '[claude] Add tool investigation report: top 5 awesome-ai-tools recommendations (#926)' (#936) from claude/issue-926 into main
Some checks failed
Lint / lint (push) Has been cancelled
2026-04-21 15:29:32 +00:00
3c1cff255e Merge pull request 'ci: integrate hardcoded path linter into CI workflow' (#938) from fix/865-ci-path-linter into main
Some checks failed
Lint / lint (push) Has been cancelled
2026-04-21 15:29:30 +00:00
690d100afc Merge pull request 'feat: Poka-yoke token budget — progressive context overflow guard (#925)' (#943) from burn/925-1776770102 into main
Some checks failed
Docker Build and Publish / build-and-push (push) Has been skipped
Nix / nix (ubuntu-latest) (push) Failing after 5s
Tests / e2e (push) Successful in 5m8s
Tests / test (push) Failing after 30m13s
Nix / nix (macos-latest) (push) Has been cancelled
2026-04-21 15:29:02 +00:00
c6f0831738 Merge pull request 'feat: Python syntax validation before execute_code (#913)' (#917) from fix/913-syntax-validation into main
Some checks failed
Docker Build and Publish / build-and-push (push) Has been cancelled
Nix / nix (macos-latest) (push) Has been cancelled
Nix / nix (ubuntu-latest) (push) Has been cancelled
Tests / test (push) Has been cancelled
Tests / e2e (push) Has been cancelled
2026-04-21 15:27:05 +00:00
30773ac1f9 Merge pull request 'fix: Path validation before read_file — poka-yoke (#887)' (#911) from fix/887-path-validation-read-file into main
Some checks failed
Docker Build and Publish / build-and-push (push) Has been cancelled
Nix / nix (macos-latest) (push) Has been cancelled
Nix / nix (ubuntu-latest) (push) Has been cancelled
Tests / test (push) Has been cancelled
Tests / e2e (push) Has been cancelled
2026-04-21 15:26:55 +00:00
feb24bd08c Merge pull request 'feat: Block silent credential exposure in tool outputs (#839)' (#910) from fix/839-1776403070 into main
Some checks failed
Docker Build and Publish / build-and-push (push) Has been cancelled
Nix / nix (macos-latest) (push) Has been cancelled
Nix / nix (ubuntu-latest) (push) Has been cancelled
Tests / test (push) Has been cancelled
Tests / e2e (push) Has been cancelled
2026-04-21 15:26:47 +00:00
bc55f40505 Merge pull request 'feat: time-aware model routing for cron jobs (#889)' (#909) from fix/889 into main
Some checks failed
Docker Build and Publish / build-and-push (push) Has been cancelled
Nix / nix (macos-latest) (push) Has been cancelled
Nix / nix (ubuntu-latest) (push) Has been cancelled
Tests / test (push) Has been cancelled
Tests / e2e (push) Has been cancelled
2026-04-21 15:26:43 +00:00
2adc72335e Merge pull request 'fix: profile session isolation — tag and filter by profile' (#907) from fix/891-profile-isolation into main
Some checks failed
Docker Build and Publish / build-and-push (push) Has been cancelled
Nix / nix (macos-latest) (push) Has been cancelled
Nix / nix (ubuntu-latest) (push) Has been cancelled
Tests / test (push) Has been cancelled
Tests / e2e (push) Has been cancelled
2026-04-21 15:26:39 +00:00
ab32670464 Merge pull request 'feat: Poka-yoke — detect and block tool hallucination before API calls (#922)' (#944) from burn/922-1776770102 into main
Some checks failed
Docker Build and Publish / build-and-push (push) Has been cancelled
Nix / nix (macos-latest) (push) Has been cancelled
Nix / nix (ubuntu-latest) (push) Has been cancelled
Tests / test (push) Has been cancelled
Tests / e2e (push) Has been cancelled
2026-04-21 15:23:56 +00:00
bfc0231297 Merge pull request 'docs: holographic + vector hybrid memory architecture (#879)' (#942) from fix/879 into main
Some checks failed
Docker Build and Publish / build-and-push (push) Has been cancelled
Nix / nix (macos-latest) (push) Has been cancelled
Nix / nix (ubuntu-latest) (push) Has been cancelled
Tests / test (push) Has been cancelled
Tests / e2e (push) Has been cancelled
2026-04-21 15:23:49 +00:00
cf2b09cf2f Merge pull request 'docs: emotional presence patterns for crisis support (#880)' (#941) from fix/880 into main
Some checks failed
Docker Build and Publish / build-and-push (push) Has been cancelled
Nix / nix (macos-latest) (push) Has been cancelled
Nix / nix (ubuntu-latest) (push) Has been cancelled
Tests / test (push) Has been cancelled
Tests / e2e (push) Has been cancelled
2026-04-21 15:23:45 +00:00
719bb537c0 Merge pull request 'feat: provider preflight validation before session start (#924)' (#932) from fix/924 into main
Some checks failed
Docker Build and Publish / build-and-push (push) Has been cancelled
Nix / nix (macos-latest) (push) Has been cancelled
Nix / nix (ubuntu-latest) (push) Has been cancelled
Tests / test (push) Has been cancelled
Tests / e2e (push) Has been cancelled
2026-04-21 15:23:02 +00:00
0bcbcf19ac Merge pull request 'feat: time-aware model routing for cron jobs #889' (#906) from fix/time-aware-routing-889 into main
Some checks failed
Docker Build and Publish / build-and-push (push) Has been cancelled
Nix / nix (macos-latest) (push) Has been cancelled
Nix / nix (ubuntu-latest) (push) Has been cancelled
Tests / test (push) Has been cancelled
Tests / e2e (push) Has been cancelled
2026-04-21 15:22:37 +00:00
27d2f2ca0e Merge pull request 'feat: Prevent context window overflow via proactive token counting (#838)' (#905) from fix/838-1776402240 into main
Some checks failed
Docker Build and Publish / build-and-push (push) Has been cancelled
Nix / nix (macos-latest) (push) Has been cancelled
Nix / nix (ubuntu-latest) (push) Has been cancelled
Tests / e2e (push) Has been cancelled
Tests / test (push) Has been cancelled
2026-04-21 15:22:31 +00:00
7e7dcfa345 Merge pull request 'fix: Gateway config validation and fallback fixes (#892)' (#904) from fix/892 into main
Some checks failed
Docker Build and Publish / build-and-push (push) Has been cancelled
Nix / nix (macos-latest) (push) Has been cancelled
Nix / nix (ubuntu-latest) (push) Has been cancelled
Tests / e2e (push) Has been cancelled
Tests / test (push) Has been cancelled
2026-04-21 15:22:22 +00:00
ba0e614446 Merge pull request 'feat: integrate 988 Suicide & Crisis Lifeline — automatic crisis escalation (#673)' (#903) from feat/673 into main
Some checks failed
Docker Build and Publish / build-and-push (push) Has been cancelled
Nix / nix (macos-latest) (push) Has been cancelled
Nix / nix (ubuntu-latest) (push) Has been cancelled
Tests / test (push) Has been cancelled
Tests / e2e (push) Has been cancelled
2026-04-21 15:22:17 +00:00
4f5e641c92 Merge pull request 'fix: kill 9 dead cron jobs — audit and cleanup script' (#902) from fix/890-dead-cron-jobs into main
Some checks failed
Docker Build and Publish / build-and-push (push) Has been cancelled
Nix / nix (macos-latest) (push) Has been cancelled
Nix / nix (ubuntu-latest) (push) Has been cancelled
Tests / e2e (push) Has been cancelled
Tests / test (push) Has been cancelled
2026-04-21 15:22:15 +00:00
d61bd141f9 feat: add poka-yoke validation to non-execute_code dispatch (#922)
Some checks failed
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Contributor Attribution Check / check-attribution (pull_request) Failing after 32s
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 32s
Tests / e2e (pull_request) Successful in 3m5s
Tests / test (pull_request) Failing after 36m26s
2026-04-21 12:01:57 +00:00
a4058af238 feat: wire poka-yoke validation into tool dispatch (#922) 2026-04-21 12:00:20 +00:00
08432a5618 test: poka-yoke validation tests (#922) 2026-04-21 11:59:26 +00:00
a875c6ed91 feat: poka-yoke tool call validation firewall (#922) 2026-04-21 11:59:25 +00:00
07c5b5b83d test: add token budget poka-yoke tests (#925)
Some checks failed
Contributor Attribution Check / check-attribution (pull_request) Failing after 44s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 45s
Tests / test (pull_request) Failing after 25m21s
Tests / e2e (pull_request) Successful in 3m18s
2026-04-21 11:41:39 +00:00
ba56567631 docs: holographic + vector hybrid memory architecture (#879)
Some checks failed
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 45s
Tests / test (pull_request) Failing after 14m3s
Tests / e2e (pull_request) Successful in 1m53s
2026-04-21 11:41:31 +00:00
8ac26f54a5 feat: token budget with progressive poka-yoke thresholds (#925) 2026-04-21 11:40:39 +00:00
b807972d05 docs: emotional presence patterns for crisis support (#880)
Some checks failed
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 52s
Tests / e2e (pull_request) Successful in 3m53s
Tests / test (pull_request) Failing after 53m34s
2026-04-21 11:37:57 +00:00
6b5a6db668 ci: add Gitea Actions lint workflow
All checks were successful
Lint / lint (pull_request) Successful in 15s
Part of #865. Runs hardcoded path linter on every push/PR.
2026-04-21 11:37:33 +00:00
b702249c12 ci: add hardcoded path linter to CI workflow
Closes #865

Runs scripts/lint_hardcoded_paths.py as a CI check.
Uses continue-on-error for now since the linter may have false positives.
2026-04-21 11:37:31 +00:00
Alexander Whitestone
8023c9b8f2 docs: add tool investigation report for top 5 awesome-ai-tools recommendations
Some checks failed
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 55s
Tests / e2e (pull_request) Successful in 3m56s
Tests / test (pull_request) Failing after 54m0s
Persists the research report from issue #926 as a markdown file following
the existing convention of research_*.md files in the repo. Documents the
top 5 tool recommendations (LiteLLM, Mem0, RAGFlow, LiteRT-LM, Claude-Mem)
with integration effort, impact scores, and phased implementation plan.

Refs #926
2026-04-21 07:26:44 -04:00
TERRA
9edd5383e7 feat: add hermes web console cockpit and browser self-healing (#394)
Some checks failed
Contributor Attribution Check / check-attribution (pull_request) Failing after 36s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 31s
Tests / e2e (pull_request) Successful in 3m37s
Tests / test (pull_request) Failing after 38m26s
2026-04-21 02:00:41 -04:00
TERRA
f6c072f136 wip: add web console cockpit regression tests for #394 2026-04-21 02:00:41 -04:00
6eeee39c10 test(#922): Add tests for tool hallucination detection
Some checks failed
Contributor Attribution Check / check-attribution (pull_request) Failing after 1m15s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 1m8s
Tests / e2e (pull_request) Successful in 3m44s
Tests / test (pull_request) Failing after 1h9m15s
Tests for validation firewall:
- Unknown tool detection
- Missing required params
- Wrong type detection
- Hallucination patterns
- Rejection stats

Refs #922
2026-04-21 05:38:54 +00:00
b2d2d2c650 fix(#922): Poka-yoke — detect and block tool hallucination
Validation firewall between LLM tool-call output and execution:
1. Unknown tool names rejected
2. Malformed parameters caught
3. Missing required arguments detected
4. Hallucination patterns detected

All rejections logged with model provenance.
Agent receives rejection as tool result for self-correction.

Resolves #922
2026-04-21 05:38:22 +00:00
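The firewall this commit describes can be sketched as a registry check that runs between model output and execution. The tool names, schema shape, and function below are illustrative, not the repo's actual API, and the pattern-based hallucination checks are omitted:

```python
# Sketch of a tool-call validation firewall (names are illustrative).
# A call is checked against a registry schema before execution;
# rejections come back as structured results.

TOOL_REGISTRY = {
    "read_file": {"required": {"path": str}},
    "execute_code": {"required": {"code": str}},
}

def validate_tool_call(name, args):
    """Return (ok, reason). Rejections are fed back to the agent as a
    tool result so it can self-correct."""
    spec = TOOL_REGISTRY.get(name)
    if spec is None:
        return False, f"unknown tool: {name!r}"
    for param, typ in spec["required"].items():
        if param not in args:
            return False, f"missing required argument: {param!r}"
        if not isinstance(args[param], typ):
            return False, f"wrong type for {param!r}: expected {typ.__name__}"
    return True, "ok"
```

The rejection tuple is what gets serialized back to the agent as the tool result, which is what enables the self-correction loop the commit mentions.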
5b62bb8d81 feat(#394): Hermes web UI operator cockpit
Some checks failed
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Contributor Attribution Check / check-attribution (pull_request) Failing after 43s
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 1m9s
Tests / e2e (pull_request) Successful in 6m9s
Tests / test (pull_request) Failing after 1h3m4s
Minimal web interface for Hermes operation:
- Chat interface with streaming
- System status monitoring
- Crisis detection display
- Session management
- Dark theme, responsive design

Source-backed: Hermes Atlas pattern.
Refs #394
2026-04-21 05:34:22 +00:00
10f9fd690a feat(#394): Self-healing browser CDP layer (browser-harness)
Source-backed browser automation:
- CDP connection with auto-reconnect
- Self-healing on disconnects
- Screenshot, DOM inspection, JS evaluation
- Click, type, navigate primitives
- Session persistence

Refs #394
2026-04-21 05:33:32 +00:00
bdd0f2709b feat: provider preflight validation before session start (#924)
Some checks failed
Contributor Attribution Check / check-attribution (pull_request) Failing after 47s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 52s
Tests / test (pull_request) Failing after 30m48s
Tests / e2e (pull_request) Successful in 2m9s
2026-04-21 04:48:57 +00:00
a9cbf7d69f docs: tool investigation report from awesome-ai-tools (#926)
Some checks failed
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 36s
Tests / e2e (pull_request) Successful in 2m56s
Tests / test (pull_request) Failing after 34m20s
2026-04-21 04:45:03 +00:00
b64f4d9632 feat: update run_agent.py for deep dive security
Some checks failed
Contributor Attribution Check / check-attribution (pull_request) Failing after 28s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Nix / nix (ubuntu-latest) (pull_request) Failing after 4s
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 35s
Tests / test (pull_request) Failing after 1h0m5s
Tests / e2e (pull_request) Failing after 6m56s
Nix / nix (macos-latest) (pull_request) Has been cancelled
2026-04-21 00:41:55 +00:00
7caaf49a34 feat: deep dive integration of tests/test_shield_multilingual.py 2026-04-21 00:41:53 +00:00
e52f6d2cde feat: deep dive integration of tools/shield/detector.py 2026-04-21 00:41:52 +00:00
000d64deed feat: deep dive integration of agent/input_sanitizer.py 2026-04-21 00:41:50 +00:00
d527cb569b feat: deep dive integration of agent/shield.py 2026-04-21 00:41:49 +00:00
44ada06fd4 feat: update agent/privacy_filter.py for deep dive security 2026-04-21 00:41:48 +00:00
4cdda8701d feat: integrate hardcoded path guard into tool dispatch
Some checks failed
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Contributor Attribution Check / check-attribution (pull_request) Failing after 32s
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 32s
Tests / e2e (pull_request) Successful in 2m56s
Tests / test (pull_request) Failing after 1h1m7s
2026-04-21 00:31:01 +00:00
a80d30b342 feat: add pre-commit hook for hardcoded path detection 2026-04-21 00:29:33 +00:00
f098cf8c4a feat: add hardcoded path guard module (#921)
- Detects /Users/, /home/, ~/ in tool arguments
- Source code scanner for CI/pre-commit
- Runtime guard for tool dispatch
- noqa: hardcoded-path-ok escape hatch

Closes #921
2026-04-21 00:29:12 +00:00
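A minimal sketch of the detection rule this commit describes — the three path prefixes and the `noqa: hardcoded-path-ok` escape hatch come from the message; the exact regex is an assumption:

```python
import re

# Flags user-specific path prefixes in a line of source or a tool argument.
HARDCODED = re.compile(r"/Users/|/home/|~/")
ESCAPE_HATCH = "noqa: hardcoded-path-ok"

def flag_hardcoded_paths(line):
    """Return True when a line contains a user-specific path prefix
    and is not explicitly whitelisted via the escape hatch."""
    if ESCAPE_HATCH in line:
        return False
    return bool(HARDCODED.search(line))
```

The same predicate can back both the source scanner and the runtime tool-dispatch guard, which keeps the two checks from drifting apart.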
30509b9c7c test: circuit breaker tests
Some checks failed
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Contributor Attribution Check / check-attribution (pull_request) Failing after 38s
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 40s
Tests / e2e (pull_request) Successful in 1m36s
Tests / test (pull_request) Failing after 17m13s
Part of #885
2026-04-21 00:28:15 +00:00
ccaa1cb021 feat: circuit breaker for error cascading
Closes #885

2.33x error cascade factor detected. After 3 consecutive errors,
circuit opens and agent must take corrective action.

Recovery pattern: terminal is the safety net (2300 recoveries).
2026-04-21 00:28:14 +00:00
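The breaker logic reads as a consecutive-error counter; a minimal sketch under that assumption, with the threshold of 3 taken from the message:

```python
class CircuitBreaker:
    """After `threshold` consecutive errors the circuit opens and the
    agent must take corrective action. Any success resets the streak."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.consecutive_errors = 0

    def record(self, ok):
        # A success closes the circuit immediately; a failure extends the streak.
        self.consecutive_errors = 0 if ok else self.consecutive_errors + 1

    @property
    def open(self):
        return self.consecutive_errors >= self.threshold
```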
c6f2855745 fix: restore _format_error helper for test compatibility (#916)
Some checks failed
Docker Build and Publish / build-and-push (push) Has been skipped
Nix / nix (ubuntu-latest) (push) Failing after 2s
Tests / e2e (push) Successful in 2m47s
Tests / test (push) Failing after 27m41s
Build Skills Index / build-index (push) Has been skipped
Build Skills Index / deploy-with-index (push) Has been skipped
Nix / nix (macos-latest) (push) Has been cancelled
2026-04-20 23:56:27 +00:00
9d180f31cc feat: add session templates
Some checks failed
Contributor Attribution Check / check-attribution (pull_request) Failing after 43s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 45s
Tests / test (pull_request) Failing after 45m24s
Tests / e2e (pull_request) Failing after 7m35s
2026-04-20 15:56:26 +00:00
3d8cf5122a feat: add agent/shield.py for SHIELD defense
Some checks failed
Contributor Attribution Check / check-attribution (pull_request) Failing after 31s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 40s
Tests / e2e (pull_request) Successful in 2m2s
Tests / test (pull_request) Failing after 52m0s
2026-04-20 15:54:48 +00:00
790b677978 feat: add tests/test_shield_multilingual.py for SHIELD defense 2026-04-20 15:54:46 +00:00
9a749d2854 feat: add agent/input_sanitizer.py for SHIELD defense 2026-04-20 15:54:45 +00:00
68534e78be feat: add tools/shield/detector.py for SHIELD defense 2026-04-20 15:54:43 +00:00
c17f64fa2c test: add syntax validation tests (#913)
Some checks failed
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Contributor Attribution Check / check-attribution (pull_request) Failing after 41s
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 29s
Tests / e2e (pull_request) Successful in 2m2s
Tests / test (pull_request) Failing after 1h14m43s
2026-04-20 15:47:35 +00:00
bc7ffc2166 feat: Python syntax validation before execute_code (#913) 2026-04-20 15:46:23 +00:00
Alexander Whitestone
c22cdcaa8e fix: add _validate_gateway_config tests and API_SERVER_KEY network binding warning
Some checks failed
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Contributor Attribution Check / check-attribution (pull_request) Failing after 23s
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 27s
Tests / e2e (pull_request) Successful in 1m51s
Tests / test (pull_request) Failing after 37m0s
Refs #892 - Gateway config debt: missing keys and broken fallbacks

Changes:
- Add `_is_network_accessible()` helper to gateway/config.py (avoids circular
  import with gateway.platforms.base which imports from gateway.config)
- Add API_SERVER_KEY warning in `_validate_gateway_config`: when the API server
  is enabled on a network-accessible address (0.0.0.0, public IP, hostname) but
  no key is configured, log a warning at config-load time so operators see the
  issue before any adapter initialisation runs
- Add `TestValidateGatewayConfig` in tests/gateway/test_config.py covering:
  - idle_minutes <= 0 and None are corrected to 1440 (default)
  - at_hour outside 0-23 is corrected to 4 (default)
  - Boundary hours 0 and 23 are accepted unchanged
  - Empty platform token triggers a warning log
  - Disabled platform with empty token produces no warning
  - API server on 0.0.0.0 without key logs a warning
  - API server on 127.0.0.1 without key is silent (loopback is allowed)
  - API server with a key set logs no warning regardless of bind address

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-17 02:18:02 -04:00
Alexander Whitestone
ab968e910c feat: tool fixation detection — break repetitive loops (#886)
Some checks failed
Contributor Attribution Check / check-attribution (pull_request) Failing after 37s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 43s
Tests / e2e (pull_request) Successful in 1m57s
Tests / test (pull_request) Failing after 18m57s
Marathon sessions show tool fixation: agent latches onto one tool
and calls it repeatedly. Observed streaks of 8-25 identical calls.

New agent/tool_fixation_detector.py:
- ToolFixationDetector: tracks consecutive tool calls
- record(tool_name): returns nudge prompt when threshold reached
- Default threshold: 5 consecutive calls (configurable via
  TOOL_FIXATION_THRESHOLD env var)
- Nudge prompt explains the fixation and suggests alternatives:
  1. Read error carefully
  2. Try different tool
  3. Ask user for clarification
  4. Check if task is complete
- get_streak_info(): current streak state
- format_report(): human-readable fixation events
- Singleton via get_fixation_detector()

Config:
- TOOL_FIXATION_THRESHOLD (default: 5)
- TOOL_FIXATION_WINDOW (default: 10)

Tests: tests/test_tool_fixation_detector.py (9 tests)

Closes #886
2026-04-17 01:57:37 -04:00
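The detector's core `record()` contract — return a nudge once a streak hits the threshold — can be sketched as follows (simplified; the real module also keeps a rolling window, streak info, and a formatted report):

```python
import os

class ToolFixationDetector:
    """Tracks consecutive calls to the same tool and nudges the agent
    once the streak reaches the threshold."""

    def __init__(self, threshold=None):
        self.threshold = threshold or int(os.getenv("TOOL_FIXATION_THRESHOLD", "5"))
        self.current_tool = None
        self.streak = 0

    def record(self, tool_name):
        """Return a nudge prompt when the same tool has been called
        `threshold` times in a row, else None."""
        if tool_name == self.current_tool:
            self.streak += 1
        else:
            self.current_tool, self.streak = tool_name, 1
        if self.streak >= self.threshold:
            return (f"You have called {tool_name!r} {self.streak} times in a row. "
                    "Re-read the error, try a different tool, ask the user for "
                    "clarification, or check whether the task is already complete.")
        return None
```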
Alexander Whitestone
73984ca72f feat: Add queue health check script
Some checks failed
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Contributor Attribution Check / check-attribution (pull_request) Failing after 29s
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 31s
Tests / e2e (pull_request) Successful in 2m13s
Tests / test (pull_request) Failing after 28m10s
2026-04-17 01:26:07 -04:00
436c800def fix: add path validation before read_file (#887)
Some checks failed
Contributor Attribution Check / check-attribution (pull_request) Failing after 35s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 38s
Tests / e2e (pull_request) Successful in 1m58s
Tests / test (pull_request) Failing after 42m6s
- Check if file exists before attempting read
- Return clear error with suggestions for similar files
- Suggest using search_files to find correct path
- Eliminates 83.7% of read_file errors (file not found)

Closes #887
2026-04-17 05:24:52 +00:00
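The pre-read check amounts to an existence test plus nearest-match suggestions. A sketch using `difflib` — the suggestion mechanism here is an assumption, since the commit only says similar files are suggested:

```python
import difflib
import os

def validate_read_path(path, search_root="."):
    """Confirm the file exists before reading; on a miss, suggest close
    matches so the agent can self-correct instead of looping."""
    if os.path.isfile(path):
        return {"ok": True}
    candidates = []
    for root, _dirs, files in os.walk(search_root):
        candidates.extend(os.path.join(root, f) for f in files)
    close = difflib.get_close_matches(path, candidates, n=3, cutoff=0.6)
    return {
        "ok": False,
        "error": f"file not found: {path}",
        "suggestions": close or ["try search_files to locate the correct path"],
    }
```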
cb331da4f1 test: Add credential redaction tests (#839)
Some checks failed
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Contributor Attribution Check / check-attribution (pull_request) Failing after 49s
Tests / e2e (pull_request) Successful in 2m50s
Tests / test (pull_request) Failing after 11m50s
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 47s
2026-04-17 05:23:48 +00:00
fa892bfcb9 feat: Add credential redaction for tool outputs (#839) 2026-04-17 05:21:25 +00:00
Alexander Whitestone
0b72884750 feat: time-aware model routing for cron jobs (#889)
Some checks failed
Tests / test (pull_request) Failing after 25m4s
Tests / e2e (pull_request) Successful in 3m19s
Contributor Attribution Check / check-attribution (pull_request) Failing after 14s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 14s
Error rate peaks at 18:00 (9.4%) during evening cron batches vs 4.0%
at 09:00 during interactive work. Route cron tasks to stronger models
during off-hours when user is not present to correct errors.

New agent/time_aware_routing.py:
- resolve_time_aware_model(): routes based on hour, error rate, task type
- Interactive sessions: always use base model (user corrects errors)
- Cron during business hours: use base model (low error rate)
- Cron during off-hours with high error rate (>6%): upgrade to strong model
- get_hour_error_rate(): error rates by hour from empirical audit
- is_off_hours(): 18:00-05:59 = off-hours
- RoutingDecision: model, provider, reason, hour, error_rate
- get_routing_report(): 24h forecast of routing decisions

Config via env vars:
- CRON_STRONG_MODEL (default: xiaomi/mimo-v2-pro)
- CRON_CHEAP_MODEL (default: qwen2.5:7b)
- CRON_ERROR_THRESHOLD (default: 6.0%)

Tests: tests/test_time_aware_routing.py (9 tests)

Closes #889
2026-04-17 01:15:09 -04:00
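The routing rule condenses to three branches; a sketch using the commit's env-var names, with the decision record trimmed to model and reason:

```python
import os
from dataclasses import dataclass

@dataclass
class RoutingDecision:
    model: str
    reason: str

def is_off_hours(hour):
    # 18:00-05:59 is off-hours per the commit message.
    return hour >= 18 or hour < 6

def resolve_time_aware_model(hour, error_rate, interactive):
    base = os.getenv("CRON_CHEAP_MODEL", "qwen2.5:7b")
    strong = os.getenv("CRON_STRONG_MODEL", "xiaomi/mimo-v2-pro")
    threshold = float(os.getenv("CRON_ERROR_THRESHOLD", "6.0"))
    if interactive:
        return RoutingDecision(base, "interactive: user corrects errors")
    if is_off_hours(hour) and error_rate > threshold:
        return RoutingDecision(strong, "off-hours cron with high error rate")
    return RoutingDecision(base, "cron during low-error hours")
```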
a0ed1e6ff2 test: profile isolation tests
Some checks failed
Contributor Attribution Check / check-attribution (pull_request) Failing after 15s
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 15s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Tests / test (pull_request) Failing after 18m33s
Tests / e2e (pull_request) Successful in 1m17s
Part of #891
2026-04-17 05:13:03 +00:00
b5ba272efe feat: profile session isolation
Closes #891

Tags sessions with originating profile and provides filtered
access so profiles cannot see each other's data.
2026-04-17 05:13:01 +00:00
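Tag-and-filter isolation can be sketched in a few lines (the `profile_id` field name is an assumption):

```python
def tag_session(session, profile_id):
    """Stamp a session dict with its originating profile."""
    session["profile_id"] = profile_id
    return session

def sessions_for(profile_id, sessions):
    """Filtered access: a profile only ever sees its own sessions."""
    return [s for s in sessions if s.get("profile_id") == profile_id]
```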
2e0dfe27df feat: time-aware model routing for cron jobs #889
Some checks failed
Contributor Attribution Check / check-attribution (pull_request) Failing after 15s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 14s
Tests / test (pull_request) Failing after 18m19s
Tests / e2e (pull_request) Successful in 1m17s
2026-04-17 05:10:34 +00:00
d4cdfdc604 test: Add context budget tracker tests (#838)
Some checks failed
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 19s
Contributor Attribution Check / check-attribution (pull_request) Failing after 16s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Tests / test (pull_request) Failing after 18m30s
Tests / e2e (pull_request) Successful in 1m16s
2026-04-17 05:06:54 +00:00
e3436e36c3 feat: Add context budget tracker for overflow prevention (#838) 2026-04-17 05:06:08 +00:00
34e7de6a4c feat: 988 Lifeline tests (#673)
Some checks failed
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 18s
Contributor Attribution Check / check-attribution (pull_request) Failing after 17s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Tests / test (pull_request) Failing after 18m18s
Tests / e2e (pull_request) Successful in 1m13s
2026-04-17 05:04:50 +00:00
dbabe0e6ae feat: 988 Suicide & Crisis Lifeline integration (#673)
agent/crisis_resources.py provides all 988 Lifeline contact
methods: phone (988), text (HOME to 988), chat, Spanish line.
Also Crisis Text Line (741741) and 911.

Closes #673
2026-04-17 05:04:48 +00:00
517e2c571e fix(#892): Gateway config validation and fallback fixes
Some checks failed
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 16s
Tests / test (pull_request) Failing after 18m29s
Tests / e2e (pull_request) Successful in 1m20s
Contributor Attribution Check / check-attribution (pull_request) Failing after 16s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Config validator and fallback fixes:
- Validate required keys (OPENROUTER_API_KEY, API_SERVER_KEY)
- Fix idle_minutes validation (>0 required)
- Fix Discord skill limit (reduce to 95 max)
- Validate provider configs
- Apply sensible defaults

Resolves #892
2026-04-17 05:04:11 +00:00
0b019327a3 docs: cron audit documentation
Some checks failed
Tests / e2e (pull_request) Successful in 49s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 15s
Contributor Attribution Check / check-attribution (pull_request) Failing after 14s
Tests / test (pull_request) Failing after 18m24s
Part of #890
2026-04-17 05:00:09 +00:00
6b0fca6944 feat: cron job audit and cleanup script
Closes #890

Finds dead cron jobs (zero completions, stale) and provides
--disable and --delete actions to clean them up.
2026-04-17 05:00:06 +00:00
05f8c2d188 Merge PR #899
Merged PR #899: feat: Allegro worker deliverables
2026-04-17 01:52:11 +00:00
ff2ce95ade feat(research): Allegro worker deliverables — fleet research reports + skill manager test
Some checks failed
Tests / e2e (pull_request) Successful in 1m39s
Tests / test (pull_request) Failing after 1h7m45s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Contributor Attribution Check / check-attribution (pull_request) Successful in 24s
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 28s
Research reports:
- Vector DB research
- Workflow orchestration research
- Fleet knowledge graph SOTA research
- LLM inference optimization
- Local model crisis quality
- Memory systems SOTA
- Multi-agent coordination
- R5 vs E2E gap analysis
- Text-to-music-video

Test:
- test_skill_manager_error_context.py

[Allegro] Forge workers — 2026-04-16
2026-04-16 15:04:28 +00:00
Hermes Merge Bot
aedebfdf58 Merge PR #848 2026-04-16 02:12:13 -04:00
Hermes Merge Bot
adf49b1809 Merge PR #849 2026-04-16 02:11:21 -04:00
Hermes Merge Bot
52ea3a8935 Merge PR #850 2026-04-16 02:09:00 -04:00
Hermes Merge Bot
43246d6cb4 Merge PR #852 2026-04-16 02:08:06 -04:00
Hermes Merge Bot
20c5e237a7 Merge PR #861 2026-04-16 02:06:36 -04:00
Hermes Merge Bot
a0f4d10a7f Merge PR #855 2026-04-16 02:06:17 -04:00
Hermes Merge Bot
bc5d1cf6ff Merge PR #863 2026-04-16 02:05:44 -04:00
Hermes Merge Bot
dff451081d Merge PR #856 2026-04-16 02:05:42 -04:00
Hermes Merge Bot
5509b157c5 Merge PR #864 2026-04-16 02:05:05 -04:00
Hermes Merge Bot
fcc322fb81 Merge PR #867 2026-04-16 02:03:23 -04:00
Hermes Merge Bot
9bba9ecc40 Merge PR #866 2026-04-16 02:02:43 -04:00
Hermes Merge Bot
05086e58ea Merge PR #871 2026-04-16 02:00:55 -04:00
Hermes Merge Bot
7af6889767 Merge PR #869 2026-04-16 02:00:49 -04:00
5022db9d7b Merge pull request 'feat: self-modifying agent that improves its own prompts (#813)' (#897) from fix/813 into main 2026-04-16 05:29:11 +00:00
0f61474b74 Merge pull request 'feat: MCP server — expose hermes tools to fleet peers (#803)' (#896) from fix/803 into main
Auto-merged PR #896: feat: MCP server — expose hermes tools to fleet peers (#803)
2026-04-16 05:24:27 +00:00
Alexander Whitestone
a528bd5b1b fix: use .get() for env_vars key in _show_tool_availability_warnings
Some checks failed
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 24s
Tests / test (pull_request) Failing after 1h2m1s
Tests / e2e (pull_request) Successful in 1m38s
Fixes KeyError: 'missing_vars' crash on CLI startup when toolsets are
unavailable. registry.py returns dicts with 'env_vars' key, but
_show_tool_availability_warnings() was accessing 'missing_vars' directly.

Now uses .get("env_vars") or .get("missing_vars") to handle both key
names, consistent with how doctor.py already handles this.

Fixes #834

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-16 01:23:48 -04:00
Alexander Whitestone
e63cdaf16f feat: self-modifying agent that improves its own prompts (#813)
Some checks failed
Docker Build and Publish / build-and-push (pull_request) Has been cancelled
Contributor Attribution Check / check-attribution (pull_request) Has been cancelled
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Has been cancelled
Tests / test (pull_request) Has been cancelled
Tests / e2e (pull_request) Has been cancelled
Resolves #813. Agent analyzes session transcripts for failure
patterns and generates prompt patches to prevent future failures.

agent/self_modify.py (PromptLearner class):
- analyze_session(): detects 5 failure types from transcripts:
  retry_loop, timeout, hallucination, context_loss, tool_failure
- generate_patches(): converts patterns to prompt patches with
  confidence scoring (frequency-based)
- apply_patches(): appends learned rules to system prompt with
  backup and rollback support
- learn_from_session(): full cycle analyze → patch → apply

Failures → patterns → patches → improved prompts → fewer failures.

Safety: patches only ADD rules (append-only), never remove.
Rollback: restores from timestamped backup.
2026-04-16 01:23:48 -04:00
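The analyze → patch half of the cycle can be sketched with pattern counting and frequency-gated rules; the failure signatures and rule text below are illustrative, not the PromptLearner's actual tables:

```python
import re
from collections import Counter

FAILURE_SIGNATURES = {
    "retry_loop": re.compile(r"retrying|attempt \d+ of", re.I),
    "timeout": re.compile(r"timed? ?out", re.I),
    "tool_failure": re.compile(r"tool .* failed", re.I),
}

def analyze_session(transcript_lines):
    """Count failure-pattern hits across a session transcript."""
    counts = Counter()
    for line in transcript_lines:
        for name, pat in FAILURE_SIGNATURES.items():
            if pat.search(line):
                counts[name] += 1
    return counts

def generate_patches(counts, min_hits=2):
    """Frequency-based confidence: only patterns seen at least `min_hits`
    times become append-only prompt rules."""
    rules = {
        "retry_loop": "If the same action fails twice, change approach instead of retrying.",
        "timeout": "Prefer smaller, bounded operations to avoid timeouts.",
        "tool_failure": "Verify tool arguments against the schema before calling.",
    }
    return [rules[k] for k, n in counts.items() if n >= min_hits and k in rules]
```

Append-only application plus a timestamped backup is what keeps the loop safe: a bad patch can only add a rule, and rollback restores the prior prompt.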
Alexander Whitestone
2b7b12baf9 feat: MCP server — expose hermes tools to fleet peers (#803)
Some checks failed
Contributor Attribution Check / check-attribution (pull_request) Successful in 44s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Tests / test (pull_request) Has been cancelled
Tests / e2e (pull_request) Has been cancelled
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 19m48s
Resolves #803. Standalone MCP server that exposes safe hermes
tools to other fleet agents.

scripts/mcp_server.py:
- Exposes: terminal, file_read, file_search, web_search, session_search
- Blocks: approval, delegate, memory, config, cron, send_message
- Terminal uses approval.py dangerous command detection
- Auth via Bearer token (MCP_AUTH_KEY)
- HTTP endpoints: GET /mcp/tools, POST /mcp/tools/call, GET /health

Usage:
  python scripts/mcp_server.py --port 8081 --auth-key SECRET
  curl http://localhost:8081/mcp/tools
  curl -X POST http://localhost:8081/mcp/tools/call -d '{"name":"file_read","arguments":{"path":"README.md"}}'
2026-04-16 01:10:00 -04:00
Alexander Whitestone
6b40c5db7a fix: use env_vars key in _show_tool_availability_warnings to prevent KeyError
registry.py:check_tool_availability() returns unavailable dicts with key
"env_vars", but _show_tool_availability_warnings() in cli.py was accessing
u["missing_vars"] causing a KeyError crashing CLI startup whenever any
toolset was disabled.

Fix matches how doctor.py already handles the same data.

Fixes #834
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-16 00:42:03 -04:00
5a24894f78 fix: update hermes_cli/web_server.py for agent card discovery
Some checks failed
Contributor Attribution Check / check-attribution (pull_request) Successful in 43s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Nix / nix (ubuntu-latest) (pull_request) Failing after 5s
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 38s
Tests / test (pull_request) Failing after 10m58s
Tests / e2e (pull_request) Successful in 1m32s
Nix / nix (macos-latest) (pull_request) Has been cancelled
2026-04-16 03:45:04 +00:00
a474eb8459 fix: add agent/agent_card.py for agent card discovery 2026-04-16 03:45:01 +00:00
Alexander Whitestone
3238cf4eb1 feat: Tool investigation report + Mem0 local provider (#842)
Some checks failed
Contributor Attribution Check / check-attribution (pull_request) Successful in 38s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 32s
Tests / test (pull_request) Failing after 43m54s
Tests / e2e (pull_request) Successful in 2m5s
## Investigation Report
- docs/tool-investigation-2026-04-15.md: Full report analyzing 414 tools
  from awesome-ai-tools. Top 5 recommendations with integration paths.
- docs/plans/awesome-ai-tools-integration.md: Implementation tracking plan.

## Mem0 Local Provider (P1)
- plugins/memory/mem0_local/: New ChromaDB-backed memory provider.
  No API key required - fully sovereign. Compatible tool schemas with
  cloud Mem0 (mem0_profile, mem0_search, mem0_conclude).
- Pattern-based fact extraction from conversations.
- Deterministic dedup via content hashing.
- Circuit breaker for resilience.
- tests/plugins/memory/test_mem0_local.py: Full test coverage.

## Issues Filed
- #857: LightRAG integration (P2)
- #858: n8n workflow orchestration (P3)
- #859: RAGFlow document understanding (P4)
- #860: tensorzero LLMOps evaluation (P3)

Closes #842
2026-04-15 23:04:41 -04:00
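The deterministic dedup-via-content-hashing point can be sketched as follows (the whitespace/case normalization is an assumption; the provider's actual keying may differ):

```python
import hashlib

def content_key(fact):
    """Deterministic key: normalize whitespace and case, then hash."""
    normalized = " ".join(fact.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

class FactStore:
    def __init__(self):
        self._facts = {}

    def add(self, fact):
        """Return True when the fact is new; near-identical restatements
        hash to the same key and are dropped."""
        key = content_key(fact)
        if key in self._facts:
            return False
        self._facts[key] = fact
        return True
```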
eed87e454e test: Benchmark Gemma 4 vision accuracy vs current approach (#817)
Some checks failed
Contributor Attribution Check / check-attribution (pull_request) Successful in 26s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 26s
Tests / e2e (pull_request) Successful in 2m38s
Tests / test (pull_request) Failing after 47m49s
Vision benchmark suite comparing Gemma 4 (google/gemma-4-27b-it) vs
current Gemini 3 Flash Preview (google/gemini-3-flash-preview).

Metrics:
- OCR accuracy (character + word overlap)
- Description completeness (keyword coverage)
- Structural quality (length, sentences, numbers)
- Latency (ms per image)
- Token usage
- Consistency across runs

Features:
- 24 diverse test images (screenshots, diagrams, photos, charts)
- Category-specific evaluation prompts
- Automated verdict with composite scoring
- JSON + markdown report output
- 28 unit tests passing

Usage:
  python benchmarks/vision_benchmark.py --images benchmarks/test_images.json
  python benchmarks/vision_benchmark.py --url https://example.com/img.png
  python benchmarks/vision_benchmark.py --generate-dataset

Closes #817.
2026-04-15 23:02:02 -04:00
Alexander Whitestone
f03709aa29 test: crisis hook integration tests with agent loop (#707)
Some checks failed
Contributor Attribution Check / check-attribution (pull_request) Successful in 16s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 15s
Tests / e2e (pull_request) Failing after 12m38s
Tests / test (pull_request) Failing after 25m58s
10 integration tests verifying crisis detection works correctly
when called from the agent conversation flow:

- scan_user_message detects CRITICAL/HIGH/MEDIUM/LOW levels
- Safe messages pass through without triggering
- Tool handler returns valid JSON
- Compassion injection includes 988 lifeline for CRITICAL/HIGH
- Case-insensitive detection
- Empty/None text handled gracefully
- False positive resistance on common non-crisis phrases
- Config check returns bool
- Callable from agent context (not just isolation tests)
2026-04-15 23:00:12 -04:00
Alexander Whitestone
4d8e004b5f fix: extend JSON repair to remaining json.loads sites in run_agent.py
Some checks failed
Contributor Attribution Check / check-attribution (pull_request) Successful in 42s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Nix / nix (ubuntu-latest) (pull_request) Failing after 4s
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 36s
Tests / test (pull_request) Failing after 1h13m6s
Tests / e2e (pull_request) Successful in 1m32s
Nix / nix (macos-latest) (pull_request) Has been cancelled
Adds `repair_and_load_json()` to utils.py using the `json_repair` library
as a fallback when `json.loads()` fails. Replaces 8 non-hot-path json.loads
sites identified in issue #809:

- L2250: trajectory/sanitization message content parsing
- L2500: tool_call dict reconstruction in trajectory conversion
- L2535: tool_content parsing (JSON-like strings in tool responses)
- L2888: session log file loading (with warning on unrecoverable parse)
- L3119: todo content parsing in message processing
- L5963: vision result_json parsing
- L6761: memory flush tool call argument parsing
- L8300: cache serialization tool call args normalization

Each site uses an appropriate default ({} for tool args, None/continue for
content parsing) and a context label for debug tracing.

Fixes #809
2026-04-15 22:56:39 -04:00
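The fallback pattern described above can be sketched as below. This is a minimal stand-in: the real helper delegates the repair step to the json_repair library, which this sketch replaces with a toy trailing-comma strip; the default and context-label behavior follows the commit description.

```python
import json
import re

def repair_and_load_json(text, default=None, context=""):
    """Parse JSON, falling back to a light repair pass on failure.

    The real implementation uses the json_repair library as the
    fallback; a trailing-comma strip stands in for it here.
    """
    try:
        return json.loads(text)
    except (json.JSONDecodeError, TypeError):
        pass
    try:
        # Toy repair: remove trailing commas before } or ]
        repaired = re.sub(r",\s*([}\]])", r"\1", text or "")
        return json.loads(repaired)
    except (json.JSONDecodeError, TypeError):
        # Context label for debug tracing, per the commit description
        print(f"unrecoverable JSON at {context!r}")
        return default
```

Each call site then picks the default appropriate to its shape, e.g. `repair_and_load_json(args, default={}, context="tool_args")`.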
85a654348a feat: poka-yoke — prevent hardcoded ~/.hermes paths (closes #835)
Some checks failed
Contributor Attribution Check / check-attribution (pull_request) Successful in 27s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 19s
Tests / e2e (pull_request) Successful in 1m55s
Tests / test (pull_request) Failing after 56m41s
scripts/lint_hardcoded_paths.py (new):
- Scans Python files for hardcoded home-directory paths
- Detects: Path.home()/.hermes without env fallback, /Users/<name>/, /home/<name>/
- Excludes: comments, docstrings, test files, skills, plugins, docs
- Excludes correct patterns: profiles_parent, current_default, native_home
- Supports --staged (git pre-commit), --fix (suggestions), --json output

scripts/pre-commit-hardcoded-paths.sh (new):
- Pre-commit hook that runs lint_hardcoded_paths.py --staged
- Blocks commits containing hardcoded path violations

tools/confirmation_daemon.py (fixed):
- Replaced Path.home() / '.hermes' / 'approval_whitelist.json'
  with get_hermes_home() / 'approval_whitelist.json'
- Added import of get_hermes_home from hermes_constants

tests/test_hardcoded_paths.py (new):
- 11 tests: detection, exclusion, fallback patterns, clean files
2026-04-15 22:56:32 -04:00
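The scanner described above might look roughly like this. The patterns are hypothetical reconstructions from the commit's bullet list; the real linter additionally skips comments, docstrings, and the allowed fallback patterns (profiles_parent, current_default, native_home), which this sketch omits.

```python
import re

# Hypothetical patterns modeled on the commit description
HARDCODED = [
    re.compile(r"Path\.home\(\)\s*/\s*['\"]\.hermes['\"]"),
    re.compile(r"/Users/[A-Za-z0-9_.-]+/"),
    re.compile(r"/home/[A-Za-z0-9_.-]+/"),
]

def find_hardcoded_paths(source: str):
    """Return (line_number, line) pairs containing hardcoded home paths.
    Raw matching only; the real linter also filters comments,
    docstrings, and env-fallback patterns."""
    hits = []
    for n, line in enumerate(source.splitlines(), 1):
        if any(p.search(line) for p in HARDCODED):
            hits.append((n, line.strip()))
    return hits
```

A pre-commit hook can then run this over `git diff --cached --name-only` output and block the commit on any hit.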
fc0d8fe5e9 fix: extend JSON repair to ALL remaining json.loads sites (#809)
Some checks failed
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Nix / nix (ubuntu-latest) (pull_request) Failing after 2s
Contributor Attribution Check / check-attribution (pull_request) Successful in 26s
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 26s
Tests / e2e (pull_request) Successful in 2m50s
Tests / test (pull_request) Failing after 1h17m49s
Nix / nix (macos-latest) (pull_request) Has been cancelled
2026-04-16 02:53:41 +00:00
Alexander Whitestone
13ef670c05 feat: session compaction with fact extraction (#748)
Some checks failed
Contributor Attribution Check / check-attribution (pull_request) Successful in 29s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 33s
Tests / e2e (pull_request) Successful in 3m26s
Tests / test (pull_request) Failing after 1h28m50s
Before compressing conversation context, extract durable facts
(user preferences, corrections, project details) and save to
fact store so they survive compression.

New agent/session_compactor.py:
- extract_facts_from_messages(): scans user messages for
  preferences, corrections, project/infra facts using regex
- 3 pattern categories: user_pref (5 patterns), correction
  (3 patterns), project (4 patterns)
- ExtractedFact: category, entity, content, confidence, source_turn
- save_facts_to_store(): saves to fact store (callback or auto-detect)
- extract_and_save_facts(): one-call extraction + persistence
- Deduplication by category+content
- Skips tool results, short messages, system messages
- format_facts_summary(): human-readable summary

Tests: tests/test_session_compactor.py (9 tests)

Closes #748
2026-04-15 22:41:54 -04:00
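The extraction-plus-dedup flow above can be sketched as follows. The regexes are illustrative placeholders (the real module has 5 user_pref, 3 correction, and 4 project patterns), and the 10-character short-message cutoff is an assumption.

```python
import re

# Illustrative patterns; the real module has 5/3/4 per category
PATTERNS = {
    "user_pref": [re.compile(r"\bI (?:prefer|like|always use) (.+)", re.I)],
    "correction": [re.compile(r"\b(?:no,|actually,) (.+)", re.I)],
    "project": [re.compile(r"\bthe project uses (.+)", re.I)],
}

def extract_facts_from_messages(messages):
    """Scan user messages for durable facts; dedup by (category, content).
    Skips non-user messages and very short ones, per the commit."""
    seen, facts = set(), []
    for turn, msg in enumerate(messages):
        if msg.get("role") != "user" or len(msg.get("content", "")) < 10:
            continue
        for category, pats in PATTERNS.items():
            for pat in pats:
                for m in pat.finditer(msg["content"]):
                    key = (category, m.group(1).strip())
                    if key not in seen:
                        seen.add(key)
                        facts.append({"category": category,
                                      "content": key[1],
                                      "source_turn": turn})
    return facts
```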
4752a0085e fix: extend JSON repair to remaining json.loads sites in run_agent.py (#809) 2026-04-16 02:40:51 +00:00
b26a6ec23b feat: add repair_and_load_json() to utils.py (#809) 2026-04-16 02:38:01 +00:00
Alexander Whitestone
9f0c410481 feat: batch tool execution with parallel safety checks (#749)
Some checks failed
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Contributor Attribution Check / check-attribution (pull_request) Successful in 35s
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 37s
Tests / e2e (pull_request) Successful in 1m48s
Tests / test (pull_request) Failing after 36m13s
Centralized safety classification for tool call batches:

tools/batch_executor.py (new):
- classify_tool_calls() — classifies batch into parallel_safe,
  path_scoped, sequential, never_parallel tiers
- BatchExecutionPlan — structured plan with parallel and sequential batches
- Path conflict detection — write_file + patch on same file go sequential
- Destructive command detection — rm, mv, sed -i, redirects
- execute_parallel_batch() — ThreadPoolExecutor for concurrent execution

tools/registry.py (enhanced):
- ToolEntry.parallel_safe field — tools can declare parallel safety
- registry.register() accepts parallel_safe=True parameter
- registry.get_parallel_safe_tools() — query registry-declared safe tools

Safety tiers:
- parallel_safe: read_file, web_search, search_files, etc.
- path_scoped: write_file, patch (concurrent when paths don't overlap)
- sequential: terminal, delegate_task, unknown tools
- never_parallel: clarify (requires user interaction)

19 tests passing.
2026-04-15 22:17:16 -04:00
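The tier classification and the path-conflict rule above can be sketched as follows. The tier membership tables are abbreviated examples, and the real module also consults registry-declared parallel_safe flags, which this sketch skips.

```python
# Abbreviated tier tables modeled on the commit description
PARALLEL_SAFE = {"read_file", "web_search", "search_files"}
PATH_SCOPED = {"write_file", "patch"}
NEVER_PARALLEL = {"clarify"}

def classify_tool_calls(calls):
    """Split (tool_name, path) calls into safety tiers.

    Two path-scoped calls touching the same file are demoted to
    sequential, mirroring the path-conflict rule above.
    """
    tiers = {"parallel_safe": [], "path_scoped": [],
             "sequential": [], "never_parallel": []}
    seen_paths = set()
    for name, path in calls:
        if name in NEVER_PARALLEL:
            tiers["never_parallel"].append(name)
        elif name in PARALLEL_SAFE:
            tiers["parallel_safe"].append(name)
        elif name in PATH_SCOPED:
            if path in seen_paths:
                tiers["sequential"].append(name)  # same-file conflict
            else:
                tiers["path_scoped"].append(name)
                seen_paths.add(path)
        else:
            tiers["sequential"].append(name)  # unknown tools run sequentially
    return tiers
```

The parallel_safe and non-conflicting path_scoped tiers can then be fanned out to a ThreadPoolExecutor while the sequential tier runs in order.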
b34b5b293d test: add tests for tool hallucination prevention (#836)
Some checks failed
Contributor Attribution Check / check-attribution (pull_request) Successful in 24s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 22s
Tests / e2e (pull_request) Successful in 3m6s
Tests / test (pull_request) Failing after 41m24s
2026-04-16 02:15:59 +00:00
05f9d2b009 feat: integrate poka-yoke validation into tool dispatch (#836)
- Added import for tool_pokayoke module
- Added validation before orchestrator.dispatch calls
- Auto-corrects tool names and parameters
- Returns structured errors with suggestions
- Circuit breaker for consecutive failures

Closes #836
2026-04-16 02:15:17 +00:00
Timmy Time
fb7464995c fix: Ultraplan Mode for daily autonomous planning (closes #840)
Some checks failed
Contributor Attribution Check / check-attribution (pull_request) Successful in 37s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 39s
Tests / test (pull_request) Failing after 1h15m33s
Tests / e2e (pull_request) Successful in 2m20s
2026-04-15 22:14:16 -04:00
7c71b7e73a test: parallel tool calling — 2+ tools per response (#798)
Some checks failed
Contributor Attribution Check / check-attribution (pull_request) Successful in 45s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 1m16s
Tests / e2e (pull_request) Successful in 3m17s
Tests / test (pull_request) Failing after 1h30m54s
2026-04-16 02:13:00 +00:00
4a3068b3b5 test: add regression tests for issue #834 KeyError fix
Some checks failed
Contributor Attribution Check / check-attribution (pull_request) Successful in 39s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 44s
Tests / e2e (pull_request) Successful in 2m53s
Tests / test (pull_request) Failing after 1h28m32s
2026-04-16 02:12:36 +00:00
a8300ceb43 fix: KeyError 'missing_vars' in _show_tool_availability_warnings (#834) 2026-04-16 02:11:08 +00:00
8ef766beac feat: add tool hallucination prevention module (#836)
- Validates tool names against registered tools
- Auto-corrects parameter names within Levenshtein distance 1
- Circuit breaker for consecutive failures (threshold: 3)
- Structured error messages with suggestions

Closes #836
2026-04-16 02:10:39 +00:00
db72e908f7 Merge pull request 'feat(security): implement Vitalik's secure LLM patterns — privacy filter + confirmation daemon [resolves merge conflict]' (#830) from feat/vitalik-secure-llm-1776303263 into main
Vitalik's secure LLM patterns — privacy filter + confirmation daemon

Clean rebase of #397 onto current main. Resolves merge conflicts in tools/approval.py.
2026-04-16 01:36:58 +00:00
b82b760d5d feat: add Vitalik's threat model patterns to DANGEROUS_PATTERNS
Some checks failed
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 41s
Contributor Attribution Check / check-attribution (pull_request) Successful in 51s
Tests / e2e (pull_request) Successful in 5m21s
Tests / test (pull_request) Failing after 45m7s
2026-04-16 01:35:49 +00:00
d8d7846897 feat: add tests/tools/test_confirmation_daemon.py from PR #397 2026-04-16 01:35:24 +00:00
6840d05554 feat: add tests/agent/test_privacy_filter.py from PR #397 2026-04-16 01:35:21 +00:00
8abe59ed95 feat: add tools/confirmation_daemon.py from PR #397 2026-04-16 01:35:18 +00:00
435d790201 feat: add agent/privacy_filter.py from PR #397 2026-04-16 01:35:14 +00:00
Alexander Whitestone
30afd529ac feat: add crisis detection tool — the-door integration (#141)
Some checks failed
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Contributor Attribution Check / check-attribution (pull_request) Successful in 44s
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 59s
Tests / e2e (pull_request) Successful in 3m49s
Tests / test (pull_request) Failing after 44m1s
New tool: tools/crisis_tool.py
- Wraps the-door's canonical crisis detection (detect.py)
- Scans user messages for despair/suicidal ideation
- Classifies into NONE/LOW/MEDIUM/HIGH/CRITICAL tiers
- Provides recommended actions per tier
- Gateway hook: scan_user_message() for pre-API-call detection
- System prompt injection: compassion_injection based on crisis level
- Optional escalation logging to crisis_escalations.jsonl
- Optional bridge API POST for HIGH+ (configurable via CRISIS_BRIDGE_URL)
- Configurable via crisis_detection: true/false in config.yaml
- Follows the-door design principles: never computes life value,
  never suggests death, errs on side of higher risk

Also: tests/test_crisis_tool.py (9 tests, all passing)
2026-04-15 21:00:06 -04:00
Alexander Whitestone
a244b157be bench: add Gemma 4 vs mimo-v2-pro tool calling benchmark (#796)
Some checks failed
Contributor Attribution Check / check-attribution (pull_request) Successful in 42s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 32s
Tests / e2e (pull_request) Successful in 2m26s
Tests / test (pull_request) Failing after 44m7s
100-call regression test across 7 tool categories:
- File operations (20): read_file, write_file, search_files
- Terminal commands (20): shell execution
- Web search (15): web_search
- Code execution (15): execute_code
- Browser automation (10): browser_navigate
- Delegation (10): delegate_task
- MCP tools (10): mcp_list/read/call

Metrics tracked:
- Schema parse success (valid JSON tool calls)
- Tool name accuracy (correct tool selected)
- Arguments accuracy (required args present)
- Average latency per call

Usage:
  python3 benchmarks/tool_call_benchmark.py --model nous:xiaomi/mimo-v2-pro
  python3 benchmarks/tool_call_benchmark.py --model ollama/gemma4:latest
  python3 benchmarks/tool_call_benchmark.py --compare
2026-04-15 18:56:35 -04:00
d86359cbb2 Merge pull request 'feat: robust tool orchestration and circuit breaking' (#811) from feat/robust-tool-orchestration-1776268138150 into main 2026-04-15 16:03:07 +00:00
f264b55b29 refactor: use ToolOrchestrator for robust tool execution
Some checks failed
Contributor Attribution Check / check-attribution (pull_request) Successful in 36s
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 38s
Tests / e2e (pull_request) Successful in 2m37s
Tests / test (pull_request) Failing after 40m19s
2026-04-15 15:49:02 +00:00
dfe23f66b1 feat: add ToolOrchestrator with circuit breaker 2026-04-15 15:49:00 +00:00
Teknium
722331a57d fix: replace hardcoded ~/.hermes with display_hermes_home() in agent-facing text (#10285)
Tool schema descriptions and tool return values contained hardcoded
~/.hermes paths that the model sees and uses. When HERMES_HOME is set
to a custom path (Docker containers, profiles), the agent would still
reference ~/.hermes — looking at the wrong directory.

Fixes 6 locations across 5 files:
- tools/tts_tool.py: output_path schema description
- tools/cronjob_tools.py: script path schema description
- tools/skill_manager_tool.py: skill_manage schema description
- tools/skills_tool.py: two tool return messages
- agent/skill_commands.py: skill config injection text

All now use display_hermes_home() which resolves to the actual
HERMES_HOME path (e.g. /opt/data for Docker, ~/.hermes/profiles/X
for profiles, ~/.hermes for default).

Reported by: Sandeep Narahari (PrithviDevs)
2026-04-15 04:57:55 -07:00
sprmn24
41e2d61b3f feat(discord): add native send_animation for inline GIF playback 2026-04-15 04:51:27 -07:00
Teknium
4da598b48a docs: clarify hermes model vs /model — two commands, two purposes (#10276)
Users are confused about the difference between `hermes model` (terminal
command for full provider setup) and `/model` (session command for switching
between already-configured providers). This distinction was not documented
anywhere.

Changes across 4 doc pages:
- cli-commands.md: Added warning callout explaining the difference, added
  --global flag docs, added 'only see OpenRouter models?' info box
- slash-commands.md: Added notes on both TUI and messaging /model entries
  that /model only switches between configured providers
- providers.md: Added 'Two Commands for Model Management' comparison table
  near top of page, added warning callout in switching section
- faq.md: Added new FAQ entry '/model only shows one provider' with quick
  reference table

Prompted by user feedback in Discord — new users consistently hit this
confusion when trying to add providers from inside a session.
2026-04-15 04:39:34 -07:00
asheriif
33ae403890 fix(gateway): fix matrix lingering typing indicator 2026-04-15 04:16:16 -07:00
Teknium
47e6ea84bb fix: file handle bug, warning text, and tests for Discord media send
- Fix file handle closed before POST: nest session.post() inside
  the 'with open()' block so aiohttp can read the file during upload
- Update warning text to include weixin (also supports media delivery)
- Add 8 unit tests covering: text+media, media-only, missing files,
  upload failures, multiple files, and _send_to_platform routing
2026-04-15 04:16:06 -07:00
sprmn24
4bcb2f2d26 feat(send_message): add native media attachment support for Discord
Previously send_message only supported media delivery for Telegram.
Discord users received a warning that media was omitted.

- Add media_files parameter to _send_discord()
- Upload media via Discord multipart/form-data API (files[0] field)
- Handle Discord in _send_to_platform() same way as Telegram block
- Remove Discord from generic chunk loop (now handled above)
- Update error/warning strings to mention telegram and discord
2026-04-15 04:16:06 -07:00
Teknium
1c4d3216d3 fix(cron): include job_id in delivery and guide models on removal workflow (#10242)
* fix(gateway): suppress duplicate replies on interrupt and streaming flood control

Three fixes for the duplicate reply bug affecting all gateway platforms:

1. base.py: Suppress stale response when the session was interrupted by a
   new message that hasn't been consumed yet. Checks both interrupt_event
   and _pending_messages to avoid false positives. (#8221, #2483)

2. run.py (return path): Remove response_previewed guard from already_sent
   check. Stream consumer's already_sent alone is authoritative — if
   content was delivered via streaming, the duplicate send must be
   suppressed regardless of the agent's response_previewed flag. (#8375)

3. run.py (queued-message path): Same fix — already_sent without
   response_previewed now correctly marks the first response as already
   streamed, preventing re-send before processing the queued message.

The response_previewed field is still produced by the agent (run_agent.py)
but is no longer required as a gate for duplicate suppression. The stream
consumer's already_sent flag is the delivery-level truth about what the
user actually saw.

Concepts from PR #8380 (konsisumer). Closes #8375, #8221, #2483.

* fix(cron): include job_id in delivery and guide models on removal workflow

Users reported cron reminders keep firing after asking the agent to stop.
Root cause: the conversational agent didn't know the job_id (not in delivery)
and models don't reliably do the list→remove two-step without guidance.

1. Include job_id in the cron delivery wrapper so users and agents can
   reference it when requesting removal.

2. Replace confusing footer ('The agent cannot see this message') with
   actionable guidance ('To stop or manage this job, send me a new
   message').

3. Add explicit list→remove guidance in the cronjob tool schema so models
   know to list first and never guess job IDs.
2026-04-15 03:46:58 -07:00
Misturi
dedc4600dd fix(skills): handle missing fields in Google Workspace token file gracefully instead of crashing with KeyError 2026-04-15 03:45:09 -07:00
Misturi
8bc9b5a0b4 fix(skills): use is None check for coordinates in find-nearby to avoid dropping valid 0.0 values 2026-04-15 03:45:09 -07:00
Teknium
2546b7acea fix(gateway): suppress duplicate replies on interrupt and streaming flood control
Three fixes for the duplicate reply bug affecting all gateway platforms:

1. base.py: Suppress stale response when the session was interrupted by a
   new message that hasn't been consumed yet. Checks both interrupt_event
   and _pending_messages to avoid false positives. (#8221, #2483)

2. run.py (return path): Remove response_previewed guard from already_sent
   check. Stream consumer's already_sent alone is authoritative — if
   content was delivered via streaming, the duplicate send must be
   suppressed regardless of the agent's response_previewed flag. (#8375)

3. run.py (queued-message path): Same fix — already_sent without
   response_previewed now correctly marks the first response as already
   streamed, preventing re-send before processing the queued message.

The response_previewed field is still produced by the agent (run_agent.py)
but is no longer required as a gate for duplicate suppression. The stream
consumer's already_sent flag is the delivery-level truth about what the
user actually saw.

Concepts from PR #8380 (konsisumer). Closes #8375, #8221, #2483.
2026-04-15 03:42:24 -07:00
Teknium
7b2700c9af fix(browser): use 127.0.0.1 instead of localhost for CDP default (#10231)
/browser connect set BROWSER_CDP_URL to http://localhost:9222, but
Chrome's --remote-debugging-port only binds to 127.0.0.1 (IPv4).
On macOS, 'localhost' can resolve to ::1 (IPv6) first, causing both
_resolve_cdp_override's /json/version fetch and agent-browser's
--cdp connection to fail when Chrome isn't listening on IPv6.

The socket check in the connect handler already used 127.0.0.1
explicitly and succeeded, masking the mismatch.

Use 127.0.0.1 in the default CDP URL to match what Chrome actually
binds to.
2026-04-15 03:29:37 -07:00
Teknium
a4e1842f12 fix: strip reasoning item IDs from Responses API input when store=False (#10217)
With store=False (our default for the Responses API), the API does not
persist response items.  When reasoning items with 'id' fields were
replayed on subsequent turns, the API attempted a server-side lookup
for those IDs and returned 404:

  Item with id 'rs_...' not found. Items are not persisted when store
  is set to false.

The encrypted_content blob is self-contained for reasoning chain
continuity — the id field is unnecessary and triggers the failed lookup.

Fix: strip 'id' from reasoning items in both _chat_messages_to_responses_input
(message conversion) and _preflight_codex_input_items (normalization layer).
The id is still used for local deduplication but never sent to the API.

Reported by @zuogl448 on GPT-5.4.
2026-04-15 03:19:43 -07:00
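The stripping step described above can be sketched as follows. The function name is hypothetical; the item shapes mirror the commit's description of reasoning items carrying 'id' and self-contained 'encrypted_content'.

```python
def strip_reasoning_ids(items):
    """Drop 'id' from reasoning items before replay with store=False.

    encrypted_content carries the reasoning chain, so the id is only
    needed for local deduplication and must never reach the API
    (where it triggers a 404 on the server-side lookup).
    """
    out = []
    for item in items:
        if item.get("type") == "reasoning" and "id" in item:
            item = {k: v for k, v in item.items() if k != "id"}
        out.append(item)
    return out
```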
Teknium
e69526be79 fix(send_message): URL-encode Matrix room IDs and add Matrix to schema examples (#10151)
Matrix room IDs contain ! and : which must be percent-encoded in URI
path segments per the Matrix C-S spec. Without encoding, some
homeservers reject the PUT request.

Also adds 'matrix:!roomid:server.org' and 'matrix:@user:server.org'
to the tool schema examples so models know the correct target format.
2026-04-15 00:10:59 -07:00
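The encoding fix above amounts to quoting the room ID with no characters exempted, so that `!` and `:` become percent escapes in the URI path segment. The helper name is hypothetical:

```python
from urllib.parse import quote

def encode_room_path(room_id: str) -> str:
    """Percent-encode a Matrix room ID for use in a URI path segment.

    safe="" forces encoding of ! and : (quote's default leaves / alone,
    which would also be wrong inside a single path segment).
    """
    return quote(room_id, safe="")
```

For example, a PUT to `/_matrix/client/v3/rooms/{roomId}/...` would interpolate `encode_room_path(room_id)` rather than the raw ID.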
Teknium
180b14442f test: add _parse_target_ref Matrix coverage for salvaged PR #6144 2026-04-15 00:08:14 -07:00
bkadish
03446e06bb fix(send_message): accept Matrix room IDs and user MXIDs as explicit targets
`_parse_target_ref` has explicit-reference branches for Telegram, Feishu,
and numeric IDs, but none for Matrix. As a result, callers of
`send_message(target="matrix:!roomid:server")` or
`send_message(target="matrix:@user:server")` fall through to
`(None, None, False)` and the tool errors out with a resolution failure —
even though a raw Matrix room ID or MXID is the most unambiguous possible
target.

Three-line fix: recognize `!…` as a room ID and `@…` as a user MXID when
platform is `matrix`, and return them as explicit targets. Alias-based
targets (`#…`) continue to go through the normal resolve path.
2026-04-15 00:08:14 -07:00
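The three-line fix above can be sketched in isolation. The `(None, None, False)` fall-through tuple comes from the commit; the "room"/"user" kind labels are illustrative stand-ins for whatever the real `_parse_target_ref` returns in that slot.

```python
def parse_matrix_target(platform: str, target: str):
    """Hypothetical slice of _parse_target_ref: accept raw Matrix room
    IDs (!...) and user MXIDs (@...) as explicit targets.

    Aliases (#...) fall through to the normal resolve path, and
    non-matrix platforms are untouched.
    """
    if platform == "matrix":
        if target.startswith("!"):
            return (target, "room", True)
        if target.startswith("@"):
            return (target, "user", True)
    return (None, None, False)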
Teknium
df7be3d8ae fix(cli): /model picker shows curated models instead of full catalog (#10146)
The /model picker called provider_model_ids() which fetches the FULL
live API catalog (hundreds of models for Anthropic, Copilot, etc.) and
only fell back to the curated list when the live fetch failed.

This flips the priority: use the curated model list from
list_authenticated_providers() (same lists as `hermes model` and
gateway pickers), falling back to provider_model_ids() only when the
curated list is empty (e.g. user-defined endpoints).
2026-04-15 00:07:50 -07:00
Ubuntu
da8bab77fb fix(cli): restore messaging toolset for gateway platforms 2026-04-14 23:13:35 -07:00
Teknium
9932366f3c feat(doctor): add Command Installation check for hermes bin symlink
hermes doctor now checks whether the ~/.local/bin/hermes symlink exists
and points to the correct venv entry point. With --fix, it creates or
repairs the symlink automatically.

Covers:
- Missing symlink at ~/.local/bin/hermes (or $PREFIX/bin on Termux)
- Symlink pointing to wrong target
- Missing venv entry point (venv/bin/hermes or .venv/bin/hermes)
- PATH warning when ~/.local/bin is not on PATH
- Skipped on Windows (different mechanism)

Addresses user report: 'python -m hermes_cli.main doesn't have an option
to fix the local bin/install'

10 new tests covering all scenarios.
2026-04-14 23:13:11 -07:00
Teknium
029938fbed fix(cli): defensive subparser routing for argparse bpo-9338 (#10113)
On some Python versions, argparse fails to route subcommand tokens when
the parent parser has nargs='?' optional arguments (--continue).  The
symptom: 'hermes model' produces 'unrecognized arguments: model' even
though 'model' is a registered subcommand.

Fix: when argv contains a token matching a known subcommand, set
subparsers.required=True to force deterministic routing.  If that fails
(e.g. 'hermes -c model' where 'model' is consumed as the session name
for --continue), fall back to the default optional-subparsers behaviour.

Adds 13 tests covering all key argument combinations.

Reported via user screenshot showing the exact error on an installed
version with the model subcommand listed in usage but rejected at parse
time.
2026-04-14 23:13:02 -07:00
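The defensive routing above can be sketched as follows. The parser layout is a reduced hypothetical (one `nargs='?'` optional plus subcommands); the key move from the commit is flipping `required` on the subparsers action only when argv visibly contains a known subcommand.

```python
import argparse

def build_parser(argv, subcommands=("model", "doctor")):
    """Force deterministic subcommand routing when argv names one.

    On affected Python versions, a parent nargs='?' optional can make
    argparse misroute subcommand tokens; setting required=True on the
    subparsers action when a known subcommand is present avoids it.
    """
    parser = argparse.ArgumentParser(prog="hermes")
    parser.add_argument("--continue", dest="cont", nargs="?", const="last")
    sub = parser.add_subparsers(dest="command")
    for name in subcommands:
        sub.add_parser(name)
    if any(tok in subcommands for tok in argv):
        sub.required = True
    return parser
```

The real fix additionally falls back to optional-subparsers behaviour when the forced parse fails (e.g. `hermes -c model`, where `model` is the session name).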
Teknium
772cfb6c4e fix: stale agent timeout, uv venv detection, empty response after tools, compression model fallback (#9051, #8620, #9400) (#10093)
Four independent fixes:

1. Reset activity timestamp on cached agent reuse (#9051)
   When the gateway reuses a cached AIAgent for a new turn, the
   _last_activity_ts from the previous turn (possibly hours ago)
   carried over. The inactivity timeout handler immediately saw
   the agent as idle for hours and killed it.

   Fix: reset _last_activity_ts, _last_activity_desc, and
   _api_call_count when retrieving an agent from the cache.

2. Detect uv-managed virtual environments (#8620 sub-issue 1)
   The systemd unit generator fell back to sys.executable (uv's
   standalone Python) when running under 'uv run', because
   sys.prefix == sys.base_prefix. The generated ExecStart pointed
   to a Python binary without site-packages.

   Fix: check VIRTUAL_ENV env var before falling back to
   sys.executable. uv sets VIRTUAL_ENV even when sys.prefix
   doesn't reflect the venv.

3. Nudge model to continue after empty post-tool response (#9400)
   Weaker models sometimes return empty after tool calls. The agent
   silently abandoned the remaining work.

   Fix: append assistant('(empty)') + user nudge message and retry
   once. Resets after each successful tool round.

4. Compression model fallback on permanent errors (#8620 sub-issue 4)
   When the default summary model (gemini-3-flash) returns 503
   'model_not_found' on custom proxies, the compressor entered a
   600s cooldown, leaving context growing unbounded.

   Fix: detect permanent model-not-found errors (503, 404,
   'model_not_found', 'no available channel') and fall back to
   using the main model for compression instead of entering
   cooldown. One-time fallback with immediate retry.

Test plan: 40 compressor tests + 97 gateway/CLI tests + 9 venv tests pass
2026-04-14 22:38:17 -07:00
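Fix 2 above (uv venv detection) can be sketched like this. The function name is hypothetical; the check order (VIRTUAL_ENV first, then the traditional sys.prefix comparison) follows the commit.

```python
import os
import sys

def resolve_python_for_systemd() -> str:
    """Pick the interpreter a generated systemd unit should ExecStart.

    Under 'uv run', sys.prefix == sys.base_prefix even inside a venv,
    so check the VIRTUAL_ENV env var first (uv sets it even when
    sys.prefix doesn't reflect the venv).
    """
    venv = os.environ.get("VIRTUAL_ENV")
    if venv:
        candidate = os.path.join(venv, "bin", "python")
        if os.path.exists(candidate):
            return candidate
    if sys.prefix != sys.base_prefix:
        return sys.executable  # traditional activated venv
    return sys.executable      # last resort; may lack site-packages
```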
Teknium
5d5d21556e fix: sync client.api_key during UnicodeEncodeError ASCII recovery (#10090)
The existing recovery block sanitized self.api_key and
self._client_kwargs['api_key'] but did not update self.client.api_key.
The OpenAI SDK stores its own copy of api_key and reads it dynamically
via the auth_headers property on every request. Without this fix, the
retry after sanitization would still send the corrupted key in the
Authorization header, causing the same UnicodeEncodeError.

The bug manifests when an API key contains Unicode lookalike characters
(e.g. ʋ U+028B instead of v) from copy-pasting out of PDFs, rich-text
editors, or web pages with decorative fonts. httpx hard-encodes all
HTTP headers as ASCII, so the non-ASCII char in the Authorization
header triggers the error.

Adds TestApiKeyClientSync with two tests verifying:
- All three key locations are synced after sanitization
- Recovery handles client=None (pre-init) without crashing
2026-04-14 22:37:45 -07:00
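The three-location sync above can be sketched as follows. The agent/client shapes here are toy stand-ins (the real client is the OpenAI SDK object); what matters is that the sanitized key reaches all three copies, including the SDK's own, which it re-reads on every request.

```python
from types import SimpleNamespace

def sanitize_ascii(key: str) -> str:
    """Strip non-ASCII characters (e.g. Unicode lookalikes pasted from
    PDFs); HTTP headers are hard-encoded as ASCII by httpx."""
    return key.encode("ascii", errors="ignore").decode("ascii")

def recover_api_key(agent):
    """Sync the sanitized key into all three places the fix names:
    the agent attribute, the client kwargs, and the client's own copy."""
    clean = sanitize_ascii(agent.api_key)
    agent.api_key = clean
    agent._client_kwargs["api_key"] = clean
    if agent.client is not None:  # client may not be initialized yet
        agent.client.api_key = clean
    return clean

# Toy agent with a corrupted key containing ʋ (U+028B) instead of v
agent = SimpleNamespace(
    api_key="sk-ʋalue",
    _client_kwargs={"api_key": "sk-ʋalue"},
    client=SimpleNamespace(api_key="sk-ʋalue"),
)
```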
kshitijk4poor
9855190f23 feat(compressor): smart collapse, dedup, anti-thrashing, template upgrade, hardening
Combined salvage of PRs #9661, #9663, #9674, #9677, #9678 by kshitijk4poor.

- Smart tool output collapse: informative 1-line summaries replace generic placeholder
- Dedup identical tool results via MD5 hash, truncate large tool_call arguments
- Anti-thrashing: skip compression after 2 consecutive <10% savings passes
- Structured action-log summary template with numbered actions and Active State
- Hardening: max_tokens 1.3x cap, multimodal safety, note idempotency, adaptive cooldown

Follow-up fixes applied during salvage:
- web_extract: reads 'urls' (list) not 'url' (original PR bug)
- Multimodal list content guards in dedup and prune passes
- Kept 'Relevant Files' section in template (original PR removed it)

Skipped PRs #9665 (user msg preservation — duplication risk) and #9675 (dead code).
2026-04-14 22:21:25 -07:00
Teknium
50c35dcabe fix: stale agent timeout, uv venv detection, empty response after tools (#9051, #8620, #9400)
Three independent fixes:

1. Reset activity timestamp on cached agent reuse (#9051)
   When the gateway reuses a cached AIAgent for a new turn, the
   _last_activity_ts from the previous turn (possibly hours ago)
   carried over. The inactivity timeout handler immediately saw
   the agent as idle for hours and killed it.

   Fix: reset _last_activity_ts, _last_activity_desc, and
   _api_call_count when retrieving an agent from the cache.

2. Detect uv-managed virtual environments (#8620 sub-issue 1)
   The systemd unit generator fell back to sys.executable (uv's
   standalone Python) when running under 'uv run', because
   sys.prefix == sys.base_prefix (uv doesn't set up traditional
   venv activation). The generated ExecStart pointed to a Python
   binary without site-packages, crashing the service on startup.

   Fix: check VIRTUAL_ENV env var before falling back to
   sys.executable. uv sets VIRTUAL_ENV even when sys.prefix
   doesn't reflect the venv.

3. Nudge model to continue after empty post-tool response (#9400)
   Weaker models (GLM-5, mimo-v2-pro) sometimes return empty
   responses after tool calls instead of continuing to the next
   step. The agent silently abandoned the remaining work with
   '(empty)' or used prior-turn fallback text.

   Fix: when the model returns empty after tool calls AND there's
   no prior-turn content to fall back on, inject a one-time user
   nudge message telling the model to process the tool results and
   continue. The flag resets after each successful tool round so it
   can fire again on later rounds.

Test plan: 97 gateway + CLI tests pass, 9 venv detection tests pass
2026-04-14 22:16:02 -07:00
Teknium
93fe4ead83 fix: warn on invalid context_length format in config.yaml (#10067)
Previously, non-integer context_length values (e.g. '256K') in
config.yaml were silently ignored, causing the agent to fall back
to 128K auto-detection with no user feedback. This was confusing
for users with custom LiteLLM endpoints expecting larger context.

Now prints a clear stderr warning and logs at WARNING level when
model.context_length or custom_providers[].models.<model>.context_length
cannot be parsed as an integer, telling users to use plain integers
(e.g. 256000 instead of '256K').
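The parse-and-warn path can be sketched as follows (function name and
default are hypothetical, not the shipped code):

```python
import logging
import sys

logger = logging.getLogger("hermes.config")

def parse_context_length(value, default=128_000):
    """Accept ints or integer-like strings; anything else (e.g. '256K')
    falls back to the auto-detected default with a visible warning
    instead of being silently ignored."""
    try:
        return int(value)
    except (TypeError, ValueError):
        msg = (
            f"Invalid context_length {value!r}: use a plain integer "
            f"(e.g. 256000 instead of '256K'). Falling back to {default}."
        )
        print(f"Warning: {msg}", file=sys.stderr)  # user-visible
        logger.warning(msg)                        # and in the log
        return default
```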

Reported by community user ChFarhan via Discord.
2026-04-14 22:14:27 -07:00
Teknium
a8b7db35b2 fix: interrupt agent immediately when user messages during active run (#10068)
When a user sends a message while the agent is executing a task on the
gateway, the agent is now interrupted immediately — not silently queued.
Previously, messages were stored in _pending_messages with zero feedback
to the user, potentially leaving them waiting 1+ hours.

Root cause: Level 1 guard (base.py) intercepted all messages for active
sessions and returned with no response. Level 2 (gateway/run.py) which
calls agent.interrupt() was never reached.

Fix: Expand _handle_active_session_busy_message to handle the normal
(non-draining) case:
  1. Call running_agent.interrupt(text) to abort in-flight tool calls
     and exit the agent loop at the next check point
  2. Store the message as pending so it becomes the next turn once the
     interrupted run returns
  3. Send a brief ack: 'Interrupting current task (10 min elapsed,
     iteration 21/60, running: terminal). I'll respond shortly.'
  4. Debounce acks to once per 30s to avoid spam on rapid messages

Reported by @Lonely__MH.
2026-04-14 22:07:28 -07:00
Teknium
8548893d14 feat: entry-level Podman support — find_docker() + rootless entrypoint (#10066)
- find_docker() now checks HERMES_DOCKER_BINARY env var first, then
  docker on PATH, then podman on PATH, then macOS known locations
- Entrypoint respects HERMES_HOME env var (was hardcoded to /opt/data)
- Entrypoint uses groupmod -o to tolerate non-unique GIDs (fixes macOS
  GID 20 conflict with Debian's dialout group)
- Entrypoint makes chown best-effort so rootless Podman continues
  instead of failing with 'Operation not permitted'
- 5 new tests covering env var override, podman fallback, precedence

Based on work by alanjds (PR #3996) and malaiwah (PR #8115).
Closes #4084.
2026-04-14 21:20:37 -07:00
Teknium
c5688e7c8b fix(gateway): break compression-exhaustion infinite loop and auto-reset session (#9893)
When compression fails after max attempts, the agent returns
{completed: False, partial: True} but was missing the 'failed' flag.
The gateway's agent_failed_early guard checked for 'failed' AND
'not final_response', but _run_agent_blocking always converts errors
to final_response — making the guard dead code.  This caused the
oversized session to persist, creating an infinite fail loop where
every subsequent message hits the same compression failure.

Changes:
- run_agent.py: add 'failed: True' and 'compression_exhausted: True'
  to all 5 compression-exhaustion return paths
- gateway/run.py (_run_agent_blocking): forward 'failed' and
  'compression_exhausted' flags through to the caller
- gateway/run.py (_handle_message_with_agent): fix agent_failed_early
  to check bool(failed) without the broken 'not final_response' clause;
  auto-reset the session when compression is exhausted so the next
  message starts fresh
- Update tests to match new guard logic and add
  TestCompressionExhaustedFlag test class

Closes #9893
2026-04-14 21:18:17 -07:00
Teknium
ba24f058ed docs: fix stale docstring reference to _discover_tools in mcp_tool.py 2026-04-14 21:12:29 -07:00
Teknium
ef04de3e98 docs: update tool-adding instructions for auto-discovery
- AGENTS.md: 3 files → 2 files, remove _discover_tools() step
- adding-tools.md: remove Step 3, note auto-discovery
- architecture.md: update discovery description
- tools-runtime.md: replace manual list with discover_builtin_tools() docs
- hermes-agent skill: remove manual import step
2026-04-14 21:12:29 -07:00
Teknium
fc6cb5b970 fix: tighten AST check to module-level only
The original tree-wide ast.walk() would match registry.register() calls
inside functions too. Restrict to top-level ast.Expr statements so helper
modules that call registry.register() inside a function are never picked
up as tool modules.
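A sketch of the tightened check (names mirror the description above,
not necessarily the real module):

```python
import ast

def looks_like_tool_module(source: str) -> bool:
    """Return True only if a top-level expression statement calls
    registry.register(...). Iterating tree.body instead of ast.walk()
    means register() calls nested inside functions never match."""
    tree = ast.parse(source)
    for node in tree.body:  # module-level statements only
        if (
            isinstance(node, ast.Expr)
            and isinstance(node.value, ast.Call)
            and isinstance(node.value.func, ast.Attribute)
            and node.value.func.attr == "register"
            and isinstance(node.value.func.value, ast.Name)
            and node.value.func.value.id == "registry"
        ):
            return True
    return False
```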
2026-04-14 21:12:29 -07:00
Greer Guthrie
4b2a1a4337 fix(tools): auto-discover built-in tool modules 2026-04-14 21:12:29 -07:00
Teknium
2871ef1807 docs: note session continuity for previous_response_id chains (#10060) 2026-04-14 21:07:37 -07:00
Teknium
5cbb45d93e fix: preserve session_id across previous_response_id chains in /v1/responses (#10059)
The /v1/responses endpoint generated a new UUID session_id for every
request, even when previous_response_id was provided. This caused each
turn of a multi-turn conversation to appear as a separate session on the
web dashboard, despite the conversation history being correctly chained.

Fix: store session_id alongside the response in the ResponseStore, and
reuse it when a subsequent request chains via previous_response_id.
Applies to both the non-streaming /v1/responses path and the streaming
SSE path. The /v1/runs endpoint also gains session continuity from
stored responses (explicit body.session_id still takes priority).

Adds test verifying session_id is preserved across chained requests.
2026-04-14 21:06:32 -07:00
Teknium
ca0ae56ccb fix: add 402 billing error hint to gateway error handler (#5220) (#10057)
* fix: hermes gateway restart waits for service to come back up (#8260)

Previously, systemd_restart() sent SIGUSR1 to the gateway, printed
'restart requested', and returned immediately. The gateway still
needed to drain active agents, exit with code 75, wait for systemd's
RestartSec=30, and start the new process. The user saw 'success' but
the gateway was actually down for 30-60 seconds.

Now the SIGUSR1 path blocks with progress feedback:

Phase 1 — wait for old process to die:
   User service draining active work...
  Polls os.kill(pid, 0) until ProcessLookupError (up to 90s)

Phase 2 — wait for new process to become active:
   Waiting for hermes-gateway to restart...
  Polls systemctl is-active + verifies new PID (up to 60s)

Success:
  ✓ User service restarted (PID 12345)

Timeout:
  ⚠ User service did not become active within 60s.
    Check status: hermes gateway status
    Check logs: journalctl --user -u hermes-gateway --since '2 min ago'

The reload-or-restart fallback path (line 1189) already blocks because
systemctl reload-or-restart is synchronous.

Test plan:
- Updated test to verify wait-for-restart behavior
- All 118 gateway CLI tests pass

* fix: add 402 billing error hint to gateway error handler (#5220)

The gateway's exception handler for agent errors had specific hints for
HTTP 401, 429, 529, 400, 500 — but not 402 (Payment Required / quota
exhausted). Users hitting billing limits from custom proxy providers
got a generic error with no guidance.

Added: 'Your API balance or quota is exhausted. Check your provider
dashboard.'

The underlying billing classification (error_classifier.py) already
correctly handles 402 as FailoverReason.billing with credential
rotation and fallback. The original issue (#5220) where 402 killed
the entire gateway was from an older version — on current main, 402
is excluded from the is_client_error abort path (line 9460) and goes
through the proper retry/fallback/fail flow. Combined with PR #9875
(auto-recover from unexpected SIGTERM), even edge cases where the
gateway dies are now survivable.
2026-04-14 21:03:05 -07:00
Teknium
23b87c8ca8 chore: add zons-zhaozhy to AUTHOR_MAP 2026-04-14 21:01:40 -07:00
阿泥豆
92385679b6 fix: reset retry counters after compression and stop poisoning conversation history
Three bugfixes in the agent loop:

1. Reset retry counters after context compression. Without this,
   pre-compression retry counts carry over, causing the model to
   hit empty-response recovery immediately after a compression-
   induced context loss, wasting API calls on a now-valid context.

2. Unmute output in the final-response (no-tool-call) branch.
   _mute_post_response could be left True from a prior housekeeping
   turn, silently suppressing empty-response warnings and recovery
   status that the user should see.

3. Stop injecting 'Calling the X tools...' into assistant message
   content when falling back to prior-turn content. This mutated
   conversation history with synthetic text that the model never
   produced, poisoning subsequent turns.
2026-04-14 21:01:40 -07:00
Teknium
82f364ffd1 feat: add --all flag to gateway start and restart commands (#10043)
- gateway start --all: kills all stale gateway processes across all
  profiles before starting the current profile's service
- gateway restart --all: stops all gateway processes across all
  profiles, then starts the current profile's service fresh
- gateway stop --all: already existed, unchanged

The --all flag was only available on 'stop' but not on 'start' or
'restart', causing 'unrecognized arguments' errors for users.
2026-04-14 20:52:18 -07:00
Teknium
31d0620663 chore: add simon-marcus to AUTHOR_MAP 2026-04-14 20:51:52 -07:00
Teknium
cf1d718823 fix: keep batch-path function_call_output.output as string per OpenAI spec
The streaming path emits output as content-part arrays for Open WebUI
compatibility, but the batch (non-streaming) Responses API path must
return output as a plain string per the OpenAI Responses API spec.
Reverts the _extract_output_items change from the cherry-picked commits
while preserving the streaming path's array format.
2026-04-14 20:51:52 -07:00
simon-marcus
302554b158 fix(api-server): format responses tool outputs for open webui 2026-04-14 20:51:52 -07:00
simon-marcus
d6c09ab94a feat(api-server): stream /v1/responses SSE tool events 2026-04-14 20:51:52 -07:00
Teknium
da528a8207 fix: detect and strip non-ASCII characters from API keys (#6843)
API keys containing Unicode lookalike characters (e.g. ʋ U+028B instead
of v) cause UnicodeEncodeError when httpx encodes the Authorization
header as ASCII.  This commonly happens when users copy-paste keys from
PDFs, rich-text editors, or web pages with decorative fonts.

Three layers of defense:

1. **Save-time validation** (hermes_cli/config.py):
   _check_non_ascii_credential() strips non-ASCII from credential values
   when saving to .env, with a clear warning explaining the issue.

2. **Load-time sanitization** (hermes_cli/env_loader.py):
   _sanitize_loaded_credentials() strips non-ASCII from credential env
   vars (those ending in _API_KEY, _TOKEN, _SECRET, _KEY) after dotenv
   loads them, so the rest of the codebase never sees non-ASCII keys.

3. **Runtime recovery** (run_agent.py):
   The UnicodeEncodeError recovery block now also sanitizes self.api_key
   and self._client_kwargs['api_key'], fixing the gap where message/tool
   sanitization succeeded but the API key still caused httpx to fail on
   the Authorization header.

Also: hermes_logging.py RotatingFileHandler now explicitly sets
encoding='utf-8' instead of relying on locale default (defensive
hardening for ASCII-locale systems).
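The core sanitization step shared by all three layers can be sketched
like this (helper name hypothetical):

```python
def sanitize_credential(name: str, value: str) -> str:
    """Strip non-ASCII characters from a credential-like value and
    warn about what was removed, so lookalike glyphs pasted from PDFs
    or rich text never reach the Authorization header."""
    cleaned = value.encode("ascii", "ignore").decode("ascii")
    if cleaned != value:
        dropped = [c for c in value if ord(c) > 127]
        print(
            f"Warning: removed non-ASCII characters {dropped!r} from {name}. "
            "Re-copy the key from a plain-text source if it stops working."
        )
    return cleaned
```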
2026-04-14 20:20:31 -07:00
kshitijk4poor
677f1227c3 fix: remove @staticmethod from _context_completions — crashes on @ mention
PR #9467 added a call to self._fuzzy_file_completions() inside
_context_completions(), but the method was still decorated with
@staticmethod and didn't receive self. Every @ mention in the input
triggers 'name self is not defined' from prompt_toolkit's async
completer, spamming the error on every keystroke.

Fix: remove @staticmethod, add self parameter. The method already uses
self._fuzzy_file_completions() and self._get_project_files() via that
call chain, so it was never meant to stay static after the fuzzy search
feature was added.
2026-04-14 19:43:42 -07:00
Teknium
4610551d74 fix: update stale comment referencing removed _sync_mcp_toolsets 2026-04-14 17:19:20 -07:00
Greer Guthrie
498cb7a0fc chore(release): map greer guthrie attribution 2026-04-14 17:19:20 -07:00
Greer Guthrie
c10fea8d26 fix(mcp): make server aliases explicit 2026-04-14 17:19:20 -07:00
Greer Guthrie
cda64a5961 fix(mcp): resolve toolsets from live registry 2026-04-14 17:19:20 -07:00
Teknium
2a98098035 fix: hermes gateway restart waits for service to come back up (#8260)
Previously, systemd_restart() sent SIGUSR1 to the gateway, printed
'restart requested', and returned immediately. The gateway still
needed to drain active agents, exit with code 75, wait for systemd's
RestartSec=30, and start the new process. The user saw 'success' but
the gateway was actually down for 30-60 seconds.

Now the SIGUSR1 path blocks with progress feedback:

Phase 1 — wait for old process to die:
   User service draining active work...
  Polls os.kill(pid, 0) until ProcessLookupError (up to 90s)

Phase 2 — wait for new process to become active:
   Waiting for hermes-gateway to restart...
  Polls systemctl is-active + verifies new PID (up to 60s)

Success:
  ✓ User service restarted (PID 12345)

Timeout:
  ⚠ User service did not become active within 60s.
    Check status: hermes gateway status
    Check logs: journalctl --user -u hermes-gateway --since '2 min ago'

The reload-or-restart fallback path (line 1189) already blocks because
systemctl reload-or-restart is synchronous.

Test plan:
- Updated test to verify wait-for-restart behavior
- All 118 gateway CLI tests pass
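
Phase 1's poll loop can be sketched as (a sketch with illustrative
names and a shortened poll interval, not the shipped implementation):

```python
import os
import time

def wait_for_pid_exit(pid: int, timeout: float = 90.0) -> bool:
    """Poll until the old gateway process disappears or the timeout
    expires. Returns True once the PID is gone."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)  # signal 0 = existence check, sends nothing
        except ProcessLookupError:
            return True      # old process has exited
        time.sleep(0.5)
    return False
```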
2026-04-14 17:12:58 -07:00
Teknium
6c89306437 fix: break stuck session resume loops after repeated restarts (#7536)
When a session gets stuck (hung terminal, runaway tool loop) and the
user restarts the gateway, the same session history loads and puts the
agent right back in the stuck state. The user is trapped in a loop:
restart → stuck → restart → stuck.

Fix: track restart-failure counts per session using a simple JSON file
(.restart_failure_counts). On each shutdown with active agents, the
counter increments for those sessions. On startup, if any session has
been active across 3+ consecutive restarts, it's auto-suspended —
giving the user a clean slate on their next message.

The counter resets to 0 when a session completes a turn successfully
(response delivered), so normal sessions that happen to be active
during planned restarts (/restart, hermes update) won't accumulate
false counts.

Implementation:
- _increment_restart_failure_counts(): called during stop() when
  agents are active. Writes {session_key: count} to JSON file.
  Sessions NOT active are dropped (loop broken).
- _suspend_stuck_loop_sessions(): called on startup. Reads the file,
  suspends sessions at threshold (3), clears the file.
- _clear_restart_failure_count(): called after successful response
  delivery. Removes the session from the counter file.

No SessionEntry schema changes. No database migration. Pure file-based
tracking that naturally cleans up.

Test plan:
- 9 new stuck-loop tests (increment, accumulate, threshold, clear,
  suspend, file cleanup, edge cases)
- All 28 gateway lifecycle tests pass (restart drain + auto-continue
  + stuck loop)
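
The counter file's lifecycle can be sketched as (path, threshold, and
function names are illustrative):

```python
import json
from pathlib import Path

COUNTS_FILE = Path(".restart_failure_counts")  # illustrative location
THRESHOLD = 3

def increment_restart_failures(active_sessions: list) -> None:
    """On shutdown: bump counters for sessions still active. Sessions
    no longer active are dropped, which breaks their loop."""
    old = json.loads(COUNTS_FILE.read_text()) if COUNTS_FILE.exists() else {}
    counts = {s: old.get(s, 0) + 1 for s in active_sessions}
    COUNTS_FILE.write_text(json.dumps(counts))

def sessions_to_suspend() -> list:
    """On startup: sessions active across THRESHOLD+ consecutive
    restarts get suspended; the file is cleared for a clean slate."""
    if not COUNTS_FILE.exists():
        return []
    counts = json.loads(COUNTS_FILE.read_text())
    COUNTS_FILE.unlink()
    return [s for s, n in counts.items() if n >= THRESHOLD]
```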
2026-04-14 17:08:35 -07:00
Teknium
847d7cbea5 fix: improve CLI text padding, word-wrap for responses and verbose tool output (#9920)
* feat(skills): add fitness-nutrition skill to optional-skills

Cherry-picked from PR #9177 by @haileymarshall.

Adds a fitness and nutrition skill for gym-goers and health-conscious users:
- Exercise search via wger API (690+ exercises, free, no auth)
- Nutrition lookup via USDA FoodData Central (380K+ foods, DEMO_KEY fallback)
- Offline body composition calculators (BMI, TDEE, 1RM, macros, body fat %)
- Pure stdlib Python, no pip dependencies

Changes from original PR:
- Moved from skills/ to optional-skills/health/ (correct location)
- Fixed BMR formula in FORMULAS.md (removed confusing -5+10, now just +5)
- Fixed author attribution to match PR submitter
- Marked USDA_API_KEY as optional (DEMO_KEY works without signup)

Also adds optional env var support to the skill readiness checker:
- New 'optional: true' field in required_environment_variables entries
- Optional vars are preserved in metadata but don't block skill readiness
- Optional vars skip the CLI capture prompt flow
- Skills with only optional missing vars show as 'available' not 'setup_needed'

* fix: increase CLI response text padding to 4-space tab indent

Increases horizontal padding on all response display paths:

- Rich Panel responses (main, background, /btw): padding (1,2) -> (1,4)
- Streaming text: add 4-space indent prefix to each line
- Streaming TTS: add 4-space indent prefix to sentences

Gives response text proper breathing room with a tab-width indent.
Rich Panel word wrapping automatically adjusts for the wider padding.

Requested by AriesTheCoder.

* fix: word-wrap verbose tool call args and results to terminal width

Verbose mode (tool_progress: verbose) printed tool args and results as
single unwrapped lines that could be thousands of characters long.

Adds _wrap_verbose() helper that:
- Pretty-prints JSON args with indent=2 instead of one-line dumps
- Splits text on existing newlines (preserves JSON/structured output)
- Wraps lines exceeding terminal width with 5-char continuation indent
- Uses break_long_words=True for URLs and paths without spaces

Applied to all 4 verbose print sites:
- Concurrent tool call args
- Concurrent tool results
- Sequential tool call args
- Sequential tool results
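
The line-preserving wrap behavior can be sketched as (a simplified
sketch of the helper described above, with the width made an explicit
parameter; the JSON pretty-printing step is omitted):

```python
import shutil
import textwrap

def wrap_verbose(text: str, width=None, indent: str = "     ") -> str:
    """Split on existing newlines to preserve structured output, then
    wrap only over-long lines to terminal width with a 5-char
    continuation indent. break_long_words handles URLs and paths."""
    width = width or shutil.get_terminal_size((80, 24)).columns
    out = []
    for line in text.splitlines() or [""]:
        if len(line) <= width:
            out.append(line)
        else:
            out.extend(
                textwrap.wrap(
                    line,
                    width=width,
                    subsequent_indent=indent,
                    break_long_words=True,
                )
            )
    return "\n".join(out)
```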

---------

Co-authored-by: haileymarshall <haileymarshall@users.noreply.github.com>
2026-04-14 16:58:23 -07:00
Teknium
a9c78d0eb0 feat(setup): add recommendation badges to tool provider selection (#9929)
New users don't know which tool providers to pick during setup.
Add [badge] labels to each provider in the selection menu:

  - [★ recommended · free] for best default choices (Edge TTS, Local Browser)
  - [★ recommended] for top-tier paid options (Firecrawl Cloud)
  - [paid] for options requiring an API key
  - [free tier] for services with a free tier (Tavily)
  - [free · self-hosted] / [free · local] for self-run options
  - [subscription] for Nous subscription-managed options

Also improves vague tag descriptions — e.g. 'AI-native search and
contents' becomes 'Neural search with semantic understanding' and
Tavily gets '1000 free searches/mo'.

Both hermes setup and hermes tools share the same rendering path,
so badges appear in both flows.

Addresses user feedback about setup being confusing for newcomers.
2026-04-14 16:58:10 -07:00
Teknium
e7475b1582 feat: auto-continue interrupted agent work after gateway restart (#4493)
When the gateway restarts mid-agent-work, the session transcript ends
on a tool result the agent never processed. Previously, the user had
to type 'continue' or use /retry (which replays from scratch, losing
all prior work).

Now, when the next user message arrives and the loaded history ends
with role='tool', a system note is prepended:

  [System note: Your previous turn was interrupted before you could
  process the last tool result(s). Please finish processing those
  results and summarize what was accomplished, then address the
  user's new message below.]

This is injected in _run_agent()'s run_sync closure, right before
calling agent.run_conversation(). The agent sees the full history
(including the pending tool results) and the system note, so it can
summarize what was accomplished and then handle the user's new input.

Design decisions:
- No new session flags or schema changes — purely detects trailing
  tool messages in the loaded history
- Works for any restart scenario (clean, crash, SIGTERM, drain timeout)
  as long as the session wasn't suspended (suspended = fresh start)
- The user's actual message is preserved after the note
- If the session WAS suspended (unclean shutdown), the old history is
  abandoned and the user starts fresh — no false auto-continue

Also updates the shutdown notification message from 'Use /retry after
restart to continue' to 'Send any message after restart to resume
where it left off' — which is now accurate.

Test plan:
- 6 new auto-continue tests (trailing tool detection, no false
  positives for assistant/user/empty history, multi-tool, message
  preservation)
- All 13 restart drain tests pass (updated /retry assertion)
2026-04-14 16:56:49 -07:00
Teknium
ac1f8fcccd docs(termux): note browser tool PATH auto-discovery
Update the Termux guide to mention that the browser tool now
automatically discovers Termux directories, and add the missing
pkg install nodejs-lts step.
2026-04-14 16:55:55 -07:00
adybag14-cyber
56c34ac4f7 fix(browser): add termux PATH fallbacks
Refactor browser tool PATH construction to include Termux directories
(/data/data/com.termux/files/usr/bin, /data/data/com.termux/files/usr/sbin)
so agent-browser and npx are discoverable on Android/Termux.

Extracts _browser_candidate_path_dirs() and _merge_browser_path() helpers
to centralize PATH construction shared between _find_agent_browser() and
_run_browser_command(), replacing duplicated inline logic.

Also fixes os.pathsep usage (was hardcoded ':') for cross-platform correctness.

Cherry-picked from PR #9846.
2026-04-14 16:55:55 -07:00
Teknium
3ca7417c2a chore: add areu01or00 to AUTHOR_MAP 2026-04-14 16:55:48 -07:00
areu01or00
cfa24532d3 fix(discord): register native /restart slash command 2026-04-14 16:55:48 -07:00
Teknium
b24e5ee4b0 feat(google-workspace): add --from flag for custom sender display name (#9931)
Adds --from flag to gmail send and gmail reply commands, allowing agents
to customize the From header display name when sharing the same email
account. Usage: --from '"Agent Name" <user@example.com>'

Also syncs repo google_api.py with the deployed standalone implementation
(replaces outdated gws_bridge thin wrapper), adds dedicated docs page
under Features > Skills, and updates sidebar navigation.

Requested by community user @Maxime44.
2026-04-14 16:55:34 -07:00
Julien Talbot
3b50821555 feat(xai): add xAI/Grok to provider prefix stripping
Add 'xai', 'x-ai', 'x.ai', 'grok' to _PROVIDER_PREFIXES so that
colon-prefixed model names (e.g. xai:grok-4.20) are stripped correctly
for context length lookups.

Cherry-picked from PR #9184 by @Julientalbot.
2026-04-14 16:43:42 -07:00
Teknium
10494b42a1 feat(discord): register skills under /skill command group with category subcommands (#9909)
Instead of consuming one top-level slash command slot per skill (hitting the
100-command limit with ~26 built-ins + 74 skills), skills are now organized
under a single /skill group command with category-based subcommand groups:

  /skill creative ascii-art [args]
  /skill media gif-search [args]
  /skill mlops axolotl [args]

Discord supports 25 subcommand groups × 25 subcommands = 625 max skills,
well beyond the previous 74-slot ceiling.

Categories are derived from the skill directory structure:
- skills/creative/ascii-art/ → category 'creative'
- skills/mlops/training/axolotl/ → category 'mlops' (top-level parent)
- skills/dogfood/ → uncategorized (direct subcommand)

Changes:
- hermes_cli/commands.py: add discord_skill_commands_by_category() with
  category grouping, hub/disabled filtering, Discord limit enforcement
- gateway/platforms/discord.py: replace top-level skill registration with
  _register_skill_group() using app_commands.Group hierarchy
- tests: 7 new tests covering group creation, category grouping,
  uncategorized skills, hub exclusion, deep nesting, empty skills,
  and handler dispatch

Inspired by Discord community suggestion from bottium.
2026-04-14 16:27:02 -07:00
Teknium
039023f497 diag: log all hermes processes on unexpected gateway shutdown (#9905)
When the gateway receives SIGTERM/SIGINT, the shutdown handler now
runs 'ps aux' and logs every hermes/gateway-related process (excluding
itself). This will show in agent.log as:

  WARNING: Shutdown diagnostic — other hermes processes running:
    hermes  1234 ... hermes update --gateway
    hermes  5678 ... hermes gateway restart

This is the missing diagnostic for #5646 / #6666 — we can prove
the restarts are from systemctl but can't determine WHO issues the
systemctl command. Next time it happens, the agent.log will contain
the evidence (the process that sent the signal or called systemctl
should still be alive when the handler fires).
2026-04-14 16:26:36 -07:00
Teknium
6448e1da23 feat(zai): add GLM-5V-Turbo support for coding plan (#9907)
- Add glm-5v-turbo to OpenRouter, Nous, and native Z.AI model lists
- Add glm-5v context length entry (200K tokens) to model metadata
- Update Z.AI endpoint probe to try multiple candidate models per
  endpoint (glm-5.1, glm-5v-turbo, glm-4.7) — fixes detection for
  newer coding plan accounts that lack older models
- Add zai to _PROVIDER_VISION_MODELS so auxiliary vision tasks
  (vision_analyze, browser screenshots) route through 5v

Fixes #9888
2026-04-14 16:26:01 -07:00
Teknium
1e5e1e822b fix: ESC cancels secret/sudo prompts, clearer skip messaging (#9902)
- Add ESC key binding (eager) for secret_state and sudo_state modal
  prompts — fires immediately, same behavior as Ctrl+C cancel
- Update placeholder text: 'Enter to submit · ESC to skip' (was
  'Enter to skip' which was confusing — Enter on empty looked like
  submitting nothing rather than intentionally skipping)
- Update widget body text: 'ESC or Ctrl+C to skip'
- Change feedback message from 'Secret entry cancelled' to 'Secret
  entry skipped' — more accurate for the action taken
- getpass fallback prompt also updated for non-TUI mode
2026-04-14 16:11:37 -07:00
Teknium
55ce76b372 feat: add architecture-diagram skill (Cocoon AI port) (#9906)
Port of Cocoon AI's architecture-diagram-generator (MIT) as a Hermes skill.
Generates professional dark-themed system architecture diagrams as standalone
HTML/SVG files. Self-contained output, no dependencies.

- SKILL.md with design system specs, color palette, layout rules
- HTML template with all component types, arrow styles, legend examples
- Fits alongside excalidraw in creative/ category

Source: https://github.com/Cocoon-AI/architecture-diagram-generator
2026-04-14 16:10:18 -07:00
Teknium
1525624904 fix: block agent from self-destructing gateway via terminal (#6666)
Add dangerous command patterns that require approval when the agent
tries to run gateway lifecycle commands via the terminal tool:

- hermes gateway stop/restart — kills all running agents mid-work
- hermes update — pulls code and restarts the gateway
- systemctl restart/stop (with optional flags like --user)

These patterns fire the approval prompt so the user must explicitly
approve before the agent can kill its own gateway process. In YOLO
mode, the commands run without approval (by design — YOLO means the
user accepts all risks).

Also fixes the existing systemctl pattern to handle flags between
the command and action (e.g. 'systemctl --user restart' was previously
undetected because the regex expected the action immediately after
'systemctl').
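
The flag-tolerant matching can be sketched as (an illustrative
pattern, not the shipped regex):

```python
import re

# Allow options like --user between 'systemctl' and the action, so
# 'systemctl --user restart' is caught alongside 'systemctl restart'.
SYSTEMCTL_DANGEROUS = re.compile(
    r"\bsystemctl\s+(?:--?[\w-]+\s+)*(restart|stop)\b"
)
```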

Root cause: issue #6666 reported agents running 'hermes gateway
restart' via terminal, killing the gateway process mid-agent-loop.
The user sees the agent suddenly stop responding with no explanation.
Combined with the SIGTERM auto-recovery from PR #9875, the gateway
now both prevents accidental self-destruction AND recovers if it
happens anyway.

Test plan:
- Updated test_systemctl_restart_not_flagged → test_systemctl_restart_flagged
- All 119 approval tests pass
- E2E verified: hermes gateway restart, hermes update, systemctl
  --user restart all detected; hermes gateway status, systemctl
  status remain safe
2026-04-14 15:43:31 -07:00
Teknium
353b5bacbd test: add tests for /health/detailed endpoint and gateway health probe
- TestHealthDetailedEndpoint: 3 tests for the new API server endpoint
  (returns runtime data, handles missing status, no auth required)
- TestProbeGatewayHealth: 5 tests for _probe_gateway_health()
  (URL normalization, successful/failed probes, fallback chain)
- TestStatusRemoteGateway: 4 tests for /api/status remote fallback
  (remote probe triggers, skipped when local PID found, null PID handling)
2026-04-14 15:41:30 -07:00
Hermes Agent
139a5e37a4 docs(docker): add dashboard section, expose API port, update Compose example
- Running in gateway mode: expose port 8642 for the API server and
  health endpoint, with a note on when it's needed.
- New 'Running the dashboard' section: docker run command with
  GATEWAY_HEALTH_URL and env var reference table.
- Docker Compose example: updated to include both gateway and dashboard
  services with internal network connectivity (hermes-net), so the
  dashboard probes the gateway via http://hermes:8642.
- Concurrent access warning: clarified that running a read-only
  dashboard alongside the gateway is safe.
2026-04-14 15:41:30 -07:00
Hermes Agent
673acf22ae fix: override stale 'stopped' state when health probe confirms gateway alive
When the gateway responds to the health probe but the local
gateway_state.json has a stale 'stopped' state (common in cross-container
setups where the file was written before the gateway restarted), the
dashboard would show 'Running (remote)' but with a 'Stopped' badge.

Now if the HTTP probe succeeded (remote_health_body is not None) and
gateway_state is 'stopped' or None, override it to 'running'. Also
handles the no-shared-volume case where runtime is None entirely.
2026-04-14 15:41:30 -07:00
Hermes Agent
6ed682f111 fix: normalise GATEWAY_HEALTH_URL to base URL before probing
The probe was appending '/detailed' to whatever URL was provided,
so GATEWAY_HEALTH_URL=http://host:8642 would try /8642/detailed
and /8642 — neither of which are valid routes.

Now strips any trailing /health or /health/detailed from the env var
and always probes {base}/health/detailed then {base}/health.
Accepts bare base URL, /health, or /health/detailed forms.
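The normalization can be sketched as (function names hypothetical):

```python
def normalize_health_base(url: str) -> str:
    """Strip any trailing /health or /health/detailed so the probe
    can append the routes itself, whatever form the env var takes."""
    base = url.rstrip("/")
    for suffix in ("/health/detailed", "/health"):
        if base.endswith(suffix):
            base = base[: -len(suffix)]
            break
    return base

def probe_urls(url: str) -> list:
    """Probe order: /health/detailed first, then plain /health."""
    base = normalize_health_base(url)
    return [f"{base}/health/detailed", f"{base}/health"]
```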
2026-04-14 15:41:30 -07:00
Hermes Agent
45595f4805 feat(dashboard): add HTTP health probe for cross-container gateway detection
The dashboard's gateway status detection relied solely on local PID checks
(os.kill + /proc), which fails when the gateway runs in a separate container.

Changes:
- web_server.py: Add _probe_gateway_health() that queries the gateway's HTTP
  /health/detailed endpoint when the local PID check fails. Activated by
  setting the GATEWAY_HEALTH_URL env var (e.g. http://gateway:8642/health).
  Falls back to standard PID check when the env var is not set.
- api_server.py: Add GET /health/detailed endpoint that returns full gateway
  state (platforms, gateway_state, active_agents, pid, etc.) without auth.
  The existing GET /health remains unchanged for backwards compatibility.
- StatusPage.tsx: Handle the case where gateway_pid is null but the gateway
  is running remotely, displaying 'Running (remote)' instead of 'PID null'.

Environment variables:
- GATEWAY_HEALTH_URL: URL of the gateway health endpoint (e.g.
  http://gateway-container:8642/health). Unset = local PID check only.
- GATEWAY_HEALTH_TIMEOUT: Probe timeout in seconds (default: 3).
2026-04-14 15:41:30 -07:00
Teknium
397386cae2 fix: gateway auto-recovers from unexpected SIGTERM via systemd (#5646)
Root cause: when the gateway received SIGTERM (from hermes update,
external kill, WSL2 runtime, etc.), it exited with status 0. systemd's
Restart=on-failure only restarts on non-zero exit, so the gateway
stayed dead permanently. Users had to manually restart.

Fix 1: Signal-initiated shutdown exits non-zero
When SIGTERM/SIGINT is received and no restart was requested (via
/restart, /update, or SIGUSR1), start_gateway() returns False which
causes sys.exit(1). systemd sees a failure exit and auto-restarts
after RestartSec=30.

This is safe because systemctl stop tracks its own stop-requested
state independently of exit code — Restart= never fires for a
deliberate stop, regardless of exit code.

Also logs 'Received SIGTERM/SIGINT — initiating shutdown' so the
cause of unexpected shutdowns is visible in agent.log.

Fix 2: PID file ownership guard
remove_pid_file() now checks that the PID file belongs to the current
process before removing it. During --replace handoffs, the old
process's atexit handler could fire AFTER the new process wrote its
PID file, deleting the new record. This left the gateway running but
invisible to get_running_pid(), causing 'Another gateway already
running' errors on next restart.
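The ownership guard in Fix 2 amounts to (a sketch under the assumption that the PID file holds a single integer PID):

```python
import os
from pathlib import Path


def remove_pid_file(pid_file: Path) -> None:
    """Only delete the PID file if it still records *this* process,
    so a stale atexit handler from the old --replace process can't
    erase the new gateway's record. Illustrative, not verbatim."""
    try:
        recorded = int(pid_file.read_text().strip())
    except (FileNotFoundError, ValueError):
        return  # missing or unreadable: nothing safe to remove
    if recorded == os.getpid():
        pid_file.unlink(missing_ok=True)
```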

Test plan:
- All restart drain tests pass (13)
- All gateway service tests pass (84)
- All update gateway restart tests pass (34)
2026-04-14 15:35:58 -07:00
Teknium
eed891f1bb security: supply chain hardening — CI pinning, dep pinning, and code fixes (#9801)
CI/CD Hardening:
- Pin all 12 GitHub Actions to full commit SHAs (was mutable @vN tags)
- Add explicit permissions: {contents: read} to 4 workflows
- Pin CI pip installs to exact versions (pyyaml==6.0.2, httpx==0.28.1)
- Extend supply-chain-audit.yml to scan workflow, Dockerfile, dependency
  manifest, and Actions version changes

Dependency Pinning:
- Pin git-based Python deps to commit SHAs (atroposlib, tinker, yc-bench)
- Pin WhatsApp Baileys from mutable branch to commit SHA

Tool Registry:
- Reject tool name shadowing from different tool families (plugins/MCP
  cannot overwrite built-in tools). MCP-to-MCP overwrites still allowed.

MCP Security:
- Add tool description content scanning for prompt injection patterns
- Log detailed change diff on dynamic tool refresh at WARNING level

Skill Manager:
- Fix dangerous verdict bug: agent-created skills with dangerous
  findings were silently allowed (ask->None->allow). Now blocked.
2026-04-14 14:23:37 -07:00
Teknium
9bbf7659e9 chore: add Roy-oss1 to AUTHOR_MAP 2026-04-14 14:22:11 -07:00
Roy-oss1
1aa76620d4 fix(feishu): keep approval clicks synchronized with callback card state
Feishu approval clicks need the resolved card to come back from the
synchronous callback path itself. Leaving approval resolution to the
generic asynchronous card-action flow made button feedback depend on
later loop work instead of the callback response the client is waiting
for.

Change-Id: I574997cbbcaa097fdba759b47367e28d1b56b040
Constraint: Feishu card-action callbacks must acknowledge quickly and reflect final approval state from the callback response path
Rejected: Keep approval handling on the generic async card-action route | leaves card state synchronization vulnerable to callback timing and follow-up update ordering
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Keep approval callback response construction separate from async queue unblocking unless Feishu callback semantics change
Tested: pytest tests/gateway/test_feishu.py tests/gateway/test_feishu_approval_buttons.py tests/gateway/test_approve_deny_commands.py tests/gateway/test_slack_approval_buttons.py tests/gateway/test_telegram_approval_buttons.py -q
Not-tested: Live Feishu workspace end-to-end callback rendering
2026-04-14 14:22:11 -07:00
Teknium
fa8c448f7d fix: notify active sessions on gateway shutdown + update health check
Three fixes for gateway lifecycle stability:

1. Notify active sessions before shutdown (#new)
   When the gateway receives SIGTERM or /restart, it now sends a
   notification to every chat with an active agent BEFORE starting
   the drain. Users see:
   - Shutdown: 'Gateway shutting down — your task will be interrupted.'
   - Restart: 'Gateway restarting — use /retry after restart to continue.'
   Deduplicates per-chat so group sessions with multiple users get
   one notification. Best-effort: send failures are logged and swallowed.

2. Skip .clean_shutdown marker when drain timed out
   Previously, a graceful SIGTERM always wrote .clean_shutdown, even if
   agents were force-interrupted when the drain timed out. This meant
   the next startup skipped session suspension, leaving interrupted
   sessions in a broken state (trailing tool response, no final message).
   Now the marker is only written if the drain completed without timeout,
   so interrupted sessions get properly suspended on next startup.

3. Post-restart health check for hermes update (#6631)
   cmd_update() now verifies the gateway actually survived after
   systemctl restart (sleep 3s + is-active check). If the service
   crashed immediately, it retries once. If still dead, prints
   actionable diagnostics (journalctl command, manual restart hint).

Also closes #8104 — already fixed on main (the /restart handler
correctly detects systemd via INVOCATION_ID and uses via_service=True).

Test plan:
- 6 new tests for shutdown notifications (dedup, restart vs shutdown
  messaging, sentinel filtering, send failure resilience)
- Existing restart drain + update tests pass (47 total)
2026-04-14 14:21:57 -07:00
Teknium
95d11dfd8e docs: automation templates gallery + comparison post (#9821)
Some checks failed
Deploy Site / build-and-deploy (push) Has been skipped
Docker Build and Publish / build-and-push (push) Has been skipped
Nix / nix (ubuntu-latest) (push) Failing after 2s
Tests / e2e (push) Successful in 2m6s
Tests / test (push) Failing after 17m54s
Nix / nix (macos-latest) (push) Has been cancelled
Build Skills Index / deploy-with-index (push) Has been skipped
Build Skills Index / build-index (push) Has been skipped
* feat(skills): add fitness-nutrition skill to optional-skills

Cherry-picked from PR #9177 by @haileymarshall.

Adds a fitness and nutrition skill for gym-goers and health-conscious users:
- Exercise search via wger API (690+ exercises, free, no auth)
- Nutrition lookup via USDA FoodData Central (380K+ foods, DEMO_KEY fallback)
- Offline body composition calculators (BMI, TDEE, 1RM, macros, body fat %)
- Pure stdlib Python, no pip dependencies

Changes from original PR:
- Moved from skills/ to optional-skills/health/ (correct location)
- Fixed BMR formula in FORMULAS.md (removed confusing -5+10, now just +5)
- Fixed author attribution to match PR submitter
- Marked USDA_API_KEY as optional (DEMO_KEY works without signup)

Also adds optional env var support to the skill readiness checker:
- New 'optional: true' field in required_environment_variables entries
- Optional vars are preserved in metadata but don't block skill readiness
- Optional vars skip the CLI capture prompt flow
- Skills with only optional missing vars show as 'available' not 'setup_needed'

* docs: add automation templates gallery and comparison post

- New docs page: guides/automation-templates.md with 15+ ready-to-use
  automation recipes covering development workflow, devops, research,
  GitHub events, and business operations
- Comparison post (hermes-already-has-routines.md) showing Hermes has
  had schedule/webhook/API triggers since March 2026
- Added automation-templates to sidebar navigation

---------

Co-authored-by: haileymarshall <haileymarshall@users.noreply.github.com>
2026-04-14 12:30:50 -07:00
Teknium
a37a095980 fix: detect qwen-oauth provider via CLI tokens in /model picker
Seed qwen-oauth credentials from resolve_qwen_runtime_credentials() in
_seed_from_singletons(). Users who authenticate via 'qwen auth qwen-oauth'
store tokens in ~/.qwen/oauth_creds.json which the runtime resolver reads
but the credential pool couldn't detect — same gap pattern as copilot.

Uses refresh_if_expiring=False to avoid network calls during discovery.
2026-04-14 11:16:26 -07:00
Marvae
0bd3f521ae fix: detect copilot provider via gh auth token in /model picker
Seed copilot credentials from resolve_copilot_token() in the credential
pool's _seed_from_singletons(), alongside the existing anthropic and
openai-codex seeding logic. This makes copilot appear in the /model
provider picker when the user authenticates solely through gh auth token.

Cherry-picked from PR #9767 by Marvae.
2026-04-14 11:16:26 -07:00
Teknium
3e0bccc54c fix: update existing webhook tests to use _webhook_register_url
Follow-up for cherry-picked PR #9746 — three pre-existing tests used
adapter._webhook_url (bare URL) in mock data, but _register_webhook
and _unregister_webhook now compare against _webhook_register_url
(password-bearing URL). Updated to match.
2026-04-14 11:02:48 -07:00
cypres0099
326cbbe40e fix(gateway/bluebubbles): embed password in registered webhook URL for inbound auth
When BlueBubbles posts webhook events to the adapter, it uses the exact
URL registered via /api/v1/webhook — and BB's registration API does not
support custom headers. The adapter currently registers the bare URL
(no credentials), but then requires password auth on inbound POSTs,
rejecting every webhook with HTTP 401.

This is masked on fresh BB installs by a race condition: the webhook
might register once with a prior (possibly patched) URL and keep working
until the first restart. On v0.9.0, _unregister_webhook runs on clean
shutdown, so the next startup re-registers with the bare URL and the
401s begin. Users see the bot go silent with no obvious cause.

Root cause: there's no way to pass auth credentials from BB to the
webhook handler except via the URL itself. BB accepts query params and
preserves them on outbound POSTs.

## Fix

Introduce `_webhook_register_url` — the URL handed to BB's registration
API, with the configured password appended as a `?password=<value>`
query param. The existing webhook auth handler already accepts this
form (it reads `request.query.get("password")`), so no change to the
receive side is needed.

The bare `_webhook_url` is still used for logging and for binding the
local listener, so credentials don't leak into log output. Only the
registration/find/unregister paths use the password-bearing form.
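The construction of the password-bearing URL can be sketched as (helper name is illustrative; the adapter stores the result as `_webhook_register_url`):

```python
from urllib.parse import quote


def build_register_url(webhook_url: str, password: str) -> str:
    """Append the configured password as a query param for BB's
    registration API. The bare URL is returned unchanged when no
    auth is configured, preserving local-only setups."""
    if not password:
        return webhook_url  # unauthenticated: register bare URL
    # quote() handles &, *, @, etc. that would otherwise break
    # query-string parsing on the receive side
    return f"{webhook_url}?password={quote(password, safe='')}"
```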

## Notes

- Password is URL-encoded via urllib.parse.quote, handling special
  characters (&, *, @, etc.) that would otherwise break parsing.
- Storing the password in BB's webhook table is not a new disclosure:
  anyone with access to that table already has the BB admin password
  (same credential used for every other API call).
- If `self.password` is empty (no auth configured), the register URL
  is the bare URL — preserves current behavior for unauthenticated
  local-only setups.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 11:02:48 -07:00
cypres0099
8b52356849 fix(gateway/bluebubbles): fall back to data.chats[0].guid when chatGuid missing
BlueBubbles v1.9+ webhook payloads for new-message events do not always
include a top-level chatGuid field on the message data object. Instead,
the chat GUID is nested under data.chats[0].guid.

The adapter currently checks five top-level fallback locations (record and
payload, snake_case and camelCase, plus payload.guid) but never looks
inside the chats array. When none of those top-level fields contain the
GUID, the adapter falls through to using the sender's phone/email as the
session chat ID.

This causes two observable bugs when a user is a participant in both a DM
and a group chat with the bot:

1. DM and group sessions merge. Every message from that user ends up with
   the same session_chat_id (their own address), so the bot cannot
   distinguish which thread the message came from.

2. Outbound routing becomes ambiguous. _resolve_chat_guid() iterates all
   chats and returns the first one where the address appears as a
   participant; group chats typically sort ahead of DMs by activity, so
   replies and cron messages intended for the DM can land in a group.

This was observed in production: a user's morning brief cron delivered to
a group chat with his spouse instead of his DM thread.

The fix adds a single fallback that extracts chat_guid from
record["chats"][0]["guid"] when the top-level fields are empty. The chats
array is included in every new-message webhook payload in BB v1.9.9
(verified against a live server). It is backwards compatible: if a future
BB version starts including chatGuid at the top level, that still wins.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 11:02:48 -07:00
cypres0099
064f8d74de fix(gateway/bluebubbles): remove invalid "message" from webhook event registration
The BlueBubbles adapter registers its webhook with three events:
["new-message", "updated-message", "message"]. The third, "message",
is not a valid event type in the BlueBubbles server API — BB rejects
the registration payload with HTTP 400 Bad Request.

Currently this is masked by the "crash resilience" check in
_register_webhook, which reuses any existing registration matching the
webhook URL and short-circuits before reaching the API call. So an
already-registered webhook from a prior run keeps working. But any fresh
install, or any restart after _unregister_webhook has run during a clean
shutdown, fails to re-register and silently stops receiving messages.

Observed in production: after a gateway restart in v0.9.0 (which auto-
unregisters on shutdown), the next startup hit this 400 and the bot went
silent until the invalid event was removed.

BlueBubbles documents "new-message" and "updated-message" as the message
event types (see https://docs.bluebubbles.app/). There is no "message"
event, and no harm in dropping it — the two remaining events cover all
inbound message webhooks.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 11:02:48 -07:00
Teknium
99bcc2de5b fix(security): harden dashboard API against unauthenticated access (#9800)
Addresses responsible disclosure from FuzzMind Security Lab (CVE pending).

The web dashboard API server had 36 endpoints, of which only 5 checked
the session token. The token itself was served from an unauthenticated
GET /api/auth/session-token endpoint, rendering the protection circular.
When bound to 0.0.0.0 (--host flag), all API keys, config, and cron
management were accessible to any machine on the network.

Changes:
- Add auth middleware requiring session token on ALL /api/ routes except
  a small public whitelist (status, config/defaults, config/schema,
  model/info)
- Remove GET /api/auth/session-token endpoint entirely; inject the token
  into index.html via a <script> tag at serve time instead
- Replace all inline token comparisons (!=) with hmac.compare_digest()
  to prevent timing side-channel attacks
- Block non-localhost binding by default; require --insecure flag to
  override (with warning log)
- Update frontend fetchJSON() to send Authorization header on all
  requests using the injected window.__HERMES_SESSION_TOKEN__
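The timing-safe comparison replacing the inline != checks looks like this (wrapper name is illustrative):

```python
import hmac


def token_matches(supplied: str, expected: str) -> bool:
    """hmac.compare_digest runs in time independent of where the
    strings first differ, closing the timing side channel that a
    plain != comparison leaks."""
    return hmac.compare_digest(
        supplied.encode("utf-8"), expected.encode("utf-8")
    )
```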

Credit: Callum (@0xca1x) and @migraine-sudo at FuzzMind Security Lab
2026-04-14 10:57:56 -07:00
asheriif
b583210c97 fix(gateway): fix regression causing display.streaming to override root streaming key 2026-04-14 10:52:23 -07:00
Teknium
8bb5973950 docs: add proxy mode documentation
- Matrix docs: full Proxy Mode section with architecture diagram,
  step-by-step setup (host + Docker), docker-compose.yml/Dockerfile
  examples, configuration reference, and limitations notes
- API Server docs: add Proxy Mode section explaining the api_server
  serves as the backend for gateway proxy mode
- Environment variables reference: add GATEWAY_PROXY_URL and
  GATEWAY_PROXY_KEY entries
2026-04-14 10:49:48 -07:00
Teknium
90c98345c9 feat: gateway proxy mode — forward messages to remote API server
When GATEWAY_PROXY_URL (or gateway.proxy_url in config.yaml) is set,
the gateway becomes a thin relay: it handles platform I/O (encryption,
threading, media) and delegates all agent work to a remote Hermes API
server via POST /v1/chat/completions with SSE streaming.

This enables the primary use case of running a Matrix E2EE gateway in
Docker on Linux while the actual agent runs on the host (e.g. macOS)
with full access to local files, memory, skills, and a unified session
store. Works for any platform adapter, not just Matrix.

Configuration:
  - GATEWAY_PROXY_URL env var (Docker-friendly)
  - gateway.proxy_url in config.yaml
  - GATEWAY_PROXY_KEY env var for API auth (matches API_SERVER_KEY)
  - X-Hermes-Session-Id header for session continuity

Architecture:
  - _get_proxy_url() checks env var first, then config.yaml
  - _run_agent_via_proxy() handles HTTP forwarding with SSE streaming
  - _run_agent() delegates to proxy path when URL is configured
  - Platform streaming (GatewayStreamConsumer) works through proxy
  - Returns compatible result dict for session store recording

Files changed:
  - gateway/run.py: proxy mode implementation (~250 lines)
  - hermes_cli/config.py: GATEWAY_PROXY_URL + GATEWAY_PROXY_KEY env vars
  - tests/gateway/test_proxy_mode.py: 17 tests covering config
    resolution, dispatch, HTTP forwarding, error handling, message
    filtering, and result shape validation

Closes discussion from Cars29 re: Matrix gateway mixed-mode issue.
2026-04-14 10:49:48 -07:00
zhiheng.liu
1ace9b4dc4 fix: memory_setup.py - write non-secret env vars, check all fields in status
Critical bug fixes only (no redundant changes):

1. **Write non-secret fields to .env** - Add non-secret fields with env_var to env_writes so they get saved to .env
2. **Status checks all fields** - Check all fields with env_var (both secret and non-secret), not just secrets

Fixes:
- OPENVIKING_ENDPOINT and similar non-secret env vars now get written to .env
- hermes memory status now shows ALL missing required fields
2026-04-14 10:49:35 -07:00
dirtyfancy
e964cfc403 fix(gateway): trigger memory provider shutdown on /new and /reset
The /new and /reset commands were not calling shutdown_memory_provider()
on the cached agent before eviction. This caused OpenViking (and any
memory provider that relies on session-end shutdown) to skip commit,
leaving memories un-indexed until idle timeout or gateway shutdown.

Add the missing shutdown_memory_provider() call in _handle_reset_command(),
matching the behavior already present in the session expiry watcher.

Fixes #7759
2026-04-14 10:49:35 -07:00
Disaster-Terminator
9bdfcd1b93 feat: sort tool search results by score and add corresponding unit test 2026-04-14 10:49:35 -07:00
Teknium
b867171291 fix: preserve profile name completion in dynamic shell completion
The dynamic parser walker from the contributor's commit lost the profile
name tab-completion that existed in the old static generators. This adds
it back for all three shells:

- Bash: _hermes_profiles() helper, -p/--profile completion, profile
  action→name completion (use/delete/show/alias/rename/export)
- Zsh: _hermes_profiles() function, -p/--profile argument spec, profile
  action case with name completion
- Fish: __hermes_profiles function, -s p -l profile flag, profile action
  completions

Also removes the dead fallback path in cmd_completion() that imported
the old static generators from profiles.py (parser is always available
via the lambda wiring) and adds 11 regression-prevention tests for
profile completion.
2026-04-14 10:45:42 -07:00
leozeli
c95b1c5096 fix(install): add fish shell support in install.sh
Fish users' $SHELL is /usr/bin/fish, which fell into the '*' case and
incorrectly wrote 'export PATH=...' to ~/.bashrc and ~/.zshrc — neither
of which fish reads.

- setup_path(): add fish) case that writes fish_add_path to
  ~/.config/fish/config.fish (fish-compatible PATH syntax)
- setup_path(): skip ~/.profile for fish (not sourced by fish)
- print_success(): show correct reload instruction for fish:
  source ~/.config/fish/config.fish

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-14 10:45:42 -07:00
leozeli
a686dbdd26 feat(cli): add dynamic shell completion for bash, zsh, and fish
Replaces the hardcoded completion stubs in profiles.py with a dynamic
generator that walks the live argparse parser tree at runtime.

- New hermes_cli/completion.py: _walk() recursively extracts all
  subcommands and flags; generate_bash/zsh/fish() produce complete
  scripts with nested subcommand support
- cmd_completion now accepts the parser via closure so completions
  always reflect the actual registered commands (including plugin-
  registered ones like honcho)
- completion subcommand now accepts bash | zsh | fish (fish requested
  in issue comments)
- Fix _SUBCOMMANDS set: add honcho, claw, plugins, acp, webhook,
  memory, dump, debug, backup, import, completion, logs so that
  multi-word session names after -c/-r are not broken by these commands
- Add tests/hermes_cli/test_completion.py: 17 tests covering parser
  extraction, alias deduplication, bash/zsh/fish output content,
  bash syntax validation, fish syntax validation, and subcommand
  drift prevention

Tested on Linux (Arch). bash and fish completion verified live.
zsh script passes syntax check (zsh not installed on test machine).
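The core of the parser walk can be sketched as (a minimal version; the real _walk() in hermes_cli/completion.py extracts more detail, and this sketch leans on argparse's private _actions/_SubParsersAction attributes):

```python
import argparse


def walk(parser: argparse.ArgumentParser, prefix: tuple = ()):
    """Yield (subcommand path, flag strings) for every level of
    the live parser tree, so completions always reflect the
    actually registered commands."""
    flags = [s for a in parser._actions for s in a.option_strings]
    yield prefix, flags
    for action in parser._actions:
        if isinstance(action, argparse._SubParsersAction):
            for name, sub in action.choices.items():
                yield from walk(sub, prefix + (name,))
```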

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-14 10:45:42 -07:00
N0nb0at
b21b3bfd68 feat(plugins): namespaced skill registration for plugin skill bundles
Add ctx.register_skill() API so plugins can ship SKILL.md files under
a 'plugin:skill' namespace, preventing name collisions with built-in
Hermes skills. skill_view() detects the ':' separator and routes to
the plugin registry while bare names continue through the existing
flat-tree scan unchanged.

Key additions:
- agent/skill_utils: parse_qualified_name(), is_valid_namespace()
- hermes_cli/plugins: PluginContext.register_skill(), PluginManager
  skill registry (find/list/remove)
- tools/skills_tool: qualified name dispatch in skill_view(),
  _serve_plugin_skill() with full guards (disabled, platform,
  injection scan), bundle context banner with sibling listing,
  stale registry self-heal
- Hoisted _INJECTION_PATTERNS to module level (dedup)
- Updated skill_view schema description

Based on PR #9334 by N0nb0at. Lean P1 salvage — omits autogen shim
(P2) for a simpler first merge.

Closes #8422
2026-04-14 10:42:58 -07:00
Dusk1e
4b47856f90 fix: load credentials from HERMES_HOME .env in trajectory_compressor 2026-04-14 10:24:19 -07:00
Teknium
8a002d4efc chore: add ChimingLiu to AUTHOR_MAP 2026-04-14 10:22:11 -07:00
Teknium
8ea9ceb44c fix: guard reply_to_text against DeletedReferencedMessage
Use getattr() for resolved.content since discord.py's
DeletedReferencedMessage lacks a content attribute. Adds test
for the deleted-message edge case.
2026-04-14 10:22:11 -07:00
ChimingLiu
7636baf49c feat(discord): extract reply text from message references 2026-04-14 10:22:11 -07:00
Teknium
0e7dd30acc fix(browser): fix Camofox JS eval endpoint, userId, and package rename (#9774)
- Fix _camofox_eval() endpoint: /tabs/{id}/eval → /tabs/{id}/evaluate
  (correct Camofox REST API path)
- Add required userId field to JS eval request body (all other Camofox
  endpoints already include it)
- Update npm package from @askjo/camoufox-browser ^1.0.0 to
  @askjo/camofox-browser ^1.5.2 (upstream package was renamed)
- Update tools_config.py post-setup to reference new package directory
  and npx command
- Bump Node engine requirement from >=18 to >=20 (required by
  camoufox-js dependency in camofox-browser v1.5.2)
- Regenerate package-lock.json

Fixes issues reported in PRs #9472, #8267, #7208 (stale).
2026-04-14 10:21:54 -07:00
Teknium
5f36b42b2e fix: nest msvcrt import inside fcntl except block
Match cron/scheduler.py pattern — only attempt msvcrt import when
fcntl is unavailable. Pre-declare msvcrt = None at module level so
_file_lock() references don't NameError on Linux.
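The resulting import pattern:

```python
# msvcrt is only attempted when fcntl is missing, and is
# pre-declared so later references don't NameError on Linux.
msvcrt = None
try:
    import fcntl
except ImportError:
    fcntl = None
    try:
        import msvcrt  # Windows file-locking fallback
    except ImportError:
        pass  # neither available: locking degrades gracefully
```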
2026-04-14 10:18:05 -07:00
Dusk1e
420d27098f fix(tools): keep memory tool available when fcntl is unavailable 2026-04-14 10:18:05 -07:00
Zhuofeng Wang
449c17e9a9 fix(gateway): support Telegram MarkdownV2 expandable blockquotes 2026-04-14 10:16:49 -07:00
shijianzhi
70611879de fix(cli): fix doctor checks for Kimi China credentials 2026-04-14 10:16:30 -07:00
Austin Pickett
206259d111 Merge pull request #9701 from NousResearch/fix/dashboard-routing-v2
feat(web): re-apply dashboard UI improvements on top of i18n
2026-04-14 08:46:17 -07:00
Austin Pickett
4ffaac542b fix(web): i18n fixes for sidebar and dropdown labels
- Add missing translation keys: skills.resultCount, skills.toolsetLabel
- Replace hardcoded "result(s)" and "toolset" with translated strings
- Fix stale useMemo in SkillsPage allCategories (missing `t` dependency)
  causing sidebar category names to stay in English after language switch

Made-with: Cursor
2026-04-14 10:32:51 -04:00
Austin Pickett
e88aa8a58c feat(web): re-apply dashboard UI improvements on top of i18n
Re-applies changes from #9471 that were overwritten by the i18n PR:

- URL-based routing via react-router-dom (NavLink, Routes, BrowserRouter)
- Replace emoji icons with lucide-react in ConfigPage and SkillsPage
- Sidebar layout for ConfigPage, SkillsPage, and LogsPage
- Custom dropdown Select component (SelectOption) in CronPage
- Remove all non-functional rounded borders across the UI
- Fixed header with proper content offset

Made-with: Cursor
2026-04-14 10:23:43 -04:00
Ben Barclay
16f9d02084 Merge pull request #9475 from NousResearch/docs/fix-docker-version-command
docs: update docker version check command
2026-04-14 20:27:24 +10:00
Teknium
7ad47ace51 fix: resolve remaining 4 CI test failures (#9543)
- test_auth_commands: suppress _seed_from_singletons auto-seeding that
  adds extra credentials from CI env (same pattern as nearby tests)
- test_interrupt: clear stale _interrupted_threads set to prevent
  thread ident reuse from prior tests in same xdist worker
- test_code_execution: add watch_patterns to _BLOCKED_TERMINAL_PARAMS
  to match production _TERMINAL_BLOCKED_PARAMS
2026-04-14 02:18:38 -07:00
Teknium
b4fcec6412 fix: prevent streaming cursor from appearing as standalone messages (#9538)
During rapid tool-calling, the model often emits 1-2 tokens before
switching to tool calls. The stream consumer would create a new message
with 'X ▉' (short text + cursor), and if the follow-up edit to strip
the cursor was rate-limited by the platform, the cursor remained as
a permanent standalone message — reported on Telegram as 'white box'
artifacts.

Add a minimum-content guard in _send_or_edit: when creating a new
standalone message (no existing message_id), require at least 4
visible characters alongside the cursor before sending. Shorter text
accumulates into the next streaming segment instead.

This prevents cursor-only 'tofu' messages across all platforms without
affecting normal streaming (edits to existing messages, final sends
without cursor, and messages with substantial text are all unaffected).
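The guard reduces to a check like this (helper name and the 4-character threshold location are illustrative; the real logic lives inside _send_or_edit):

```python
CURSOR = "\u2589"  # the streaming cursor block character
MIN_VISIBLE_CHARS = 4


def should_send_new_message(text: str, message_id: str | None) -> bool:
    """Edits to an existing message always go through; a brand-new
    standalone message needs a few visible characters besides the
    cursor, otherwise the text accumulates into the next segment."""
    if message_id is not None:
        return True  # editing an existing message: unaffected
    visible = text.replace(CURSOR, "").strip()
    return len(visible) >= MIN_VISIBLE_CHARS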

Reported by @michalkomar on X.
2026-04-14 01:52:42 -07:00
Teknium
2558d28a9b fix: resolve CI test failures — add missing functions, fix stale tests (#9483)
Production fixes:
- Add clear_session_context() to hermes_logging.py (fixes 48 teardown errors)
- Add clear_session() to tools/approval.py (fixes 9 setup errors)
- Add SyncError M_UNKNOWN_TOKEN check to Matrix _sync_loop (bug fix)
- Fall back to inline api_key in named custom providers when key_env
  is absent (runtime_provider.py)

Test fixes:
- test_memory_user_id: use builtin+external provider pair, fix honcho
  peer_name override test to match production behavior
- test_display_config: remove TestHelpers for non-existent functions
- test_auxiliary_client: fix OAuth tokens to match _is_oauth_token
  patterns, replace get_vision_auxiliary_client with resolve_vision_provider_client
- test_cli_interrupt_subagent: add missing _execution_thread_id attr
- test_compress_focus: add model/provider/api_key/base_url/api_mode
  to mock compressor
- test_auth_provider_gate: add autouse fixture to clean Anthropic env
  vars that leak from CI secrets
- test_opencode_go_in_model_list: accept both 'built-in' and 'hermes'
  source (models.dev API unavailable in CI)
- test_email: verify email Platform enum membership instead of source
  inspection (build_channel_directory now uses dynamic enum loop)
- test_feishu: add bot_added/bot_deleted handler mocks to _Builder
- test_ws_auth_retry: add AsyncMock for sync_store.get_next_batch,
  add _pending_megolm and _joined_rooms to Matrix adapter mocks
- test_restart_drain: monkeypatch-delete INVOCATION_ID (systemd sets
  this in CI, changing the restart call signature)
- test_session_hygiene: add user_id to SessionSource
- test_session_env: use relative baseline for contextvar clear check
  (pytest-xdist workers share context)
2026-04-14 01:43:45 -07:00
Jiawen-lee
2cfd2dafc6 feat(gateway): add ignored_threads config for Telegram 2026-04-14 01:40:32 -07:00
Teknium
1acf81fdf5 docs: add QQBot to all 14 docs pages (full platform parity)
- sidebars.ts: sidebar navigation entry
- webhooks.md: deliver field routing table
- configuration.md: platform keys list
- sessions.md: platform identifiers table
- features/cron.md: delivery target table
- developer-guide/architecture.md: adapter listing
- developer-guide/cron-internals.md: delivery target table
- developer-guide/gateway-internals.md: file tree listing
- guides/cron-troubleshooting.md: supported platforms list
- integrations/index.md: platform links list
- reference/toolsets-reference.md: toolset table

(qqbot.md, environment-variables.md, and messaging/index.md were
already included in the contributor's original PR)
2026-04-14 00:11:49 -07:00
Teknium
8d545da3ff fix: add platform lock, send retry, message splitting, REST one-shot, shared strip_markdown
Improvements from our earlier #8269 salvage work applied to #7616:

- Platform token lock: acquire_scoped_lock/release_scoped_lock prevents
  two profiles from double-connecting the same QQ bot simultaneously
- Send retry with exponential backoff (3 attempts, 1s/2s/4s) with
  permanent vs transient error classification (matches Telegram pattern)
- Proper long-message splitting via truncate_message() instead of
  hard-truncating at MAX_MESSAGE_LENGTH (preserves code blocks, adds 1/N)
- REST-based one-shot send in send_message_tool — uses QQ Bot REST API
  directly with httpx instead of creating a full WebSocket adapter per
  message (fixes the connect→send race condition)
- Use shared strip_markdown() from helpers.py instead of 15 lines of
  inline regex with import-inside-method (DRY, same as BlueBubbles/SMS)
- format_message() now wired into send() pipeline
2026-04-14 00:11:49 -07:00
Teknium
4654f75627 fix: QQBot missing integration points, timestamp parsing, test fix
- Add Platform.QQBOT to _UPDATE_ALLOWED_PLATFORMS (enables /update command)
- Add 'qqbot' to webhook cross-platform delivery routing
- Add 'qqbot' to hermes dump platform detection
- Fix test_name_property casing: 'QQBot' not 'QQBOT'
- Add _parse_qq_timestamp() for ISO 8601 + integer ms compatibility
  (QQ API changed timestamp format — from PR #2411 finding)
- Wire timestamp parsing into all 4 message handlers
2026-04-14 00:11:49 -07:00
walli
884cd920d4 feat(gateway): unify QQBot branding, add PLATFORM_HINTS, fix streaming, restore missing setup functions
- Rename platform from 'qq' to 'qqbot' across all integration points
  (Platform enum, toolset, config keys, import paths, file rename qq.py → qqbot.py)
- Add PLATFORM_HINTS for QQBot in prompt_builder (QQ supports markdown)
- Set SUPPORTS_MESSAGE_EDITING = False to skip streaming on QQ
  (prevents duplicate messages from non-editable partial + final sends)
- Add _send_qqbot() standalone send function for cron/send_message tool
- Add interactive _setup_qq() wizard in hermes_cli/setup.py
- Restore missing _setup_signal/email/sms/dingtalk/feishu/wecom/wecom_callback
  functions that were lost during the original merge
2026-04-14 00:11:49 -07:00
Junjun Zhang
87bfc28e70 feat: add QQ Bot platform adapter (Official API v2)
Add full QQ Bot integration via the Official QQ Bot API (v2):
- WebSocket gateway for inbound events (C2C, group, guild, DM)
- REST API for outbound text/markdown/media messages
- Voice transcription (Tencent ASR + configurable STT provider)
- Attachment processing (images, voice, files)
- User authorization (allowlist + allow-all + DM pairing)

Integration points:
- gateway: Platform.QQ enum, adapter factory, allowlist maps
- CLI: setup wizard, gateway config, status display, tools config
- tools: send_message cross-platform routing, toolsets
- cron: delivery platform support
- docs: QQ Bot setup guide
2026-04-14 00:11:49 -07:00
Teknium
eb44abd6b1 feat: improve file search UX — fuzzy @ completions, mtime sorting, better suggestions (#9467)
Three improvements to file search based on user feedback:

1. Fuzzy @ completions (commands.py):
   - Bare @query now does project-wide fuzzy file search instead of
     prefix-only directory listing
   - Uses rg --files with 5-second cache for responsive completions
   - Scoring: exact name (100) > prefix (80) > substring (60) >
     path contains (40) > subsequence with boundary bonus (35/25)
   - Bare @ with no query shows recently modified files first

2. Mtime-sorted file search (file_operations.py):
   - _search_files_rg now uses --sortr=modified (rg 13+) to surface
     recently edited files first
   - Falls back to unsorted on older rg versions

3. Improved file-not-found suggestions (file_operations.py):
   - Replaced crude character-set overlap with ranked scoring:
     same basename (90) > prefix (70) > substring (60) >
     reverse substring (40) > same extension (30)
   - search_files path-not-found now suggests similar directories
     from the parent
2026-04-13 23:54:45 -07:00
Greer Guthrie
c7e2fe655a fix: make tool registry reads thread-safe 2026-04-13 23:52:32 -07:00
Teknium
6dc8f8e9c0 feat(skin): add warm-lightmode skin from PR #4811
Add a second light-mode skin option with warm brown/parchment tones,
adapted from ygd58's contribution in PR #4811. Includes completion
menu and status bar color keys for full light-terminal support.

Co-authored-by: buray <78954051+ygd58@users.noreply.github.com>
2026-04-13 23:51:21 -07:00
Liu Chongwei
bc93641c4f feat(skins): add built-in daylight skin 2026-04-13 23:51:21 -07:00
Ben Barclay
9ffc26bc8f docs: update docker version check command
Replace `docker exec hermes hermes version` with
`docker run -it --rm nousresearch/hermes-agent:latest version`
2026-04-14 06:37:50 +00:00
Teknium
a2ea237db2 feat: add internationalization (i18n) to web dashboard — English + Chinese (#9453)
Add a lightweight i18n system to the web dashboard with English (default) and
Chinese language support. A language switcher with flag icons is placed in the
header bar, allowing users to toggle between languages. The choice persists
to localStorage.

Implementation:
- src/i18n/ — types, translation files (en.ts, zh.ts), React context + hook
- LanguageSwitcher component shows the *other* language's flag as the toggle
- I18nProvider wraps the app in main.tsx
- All 8 pages + OAuth components updated to use t() translation calls
- Zero new dependencies — pure React context + localStorage
2026-04-13 23:19:13 -07:00
Teknium
19199cd38d fix: clamp 'minimal' reasoning effort to 'low' on Responses API (#9429)
GPT-5.4 supports none/low/medium/high/xhigh but not 'minimal'.
Users may configure 'minimal' via OpenRouter conventions, which would
cause a 400 on native OpenAI. Clamp to 'low' in the codex_responses
path before sending.
2026-04-13 23:11:13 -07:00
Teknium
38ad158b6b fix: auto-correct close model name matches in /model validation (#9424)
* feat(skills): add fitness-nutrition skill to optional-skills

Cherry-picked from PR #9177 by @haileymarshall.

Adds a fitness and nutrition skill for gym-goers and health-conscious users:
- Exercise search via wger API (690+ exercises, free, no auth)
- Nutrition lookup via USDA FoodData Central (380K+ foods, DEMO_KEY fallback)
- Offline body composition calculators (BMI, TDEE, 1RM, macros, body fat %)
- Pure stdlib Python, no pip dependencies

Changes from original PR:
- Moved from skills/ to optional-skills/health/ (correct location)
- Fixed BMR formula in FORMULAS.md (removed confusing -5+10, now just +5)
- Fixed author attribution to match PR submitter
- Marked USDA_API_KEY as optional (DEMO_KEY works without signup)

Also adds optional env var support to the skill readiness checker:
- New 'optional: true' field in required_environment_variables entries
- Optional vars are preserved in metadata but don't block skill readiness
- Optional vars skip the CLI capture prompt flow
- Skills with only optional missing vars show as 'available' not 'setup_needed'

* fix: auto-correct close model name matches in /model validation

When a user types a model name with a minor typo (e.g. gpt5.3-codex instead
of gpt-5.3-codex), the validation now auto-corrects to the closest match
instead of accepting the wrong name with a warning.

Uses difflib get_close_matches with cutoff=0.9 to avoid false corrections
(e.g. gpt-5.3 should not silently become gpt-5.4). Applied consistently
across all three validation paths: codex provider, custom endpoints, and
generic API-probed providers.

The validate_requested_model() return dict gains an optional corrected_model
key that switch_model() applies before building the result.

Reported by Discord user — /model gpt5.3-codex was accepted with a warning
but would fail at the API level.

---------

Co-authored-by: haileymarshall <haileymarshall@users.noreply.github.com>
2026-04-13 23:09:39 -07:00
Teknium
35424f8fc1 chore: add bennytimz to AUTHOR_MAP 2026-04-13 23:03:08 -07:00
oluwadareab12
a91b9bb855 feat(skills): add drug-discovery optional skill — ChEMBL, PubChem, OpenFDA, ADMET analysis
Pharmaceutical research skill covering bioactive compound search (ChEMBL),
drug-likeness screening (Lipinski Ro5 + Veber via PubChem), drug-drug
interaction lookups (OpenFDA), gene-disease associations (OpenTargets
GraphQL), and ADMET reasoning guidance. All free public APIs, zero auth,
stdlib-only Python. Includes helper scripts for batch Ro5 screening and
target-to-compound pipelines.

Moved to optional-skills/research/ (niche domain skill, not built-in).
Fixed: authors→author frontmatter, removed unused jq prerequisite,
bare except→except Exception.

Co-authored-by: bennytimz <oluwadareab12@gmail.com>
Salvaged from PR #8695.
2026-04-13 23:03:08 -07:00
Teknium
d631431872 feat: prompt for display name when adding custom providers (#9420)
During custom endpoint setup, users are now asked for a display name
with the auto-generated name as the default. Typing 'Ollama' or
'LM Studio' replaces the generic 'Local (localhost:11434)' in the
provider menu.

Extracts _auto_provider_name() for reuse and adds a name= parameter
to _save_custom_provider() so the caller can pass through the
user-chosen label.
2026-04-13 22:41:00 -07:00
Kenny Xie
cdd44817f2 fix(anthropic): send fast mode speed via extra_body 2026-04-13 22:32:39 -07:00
Teknium
110892ff69 docs: move Xiaomi MiMo up in README provider list 2026-04-13 22:30:44 -07:00
Teknium
3de2b98503 fix(streaming): filter <think> blocks from gateway stream consumer
Models like MiniMax emit inline <think>...</think> reasoning blocks in
their content field. The CLI already suppresses these via a state machine
in _stream_delta, but the gateway's GatewayStreamConsumer had no
equivalent filtering — raw think blocks were streamed directly to
Discord/Telegram/Slack.

The fix adds a _filter_and_accumulate() method that mirrors the CLI's
approach: a state machine tracks whether we're inside a reasoning block
and silently discards the content. Includes the same block-boundary
check (tag must appear at line start or after whitespace-only prefix)
to avoid false positives when models mention <think> in prose.

Handles all tag variants: <think>, <thinking>, <THINKING>, <thought>,
<reasoning>, <REASONING_SCRATCHPAD>.

Also handles edge cases:
- Tags split across streaming deltas (partial tag buffering)
- Unclosed blocks (content suppressed until stream ends)
- Multiple consecutive blocks
- _flush_think_buffer on stream end for held-back partial tags

Adds 22 unit tests + 1 integration test covering all scenarios.
2026-04-13 22:16:20 -07:00
helix4u
e08590888a fix: honor interrupts during MCP tool waits 2026-04-13 22:14:55 -07:00
Teknium
69d619cf89 docs: add Hugging Face and Xiaomi MiMo to README provider list (#9406)
* feat(skills): add fitness-nutrition skill to optional-skills

Cherry-picked from PR #9177 by @haileymarshall.

Adds a fitness and nutrition skill for gym-goers and health-conscious users:
- Exercise search via wger API (690+ exercises, free, no auth)
- Nutrition lookup via USDA FoodData Central (380K+ foods, DEMO_KEY fallback)
- Offline body composition calculators (BMI, TDEE, 1RM, macros, body fat %)
- Pure stdlib Python, no pip dependencies

Changes from original PR:
- Moved from skills/ to optional-skills/health/ (correct location)
- Fixed BMR formula in FORMULAS.md (removed confusing -5+10, now just +5)
- Fixed author attribution to match PR submitter
- Marked USDA_API_KEY as optional (DEMO_KEY works without signup)

Also adds optional env var support to the skill readiness checker:
- New 'optional: true' field in required_environment_variables entries
- Optional vars are preserved in metadata but don't block skill readiness
- Optional vars skip the CLI capture prompt flow
- Skills with only optional missing vars show as 'available' not 'setup_needed'

* docs: add Hugging Face and Xiaomi MiMo to README provider list

---------

Co-authored-by: haileymarshall <haileymarshall@users.noreply.github.com>
2026-04-13 22:12:46 -07:00
haileymarshall
f0b353bade feat(skills): add fitness-nutrition skill to optional-skills
Cherry-picked from PR #9177 by @haileymarshall.

Adds a fitness and nutrition skill for gym-goers and health-conscious users:
- Exercise search via wger API (690+ exercises, free, no auth)
- Nutrition lookup via USDA FoodData Central (380K+ foods, DEMO_KEY fallback)
- Offline body composition calculators (BMI, TDEE, 1RM, macros, body fat %)
- Pure stdlib Python, no pip dependencies

Changes from original PR:
- Moved from skills/ to optional-skills/health/ (correct location)
- Fixed BMR formula in FORMULAS.md (removed confusing -5+10, now just +5)
- Fixed author attribution to match PR submitter
- Marked USDA_API_KEY as optional (DEMO_KEY works without signup)

Also adds optional env var support to the skill readiness checker:
- New 'optional: true' field in required_environment_variables entries
- Optional vars are preserved in metadata but don't block skill readiness
- Optional vars skip the CLI capture prompt flow
- Skills with only optional missing vars show as 'available' not 'setup_needed'
2026-04-13 22:10:00 -07:00
Teknium
62fb6b2cd8 fix: guard zero context length display + add 19 tests for model info
- ModelInfoCard: hide card when effective_context_length <= 0 instead
  of showing 'Context Window: 0 auto-detected'
- Add tests for _normalize_config_for_web model_context_length extraction
- Add tests for _denormalize_config_from_web round-trip (write back,
  remove on zero, upgrade bare string to dict, coerce string input)
- Add tests for CONFIG_SCHEMA ordering (model_context_length after model)
- Add tests for GET /api/model/info endpoint (dict config, bare string,
  empty model, capabilities, graceful error handling)
2026-04-13 22:04:35 -07:00
kshitijk4poor
8fd3093f49 feat(web): add context window support to dashboard config
- Add GET /api/model/info endpoint that resolves model metadata using the
  same 10-step context-length detection chain the agent uses. Returns
  auto-detected context length, config override, effective value, and
  model capabilities (tools, vision, reasoning, max output, model family).

- Surface model.context_length as model_context_length virtual field in
  the config normalize/denormalize cycle. 0 = auto-detect (default),
  positive value overrides. Writing 0 removes context_length from the
  model dict on disk.

- Add ModelInfoCard component showing resolved context window (e.g. '1M
  auto-detected' or '500K override — auto: 1M'), max output tokens, and
  colored capability badges (Tools, Vision, Reasoning, model family).

- Inject ModelInfoCard between model field and context_length override in
  ConfigPage General tab. Card re-fetches on model change and after save.

- Insert model_context_length right after model in CONFIG_SCHEMA ordering
  so the three elements (model input → info card → override) are adjacent.
2026-04-13 22:04:35 -07:00
Gianfranco Piana
eabc0a2f66 feat(plugins): let pre_tool_call hooks block tool execution
Plugins can now return {"action": "block", "message": "reason"} from
their pre_tool_call hook to prevent a tool from executing. The error
message is returned to the model as a tool result so it can adjust.

Covers both execution paths: handle_function_call (model_tools.py) and
agent-level tools (run_agent.py _invoke_tool + sequential/concurrent).
Blocked tools skip all side effects (counter resets, checkpoints,
callbacks, read-loop tracker).

Adds skip_pre_tool_call_hook flag to avoid double-firing the hook when
run_agent.py already checked and then calls handle_function_call.

Salvaged from PR #5385 (gianfrancopiana) and PR #4610 (oredsecurity).
2026-04-13 22:01:49 -07:00
Austin Pickett
ea74f61d98 Merge pull request #9370 from NousResearch/fix/dashboard-routing
feat: react-router, sidebar layout, sticky header, dropdown component…
2026-04-13 21:23:48 -07:00
Teknium
943c01536f feat: add openrouter/elephant-alpha to curated model lists (#9378)
* Add hermes debug share instructions to all issue templates

- bug_report.yml: Add required Debug Report section with hermes debug share
  and /debug instructions, make OS/Python/Hermes version optional (covered
  by debug report), demote old logs field to optional supplementary
- setup_help.yml: Replace hermes doctor reference with hermes debug share,
  add Debug Report section with fallback chain (debug share -> --local -> doctor)
- feature_request.yml: Add optional Debug Report section for environment context

All templates now guide users to run hermes debug share (or /debug in chat)
and paste the resulting paste.rs links, giving maintainers system info,
config, and recent logs in one step.

* feat: add openrouter/elephant-alpha to curated model lists

- Add to OPENROUTER_MODELS (free, positioned above GPT models)
- Add to _PROVIDER_MODELS["nous"] mirror list
- Add 256K context window fallback in model_metadata.py
2026-04-13 21:16:14 -07:00
Teknium
dd86deef13 feat(ci): add contributor attribution check on PRs (#9376)
Adds a CI workflow that blocks PRs introducing commits with
unmapped author emails. Checks each new commit's author email
against AUTHOR_MAP in scripts/release.py — GitHub noreply emails
auto-pass, but personal/work emails must be mapped.

Also adds --strict and --diff-base flags to contributor_audit.py
for programmatic use. --strict exits 1 when new unmapped emails
are found; --diff-base scopes the check to only flag emails from
commits after a given ref (grandfathers existing unknowns).

Prevention for the 97-unmapped-email gap found in the April 2026
contributor audit.
2026-04-13 21:13:08 -07:00
Teknium
5719c1f391 fix: add 75 contributor email→username mappings + .mailmap (#9358)
Audit of all external contributor PRs revealed 97 commit emails
not mapped in AUTHOR_MAP, meaning contributors weren't properly
credited in release notes. Cross-referenced via:
- GitHub API email search (9 resolved before rate limit)
- Salvage PR body mentions (@username in descriptions)
- Git noreply email cross-reference (same person, both emails)
- GH contributor list username matching

Also adds .mailmap for git shortlog/log display consistency.

Remaining 22 unmapped emails need GH API resolution when rate
limit resets — the contributor_audit.py script will flag them.

Addresses ColourfulWhite's report about missing contributor tags.
2026-04-13 21:10:39 -07:00
Austin Pickett
bc3844c907 feat: react-router, sidebar layout, sticky header, dropdown component, remove emojis, rounded corners 2026-04-14 00:01:18 -04:00
Teknium
5621fc449a chore: rename AI Gateway → Vercel AI Gateway, move Xiaomi to #5 (#9326)
- Rename 'AI Gateway' to 'Vercel AI Gateway' across auth, models,
  doctor, setup, and tests.
- Move Xiaomi MiMo to position #5 in the provider picker.
2026-04-13 19:51:54 -07:00
Teknium
0cc7f79016 fix(streaming): prevent duplicate Telegram replies when stream task is cancelled (#9319)
When the 5-second stream_task timeout in gateway/run.py expires (due to
slow Telegram API calls from rate limiting after several messages), the
stream consumer is cancelled via asyncio.CancelledError. The
CancelledError handler did a best-effort final edit but never set
final_response_sent, so the gateway fell through to the normal send path
and delivered the full response again as a reply — causing a duplicate.

The fix: in the CancelledError handler, set final_response_sent = True
when already_sent is True (i.e., the stream consumer had already
delivered content to the user). This tells the gateway's already_sent
check that the response was delivered, preventing the duplicate send.

Adds two tests verifying the cancellation behavior:
- Cancelled with already_sent=True → final_response_sent=True (no dup)
- Cancelled with already_sent=False → final_response_sent=False (normal
  send path proceeds)

Reported by community user hume on Discord.
2026-04-13 19:22:43 -07:00
Teknium
d15efc9c1b fix: correct GPT-5 family context lengths in fallback defaults (#9309)
The generic 'gpt-5' fallback was set to 128,000 — which is the max
OUTPUT tokens, not the context window. GPT-5 base and most variants
(codex, mini) have 400,000 context. This caused /model to report
128k for models like gpt-5.3-codex when models.dev was unavailable.

Added specific entries for GPT-5 variants with different context sizes:
- gpt-5.4, gpt-5.4-pro: 1,050,000 (1.05M)
- gpt-5.4-mini, gpt-5.4-nano: 400,000
- gpt-5.3-codex-spark: 128,000 (reduced)
- gpt-5.1-chat: 128,000 (chat variant)
- gpt-5 (catch-all): 400,000

Sources: https://developers.openai.com/api/docs/models
2026-04-13 19:22:23 -07:00
Teknium
f6626fccee refactor: remove provider tier system — flat picker in hermes model (#9303)
Remove the two-tier (top/extended) provider picker that hid most
providers behind a 'More providers...' submenu. All providers now
appear in a single flat list.

- Remove tier field from ProviderEntry namedtuple
- Remove tier values from all CANONICAL_PROVIDERS entries
- Flatten the hermes model picker (no more 'More...' submenu)
- Move 'Custom endpoint' to the bottom of the main list
2026-04-13 18:51:13 -07:00
Teknium
f324222b79 fix: add vLLM/local server error patterns + MCP initial connection retry (#9281)
Port two improvements inspired by Kilo-Org/kilocode analysis:

1. Error classifier: add context overflow patterns for vLLM, Ollama,
   and llama.cpp/llama-server. These local inference servers return
   different error formats than cloud providers (e.g., 'exceeds the
   max_model_len', 'context length exceeded', 'slot context'). Without
   these patterns, context overflow errors from local servers are
   misclassified as format errors, causing infinite retries instead
   of triggering compression.

2. MCP initial connection retry: previously, if the very first
   connection attempt to an MCP server failed (e.g., transient DNS
   blip at startup), the server was permanently marked as failed with
   no retry. Post-connect reconnection had 5 retries with exponential
   backoff, but initial connection had zero. Now initial connections
   retry up to 3 times with backoff before giving up, matching the
   resilience of post-connect reconnection.
   (Inspired by Kilo Code's MCP server disappearing fix in v1.3.3)

Tests: 6 new error classifier tests, 4 new MCP retry tests, 1
updated existing test. All 276 affected tests pass.
2026-04-13 18:46:14 -07:00
arthurbr11
0a4cf5b3e1 feat(providers): add Arcee AI as direct API provider
Adds Arcee AI as a standard direct provider (ARCEEAI_API_KEY) with
Trinity models: trinity-large-thinking, trinity-large-preview, trinity-mini.

Standard OpenAI-compatible provider checklist: auth.py, config.py,
models.py, main.py, providers.py, doctor.py, model_normalize.py,
model_metadata.py, setup.py, trajectory_compressor.py.

Based on PR #9274 by arthurbr11, simplified to a standard direct
provider without dual-endpoint OpenRouter routing.
2026-04-13 18:40:06 -07:00
Agent
78fa758451 feat(web): make Web UI responsive for mobile
Some checks failed
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Nix / nix (ubuntu-latest) (pull_request) Failing after 10s
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 1m0s
Docs Site Checks / docs-site-checks (pull_request) Failing after 4m1s
Tests / e2e (pull_request) Successful in 2m26s
Tests / test (pull_request) Failing after 36m31s
Nix / nix (macos-latest) (pull_request) Has been cancelled
- Nav: icons only on mobile, icon+label on sm+
- Brand: abbreviated "H A" on mobile, full "Hermes Agent" on sm+
- Content: reduced padding on mobile (px-3 vs px-6)
- StatusPage: session cards stack vertically on mobile, truncate
  overflow text, strip model namespace for brevity
- ConfigPage: sidebar becomes horizontal scrollable pills on mobile
  instead of fixed left column, search hidden on mobile
- SessionsPage: title + search stack vertically on mobile, search
  goes full-width
- Card component: add overflow-hidden to prevent content bleed
- Body/root: add overflow-x-hidden to prevent horizontal scroll
- Footer: reduced font sizes on mobile

All changes use Tailwind responsive breakpoints (sm: prefix).
No logic changes — purely layout/CSS adjustments.
2026-04-13 17:16:28 -07:00
Teknium
ac80bd61ad test: add regression tests for custom_providers multi-model dedup and grouping
Tests for salvaged PRs #9233 and #8011.
2026-04-13 16:41:30 -07:00
Ubuntu
ec9bf9e378 feat(model-picker): group custom_providers by name into a single row per provider
The /model picker currently renders one row per ``custom_providers``
entry. When several entries share the same provider name (e.g. four
``ollama-cloud`` entries for ``qwen3-coder``, ``glm-5.1``, ``kimi-k2``,
``minimax-m2.7``), users see four separate "Ollama Cloud" rows in the
picker, which is confusing UX — there is only one Ollama Cloud
provider, so there should be one row containing four models.

This PR groups ``custom_providers`` entries that share the same provider
name into a single picker row while keeping entries with distinct names
as separate rows. So:

* Four entries named ``Ollama Cloud`` → one "Ollama Cloud" row with
  four models inside.
* One entry named ``Ollama Cloud`` and one named ``Moonshot`` → two
  separate rows, one model each.

Implementation
--------------
Replaces the single-pass loop in ``list_authenticated_providers()`` with
a two-pass approach:

1. First pass: build an ``OrderedDict`` keyed by ``custom_provider_slug(name)``,
   accumulating ``models`` per group while preserving discovery order.
2. Second pass: iterate the groups and append one result row per group,
   skipping any slug that already appeared in an earlier provider source
   (the existing ``seen_slugs`` guard).

Insertion order is preserved via ``OrderedDict``, so providers and
their models still appear in the order the user listed them in
``custom_providers``. No new dependencies.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 16:41:30 -07:00
akhater
01f71007d0 fix(config): include model field in custom_providers dedup key
get_compatible_custom_providers() deduplicates by (name, base_url) which
collapses multiple models under the same provider into a single entry.
For example, 7 Ollama Cloud entries with different models become 1.
Adding model to the tuple preserves all entries.
2026-04-13 16:41:30 -07:00
Teknium
32cea0c08d fix: dashboard shows Nous Portal as 'not connected' despite active auth (#9261)
The dashboard device-code flow (_nous_poller in web_server.py) saved
credentials to the credential pool only, while get_nous_auth_status()
only checked the auth store (auth.json). This caused the Keys tab to
show 'not connected' even when the backend was fully authenticated.

Two fixes:
1. get_nous_auth_status() now checks the credential pool first (like
   get_codex_auth_status() already does), then falls back to the auth
   store.
2. _nous_poller now also persists to the auth store after saving to
   the credential pool, matching the CLI flow (_login_nous).

Adds 3 tests covering pool-only, auth-store-fallback, and empty-state
scenarios.
2026-04-13 16:32:11 -07:00
Teknium
8d023e43ed refactor: remove dead code — 1,784 lines across 77 files (#9180)
Deep scan with vulture, pyflakes, and manual cross-referencing identified:
- 41 dead functions/methods (zero callers in production)
- 7 production-dead functions (only test callers, tests deleted)
- 5 dead constants/variables
- ~35 unused imports across agent/, hermes_cli/, tools/, gateway/

Categories of dead code removed:
- Refactoring leftovers: _set_default_model, _setup_copilot_reasoning_selection,
  rebuild_lookups, clear_session_context, get_logs_dir, clear_session
- Unused API surface: search_models_dev, get_pricing, skills_categories,
  get_read_files_summary, clear_read_tracker, menu_labels, get_spinner_list
- Dead compatibility wrappers: schedule_cronjob, list_cronjobs, remove_cronjob
- Stale debug helpers: get_debug_session_info copies in 4 tool files
  (centralized version in debug_helpers.py already exists)
- Dead gateway methods: send_emote, send_notice (matrix), send_reaction
  (bluebubbles), _normalize_inbound_text (feishu), fetch_room_history
  (matrix), _start_typing_indicator (signal), parse_feishu_post_content
- Dead constants: NOUS_API_BASE_URL, SKILLS_TOOL_DESCRIPTION,
  FILE_TOOLS, VALID_ASPECT_RATIOS, MEMORY_DIR
- Unused UI code: _interactive_provider_selection,
  _interactive_model_selection (superseded by prompt_toolkit picker)

Test suite verified: 609 tests covering affected files all pass.
Tests for removed functions deleted. Tests using removed utilities
(clear_read_tracker, MEMORY_DIR) updated to use internal APIs directly.
2026-04-13 16:32:04 -07:00
Teknium
a66fc1365d fix: add files:read to SLACK_BOT_TOKEN description in config.py
Missed in the original PR — the env var description also lists required scopes.
2026-04-13 16:31:38 -07:00
helix4u
448b8bfb7c docs: add slack files:read scope 2026-04-13 16:31:38 -07:00
Teknium
def8b959b8 fix: add contributor audit script + fix missed contributors (#9264)
Three problems fixed:

1. bobashopcashier missing from v0.9.0 contributor list despite
   authoring the gateway drain PR (#7290, salvaged into #7503).
   Their email (kennyx102@gmail.com) was missing from AUTHOR_MAP.

2. release.py only scanned git commit authors, missing Co-authored-by
   trailers. Now parse_coauthors() extracts trailers from commit bodies.

3. No mechanism to detect contributors from salvaged PRs (where original
   author only appears in PR description, not git log).

Changes:
- scripts/release.py: add kennyx102@gmail.com to AUTHOR_MAP, enhance
  get_commits() to parse Co-authored-by trailers, filter AI assistants
  (Claude, Copilot, Cursor Agent) from co-author lists
- scripts/contributor_audit.py: new script that cross-references git
  authors, co-author trailers, and salvaged PR descriptions. Reports
  unknown emails and contributors missing from release notes.
- RELEASE_v0.9.0.md: add bobashopcashier to community contributors

Usage:
  python scripts/contributor_audit.py --since-tag v2026.4.8
  python scripts/contributor_audit.py --since-tag v2026.4.8 --release-file RELEASE_v0.9.0.md
2026-04-13 16:31:27 -07:00
helix4u
f94f53cc22 fix(matrix): disable streaming cursor decoration on Matrix 2026-04-13 16:31:02 -07:00
helix4u
0ffb6f2dae fix(matrix): skip cursor-only stream placeholder messages 2026-04-13 16:31:02 -07:00
Teknium
b27eaaa4db fix: improve ACP type check and restore comment accuracy
- Use isinstance() with try/except import for CopilotACPClient check
  in _to_async_client instead of fragile __class__.__name__ string check
- Restore accurate comment: GPT-5.x models *require* (not 'often require')
  the Responses API on OpenAI/OpenRouter; ACP is the exception, not a
  softening of the requirement
- Add inline comment explaining the ACP exclusion rationale
2026-04-13 16:17:43 -07:00
helix4u
8680f61f8b fix(copilot-acp): keep acp runtime off responses path 2026-04-13 16:17:43 -07:00
Teknium
063244bb16 test: add coverage for plugin context engine init (#9071)
Some checks failed
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Nix / nix (ubuntu-latest) (pull_request) Failing after 3s
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 47s
Tests / e2e (pull_request) Successful in 2m35s
Docs Site Checks / docs-site-checks (pull_request) Failing after 6m6s
Tests / test (pull_request) Failing after 34m29s
Nix / nix (macos-latest) (pull_request) Has been cancelled
Verify that plugin context engines receive update_model() with correct
context_length during AIAgent init — regression test for the ctx -- bug.
2026-04-13 15:00:57 -07:00
Stephen Schoettler
c763ed5801 fix(agent): resolve context_length for plugin context engines
Plugin context engines loaded via load_context_engine() were never
given context_length, causing the CLI status bar to show "ctx --"
with an empty progress bar. Call update_model() immediately after
loading the plugin engine, mirroring what switch_model() already does.

Fixes NousResearch/hermes-agent#9071

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-13 15:00:57 -07:00
Teknium
204e9190c4 fix: consolidate provider lists into single CANONICAL_PROVIDERS source of truth (#9237)
Three separate hardcoded provider lists (/model, /provider, hermes model)
diverged over time, causing providers to be missing from some commands.

- Create CANONICAL_PROVIDERS in hermes_cli/models.py as the single source
  of truth for all provider identity, labels, and TUI ordering
- Derive _PROVIDER_LABELS and list_available_providers() from canonical list
- Add step 2b in list_authenticated_providers() to cross-check canonical
  list — catches providers with credentials that weren't found via
  PROVIDER_TO_MODELS_DEV or HERMES_OVERLAYS mappings
- Derive hermes model TUI provider menus from canonical list
- Add deepseek and xai as first-class providers (were missing from TUI)
- Add grok/x-ai/x.ai aliases for xai provider

Fixes: /model command not showing all providers that hermes model shows
2026-04-13 14:59:50 -07:00
Teknium
952a885fbf fix(gateway): /stop no longer resets the session (#9224)
/stop was calling suspend_session() which marked the session for auto-reset
on the next message. This meant users lost their conversation history every
time they stopped a running agent — especially painful for untitled sessions
that can't be resumed by name.

Now /stop just interrupts the agent and cleans the session lock. The session
stays intact so users can continue the conversation.

The suspend behavior was introduced in #7536 to break stuck session resume
loops on gateway restart. That case is already handled by
suspend_recently_active() which runs at gateway startup, so removing it from
/stop doesn't regress the original fix.
2026-04-13 14:59:05 -07:00
SHL0MS
d5fd74cac2 fix(ci): don't fail supply chain scan when PR comment can't be posted on fork PRs (#6681)
The GITHUB_TOKEN for fork PRs is read-only — gh pr comment fails with
'Resource not accessible by integration'. This caused the supply chain
scan to show a red X on every fork PR even when no findings were detected.

The scan itself still runs and the 'Fail on critical findings' step
still exits 1 on real issues. Only the comment posting is gracefully
skipped for fork PRs.

Closes #6679

Co-authored-by: SHL0MS <SHL0MS@users.noreply.github.com>
2026-04-13 13:58:59 -07:00
Teknium
a6f07a6c37 docs: fix hermes web → hermes dashboard in web-dashboard.md (#9207)
The actual CLI command is 'hermes dashboard', not 'hermes web'.
cli-commands.md already had the correct name.
2026-04-13 13:26:21 -07:00
Sabin Iacob
a27b3c8725 add git to the container installed packages (fixes #8439) 2026-04-13 13:08:19 -07:00
Teknium
1af2e18d40 chore: release v0.9.0 (v2026.4.13) (#9182)
The everywhere release — Hermes goes mobile with Termux/Android, adds
iMessage and WeChat, ships Fast Mode for OpenAI and Anthropic,
introduces background process monitoring, launches a local web
dashboard, and delivers the deepest security hardening pass yet
across 16 supported platforms.

487 commits, 269 merged PRs, 167 resolved issues, 24 contributors.
2026-04-13 11:52:09 -07:00
Teknium
0e60a9dc25 fix: add kimi-coding-cn to remaining provider touchpoints
Follow-up for salvaged PR #7637. Adds kimi-coding-cn to:
- model_normalize.py (prefix strip)
- providers.py (models.dev mapping)
- runtime_provider.py (credential resolution)
- setup.py (model list + setup label)
- doctor.py (health check)
- trajectory_compressor.py (URL detection)
- models_dev.py (registry mapping)
- integrations/providers.md (docs)
2026-04-13 11:20:37 -07:00
hcshen0111
2b3aa36242 feat(providers): add kimi-coding-cn provider for mainland China users
Cherry-picked from PR #7637 by hcshen0111.
Adds kimi-coding-cn provider with dedicated KIMI_CN_API_KEY env var
and api.moonshot.cn/v1 endpoint for China-region Moonshot users.
2026-04-13 11:20:37 -07:00
Teknium
ef180880aa fix: guard anthropic_adapter import + use canonical authorize URL
- Wrap module-level import from agent.anthropic_adapter in try/except
  so hermes web still starts if the adapter is unavailable; Phase 2
  PKCE endpoints return 501 in that case.
- Change authorize URL from console.anthropic.com to claude.ai to
  match the canonical adapter code.
2026-04-13 11:18:18 -07:00
kshitijk4poor
247929b0dd feat: dashboard OAuth provider management
Add OAuth provider management to the Hermes dashboard with full
lifecycle support for Anthropic (PKCE), Nous and OpenAI Codex
(device-code) flows.

## Backend (hermes_cli/web_server.py)

- 6 new API endpoints:
  GET /api/providers/oauth — list providers with connection status
  POST /api/providers/oauth/{id}/start — initiate PKCE or device-code
  POST /api/providers/oauth/{id}/submit — exchange PKCE auth code
  GET /api/providers/oauth/{id}/poll/{session} — poll device-code
  DELETE /api/providers/oauth/{id} — disconnect provider
  DELETE /api/providers/oauth/sessions/{id} — cancel pending session
- OAuth constants imported from anthropic_adapter (no duplication)
- Blocking I/O wrapped in run_in_executor for async safety
- In-memory session store with 15-minute TTL and automatic GC
- Auth token required on all mutating endpoints

## Frontend

- OAuthLoginModal — PKCE (paste auth code) and device-code (poll) flows
- OAuthProvidersCard — status, token preview, connect/disconnect actions
- Toast fix: createPortal to document.body for correct z-index
- App.tsx: skip animation key bump on initial mount (prevent double-mount)
- Integrated into the Env/Keys page
2026-04-13 11:18:18 -07:00
yongtenglei
2773b18b56 fix(run_agent): refresh activity during streaming responses
Previously, long-running streamed responses could be incorrectly treated
as idle by the gateway/cron inactivity timeout even while tokens were
actively arriving. The _touch_activity() call (which feeds
get_activity_summary() polled by the external timeout) was either called
only on the first chunk (chat completions) or not at all (Anthropic,
Codex, Codex fallback).

Add _touch_activity() on every chunk/event in all four streaming paths
so the inactivity monitor knows data is still flowing.

Fixes #8760
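The per-chunk refresh can be sketched as an iterator wrapper (names and the optional throttling are hypothetical; the real fix calls _touch_activity inline in each streaming path):

```python
import time

def touch_activity_stream(chunks, touch, min_interval: float = 0.0):
    """Wrap a streaming-response iterator so the activity timestamp polled
    by the inactivity monitor is refreshed as chunks arrive."""
    last = 0.0
    for chunk in chunks:
        now = time.monotonic()
        if now - last >= min_interval:
            touch()          # signal "still alive" to the external monitor
            last = now
        yield chunk
```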
2026-04-13 10:55:51 -07:00
Teknium
ba50fa3035 docs: fix 30+ inaccuracies across documentation (#9023)
Cross-referenced all docs pages against the actual codebase and fixed:

Reference docs (cli-commands.md, slash-commands.md, profile-commands.md):
- Fix: hermes web -> hermes dashboard (correct subparser name)
- Fix: Wrong provider list (removed deepseek, ai-gateway, opencode-zen,
  opencode-go, alibaba; added gemini)
- Fix: Missing tts in hermes setup section choices
- Add: Missing --image flag for hermes chat
- Add: Missing --component flag for hermes logs
- Add: Missing CLI commands: debug, backup, import
- Fix: /status incorrectly marked as messaging-only (available everywhere)
- Fix: /statusbar moved from Session to Configuration category
- Add: Missing slash commands: /fast, /snapshot, /image, /debug
- Add: Missing /restart from messaging commands table
- Fix: /compress description to match COMMAND_REGISTRY
- Add: --no-alias flag to profile create docs

Configuration docs (configuration.md, environment-variables.md):
- Fix: Vision timeout default 30s -> 120s
- Fix: TTS providers missing minimax and mistral
- Fix: STT providers missing mistral
- Fix: TTS openai base_url shown with wrong default
- Fix: Compression config showing stale summary_model/provider/base_url
  keys (migrated out in config v17) -> target_ratio/protect_last_n

Getting-started docs:
- Fix: Redundant faster-whisper install (already in voice extra)
- Fix: Messaging extra description missing Slack

Developer guide:
- Fix: architecture.md tool count 48 -> 47, toolset count 40 -> 19
- Fix: run_agent.py line count 9,200 -> 10,700
- Fix: cli.py line count 8,500 -> 10,000
- Fix: main.py line count 5,500 -> 6,000
- Fix: gateway/run.py line count 7,500 -> 9,000
- Fix: Browser tools count 11 -> 10
- Fix: Platform adapter count 15 -> 18 (add wecom_callback, api_server)
- Fix: agent-loop.md wrong budget sharing (not shared, independent)
- Fix: agent-loop.md non-existent _get_budget_warning() reference
- Fix: context-compression-and-caching.md non-existent function name
- Fix: toolsets-reference.md safe toolset includes mixture_of_agents (it doesn't)
- Fix: toolsets-reference.md hermes-cli tool count 38 -> 36

Guides:
- Fix: automate-with-cron.md claims daily at 9am is valid (it's not)
- Fix: delegation-patterns.md Max 3 presented as hard cap (configurable)
- Fix: sessions.md group thread key format (shared by default, not per-user)
- Fix: cron-internals.md job ID format and JSON structure
2026-04-13 10:53:10 -07:00
Teknium
4ca6668daf docs: comprehensive update for recent merged PRs (#9019)
Audit and update documentation across 12 files to match changes from
~50 recently merged PRs. Key updates:

Slash commands (slash-commands.md):
- Add 5 missing commands: /snapshot, /fast, /image, /debug, /restart
- Fix /status incorrectly labeled as messaging-only (available in both)
- Add --global flag to /model docs
- Add [focus topic] arg to /compress docs

CLI commands (cli-commands.md):
- Add hermes debug share section with options and examples
- Add hermes backup section with --quick and --label flags
- Add hermes import section

Feature docs:
- TTS: document global tts.speed and per-provider speed for Edge/OpenAI
- Web dashboard: add docs for 5 missing pages (Sessions, Logs,
  Analytics, Cron, Skills) and 15+ API endpoints
- WhatsApp: add streaming, 4K chunking, and markdown formatting docs
- Skills: add GitHub rate-limit/GITHUB_TOKEN troubleshooting tip
- Budget: document CLI notification on iteration budget exhaustion

Config migration (compression.summary_* → auxiliary.compression.*):
- Update configuration.md, environment-variables.md,
  fallback-providers.md, cli.md, and context-compression-and-caching.md
- Replace legacy compression.summary_model/provider/base_url references
  with auxiliary.compression.model/provider/base_url
- Add legacy migration info boxes explaining auto-migration

Minor fixes:
- wecom-callback.md: clarify 'text only' limitation (input only)
- Escape {session_id}/{job_id} in web-dashboard.md headings for MDX
2026-04-13 10:50:59 -07:00
墨綠BG
c449cd1af5 fix(config): restore custom providers after v11→v12 migration
The v11→v12 migration converts custom_providers (list) into providers
(dict), then deletes the list. But all runtime resolvers read from
custom_providers — after migration, named custom endpoints silently stop
resolving and fallback chains fail with AuthError.

Add get_compatible_custom_providers() that reads from both config schemas
(legacy custom_providers list + v12+ providers dict), normalizes entries,
deduplicates, and returns a unified list. Update ALL consumers:

- hermes_cli/runtime_provider.py: _get_named_custom_provider() + key_env
- hermes_cli/auth_commands.py: credential pool provider names
- hermes_cli/main.py: model picker + _model_flow_named_custom()
- agent/auxiliary_client.py: key_env + custom_entry model fallback
- agent/credential_pool.py: _iter_custom_providers()
- cli.py + gateway/run.py: /model switch custom_providers passthrough
- run_agent.py + gateway/run.py: per-model context_length lookup

Also: use config.pop() instead of del for safer migration, fix stale
_config_version assertions in tests, add pool mock to codex test.

Co-authored-by: 墨綠BG <s5460703@gmail.com>
Closes #8776, salvaged from PR #8814
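A sketch of reading both schemas into one deduplicated list (field names illustrative; having legacy entries win on conflict is an assumption here, not necessarily the real dedupe rule):

```python
def get_compatible_custom_providers(config: dict) -> list[dict]:
    """Merge the legacy custom_providers list (pre-v12) with the v12+
    providers dict into one normalized, deduplicated list."""
    merged: dict[str, dict] = {}
    # Legacy schema: a list of entries carrying their own "name" key.
    for entry in config.get("custom_providers") or []:
        name = entry.get("name")
        if name:
            merged[name] = dict(entry)
    # v12+ schema: a dict keyed by provider name.
    for name, entry in (config.get("providers") or {}).items():
        merged.setdefault(name, {**entry, "name": name})
    return list(merged.values())
```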
2026-04-13 10:50:52 -07:00
Teknium
0dd26c9495 fix(tests): fix 78 CI test failures and remove dead test (#9036)
Production fixes:
- voice_mode.py: add is_recording property to AudioRecorder (parity with TermuxAudioRecorder)
- cronjob_tools.py: add sms example to deliver description

Test fixes:
- test_real_interrupt_subagent: add missing _execution_thread_id (fixes 19 cascading failures from leaked _build_system_prompt patch)
- test_anthropic_error_handling: add _FakeMessages, override _interruptible_streaming_api_call (6 fixes)
- test_ctx_halving_fix: add missing request_overrides attribute (4 fixes)
- test_context_token_tracking: set _disable_streaming=True for non-streaming test path (4 fixes)
- test_dict_tool_call_args: set _disable_streaming=True (1 fix)
- test_provider_parity: add model='gpt-4o' for AIGateway tests to meet 64K minimum context (4 fixes)
- test_session_race_guard: add user_id to SessionSource (5 fixes)
- test_restart_drain/helpers: add user_id to SessionSource (2 fixes)
- test_telegram_photo_interrupts: add user_id to SessionSource
- test_interrupt: target thread_id for per-thread interrupt system (2 fixes)
- test_zombie_process_cleanup: rewrite with object.__new__ for refactored GatewayRunner.stop() (1 fix)
- test_browser_camofox_state: update config version 15->17 (1 fix)
- test_trajectory_compressor_async: widen lookback window 10->20 for line-shifted AsyncOpenAI (1 fix)
- test_voice_mode: fixed by production is_recording addition (5 fixes)
- test_voice_cli_integration: add _attached_images to CLI stub (2 fixes)
- test_hermes_logging: explicit propagation/level reset for cross-test pollution defense (1 fix)
- test_run_agent: add base_url for OpenRouter detection tests (2 fixes)

Deleted:
- test_inline_think_blocks_reasoning_only_accepted: tested unimplemented inline <think> handling
2026-04-13 10:50:24 -07:00
kimsr96
b909a9efef fix: extend ASCII-locale UnicodeEncodeError recovery to full request payload
The existing ASCII codec handler only sanitized conversation messages,
leaving tool schemas, system prompts, ephemeral prompts, prefill messages,
and HTTP headers as unhandled sources of non-ASCII content. On systems
with LANG=C or non-UTF-8 locale, Unicode symbols in tool descriptions
(e.g. arrows, em-dashes from prompt_builder) and system prompt content
would cause UnicodeEncodeError that fell through to the error path.

Changes:
- Add _sanitize_structure_non_ascii() generic recursive walker for
  nested dict/list payloads
- Add _sanitize_tools_non_ascii() thin wrapper for tool schemas
- Add _force_ascii_payload flag: once ASCII locale is detected, all
  subsequent API calls get proactively sanitized (prevents recurring
  failures from new tool results bringing fresh Unicode each turn)
- Extend the ASCII codec error handler to sanitize: prefill_messages,
  tool schemas (self.tools), system prompt, ephemeral system prompt,
  and default HTTP headers
- Update stale comment that acknowledged the gap

Cherry-picked from PR #8834 (credential pool changes dropped as
separate concern).
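The generic recursive walker can be sketched as follows (public name and the replace-with-"?" strategy are illustrative assumptions):

```python
def sanitize_structure_non_ascii(obj):
    """Recursively replace non-ASCII characters in any nested
    dict/list/str payload so it survives an ASCII-only locale."""
    if isinstance(obj, str):
        # Characters unencodable under an ASCII locale become "?".
        return obj.encode("ascii", errors="replace").decode("ascii")
    if isinstance(obj, dict):
        return {sanitize_structure_non_ascii(k): sanitize_structure_non_ascii(v)
                for k, v in obj.items()}
    if isinstance(obj, list):
        return [sanitize_structure_non_ascii(v) for v in obj]
    return obj  # ints, floats, None, bools pass through untouched
```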
2026-04-13 05:16:35 -07:00
Teknium
28a9c43f81 fix: resolve key_env to actual API key value instead of env var name
The cherry-picked code passed the env var NAME (e.g. 'MY_API_KEY') as the
api_key value. The caller's has_usable_secret() check would reject the
var name, so the actual key was never used. Now we os.getenv() the
key_env value to get the real API key before returning it.
2026-04-13 05:16:21 -07:00
Geoff
76eecf3819 fix(model): Support providers: dict for custom endpoints in /model
Two fixes for user-defined providers in config.yaml:

1. list_authenticated_providers() - now includes full models list from
   providers.*.models array, not just default_model. This fixes /model
   showing only one model when multiple are configured.

2. _get_named_custom_provider() - now checks providers: dict (new-style)
   in addition to custom_providers: list (legacy). This fixes credential
   resolution errors when switching models via /model command.

Both changes are backwards compatible with existing custom_providers list format.

Fixes: Only one model appears for custom providers in /model selection
2026-04-13 05:16:21 -07:00
konsisumer
311dac1971 fix(file_tools): block /private/etc writes on macOS symlink bypass
On macOS, /etc is a symlink to /private/etc, so os.path.realpath()
resolves /etc/hosts to /private/etc/hosts. The sensitive path check
only matched /etc/ prefixes against the resolved path, allowing
writes to system files on macOS.

- Add /private/etc/ and /private/var/ to _SENSITIVE_PATH_PREFIXES
- Check both realpath-resolved and normpath-normalized paths
- Add regression tests for macOS symlink bypass

Closes #8734
Co-authored-by: ElhamDevelopmentStudio (PR #8829)
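A sketch of the double check (prefix list abbreviated, helper name illustrative):

```python
import os

# On macOS, /etc is a symlink to /private/etc, so the prefix list must
# carry both forms and the guard must test the realpath-resolved AND the
# normpath-normalized candidate.
SENSITIVE_PATH_PREFIXES = ("/etc/", "/private/etc/", "/var/", "/private/var/")

def is_sensitive_path(path: str) -> bool:
    # realpath follows symlinks; normpath catches "../"-style traversal
    # lexically without touching the filesystem.
    for candidate in (os.path.realpath(path), os.path.normpath(path)):
        if any(candidate == p.rstrip("/") or candidate.startswith(p)
               for p in SENSITIVE_PATH_PREFIXES):
            return True
    return False
```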
2026-04-13 05:15:05 -07:00
Teknium
587eeb56b9 chore: remove duplicate dead _try_gh_cli_token / _gh_cli_candidates from auth.py
These functions were duplicated between auth.py and copilot_auth.py.
The auth.py copies had zero production callers — only copilot_auth.py's
versions are used. Redirect the test import to the live copy and update
monkeypatch targets accordingly.
2026-04-13 05:12:36 -07:00
HearthCore
2a9e50c104 fix(copilot): resolve GHE token poisoning when GITHUB_TOKEN is set
When GITHUB_TOKEN is present in the environment (e.g. for gh CLI or
GitHub Actions), two issues broke Copilot authentication against
GitHub Enterprise (GHE) instances:

1. The copilot provider had no base_url_env_var, so COPILOT_API_BASE_URL
   was silently ignored — requests always went to public GitHub.

2. `gh auth token` (the CLI fallback) treats GITHUB_TOKEN as an override
   and echoes it back instead of reading from its credential store
   (hosts.yml). This caused the same rejected token to be used even
   after env var priority correctly skipped it.

Fix:
- Add base_url_env_var="COPILOT_API_BASE_URL" to copilot ProviderConfig
- Strip GITHUB_TOKEN/GH_TOKEN from the subprocess env when calling
  `gh auth token` so it reads from hosts.yml
- Pass --hostname from COPILOT_GH_HOST when set so gh returns the
  GHE-specific OAuth token
2026-04-13 05:12:36 -07:00
luyao618
8ec1608642 fix(agent): propagate api_mode to vision provider resolution
resolve_vision_provider_client() computed resolved_api_mode from config
but never passed it to downstream resolve_provider_client() or
_get_cached_client() calls, causing custom providers with
api_mode: anthropic_messages to crash when used for vision tasks.

Also remove the for_vision special case in _normalize_aux_provider()
that incorrectly discarded named custom provider identifiers.

Fixes #8857

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 05:02:54 -07:00
Teknium
e3ffe5b75f fix: remove legacy compression.summary_* config and env var fallbacks (#8992)
Remove the backward-compat code paths that read compression provider/model
settings from legacy config keys and env vars, which caused silent failures
when auto-detection resolved to incompatible backends.

What changed:
- Remove compression.summary_model, summary_provider, summary_base_url from
  DEFAULT_CONFIG and cli.py defaults
- Remove backward-compat block in _resolve_task_provider_model() that read
  from the legacy compression section
- Remove _get_auxiliary_provider() and _get_auxiliary_env_override() helper
  functions (AUXILIARY_*/CONTEXT_* env var readers)
- Remove env var fallback chain for per-task overrides
- Update hermes config show to read from auxiliary.compression
- Add config migration (v16→17) that moves non-empty legacy values to
  auxiliary.compression and strips the old keys
- Update example config and openclaw migration script
- Remove/update tests for deleted code paths

Compression model/provider is now configured exclusively via:
  auxiliary.compression.provider / auxiliary.compression.model

Closes #8923
2026-04-13 04:59:26 -07:00
WorldInnovationsDepartment
c1809e85e7 fix(gateway): handle stale lock files in acquire_scoped_lock
Treat empty or corrupt lock files as stale in acquire_scoped_lock: if a lock file exists but cannot be parsed, remove it instead of letting a dead lock block acquisition. Added tests verifying recovery from both empty and corrupt lock files.
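A minimal sketch of the stale-lock recovery, assuming a JSON lock file with an owner field (the file format is an assumption, not the real gateway lock schema):

```python
import json
import os
import tempfile

def acquire_scoped_lock(lock_path: str, owner: str) -> bool:
    """Acquire a JSON lock file, treating an empty or corrupt existing
    file as stale: remove it and take the lock."""
    if os.path.exists(lock_path):
        try:
            with open(lock_path) as f:
                data = json.load(f)
            if data.get("owner"):
                return False            # valid, live lock held by someone else
        except (json.JSONDecodeError, OSError):
            pass                        # empty or corrupt -> treat as stale
        os.remove(lock_path)            # clear the stale lock file
    with open(lock_path, "w") as f:
        json.dump({"owner": owner}, f)
    return True
```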
2026-04-13 04:59:25 -07:00
Teknium
23f668d66e fix: extract Gemma 4 <thought> reasoning in _extract_reasoning() (#8991)
Add <thought>(.*?)</thought> to inline_patterns so Gemma 4
reasoning content is captured for /reasoning display, not just
stripped from visible output.

Closes #8891

Co-authored-by: RhushabhVaghela <rhushabhvaghela@users.noreply.github.com>
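The extraction can be sketched with the pattern from the commit (helper name and return shape hypothetical):

```python
import re

# Gemma 4 wraps reasoning in <thought>...</thought>; capture it for
# /reasoning display instead of only stripping it from visible output.
THOUGHT_RE = re.compile(r"<thought>(.*?)</thought>", re.DOTALL)

def extract_reasoning(text: str) -> tuple[str, str]:
    """Return (visible_text, reasoning) with <thought> blocks captured
    rather than silently discarded."""
    reasoning = "\n".join(m.strip() for m in THOUGHT_RE.findall(text))
    visible = THOUGHT_RE.sub("", text).strip()
    return visible, reasoning
```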
2026-04-13 04:59:06 -07:00
flobo3
d8a521092b fix(weixin): rename send_document parameter to match base class 2026-04-13 04:58:30 -07:00
Teknium
a5bd56eae3 fix: eliminate provider hang dead zones in retry/timeout architecture (#8985)
Three targeted changes to close the gaps between retry layers that
caused users to experience 'No response from provider for 580s' and
'No activity for 15 minutes' despite having 5 layers of retry:

1. Remove non-streaming fallback from streaming path

   Previously, when all 3 stream retries exhausted, the code fell back
   to _interruptible_api_call() which had no stale detection and no
   activity tracking — a black hole that could hang for up to 1800s.
   Now errors propagate to the main retry loop which has richer recovery
   (credential rotation, provider fallback, backoff).

   For 'stream not supported' errors, sets _disable_streaming flag so
   the main retry loop automatically switches to non-streaming on the
   next attempt.

2. Add _touch_activity to recovery dead zones

   The gateway inactivity monitor relies on _touch_activity() to know
   the agent is alive, but activity was never touched during:
   - Stale stream detection/kill cycles (180-300s gaps)
   - Stream retry connection rebuilds
   - Main retry backoff sleeps (up to 120s)
   - Error recovery classification

   Now all these paths touch activity every ~30s, keeping the gateway
   informed during recovery cycles.

3. Add stale-call detector to non-streaming path

   _interruptible_api_call() now has the same stale detection pattern
   as the streaming path: kills hung connections after 300s (default,
   configurable via HERMES_API_CALL_STALE_TIMEOUT), scaled for large
   contexts (450s for 50K+ tokens, 600s for 100K+ tokens), disabled
   for local providers.

   Also touches activity every ~30s during the wait so the gateway
   monitor stays informed.

Env vars:
- HERMES_API_CALL_STALE_TIMEOUT: non-streaming stale timeout (default 300s)
- HERMES_STREAM_STALE_TIMEOUT: unchanged (default 180s)

Before: worst case ~2+ hours of sequential retries with no feedback
After: worst case bounded by gateway inactivity timeout (default 1800s)
with continuous activity reporting
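Point 3 can be sketched as a worker-thread watchdog that both bounds the wait and keeps touching activity while waiting (heavily simplified; the real detector's kill/rebuild and context-scaled timeouts are richer):

```python
import threading
import time

def call_with_stale_detection(fn, stale_timeout: float, touch_activity,
                              poll_interval: float = 0.05):
    """Run a blocking provider call in a worker thread, refreshing the
    activity monitor while waiting and raising once no result has arrived
    within stale_timeout (the real default is 300s via
    HERMES_API_CALL_STALE_TIMEOUT)."""
    result, error = [], []

    def worker():
        try:
            result.append(fn())
        except Exception as exc:
            error.append(exc)

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    deadline = time.monotonic() + stale_timeout
    while True:
        touch_activity()                 # keep the inactivity monitor fed
        t.join(timeout=poll_interval)
        if not t.is_alive():
            break
        if time.monotonic() >= deadline:
            raise TimeoutError("provider call went stale")
    if error:
        raise error[0]
    return result[0]
```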
2026-04-13 04:55:20 -07:00
Teknium
acdff020b7 test: add multi-word query tests for truncation match strategy
Tests phrase matching, proximity co-occurrence, and sliding window
coverage maximisation — the three new tiers from the truncation fix.
2026-04-13 04:54:42 -07:00
Al Sayed Hoota
a5bc698b9a fix(session_search): improve truncation to center on actual query matches
Three-tier match strategy for _truncate_around_matches():
1. Full-phrase search (exact query string positions)
2. Proximity co-occurrence (all terms within 200 chars)
3. Individual terms (fallback, preserves existing behavior)

Sliding window picks the start offset covering the most matches.

Moved inline import re to module level.

Co-authored-by: Al Sayed Hoota <78100282+AlsayedHoota@users.noreply.github.com>
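The sliding-window selection can be sketched over match positions alone (simplified; the real helper is _truncate_around_matches and also handles the phrase/proximity tiers):

```python
def best_window_start(match_positions: list[int], text_len: int, width: int) -> int:
    """Return the start offset of a fixed-width window covering the most
    match positions, clamped to stay inside the text."""
    if not match_positions:
        return 0
    best_start, best_count = 0, -1
    # Candidate windows start at each match (clamped to the text bounds).
    for pos in match_positions:
        start = max(0, min(pos, text_len - width))
        count = sum(1 for p in match_positions if start <= p < start + width)
        if count > best_count:
            best_start, best_count = start, count
    return best_start
```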
2026-04-13 04:54:42 -07:00
landy
dbed40f39b fix: reopen resumed gateway sessions in sqlite 2026-04-13 04:54:07 -07:00
flobo3
d945cf6b1a fix(docker): add .venv to .dockerignore 2026-04-13 04:52:00 -07:00
twilwa
3a64348772 fix(discord): voice session continuity and signal handler thread safety
- Store source metadata on /voice channel join so voice input shares the
  same session as the linked text channel conversation
- Treat voice-linked text channels as free-response (skip @mention and
  auto-thread) while voice is active
- Scope the voice-linked exemption to the exact bound channel, not
  sibling threads
- Guard signal handler registration in start_gateway() for non-main
  threads (prevents RuntimeError when gateway runs in a daemon thread)
- Clean up _voice_sources on leave_voice_channel

Salvaged from PR #3475 by twilwa (Modal runtime portions excluded).
2026-04-13 04:49:21 -07:00
Teknium
381810ad50 feat: fix SQLite safety in hermes backup + add --quick snapshots + /snapshot command (#8971)
Three changes consolidated into the existing backup system:

1. Fix: hermes backup now uses sqlite3.Connection.backup() for .db files
   instead of raw file copy. Raw copy of a WAL-mode database can produce
   a corrupted backup — the backup() API handles this correctly.

2. hermes backup --quick: fast snapshot of just critical state files
   (config.yaml, state.db, .env, auth.json, cron/jobs.json, etc.)
   stored in ~/.hermes/state-snapshots/. Auto-prunes to 20 snapshots.

3. /snapshot slash command (alias /snap): in-session interface for
   quick state snapshots. create/list/restore/prune subcommands.
   Restore by ID or number. Powered by the same backup module.

No new modules — everything lives in hermes_cli/backup.py alongside
the existing full backup/import code.

No hooks in run_agent.py — purely on-demand, zero runtime overhead.

Closes the use case from PRs #8406 and #7813 with ~200 lines of new
logic instead of a 1090-line content-addressed storage engine.
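The safe copy uses the stdlib online-backup API; a sketch:

```python
import os
import sqlite3
import tempfile

def backup_sqlite(src_path: str, dest_path: str) -> None:
    """Copy a SQLite database with sqlite3.Connection.backup(), which takes
    a consistent snapshot even of a WAL-mode database mid-write. A raw copy
    of the .db file alone can miss pages still sitting in the -wal file."""
    src = sqlite3.connect(src_path)
    dest = sqlite3.connect(dest_path)
    try:
        with dest:                # commit the backup transaction on success
            src.backup(dest)
    finally:
        src.close()
        dest.close()
```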
2026-04-13 04:46:13 -07:00
Richard Li
82901695ff feat(wecom): add platform hint for native media sending 2026-04-13 04:46:04 -07:00
Teknium
3365abdddf fix: use correct 'completed' state in status badge map, clean up blank lines
The cron backend uses 'completed' (not 'exhausted') when repeat count
is reached. Also removes extra blank lines from cherry-pick.
2026-04-13 04:45:29 -07:00
jonny
70f490a12a fix(web): CronPage crash when rendering schedule object
The cron API returns schedule as {kind, expr, display} object but
CronPage.tsx rendered it directly as a React child, crashing with
'Objects are not valid as a React child'.

- Update CronJob interface in api.ts to match actual API response
- Use schedule_display (string) instead of schedule (object)
- Use state instead of status for job state
- Use last_error instead of error for error display
2026-04-13 04:45:29 -07:00
Teknium
8dfee98d06 fix: clean up description escaping, add string-data tests
Follow-up for cherry-picked PR #8918.
2026-04-13 04:45:07 -07:00
dippwho
bca22f3090 fix(homeassistant): #8912 resolve XML tool calling loop by casting nested object to JSON string 2026-04-13 04:45:07 -07:00
MaybeRichard
11e2e04667 fix(telegram): pass proxy URL explicitly to HTTPXRequest when proxy env vars are set
When HTTPS_PROXY / HTTP_PROXY / ALL_PROXY env vars are set (or macOS system proxy
is detected), pass the proxy URL explicitly via HTTPXRequest(proxy=proxy_url) instead
of relying on httpx's trust_env mechanism, which is unreliable for HTTP CONNECT
proxies (e.g. Clash / ClashMac in fake-ip mode).

Uses the shared resolve_proxy_url() from base.py (handles env vars + macOS system
proxy detection) instead of duplicating env var reading inline. Consolidates the
proxy_configured boolean into a single proxy_url = resolve_proxy_url() call that
serves as both the gate for skipping fallback-IP transport and the value passed
to HTTPXRequest.

Co-authored-by: Hermes Agent <hermes@nousresearch.com>
Salvaged from PR #8931 by MaybeRichard.
2026-04-13 04:45:05 -07:00
XiaoXiao0221
860489600a fix(cli): sanitize surrogate characters in handle_paste
Prevents UTF-8 encoding crash when pasting text from Word or Google Docs,
which may contain lone surrogate code points (U+D800-U+DFFF).
Reuses existing _sanitize_surrogates() from run_agent module.
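A sketch of the sanitizer (the real helper is _sanitize_surrogates in run_agent; this version is illustrative):

```python
def sanitize_surrogates(text: str) -> str:
    """Remove lone surrogate code points (U+D800-U+DFFF), which make a
    str unencodable as UTF-8 and crash downstream encoding."""
    return "".join(ch for ch in text if not 0xD800 <= ord(ch) <= 0xDFFF)
```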
2026-04-13 04:42:45 -07:00
Teknium
0998a57007 refactor: remove 5 dead utility functions from utils.py (#8975)
Remove read_json_file, read_jsonl, append_jsonl, env_str, env_lower —
all added in #7917 but never imported anywhere in the codebase. Also
remove unused List and Optional typing imports.

env_int, env_bool, and the other helpers that have real consumers are
kept.
2026-04-13 04:39:59 -07:00
Teknium
cea34dc7ef fix: follow-up for salvaged PR #8939
- Move test file to tests/hermes_cli/ (consistent with test layout)
- Remove unused imports (os, pytest) from test file
- Update _sanitize_env_lines docstring: now used on read + write paths
2026-04-13 04:35:37 -07:00
Mil Wang (from Dev Box)
e469f3f3db fix: sanitize .env before loading to prevent token duplication (#8908)
When .env files become corrupted (e.g. concatenated KEY=VALUE pairs on
a single line due to concurrent writes or encoding issues), both
python-dotenv and load_env() would parse the entire concatenated string
as a single value. This caused bot tokens to appear duplicated up to 8×,
triggering InvalidToken errors from the Telegram API.

Root cause: _sanitize_env_lines() — which correctly splits concatenated
lines — was only called during save_env_value() writes, not during reads.

Fix:
- load_env() now calls _sanitize_env_lines() before parsing
- env_loader.load_hermes_dotenv() sanitizes the .env file on disk
  before python-dotenv reads it, so os.getenv() also returns clean values
- Added tests reproducing the exact corruption pattern from #8908

Closes #8908
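A sketch of the line-splitting heuristic, assuming conventional uppercase env-var names (the real _sanitize_env_lines may differ; a value that itself contains an UPPER_CASE= substring would still be split wrongly, which is why this is only a recovery heuristic):

```python
import re

# Split before an uppercase KEY= run that directly follows a non-key
# character, i.e. the seam where two KEY=VALUE pairs were concatenated.
_CONCAT_KEY = re.compile(r"(?<=[^A-Z_=\s])(?=[A-Z][A-Z0-9_]+=)")

def sanitize_env_lines(lines: list[str]) -> list[str]:
    """Split concatenated KEY=VALUE pairs back onto separate lines."""
    out: list[str] = []
    for line in lines:
        out.extend(part for part in _CONCAT_KEY.split(line) if part)
    return out
```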
2026-04-13 04:35:37 -07:00
ismell0992-afk
e77f135ed8 fix(cli): narrow Nous Hermes non-agentic warning to actual hermes-3/-4 models
The startup warning that Nous Research Hermes 3 & 4 models are not agentic
fired on any model whose name contained "hermes" anywhere, via a plain
substring check. That false-positived on unrelated local Modelfiles such
as `hermes-brain:qwen3-14b-ctx16k` — a tool-capable Qwen3 wrapper that
happens to live under a custom "hermes" tag namespace — making the warning
noise for legitimate setups.

Replace the substring check with a narrow regex anchored on `^`, `/`, or
`:` boundaries that only matches the real Hermes-3 / Hermes-4 chat family
(e.g. `NousResearch/Hermes-3-Llama-3.1-70B`, `hermes-4-405b`,
`openrouter/hermes3:70b`). Consolidate into a single helper
`is_nous_hermes_non_agentic()` in `hermes_cli.model_switch` so the CLI
and the canonical check don't drift, and route the duplicate inline site
in `cli.HermesCLI._print_warnings()` through the helper.

Add a parametrized test covering positive matches (real Hermes-3/-4
names) and a broad set of negatives (custom Modelfiles, Qwen/Claude/GPT,
older Nous-Hermes-2 families, bare "hermes", empty string, and the
"brain-hermes-3-impostor" boundary case).
2026-04-13 04:33:52 -07:00
ismell0992-afk
3e99964789 fix(agent): prefer Ollama Modelfile num_ctx over GGUF training max
_query_local_context_length was checking model_info.context_length
(the GGUF training max) before num_ctx (the Modelfile runtime override),
inverse to query_ollama_num_ctx. The two helpers therefore disagreed on
the same model:

  hermes-brain:qwen3-14b-ctx32k     # Modelfile: num_ctx 32768
  underlying qwen3:14b GGUF         # qwen3.context_length: 40960

query_ollama_num_ctx correctly returned 32768 (the value Ollama will
actually allocate KV cache for). _query_local_context_length returned
40960, which let ContextCompressor grow conversations past 32768 before
triggering compression — at which point Ollama silently truncated the
prefix, corrupting context.

Swap the order so num_ctx is checked first, matching query_ollama_num_ctx.
Adds a parametrized test that seeds both values and asserts num_ctx wins.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-13 04:24:07 -07:00
Teknium
39b83f3443 fix: remove sandbox language from tool descriptions
The terminal and execute_code tool schemas unconditionally mentioned
'cloud sandboxes' in their descriptions sent to the model. This caused
agents running on local backends to believe they were in a sandboxed
environment, refusing networking tasks and other operations. Worse,
agents sometimes saved this false belief to persistent memory, making
it persist across sessions.

Reported by multiple users (XLion, 林泽).
2026-04-13 04:23:27 -07:00
Teknium
67fece1176 feat(cli): show notification when iteration budget is reached
Displays a dim warning after the response panel when the agent hit
its max iterations, so the user knows the response may be incomplete.
2026-04-13 03:40:47 -07:00
Teknium
934318ba3a fix: budget-exhausted conversations now get a summary instead of empty response
The post-loop grace call mechanism was broken: it injected a user
message and set _budget_grace_call=True, but could never re-enter the
while loop (already exited).  Worse, the flag blocked the fallback
_handle_max_iterations from running, so final_response stayed None.

Users saw empty/no response when the agent hit max iterations.

Fix: remove the dead grace block and let _handle_max_iterations handle
it directly — it already injects a summary request and makes one extra
toolless API call.
2026-04-13 03:36:20 -07:00
Teknium
3804556cd9 fix: restore clarify toolset row removed in cherry-pick 2026-04-13 02:49:11 -07:00
Haoqing Wang
8e0ae66520 fix(skills): correct TTS/STT providers, add missing platforms/commands in hermes-agent skill
Fixes verified via 5-container parallel testing against v0.8.0 codebase.

Critical fixes:
- TTS providers: replace nonexistent kokoro/fish with actual minimax/mistral/neutts
- STT providers: add missing mistral (Voxtral Transcribe)
- Testing section: remove `source venv/bin/activate` (no venv dir in project)

Expanded coverage:
- Provider table: 13 → 22 entries (add Gemini, xAI, Xiaomi, Qwen OAuth, MiniMax CN, etc.)
- Platform list: add BlueBubbles (iMessage) and Weixin (WeChat), clarify Open WebUI
- Slash commands: add 14 undocumented commands (/approve, /deny, /branch, /fast, etc.)
- Toolsets: add 4 missing (messaging, search, todo, rl)
- Troubleshooting: expand from 6 to 10 sections with practical deployment fixes
  (Copilot OAuth 403, gateway linger, WSL2 systemd, Discord intents, etc.)

Minor fixes:
- agent/ directory description expanded
- delegation config keys completed
- /restart noted as gateway-only
- hermes honcho noted as plugin-dependent
2026-04-13 02:49:11 -07:00
Teknium
397eae5d93 fix: recover partial streamed content on connection failure
When streaming fails after partial content delivery (e.g. OpenRouter
timeout kills connection mid-response), the stub response now carries
the accumulated streamed text instead of content=None.

Two fixes:
1. The partial-stream stub response includes recovered content from
   _current_streamed_assistant_text — the text that was already
   delivered to the user via stream callbacks before the connection
   died.

2. The empty response recovery chain now checks for partial stream
   content BEFORE falling back to _last_content_with_tools (prior
   turn content) or wasting API calls on retries. This prevents:
   - Showing wrong content from a prior turn
   - Burning 3+ unnecessary retry API calls
   - Falling through to '(empty)' when the user already saw content

The root cause: OpenRouter has a ~125s inactivity timeout. When
Anthropic's SSE stream goes silent during extended reasoning, the
proxy kills the connection. The model's text was already partially
streamed but the stub discarded it, triggering the empty recovery
chain which would show stale prior-turn content or waste retries.
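A minimal sketch of the recovery ordering this commit describes, with partial streamed text consulted before prior-turn content or retries. The function and parameter names here are illustrative, not the actual Hermes code.

```python
# Hypothetical sketch: pick the best fallback for an empty model response,
# preferring text the user already saw via stream callbacks.
def recover_empty_response(streamed_text, last_content_with_tools, retry_fn,
                           max_retries=3):
    # 1. Text already delivered to the user wins outright -- no retries,
    #    no stale prior-turn content.
    if streamed_text and streamed_text.strip():
        return streamed_text
    # 2. Otherwise spend retries on fresh API calls.
    for _ in range(max_retries):
        result = retry_fn()
        if result and result.strip():
            return result
    # 3. Last resort: stale prior-turn content, then the '(empty)' marker.
    return last_content_with_tools or "(empty)"
```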
2026-04-13 02:12:01 -07:00
Teknium
35b11f48a5 docs: add web dashboard documentation (#8864)
- New docs page: user-guide/features/web-dashboard.md covering
  quick start, prerequisites, all three pages (Status, Config, API Keys),
  the /reload slash command, REST API endpoints, CORS config, and
  development workflow
- Added 'Management' category in sidebar for web-dashboard
- Added 'hermes web' to CLI commands reference with options table
- Added '/reload' to slash commands reference (both CLI and gateway tables)
2026-04-13 01:15:27 -07:00
Ubuntu
73ed09e145 fix(gateway): keep venv python symlink unresolved when remapping paths
_remap_path_for_user was calling .resolve() on the Python path, which
followed venv/bin/python into the base interpreter. On uv-managed venvs
this swaps the systemd ExecStart to a bare Python that has none of the
venv's site-packages, so the service crashes on first import. Classical
python -m venv installs were unaffected by accident: the resolved target
/usr/bin/python3.x lives outside $HOME so the path-remap branch was
skipped and the system Python's packages silently worked.

Remove .resolve() calls on both current_home and the path; use
.expanduser() for lexical tilde expansion only. The function does
lexical prefix substitution, which is all it needs to do for its
actual purpose (remapping /root/.hermes -> /home/<user>/.hermes when
installing system services as root for a different user).

Repro: on a uv-managed venv install, `sudo hermes gateway install
--system` writes ExecStart=.../uv/python/cpython-3.11.15-.../bin/python3.11
instead of .../hermes-agent/venv/bin/python, and the service crashes on
ModuleNotFoundError: yaml.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
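A minimal sketch of the lexical prefix substitution this commit describes. Names are illustrative; the key point is that no `.resolve()` is applied, so the venv/bin/python symlink is never followed into the base interpreter.

```python
from pathlib import PurePosixPath

def remap_path_for_user(path: str, current_home: str, target_home: str) -> str:
    """Substitute the home-directory prefix without touching the filesystem.

    PurePosixPath does purely lexical work: no resolve(), no symlink
    following, which is all the remap needs.
    """
    p = PurePosixPath(path)
    home = PurePosixPath(current_home)
    try:
        relative = p.relative_to(home)
    except ValueError:
        return path  # path lives outside $HOME; skip the remap
    return str(PurePosixPath(target_home) / relative)
```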
2026-04-13 00:49:22 -07:00
Teknium
964ef681cf fix(gateway): improve /restart response with fallback instructions 2026-04-12 22:34:23 -07:00
Teknium
276d20e62c fix(gateway): /restart uses service restart under systemd instead of detached subprocess
The detached bash subprocess spawned by /restart gets killed by
systemd's KillMode=mixed cgroup cleanup, leaving the gateway dead.

Under systemd (detected via INVOCATION_ID env var), /restart now uses
via_service=True which exits with code 75 — RestartForceExitStatus=75
in the unit file makes systemd auto-restart the service. The detached
subprocess approach is preserved as fallback for non-systemd
environments (Docker, tmux, foreground mode).
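The dispatch described above can be sketched as follows. The `INVOCATION_ID` check is real systemd behavior; the function names are hypothetical.

```python
import os
import sys

RESTART_EXIT_CODE = 75  # matches RestartForceExitStatus=75 in the unit file

def restart_gateway(spawn_detached_restart):
    if os.environ.get("INVOCATION_ID"):
        # Running under systemd: exit with the magic status and let
        # RestartForceExitStatus bring the service back up.
        sys.exit(RESTART_EXIT_CODE)
    # Non-systemd (Docker, tmux, foreground): detached-subprocess fallback.
    spawn_detached_restart()
```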
2026-04-12 22:32:19 -07:00
Teknium
e2a9b5369f feat: web UI dashboard for managing Hermes Agent (#8756)
* feat: web UI dashboard for managing Hermes Agent (salvage of #8204/#7621)

Adds an embedded web UI dashboard accessible via `hermes web`:
- Status page: agent version, active sessions, gateway status, connected platforms
- Config editor: schema-driven form with tabbed categories, import/export, reset
- API Keys page: set, clear, and view redacted values with category grouping
- Sessions, Skills, Cron, Logs, and Analytics pages

Backend:
- hermes_cli/web_server.py: FastAPI server with REST endpoints
- hermes_cli/config.py: reload_env() utility for hot-reloading .env
- hermes_cli/main.py: `hermes web` subcommand (--port, --host, --no-open)
- cli.py / commands.py: /reload slash command for .env hot-reload
- pyproject.toml: [web] optional dependency extra (fastapi + uvicorn)
- Both update paths (git + zip) auto-build web frontend when npm available

Frontend:
- Vite + React + TypeScript + Tailwind v4 SPA in web/
- shadcn/ui-style components, Nous design language
- Auto-refresh status page, toast notifications, masked password inputs

Security:
- Path traversal guard (resolve().is_relative_to()) on SPA file serving
- CORS localhost-only via allow_origin_regex
- Generic error messages (no internal leak), SessionDB handles closed properly

Tests: 47 tests covering reload_env, redact_key, API endpoints, schema
generation, path traversal, category merging, internal key stripping,
and full config round-trip.

Original work by @austinpickett (PR #1813), salvaged by @kshitijk4poor
(PRs #7621/#8204), re-salvaged onto current main with stale-branch
regressions removed.

* fix(web): clean up status page cards, always rebuild on `hermes web`

- Remove config version migration alert banner from status page
- Remove config version card (internal noise, not surfaced in TUI)
- Reorder status cards: Agent → Gateway → Active Sessions (3-col grid)
- `hermes web` now always rebuilds from source before serving,
  preventing stale web_dist when editing frontend files

* feat(web): full-text search across session messages

- Add GET /api/sessions/search endpoint backed by FTS5
- Auto-append prefix wildcards so partial words match (e.g. 'nimb' → 'nimby')
- Debounced search (300ms) with spinner in the search icon slot
- Search results show FTS5 snippets with highlighted match delimiters
- Expanding a search hit auto-scrolls to the first matching message
- Matching messages get a warning ring + 'match' badge
- Inline term highlighting within Markdown (text, bold, italic, headings, lists)
- Clear button (x) on search input for quick reset

---------

Co-authored-by: emozilla <emozilla@nousresearch.com>
2026-04-12 22:26:28 -07:00
Dusk1e
c052cf0eea fix(security): validate domain/service params in ha_call_service to prevent path traversal 2026-04-12 22:26:15 -07:00
Teknium
8a64f3e368 feat(gateway): notify /restart requester when gateway comes back online
When a user sends /restart, the gateway now persists their routing info
(platform, chat_id, thread_id) to .restart_notify.json. After the new
gateway process starts and adapters connect, it reads the file, sends a
'Gateway restarted successfully' message to that specific chat, and
cleans up the file.

This follows the same pattern as _send_update_notification (used by
/update). Thread IDs are preserved so the notification lands in the
correct Telegram topic or Discord thread.

Previously, after /restart the user had no feedback that the gateway was
back — they had to send a message to find out. Now they get a proactive
notification and know their session continues.
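A hedged sketch of the persisted-routing pattern this commit describes. The field names mirror the commit message; the file handling is illustrative.

```python
import json
from pathlib import Path

NOTIFY_FILE = Path(".restart_notify.json")

def persist_restart_requester(platform, chat_id, thread_id=None):
    """Write the requester's routing info before the gateway restarts."""
    NOTIFY_FILE.write_text(json.dumps(
        {"platform": platform, "chat_id": chat_id, "thread_id": thread_id}))

def pop_restart_requester():
    """Read and delete the routing info, if a restart was pending."""
    if not NOTIFY_FILE.exists():
        return None
    info = json.loads(NOTIFY_FILE.read_text())
    NOTIFY_FILE.unlink()  # clean up so we notify only once
    return info
```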
2026-04-12 22:23:48 -07:00
Teknium
b22663ea69 docs: restore Orchestra Research attribution in research-paper-writing skill (#8800)
PR #4654 replaced ml-paper-writing with research-paper-writing, preserving
the writing philosophy and reference files but dropping the dedicated
'Sources Behind This Guidance' attribution table from the SKILL.md body.

Re-adds:
- The researcher attribution table (Nanda, Farquhar, Gopen & Swan, Lipton,
  Steinhardt, Perez, Karpathy) with affiliations and links to SKILL.md
- Orchestra Research credit as original compiler of the writing philosophy
- 'Origin & Attribution' section in sources.md documenting the full chain:
  Nanda blog → Orchestra skill → teknium integration → SHL0MS expansion
2026-04-12 22:03:18 -07:00
Teknium
83ca0844f7 fix: preserve dots in model names for OpenCode Zen and ZAI providers (#8794)
OpenCode Zen was in _DOT_TO_HYPHEN_PROVIDERS, causing all dotted model
names (minimax-m2.5-free, gpt-5.4, glm-5.1) to be mangled. The fix:

Layer 1 (model_normalize.py): Remove opencode-zen from the blanket
dot-to-hyphen set. Add an explicit block that preserves dots for
non-Claude models while keeping Claude hyphenated (Zen's Claude
endpoint uses anthropic_messages mode which expects hyphens).

Layer 2 (run_agent.py _anthropic_preserve_dots): Add opencode-zen and
zai to the provider allowlist. Broaden URL check from opencode.ai/zen/go
to opencode.ai/zen/ to cover both Go and Zen endpoints. Add bigmodel.cn
for ZAI URL detection.

Also adds glm-5.1 to ZAI model lists in models.py and setup.py.

Closes #7710

Salvaged from contributions by:
- konsisumer (PR #7739, #7719)
- DomGrieco (PR #8708)
- Esashiero (PR #7296)
- sharziki (PR #7497)
- XiaoYingGee (PR #8750)
- APTX4869-maker (PR #8752)
- kagura-agent (PR #7157)
2026-04-12 21:22:59 -07:00
Teknium
a0cd2c5338 fix(gateway): verbose tool progress no longer truncates args when tool_preview_length is 0 (#8735)
When tool_preview_length is 0 (default for platforms without a tier
default, like Session), verbose mode was truncating args JSON to 200
characters.  Since the user explicitly opted into verbose mode, they
expect full tool call detail — the 200-char cap defeated the purpose.

Now: tool_preview_length=0 means no truncation in verbose mode.
Positive values still cap as before.  Platform message-length limits
handle overflow naturally.
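The truncation rule above reduces to a few lines (names illustrative): zero means "no cap" in verbose mode, positive values cap as before.

```python
def preview_args(args_json: str, tool_preview_length: int) -> str:
    """Preview tool-call args; a limit of 0 disables truncation."""
    if tool_preview_length <= 0:
        return args_json  # verbose opt-in: show everything
    if len(args_json) <= tool_preview_length:
        return args_json
    return args_json[:tool_preview_length] + "..."
```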
2026-04-12 20:05:12 -07:00
Teknium
3636f64540 fix: resolve npm audit vulnerabilities in browser tools and whatsapp bridge (#8745)
* fix(telegram): use UTF-16 code units for message length splitting

Port from nearai/ironclaw#2304: Telegram's 4096 character limit is
measured in UTF-16 code units, not Unicode codepoints. Characters
outside the Basic Multilingual Plane (emoji like 😀, CJK Extension B,
musical symbols) are surrogate pairs: 1 Python char but 2 UTF-16 units.

Previously, truncate_message() used Python's len() which counts
codepoints. This could produce chunks exceeding Telegram's actual limit
when messages contain many astral-plane characters.

Changes:
- Add utf16_len() helper and _prefix_within_utf16_limit() for
  UTF-16-aware string measurement and truncation
- Add _custom_unit_to_cp() binary-search helper that maps a custom-unit
  budget to the largest safe codepoint slice position
- Update truncate_message() to accept optional len_fn parameter
- Telegram adapter now passes len_fn=utf16_len when splitting messages
- Fix fallback truncation in Telegram error handler to use
  _prefix_within_utf16_limit instead of codepoint slicing
- Update send_message_tool.py to use utf16_len for Telegram platform
- Add comprehensive tests: utf16_len, _prefix_within_utf16_limit,
  truncate_message with len_fn (emoji splitting, content preservation,
  code block handling)
- Update mock lambdas in reply_mode tests to accept **kw for len_fn

* fix: resolve npm audit vulnerabilities in browser tools and whatsapp bridge

Browser tools (agent-browser):
- Override lodash to 4.18.1 (fixes prototype pollution CVEs in transitive
  dep via node-simctl → @appium/logger). Not reachable in Hermes's code
  path but cleans the audit report.
- basic-ftp and brace-expansion updated via npm audit fix.

WhatsApp bridge:
- file-type updated (fixes infinite loop in ASF parser + ZIP bomb DoS)
- music-metadata updated (fixes infinite loop in ASF parser)
- path-to-regexp updated (fixes ReDoS, mitigated by localhost binding)

Both components now report 0 npm vulnerabilities.

Ref: https://gist.github.com/jacklevin74/b41b710d3e20ba78fb7e2d42e2b83819
2026-04-12 19:38:20 -07:00
Teknium
15b1a3aa69 fix: improve WhatsApp UX — chunking, formatting, streaming (#8723)
Three changes that address the poor WhatsApp experience reported by users:

1. Reclassify WhatsApp from TIER_LOW to TIER_MEDIUM in display_config.py
   — enables streaming and tool progress via the existing Baileys /edit
   bridge endpoint. Users now see progressive responses instead of
   minutes of silence followed by a wall of text.

2. Lower MAX_MESSAGE_LENGTH from 65536 to 4096 and add proper chunking
   — send() now calls format_message() and truncate_message() before
   sending, then loops through chunks with a small delay between them.
   The base class truncate_message() already handles code block boundary
   detection (closes/reopens fences at chunk boundaries). reply_to is
   only set on the first chunk.

3. Override format_message() with WhatsApp-specific markdown conversion
   — converts **bold** to *bold*, ~~strike~~ to ~strike~, headers to
   bold text, and [links](url) to text (url). Code blocks and inline
   code are protected from conversion via placeholder substitution.

Together these fix the two user complaints:
- 'sends the whole code all the time' → now chunked at 4K with proper
  formatting
- 'terminal gets interrupted and gets cooked' → streaming + tool progress
  give visual feedback so users don't accidentally interrupt with
  follow-up messages
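A simplified sketch of the markdown conversions in change 3. The real implementation also protects code blocks and inline code via placeholder substitution; that step is omitted here for brevity.

```python
import re

def to_whatsapp_markdown(text: str) -> str:
    """Convert common markdown to WhatsApp's dialect (no code protection)."""
    text = re.sub(r"\*\*(.+?)\*\*", r"*\1*", text)          # **bold** -> *bold*
    text = re.sub(r"~~(.+?)~~", r"~\1~", text)              # ~~strike~~ -> ~strike~
    text = re.sub(r"^#{1,6}\s+(.+)$", r"*\1*",
                  text, flags=re.M)                         # headers -> bold
    text = re.sub(r"\[([^\]]+)\]\(([^)]+)\)", r"\1 (\2)",
                  text)                                     # [text](url) -> text (url)
    return text
```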
2026-04-12 19:20:13 -07:00
Teknium
5fae356a85 fix: show full last assistant response when resuming a session (#8724)
When resuming a session with --resume or -c, the last assistant response
was truncated to 200 chars / 3 lines just like older messages in the recap.
This forced users to waste tokens re-asking for the response.

Now the last assistant message in the recap is shown in full with non-dim
styling, so users can see exactly where they left off. Earlier messages
remain truncated for compact display.

Changes:
- Track un-truncated text for the last assistant entry during collection
- Replace last entry with full text after history trimming
- Render last assistant entry with bold (non-dim) styling
- Update existing truncation tests to use multi-message histories
- Add new tests for full last response display (char + multiline)
2026-04-12 19:07:14 -07:00
Teknium
9e992df8ae fix(telegram): use UTF-16 code units for message length splitting (#8725)
Port from nearai/ironclaw#2304: Telegram's 4096 character limit is
measured in UTF-16 code units, not Unicode codepoints. Characters
outside the Basic Multilingual Plane (emoji like 😀, CJK Extension B,
musical symbols) are surrogate pairs: 1 Python char but 2 UTF-16 units.

Previously, truncate_message() used Python's len() which counts
codepoints. This could produce chunks exceeding Telegram's actual limit
when messages contain many astral-plane characters.

Changes:
- Add utf16_len() helper and _prefix_within_utf16_limit() for
  UTF-16-aware string measurement and truncation
- Add _custom_unit_to_cp() binary-search helper that maps a custom-unit
  budget to the largest safe codepoint slice position
- Update truncate_message() to accept optional len_fn parameter
- Telegram adapter now passes len_fn=utf16_len when splitting messages
- Fix fallback truncation in Telegram error handler to use
  _prefix_within_utf16_limit instead of codepoint slicing
- Update send_message_tool.py to use utf16_len for Telegram platform
- Add comprehensive tests: utf16_len, _prefix_within_utf16_limit,
  truncate_message with len_fn (emoji splitting, content preservation,
  code block handling)
- Update mock lambdas in reply_mode tests to accept **kw for len_fn
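Minimal equivalents of the measurement helpers described above (the real ones live in the adapter code):

```python
def utf16_len(s: str) -> int:
    """Length in UTF-16 code units, which is what Telegram's 4096 limit counts."""
    # Each astral-plane codepoint (>= U+10000) is a surrogate pair: 2 units.
    return sum(2 if ord(c) >= 0x10000 else 1 for c in s)

def prefix_within_utf16_limit(s: str, limit: int) -> str:
    """Longest prefix whose UTF-16 length fits in `limit`.

    Walks codepoints, so a surrogate pair is never split in half.
    """
    units = 0
    for i, c in enumerate(s):
        units += 2 if ord(c) >= 0x10000 else 1
        if units > limit:
            return s[:i]
    return s
```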
2026-04-12 19:06:20 -07:00
Teknium
3cd6cbee5f feat: add /debug slash command for all platforms
Adds /debug as a slash command available in CLI, Telegram, Discord,
Slack, and all other gateway platforms. Uploads debug report + full
logs to paste services and returns shareable URLs.

- commands.py: CommandDef in Info category (no cli_only/gateway_only)
- gateway/run.py: async handler with run_in_executor for blocking I/O
- cli.py: dispatch in process_command to run_debug_share
2026-04-12 18:08:45 -07:00
Teknium
f724079d3b fix(gateway): reject known-weak placeholder credentials at startup
Port from openclaw/openclaw#64586: users who copy .env.example without
changing placeholder values now get a clear error at startup instead of
a confusing auth failure from the platform API. Also rejects placeholder
API_SERVER_KEY when binding to a network-accessible address.

Cherry-picked from PR #8677.
2026-04-12 18:05:41 -07:00
Teknium
c7d8d109ff fix(matrix): trust m.mentions.user_ids as authoritative mention signal
Port from openclaw/openclaw#64796: Per MSC3952 / Matrix v1.7, the
m.mentions.user_ids field is the authoritative mention signal. Clients
that populate m.mentions but don't duplicate @bot in the body text
were being silently dropped when MATRIX_REQUIRE_MENTION=true.

Cherry-picked from PR #8673.
2026-04-12 18:05:41 -07:00
Teknium
88a12af58c feat: add hermes debug share — upload debug report to pastebin (#8681)
* feat: add `hermes debug share` — upload debug report to pastebin

Adds a new `hermes debug share` command that collects system info
(via hermes dump), recent logs (agent.log, errors.log, gateway.log),
and uploads the combined report to a paste service (paste.rs primary,
dpaste.com fallback). Returns a shareable URL for support.

Options:
  --lines N    Number of log lines per file (default: 200)
  --expire N   Paste expiry in days (default: 7, dpaste.com only)
  --local      Print report locally without uploading

Files:
  hermes_cli/debug.py           - New module: paste upload + report collection
  hermes_cli/main.py            - Wire cmd_debug + argparse subparser
  tests/hermes_cli/test_debug.py - 19 tests covering upload, collection, CLI

* feat: upload full agent.log and gateway.log as separate pastes

hermes debug share now uploads up to 3 pastes:
  1. Summary report (system info + log tails) — always
  2. Full agent.log (last ~500KB) — if file exists
  3. Full gateway.log (last ~500KB) — if file exists

Each paste uploads independently; log upload failures are noted
but don't block the main report. Output shows all links aligned:

  Report     https://paste.rs/abc
  agent.log  https://paste.rs/def
  gateway.log https://paste.rs/ghi

Also adds _read_full_log() with size-capped tail reading to stay
within paste service limits (~512KB per file).

* feat: prepend hermes dump to each log paste for self-contained context

Each paste (agent.log, gateway.log) now starts with the hermes dump
output so clicking any single link gives full system context without
needing to cross-reference the summary report.

Refactored dump capture into _capture_dump() — called once and
reused across the summary report and each log paste.

* fix: fall back to .1 rotated log when primary log is missing or empty

When gateway.log (or agent.log) doesn't exist or is empty, the debug
share now checks for the .1 rotation file. This is common — the
gateway rotates logs and the primary file may not exist yet.

Extracted _resolve_log_path() to centralize the fallback logic for
both _read_log_tail() and _read_full_log().

* chore: remove unused display_hermes_home import
2026-04-12 18:05:14 -07:00
Teknium
bcad679799 fix(api_server): normalize array-based content parts in chat completions
Some OpenAI-compatible clients (Open WebUI, LobeChat, etc.) send
message content as an array of typed parts instead of a plain string:

    [{"type": "text", "text": "hello"}]

The agent pipeline expects strings, so these array payloads caused
silent failures or empty messages.

Add _normalize_chat_content() with defensive limits (recursion depth,
list size, output length) and apply it to both the Chat Completions
and Responses API endpoints. The Responses path had inline
normalization that only handled input_text/output_text — the shared
function also handles the standard 'text' type.

Salvaged from PR #7980 (ikelvingo) — only the content normalization;
the SSE and Weixin changes in that PR were regressions and are not
included.

Co-authored-by: ikelvingo <ikelvingo@users.noreply.github.com>
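A minimal sketch of the normalization described above; the real `_normalize_chat_content()` also enforces a recursion-depth limit, and the exact limits here are illustrative.

```python
def normalize_chat_content(content, max_parts=64, max_len=100_000):
    """Flatten OpenAI-style typed content parts into a plain string."""
    if isinstance(content, str):
        return content
    if isinstance(content, list):
        texts = []
        for part in content[:max_parts]:  # defensive cap on list size
            if isinstance(part, dict) and part.get("type") in (
                    "text", "input_text", "output_text"):
                texts.append(str(part.get("text", "")))
        return "\n".join(texts)[:max_len]  # defensive cap on output length
    return ""
```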
2026-04-12 18:03:16 -07:00
AaronWong1999
e8385f6f89 docs: add HermesClaw to community ecosystem
Adds a one-line entry for HermesClaw (community WeChat bridge) to the Community section. It lets users run Hermes Agent and OpenClaw on the same WeChat account.
2026-04-12 18:03:16 -07:00
Sicheng Li
ea2829ab43 fix(weixin,wecom,matrix): respect system proxy via aiohttp trust_env
aiohttp.ClientSession defaults to trust_env=False, ignoring HTTP_PROXY/
HTTPS_PROXY env vars. This causes QR login and all API calls to fail for
users behind a proxy (e.g. Clash in fake-ip mode), which is common in
China where Weixin and WeCom are primarily used.

Added trust_env=True to all aiohttp.ClientSession instantiations that
connect to external hosts (weixin: 3 places, wecom: 1, matrix: 1).
WhatsApp sessions are excluded as they only connect to localhost.

httpx-based adapters (dingtalk, signal, wecom_callback) are unaffected
as httpx defaults to trust_env=True.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-12 18:03:16 -07:00
Teknium
bc4e2744c3 test: add tests for compression config_context_length passthrough
- Test that auxiliary.compression.context_length from config is forwarded
  to get_model_context_length (positive case)
- Test that invalid/non-integer config values are silently ignored
- Fix _make_agent() to set config=None (cherry-picked code reads self.config)
2026-04-12 17:52:34 -07:00
ygd58
4a9c356559 fix(compression): pass configured context_length to feasibility check
_check_compression_model_feasibility() called get_model_context_length()
without passing config_context_length, so custom endpoints that do not
support /models API queries always fell through to the 128K default,
ignoring auxiliary.compression.context_length in config.yaml.

Fix: read auxiliary.compression.context_length from config and pass it
as config_context_length (highest-priority hint) so the user-configured
value is always respected regardless of API availability.

Fixes #8499
2026-04-12 17:52:34 -07:00
Teknium
0d0d27d45e test(tts): add speed config tests for Edge, OpenAI, and MiniMax
12 tests covering:
- Provider-specific speed overrides global speed
- Global speed used as fallback
- Default (no speed) preserves existing behavior
- Edge SSML rate string conversion (positive/negative)
- OpenAI speed clamping to 0.25-4.0 range
2026-04-12 16:46:18 -07:00
0xbyt4
8ec0656f53 feat(tts): add speed support for Edge TTS and OpenAI TTS
Read tts.speed (global) or tts.<provider>.speed (provider-specific) from
config. Provider-specific takes precedence over global.

- Edge TTS: converts speed float to SSML prosody rate string
- OpenAI TTS: passes speed param clamped to 0.25-4.0
- MiniMax: wired into global tts.speed fallback for consistency

Co-authored-by: 0xbyt4 <0xbyt4@users.noreply.github.com>
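The two provider-specific conversions can be sketched as follows (helper names are illustrative):

```python
def openai_speed(speed: float) -> float:
    """Clamp a speed multiplier to the 0.25-4.0 range OpenAI TTS accepts."""
    return max(0.25, min(4.0, speed))

def edge_ssml_rate(speed: float) -> str:
    """Convert a speed multiplier to an SSML prosody rate percentage string."""
    pct = round((speed - 1.0) * 100)
    return f"{pct:+d}%"  # e.g. 1.2 -> "+20%", 0.8 -> "-20%"
```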
2026-04-12 16:46:18 -07:00
Teknium
651419b014 fix: make mimo-v2-pro the default model for Nous portal users
Users who set up Nous auth without explicitly selecting a model via
`hermes model` were silently falling back to anthropic/claude-opus-4.6
(the first entry in _PROVIDER_MODELS['nous']), causing unexpected
charges on their Nous plan. Move xiaomi/mimo-v2-pro to the first
position so unconfigured users default to a free model instead.
2026-04-12 16:44:03 -07:00
Teknium
a266238e1e fix(weixin): streaming cursor, media uploads, markdown links, blank messages (#8665)
Four fixes for the Weixin/WeChat adapter, synthesized from the best
aspects of community PRs #8407, #8521, #8360, #7695, #8308, #8525,
#7531, #8144, #8251.

1. Streaming cursor (▉) stuck permanently — WeChat doesn't support
   message editing, so the cursor appended during streaming can never
   be removed.  Add SUPPORTS_MESSAGE_EDITING = False to WeixinAdapter
   and check it in gateway/run.py to use an empty cursor for non-edit
   platforms.  (Fixes #8307, #8326)

2. Media upload failures — two bugs in _send_file():
   a) upload_full_url path used PUT (404 on WeChat CDN); now uses POST.
   b) aes_key was base64(raw_bytes) but the iLink API expects
      base64(hex_string); images showed as grey boxes.  (Fixes #8352, #7529)
   Also: unified both upload paths into _upload_ciphertext(), preferring
   upload_full_url.  Added send_video/send_voice methods and voice_item
   media builder for audio/.silk files.  Added video_md5 field.

3. Markdown links stripped — WeChat can't render [text](url), so
   format_message() now converts them to 'text (url)' plaintext.
   Code blocks are preserved.  (Fixes #7617)

4. Blank message prevention — three guards:
   a) _split_text_for_weixin_delivery('') returns [] not ['']
   b) send() filters empty/whitespace chunks before _send_text_chunk
   c) _send_message() raises ValueError for empty text as safety net

Community credit: joei4cm (#8407), lyonDan (#8521), SKFDJKLDG (#8360),
tomqiaozc (#7695), joshleeeeee (#8308), luoxiao6645 (#8525),
longsizhuo (#7531), Astral-Yang (#8144), QingWei-Li (#8251).
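Guards (a) and (b) from fix 4 can be sketched as follows; the split limit and function names are illustrative.

```python
def split_text_for_delivery(text: str, limit: int = 2000) -> list:
    """Split text into chunks; an empty string yields [], not ['']."""
    if not text:
        return []
    return [text[i:i + limit] for i in range(0, len(text), limit)]

def send(text: str, send_chunk) -> int:
    """Send chunks, dropping empty/whitespace ones; returns chunks sent."""
    sent = 0
    for chunk in split_text_for_delivery(text):
        if not chunk.strip():
            continue  # guard (b): never deliver a whitespace-only chunk
        send_chunk(chunk)
        sent += 1
    return sent
```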
2026-04-12 16:43:25 -07:00
Teknium
c83674dd77 fix: unify OpenClaw detection, add isatty guard, fix print_warning import
Combines detection from both PRs into _detect_openclaw_processes():
- Cross-platform process scan (pgrep/tasklist/PowerShell) from PR #8102
- systemd service check from PR #8555
- Returns list[str] with details about what's found

Fixes in cleanup warning (from PR #8555):
- print_warning -> print_error/print_info (print_warning not in import chain)
- Added isatty() guard for non-interactive sessions
- Removed duplicate _check_openclaw_running() in favor of shared function

Updated all tests to match new API.
2026-04-12 16:40:37 -07:00
Serhat Dolmac
76f7411fca fix(claw): warn and prompt if OpenClaw is still running before archival (fixes #8502) 2026-04-12 16:40:37 -07:00
dirtyfancy
9fb36738a7 fix(claw): address Copilot review on Windows detection and non-interactive prompt
- Use PowerShell to inspect node.exe command lines on Windows,
  since tasklist output does not include them.
- Also check for dedicated openclaw.exe/clawd.exe processes.
- Skip the interactive prompt in non-interactive sessions so the
  preview-only behavior is preserved.
- Update tests accordingly.

Relates to #7907
2026-04-12 16:40:37 -07:00
dirtyfancy
5af9614f6d fix(claw): warn if OpenClaw is running before migration
Add _is_openclaw_running() and _warn_if_openclaw_running() to detect
OpenClaw processes (via pgrep/tasklist) before hermes claw migrate.
Warns the user that messaging platforms only allow one active session
per bot token, and lets them cancel or continue.

Fixes #7907
2026-04-12 16:40:37 -07:00
Teknium
76019320fb feat(skills): centralized skills index — eliminate GitHub API calls for search/install
Add a CI-built skills index served from the docs site. The index is
crawled daily by GitHub Actions, resolves all GitHub paths upfront, and
is cached locally by the client. When the index is available:

- Search uses the cached index (0 GitHub API calls, was 23+)
- Install uses resolved paths from index (6 API calls for file
  downloads only, was 31-45 for discovery + downloads)

Total: 68 → 6 GitHub API calls for a typical search + install flow.
Unauthenticated users (60 req/hr) can now search and install without
hitting rate limits.

Components:
- scripts/build_skills_index.py: Crawl all sources (skills.sh, GitHub
  taps, official, clawhub, lobehub), batch-resolve GitHub paths via
  tree API, output JSON index
- tools/skills_hub.py: HermesIndexSource class — search/fetch/inspect
  backed by the index, with lazy GitHubSource for file downloads
- parallel_search_sources() skips external API sources when index is
  available (0 GitHub calls for search)
- .github/workflows/skills-index.yml: twice-daily CI build + deploy
- .github/workflows/deploy-site.yml: also builds index during docs deploy

Graceful degradation: when the index is unavailable (first run, network
down, stale), all methods return empty/None and downstream sources
handle the request via direct API as before.
2026-04-12 16:39:04 -07:00
Teknium
7e0e5ea03b fix(skills): cache GitHub repo trees to avoid rate-limit exhaustion on install
Skills.sh installs hit the GitHub API 45 times per install because the
same repo tree was fetched 6 times redundantly. Combined with search
(23 API calls), this totals 68 — exceeding the unauthenticated rate
limit of 60 req/hr, causing 'Could not fetch' errors for users without
a GITHUB_TOKEN.

Changes:
- Add _get_repo_tree() cache to GitHubSource — repo info + recursive
  tree fetched once per repo per source instance, eliminating 10
  redundant API calls (6 tree + 4 candidate 404s)
- _download_directory_via_tree returns {} (not None) when cached tree
  shows path doesn't exist, skipping unnecessary Contents API fallback
- _check_rate_limit_response() detects exhausted quota and sets
  is_rate_limited flag
- do_install() shows actionable hint when rate limited: set
  GITHUB_TOKEN or install gh CLI

Before: 45 API calls per install (68 total with search)
After:  31 API calls per install (54 total with search — under 60/hr)

Reported by community user from Vietnam (no GitHub auth configured).
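The per-instance caching pattern above can be sketched as follows; `fetch_tree` stands in for the real GitHub tree-API call.

```python
class GitHubSource:
    """Illustrative: fetch each repo's tree at most once per instance."""

    def __init__(self, fetch_tree):
        self._fetch_tree = fetch_tree
        self._tree_cache = {}  # repo -> tree, one API call per repo

    def get_repo_tree(self, repo: str):
        if repo not in self._tree_cache:
            self._tree_cache[repo] = self._fetch_tree(repo)
        return self._tree_cache[repo]
```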
2026-04-12 16:39:04 -07:00
Teknium
4c6ebd077e chore: sync uv.lock with matrix extra deps (aiosqlite, asyncpg) (#8661)
These were already declared in pyproject.toml but missing from the lockfile.
2026-04-12 16:38:15 -07:00
alt-glitch
5e1197a42e fix(gateway): harden Docker/container gateway pathway
Centralize container detection in hermes_constants.is_container() with
process-lifetime caching, matching existing is_wsl()/is_termux() patterns.
Dedup _is_inside_container() in config.py to delegate to the new function.

Add _run_systemctl() wrapper that converts FileNotFoundError to RuntimeError
for defense-in-depth — all 10 bare subprocess.run(_systemctl_cmd(...)) call
sites now route through it.

Make supports_systemd_services() return False in containers and when
systemctl binary is absent (shutil.which check).

Add Docker-specific guidance in gateway_command() for install/uninstall/start
subcommands — exit 0 with helpful instructions instead of crashing.

Make 'hermes status' show 'Manager: docker (foreground)' and 'hermes dump'
show 'running (docker, pid N)' inside containers.

Fix setup_gateway() to use supports_systemd instead of _is_linux for all
systemd-related branches, and show Docker restart policy instructions in
containers.

Replace inline /.dockerenv check in voice_mode.py with is_container().

Fixes #7420

Co-authored-by: teknium1 <teknium1@users.noreply.github.com>
2026-04-12 16:36:11 -07:00
sprmn24
18ab5c99d1 fix(backup): correct marker filenames in _validate_backup_zip
The backup validation checked for 'hermes_state.db' and 'memory_store.db'
as telltale markers of a valid Hermes backup zip. Neither name exists in a
real Hermes installation — the actual database file is 'state.db'
(hermes_state.py: DEFAULT_DB_PATH = get_hermes_home() / 'state.db').

A fresh Hermes installation produces:
  ~/.hermes/state.db        (actual name)
  ~/.hermes/config.yaml
  ~/.hermes/.env

Because the marker set never matched 'state.db', a backup zip containing
only 'state.db' plus 'config.yaml' would fail validation with:
  'zip does not appear to be a Hermes backup'
and the import would exit with sys.exit(1), silently rejecting a valid backup.

Fix: replace the wrong marker names with the correct filename.

Adds TestValidateBackupZip with three cases:
- state.db is accepted as a valid marker
- old wrong names (hermes_state.db, memory_store.db) alone are rejected
- config.yaml continues to pass (existing behaviour preserved)
2026-04-12 16:35:56 -07:00
Teknium
d6785dc4d4 fix: empty response recovery for reasoning models (mimo, qwen, GLM) (#8609)
Three fixes for the (empty) response bug affecting open reasoning models:

1. Allow retries after prefill exhaustion — models like mimo-v2-pro always
   populate reasoning fields via OpenRouter, so the old 'not _has_structured'
   guard on the retry path blocked retries for EVERY reasoning model after
   the 2 prefill attempts.  Now: 2 prefills + 3 retries = 6 total attempts
   before (empty).

2. Reset prefill/retry counters on tool-call recovery — the counters
   accumulated across the entire conversation, never resetting during
   tool-calling turns.  A model cycling empty→prefill→tools→empty burned
   both prefill attempts and the third empty got zero recovery.  Now
   counters reset when prefill succeeds with tool calls.

3. Strip think blocks before _truly_empty check — inline <think> content
   made the string non-empty, skipping both retry paths.

Reported by users on Telegram with xiaomi/mimo-v2-pro and qwen3.5 models.
Reproduced: qwen3.5-9b emits tool calls as XML in reasoning field instead
of proper function calls, causing content=None + tool_calls=None + reasoning
with embedded <tool_call> XML.  Prefill recovery works but counter
accumulation caused permanent (empty) in long sessions.
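Fix 3 above reduces to a small check (the regex and name are illustrative): strip inline think blocks before deciding whether a response is truly empty.

```python
import re

_THINK_RE = re.compile(r"<think>.*?</think>", re.DOTALL)

def truly_empty(content) -> bool:
    """True when the response has no user-visible text after removing
    inline <think> blocks."""
    if content is None:
        return True
    visible = _THINK_RE.sub("", content)
    return not visible.strip()
```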
2026-04-12 15:38:11 -07:00
Teknium
a4593f8b21 feat: make gateway 'still working' notification interval configurable (#8572)
Add agent.gateway_notify_interval config option (default 600s).
Set to 0 to disable periodic 'still working' notifications.
Bridged to HERMES_AGENT_NOTIFY_INTERVAL env var (same pattern as
gateway_timeout and gateway_timeout_warning).

The inactivity warning (gateway_timeout_warning) was already
configurable; this makes the wall-clock ping configurable too.
2026-04-12 13:06:34 -07:00
Teknium
1179918746 fix: salvage follow-ups for Feishu QR onboarding (#7706)
- Remove duplicate _setup_feishu() definition (old 3-line version left
  behind by a cherry-pick — Python used the newer definition, but the
  dead code remained)
- Remove misleading 'Disable direct messages' DM option — the Feishu
  adapter has no DM policy mechanism, so 'disable' produced identical
  env vars to 'pairing'. Users who chose 'disable' would still see
  pairing prompts. Reduced to 3 options: pairing, allow-all, allowlist.
- Fix test_probe_returns_bot_info_on_success and
  test_probe_returns_none_on_failure: patch FEISHU_AVAILABLE=True so
  probe_bot() takes the SDK path even when lark_oapi is not installed
2026-04-12 13:05:56 -07:00
Shuo
d7785f4d5b feat(feishu): add scan-to-create onboarding for Feishu / Lark
Add a QR-based onboarding flow to `hermes gateway setup` for Feishu / Lark.
Users scan a QR code with their phone and the platform creates a fully
configured bot application automatically — matching the existing WeChat
QR login experience.

Setup flow:
- Choose between QR scan-to-create (new app) or manual credential input (existing app)
- Connection mode selection (WebSocket / Webhook)
- DM security policy (pairing / open / allowlist / disabled)
- Group chat policy (open with @mention / disabled)

Implementation:
- Onboard functions (init/begin/poll/QR/probe) in gateway/platforms/feishu.py
- _setup_feishu() in hermes_cli/gateway.py with manual fallback
- probe_bot uses lark_oapi SDK when available, raw HTTP fallback otherwise
- qr_register() catches expected errors (network/protocol), propagates bugs
- Poll handles HTTP 4xx JSON responses and feishu/lark domain auto-detection

Tests:
- 25 tests for onboard module (registration, QR, probe, contract, negative paths)
- 16 tests for setup flow (credentials, connection mode, DM policy, group policy,
  adapter integration verifying env vars produce valid FeishuAdapterSettings)

Change-Id: I720591ee84755f32dda95fbac4b26dc82cbcf823
2026-04-12 13:05:56 -07:00
Teknium
a9ebb331bc fix: contextual error diagnostics for invalid API responses (#8565)
Previously, all invalid API responses (choices=None) were diagnosed
as 'fast response often indicates rate limiting' regardless of actual
response time or error code. A 738s Cloudflare 524 timeout was labeled
as 'fast response' and 'possible rate limit'.

Now extracts the error code from response.error and classifies:
- 524: upstream provider timed out (Cloudflare)
- 504: upstream gateway timeout
- 429: rate limited by upstream provider
- 500/502: upstream server error
- 503/529: upstream provider overloaded
- Other codes: shown with code number
- No code + <10s: likely rate limited (timing heuristic)
- No code + >60s: likely upstream timeout
- No code + 10-60s: neutral response time

All downstream messages (retry status, final error, interrupt message)
now use the classified hint instead of generic rate-limit language.

Reported by community member Lumen Radley (MiMo provider timeouts).
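The classification described in this commit can be sketched as a small helper (names and hint wording here are illustrative, not the actual function in the codebase):

```python
def classify_invalid_response(code, elapsed_s):
    """Map an upstream error code (or response timing, when no code is
    present) to a diagnostic hint, mirroring the table above."""
    hints = {
        524: "upstream provider timed out (Cloudflare)",
        504: "upstream gateway timeout",
        429: "rate limited by upstream provider",
        500: "upstream server error",
        502: "upstream server error",
        503: "upstream provider overloaded",
        529: "upstream provider overloaded",
    }
    if code is not None:
        # Unknown codes are still surfaced with their number.
        return hints.get(code, f"upstream error (code {code})")
    # No code: fall back to the timing heuristic.
    if elapsed_s < 10:
        return "likely rate limited (fast failure)"
    if elapsed_s > 60:
        return "likely upstream timeout"
    return "no error code; response time inconclusive"
```

A 738s failure with code 524 now maps to the timeout hint instead of the old blanket "fast response often indicates rate limiting" message.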
2026-04-12 13:00:07 -07:00
Teknium
400fe9b2a1 fix: add <thought> stripping to auxiliary_client + tests
auxiliary_client.py had its own regex mirroring _strip_think_blocks
but was missing the <thought> variant. Also adds test coverage for
<thought> paired and orphaned tags.
2026-04-12 12:44:49 -07:00
Chen Chia Yang
326d5febe5 fix: also strip <thought> tags during streaming in cli.py 2026-04-12 12:44:49 -07:00
Chen Chia Yang
a372c14fc5 fix: strip <thought> tags from Gemma 4 responses in _strip_think_blocks
Gemma 4 (26B/31B) uses <thought>...</thought> to wrap its reasoning
output. This tag was not included in the existing list of reasoning tag
variants stripped by _strip_think_blocks(), causing raw thinking blocks
to leak into the visible response.

Added a new re.sub() line for <thought> and extended the cleanup regex
to include 'thought' alongside the existing variants.

Fixes #6148
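A minimal sketch of the stripping logic (the real _strip_think_blocks handles several tag variants; this shows only the new <thought> case, including orphaned tags from truncated streams):

```python
import re

def strip_thought_blocks(text: str) -> str:
    # Remove paired <thought>...</thought> blocks, spanning newlines.
    text = re.sub(r"<thought>.*?</thought>", "", text, flags=re.DOTALL)
    # Drop orphaned open/close tags left behind by cut-off output.
    text = re.sub(r"</?thought>", "", text)
    return text.strip()
```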
2026-04-12 12:44:49 -07:00
Teknium
f295b17d92 fix: make agent_thread daemon to prevent orphan CLI processes on tab close (#8557)
When a user closes a terminal tab, SIGHUP exits the main thread but
the non-daemon agent_thread kept the entire Python process alive —
stuck in the API call loop with no interrupt signal. Over many
conversations, these orphan processes accumulate and cause massive
swap usage (reported: 77GB on a 32GB M1 Pro).

Changes:
- Make agent_thread daemon=True so the process exits when the main
  thread finishes its cleanup. Under normal operation this changes
  nothing — the main thread already waits on agent_thread.is_alive().
- Interrupt the agent in the finally/exit path so the daemon thread
  stops making API calls promptly rather than being killed mid-flight.
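The daemon flag's effect can be shown in isolation (hypothetical worker loop, not the actual agent code): a daemon thread no longer keeps the interpreter alive after the main thread exits, and the explicit stop signal mirrors interrupting the agent on the exit path so it stops promptly under normal shutdown too.

```python
import threading
import time

stop = threading.Event()

def agent_loop():
    # Stand-in for the API call loop; checks an interrupt flag each tick.
    while not stop.is_set():
        time.sleep(0.01)

# daemon=True: if the main thread dies (e.g. SIGHUP on tab close),
# this thread cannot keep the whole process alive.
agent_thread = threading.Thread(target=agent_loop, daemon=True)
agent_thread.start()

# Normal exit path: signal the loop, then wait for it to finish.
stop.set()
agent_thread.join(timeout=1)
```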
2026-04-12 12:38:55 -07:00
Teknium
06290f6a2f fix: handle broken stdin in prompt_toolkit startup (#6393) (#8560)
On macOS with uv-managed Python, stdin (fd 0) can be invalid or
unregisterable with the asyncio selector, causing:

  KeyError: '0 is not registered'

during prompt_toolkit's app.run() → asyncio.run() → _add_reader(0).

Three-layer fix:
1. Pre-flight fstat(0) check before app.run() — detects broken stdin
   early and prints actionable guidance instead of a raw traceback.
2. Catch KeyError/OSError around app.run() as fallback for edge cases
   that slip past the fstat guard.
3. Extend asyncio exception handler to suppress selector registration
   KeyErrors in async callbacks.

Fixes #6393
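Layer 1 amounts to a few lines; the sketch below generalizes the guard to any fd for illustration (the real check targets fd 0 and prints guidance on failure):

```python
import os

def fd_is_usable(fd: int = 0) -> bool:
    """Pre-flight check: return False when the descriptor is closed or
    invalid, so the caller can print actionable guidance instead of
    letting prompt_toolkit raise KeyError from the asyncio selector."""
    try:
        os.fstat(fd)
        return True
    except OSError:
        return False
```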
2026-04-12 12:38:03 -07:00
Teknium
06a17c57ae fix: improve profile creation UX — seed SOUL.md + credential warning (#8553)
Fresh profiles (created without --clone) now:
- Auto-seed a default SOUL.md immediately, so users have a file to
  customize right away instead of discovering it only after first use
- Print a clear warning that the profile has no API keys and will
  inherit from the shell environment unless configured separately
- Show the SOUL.md path for personality customization

Previously, fresh profiles started with no SOUL.md (only seeded on
first use via ensure_hermes_home), no mention of credential isolation,
and no guidance about customizing personality. Users reported confusion
about profiles using the wrong model/plan tokens and SOUL.md not
being read — both traced to operational gaps in the creation UX.

Closes #8093 (investigated: code correctly loads SOUL.md from profile
HERMES_HOME; issue was operational, not a code bug).
2026-04-12 12:22:34 -07:00
Teknium
4eecaf06e4 fix: prevent duplicate update prompt spam in gateway watcher (#8343)
The _watch_update_progress() poll loop never deleted .update_prompt.json
after forwarding the prompt to the user, causing the same prompt to be
re-sent every poll cycle (2s). Two fixes:

1. Delete .update_prompt.json after forwarding — the update process only
   polls for .update_response, it doesn't need the prompt file to persist.
2. Guard re-sends with _update_prompt_pending check — belt-and-suspenders
   to prevent duplicates even under race conditions.

Add regression test asserting the prompt is sent exactly once.
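Both fixes together, sketched with a hypothetical on-disk layout (a temp dir stands in for the gateway state directory, a list stands in for forwarding to the user):

```python
import json
import tempfile
from pathlib import Path

state_dir = Path(tempfile.mkdtemp())
prompt_file = state_dir / ".update_prompt.json"
sent_prompts = []                      # stand-in for "forward to the user"
_update_prompt_pending = False

def poll_once():
    """One watcher poll cycle (runs every 2s in the real loop)."""
    global _update_prompt_pending
    if prompt_file.exists() and not _update_prompt_pending:
        _update_prompt_pending = True  # fix 2: guard against racy re-sends
        sent_prompts.append(json.loads(prompt_file.read_text()))
        prompt_file.unlink()           # fix 1: delete after forwarding
```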
2026-04-12 04:52:59 -07:00
Teknium
7a67b13506 fix: title_generator no longer logs as 'compression' task
Changed task='compression' to task='title_generation' so auto-title
calls don't pollute logs with false compression alarms.
2026-04-12 04:17:18 -07:00
Teknium
45e60904c6 fix: fall back to provider's default model when model config is empty (#8303)
When a user configures a provider (e.g. `hermes auth add openai-codex`)
but never selects a model via `hermes model`, the gateway and CLI would
pass an empty model string to the API, causing:
  'Codex Responses request model must be a non-empty string'

Now both gateway (_resolve_session_agent_runtime) and CLI
(_ensure_runtime_credentials) detect an empty model and fill it from
the provider's first catalog entry in _PROVIDER_MODELS. This covers
all providers that have a static model list (openai-codex, anthropic,
gemini, copilot, etc.).

The fix is conservative: it only triggers when model is truly empty
and a known provider was resolved. Explicit model choices are never
overridden.
2026-04-12 03:53:30 -07:00
Teknium
17c72f176d fix: make skill loading instructions more aggressive in system prompt (#8286)
The previous wording ('If one clearly matches') set too high a threshold,
and 'If none match, proceed normally' was an easy escape hatch for lazy
models. Now:

- Lowered threshold: 'matches or is even partially relevant'
- Added MUST directive and 'err on the side of loading' guidance
- Replaced permissive closer with 'only proceed without if genuinely none
  are relevant'

This should reduce cases where the agent skips loading relevant skills
unless explicitly forced.
2026-04-12 03:03:16 -07:00
Teknium
b6b6b02f0f fix: prevent unwanted session auto-reset after graceful gateway restarts (#8299)
When the gateway shuts down gracefully (hermes update, gateway restart,
/restart), it now writes a .clean_shutdown marker file. On the next
startup, if this marker exists, suspend_recently_active() is skipped
and the marker is cleaned up.

Previously, suspend_recently_active() fired on EVERY startup —
including planned restarts from hermes update or hermes gateway restart.
This caused users to lose their conversation history unexpectedly: the
session would be marked as suspended, and the next message would
trigger an auto-reset with a notification the user never asked for.

The original purpose of suspend_recently_active() is crash recovery —
preventing stuck sessions that were mid-processing when the gateway
died unexpectedly. Graceful shutdowns already drain active agents via
_drain_active_agents(), so there is no stuck-session risk. After a
crash (no marker written), suspension still fires as before.

Fixes the scenario where a user asks the agent to run hermes update,
the gateway restarts, and the user's next message gets an unwanted
'Session automatically reset' notification with their history cleared.
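The marker protocol is small enough to sketch (paths and function names here are illustrative; a temp dir stands in for the gateway state directory):

```python
import tempfile
from pathlib import Path

state_dir = Path(tempfile.mkdtemp())
MARKER = state_dir / ".clean_shutdown"

def graceful_shutdown():
    # Written only on planned restarts (hermes update, gateway restart,
    # /restart), after active agents are drained.
    MARKER.touch()

def should_suspend_recently_active() -> bool:
    """Crash recovery: fire only when no marker exists. A graceful
    restart consumes the marker and skips suspension exactly once."""
    if MARKER.exists():
        MARKER.unlink()  # one-shot, so a later real crash still suspends
        return False
    return True
```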
2026-04-12 03:03:07 -07:00
Teknium
56e3ee2440 fix: write update exit code before gateway restart (cgroup kill race) (#8288)
When /update runs via Telegram, hermes update --gateway is spawned inside
the gateway's systemd cgroup.  The update process itself calls
systemctl restart hermes-gateway, which tears down the cgroup with
KillMode=mixed — SIGKILL to all remaining processes.  The wrapping bash
shell is killed before it can execute the exit-code epilogue, so
.update_exit_code is never created.  The new gateway's update watcher
then polls for 30 minutes and sends a spurious timeout message.

Fix: write .update_exit_code from Python inside cmd_update() immediately
after the git pull + pip install succeed ("Update complete!"), before
attempting the gateway restart.  The shell epilogue still writes it too
(idempotent overwrite), but now the marker exists even when the process
is killed mid-restart.
2026-04-12 02:33:21 -07:00
Teknium
b321330362 feat: add WSL environment hint to system prompt (#8285)
When running inside WSL (Windows Subsystem for Linux), inject a hint into
the system prompt explaining that the Windows host filesystem is mounted
at /mnt/c/, /mnt/d/, etc. This lets the agent naturally translate Windows
paths (Desktop, Documents) to their /mnt/ equivalents without the user
needing to configure anything.

Uses the existing is_wsl() detection from hermes_constants (cached,
checks /proc/version for 'microsoft'). Adds build_environment_hints()
in prompt_builder.py — extensible for Termux, Docker, etc. later.

Closes the UX gap where WSL users had to manually explain path
translation to the agent every session.
2026-04-12 02:26:28 -07:00
Teknium
dd5b1063d0 fix: register MATRIX_RECOVERY_KEY env var + document migration path
Follow-up for cherry-picked PR #8272:
- Add MATRIX_RECOVERY_KEY to module docstring header in matrix.py
- Register in OPTIONAL_ENV_VARS (config.py) with password=True, advanced=True
- Add to _NON_SETUP_ENV_VARS set
- Document cross-signing verification in matrix.md E2EE section
- Update migration guide with recovery key step (step 3)
- Add to environment-variables.md reference
2026-04-12 02:18:03 -07:00
elkimek
b9af4955b9 fix(matrix): restore verify_with_recovery_key after device key rotation
After the PgCryptoStore migration in v0.8.0, the verify_with_recovery_key
call that previously ran after share_keys() was dropped. On any rotation
that uploads fresh device keys (fresh crypto.db, server had stale keys
from a prior install, etc.), the new device keys carry no valid self-
signing signature because the bot has no access to the self-signing
private key.

Peers like Element then refuse to share Megolm sessions with the
rotated device, so the bot silently stops decrypting incoming messages.

This restores the recovery-key bootstrap: on startup, if
MATRIX_RECOVERY_KEY is set, import the cross-signing private keys from
SSSS and sign_own_device(), producing a valid signature server-side.

Idempotent and gated on MATRIX_RECOVERY_KEY — no behavior change for
users who don't configure a recovery key.

Verified end-to-end by deleting crypto.db and restarting: the bot
rotates device identity keys, re-uploads, self-signs via recovery key,
and decrypts+replies to fresh messages from a paired Element client.
2026-04-12 02:18:03 -07:00
Ben Barclay
b0d65c333a Merge pull request #8279 from NousResearch/chore/simplify-docker-tags
chore: simplify Docker image tags
2026-04-12 19:09:05 +10:00
Ben
00adbd0de0 chore: simplify Docker image tags
- Main branch push: only push :latest (remove SHA tag)
- Release push: only push release tag name (remove :latest and SHA tag)
2026-04-12 19:08:16 +10:00
Teknium
95fa78eb6c fix: write refreshed Codex tokens back to ~/.codex/auth.json (#8277)
OpenAI OAuth refresh tokens are single-use and rotate on every refresh.
When Hermes refreshed a Codex token, it consumed the old refresh_token
but never wrote the new pair back to ~/.codex/auth.json. This caused
Codex CLI and VS Code to fail with 'refresh_token_reused' on their
next refresh attempt.

This mirrors the existing Anthropic write-back pattern where refreshed
tokens are written to ~/.claude/.credentials.json via
_write_claude_code_credentials().

Changes:
- Add _write_codex_cli_tokens() in hermes_cli/auth.py (parallel to
  _write_claude_code_credentials in anthropic_adapter.py)
- Call it from _refresh_codex_auth_tokens() (non-pool refresh path)
- Call it from credential_pool._refresh_entry() (pool happy path + retry)
- Add tests for the new write-back behavior
- Update existing test docstring to clarify _save_codex_tokens vs
  _write_codex_cli_tokens separation

Fixes refresh token conflict reported by @ec12edfae2cb221
2026-04-12 02:05:20 -07:00
Teknium
6d05e3d56f fix(gateway): evict cached agent on /model switch + add diagnostic logging (#8276)
After /model switches the model (both picker and text paths), the cached
agent's config signature becomes stale — the agent was updated in-place
via switch_model() but the cache tuple's signature was never refreshed.
The next turn *should* detect the signature mismatch and create a fresh
agent, but this relies on the new model's signature differing from the
old one in _agent_config_signature().

Evicting the cached agent explicitly after storing the session override
is more defensive — the next turn is guaranteed to create a fresh agent
from the override without depending on signature mismatch detection.

Also adds debug logging at three key decision points so we can trace
exactly what happens when /model + /retry interact:
- _resolve_session_agent_runtime: which override path is taken (fast
  with api_key vs fallback), or why no override was found
- _run_agent.run_sync: final resolved model/provider before agent
  creation

Reported: /model switch to xiaomi/mimo-v2-pro followed by /retry still
used the old model (glm-5.1).
2026-04-12 01:58:17 -07:00
Teknium
4aa534eae5 fix(gateway): peek at pending message during interrupt instead of consuming it
The monitor_for_interrupt() and backup interrupt checks were calling
get_pending_message() which pops the message from the adapter's queue.
This created a race condition: if the agent finished naturally before
checking _interrupt_requested, the pending message was permanently lost.

Timeline of the race:
1. Agent near completion, user sends message
2. Level 1 guard stores message in adapter._pending_messages, sets event
3. monitor_for_interrupt() detects event, POPS message, calls agent.interrupt()
4. Agent's run_conversation() was already returning (interrupted=False)
5. Post-run dequeue finds nothing (monitor already consumed it)
6. result.get('interrupted') is False so interrupt_message fallback doesn't fire
7. User message permanently lost — agent finishes without processing it

Fix: change all three interrupt detection sites (primary monitor + two
backup checks) from get_pending_message() (pop) to
_pending_messages.get() (peek). The message stays in the adapter's queue
until _dequeue_pending_event() consumes it in the post-run handler,
which runs regardless of whether the agent was interrupted or finished
naturally.

Reported by @_SushantSays — intermittent message loss during long
terminal command execution, persisting after the previous fix (73f970fa)
which addressed monitor task death but not this consumption race.
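The pop-vs-peek distinction in miniature (hypothetical queue keyed by chat id; the real adapter stores richer event objects):

```python
pending = {}  # chat_id -> queued user message

def enqueue(chat_id, msg):
    pending[chat_id] = msg

def peek_pending(chat_id):
    # Interrupt monitors use this: the message stays in the queue,
    # so nothing is lost if the agent finishes naturally first.
    return pending.get(chat_id)

def dequeue_pending(chat_id):
    # Only the post-run handler consumes, regardless of whether the
    # agent was interrupted or completed on its own.
    return pending.pop(chat_id, None)
```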
2026-04-12 01:57:34 -07:00
Teknium
ae6820a45a fix(setup): validate base URL input in hermes model flow (#8264)
Reject non-URL values (e.g. shell commands typed by mistake) in the
base URL prompt during provider setup. Previously any string was saved
as-is to .env, breaking connectivity when the garbage value was used
as the API endpoint.

Adds http:// / https:// prefix check with a clear error message.
The custom-endpoint flow already had this validation (line 1620);
this brings the generic API-key provider flow to parity.

Triggered by a user support case where 'nano ~/.hermes/.env' was
accidentally entered as GLM_BASE_URL during Z.AI setup.
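The validation itself is tiny (a sketch; the real flow also prints a clear error message and re-prompts):

```python
def is_valid_base_url(value: str) -> bool:
    """Accept only values that look like HTTP(S) endpoints, rejecting
    stray shell commands or paths typed into the base URL prompt."""
    return value.strip().startswith(("http://", "https://"))
```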
2026-04-12 01:51:57 -07:00
Teknium
a1220977d3 fix: make skill loading instructions more aggressive in system prompt (#8209)
The previous wording ('If one clearly matches') set too high a threshold,
and 'If none match, proceed normally' was an easy escape hatch for lazy
models. Now:

- Lowered threshold: 'matches or is even partially relevant'
- Added MUST directive and 'err on the side of loading' guidance
- Replaced permissive closer with 'only proceed without if genuinely none
  are relevant'

This should reduce cases where the agent skips loading relevant skills
unless explicitly forced.
2026-04-12 01:46:34 -07:00
Teknium
078dba015d fix: three provider-related bugs (#8161, #8181, #8147) (#8243)
- Add openai/openai-codex -> openai mapping to PROVIDER_TO_MODELS_DEV
  so context-length lookups use models.dev data instead of 128k fallback.
  Fixes #8161.

- Set api_mode from custom_providers entry when switching via hermes model,
  and clear stale api_mode when the entry has none. Also extract api_mode
  in _named_custom_provider_map(). Fixes #8181.

- Convert OpenAI image_url content blocks to Anthropic image blocks when
  the endpoint is Anthropic-compatible (MiniMax, MiniMax-CN, or any URL
  containing /anthropic). Fixes #8147.
2026-04-12 01:44:18 -07:00
Harish Kukreja
b1f13a8c5f fix(agent): route compression aux through live session runtime 2026-04-12 01:34:52 -07:00
Teknium
c52f6348b6 fix: list all available toolsets in delegate_task schema description (#8231)
* fix: list all available toolsets in delegate_task schema description

The delegate_task tool's toolsets parameter description only mentioned
'terminal', 'file', and 'web' as examples. Models (especially smaller
ones like Gemma) would substitute 'web' for 'browser' because they
didn't know 'browser' was a valid option.

Now dynamically builds the toolset list from the TOOLSETS dict at import
time, excluding blocked, composite, and platform-specific toolsets.
Auto-updates when new toolsets are added.

Reported by jeffutter on Discord.

* chore: exclude moa and rl from delegate_task toolset list
2026-04-12 00:54:35 -07:00
Teknium
3162472674 feat(tips): add 69 deeper hidden-gem tips (279 total) (#8237)
Add lesser-known power-user tips covering:
- BOOT.md gateway startup automation
- Cron script attachment for data collection pipelines
- Prefill messages for few-shot priming
- Focus topic compression (/compress <topic>)
- Terminal exit code annotations and auto-retry
- Automatic sudo password piping
- execute_code built-in helpers (json_parse, shell_quote, retry)
- File loop detection and staleness warnings
- MCP sampling and dynamic tool discovery
- Delegation heartbeat and ACP child agents (Claude Code)
- 402 auto-fallback in auxiliary client
- Container mode, HERMES_HOME_MODE, subprocess HOME isolation
- Ctrl+C 5-tier priority system
- Browser CDP URL override and stealth mode
- Skills quarantine, audit log, and well-known protocol
- Per-platform display overrides, human delay mode
- And many more deep-cut features
2026-04-12 00:54:07 -07:00
Teknium
8b9d22a74b revert: keep debian:13.4 full image instead of slim
The slim image drops packages that may be needed at runtime.
Keep the full Debian base for compatibility.
2026-04-12 00:53:16 -07:00
m0n5t3r
fee0e0d35e fix(docker): run as non-root user, use virtualenv (salvage #5811)
- Add gosu for runtime privilege dropping from root to hermes user
- Support HERMES_UID/HERMES_GID env vars for host mount permission matching
- Switch to debian:13.4-slim base image
- Use uv venv instead of pip install --break-system-packages
- Pin uv and gosu multi-stage images with SHA256 digests
- Set PLAYWRIGHT_BROWSERS_PATH to /opt/hermes/.playwright so build-time
  chromium install survives the /opt/data volume mount
- Keep procps for container debugging

Based on work by m0n5t3r in PR #5811. Stripped to hardening-only
changes (non-root, virtualenv, slim base); matrix deps, fonts, xvfb,
and entrypoint playwright download deferred to follow-up.
2026-04-12 00:53:16 -07:00
bravohenry
81ac62c0e9 fix(weixin): split chatty short replies into separate bubbles, keep structured content together
Add content-aware splitting to compact mode: short chat-like exchanges
(2-6 short lines without headings/lists/quotes) get separate message
bubbles for a natural chat feel, while structured content (tables,
headings with body, numbered lists) stays in a single message.

Cherry-picked from PR #7587 by bravohenry, adapted to the compact/legacy
split_per_line architecture from #7903.
2026-04-12 00:38:07 -07:00
Teknium
f53a5a7fe1 fix: suppress duplicate completion notifications when agent already consumed output via wait/poll/log (#8228)
When the agent calls process(action='wait') or process(action='poll')
and gets the exited status, the completion_queue notification is
redundant — the agent already has the output from the tool return.
Previously, the drain loops in CLI and gateway would still inject
the [SYSTEM: Background process completed] message, causing the
agent to receive the same information twice.

Fix: track session IDs in _completion_consumed set when wait/poll/log
returns an exited process. Drain loops in cli.py and gateway watcher
skip completion events for consumed sessions. Watch pattern events
are never suppressed (they have independent semantics).

Adds 4 tests covering wait/poll/log marking and running-process
negative case.
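The suppression mechanism, sketched with hypothetical names mirroring the _completion_consumed set:

```python
_completion_consumed = set()

def on_process_action(session_id, status):
    """Called from wait/poll/log tool returns: an exited status means
    the agent already saw the output in the tool result."""
    if status == "exited":
        _completion_consumed.add(session_id)

def should_notify(session_id, is_watch_event=False):
    """Drain-loop check: skip redundant completion notifications, but
    never suppress watch-pattern events (independent semantics)."""
    if is_watch_event:
        return True
    if session_id in _completion_consumed:
        _completion_consumed.discard(session_id)
        return False
    return True
```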
2026-04-12 00:36:22 -07:00
Teknium
fdf55e0fe9 feat(cli): show random tip on new session start (#8225)
Add a 'tip of the day' feature that displays a random one-liner about
Hermes Agent features on every new session — CLI startup, /clear, /new,
and gateway /new across all messaging platforms.

- New hermes_cli/tips.py module with 210 curated tips covering slash
  commands, keybindings, CLI flags, config options, tools, gateway
  platforms, profiles, sessions, memory, skills, cron, voice, security,
  and more
- CLI: tips display in skin-aware dim gold color after the welcome line
- Gateway: tips append to the /new and /reset response on all platforms
- Fully wrapped in try/except — tips are non-critical and never break
  startup or reset

Display format (CLI):
  ✦ Tip: /btw <question> asks a quick side question without tools or history.

Display format (gateway):
   Session reset! Starting fresh.
  ✦ Tip: hermes -c resumes your most recent CLI session.
2026-04-12 00:34:01 -07:00
opriz
36f57dbc51 fix(migration): don't auto-archive OpenClaw source directory
Remove auto-archival from hermes claw migrate — not its
responsibility (hermes claw cleanup is still there for that).

Skip MESSAGING_CWD when it points inside the OpenClaw source
directory, which was the actual root cause of agent confusion
after migration. Use Path.is_relative_to() for robust path
containment check.

Salvaged from PR #8192 by opriz.
Co-authored-by: opriz <opriz@users.noreply.github.com>
2026-04-12 00:33:54 -07:00
Teknium
1871227198 feat: rebrand OpenClaw references to Hermes during migration
- Add rebrand_text() that replaces OpenClaw, Open Claw, Open-Claw,
  ClawdBot, and MoltBot with Hermes (case-insensitive, word-boundary)
- Apply rebranding to memory entries (MEMORY.md, USER.md, daily memory)
- Apply rebranding to SOUL.md and workspace instructions via new
  transform parameter on copy_file()
- Fix moldbot -> moltbot typo across codebase (claw.py, migration
  script, docs, tests)
- Add unit tests for rebrand_text and integration tests for memory
  and soul migration rebranding
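A sketch of rebrand_text consistent with the description (case-insensitive, word-boundary; the real replacement table may differ):

```python
import re

_OLD_NAMES = ["OpenClaw", "Open Claw", "Open-Claw", "ClawdBot", "MoltBot"]

# Word boundaries prevent rewriting substrings inside longer words.
_PATTERN = re.compile(
    r"\b(" + "|".join(re.escape(name) for name in _OLD_NAMES) + r")\b",
    re.IGNORECASE,
)

def rebrand_text(text: str) -> str:
    return _PATTERN.sub("Hermes", text)
```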
2026-04-12 00:33:54 -07:00
Teknium
eb2a49f95a fix: openai-codex and anthropic not appearing in /model picker for external credentials (#8224)
Users whose credentials exist only in external files — OpenAI Codex
OAuth tokens in ~/.codex/auth.json or Anthropic Claude Code credentials
in ~/.claude/.credentials.json — would not see those providers in the
/model picker, even though hermes auth and hermes model detected them.

Root cause: list_authenticated_providers() only checked the raw Hermes
auth store and env vars. External credential file fallbacks (Codex CLI
import, Claude Code file discovery) were never triggered.

Fix (three parts):
1. _seed_from_singletons() in credential_pool.py: openai-codex now
   imports from ~/.codex/auth.json when the Hermes auth store is empty,
   mirroring resolve_codex_runtime_credentials().
2. list_authenticated_providers() in model_switch.py: auth store + pool
   checks now run for ALL providers (not just OAuth auth_type), catching
   providers like anthropic that support both API key and OAuth.
3. list_authenticated_providers(): direct check for anthropic external
   credential files (Claude Code, Hermes PKCE). The credential pool
   intentionally gates anthropic behind is_provider_explicitly_configured()
   to prevent auxiliary tasks from silently consuming tokens. The /model
   picker bypasses this gate since it is discovery-oriented.
2026-04-12 00:33:42 -07:00
Teknium
73f970fa4d fix: make gateway interrupt detection resilient to monitor task failures
The interrupt mechanism for regular text messages (non-commands) during
active agent runs relied on a single async polling task
(monitor_for_interrupt) with no error handling. If this task died
silently due to an unhandled exception, stale adapter reference after
reconnect, or any other failure, user messages sent during agent
execution would be queued but never trigger an actual interrupt — the
agent would continue running until it finished naturally, then process
the queued message.

Three improvements:

1. Error handling in monitor_for_interrupt(): wrap the polling body in
   try/except so transient errors are logged and retried instead of
   silently killing the task.

2. Fresh adapter reference on each poll iteration: re-resolve
   self.adapters.get(source.platform) every 200ms instead of capturing
   the adapter once at task creation time. This prevents stale
   references after adapter reconnects.

3. Backup interrupt check in the inactivity poll loop: both the
   unlimited and timeout-enabled paths now check for pending interrupts
   every 5 seconds (the existing poll interval). Uses a shared
   _interrupt_detected asyncio.Event to avoid double-firing when the
   primary monitor already handled the interrupt. Logs at INFO level
   with monitor task state for debugging.
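Improvements 1 and 2 (error-tolerant poll body, fresh adapter lookup each iteration) in a minimal asyncio sketch with hypothetical names:

```python
import asyncio

async def monitor_for_interrupt(adapters, platform, interrupt_event, stop):
    """Poll every 200ms; survive transient errors and adapter swaps."""
    while not stop.is_set():
        try:
            # Re-resolve on every iteration so a reconnect (which
            # replaces the adapter object) is picked up, instead of
            # polling a stale reference captured at task creation.
            adapter = adapters.get(platform)
            if adapter is not None and adapter.has_pending():
                interrupt_event.set()
                return
        except Exception:
            # The real code logs here; the point is the task never dies.
            pass
        await asyncio.sleep(0.2)
```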
2026-04-12 00:25:05 -07:00
Teknium
4cadfef8e3 fix(cli): restore stacked tool progress scrollback in TUI (#8201)
The TUI transition (4970705, f83e86d) replaced stacked per-tool history
lines with a single live-updating spinner widget. While the spinner
provides a nice live timer, it removed the scrollback history that
users relied on to see what the agent did during a session.

This restores stacked tool progress lines in 'all' and 'new' modes by
printing persistent scrollback lines via _cprint() when tools complete,
in addition to the existing live spinner display.

Behavior per mode:
- off: no scrollback lines, no spinner (unchanged)
- new: scrollback line on completion, skipping consecutive same-tool repeats
- all: scrollback line on every tool completion
- verbose: no scrollback (run_agent.py handles verbose output directly)

Implementation:
- Store function_args from tool.started events in _pending_tool_info
- On tool.completed, pop stored args and format via get_cute_tool_message()
- FIFO queue per function_name handles concurrent tool execution
- 'new' mode tracks _last_scrollback_tool for dedup
- State cleared at end of agent run

Reported by community user Mr.D — the stacked history provides
transparency into what the agent is doing, which builds trust.

Addresses user report from Discord about lost tool call visibility.
2026-04-11 23:22:34 -07:00
Teknium
8e00b3a69e fix(cron): steer model away from explicit deliver targets that lose topic context (#8187)
Rewrite the cronjob tool's 'deliver' parameter description to strongly
guide models toward omitting the parameter (which auto-detects origin
including thread/topic). The previous description listed all platform
names equally, inviting models to construct explicit targets like
'telegram:<chat_id>' which silently drops the thread_id.

New description:
- Leads with 'Omit this parameter' as the recommended path
- Explicitly warns that platform:chat_id without :thread_id loses topics
- Removes the long flat list of platform names that invited construction

Also adds diagnostic logging at two key points:
- _origin_from_env(): logs when thread_id is captured during job creation
- _deliver_result(): warns when origin has thread_id but delivery target
  lost it; logs at debug when delivering to a specific thread

Helps diagnose user-reported issue where cron responses from Telegram
topics are delivered to the main chat instead of the originating topic.
2026-04-11 23:20:39 -07:00
Teknium
1ca9b19750 feat: add network.force_ipv4 config to fix IPv6 timeout issues (#8196)
On servers with broken or unreachable IPv6, Python's socket.getaddrinfo
returns AAAA records first. urllib/httpx/requests all try IPv6 connections
first and hang for the full TCP timeout before falling back to IPv4. This
affects web_extract, web_search, the OpenAI SDK, and all HTTP tools.

Adds network.force_ipv4 config option (default: false) that monkey-patches
socket.getaddrinfo to resolve as AF_INET when the caller didn't specify a
family. Falls back to full resolution if no A record exists, so pure-IPv6
hosts still work.

Applied early at all three entry points (CLI, gateway, cron scheduler)
before any HTTP clients are created.

Reported by user @29n — Chinese Ubuntu server with unreachable IPv6 causing
timeouts on lobste.rs and other IPv6-enabled sites while Google/GitHub
worked fine (IPv4-only resolution).
2026-04-11 23:12:11 -07:00
Teknium
1cec910b6a fix: improve context compaction to prevent model answering stale questions (#8107)
After compression, models (especially Kimi 2.5) would sometimes respond
to questions from the summary instead of the latest user message. This
happened ~30% of the time on Telegram.

Root cause: the summary's 'Next Steps' section read as active instructions,
and the SUMMARY_PREFIX didn't explicitly tell the model to ignore questions
in the summary. When the summary merged into the first tail message, there
was no clear separator between historical context and the actual user message.

Changes inspired by competitor analysis (Claude Code, OpenCode, Codex):

1. SUMMARY_PREFIX rewritten with explicit 'Do NOT answer questions from
   this summary — respond ONLY to the latest user message AFTER it'

2. Summarizer preamble (shared by both prompts) adds:
   - 'Do NOT respond to any questions' (from OpenCode's approach)
   - 'Different assistant' framing (from Codex) to create psychological
     distance between summary content and active conversation

3. New summary sections:
   - '## Resolved Questions' — tracks already-answered questions with
     their answers, preventing re-answering (from Claude Code's
     'Pending user asks' pattern)
   - '## Pending User Asks' — explicitly marks unanswered questions
   - '## Remaining Work' replaces '## Next Steps' — passive framing
     avoids reading as active instructions

4. merge-summary-into-tail path now inserts a clear separator:
   '--- END OF CONTEXT SUMMARY — respond to the message below ---'

5. Iterative update prompt now instructs: 'Move answered questions to
   Resolved Questions' to maintain the resolved/pending distinction
   across multiple compactions.
2026-04-11 19:43:58 -07:00
Tom Qiao
8a48c58bd3 fix(gateway): add missing RedactingFormatter import
The gateway startup path references RedactingFormatter without
importing it, causing a NameError crash when launched with a
verbosity flag (e.g. via launchd --replace).

Fixes #8044

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-11 19:38:05 -07:00
Teknium
a0a02c1bc0 feat: /compress <focus> — guided compression with focus topic (#8017)
Adds an optional focus topic to /compress: `/compress database schema`
guides the summariser to preserve information related to the focus topic
(60-70% of summary budget) while compressing everything else more aggressively.
Inspired by Claude Code's /compact <focus>.

Changes:
- context_compressor.py: focus_topic parameter on _generate_summary() and
  compress(); appends FOCUS TOPIC guidance block to the LLM prompt
- run_agent.py: focus_topic parameter on _compress_context(), passed through
  to the compressor
- cli.py: _manual_compress() extracts focus topic from command string,
  preserves existing manual_compression_feedback integration (no regression)
- gateway/run.py: _handle_compress_command() extracts focus from event args
  and passes through — full gateway parity
- commands.py: args_hint="[focus topic]" on /compress CommandDef

Salvaged from PR #7459 (CLI /compress focus only — /context command deferred).
15 new tests across CLI, compressor, and gateway.
2026-04-11 19:23:29 -07:00
helix4u
cfbfc4c3f1 fix(discord): decouple readiness from slash sync 2026-04-11 19:22:14 -07:00
Teknium
fa7cd44b92 feat: add hermes backup and hermes import commands (#7997)
* feat: add `hermes backup` and `hermes import` commands

hermes backup — creates a zip of ~/.hermes/ (config, skills, sessions,
profiles, memories, skins, cron jobs, etc.) excluding the hermes-agent
codebase, __pycache__, and runtime PID files. Defaults to
~/hermes-backup-<timestamp>.zip, customizable with -o.

hermes import <zipfile> — restores from a backup zip, validating it
looks like a hermes backup before extracting. Handles .hermes/ prefix
stripping, path traversal protection, and confirmation prompts (skip
with --force).

29 tests covering exclusion rules, backup creation, import validation,
prefix detection, path traversal blocking, confirmation flow, and a
full round-trip test.

* test: improve backup/import coverage to 97%

Add 17 additional tests covering:
- _format_size helper (bytes through terabytes)
- Nonexistent hermes home error exit
- Output path is a directory (auto-names inside it)
- Output without .zip suffix (auto-appends)
- Empty hermes home (all files excluded)
- Permission errors during backup and import
- Output zip inside hermes root (skips itself)
- Not-a-zip file rejection
- EOFError and KeyboardInterrupt during confirmation
- 500+ file progress display
- Directory-only zip prefix detection

Remove dead code branch in _detect_prefix (unreachable guard).

* feat: auto-restore profile wrapper scripts on import

After extracting backup files, hermes import now scans profiles/ for
subdirectories with config.yaml or .env and recreates the ~/.local/bin
wrapper scripts so profile aliases (e.g. 'coder chat') work immediately.

Also prints guidance for re-installing gateway services per profile.

Handles edge cases:
- Skips profile dirs without config (not real profiles)
- Skips aliases that collide with existing commands
- Gracefully degrades if hermes_cli.profiles isn't available (fresh install)
- Shows PATH hint if ~/.local/bin isn't in PATH

3 new profile restoration tests (49 total).
2026-04-11 19:15:50 -07:00
Siddharth Balyan
50d86b3c71 fix(matrix): replace pickle crypto store with SQLite, fix E2EE decryption (#7981)
Fixes #7952 — Matrix E2EE completely broken after mautrix migration.

- Replace MemoryCryptoStore + pickle/HMAC persistence with mautrix's
  PgCryptoStore backed by SQLite via aiosqlite. Crypto state now
  persists reliably across restarts without fragile serialization.

- Add handle_sync() call on initial sync response so to-device events
  (queued Megolm key shares) are dispatched to OlmMachine instead of
  being silently dropped.

- Add _verify_device_keys_on_server() after loading crypto state.
  Detects missing keys (re-uploads), stale keys from migration
  (attempts re-upload), and corrupted state (refuses E2EE).

- Add _CryptoStateStore adapter wrapping MemoryStateStore to satisfy
  mautrix crypto's StateStore interface (is_encrypted,
  get_encryption_info, find_shared_rooms).

- Remove redundant share_keys() call from sync loop — OlmMachine
  already handles this via DEVICE_OTK_COUNT event handler.

- Fix datetime vs float TypeError in session.py suspend_recently_active()
  that crashed gateway startup.

- Add aiosqlite and asyncpg to [matrix] extra in pyproject.toml.

- Update test mocks for PgCryptoStore/Database and add query_keys mock
  for key verification. 174 tests pass.

- Add E2EE upgrade/migration docs to Matrix user guide.
2026-04-12 07:24:46 +05:30
Siddharth Balyan
27eeea0555 perf(ssh,modal): bulk file sync via tar pipe and tar/base64 archive (#8014)
* perf(ssh,modal): bulk file sync via tar pipe and tar/base64 archive

SSH: symlink-staging + tar -ch piped over SSH in a single TCP stream.
Eliminates per-file scp round-trips. Handles timeout (kills both
processes), SSH Popen failure (kills tar), and tar create failure.

Modal: in-memory gzipped tar archive, base64-encoded, decoded+extracted
in one exec call. Checks exit code and raises on failure.

Both backends use shared helpers extracted into file_sync.py:
- quoted_mkdir_command() — mirrors existing quoted_rm_command()
- unique_parent_dirs() — deduplicates parent dirs from file pairs

Migrates _ensure_remote_dirs to use the new helpers.

28 new tests (21 SSH + 7 Modal), all passing.

Closes #7465
Closes #7467

* fix(modal): pipe stdin to avoid ARG_MAX, clean up review findings

- Modal bulk upload: stream base64 payload through proc.stdin in 1MB
  chunks instead of embedding in command string (Modal SDK enforces
  64KB ARG_MAX_BYTES — typical payloads are ~4.3MB)
- Modal single-file upload: same stdin fix, add exit code checking
- Remove what-narrating comments in ssh.py and modal.py (keep WHY
  comments: symlink staging rationale, SIGPIPE, deadlock avoidance)
- Remove unnecessary `sandbox = self._sandbox` alias in modal bulk
- Daytona: use shared helpers (unique_parent_dirs, quoted_mkdir_command)
  instead of inlined duplicates

---------

Co-authored-by: kshitijk4poor <82637225+kshitijk4poor@users.noreply.github.com>
2026-04-12 06:18:05 +05:30
Teknium
fd73937ec8 feat: component-separated logging with session context and filtering (#7991)
* feat: component-separated logging with session context and filtering

Phase 1 — Gateway log isolation:
- gateway.log now only receives records from gateway.* loggers
  (platform adapters, session management, slash commands, delivery)
- agent.log remains the catch-all (all components)
- errors.log remains WARNING+ catch-all
- Moved gateway.log handler creation from gateway/run.py into
  hermes_logging.setup_logging(mode='gateway') with _ComponentFilter

Phase 2 — Session ID injection:
- Added set_session_context(session_id) / clear_session_context() API
  using threading.local() for per-thread session tracking
- _SessionFilter enriches every log record with session_tag attribute
- Log format: '2026-04-11 10:23:45 INFO [session_id] logger.name: msg'
- Session context set at start of run_conversation() in run_agent.py
- Thread-isolated: gateway conversations on different threads don't leak

Phase 3 — Component filtering in hermes logs:
- Added --component flag: hermes logs --component gateway|agent|tools|cli|cron
- COMPONENT_PREFIXES maps component names to logger name prefixes
- Works with all existing filters (--level, --session, --since, -f)
- Logger name extraction handles both old and new log formats

Files changed:
- hermes_logging.py: _SessionFilter, _ComponentFilter, COMPONENT_PREFIXES,
  set/clear_session_context(), gateway.log creation in setup_logging()
- gateway/run.py: removed redundant gateway.log handler (now in hermes_logging)
- run_agent.py: set_session_context() at start of run_conversation()
- hermes_cli/logs.py: --component filter, logger name extraction
- hermes_cli/main.py: --component argument on logs subparser

Addresses community request for component-separated, filterable logging.
Zero changes to existing logger names — __name__ already provides hierarchy.

* fix: use LogRecord factory instead of per-handler _SessionFilter

The _SessionFilter approach required attaching a filter to every handler
we create. Any handler created outside our _add_rotating_handler (like
the gateway stderr handler, or third-party handlers) would crash with
KeyError: 'session_tag' if it used our format string.

Replace with logging.setLogRecordFactory() which injects session_tag
into every LogRecord at creation time — process-global, zero per-handler
wiring needed. The factory is installed at import time (before
setup_logging) so session_tag is available from the moment hermes_logging
is imported.

- Idempotent: marker attribute prevents double-wrapping on module reload
- Chains with existing factory: won't break third-party record factories
- Removes _SessionFilter from _add_rotating_handler and setup_verbose_logging
- Adds tests: record factory injection, idempotency, arbitrary handler compat
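The record-factory approach can be sketched as follows; install_session_tag_factory and the marker attribute are illustrative names for the pattern, not the actual API:

```python
import logging


def install_session_tag_factory(get_session_id):
    """Inject session_tag into every LogRecord at creation time."""
    old_factory = logging.getLogRecordFactory()
    # Idempotency marker: don't double-wrap on module reload.
    if getattr(old_factory, "_session_tag_installed", False):
        return

    def factory(*args, **kwargs):
        # Chain to the previous factory so third-party factories keep working.
        record = old_factory(*args, **kwargs)
        sid = get_session_id()
        record.session_tag = f"[{sid}]" if sid else "[-]"
        return record

    factory._session_tag_installed = True
    logging.setLogRecordFactory(factory)
```

Any handler can then use `%(session_tag)s` in its format string with no per-handler filter wiring.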
2026-04-11 17:23:36 -07:00
Teknium
723b5bec85 feat: per-platform display verbosity configuration (#8006)
Add display.platforms section to config.yaml for per-platform overrides of
display settings (tool_progress, show_reasoning, streaming, tool_preview_length).

Each platform gets sensible built-in defaults based on capability tier:
- High (telegram, discord): tool_progress=all, streaming follows global
- Medium (slack, mattermost, matrix, feishu): tool_progress=new
- Low (signal, whatsapp, bluebubbles, wecom, etc.): tool_progress=off, streaming=false
- Minimal (email, sms, webhook, homeassistant): tool_progress=off, streaming=false

Example config:
  display:
    platforms:
      telegram:
        tool_progress: all
        show_reasoning: true
      slack:
        tool_progress: off

Resolution order: platform override > global setting > built-in platform default.
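That resolution order can be sketched as a three-step lookup; the function name and the tier defaults shown here are illustrative, not the actual resolver in gateway/display_config.py:

```python
# Illustrative built-in defaults per capability tier.
_PLATFORM_DEFAULTS = {
    "telegram": {"tool_progress": "all"},
    "slack": {"tool_progress": "new"},
    "email": {"tool_progress": "off", "streaming": False},
}


def resolve_display_setting(config, platform, key, hard_default):
    # 1. Explicit per-platform override from display.platforms in config.yaml
    override = (
        config.get("display", {}).get("platforms", {}).get(platform, {}).get(key)
    )
    if override is not None:
        return override
    # 2. Global display setting
    global_value = config.get("display", {}).get(key)
    if global_value is not None:
        return global_value
    # 3. Built-in platform default, then the hard default
    return _PLATFORM_DEFAULTS.get(platform, {}).get(key, hard_default)
```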

Changes:
- New gateway/display_config.py: resolver module with tier-based platform defaults
- gateway/run.py: tool_progress, tool_preview_length, streaming, show_reasoning
  all resolve per-platform via the new resolver
- /verbose command: now cycles tool_progress per-platform (saves to
  display.platforms.<platform>.tool_progress instead of global)
- /reasoning show|hide: now saves show_reasoning per-platform
- Config version 15 -> 16: migrates tool_progress_overrides into display.platforms
- Backward compat: legacy tool_progress_overrides still read as fallback
- 27 new tests for resolver, normalization, migration, backward compat
- Updated verbose command tests for per-platform behavior

Addresses community request for per-channel verbosity control (Guillaume Meyer,
Nathan Danielsen) — high verbosity on backchannel Telegram, low on customer-facing
Slack, none on email.
2026-04-11 17:20:34 -07:00
Teknium
14ccd32cee refactor(terminal): remove check_interval parameter (#8001)
The check_interval parameter on terminal_tool sent periodic output
updates to the gateway chat, but these were display-only — the agent
couldn't see or act on them. This added schema bloat and introduced
a bug where notify_on_complete=True was silently dropped when
check_interval was also set (the not-check_interval guard skipped
fast-watcher registration, and the check_interval watcher dict
was missing the notify_on_complete key).

Removing check_interval entirely:
- Eliminates the notify_on_complete interaction bug
- Reduces tool schema size (one fewer parameter for the model)
- Simplifies the watcher registration path
- notify_on_complete (agent wake-on-completion) still works
- watch_patterns (output alerting) still works
- process(action='poll') covers manual status checking

Closes #7947 (root cause eliminated rather than patched).
2026-04-11 17:16:11 -07:00
Mateus Scheuer Macedo
06f862fa1b feat(cli): add native /model picker modal for provider → model selection
When /model is called with no arguments in the interactive CLI, open a
two-step prompt_toolkit modal instead of the previous text-only listing:

1. Provider selection — curses_single_select with all authenticated providers
2. Model selection — live API fetch with curated fallback

Also fixes:
- OpenAI Codex model normalization (openai/gpt-5.4 → gpt-5.4)
- Dedicated Codex validation path using provider_model_ids()

Preserves curses_radiolist (used by setup, tools, plugins) alongside the
new curses_single_select. Retains tool elapsed timer in spinner.

Cherry-picked from PR #7438 by MestreY0d4-Uninter.
2026-04-11 17:16:06 -07:00
Teknium
39cd57083a refactor: remove budget warning injection system (dead code)
The _get_budget_warning() method already returned None unconditionally —
the entire budget warning system was disabled. Remove all dead code:

- _BUDGET_WARNING_RE regex
- _strip_budget_warnings_from_history() function and its call site
- Both injection blocks (concurrent + sequential tool execution)
- _get_budget_warning() method
- 7 tests for the removed functions

The budget exhaustion grace call system (_budget_exhausted_injected,
_budget_grace_call) is a separate recovery mechanism and is preserved.
2026-04-11 16:56:33 -07:00
waxinz
d99e2a29d6 feat: standardize message whitespace and JSON formatting
Normalize api_messages before each API call for consistent prefix
matching across turns:

1. Strip leading/trailing whitespace from system prompt parts
2. Strip leading/trailing whitespace from message content strings
3. Normalize tool-call arguments to compact sorted JSON

This enables KV cache reuse on local inference servers (llama.cpp,
vLLM, Ollama) and improves cache hit rates for cloud providers.

All normalization operates on the api_messages copy — the original
conversation history in messages is never mutated.  Tool-call JSON
normalization creates new dicts via spread to avoid the shallow-copy
mutation bug in the original PR.

Salvaged from PR #7875 by @waxinz with mutation fix.
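The three normalization steps can be sketched like this, assuming OpenAI-style message dicts; normalize_for_cache is an illustrative name:

```python
import json


def normalize_for_cache(api_messages, system_parts):
    """Return normalized copies for the API call; originals are never mutated."""
    norm_system = [p.strip() for p in system_parts]
    norm_messages = []
    for msg in api_messages:
        msg = dict(msg)  # shallow copy of the message dict itself
        if isinstance(msg.get("content"), str):
            msg["content"] = msg["content"].strip()
        if msg.get("tool_calls"):
            # Build new dicts via spread so nested structures in the
            # original history are not mutated in place.
            msg["tool_calls"] = [
                {
                    **tc,
                    "function": {
                        **tc["function"],
                        "arguments": json.dumps(
                            json.loads(tc["function"]["arguments"]),
                            sort_keys=True,
                            separators=(",", ":"),
                        ),
                    },
                }
                for tc in msg["tool_calls"]
            ]
        norm_messages.append(msg)
    return norm_messages, norm_system
```

Byte-identical prefixes across turns are what let llama.cpp/vLLM reuse the KV cache instead of re-prefilling.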
2026-04-11 16:49:44 -07:00
Siddharth Balyan
cab814af15 feat(nix): container-aware CLI — auto-route into managed container (#7543)
* feat(nix): container-aware CLI — auto-route all subcommands into managed container

When container.enable = true, the host `hermes` CLI transparently execs
every subcommand into the managed Docker/Podman container. A symlink
bridge (~/.hermes -> /var/lib/hermes/.hermes) unifies state between host
and container so sessions, config, and memories are shared.

CLI changes:
- Global routing before subcommand dispatch (all commands forwarded)
- docker exec with -u exec_user, env passthrough (TERM, COLORTERM,
  LANG, LC_ALL), TTY-aware flags
- Retry with spinner on failure (TTY: 5s, non-TTY: 10s silent)
- Hard fail instead of silent fallback
- HERMES_DEV=1 env var bypasses routing for development
- No routing messages (invisible to user)

NixOS module changes:
- container.hostUsers option: lists users who get ~/.hermes symlink
  and automatic hermes group membership
- Activation script creates symlink bridge (with backup of existing
  ~/.hermes dirs), writes exec_user to .container-mode
- Cleanup on disable: removes symlinks + .container-mode + stops service
- Warning when hostUsers set without addToSystemPackages

* fix: address review — reuse sudo var, add chown -h on symlink update

- hermes_cli/main.py: reuse the existing `sudo` variable instead of
  redundant `shutil.which("sudo")` call that could return None
- nix/nixosModules.nix: add missing `chown -h` when updating an
  existing symlink target so ownership stays consistent with the
  fresh-create and backup-replace branches

* fix: address remaining review items from cursor bugbot

- hermes_cli/main.py: move container routing BEFORE parse_args() so
  --help, unrecognised flags, and all subcommands are forwarded
  transparently into the container instead of being intercepted by
  argparse on the host (high severity)

- nix/nixosModules.nix: resolve home dirs via
  config.users.users.${user}.home instead of hardcoding /home/${user},
  supporting users with custom home directories (medium severity)

- nix/nixosModules.nix: gate hostUsers group membership on
  container.enable so setting hostUsers without container mode doesn't
  silently add users to the hermes group (low severity)

* fix: simplify container routing — execvp, no retries, let it crash

- Replace subprocess.run retry loop with os.execvp (no idle parent process)
- Extract _probe_container helper for sudo detection with 15s timeout
- Narrow exception handling: FileNotFoundError only in get_container_exec_info,
  catch TimeoutExpired specifically, remove silent except Exception: pass
- Collapse needs_sudo + sudo into single sudo_path variable
- Simplify NixOS symlink creation from 4 branches to 2
- Gate NixOS sudoers hint with "On NixOS:" prefix
- Full test rewrite: 18 tests covering execvp, sudo probe, timeout, permissions

---------

Co-authored-by: Hermes Agent <hermes@nousresearch.com>
2026-04-12 05:17:46 +05:30
Teknium
5c2ecdec49 fix: use ceiling division for token estimation, deduplicate inline formula
Switch estimate_tokens_rough(), estimate_messages_tokens_rough(), and
estimate_request_tokens_rough() from floor division (len // 4) to
ceiling division ((len + 3) // 4). Short texts (1-3 chars) previously
estimated as 0 tokens, causing the compressor and pre-flight checks to
systematically undercount when many short tool results are present.

Also replaced the inline duplicate formula in run_conversation()
(total_chars // 4) with a call to the shared
estimate_messages_tokens_rough() function.

Updated 4 tests that hardcoded floor-division expected values.

Related: issue #6217, PR #6629
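The floor-vs-ceiling difference in a minimal sketch of the estimator (the real functions likely handle richer message shapes):

```python
def estimate_tokens_rough(text: str) -> int:
    # Ceiling division: (len + 3) // 4 never rounds non-empty text down to 0,
    # unlike the old len // 4, which estimated 1-3 char strings as 0 tokens.
    return (len(text) + 3) // 4


def estimate_messages_tokens_rough(messages) -> int:
    return sum(estimate_tokens_rough(str(m.get("content") or "")) for m in messages)
```

With many short tool results (e.g. "ok", "[]"), the floor version undercounted by one token apiece, which is exactly where the compressor and pre-flight checks drifted.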
2026-04-11 16:33:40 -07:00
WAXLYY
6d272ba477 fix(tools): enforce ID uniqueness in TODO store during replace operations
Deduplicate todo items by ID before writing to the store, keeping the
last occurrence. Prevents ghost entries when the model sends duplicate
IDs in a single write() call, which corrupts subsequent merge operations.
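The keep-last dedup is the kind of one-pass dict trick sketched below; the function name is illustrative:

```python
def dedupe_todos_keep_last(items):
    """Deduplicate todo items by 'id', keeping the last value for each id.

    Positions follow first appearance (dict insertion order); later
    duplicates overwrite the value without reordering.
    """
    by_id = {}
    for item in items:
        by_id[item["id"]] = item  # later occurrences win
    return list(by_id.values())
```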

Co-authored-by: WAXLYY <WAXLYY@users.noreply.github.com>
2026-04-11 16:22:50 -07:00
asheriif
97b0cd51ee feat(gateway): surface natural mid-turn assistant messages in chat platforms
Add display.interim_assistant_messages config (enabled by default) that
forwards completed assistant commentary between tool calls to the user
as separate chat messages. Models already emit useful status text like
'I'll inspect the repo first.' — this surfaces it on Telegram, Discord,
and other messaging platforms instead of swallowing it.

Independent from tool_progress and gateway streaming. Disabled for
webhooks. Uses GatewayStreamConsumer when available, falls back to
direct adapter send. Tracks response_previewed to prevent double-delivery
when interim message matches the final response.

Also fixes: cursor not stripped from fallback prefix in stream consumer
(affected continuation calculation on no-edit platforms like Signal).

Cherry-picked from PR #7885 by asheriif, default changed to enabled.
Fixes #5016
2026-04-11 16:21:39 -07:00
Teknium
6ee0005e8c docs: expand tool-use enforcement documentation (#7984)
- Fix auto list (was only gpt, actually includes codex/gemini/gemma/grok)
- Document the three guidance layers (general, OpenAI-specific, Google-specific)
- Add 'When to turn it on' section for users on non-default models
- Clarify that substring matching is case-insensitive
2026-04-11 16:20:27 -07:00
Teknium
c8aff74632 fix: prevent agent from stopping mid-task — compression floor, budget overhaul, activity tracking
Three root causes of the 'agent stops mid-task' gateway bug:

1. Compression threshold floor (64K tokens minimum)
   - The 50% threshold on a 100K-context model fired at 50K tokens,
     causing premature compression that made models lose track of
     multi-step plans.  Now threshold_tokens = max(50% * context, 64K).
   - Models with <64K context are rejected at startup with a clear error.

2. Budget warning removal — grace call instead
   - Removed the 70%/90% iteration budget warnings entirely.  These
     injected '[BUDGET WARNING: Provide your final response NOW]' into
     tool results, causing models to abandon complex tasks prematurely.
   - Now: no warnings during normal execution.  When the budget is
     actually exhausted (90/90), inject a user message asking the model
     to summarise, allow one grace API call, and only then fall back
     to _handle_max_iterations.

3. Activity touches during long terminal execution
   - _wait_for_process polls every 0.2s but never reported activity.
     The gateway's inactivity timeout (default 1800s) would fire during
     long-running commands that appeared 'idle.'
   - Now: thread-local activity callback fires every 10s during the
     poll loop, keeping the gateway's activity tracker alive.
   - Agent wires _touch_activity into the callback before each tool call.

Also: docs update noting 64K minimum context requirement.

Closes #7915 (root cause was agent-loop termination, not Weixin delivery limits).
2026-04-11 16:18:57 -07:00
Teknium
08f35076c9 fix: always log outer loop exception traceback at DEBUG level
Replace the verbose_logging-gated logging.exception() with an
unconditional logger.debug(exc_info=True). The full traceback now
always lands in agent.log when debug logging is enabled, without
requiring the verbose_logging flag or spamming the console.

Previously, production errors in the 700-line response processing
block (normalization, tool dispatch, final response handling) were
logged as one-line messages with the traceback hidden behind
verbose_logging — making post-mortem debugging difficult.
2026-04-11 15:52:07 -07:00
Teknium
289d2745af docs: add platform adapter developer guide + WeCom Callback docs (#7969)
Add the missing 'Adding a Platform Adapter' developer guide — a
comprehensive step-by-step checklist covering all 20+ integration
points (enum, adapter, config, runner, CLI, tools, toolsets, cron,
webhooks, tests, and docs). Includes common patterns for long-poll,
callback/webhook, and token-lock adapters with reference implementations.

Also adds full docs coverage for the WeCom Callback platform:
- New docs page: user-guide/messaging/wecom-callback.md
- Environment variables reference (9 WECOM_CALLBACK_* vars)
- Toolsets reference (hermes-wecom-callback)
- Messaging index (comparison table, architecture diagram, toolsets,
  security, next-steps links)
- Integrations index listing
- Sidebar entries for both new pages
2026-04-11 15:50:54 -07:00
Koichi Tsutsumi
fc417ed049 fix(cli): add ChatConsole.status for /skills search 2026-04-11 15:38:43 -07:00
0xbyt4
32519066dc fix(gateway): add HERMES_SESSION_KEY to session_context contextvars
Complete the contextvars migration by adding HERMES_SESSION_KEY to the
unified _VAR_MAP in session_context.py. Without this, concurrent gateway
handlers race on os.environ["HERMES_SESSION_KEY"].

- Add _SESSION_KEY ContextVar to _VAR_MAP, set_session_vars(), clear_session_vars()
- Wire session_key through _set_session_env() from SessionContext
- Replace os.getenv fallback in tools/approval.py with get_session_env()
  (function-level import to avoid cross-layer coupling)
- Keep os.environ set as CLI/cron fallback

Cherry-picked from PR #7878 by 0xbyt4.
2026-04-11 15:35:04 -07:00
syaor4n
689c515090 feat: add --env and --preset support to hermes mcp add
- Add --env KEY=VALUE for passing environment variables to stdio MCP servers
- Add --preset for known MCP server templates (empty for now, extensible)
- Validate env var names, reject --env for HTTP servers
- Explicit --command/--url overrides preset defaults
- Remove unused getpass import

Based on PR #7936 by @syaor4n (stitch preset removed, generic infra kept).
2026-04-11 15:34:57 -07:00
Teknium
758c4ad1ef fix: remove dead hasattr checks for retry counters initialized in reset block
All retry counters (_invalid_tool_retries, _invalid_json_retries,
_empty_content_retries, _incomplete_scratchpad_retries,
_codex_incomplete_retries) are initialized to 0 at the top of
run_conversation() (lines 7566-7570). The hasattr guards added before
the reset block existed are now dead code — the attributes always exist.

Removed 7 redundant hasattr checks (5 original targets + 2 bonus for
_codex_incomplete_retries found during cleanup).
2026-04-11 15:29:15 -07:00
Teknium
000a881fcf fix: reset compression_attempts and primary_recovery_attempted on fallback activation
When _try_activate_fallback() switches to a new provider, retry_count was
reset to 0 but compression_attempts and primary_recovery_attempted were
not. This meant a fallback provider that hit context overflow would only
get the leftover compression budget from the failed primary provider,
and transport recovery was blocked because the flag was still True from
the old provider's attempt.

Reset both counters at all 5 fallback activation sites inside the retry
loop so each fallback provider gets a fresh compression budget (3 attempts)
and its own transport recovery opportunity.
2026-04-11 15:26:24 -07:00
chqchshj
5f0caf54d6 feat(gateway): add WeCom callback-mode adapter for self-built apps
Add a second WeCom integration mode for regular enterprise self-built
applications.  Unlike the existing bot/websocket adapter (wecom.py),
this handles WeCom's standard callback flow: WeCom POSTs encrypted XML
to an HTTP endpoint, the adapter decrypts, queues for the agent, and
immediately acknowledges.  The agent's reply is delivered proactively
via the message/send API.

Key design choice: always acknowledge immediately and use proactive
send — agent sessions take 3-30 minutes, so the 5-second inline reply
window is never useful.  The original PR's Future/pending-reply
machinery was removed in favour of this simpler architecture.

Features:
- AES-CBC encrypt/decrypt (BizMsgCrypt-compatible)
- Multi-app routing scoped by corp_id:user_id
- Legacy bare user_id fallback for backward compat
- Access-token management with auto-refresh
- WECOM_CALLBACK_* env var overrides
- Port-in-use pre-check before binding
- Health endpoint at /health

Salvaged from PR #7774 by @chqchshj.  Simplified by removing the
inline reply Future system and fixing: secrets.choice for nonce
generation, immediate plain-text acknowledgment (not encrypted XML
containing 'success'), and initial token refresh error handling.
2026-04-11 15:22:49 -07:00
faishal
90352b2adf fix: normalize checkpoint manager home-relative paths
Adds _normalize_path() helper that calls expanduser().resolve() to
properly handle tilde paths (e.g. ~/.hermes, ~/.config).  Previously
Path.resolve() alone treated ~ as a literal directory name, producing
invalid paths like /root/~/.hermes.

Also improves _run_git() error handling to distinguish missing working
directories from missing git executable, and adds pre-flight directory
validation.

Cherry-picked from PR #7898 by faishal882.
Fixes #7807
2026-04-11 14:50:44 -07:00
SHL0MS
ee39e88b03 fix(claw): warn if gateway is running before migrating bot tokens
When 'hermes claw migrate' copies Telegram/Discord/Slack bot tokens from
OpenClaw while the Hermes gateway is already polling with those same tokens,
the platforms conflict (e.g. Telegram 409). Add a pre-flight check that reads
gateway_state.json via get_running_pid() + read_runtime_status(), warns the
user, and lets them cancel or continue.

Also improve the Telegram polling conflict error message to mention OpenClaw
as a common cause and give the 'hermes start' restart command.

Refs #7907
2026-04-11 14:49:21 -07:00
Teknium
b53f681993 fix(cron): pass skip_context_files=True to AIAgent in run_job (#7958)
Cron jobs run from whatever directory the scheduler process lives in
(typically the hermes-agent install dir), so without this flag the agent
picks up AGENTS.md, SOUL.md, or .cursorrules from that cwd — injecting
irrelevant project context into the cron job's system prompt.

batch_runner.py and gateway boot_md already pass skip_context_files=True
for the same reason. This aligns cron with the established pattern for
autonomous/headless agent runs.
2026-04-11 14:48:58 -07:00
Teknium
8c3935ebe8 fix: is_local_endpoint misses Docker/Podman DNS names (#7950)
* fix(tools): neutralize shell injection in _write_to_sandbox via path quoting

_write_to_sandbox interpolated storage_dir and remote_path directly into
a shell command passed to env.execute(). Paths containing shell
metacharacters (spaces, semicolons, $(), backticks) could trigger
arbitrary command execution inside the sandbox.

Fix: wrap both paths with shlex.quote(). Clean paths (alphanumeric +
slashes/hyphens/dots) are left unmodified by shlex.quote, so existing
behavior is unchanged. Paths with unsafe characters get single-quoted.

Tests added for spaces, $(command) substitution, and semicolon injection.
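The quoting fix follows the standard shlex.quote pattern; build_write_command is a hypothetical reconstruction of the command builder, not the actual function:

```python
import shlex


def build_write_command(storage_dir: str, remote_path: str) -> str:
    # shlex.quote leaves clean paths (alphanumeric, /, -, ., _) unmodified
    # and single-quotes anything containing shell metacharacters.
    return (
        f"mkdir -p {shlex.quote(storage_dir)} && "
        f"cat > {shlex.quote(remote_path)}"
    )
```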

* fix: is_local_endpoint misses Docker/Podman DNS names

host.docker.internal, host.containers.internal, gateway.docker.internal,
and host.lima.internal are well-known DNS names that container runtimes
use to resolve the host machine. Users running Ollama on the host with
the agent in Docker/Podman hit the default 120s stream timeout instead
of the bumped 1800s because these hostnames weren't recognized as local.

Add _CONTAINER_LOCAL_SUFFIXES tuple and suffix check in
is_local_endpoint(). Tests cover all three runtime families plus a
negative case for domains that merely contain the suffix as a substring.
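The suffix check can be sketched as below; the localhost short-circuit list is an assumption about what the real is_local_endpoint already handles:

```python
from urllib.parse import urlparse

_CONTAINER_LOCAL_SUFFIXES = (
    ".docker.internal",
    ".containers.internal",
    ".lima.internal",
)


def is_local_endpoint(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    if host in ("localhost", "127.0.0.1", "::1", "0.0.0.0"):
        return True
    # endswith, not substring: 'host.docker.internal.evil.com' must not match.
    return host.endswith(_CONTAINER_LOCAL_SUFFIXES)
```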
2026-04-11 14:46:18 -07:00
Teknium
1e5056ec30 feat(gateway): add all missing platforms to interactive setup wizard (#7949)
Wire Signal, Email, SMS (Twilio), DingTalk, Feishu/Lark, and WeCom into
the hermes setup gateway interactive wizard. These platforms all had
working adapters and _PLATFORMS entries in gateway.py but were invisible
in the setup checklist — users had to manually edit .env to configure them.

Changes:
- gateway.py: Add _setup_email/sms/dingtalk/feishu/wecom functions
  delegating to _setup_standard_platform (Signal already had a custom one)
- setup.py: Add wrapper functions for all 6 new platforms
- setup.py: Add all 6 to _GATEWAY_PLATFORMS checklist registry
- setup.py: Add missing env vars to any_messaging check
- setup.py: Add all missing platforms to _get_section_config_summary
  (was also missing Matrix, Mattermost, Weixin, Webhooks)
- docs: Add FEISHU_ALLOWED_USERS and WECOM_ALLOWED_USERS examples

Incorporates and extends the work from PR #7918 by bugmaker2.
2026-04-11 14:44:51 -07:00
Teknium
d82580b25b fix: add all_profiles param + narrow exception handling
- add all_profiles=False to find_gateway_pids() and
  kill_gateway_processes() so hermes update and gateway stop --all
  can still discover processes across all profiles
- narrow bare 'except Exception' to (OSError, subprocess.TimeoutExpired)
- update test mocks to match new signatures
2026-04-11 14:44:29 -07:00
Dominic Grieco
b80e318168 fix: scope gateway status to the active profile 2026-04-11 14:44:29 -07:00
etcircle
72b345e068 fix(gateway): preserve queued voice events for STT 2026-04-11 14:43:53 -07:00
Teknium
8160d7a03d test: add dedup coverage for reasoning item ID deduplication
Adds two tests verifying that duplicate reasoning item IDs across
multi-turn Codex Responses conversations are correctly deduplicated
in both _chat_messages_to_responses_input() and
_preflight_codex_input_items().
2026-04-11 14:43:47 -07:00
sauljwu
dfe7386a58 fix: deduplicate reasoning items in Responses API input
When replaying codex_reasoning_items from previous turns,
duplicate item IDs (rs_*) could appear in the input array,
causing HTTP 400 "Duplicate item found" errors from the
OpenAI Responses API.

Add seen_item_ids tracking in both _chat_messages_to_responses_input()
and _preflight_codex_input_items() to skip already-added reasoning
items by their ID.
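The seen-ID tracking amounts to something like this (an illustrative standalone helper; the real functions filter Responses API input items inside the two functions named above):

```python
def dedupe_reasoning_items(items: list[dict]) -> list[dict]:
    # Track reasoning item IDs (rs_*) already added; duplicates are
    # skipped, while non-reasoning items always pass through.
    seen_item_ids: set[str] = set()
    deduped = []
    for item in items:
        item_id = item.get("id")
        if item.get("type") == "reasoning" and item_id:
            if item_id in seen_item_ids:
                continue
            seen_item_ids.add(item_id)
        deduped.append(item)
    return deduped
```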

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-11 14:43:47 -07:00
willy-scr
ef73babea1 fix(gateway): use source.thread_id instead of undefined event in queued response
In _run_agent(), the pending message handler references 'event' which
is not defined in that scope — it only exists in the caller. This
causes a NameError when sending the first response before processing a
queued follow-up message.

Replace getattr(event, 'metadata', None) with the established pattern
using source.thread_id, consistent with lines 2625, 2810, 3678, 4410, 4566
in the same file.
2026-04-11 14:26:20 -07:00
Teknium
f2893fe51a fix(tools): neutralize shell injection in _write_to_sandbox via path quoting (#7940)
_write_to_sandbox interpolated storage_dir and remote_path directly into
a shell command passed to env.execute(). Paths containing shell
metacharacters (spaces, semicolons, $(), backticks) could trigger
arbitrary command execution inside the sandbox.

Fix: wrap both paths with shlex.quote(). Clean paths (alphanumeric +
slashes/hyphens/dots) are left unmodified by shlex.quote, so existing
behavior is unchanged. Paths with unsafe characters get single-quoted.

Tests added for spaces, $(command) substitution, and semicolon injection.
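The quoting fix reduces to something like this (a sketch — the actual command template inside _write_to_sandbox is assumed):

```python
import shlex


def build_sandbox_write_command(storage_dir: str, remote_path: str) -> str:
    # shlex.quote leaves clean paths untouched and single-quotes any
    # path containing shell metacharacters, so interpolating the result
    # into a shell command is safe.
    return f"cp {shlex.quote(storage_dir)} {shlex.quote(remote_path)}"
```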
2026-04-11 14:26:11 -07:00
Dusk1e
255f59de18 fix(tools): prevent command argument injection and path traversal in checkpoint manager
Unsanitized commit_hash and file_path inputs were passed directly to
git commands in CheckpointManager.restore() and diff(). Commit hashes
are now validated as strictly hexadecimal with no leading dashes
(preventing flag injection like '--patch'), and file paths are resolved
against the working-directory root so they cannot escape it.

Regression tests added: test_restore_rejects_argument_injection,
test_restore_rejects_invalid_hex_chars, and
test_restore_rejects_path_traversal.
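A hedged sketch of the two validations (the helper names and the 4-40 length bounds are assumptions, not the actual CheckpointManager code):

```python
import re
from pathlib import Path

_HEX_RE = re.compile(r"[0-9a-fA-F]{4,40}")


def validate_commit_hash(commit_hash: str) -> str:
    # Strictly hexadecimal, so flag-like values ('--patch') can never be
    # interpreted as git options.
    if not _HEX_RE.fullmatch(commit_hash):
        raise ValueError(f"invalid commit hash: {commit_hash!r}")
    return commit_hash


def validate_file_path(work_dir: str, file_path: str) -> Path:
    # Resolve against the working-directory root and require the result
    # to stay inside it.
    root = Path(work_dir).resolve()
    candidate = (root / file_path).resolve()
    if not candidate.is_relative_to(root):
        raise ValueError(f"path escapes working directory: {file_path!r}")
    return candidate
```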
2026-04-11 14:25:57 -07:00
Teknium
4bede272cf fix: propagate model through credential pool path + add tests
The cherry-picked fix from PR #7916 placed model propagation after
the credential pool early-return in _resolve_named_custom_runtime(),
making it dead code when a pool is active (which happens whenever
custom_providers has an api_key that auto-seeds the pool).

- Inject model into pool_result before returning
- Add 5 regression tests covering direct path, pool path, empty
  model, and absent model scenarios
- Add 'model' to _VALID_CUSTOM_PROVIDER_FIELDS for config validation
2026-04-11 14:09:40 -07:00
0xFrank-eth
0e6354df50 fix(custom-providers): propagate model field from config to runtime so API receives the correct model name
Fixes #7828

When a custom_providers entry carries a `model` field, that value was
silently dropped by `_get_named_custom_provider` and
`_resolve_named_custom_runtime`.  Callers received a runtime dict with
`base_url`, `api_key`, and `api_mode` — but no `model`.

As a result, `hermes chat --model <provider-name>` sent the *provider
name* (e.g. "my-dashscope-provider") as the model string to the API
instead of the configured model (e.g. "qwen3.6-plus"), producing:

    Error code: 400 - {'error': {'message': 'Model Not Exist'}}

Setting the provider as the *default* model in config.yaml worked
because that path writes `model.default` and the agent reads it back
directly, bypassing the broken runtime resolution path.

Changes:

1. hermes_cli/runtime_provider.py — _get_named_custom_provider()
   Reads `entry.get("model")` and includes it in the result dict so
   the value is available to callers.

2. hermes_cli/runtime_provider.py — _resolve_named_custom_runtime()
   Propagates `custom_provider["model"]` into the returned runtime dict.

3. cli.py — _ensure_runtime_credentials()
   After resolving runtime, if `runtime["model"]` is set, assign it to
   `self.model` so the AIAgent is initialised with the correct model
   name rather than the provider name the user typed on the CLI.
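The shape of the fix — carry `model` through in the runtime dict instead of dropping it — can be sketched as (a hypothetical helper, not the actual runtime_provider code):

```python
def build_custom_provider_runtime(entry: dict) -> dict:
    runtime = {
        "base_url": entry["base_url"],
        "api_key": entry.get("api_key"),
        "api_mode": entry.get("api_mode", "openai"),
    }
    model = entry.get("model")
    if model:
        # Previously this key was dropped, so callers fell back to
        # sending the provider name as the model string.
        runtime["model"] = model
    return runtime
```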

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-11 14:09:40 -07:00
Teknium
b0892375cd fix: mock aiohttp server in startup guard tests to avoid port binding
The startup guard tests called connect() which bound a real aiohttp
server on port 8080 — flaky in any environment where the port is
in use. Mock AppRunner, TCPSite, and ClientSession instead.
2026-04-11 14:05:38 -07:00
Mariano Nicolini
0a922bf218 add new test covering edge case where both insecure_no_sig and _webhook_url are set 2026-04-11 14:05:38 -07:00
Mariano Nicolini
d053845703 remove unused import and fix misleading log 2026-04-11 14:05:38 -07:00
Mariano Nicolini
0970f1de50 update docs with changes made 2026-04-11 14:05:38 -07:00
Mariano Nicolini
8ce6aaac23 change Twilio signature verification from opt-in to opt-out 2026-04-11 14:05:38 -07:00
Mariano Nicolini
ad1e8804a6 handle port variants in Twilio signatures 2026-04-11 14:05:38 -07:00
Mariano Nicolini
c22bffc92e add basic twilio signature checking and tests 2026-04-11 14:05:38 -07:00
Teknium
cc4b1f0007 fix(whatsapp): pin Baileys to fix/abprops-abt-fetch for bad-request fix
WhatsApp changed their server protocol for property queries, causing
400 bad-request errors in fetchProps/executeInitQueries on every
reconnect (Baileys issue #2477). The fix in PR #2473 changes the IQ
namespace from 'w' to 'abt' and protocol from '2' to '1'.

Pin to the fix branch until the next Baileys release includes it.
2026-04-11 14:03:37 -07:00
Teknium
dfc820345d fix: scope tool interrupt signal per-thread to prevent cross-session leaks (#7930)
The interrupt mechanism in tools/interrupt.py used a process-global
threading.Event. In the gateway, multiple agents run concurrently in
the same process via run_in_executor. When any agent was interrupted
(user sends a follow-up message), the global flag killed ALL agents'
running tools — terminal commands, browser ops, web requests — across
all sessions.

Changes:
- tools/interrupt.py: Replace single threading.Event with a set of
  interrupted thread IDs. set_interrupt() targets a specific thread;
  is_interrupted() checks the current thread. Includes a backward-
  compatible _ThreadAwareEventProxy for legacy _interrupt_event usage.
- run_agent.py: Store execution thread ID at start of run_conversation().
  interrupt() and clear_interrupt() pass it to set_interrupt() so only
  this agent's thread is affected.
- tools/code_execution_tool.py: Use is_interrupted() instead of
  directly checking _interrupt_event.is_set().
- tools/process_registry.py: Same — use is_interrupted().
- tests: Update interrupt tests for per-thread semantics. Add new
  TestPerThreadInterruptIsolation with two tests verifying cross-thread
  isolation.
2026-04-11 14:02:58 -07:00
Teknium
75380de430 fix: reap orphaned browser sessions on startup (#7931)
When a Python process exits uncleanly (SIGKILL, crash, gateway restart
via hermes update), in-memory _active_sessions tracking is lost but the
agent-browser node daemons and their Chromium child processes keep
running indefinitely. On a long-running system this causes unbounded
memory growth — 24 orphaned sessions consumed 7.6 GB on a production
machine over 9 days.

Add _reap_orphaned_browser_sessions() which scans the tmp directory for
agent-browser-{h_*,cdp_*} socket dirs on cleanup thread startup.  For
each dir not tracked by the current process, reads the daemon PID file
and sends SIGTERM if the daemon is still alive.  Handles edge cases:
dead PIDs, corrupt PID files, permission errors, foreign processes.

The reaper runs once on thread startup (not every 30s) to avoid races
with sessions being actively created by concurrent agents.
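The reaper's core loop looks roughly like this (the `daemon.pid` file name is a hypothetical stand-in; the real scan also distinguishes h_*/cdp_* dirs):

```python
import os
import signal
from pathlib import Path


def reap_orphaned_sessions(tmp_dir: str, tracked: set[str]) -> list[int]:
    # Scan for agent-browser-* socket dirs left behind by a previous
    # process; SIGTERM any daemon whose PID file points at a live process.
    killed = []
    for session_dir in Path(tmp_dir).glob("agent-browser-*"):
        if session_dir.name in tracked:
            continue  # owned by the current process
        pid_file = session_dir / "daemon.pid"  # hypothetical file name
        try:
            pid = int(pid_file.read_text().strip())
            os.kill(pid, 0)  # liveness probe only, no signal delivered
            os.kill(pid, signal.SIGTERM)
            killed.append(pid)
        except (OSError, ValueError):
            continue  # dead PID, corrupt/missing file, or no permission
    return killed
```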
2026-04-11 14:02:46 -07:00
Markus Corazzione
885123d44b fix(weixin): add per-chunk retry with backoff for text delivery
When sending multi-chunk responses, individual chunks can fail due to
transient iLink API errors. Previously a single failure would abort the
entire message. Now each chunk is retried with linear backoff before
giving up, and the same client_id is reused across retries for
server-side deduplication.

Configurable via config.yaml (platforms.weixin.extra) or env vars:
- send_chunk_delay_seconds (default 0.35s) — pacing between chunks
- send_chunk_retries (default 2) — max retry attempts per chunk
- send_chunk_retry_delay_seconds (default 1.0s) — base retry delay

Replaces the hardcoded 0.3s inter-chunk delay from #7903.

Salvaged from PR #7899 by @corazzione. Fixes #7836.
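The per-chunk retry with linear backoff looks roughly like this (send_fn and the caught exception type are placeholders for the iLink client call):

```python
import time


def send_chunk_with_retry(send_fn, chunk: str, client_id: str,
                          retries: int = 2, retry_delay: float = 1.0) -> bool:
    # Reuse the same client_id across attempts so the server can
    # deduplicate if a failed-looking send actually went through.
    for attempt in range(retries + 1):
        try:
            send_fn(chunk, client_id)
            return True
        except Exception:
            if attempt == retries:
                return False
            # Linear backoff: base delay, 2x base, 3x base, ...
            time.sleep(retry_delay * (attempt + 1))
    return False
```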
2026-04-11 14:02:33 -07:00
Teknium
04c1c5d53f refactor: extract shared helpers to deduplicate repeated code patterns (#7917)
* refactor: add shared helper modules for code deduplication

New modules:
- gateway/platforms/helpers.py: MessageDeduplicator, TextBatchAggregator,
  strip_markdown, ThreadParticipationTracker, redact_phone
- hermes_cli/cli_output.py: print_info/success/warning/error, prompt helpers
- tools/path_security.py: validate_within_dir, has_traversal_component
- utils.py additions: safe_json_loads, read_json_file, read_jsonl,
  append_jsonl, env_str/lower/int/bool helpers
- hermes_constants.py additions: get_config_path, get_skills_dir,
  get_logs_dir, get_env_path

* refactor: migrate gateway adapters to shared helpers

- MessageDeduplicator: discord, slack, dingtalk, wecom, weixin, mattermost
- strip_markdown: bluebubbles, feishu, sms
- redact_phone: sms, signal
- ThreadParticipationTracker: discord, matrix
- _acquire/_release_platform_lock: telegram, discord, slack, whatsapp,
  signal, weixin

Net -316 lines across 19 files.

* refactor: migrate CLI modules to shared helpers

- tools_config.py: use cli_output print/prompt + curses_radiolist (-117 lines)
- setup.py: use cli_output print helpers + curses_radiolist (-101 lines)
- mcp_config.py: use cli_output prompt (-15 lines)
- memory_setup.py: use curses_radiolist (-86 lines)

Net -263 lines across 5 files.

* refactor: migrate to shared utility helpers

- safe_json_loads: agent/display.py (4 sites)
- get_config_path: skill_utils.py, hermes_logging.py, hermes_time.py
- get_skills_dir: skill_utils.py, prompt_builder.py
- Token estimation dedup: skills_tool.py imports from model_metadata
- Path security: skills_tool, cronjob_tools, skill_manager_tool, credential_files
- Non-atomic YAML writes: doctor.py, config.py now use atomic_yaml_write
- Platform dict: new platforms.py, skills_config + tools_config derive from it
- Anthropic key: new get_anthropic_key() in auth.py, used by doctor/status/config/main

* test: update tests for shared helper migrations

- test_dingtalk: use _dedup.is_duplicate() instead of _is_duplicate()
- test_mattermost: use _dedup instead of _seen_posts/_prune_seen
- test_signal: import redact_phone from helpers instead of signal
- test_discord_connect: _platform_lock_identity instead of _token_lock_identity
- test_telegram_conflict: updated lock error message format
- test_skill_manager_tool: 'escapes' instead of 'boundary' in error msgs
2026-04-11 13:59:52 -07:00
dalianmao000
cf53e2676b fix(wecom): handle appmsg attachments (PDF/Word/Excel) from WeCom AI Bot
WeCom AI Bot sends file attachments with msgtype="appmsg", not
msgtype="file". Previously only file content was discarded while
the text title reached the agent.

Changes:
- _extract_text(): Extract appmsg title (filename) for display
- _extract_media(): Handle appmsg type with file/image content

Fixes #7750

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-11 13:48:25 -07:00
WAXLYY
f4f4078ad9 fix(gateway/weixin): ensure atomic persistence for critical session state 2026-04-11 13:48:25 -07:00
Teknium
59e630a64d fix: update thinking-exhaustion test for think-tag gating
The test expected content=None to immediately trigger thinking-exhaustion,
but PR #7738 correctly gates that check on _has_think_tags. Without think
tags, the agent falls through to normal continuation retry (3 attempts).
2026-04-11 13:47:25 -07:00
konsisumer
2d328d5c70 fix(gateway): break stuck session resume loops on restart (#7536)
Cherry-picked from PR #7747 with follow-up fixes:
- Narrowed suspend_all_active() to suspend_recently_active() — only
  suspends sessions updated within the last 2 minutes (likely in-flight),
  not all sessions which would unnecessarily reset idle users
- /stop with no running agent no longer suspends the session; only
  actual force-stops mark the session for reset
2026-04-11 13:47:25 -07:00
ygd58
151654851c fix(agent): prevent false thinking-exhaustion for non-reasoning models
Models that do not use <think> tags (e.g. GLM-4.7 on NVIDIA Build,
minimax) may return content=None or empty string when truncated. The
previous _thinking_exhausted check treated any None/empty content as
thinking-budget exhaustion, causing these models to always show the
'Thinking Budget Exhausted' error instead of attempting continuation.

Fix: gate the exhaustion check on _has_think_tags — only trigger the
exhaustion path when the model actually produced reasoning blocks
(<think>, <thinking>, <reasoning>, <REASONING_SCRATCHPAD>). Models
without think tags now fall through to the normal continuation retry
logic (up to 3 attempts).

Fixes #7729
2026-04-11 13:47:25 -07:00
Tom Qiao
5910412002 fix: detect truncated tool_calls when finish_reason is not length
When API routers rewrite finish_reason from "length" to "tool_calls",
truncated JSON arguments bypassed the length handler and wasted 3
retry attempts in the generic JSON validation loop. Now detects
truncation patterns in tool call arguments regardless of finish_reason.
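One way to sketch the detection — the real check looks for specific truncation patterns; here a plain JSON parse failure stands in for them:

```python
import json


def tool_arguments_look_truncated(arguments: str) -> bool:
    # Complete tool-call arguments parse as JSON; truncated ones cut off
    # mid-token and fail to parse, regardless of finish_reason.
    if not arguments:
        return False
    try:
        json.loads(arguments)
        return False
    except json.JSONDecodeError:
        return True
```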

Fixes #7680

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-11 13:47:25 -07:00
helix4u
39da23a129 fix(api-server): keep chat-completions SSE alive 2026-04-11 13:47:25 -07:00
Teknium
cac6178104 fix(gateway): propagate user identity through process watcher pipeline
Background process watchers (notify_on_complete, check_interval) created
synthetic SessionSource objects without user_id/user_name. While the
internal=True bypass (1d8d4f28) prevented false pairing for agent-
generated notifications, the missing identity caused:

- Garbage entries in pairing rate limiters (discord:None, telegram:None)
- 'User None' in approval messages and logs
- No user identity available for future code paths that need it

Additionally, platform messages arriving without from_user (Telegram
service messages, channel forwards, anonymous admin actions) could still
trigger false pairing because they are not internal events.

Fix:
1. Propagate user_id/user_name through the full watcher chain:
   session_context.py → gateway/run.py → terminal_tool.py →
   process_registry.py (including checkpoint persistence/recovery)

2. Add None user_id guard in _handle_message() — silently drop
   non-internal messages with no user identity instead of triggering
   the pairing flow.

Salvaged from PRs #7664 (kagura-agent, ContextVar approach),
#6540 (MestreY0d4-Uninter, tests), and #7709 (guang384, None guard).

Closes #6341, #6485, #7643
Relates to #6516, #7392
2026-04-11 13:46:16 -07:00
Teknium
dafe443beb feat: warn at session start when compression model context is too small (#7894)
Two-phase design so the warning fires before the user's first message
on every platform:

Phase 1 (__init__):
  _check_compression_model_feasibility() runs during agent construction.
  Resolves the auxiliary compression model (same chain as call_llm with
  task='compression'), compares its context length to the main model's
  compression threshold. If too small, emits via _emit_status() (prints
  for CLI) and stores the warning in _compression_warning.

Phase 2 (run_conversation, first call):
  _replay_compression_warning() re-sends the stored warning through
  status_callback — which the gateway wires AFTER construction. The
  warning is then cleared so it only fires once.

This ensures:
- CLI users see the warning immediately at startup (right after the
  context limit line)
- Gateway users (Telegram, Discord, Slack, WhatsApp, Signal, Matrix,
  Mattermost, Home Assistant, DingTalk, etc.) receive it via
  status_callback('lifecycle', ...) on their first message
- logger.warning() always hits agent.log regardless of platform

Also warns when no auxiliary LLM provider is configured at all.
Entire check wrapped in try/except — never blocks startup.

11 tests covering: core warning logic, boundary conditions, exception
safety, two-phase store+replay, gateway callback wiring, and
single-delivery guarantee.
2026-04-11 12:01:30 -07:00
Teknium
da9f96bf51 fix(weixin): keep multi-line messages in single bubble by default (#7903)
The Weixin adapter was splitting responses at every top-level newline,
causing notification spam (up to 70 API calls for a single long markdown
response). This salvages the best aspects of six contributor PRs:

Compact mode (new default):
- Messages under the 4000-char limit stay as a single bubble even with
  multiple lines, paragraphs, and code blocks
- Only oversized messages get split at logical markdown boundaries
- Inter-chunk delay (0.3s) between chunks prevents WeChat rate-limit drops

Legacy mode (opt-in):
- Set split_multiline_messages: true in platforms.weixin.extra config
- Or set WEIXIN_SPLIT_MULTILINE_MESSAGES=true env var
- Restores the old per-line splitting behavior

Salvaged from PRs #7797 (guantoubaozi), #7792 (luoxiao6645),
#7838 (qyx596), #7825 (weedge), #7784 (sherunlock03), #7773 (JnyRoad).
Core fix unanimous across all six; config toggle from #7838; inter-chunk
delay from #7825.
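Compact-mode splitting can be sketched as (simplified: paragraph boundaries stand in for the real markdown-aware boundaries):

```python
def split_message(text: str, limit: int = 4000) -> list[str]:
    # Compact mode: anything within the limit stays one bubble, no
    # matter how many newlines or code blocks it contains.
    if len(text) <= limit:
        return [text]
    # Oversized messages are split at paragraph boundaries where
    # possible, with hard cuts for single oversized paragraphs.
    chunks: list[str] = []
    current = ""
    for para in text.split("\n\n"):
        joined = f"{current}\n\n{para}" if current else para
        if len(joined) <= limit:
            current = joined
            continue
        if current:
            chunks.append(current)
        while len(para) > limit:
            chunks.append(para[:limit])
            para = para[limit:]
        current = para
    if current:
        chunks.append(current)
    return chunks
```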
2026-04-11 12:00:05 -07:00
0xbyt4
3ec8809b78 fix(vision): preserve aspect ratio during auto-resize
Independent halving of width and height caused aspect ratio distortion
for extreme dimensions (e.g. 8000x200 panoramas). When one axis hit the
64px floor, the other kept shrinking — collapsing the ratio toward 1:1.

Use proportional scaling instead: when either dimension hits the floor,
derive the effective scale factor and apply it to both axes.

Add tests for extreme panorama (8000x200) and tall narrow (200x6000)
images to verify aspect ratio preservation.
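Proportional scaling with a floor can be sketched as (a simplified model of the fix; the real resize also iterates with JPEG quality reduction):

```python
def scaled_dimensions(width: int, height: int, scale: float,
                      floor: int = 64) -> tuple[int, int]:
    # If either axis would fall below the floor at the requested scale,
    # derive the effective scale from that axis and apply it to BOTH
    # axes, so the aspect ratio survives instead of collapsing to 1:1.
    effective = scale
    if width * scale < floor:
        effective = max(effective, floor / width)
    if height * scale < floor:
        effective = max(effective, floor / height)
    return (max(floor, round(width * effective)),
            max(floor, round(height * effective)))
```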
2026-04-11 11:53:04 -07:00
Teknium
4e3e87b677 feat(migration): preview-then-confirm UX + docs updates
hermes claw migrate now always shows a full dry-run preview before
making any changes. The user reviews what would be imported, then
confirms to proceed. --dry-run stops after the preview. --yes skips
the confirmation prompt.

This matches the existing setup wizard flow (_offer_openclaw_migration)
which already did preview-then-confirm.

Docs updated across both docs/migration/openclaw.md and
website/docs/guides/migrate-from-openclaw.md to reflect:
- New preview-first UX flow
- workspace-main/ fallback paths
- accounts.default channel token layout
- TTS edge/microsoft rename
- openclaw.json env sub-object as API key source
- Hyphenated provider API types
- Matrix accessToken field
- SecretRef file/exec warnings
- Skills session restart note
- WhatsApp re-pairing note
- Archive cleanup step
2026-04-11 11:35:23 -07:00
Teknium
26bbb422b1 fix(migration): update OpenClaw migration for schema drift
Consolidates fixes from PRs #7869, #7860, #7861, #7862, #7864, #7868.

OpenClaw restructured several internal paths and config schemas that the
migration tool was reading from stale locations:

- workspace/ renamed to workspace-main/ (and workspace-{agentId} for
  multi-agent). source_candidate() now checks fallback paths.
- Channel tokens moved from channels.*.botToken to
  channels.*.accounts.default.botToken. New _get_channel_field() checks
  both flat and accounts.default layout.
- TTS provider 'edge' renamed to 'microsoft'. Migration now checks both
  and normalizes back to 'edge' for Hermes.
- API keys stored in openclaw.json 'env' sub-object (env.<KEY> or
  env.vars.<KEY>) are now discovered as an additional key source.
- Provider apiType values now hyphenated (openai-completions,
  anthropic-messages, google-generative-ai). thinkingDefault expanded
  with minimal, xhigh, adaptive.
- Matrix uses accessToken field, not botToken.
- SecretRef file/exec sources now warn instead of silently skipping.
- Migration notes now mention skills requiring session restart and
  WhatsApp requiring QR re-pairing.

Co-authored-by: SHL0MS <SHL0MS@users.noreply.github.com>
2026-04-11 11:35:23 -07:00
Teknium
976bad5bde refactor(auxiliary): config.yaml takes priority over env vars for aux task settings (#7889)
The auxiliary client previously checked env vars (AUXILIARY_{TASK}_PROVIDER,
AUXILIARY_{TASK}_MODEL, etc.) before config.yaml's auxiliary.{task}.* section.
This violated the project's '.env is for secrets only' policy — these are
behavioral settings, not API keys.

Flipped the resolution order in _resolve_task_provider_model():
  1. Explicit args (always win)
  2. config.yaml auxiliary.{task}.* (PRIMARY)
  3. Env var overrides (backward-compat fallback only)
  4. 'auto' (full auto-detection chain)

Env var reading code is kept for backward compatibility but config.yaml
now takes precedence. Updated module docstring and function docstring.

Also removed AUXILIARY_VISION_MODEL from _EXTRA_ENV_KEYS in config.py.
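The flipped precedence reduces to a first-non-empty chain (a sketch of the resolution order, not the actual _resolve_task_provider_model code):

```python
def resolve_task_setting(explicit, config_value, env_value, default="auto"):
    # 1. explicit args always win
    # 2. config.yaml auxiliary.{task}.* (primary)
    # 3. env var override (backward-compat fallback only)
    # 4. 'auto' -> full auto-detection chain
    for value in (explicit, config_value, env_value):
        if value:
            return value
    return default
```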
2026-04-11 11:21:59 -07:00
Teknium
d4bb44d4b9 docs: add Xiaomi MiMo to all provider docs + fix MiMo-V2-Flash ctx len
- environment-variables.md: XIAOMI_API_KEY, XIAOMI_BASE_URL, provider list
- cli-commands.md: --provider choices
- integrations/providers.md: provider table, Chinese providers section,
  config example, base URL list, choosing table, fallback providers list
- fallback-providers.md: supported providers table, auto-detection chain
- Fix XiaomiMiMo/MiMo-V2-Flash context length 32768 → 256000 (OpenRouter entry)
2026-04-11 11:17:52 -07:00
kshitijk4poor
6693e2a497 feat(xiaomi): add Xiaomi MiMo as first-class provider
Cherry-picked from PR #7702 by kshitijk4poor.

Adds Xiaomi MiMo as a direct provider (XIAOMI_API_KEY) with models:
- mimo-v2-pro (1M context), mimo-v2-omni (256K, multimodal), mimo-v2-flash (256K, cheapest)

Standard OpenAI-compatible provider checklist: auth.py, config.py, models.py,
main.py, providers.py, doctor.py, model_normalize.py, model_metadata.py,
models_dev.py, auxiliary_client.py, .env.example, cli-config.yaml.example.

Follow-up: vision tasks use mimo-v2-omni (multimodal) instead of the user's
main model. Non-vision aux uses the user's selected model. Added
_PROVIDER_VISION_MODELS dict for provider-specific vision model overrides.
On failure, falls back to aggregators (gemini flash) via existing fallback chain.

Corrects pre-existing context lengths: mimo-v2-pro 1048576→1000000,
mimo-v2-omni 1048576→256000, adds mimo-v2-flash 256000.

36 tests covering registry, aliases, auto-detect, credentials, models.dev,
normalization, URL mapping, providers module, doctor, aux client, vision
model override, and agent init.
2026-04-11 11:17:52 -07:00
Teknium
55fac8a386 docs: add warning about summary model context length requirement (#7879)
The summary model used for context compaction must have a context window
at least as large as the main agent model. If it's smaller, the
summarization API call fails and middle turns are dropped without a
summary, silently losing conversation context.

Promoted the existing note in configuration.md to a visible warning
admonition, and added a matching warning in the developer guide's
context compression page.
2026-04-11 11:13:48 -07:00
kshitijk4poor
50bb4fe010 fix(vision): auto-resize oversized images, increase default timeout, fix vision capability detection
Cherry-picked from PR #7749 by kshitijk4poor with modifications:

- Raise hard image limit from 5 MB to 20 MB (matches most restrictive provider)
- Send images at full resolution first; only auto-resize to 5 MB on API failure
- Add _is_image_size_error() helper to detect size-related API rejections
- Auto-resize uses Pillow (soft dep) with progressive downscale + JPEG quality reduction
- Fix get_model_capabilities() to check modalities.input for vision support
- Increase default vision timeout from 30s to 120s (matches hardcoded fallback intent)
- Applied retry-with-resize to both vision_analyze_tool and browser_vision

Closes #7740
2026-04-11 11:12:50 -07:00
Teknium
06e1d9cdd4 fix: resolve three high-impact community bugs (#5819, #6893, #3388) (#7881)
Matrix gateway: fix sync loop never dispatching events (#5819)
- _sync_loop() called client.sync() but never called handle_sync()
  to dispatch events to registered callbacks — _on_room_message was
  registered but never fired for new messages
- Store next_batch token from initial sync and pass as since= to
  subsequent incremental syncs (was doing full initial sync every time)
- 17 comments, confirmed by multiple users on matrix.org
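The missing dispatch and since-token handling amount to this control flow (a synchronous sketch; mautrix's real client API is async and differs):

```python
def run_sync_once(client, since):
    # One loop iteration: fetch, dispatch (the step the bug skipped),
    # and return the next_batch token for the next incremental sync.
    resp = client.sync(since=since, timeout=30000)
    client.handle_sync(resp)
    return resp["next_batch"]
```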

Feishu docs: add interactive card configuration for approvals (#6893)
- Error 200340 is a Feishu Developer Console configuration issue,
  not a code bug — users need to enable Interactive Card capability
  and configure Card Request URL
- Added required 3-step setup instructions to feishu.md
- Added troubleshooting entry for error 200340
- 17 comments from Feishu users

Copilot provider drift: detect GPT-5.x Responses API requirement (#3388)
- GPT-5.x models are rejected on /v1/chat/completions by both OpenAI
  and OpenRouter (unsupported_api_for_model error)
- Added _model_requires_responses_api() to detect models needing
  Responses API regardless of provider
- Applied in __init__ (covers OpenRouter primary users) and in
  _try_activate_fallback() (covers Copilot->OpenRouter drift)
- Fixed stale comment claiming gateway creates fresh agents per message
  (it caches them via _agent_cache since the caching was added)
- 7 comments, reported on Copilot+Telegram gateway
2026-04-11 11:12:20 -07:00
Siddharth Balyan
69f3aaa1d6 fix(matrix): pass required args to MemoryCryptoStore for mautrix ≥0.21 (#7848)
* fix(matrix): pass required args to MemoryCryptoStore for mautrix ≥0.21

MemoryCryptoStore.__init__() now requires account_id and pickle_key
positional arguments as of mautrix 0.21. The migration from matrix-nio
(commit 1850747) didn't account for this, causing E2EE initialization
to fail with:

  MemoryCryptoStore.__init__() missing 2 required positional arguments:
  'account_id' and 'pickle_key'

Pass self._user_id as account_id and derive pickle_key from the same
user_id:device_id pair already used for the on-disk HMAC signature.

Update the test stub to accept the new parameters.

Fixes #7803

* fix: use consistent fallback for pickle_key derivation

Address review: _pickle_key now uses _acct_id (which has the 'hermes'
fallback) instead of raw self._user_id, so both values stay consistent
when user_id is empty.

---------

Co-authored-by: Hermes Agent <hermes@nousresearch.com>
2026-04-11 10:43:49 -07:00
Teknium
c94936839c fix: unify openai-codex model list — derive from codex_models.py (#7844)
The _PROVIDER_MODELS['openai-codex'] static list was a manually maintained
duplicate of DEFAULT_CODEX_MODELS in codex_models.py. They drifted — the
static list was missing gpt-5.3-codex-spark (and previously gpt-5.4).

Replace the hardcoded list with _codex_curated_models() which calls
DEFAULT_CODEX_MODELS + _add_forward_compat_models() from codex_models.py.
Now both the CLI 'hermes model' flow and the gateway /model picker derive
from the same source of truth. New models added to DEFAULT_CODEX_MODELS
or _FORWARD_COMPAT_TEMPLATE_MODELS automatically appear everywhere.
2026-04-11 10:38:24 -07:00
Teknium
d7607292d9 fix(streaming): adaptive backoff + cursor strip to prevent message truncation (#7683)
Telegram flood control during streaming caused messages to be cut off
mid-response. The old behavior permanently disabled edits after a single
flood-control failure, losing the remainder of the response.

Changes:
- Adaptive backoff: on flood-control edit failures, double the edit interval
  instead of immediately disabling edits. Only permanently disable after 3
  consecutive failures (_MAX_FLOOD_STRIKES).
- Cursor strip: when entering fallback mode, best-effort edit to remove the
  cursor (▉) from the last visible message so it doesn't appear stuck.
- Fallback send retry: _send_fallback_final retries each chunk once on
  flood-control failures (3s delay) before giving up.
- Default edit_interval increased from 0.3s to 1.0s. Telegram rate-limits
  edits at ~1/s per message; 0.3s was virtually guaranteed to trigger flood
  control on any non-trivial response.
- _send_or_edit returns bool so the overflow split loop knows not to
  truncate accumulated text when an edit fails (prevents content loss).
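The adaptive-backoff state machine can be sketched as (illustrative class; the real adapter also handles cursor stripping and fallback sends):

```python
class AdaptiveEditThrottle:
    """Double the edit interval on flood-control failures; only disable
    edits after _MAX_FLOOD_STRIKES consecutive failures."""

    _MAX_FLOOD_STRIKES = 3

    def __init__(self, edit_interval: float = 1.0):
        self.edit_interval = edit_interval
        self.strikes = 0
        self.edits_disabled = False

    def on_flood_error(self) -> None:
        self.strikes += 1
        if self.strikes >= self._MAX_FLOOD_STRIKES:
            self.edits_disabled = True
        else:
            self.edit_interval *= 2  # back off instead of giving up

    def on_success(self) -> None:
        self.strikes = 0  # a successful edit resets the strike count
```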

Fixes: messages cutting/stopping mid-response on Telegram, especially
with streaming enabled.
2026-04-11 10:28:15 -07:00
624 changed files with 88309 additions and 8423 deletions


@@ -5,6 +5,7 @@
# Dependencies
node_modules
.venv
# CI/CD
.github


@@ -43,6 +43,15 @@
# KIMI_BASE_URL=https://api.kimi.com/coding/v1 # Default for sk-kimi- keys
# KIMI_BASE_URL=https://api.moonshot.ai/v1 # For legacy Moonshot keys
# KIMI_BASE_URL=https://api.moonshot.cn/v1 # For Moonshot China keys
# KIMI_CN_API_KEY= # Dedicated Moonshot China key
# =============================================================================
# LLM PROVIDER (Arcee AI)
# =============================================================================
# Arcee AI provides access to Trinity models (trinity-mini, trinity-large-*)
# Get an Arcee key at: https://chat.arcee.ai/
# ARCEEAI_API_KEY=
# ARCEE_BASE_URL= # Override default base URL
# =============================================================================
# LLM PROVIDER (MiniMax)
@@ -89,6 +98,15 @@
# Optional base URL override:
# HERMES_QWEN_BASE_URL=https://portal.qwen.ai/v1
# =============================================================================
# LLM PROVIDER (Xiaomi MiMo)
# =============================================================================
# Xiaomi MiMo models (mimo-v2-pro, mimo-v2-omni, mimo-v2-flash).
# Get your key at: https://platform.xiaomimimo.com
# XIAOMI_API_KEY=your_key_here
# Optional base URL override:
# XIAOMI_BASE_URL=https://api.xiaomimimo.com/v1
# =============================================================================
# TOOL API KEYS
# =============================================================================
@@ -127,6 +145,10 @@
# Only override here if you need to force a backend without touching config.yaml:
# TERMINAL_ENV=local
# Override the container runtime binary (e.g. to use Podman instead of Docker).
# Useful on systems where Docker's storage driver is broken or unavailable.
# HERMES_DOCKER_BINARY=/usr/local/bin/podman
# Container images (for singularity/docker/modal backends)
# TERMINAL_DOCKER_IMAGE=nikolaik/python-nodejs:python3.11-nodejs20
# TERMINAL_SINGULARITY_IMAGE=docker://nikolaik/python-nodejs:python3.11-nodejs20

.gitattributes vendored Normal file

@@ -0,0 +1,2 @@
# Auto-generated files — collapse diffs and exclude from language stats
web/package-lock.json linguist-generated=true

.gitea/workflows/lint.yml Normal file

@@ -0,0 +1,28 @@
name: Lint
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  lint:
    runs-on: ubuntu-latest
    timeout-minutes: 5
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - name: Check for hardcoded paths
        run: python3 scripts/lint_hardcoded_paths.py
        continue-on-error: true
      - name: Check Python syntax
        run: |
          find . -name "*.py" -not -path "./.git/*" -not -path "./node_modules/*" | head -100 | xargs python3 -m py_compile || true


@@ -0,0 +1,78 @@
#!/usr/bin/env python3
"""
Pre-commit hook: Reject hardcoded home-directory paths.

Install:
    cp pre-commit-hardcoded-path.py .git/hooks/pre-commit-hardcoded-path
    chmod +x .git/hooks/pre-commit-hardcoded-path

Or add to .pre-commit-config.yaml
"""
import sys
import subprocess
import re

PATTERNS = [
    (r"/Users/[\w.\-]+/", "macOS home directory"),
    (r"/home/[\w.\-]+/", "Linux home directory"),
    (r"(?<![\w/])~/", "unexpanded tilde"),
]
NOQA = re.compile(r"#\s*noqa:?\s*hardcoded-path-ok")


def get_staged_files():
    result = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True
    )
    return [f for f in result.stdout.strip().split("\n") if f.endswith(".py")]


def check_file(filepath):
    try:
        result = subprocess.run(
            ["git", "show", f":{filepath}"],
            capture_output=True, text=True
        )
        content = result.stdout
    except Exception:
        return []
    violations = []
    for i, line in enumerate(content.split("\n"), 1):
        if line.strip().startswith("#"):
            continue
        if line.strip().startswith(("import ", "from ")):
            continue
        if NOQA.search(line):
            continue
        for pattern, desc in PATTERNS:
            if re.search(pattern, line):
                violations.append((filepath, i, line.strip(), desc))
                break
    return violations


def main():
    files = get_staged_files()
    if not files:
        sys.exit(0)
    all_violations = []
    for f in files:
        all_violations.extend(check_file(f))
    if all_violations:
        print("ERROR: Hardcoded home directory paths detected:")
        print()
        for filepath, line_no, line, desc in all_violations:
            print(f"  {filepath}:{line_no}: {desc}")
            print(f"    {line[:100]}")
        print()
        print("Fix: Use $HOME, relative paths, or get_hermes_home().")
        print("Override: Add '# noqa: hardcoded-path-ok' to the line.")
        sys.exit(1)
    sys.exit(0)


if __name__ == "__main__":
    main()
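As a quick sanity check, the hook's detection rules can be exercised in isolation. The sketch below reuses the same `PATTERNS` and `NOQA` regexes from the script; the sample lines are illustrative, not taken from the repository:

```python
import re

PATTERNS = [
    (r"/Users/[\w.\-]+/", "macOS home directory"),
    (r"/home/[\w.\-]+/", "Linux home directory"),
    (r"(?<![\w/])~/", "unexpanded tilde"),
]
NOQA = re.compile(r"#\s*noqa:?\s*hardcoded-path-ok")

def first_violation(line):
    """Return the description of the first matching pattern, or None."""
    if NOQA.search(line):
        return None  # explicit escape hatch wins
    for pattern, desc in PATTERNS:
        if re.search(pattern, line):
            return desc
    return None

print(first_violation('log_dir = "/Users/alice/logs"'))   # macOS home directory
print(first_violation('path = "~/data"'))                 # unexpanded tilde
print(first_violation('path = "~/data"  # noqa: hardcoded-path-ok'))  # None
```

Note the negative lookbehind in the tilde pattern: it keeps strings like `a~/b` inside longer tokens from matching while still catching a bare `~/` at the start of a quoted path.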


@@ -11,6 +11,7 @@ body:
**Before submitting**, please:
- [ ] Search [existing issues](https://github.com/NousResearch/hermes-agent/issues) to avoid duplicates
- [ ] Update to the latest version (`hermes update`) and confirm the bug still exists
- [ ] Run `hermes debug share` and paste the links below (see Debug Report section)
- type: textarea
id: description
@@ -82,6 +83,25 @@ body:
- Slack
- WhatsApp
- type: textarea
id: debug-report
attributes:
label: Debug Report
description: |
Run `hermes debug share` from your terminal and paste the links it prints here.
This uploads your system info, config, and recent logs to a paste service automatically.
If you're in an interactive chat session, you can also use the `/debug` slash command — it does the same thing.
If the upload fails, run `hermes debug share --local` and paste the output directly.
placeholder: |
Report https://paste.rs/abc123
agent.log https://paste.rs/def456
gateway.log https://paste.rs/ghi789
render: shell
validations:
required: true
- type: input
id: os
attributes:
@@ -97,8 +117,6 @@ body:
label: Python Version
description: Output of `python --version`
placeholder: "3.11.9"
validations:
required: true
- type: input
id: hermes-version
@@ -106,14 +124,14 @@ body:
label: Hermes Version
description: Output of `hermes version`
placeholder: "2.1.0"
validations:
required: true
- type: textarea
id: logs
attributes:
label: Relevant Logs / Traceback
description: Paste any error output, traceback, or log messages. This will be auto-formatted as code.
label: Additional Logs / Traceback (optional)
description: |
The debug report above covers most logs. Use this field for any extra error output,
tracebacks, or screenshots not captured by `hermes debug share`.
render: shell
- type: textarea


@@ -71,3 +71,15 @@ body:
label: Contribution
options:
- label: I'd like to implement this myself and submit a PR
- type: textarea
id: debug-report
attributes:
label: Debug Report (optional)
description: |
If this feature request is related to a problem you're experiencing, run `hermes debug share` and paste the links here.
In an interactive chat session, you can use `/debug` instead.
This helps us understand your environment and any related logs.
placeholder: |
Report https://paste.rs/abc123
render: shell


@@ -9,7 +9,8 @@ body:
Sorry you're having trouble! Please fill out the details below so we can help.
**Quick checks first:**
- Run `hermes doctor` and include the output below
- Run `hermes debug share` and paste the links in the Debug Report section below
- If you're in a chat session, you can use `/debug` instead — it does the same thing
- Try `hermes update` to get the latest version
- Check the [README troubleshooting section](https://github.com/NousResearch/hermes-agent#troubleshooting)
- For general questions, consider the [Nous Research Discord](https://discord.gg/NousResearch) for faster help
@@ -74,10 +75,21 @@ body:
placeholder: "2.1.0"
- type: textarea
id: doctor-output
id: debug-report
attributes:
label: Output of `hermes doctor`
description: Run `hermes doctor` and paste the full output. This will be auto-formatted.
label: Debug Report
description: |
Run `hermes debug share` from your terminal and paste the links it prints here.
This uploads your system info, config, and recent logs to a paste service automatically.
If you're in an interactive chat session, you can also use the `/debug` slash command — it does the same thing.
If the upload fails or install didn't get that far, run `hermes debug share --local` and paste the output directly.
If even that doesn't work, run `hermes doctor` and paste that output instead.
placeholder: |
Report https://paste.rs/abc123
agent.log https://paste.rs/def456
gateway.log https://paste.rs/ghi789
render: shell
- type: textarea

.github/workflows/contributor-check.yml vendored Normal file

@@ -0,0 +1,73 @@
name: Contributor Attribution Check
on:
  pull_request:
    branches: [main]
    paths:
      # Only run when code files change (not docs-only PRs)
      - '*.py'
      - '**/*.py'
      - '.github/workflows/contributor-check.yml'
permissions:
  contents: read
jobs:
  check-attribution:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
        with:
          fetch-depth: 0  # Full history needed for git log
      - name: Check for unmapped contributor emails
        run: |
          # Get the merge base between this PR and main
          MERGE_BASE=$(git merge-base origin/main HEAD)
          # Find any new author emails in this PR's commits
          NEW_EMAILS=$(git log ${MERGE_BASE}..HEAD --format='%ae' --no-merges | sort -u)
          if [ -z "$NEW_EMAILS" ]; then
            echo "No new commits to check."
            exit 0
          fi
          # Check each email against AUTHOR_MAP in release.py
          MISSING=""
          while IFS= read -r email; do
            # Skip teknium and bot emails
            case "$email" in
              *teknium*|*noreply@github.com*|*dependabot*|*github-actions*|*anthropic.com*|*cursor.com*)
                continue ;;
            esac
            # Check if email is in AUTHOR_MAP (either as a key or matches noreply pattern)
            if echo "$email" | grep -qP '\+.*@users\.noreply\.github\.com'; then
              continue  # GitHub noreply emails auto-resolve
            fi
            if ! grep -qF "\"${email}\"" scripts/release.py 2>/dev/null; then
              AUTHOR=$(git log --author="$email" --format='%an' -1)
              MISSING="${MISSING}\n  ${email} (${AUTHOR})"
            fi
          done <<< "$NEW_EMAILS"
          if [ -n "$MISSING" ]; then
            echo ""
            echo "⚠️ New contributor email(s) not in AUTHOR_MAP:"
            echo -e "$MISSING"
            echo ""
            echo "Please add mappings to scripts/release.py AUTHOR_MAP:"
            echo -e "$MISSING" | while read -r line; do
              email=$(echo "$line" | sed 's/^ *//' | cut -d' ' -f1)
              [ -z "$email" ] && continue
              echo "  \"${email}\": \"<github-username>\","
            done
            echo ""
            echo "To find the GitHub username for an email:"
            echo "  gh api 'search/users?q=EMAIL+in:email' --jq '.items[0].login'"
            exit 1
          else
            echo "✅ All contributor emails are mapped in AUTHOR_MAP."
          fi


@@ -28,24 +28,32 @@ jobs:
name: github-pages
url: ${{ steps.deploy.outputs.page_url }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: actions/setup-node@v4
- uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
with:
node-version: 20
cache: npm
cache-dependency-path: website/package-lock.json
- uses: actions/setup-python@v5
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5
with:
python-version: '3.11'
- name: Install PyYAML for skill extraction
run: pip install pyyaml
run: pip install pyyaml==6.0.2 httpx==0.28.1
- name: Extract skill metadata for dashboard
run: python3 website/scripts/extract-skills.py
- name: Build skills index (if not already present)
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
if [ ! -f website/static/api/skills-index.json ]; then
python3 scripts/build_skills_index.py || echo "Skills index build failed (non-fatal)"
fi
- name: Install dependencies
run: npm ci
working-directory: website
@@ -65,10 +73,10 @@ jobs:
echo "hermes-agent.nousresearch.com" > _site/CNAME
- name: Upload artifact
uses: actions/upload-pages-artifact@v3
uses: actions/upload-pages-artifact@56afc609e74202658d3ffba0e8f6dda462b719fa # v3
with:
path: _site
- name: Deploy to GitHub Pages
id: deploy
uses: actions/deploy-pages@v4
uses: actions/deploy-pages@d6db90164ac5ed86f2b6aed7e0febac5b3c0c03e # v4


@@ -23,21 +23,21 @@ jobs:
timeout-minutes: 60
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
submodules: recursive
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
uses: docker/setup-qemu-action@c7c53464625b32c7a7e944ae62b3e17d2b600130 # v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3
# Build amd64 only so we can `load` the image for smoke testing.
# `load: true` cannot export a multi-arch manifest to the local daemon.
# The multi-arch build follows on push to main / release.
- name: Build image (amd64, smoke test)
uses: docker/build-push-action@v6
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
with:
context: .
file: Dockerfile
@@ -56,36 +56,31 @@ jobs:
- name: Log in to Docker Hub
if: github.event_name == 'push' && github.ref == 'refs/heads/main' || github.event_name == 'release'
uses: docker/login-action@v3
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Push multi-arch image (main branch)
if: github.event_name == 'push' && github.ref == 'refs/heads/main'
uses: docker/build-push-action@v6
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
with:
context: .
file: Dockerfile
push: true
platforms: linux/amd64,linux/arm64
tags: |
nousresearch/hermes-agent:latest
nousresearch/hermes-agent:${{ github.sha }}
tags: nousresearch/hermes-agent:latest
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Push multi-arch image (release)
if: github.event_name == 'release'
uses: docker/build-push-action@v6
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
with:
context: .
file: Dockerfile
push: true
platforms: linux/amd64,linux/arm64
tags: |
nousresearch/hermes-agent:latest
nousresearch/hermes-agent:${{ github.event.release.tag_name }}
nousresearch/hermes-agent:${{ github.sha }}
tags: nousresearch/hermes-agent:${{ github.event.release.tag_name }}
cache-from: type=gha
cache-to: type=gha,mode=max


@@ -7,13 +7,16 @@ on:
- '.github/workflows/docs-site-checks.yml'
workflow_dispatch:
permissions:
contents: read
jobs:
docs-site-checks:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: actions/setup-node@v4
- uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
with:
node-version: 20
cache: npm
@@ -23,7 +26,7 @@ jobs:
run: npm ci
working-directory: website
- uses: actions/setup-python@v5
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5
with:
python-version: '3.11'


@@ -14,6 +14,9 @@ on:
- 'run_agent.py'
- 'acp_adapter/**'
permissions:
contents: read
concurrency:
group: nix-${{ github.ref }}
cancel-in-progress: true
@@ -26,7 +29,7 @@ jobs:
runs-on: ${{ matrix.os }}
timeout-minutes: 30
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: DeterminateSystems/nix-installer-action@ef8a148080ab6020fd15196c2084a2eea5ff2d25 # v22
- uses: DeterminateSystems/magic-nix-cache-action@565684385bcd71bad329742eefe8d12f2e765b39 # v13
- name: Check flake

.github/workflows/skills-index.yml vendored Normal file

@@ -0,0 +1,101 @@
name: Build Skills Index
on:
  schedule:
    # Run twice daily: 6 AM and 6 PM UTC
    - cron: '0 6,18 * * *'
  workflow_dispatch:  # Manual trigger
  push:
    branches: [main]
    paths:
      - 'scripts/build_skills_index.py'
      - '.github/workflows/skills-index.yml'
permissions:
  contents: read
jobs:
  build-index:
    # Only run on the upstream repository, not on forks
    if: github.repository == 'NousResearch/hermes-agent'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
      - uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5
        with:
          python-version: '3.11'
      - name: Install dependencies
        run: pip install httpx==0.28.1 pyyaml==6.0.2
      - name: Build skills index
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: python scripts/build_skills_index.py
      - name: Upload index artifact
        uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
        with:
          name: skills-index
          path: website/static/api/skills-index.json
          retention-days: 7
  deploy-with-index:
    needs: build-index
    runs-on: ubuntu-latest
    permissions:
      pages: write
      id-token: write
    environment:
      name: github-pages
      url: ${{ steps.deploy.outputs.page_url }}
    # Only deploy on schedule or manual trigger (not on every push to the script)
    if: github.event_name == 'schedule' || github.event_name == 'workflow_dispatch'
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
      - uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
        with:
          name: skills-index
          path: website/static/api/
      - uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
        with:
          node-version: 20
          cache: npm
          cache-dependency-path: website/package-lock.json
      - uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5
        with:
          python-version: '3.11'
      - name: Install PyYAML for skill extraction
        run: pip install pyyaml==6.0.2
      - name: Extract skill metadata for dashboard
        run: python3 website/scripts/extract-skills.py
      - name: Install dependencies
        run: npm ci
        working-directory: website
      - name: Build Docusaurus
        run: npm run build
        working-directory: website
      - name: Stage deployment
        run: |
          mkdir -p _site/docs
          cp -r landingpage/* _site/
          cp -r website/build/* _site/docs/
          echo "hermes-agent.nousresearch.com" > _site/CNAME
      - name: Upload artifact
        uses: actions/upload-pages-artifact@56afc609e74202658d3ffba0e8f6dda462b719fa # v3
        with:
          path: _site
      - name: Deploy to GitHub Pages
        id: deploy
        uses: actions/deploy-pages@d6db90164ac5ed86f2b6aed7e0febac5b3c0c03e # v4


@@ -14,7 +14,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
@@ -149,6 +149,62 @@ jobs:
"
fi
# --- CI/CD workflow files modified ---
WORKFLOW_HITS=$(git diff --name-only "$BASE".."$HEAD" | grep -E '\.github/workflows/.*\.ya?ml$' || true)
if [ -n "$WORKFLOW_HITS" ]; then
FINDINGS="${FINDINGS}
### ⚠️ WARNING: CI/CD workflow files modified
Changes to workflow files can alter build pipelines, inject steps, or modify permissions. Verify no unauthorized actions or secrets access were added.
**Files:**
\`\`\`
${WORKFLOW_HITS}
\`\`\`
"
fi
# --- Dockerfile / container build files modified ---
DOCKER_HITS=$(git diff --name-only "$BASE".."$HEAD" | grep -iE '(Dockerfile|\.dockerignore|docker-compose)' || true)
if [ -n "$DOCKER_HITS" ]; then
FINDINGS="${FINDINGS}
### ⚠️ WARNING: Container build files modified
Changes to Dockerfiles or compose files can alter base images, add build steps, or expose ports. Verify base image pins and build commands.
**Files:**
\`\`\`
${DOCKER_HITS}
\`\`\`
"
fi
# --- Dependency manifest files modified ---
DEP_HITS=$(git diff --name-only "$BASE".."$HEAD" | grep -E '(pyproject\.toml|requirements.*\.txt|package\.json|Gemfile|go\.mod|Cargo\.toml)$' || true)
if [ -n "$DEP_HITS" ]; then
FINDINGS="${FINDINGS}
### ⚠️ WARNING: Dependency manifest files modified
Changes to dependency files can introduce new packages or change version pins. Verify all dependency changes are intentional and from trusted sources.
**Files:**
\`\`\`
${DEP_HITS}
\`\`\`
"
fi
# --- GitHub Actions version unpinning (mutable tags instead of SHAs) ---
ACTIONS_UNPIN=$(echo "$DIFF" | grep -n '^\+' | grep 'uses:' | grep -v '#' | grep -E '@v[0-9]' | head -10 || true)
if [ -n "$ACTIONS_UNPIN" ]; then
FINDINGS="${FINDINGS}
### ⚠️ WARNING: GitHub Actions with mutable version tags
Actions should be pinned to full commit SHAs (not \`@v4\`, \`@v5\`). Mutable tags can be retargeted silently if a maintainer account is compromised.
**Matches:**
\`\`\`
${ACTIONS_UNPIN}
\`\`\`
"
fi
# --- Output results ---
if [ -n "$FINDINGS" ]; then
echo "found=true" >> "$GITHUB_OUTPUT"
@@ -183,7 +239,7 @@ jobs:
---
*Automated scan triggered by [supply-chain-audit](/.github/workflows/supply-chain-audit.yml). If this is a false positive, a maintainer can approve after manual review.*"
gh pr comment "${{ github.event.pull_request.number }}" --body "$BODY"
gh pr comment "${{ github.event.pull_request.number }}" --body "$BODY" || echo "::warning::Could not post PR comment (expected for fork PRs — GITHUB_TOKEN is read-only)"
- name: Fail on critical findings
if: steps.scan.outputs.critical == 'true'
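The mutable-tag scan above is a line filter over added diff lines. A minimal Python sketch of the same pipeline (sample lines are illustrative); note that it mirrors the grep chain's trick of relying on SHA-pinned lines carrying a trailing `# vN` comment, which the `grep -v '#'` stage excludes:

```python
import re

# Flag added `uses:` lines that reference a mutable tag like @v4
MUTABLE_TAG = re.compile(r"@v\d")

def unpinned_actions(diff_lines):
    """Return added workflow lines that pin to a tag instead of a SHA."""
    hits = []
    for line in diff_lines:
        if not line.startswith("+"):
            continue  # only added lines, as in `grep -n '^\+'`
        if "uses:" not in line or "#" in line:
            continue  # SHA-pinned lines carry a "# vN" comment, so '#' excludes them
        if MUTABLE_TAG.search(line):
            hits.append(line.strip())
    return hits[:10]  # mirror `head -10`

diff = [
    "+      uses: actions/checkout@v4",
    "+      uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4",
]
print(unpinned_actions(diff))  # only the checkout@v4 line survives
```

The same heuristic has a known blind spot the workflow inherits: a mutable tag followed by an unrelated comment would be skipped, which is an acceptable trade-off for a warning-level check.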


@@ -6,6 +6,9 @@ on:
pull_request:
branches: [main]
permissions:
contents: read
# Cancel in-progress runs for the same PR/branch
concurrency:
group: tests-${{ github.ref }}
@@ -17,13 +20,17 @@ jobs:
timeout-minutes: 10
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Install system dependencies
run: sudo apt-get update && sudo apt-get install -y ripgrep
- name: Check for hardcoded paths
run: python3 scripts/lint_hardcoded_paths.py || true
continue-on-error: true
- name: Install uv
uses: astral-sh/setup-uv@v5
uses: astral-sh/setup-uv@d4b2f3b6ecc6e67c4457f6d3e41ec42d3d0fcb86 # v5
- name: Set up Python 3.11
run: uv python install 3.11
@@ -49,10 +56,10 @@ jobs:
timeout-minutes: 10
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Install uv
uses: astral-sh/setup-uv@v5
uses: astral-sh/setup-uv@d4b2f3b6ecc6e67c4457f6d3e41ec42d3d0fcb86 # v5
- name: Set up Python 3.11
run: uv python install 3.11

.gitignore vendored

@@ -51,6 +51,9 @@ ignored/
.worktrees/
environments/benchmarks/evals/
# Web UI build output
hermes_cli/web_dist/
# Release script temp files
.release_notes.md
mini-swe-agent/
@@ -58,3 +61,4 @@ mini-swe-agent/
# Nix
.direnv/
result
website/static/api/skills-index.json

.mailmap Normal file

@@ -0,0 +1,107 @@
# .mailmap — canonical author mapping for git shortlog / git log / GitHub
# Format: Canonical Name <canonical@email> <commit@email>
# See: https://git-scm.com/docs/gitmailmap
#
# This maps commit emails to GitHub noreply addresses so that:
# 1. `git shortlog -sn` shows deduplicated contributor counts
# 2. GitHub's contributor graph can attribute commits correctly
# 3. Contributors with personal/work emails get proper credit
#
# When adding entries: use the contributor's GitHub noreply email as canonical
# so GitHub can link commits to their profile.
# === Teknium (multiple emails) ===
Teknium <127238744+teknium1@users.noreply.github.com> <teknium1@gmail.com>
Teknium <127238744+teknium1@users.noreply.github.com> <teknium@nousresearch.com>
# === Contributors — personal/work emails mapped to GitHub noreply ===
# Format: Canonical Name <GH-noreply> <commit-email>
# Verified via GH API email search
luyao618 <364939526@qq.com> <364939526@qq.com>
ethernet8023 <arilotter@gmail.com> <arilotter@gmail.com>
nicoloboschi <boschi1997@gmail.com> <boschi1997@gmail.com>
cherifya <chef.ya@gmail.com> <chef.ya@gmail.com>
BongSuCHOI <chlqhdtn98@gmail.com> <chlqhdtn98@gmail.com>
dsocolobsky <dsocolobsky@gmail.com> <dsocolobsky@gmail.com>
pefontana <fontana.pedro93@gmail.com> <fontana.pedro93@gmail.com>
Helmi <frank@helmschrott.de> <frank@helmschrott.de>
hata1234 <hata1234@gmail.com> <hata1234@gmail.com>
# Verified via PR investigation / salvage PR bodies
DeployFaith <agents@kylefrench.dev> <agents@kylefrench.dev>
flobo3 <floptopbot33@gmail.com> <floptopbot33@gmail.com>
gaixianggeng <gaixg94@gmail.com> <gaixg94@gmail.com>
KUSH42 <xush@xush.org> <xush@xush.org>
konsisumer <der@konsi.org> <der@konsi.org>
WorldInnovationsDepartment <vorvul.danylo@gmail.com> <vorvul.danylo@gmail.com>
m0n5t3r <iacobs@m0n5t3r.info> <iacobs@m0n5t3r.info>
sprmn24 <oncuevtv@gmail.com> <oncuevtv@gmail.com>
fancydirty <fancydirty@gmail.com> <fancydirty@gmail.com>
fxfitz <francis.x.fitzpatrick@gmail.com> <francis.x.fitzpatrick@gmail.com>
limars874 <limars874@gmail.com> <limars874@gmail.com>
AaronWong1999 <aaronwong1999@icloud.com> <aaronwong1999@icloud.com>
dippwho <dipp.who@gmail.com> <dipp.who@gmail.com>
duerzy <duerzy@gmail.com> <duerzy@gmail.com>
geoffwellman <geoff.wellman@gmail.com> <geoff.wellman@gmail.com>
hcshen0111 <shenhaocheng19990111@gmail.com> <shenhaocheng19990111@gmail.com>
jamesarch <han.shan@live.cn> <han.shan@live.cn>
stephenschoettler <stephenschoettler@gmail.com> <stephenschoettler@gmail.com>
Tranquil-Flow <tranquil_flow@protonmail.com> <tranquil_flow@protonmail.com>
Dusk1e <yusufalweshdemir@gmail.com> <yusufalweshdemir@gmail.com>
Awsh1 <ysfalweshcan@gmail.com> <ysfalweshcan@gmail.com>
WAXLYY <ysfwaxlycan@gmail.com> <ysfwaxlycan@gmail.com>
donrhmexe <don.rhm@gmail.com> <don.rhm@gmail.com>
hqhq1025 <1506751656@qq.com> <1506751656@qq.com>
BlackishGreen33 <s5460703@gmail.com> <s5460703@gmail.com>
tomqiaozc <zqiao@microsoft.com> <zqiao@microsoft.com>
MagicRay1217 <mingjwan@microsoft.com> <mingjwan@microsoft.com>
aaronagent <1115117931@qq.com> <1115117931@qq.com>
YoungYang963 <young@YoungdeMacBook-Pro.local> <young@YoungdeMacBook-Pro.local>
LongOddCode <haolong@microsoft.com> <haolong@microsoft.com>
Cafexss <coffeemjj@gmail.com> <coffeemjj@gmail.com>
Cygra <sjtuwbh@gmail.com> <sjtuwbh@gmail.com>
DomGrieco <dgrieco@redhat.com> <dgrieco@redhat.com>
# Duplicate email mapping (same person, multiple emails)
Sertug17 <104278804+Sertug17@users.noreply.github.com> <srhtsrht17@gmail.com>
yyovil <birdiegyal@gmail.com> <tanishq231003@gmail.com>
dsocolobsky <dsocolobsky@gmail.com> <dylan.socolobsky@lambdaclass.com>
olafthiele <programming@olafthiele.com> <olafthiele@gmail.com>
# Verified via git display name matching GH contributor username
cokemine <aptx4561@gmail.com> <aptx4561@gmail.com>
dalianmao000 <dalianmao0107@gmail.com> <dalianmao0107@gmail.com>
emozilla <emozilla@nousresearch.com> <emozilla@nousresearch.com>
jjovalle99 <juan.ovalle@mistral.ai> <juan.ovalle@mistral.ai>
kagura-agent <kagura.chen28@gmail.com> <kagura.chen28@gmail.com>
spniyant <niyant@spicefi.xyz> <niyant@spicefi.xyz>
olafthiele <programming@olafthiele.com> <programming@olafthiele.com>
r266-tech <r2668940489@gmail.com> <r2668940489@gmail.com>
xingkongliang <tianliangjay@gmail.com> <tianliangjay@gmail.com>
win4r <win4r@outlook.com> <win4r@outlook.com>
zhouboli <zhouboli@gmail.com> <zhouboli@gmail.com>
yongtenglei <yongtenglei@gmail.com> <yongtenglei@gmail.com>
# Nous Research team
benbarclay <ben@nousresearch.com> <ben@nousresearch.com>
jquesnelle <jonny@nousresearch.com> <jonny@nousresearch.com>
# GH contributor list verified
spideystreet <dhicham.pro@gmail.com> <dhicham.pro@gmail.com>
dorukardahan <dorukardahan@hotmail.com> <dorukardahan@hotmail.com>
MustafaKara7 <karamusti912@gmail.com> <karamusti912@gmail.com>
Hmbown <hmbown@gmail.com> <hmbown@gmail.com>
kamil-gwozdz <kamil@gwozdz.me> <kamil@gwozdz.me>
kira-ariaki <kira@ariaki.me> <kira@ariaki.me>
knopki <knopki@duck.com> <knopki@duck.com>
Unayung <unayung@gmail.com> <unayung@gmail.com>
SeeYangZhi <yangzhi.see@gmail.com> <yangzhi.see@gmail.com>
Julientalbot <julien.talbot@ergonomia.re> <julien.talbot@ergonomia.re>
lesterli <lisicheng168@gmail.com> <lisicheng168@gmail.com>
JiayuuWang <jiayuw794@gmail.com> <jiayuw794@gmail.com>
tesseracttars-creator <tesseracttars@gmail.com> <tesseracttars@gmail.com>
xinbenlv <zzn+pa@zzn.im> <zzn+pa@zzn.im>
SaulJWu <saul.jj.wu@gmail.com> <saul.jj.wu@gmail.com>
angelos <angelos@oikos.lan.home.malaiwah.com> <angelos@oikos.lan.home.malaiwah.com>
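Every entry above uses the two-email form `Canonical Name <canonical@email> <commit@email>`. A minimal parser for just that form (illustrative only; git's full gitmailmap grammar also supports name-only and commit-name matching, which this sketch ignores):

```python
import re

# Matches the two-email mailmap form used throughout this file:
#   Canonical Name <canonical@email> <commit@email>
ENTRY = re.compile(r"^(?P<name>[^<#]+?)\s*<(?P<canonical>[^>]+)>\s*<(?P<commit>[^>]+)>$")

def parse_mailmap(lines):
    """Map each commit email to its (canonical name, canonical email)."""
    mapping = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # blank lines and comments
        m = ENTRY.match(line)
        if m:
            mapping[m.group("commit")] = (m.group("name"), m.group("canonical"))
    return mapping

sample = [
    "# comment",
    "Teknium <127238744+teknium1@users.noreply.github.com> <teknium1@gmail.com>",
]
print(parse_mailmap(sample)["teknium1@gmail.com"])
```

This is the resolution `git shortlog -sn` performs internally: commit emails collapse onto the canonical identity, which is why the file prefers GitHub noreply addresses as the canonical side.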


@@ -13,7 +13,7 @@ source venv/bin/activate # ALWAYS activate before running Python
```
hermes-agent/
├── run_agent.py # AIAgent class — core conversation loop
├── model_tools.py # Tool orchestration, _discover_tools(), handle_function_call()
├── model_tools.py # Tool orchestration, discover_builtin_tools(), handle_function_call()
├── toolsets.py # Toolset definitions, _HERMES_CORE_TOOLS list
├── cli.py # HermesCLI class — interactive CLI orchestrator
├── hermes_state.py # SessionDB — SQLite session store (FTS5 search)
@@ -55,7 +55,7 @@ hermes-agent/
├── gateway/ # Messaging platform gateway
│ ├── run.py # Main loop, slash commands, message dispatch
│ ├── session.py # SessionStore — conversation persistence
│ └── platforms/ # Adapters: telegram, discord, slack, whatsapp, homeassistant, signal
│ └── platforms/ # Adapters: telegram, discord, slack, whatsapp, homeassistant, signal, qqbot
├── acp_adapter/ # ACP server (VS Code / Zed / JetBrains integration)
├── cron/ # Scheduler (jobs.py, scheduler.py)
├── environments/ # RL training environments (Atropos)
@@ -181,7 +181,7 @@ if canonical == "mycommand":
## Adding New Tools
Requires changes in **3 files**:
Requires changes in **2 files**:
**1. Create `tools/your_tool.py`:**
```python
@@ -204,9 +204,9 @@ registry.register(
)
```
**2. Add import** in `model_tools.py` `_discover_tools()` list.
**2. Add to `toolsets.py`** — either `_HERMES_CORE_TOOLS` (all platforms) or a new toolset.
**3. Add to `toolsets.py`** — either `_HERMES_CORE_TOOLS` (all platforms) or a new toolset.
Auto-discovery: any `tools/*.py` file with a top-level `registry.register()` call is imported automatically — no manual import list to maintain.
The registry handles schema collection, dispatch, availability checking, and error wrapping. All handlers MUST return a JSON string.
@@ -351,8 +351,9 @@ Cache-breaking forces dramatically higher costs. The ONLY time we alter context
### Background Process Notifications (Gateway)
When `terminal(background=true, check_interval=...)` is used, the gateway runs a watcher that
pushes status updates to the user's chat. Control verbosity with `display.background_process_notifications`
When `terminal(background=true, notify_on_complete=true)` is used, the gateway runs a watcher that
detects process completion and triggers a new agent turn. Control verbosity of background process
messages with `display.background_process_notifications`
in config.yaml (or `HERMES_BACKGROUND_NOTIFICATIONS` env var):
- `all` — running-output updates + final message (default)


@@ -1,27 +1,44 @@
FROM ghcr.io/astral-sh/uv:0.11.6-python3.13-trixie@sha256:b3c543b6c4f23a5f2df22866bd7857e5d304b67a564f4feab6ac22044dde719b AS uv_source
FROM tianon/gosu:1.19-trixie@sha256:3b176695959c71e123eb390d427efc665eeb561b1540e82679c15e992006b8b9 AS gosu_source
FROM debian:13.4
# Disable Python stdout buffering to ensure logs are printed immediately
ENV PYTHONUNBUFFERED=1
# Store Playwright browsers outside the volume mount so the build-time
# install survives the /opt/data volume overlay at runtime.
ENV PLAYWRIGHT_BROWSERS_PATH=/opt/hermes/.playwright
# Install system dependencies in one layer, clear APT cache
RUN apt-get update && \
apt-get install -y --no-install-recommends \
build-essential nodejs npm python3 python3-pip ripgrep ffmpeg gcc python3-dev libffi-dev procps && \
build-essential nodejs npm python3 ripgrep ffmpeg gcc python3-dev libffi-dev procps git && \
rm -rf /var/lib/apt/lists/*
# Non-root user for runtime; UID can be overridden via HERMES_UID at runtime
RUN useradd -u 10000 -m -d /opt/data hermes
COPY --chmod=0755 --from=gosu_source /gosu /usr/local/bin/
COPY --chmod=0755 --from=uv_source /usr/local/bin/uv /usr/local/bin/uvx /usr/local/bin/
COPY . /opt/hermes
WORKDIR /opt/hermes
# Install Python and Node dependencies in one layer, no cache
RUN pip install --no-cache-dir uv --break-system-packages && \
uv pip install --system --break-system-packages --no-cache -e ".[all]" && \
npm install --prefer-offline --no-audit && \
# Install Node dependencies and Playwright as root (--with-deps needs apt)
RUN npm install --prefer-offline --no-audit && \
npx playwright install --with-deps chromium --only-shell && \
cd /opt/hermes/scripts/whatsapp-bridge && \
npm install --prefer-offline --no-audit && \
npm cache clean --force
WORKDIR /opt/hermes
# Hand ownership to hermes user, then install Python deps in a virtualenv
RUN chown -R hermes:hermes /opt/hermes
USER hermes
RUN uv venv && \
uv pip install --no-cache-dir -e ".[all]"
USER root
RUN chmod +x /opt/hermes/docker/entrypoint.sh
ENV HERMES_HOME=/opt/data


@@ -13,7 +13,7 @@
**The self-improving AI agent built by [Nous Research](https://nousresearch.com).** It's the only agent with a built-in learning loop — it creates skills from experience, improves them during use, nudges itself to persist knowledge, searches its own past conversations, and builds a deepening model of who you are across sessions. Run it on a $5 VPS, a GPU cluster, or serverless infrastructure that costs nearly nothing when idle. It's not tied to your laptop — talk to it from Telegram while it works on a cloud VM.
Use any model you want — [Nous Portal](https://portal.nousresearch.com), [OpenRouter](https://openrouter.ai) (200+ models), [z.ai/GLM](https://z.ai), [Kimi/Moonshot](https://platform.moonshot.ai), [MiniMax](https://www.minimax.io), OpenAI, or your own endpoint. Switch with `hermes model` — no code changes, no lock-in.
Use any model you want — [Nous Portal](https://portal.nousresearch.com), [OpenRouter](https://openrouter.ai) (200+ models), [Xiaomi MiMo](https://platform.xiaomimimo.com), [z.ai/GLM](https://z.ai), [Kimi/Moonshot](https://platform.moonshot.ai), [MiniMax](https://www.minimax.io), [Hugging Face](https://huggingface.co), OpenAI, or your own endpoint. Switch with `hermes model` — no code changes, no lock-in.
<table>
<tr><td><b>A real terminal interface</b></td><td>Full TUI with multiline editing, slash-command autocomplete, conversation history, interrupt-and-redirect, and streaming tool output.</td></tr>
@@ -167,6 +167,7 @@ python -m pytest tests/ -q
- 📚 [Skills Hub](https://agentskills.io)
- 🐛 [Issues](https://github.com/NousResearch/hermes-agent/issues)
- 💡 [Discussions](https://github.com/NousResearch/hermes-agent/discussions)
- 🔌 [HermesClaw](https://github.com/AaronWong1999/hermesclaw) — Community WeChat bridge: Run Hermes Agent and OpenClaw on the same WeChat account.
---

RELEASE_v0.9.0.md
# Hermes Agent v0.9.0 (v2026.4.13)
**Release Date:** April 13, 2026
**Since v0.8.0:** 487 commits · 269 merged PRs · 167 resolved issues · 493 files changed · 63,281 insertions · 24 contributors
> The everywhere release — Hermes goes mobile with Termux/Android, adds iMessage and WeChat, ships Fast Mode for OpenAI and Anthropic, introduces background process monitoring, launches a local web dashboard for managing your agent, and delivers the deepest security hardening pass yet across 16 supported platforms.
---
## ✨ Highlights
- **Local Web Dashboard** — A new browser-based dashboard for managing your Hermes Agent locally. Configure settings, monitor sessions, browse skills, and manage your gateway — all from a clean web interface without touching config files or the terminal. The easiest way to get started with Hermes.
- **Fast Mode (`/fast`)** — Priority processing for OpenAI and Anthropic models. Toggle `/fast` to route through priority queues for significantly lower latency on supported models (GPT-5.4, Codex, Claude). Expands across all OpenAI Priority Processing models and Anthropic's fast tier. ([#6875](https://github.com/NousResearch/hermes-agent/pull/6875), [#6960](https://github.com/NousResearch/hermes-agent/pull/6960), [#7037](https://github.com/NousResearch/hermes-agent/pull/7037))
- **iMessage via BlueBubbles** — Full iMessage integration through BlueBubbles, bringing Hermes to Apple's messaging ecosystem. Auto-webhook registration, setup wizard integration, and crash resilience. ([#6437](https://github.com/NousResearch/hermes-agent/pull/6437), [#6460](https://github.com/NousResearch/hermes-agent/pull/6460), [#6494](https://github.com/NousResearch/hermes-agent/pull/6494))
- **WeChat (Weixin) & WeCom Callback Mode** — Native WeChat support via iLink Bot API and a new WeCom callback-mode adapter for self-built enterprise apps. Streaming cursor, media uploads, markdown link handling, and atomic state persistence. Hermes now covers the Chinese messaging ecosystem end-to-end. ([#7166](https://github.com/NousResearch/hermes-agent/pull/7166), [#7943](https://github.com/NousResearch/hermes-agent/pull/7943))
- **Termux / Android Support** — Run Hermes natively on Android via Termux. Adapted install paths, TUI optimizations for mobile screens, voice backend support, and the `/image` command work on-device. ([#6834](https://github.com/NousResearch/hermes-agent/pull/6834))
- **Background Process Monitoring (`watch_patterns`)** — Set patterns to watch for in background process output and get notified in real-time when they match. Monitor for errors, wait for specific events ("listening on port"), or watch build logs — all without polling. ([#7635](https://github.com/NousResearch/hermes-agent/pull/7635))
- **Native xAI & Xiaomi MiMo Providers** — First-class provider support for xAI (Grok) and Xiaomi MiMo, with direct API access, model catalogs, and setup wizard integration. Plus Qwen OAuth with portal request support. ([#7372](https://github.com/NousResearch/hermes-agent/pull/7372), [#7855](https://github.com/NousResearch/hermes-agent/pull/7855))
- **Pluggable Context Engine** — Context management is now a pluggable slot via `hermes plugins`. Swap in custom context engines that control what the agent sees each turn — filtering, summarization, or domain-specific context injection. ([#7464](https://github.com/NousResearch/hermes-agent/pull/7464))
- **Unified Proxy Support** — SOCKS proxy, `DISCORD_PROXY`, and system proxy auto-detection across all gateway platforms. Hermes behind corporate firewalls just works. ([#6814](https://github.com/NousResearch/hermes-agent/pull/6814))
- **Comprehensive Security Hardening** — Path traversal protection in checkpoint manager, shell injection neutralization in sandbox writes, SSRF redirect guards in Slack image uploads, Twilio webhook signature validation (SMS RCE fix), API server auth enforcement, git argument injection prevention, and approval button authorization. ([#7933](https://github.com/NousResearch/hermes-agent/pull/7933), [#7944](https://github.com/NousResearch/hermes-agent/pull/7944), [#7940](https://github.com/NousResearch/hermes-agent/pull/7940), [#7151](https://github.com/NousResearch/hermes-agent/pull/7151), [#7156](https://github.com/NousResearch/hermes-agent/pull/7156))
- **`hermes backup` & `hermes import`** — Full backup and restore of your Hermes configuration, sessions, skills, and memory. Migrate between machines or create snapshots before major changes. ([#7997](https://github.com/NousResearch/hermes-agent/pull/7997))
- **16 Supported Platforms** — With BlueBubbles (iMessage) and WeChat joining Telegram, Discord, Slack, WhatsApp, Signal, Matrix, Email, SMS, DingTalk, Feishu, WeCom, Mattermost, Home Assistant, and Webhooks, Hermes now runs on 16 messaging platforms out of the box.
- **`/debug` & `hermes debug share`** — New debugging toolkit: `/debug` slash command across all platforms for quick diagnostics, plus `hermes debug share` to upload a full debug report to a pastebin for easy sharing when troubleshooting. ([#8681](https://github.com/NousResearch/hermes-agent/pull/8681))
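The `watch_patterns` idea in the highlights boils down to matching regexes against streamed process output as it arrives, instead of polling a log file. A minimal sketch of that mechanism — the class, config shape, and callback are illustrative, not Hermes's actual API:

```python
import re

# Hypothetical pattern watcher: scan each streamed output line and fire a
# callback on the first matching pattern, without polling.
class OutputWatcher:
    def __init__(self, patterns, on_match):
        # `patterns` stands in for a watch_patterns-style config list.
        self.patterns = [re.compile(p) for p in patterns]
        self.on_match = on_match

    def feed(self, line: str) -> bool:
        for pat in self.patterns:
            if pat.search(line):
                self.on_match(pat.pattern, line)
                return True
        return False

hits = []
watcher = OutputWatcher(
    [r"ERROR", r"listening on port \d+"],
    on_match=lambda pattern, line: hits.append((pattern, line)),
)
watcher.feed("build step 3/7 ok")
watcher.feed("server listening on port 8080")  # fires the callback
```

Feeding lines as they stream in is what makes this push-based: the agent is notified the moment "listening on port 8080" appears rather than on the next poll.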
---
## 🏗️ Core Agent & Architecture
### Provider & Model Support
- **Native xAI (Grok) provider** with direct API access and model catalog ([#7372](https://github.com/NousResearch/hermes-agent/pull/7372))
- **Xiaomi MiMo as first-class provider** — setup wizard, model catalog, empty response recovery ([#7855](https://github.com/NousResearch/hermes-agent/pull/7855))
- **Qwen OAuth provider** with portal request support ([#6282](https://github.com/NousResearch/hermes-agent/pull/6282))
- **Fast Mode** — `/fast` toggle for OpenAI Priority Processing + Anthropic fast tier ([#6875](https://github.com/NousResearch/hermes-agent/pull/6875), [#6960](https://github.com/NousResearch/hermes-agent/pull/6960), [#7037](https://github.com/NousResearch/hermes-agent/pull/7037))
- **Structured API error classification** for smart failover decisions ([#6514](https://github.com/NousResearch/hermes-agent/pull/6514))
- **Rate limit header capture** shown in `/usage` ([#6541](https://github.com/NousResearch/hermes-agent/pull/6541))
- **API server model name** derived from profile name ([#6857](https://github.com/NousResearch/hermes-agent/pull/6857))
- **Custom providers** now included in `/model` listings and resolution ([#7088](https://github.com/NousResearch/hermes-agent/pull/7088))
- **Fallback provider activation** on repeated empty responses with user-visible status ([#7505](https://github.com/NousResearch/hermes-agent/pull/7505))
- **OpenRouter variant tags** (`:free`, `:extended`, `:fast`) preserved during model switch ([#6383](https://github.com/NousResearch/hermes-agent/pull/6383))
- **Credential exhaustion TTL** reduced from 24 hours to 1 hour ([#6504](https://github.com/NousResearch/hermes-agent/pull/6504))
- **OAuth credential lifecycle** hardening — stale pool keys, auth.json sync, Codex CLI race fixes ([#6874](https://github.com/NousResearch/hermes-agent/pull/6874))
- Empty response recovery for reasoning models (MiMo, Qwen, GLM) ([#8609](https://github.com/NousResearch/hermes-agent/pull/8609))
- MiniMax context lengths, thinking guard, endpoint corrections ([#6082](https://github.com/NousResearch/hermes-agent/pull/6082), [#7126](https://github.com/NousResearch/hermes-agent/pull/7126))
- Z.AI endpoint auto-detect via probe and cache ([#5763](https://github.com/NousResearch/hermes-agent/pull/5763))
### Agent Loop & Conversation
- **Pluggable context engine slot** via `hermes plugins` ([#7464](https://github.com/NousResearch/hermes-agent/pull/7464))
- **Background process monitoring** — `watch_patterns` for real-time output alerts ([#7635](https://github.com/NousResearch/hermes-agent/pull/7635))
- **Improved context compression** — higher limits, tool tracking, degradation warnings, token-budget tail protection ([#6395](https://github.com/NousResearch/hermes-agent/pull/6395), [#6453](https://github.com/NousResearch/hermes-agent/pull/6453))
- **`/compress <focus>`** — guided compression with a focus topic ([#8017](https://github.com/NousResearch/hermes-agent/pull/8017))
- **Tiered context pressure warnings** with gateway dedup ([#6411](https://github.com/NousResearch/hermes-agent/pull/6411))
- **Staged inactivity warning** before timeout escalation ([#6387](https://github.com/NousResearch/hermes-agent/pull/6387))
- **Prevent agent from stopping mid-task** — compression floor, budget overhaul, activity tracking ([#7983](https://github.com/NousResearch/hermes-agent/pull/7983))
- **Propagate child activity to parent** during `delegate_task` ([#7295](https://github.com/NousResearch/hermes-agent/pull/7295))
- **Truncated streaming tool call detection** before execution ([#6847](https://github.com/NousResearch/hermes-agent/pull/6847))
- Empty response retry (3 attempts with nudge) ([#6488](https://github.com/NousResearch/hermes-agent/pull/6488))
- Adaptive streaming backoff + cursor strip to prevent message truncation ([#7683](https://github.com/NousResearch/hermes-agent/pull/7683))
- Compression uses live session model instead of stale persisted config ([#8258](https://github.com/NousResearch/hermes-agent/pull/8258))
- Strip `<thought>` tags from Gemma 4 responses ([#8562](https://github.com/NousResearch/hermes-agent/pull/8562))
- Prevent `<think>` in prose from suppressing response output ([#6968](https://github.com/NousResearch/hermes-agent/pull/6968))
- Turn-exit diagnostic logging to agent loop ([#6549](https://github.com/NousResearch/hermes-agent/pull/6549))
- Scope tool interrupt signal per-thread to prevent cross-session leaks ([#7930](https://github.com/NousResearch/hermes-agent/pull/7930))
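The two `<think>`/`<thought>` fixes above share one subtlety: a reasoning block should be stripped only when it actually opens the response, so the literal string appearing mid-prose is left alone. A sketch of that distinction (the regex is a guess at the behavior, not the project's code):

```python
import re

# Remove a reasoning block only when it starts the response, so a user
# *talking about* "<think>" in normal prose isn't swallowed.
THINK_BLOCK = re.compile(r"\A\s*<think>.*?</think>\s*", re.DOTALL)

def strip_leading_think(text: str) -> str:
    return THINK_BLOCK.sub("", text, count=1)

print(strip_leading_think("<think>plan the steps</think>Here is the answer."))
print(strip_leading_think("The token <think> can appear in prose."))
```

The `\A` anchor is the whole fix: an unanchored pattern would also delete matches buried inside legitimate output.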
### Memory & Sessions
- **Hindsight memory plugin** — feature parity, setup wizard, config improvements — @nicoloboschi ([#6428](https://github.com/NousResearch/hermes-agent/pull/6428))
- **Honcho** — opt-in `initOnSessionStart` for tools mode — @Kathie-yu ([#6995](https://github.com/NousResearch/hermes-agent/pull/6995))
- Orphan children instead of cascade-deleting in prune/delete ([#6513](https://github.com/NousResearch/hermes-agent/pull/6513))
- Doctor command only checks the active memory provider ([#6285](https://github.com/NousResearch/hermes-agent/pull/6285))
---
## 📱 Messaging Platforms (Gateway)
### New Platforms
- **BlueBubbles (iMessage)** — full adapter with auto-webhook registration, setup wizard, and crash resilience ([#6437](https://github.com/NousResearch/hermes-agent/pull/6437), [#6460](https://github.com/NousResearch/hermes-agent/pull/6460), [#6494](https://github.com/NousResearch/hermes-agent/pull/6494), [#7107](https://github.com/NousResearch/hermes-agent/pull/7107))
- **Weixin (WeChat)** — native support via iLink Bot API with streaming, media uploads, markdown links ([#7166](https://github.com/NousResearch/hermes-agent/pull/7166), [#8665](https://github.com/NousResearch/hermes-agent/pull/8665))
- **WeCom Callback Mode** — self-built enterprise app adapter with atomic state persistence ([#7943](https://github.com/NousResearch/hermes-agent/pull/7943), [#7928](https://github.com/NousResearch/hermes-agent/pull/7928))
### Discord
- **Allowed channels whitelist** config — @jarvis-phw ([#7044](https://github.com/NousResearch/hermes-agent/pull/7044))
- **Forum channel topic inheritance** in thread sessions — @hermes-agent-dhabibi ([#6377](https://github.com/NousResearch/hermes-agent/pull/6377))
- **DISCORD_REPLY_TO_MODE** setting ([#6333](https://github.com/NousResearch/hermes-agent/pull/6333))
- Accept `.log` attachments, raise document size limit — @kira-ariaki ([#6467](https://github.com/NousResearch/hermes-agent/pull/6467))
- Decouple readiness from slash sync ([#8016](https://github.com/NousResearch/hermes-agent/pull/8016))
### Slack
- **Consolidated Slack improvements** — 7 community PRs salvaged into one ([#6809](https://github.com/NousResearch/hermes-agent/pull/6809))
- Handle assistant thread lifecycle events ([#6433](https://github.com/NousResearch/hermes-agent/pull/6433))
### Matrix
- **Migrated from matrix-nio to mautrix-python** ([#7518](https://github.com/NousResearch/hermes-agent/pull/7518))
- SQLite crypto store replacing pickle (fixes E2EE decryption) — @alt-glitch ([#7981](https://github.com/NousResearch/hermes-agent/pull/7981))
- Cross-signing recovery key verification for E2EE migration ([#8282](https://github.com/NousResearch/hermes-agent/pull/8282))
- DM mention threads + group chat events for Feishu ([#7423](https://github.com/NousResearch/hermes-agent/pull/7423))
### Gateway Core
- **Unified proxy support** — SOCKS, DISCORD_PROXY, multi-platform with macOS auto-detection ([#6814](https://github.com/NousResearch/hermes-agent/pull/6814))
- **Inbound text batching** for Discord, Matrix, WeCom + adaptive delay ([#6979](https://github.com/NousResearch/hermes-agent/pull/6979))
- **Surface natural mid-turn assistant messages** in chat platforms ([#7978](https://github.com/NousResearch/hermes-agent/pull/7978))
- **WSL-aware gateway** with smart systemd detection ([#7510](https://github.com/NousResearch/hermes-agent/pull/7510))
- **All missing platforms added to setup wizard** ([#7949](https://github.com/NousResearch/hermes-agent/pull/7949))
- **Per-platform `tool_progress` overrides** ([#6348](https://github.com/NousResearch/hermes-agent/pull/6348))
- **Configurable 'still working' notification interval** ([#8572](https://github.com/NousResearch/hermes-agent/pull/8572))
- `/model` switch persists across messages ([#7081](https://github.com/NousResearch/hermes-agent/pull/7081))
- `/usage` shows rate limits, cost, and token details between turns ([#7038](https://github.com/NousResearch/hermes-agent/pull/7038))
- Drain in-flight work before restart ([#7503](https://github.com/NousResearch/hermes-agent/pull/7503))
- Don't evict cached agent on failed runs — prevents MCP restart loop ([#7539](https://github.com/NousResearch/hermes-agent/pull/7539))
- Replace `os.environ` session state with `contextvars` ([#7454](https://github.com/NousResearch/hermes-agent/pull/7454))
- Derive channel directory platforms from enum instead of hardcoded list ([#7450](https://github.com/NousResearch/hermes-agent/pull/7450))
- Validate image downloads before caching (cross-platform) ([#7125](https://github.com/NousResearch/hermes-agent/pull/7125))
- Cross-platform webhook delivery for all platforms ([#7095](https://github.com/NousResearch/hermes-agent/pull/7095))
- Cron Discord thread_id delivery support ([#7106](https://github.com/NousResearch/hermes-agent/pull/7106))
- Feishu QR-based bot onboarding ([#8570](https://github.com/NousResearch/hermes-agent/pull/8570))
- Gateway status scoped to active profile ([#7951](https://github.com/NousResearch/hermes-agent/pull/7951))
- Prevent background process notifications from triggering false pairing requests ([#6434](https://github.com/NousResearch/hermes-agent/pull/6434))
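The `os.environ` → `contextvars` change above is worth a sketch: environment variables are process-global, so concurrent gateway sessions could clobber each other's state, while a `ContextVar` gives each execution context its own value. Names here are illustrative:

```python
import contextvars

# Each session context carries its own value; concurrent tasks no longer
# share one mutable global the way os.environ entries do.
current_session: contextvars.ContextVar[str] = contextvars.ContextVar(
    "current_session", default="<none>"
)

def handle_message() -> str:
    # Reads whatever session the *current* context carries.
    return current_session.get()

ctx_a = contextvars.copy_context()
ctx_a.run(current_session.set, "telegram:alice")
ctx_b = contextvars.copy_context()
ctx_b.run(current_session.set, "discord:bob")

print(ctx_a.run(handle_message))  # telegram:alice
print(ctx_b.run(handle_message))  # discord:bob
print(handle_message())           # <none> — the outer context is untouched
```

In an async gateway the same isolation comes for free, since `asyncio` tasks each run in a copied context.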
---
## 🖥️ CLI & User Experience
### Interactive CLI
- **Termux / Android support** — adapted install paths, TUI, voice, `/image` ([#6834](https://github.com/NousResearch/hermes-agent/pull/6834))
- **Native `/model` picker modal** for provider → model selection ([#8003](https://github.com/NousResearch/hermes-agent/pull/8003))
- **Live per-tool elapsed timer** restored in TUI spinner ([#7359](https://github.com/NousResearch/hermes-agent/pull/7359))
- **Stacked tool progress scrollback** in TUI ([#8201](https://github.com/NousResearch/hermes-agent/pull/8201))
- **Random tips on new session start** (CLI + gateway, 279 tips) ([#8225](https://github.com/NousResearch/hermes-agent/pull/8225), [#8237](https://github.com/NousResearch/hermes-agent/pull/8237))
- **`hermes dump`** — copy-pasteable setup summary for debugging ([#6550](https://github.com/NousResearch/hermes-agent/pull/6550))
- **`hermes backup` / `hermes import`** — full config backup and restore ([#7997](https://github.com/NousResearch/hermes-agent/pull/7997))
- **WSL environment hint** in system prompt ([#8285](https://github.com/NousResearch/hermes-agent/pull/8285))
- **Profile creation UX** — seed SOUL.md + credential warning ([#8553](https://github.com/NousResearch/hermes-agent/pull/8553))
- Shell-aware sudo detection, empty password support ([#6517](https://github.com/NousResearch/hermes-agent/pull/6517))
- Flush stdin after curses/terminal menus to prevent escape sequence leakage ([#7167](https://github.com/NousResearch/hermes-agent/pull/7167))
- Handle broken stdin in prompt_toolkit startup ([#8560](https://github.com/NousResearch/hermes-agent/pull/8560))
### Setup & Configuration
- **Per-platform display verbosity** configuration ([#8006](https://github.com/NousResearch/hermes-agent/pull/8006))
- **Component-separated logging** with session context and filtering ([#7991](https://github.com/NousResearch/hermes-agent/pull/7991))
- **`network.force_ipv4`** config to fix IPv6 timeout issues ([#8196](https://github.com/NousResearch/hermes-agent/pull/8196))
- **Standardize message whitespace and JSON formatting** ([#7988](https://github.com/NousResearch/hermes-agent/pull/7988))
- **Rebrand OpenClaw → Hermes** during migration ([#8210](https://github.com/NousResearch/hermes-agent/pull/8210))
- Config.yaml takes priority over env vars for auxiliary settings ([#7889](https://github.com/NousResearch/hermes-agent/pull/7889))
- Harden setup provider flows + live OpenRouter catalog refresh ([#7078](https://github.com/NousResearch/hermes-agent/pull/7078))
- Normalize reasoning effort ordering across all surfaces ([#6804](https://github.com/NousResearch/hermes-agent/pull/6804))
- Remove dead `LLM_MODEL` env var + migration to clear stale entries ([#6543](https://github.com/NousResearch/hermes-agent/pull/6543))
- Remove `/prompt` slash command — prefix expansion footgun ([#6752](https://github.com/NousResearch/hermes-agent/pull/6752))
- `HERMES_HOME_MODE` env var to override permissions — @ygd58 ([#6993](https://github.com/NousResearch/hermes-agent/pull/6993))
- Fall back to default model when model config is empty ([#8303](https://github.com/NousResearch/hermes-agent/pull/8303))
- Warn when compression model context is too small ([#7894](https://github.com/NousResearch/hermes-agent/pull/7894))
---
## 🔧 Tool System
### Environments & Execution
- **Unified spawn-per-call execution layer** for environments ([#6343](https://github.com/NousResearch/hermes-agent/pull/6343))
- **Unified file sync** with mtime tracking, deletion, and transactional state ([#7087](https://github.com/NousResearch/hermes-agent/pull/7087))
- **Persistent sandbox envs** survive between turns ([#6412](https://github.com/NousResearch/hermes-agent/pull/6412))
- **Bulk file sync** via tar pipe for SSH/Modal backends — @alt-glitch ([#8014](https://github.com/NousResearch/hermes-agent/pull/8014))
- **Daytona** — bulk upload, config bridge, silent disk cap ([#7538](https://github.com/NousResearch/hermes-agent/pull/7538))
- Foreground timeout cap to prevent session deadlocks ([#7082](https://github.com/NousResearch/hermes-agent/pull/7082))
- Guard invalid command values ([#6417](https://github.com/NousResearch/hermes-agent/pull/6417))
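Spawn-per-call plus a foreground timeout cap, as described above, can be sketched in a few lines: every tool invocation gets a fresh process, and a hung command is bounded so it cannot deadlock the session. The cap value and function name are illustrative:

```python
import subprocess

FOREGROUND_TIMEOUT_CAP = 10  # seconds; illustrative value

def run_tool(cmd: list[str], timeout: float) -> str:
    # Fresh process per call; the requested timeout is capped so a hung
    # foreground command can't block the agent loop indefinitely.
    capped = min(timeout, FOREGROUND_TIMEOUT_CAP)
    try:
        out = subprocess.run(cmd, capture_output=True, text=True, timeout=capped)
        return out.stdout.strip()
    except subprocess.TimeoutExpired:
        return f"<timed out after {capped}s>"

print(run_tool(["echo", "hello"], timeout=120))  # hello — capped to 10s
```

Spawning per call also means no state leaks between tool invocations, which is what lets persistent sandbox environments be an explicit, opt-in layer on top.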
### MCP
- **`hermes mcp add --env` and `--preset`** support ([#7970](https://github.com/NousResearch/hermes-agent/pull/7970))
- Combine `content` and `structuredContent` when both present ([#7118](https://github.com/NousResearch/hermes-agent/pull/7118))
- MCP tool name deconfliction fixes ([#7654](https://github.com/NousResearch/hermes-agent/pull/7654))
### Browser
- Browser hardening — dead code removal, caching, scroll perf, security, thread safety ([#7354](https://github.com/NousResearch/hermes-agent/pull/7354))
- `/browser connect` auto-launch uses dedicated Chrome profile dir ([#6821](https://github.com/NousResearch/hermes-agent/pull/6821))
- Reap orphaned browser sessions on startup ([#7931](https://github.com/NousResearch/hermes-agent/pull/7931))
### Voice & Vision
- **Voxtral TTS provider** (Mistral AI) ([#7653](https://github.com/NousResearch/hermes-agent/pull/7653))
- **TTS speed support** for Edge TTS, OpenAI TTS, MiniMax ([#8666](https://github.com/NousResearch/hermes-agent/pull/8666))
- **Vision auto-resize** for oversized images, raise limit to 20 MB, retry-on-failure ([#7883](https://github.com/NousResearch/hermes-agent/pull/7883), [#7902](https://github.com/NousResearch/hermes-agent/pull/7902))
- STT provider-model mismatch fix (whisper-1 vs faster-whisper) ([#7113](https://github.com/NousResearch/hermes-agent/pull/7113))
### Other Tools
- **`hermes dump`** command for setup summary ([#6550](https://github.com/NousResearch/hermes-agent/pull/6550))
- TODO store enforces ID uniqueness during replace operations ([#7986](https://github.com/NousResearch/hermes-agent/pull/7986))
- List all available toolsets in `delegate_task` schema description ([#8231](https://github.com/NousResearch/hermes-agent/pull/8231))
- API server: tool progress as custom SSE event to prevent model corruption ([#7500](https://github.com/NousResearch/hermes-agent/pull/7500))
- API server: share one Docker container across all conversations ([#7127](https://github.com/NousResearch/hermes-agent/pull/7127))
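The custom-SSE-event fix above relies on the server-sent-events wire format: a frame with its own `event:` name is ignored by clients listening only to the default `message` event, so progress lines can't be mistaken for model output. A sketch of the framing (the event name and payload fields are illustrative):

```python
import json

def sse_event(event: str, payload: dict) -> str:
    # SSE wire format: optional "event:" line, then "data:" line(s),
    # terminated by a blank line.
    return f"event: {event}\ndata: {json.dumps(payload)}\n\n"

frame = sse_event("tool_progress", {"tool": "bash", "elapsed_s": 3})
print(frame, end="")
```

A browser client would then subscribe with `source.addEventListener("tool_progress", …)` while its ordinary `onmessage` handler keeps receiving only model tokens.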
---
## 🧩 Skills Ecosystem
- **Centralized skills index + tree cache** — eliminates rate-limit failures on install ([#8575](https://github.com/NousResearch/hermes-agent/pull/8575))
- **More aggressive skill loading instructions** in system prompt (v3) ([#8209](https://github.com/NousResearch/hermes-agent/pull/8209), [#8286](https://github.com/NousResearch/hermes-agent/pull/8286))
- **Google Workspace skill** migrated to GWS CLI backend ([#6788](https://github.com/NousResearch/hermes-agent/pull/6788))
- **Creative divergence strategies** skill — @SHL0MS ([#6882](https://github.com/NousResearch/hermes-agent/pull/6882))
- **Creative ideation** — constraint-driven project generation — @SHL0MS ([#7555](https://github.com/NousResearch/hermes-agent/pull/7555))
- Parallelize skills browse/search to prevent hanging ([#7301](https://github.com/NousResearch/hermes-agent/pull/7301))
- Read name from SKILL.md frontmatter in skills_sync ([#7623](https://github.com/NousResearch/hermes-agent/pull/7623))
---
## 🔒 Security & Reliability
### Security Hardening
- **Twilio webhook signature validation** — SMS RCE fix ([#7933](https://github.com/NousResearch/hermes-agent/pull/7933))
- **Shell injection neutralization** in `_write_to_sandbox` via path quoting ([#7940](https://github.com/NousResearch/hermes-agent/pull/7940))
- **Git argument injection** and path traversal prevention in checkpoint manager ([#7944](https://github.com/NousResearch/hermes-agent/pull/7944))
- **SSRF redirect bypass** in Slack image uploads + base.py cache helpers ([#7151](https://github.com/NousResearch/hermes-agent/pull/7151))
- **Path traversal, credential gate, DANGEROUS_PATTERNS gaps** ([#7156](https://github.com/NousResearch/hermes-agent/pull/7156))
- **API bind guard** — enforce `API_SERVER_KEY` for non-loopback binding ([#7455](https://github.com/NousResearch/hermes-agent/pull/7455))
- **Approval button authorization** — require auth for session continuation — @Cafexss ([#6930](https://github.com/NousResearch/hermes-agent/pull/6930))
- Path boundary enforcement in skill manager operations ([#7156](https://github.com/NousResearch/hermes-agent/pull/7156))
- DingTalk/API webhook URL origin validation, header injection rejection ([#7455](https://github.com/NousResearch/hermes-agent/pull/7455))
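The Twilio fix above is an instance of a general pattern: recompute an HMAC over the request and compare it to the sender's signature in constant time, rejecting anything unsigned. This sketch uses generic HMAC-SHA256 over the raw body — Twilio's real scheme differs in detail (it signs the URL plus sorted POST parameters with SHA-1), so treat this as the pattern, not the protocol:

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, body: bytes, signature_hex: str) -> bool:
    # Recompute the MAC and compare in constant time to avoid timing leaks.
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

secret = b"shared-secret"
body = b'{"From":"+15550100","Body":"hi"}'
good_sig = hmac.new(secret, body, hashlib.sha256).hexdigest()

print(verify_webhook(secret, body, good_sig))        # True
print(verify_webhook(secret, body, "deadbeef" * 8))  # False
```

Without this check, anyone who can reach the webhook URL can inject messages — which, for an agent that executes tools, is remote code execution.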
### Reliability
- **Contextual error diagnostics** for invalid API responses ([#8565](https://github.com/NousResearch/hermes-agent/pull/8565))
- **Prevent 400 format errors** from triggering compression loop on Codex ([#6751](https://github.com/NousResearch/hermes-agent/pull/6751))
- **Don't halve context_length** on output-cap-too-large errors — @KUSH42 ([#6664](https://github.com/NousResearch/hermes-agent/pull/6664))
- **Recover primary client** on OpenAI transport errors ([#7108](https://github.com/NousResearch/hermes-agent/pull/7108))
- **Credential pool rotation** on billing-classified 400s ([#7112](https://github.com/NousResearch/hermes-agent/pull/7112))
- **Auto-increase stream read timeout** for local LLM providers ([#6967](https://github.com/NousResearch/hermes-agent/pull/6967))
- **Fall back to default certs** when CA bundle path doesn't exist ([#7352](https://github.com/NousResearch/hermes-agent/pull/7352))
- **Disambiguate usage-limit patterns** in error classifier — @sprmn24 ([#6836](https://github.com/NousResearch/hermes-agent/pull/6836))
- Harden cron script timeout and provider recovery ([#7079](https://github.com/NousResearch/hermes-agent/pull/7079))
- Gateway interrupt detection resilient to monitor task failures ([#8208](https://github.com/NousResearch/hermes-agent/pull/8208))
- Prevent unwanted session auto-reset after graceful gateway restarts ([#8299](https://github.com/NousResearch/hermes-agent/pull/8299))
- Prevent duplicate update prompt spam in gateway watcher ([#8343](https://github.com/NousResearch/hermes-agent/pull/8343))
- Deduplicate reasoning items in Responses API input ([#7946](https://github.com/NousResearch/hermes-agent/pull/7946))
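Several fixes above follow one recover-and-retry pattern: retry transient transport errors with exponential backoff, but let anything classified as non-retryable (auth failures, billing errors) propagate immediately. A minimal sketch with illustrative error classes:

```python
import time

class TransportError(Exception): ...  # transient: worth retrying
class AuthError(Exception): ...       # permanent: retrying can't fix it

def call_with_retry(fn, attempts=3, base_delay=0.01):
    for i in range(attempts):
        try:
            return fn()
        except TransportError:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))  # 0.01s, 0.02s, ...
        # AuthError (and anything else) propagates on the first failure.

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransportError("connection reset")
    return "ok"

print(call_with_retry(flaky))  # ok, after two retries
```

The classification step is the important half: misfiling an auth error as retryable (one of the bugs fixed above) turns a bad credential into three slow failures instead of one clear one.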
### Infrastructure
- **Multi-arch Docker image** — amd64 + arm64 ([#6124](https://github.com/NousResearch/hermes-agent/pull/6124))
- **Docker runs as non-root user** with virtualenv — @benbarclay ([#8226](https://github.com/NousResearch/hermes-agent/pull/8226))
- **Use `uv`** for Docker dependency resolution to fix resolution-too-deep ([#6965](https://github.com/NousResearch/hermes-agent/pull/6965))
- **Container-aware Nix CLI** — auto-route into managed container — @alt-glitch ([#7543](https://github.com/NousResearch/hermes-agent/pull/7543))
- **Nix shared-state permission model** for interactive CLI users — @alt-glitch ([#6796](https://github.com/NousResearch/hermes-agent/pull/6796))
- **Per-profile subprocess HOME isolation** ([#7357](https://github.com/NousResearch/hermes-agent/pull/7357))
- Profile paths fixed in Docker — profiles go to mounted volume ([#7170](https://github.com/NousResearch/hermes-agent/pull/7170))
- Docker container gateway pathway hardened ([#8614](https://github.com/NousResearch/hermes-agent/pull/8614))
- Enable unbuffered stdout for live Docker logs ([#6749](https://github.com/NousResearch/hermes-agent/pull/6749))
- Install procps in Docker image — @HiddenPuppy ([#7032](https://github.com/NousResearch/hermes-agent/pull/7032))
- Shallow git clone for faster installation — @sosyz ([#8396](https://github.com/NousResearch/hermes-agent/pull/8396))
- `hermes update` always reset on stash conflict ([#7010](https://github.com/NousResearch/hermes-agent/pull/7010))
- Write update exit code before gateway restart (cgroup kill race) ([#8288](https://github.com/NousResearch/hermes-agent/pull/8288))
- Nix: `setupSecrets` optional, tirith runtime dep — @devorun, @ethernet8023 ([#6261](https://github.com/NousResearch/hermes-agent/pull/6261), [#6721](https://github.com/NousResearch/hermes-agent/pull/6721))
- launchd stop uses `bootout` so `KeepAlive` doesn't respawn ([#7119](https://github.com/NousResearch/hermes-agent/pull/7119))
---
## 🐛 Notable Bug Fixes
- Fix: `/model` switch not persisting across gateway messages ([#7081](https://github.com/NousResearch/hermes-agent/pull/7081))
- Fix: session-scoped gateway model overrides ignored — @Hygaard ([#7662](https://github.com/NousResearch/hermes-agent/pull/7662))
- Fix: compaction model context length ignoring config — 3 related issues ([#8258](https://github.com/NousResearch/hermes-agent/pull/8258), [#8107](https://github.com/NousResearch/hermes-agent/pull/8107))
- Fix: OpenCode.ai context window resolved to 128K instead of 1M ([#6472](https://github.com/NousResearch/hermes-agent/pull/6472))
- Fix: Codex fallback auth-store lookup — @cherifya ([#6462](https://github.com/NousResearch/hermes-agent/pull/6462))
- Fix: duplicate completion notifications when process killed ([#7124](https://github.com/NousResearch/hermes-agent/pull/7124))
- Fix: agent daemon thread prevents orphan CLI processes on tab close ([#8557](https://github.com/NousResearch/hermes-agent/pull/8557))
- Fix: stale image attachment on text paste and voice input ([#7077](https://github.com/NousResearch/hermes-agent/pull/7077))
- Fix: DM thread session seeding causing cross-thread contamination ([#7084](https://github.com/NousResearch/hermes-agent/pull/7084))
- Fix: OpenClaw migration shows dry-run preview before executing ([#6769](https://github.com/NousResearch/hermes-agent/pull/6769))
- Fix: auth errors misclassified as retryable — @kuishou68 ([#7027](https://github.com/NousResearch/hermes-agent/pull/7027))
- Fix: Copilot-Integration-Id header missing ([#7083](https://github.com/NousResearch/hermes-agent/pull/7083))
- Fix: ACP session capabilities — @luyao618 ([#6985](https://github.com/NousResearch/hermes-agent/pull/6985))
- Fix: ACP PromptResponse usage from top-level fields ([#7086](https://github.com/NousResearch/hermes-agent/pull/7086))
- Fix: several failing/flaky tests on main — @dsocolobsky ([#6777](https://github.com/NousResearch/hermes-agent/pull/6777))
- Fix: backup marker filenames — @sprmn24 ([#8600](https://github.com/NousResearch/hermes-agent/pull/8600))
- Fix: `NoneType` in fast_mode check — @0xbyt4 ([#7350](https://github.com/NousResearch/hermes-agent/pull/7350))
- Fix: missing imports in uninstall.py — @JiayuuWang ([#7034](https://github.com/NousResearch/hermes-agent/pull/7034))
---
## 📚 Documentation
- Platform adapter developer guide + WeCom Callback docs ([#7969](https://github.com/NousResearch/hermes-agent/pull/7969))
- Cron troubleshooting guide ([#7122](https://github.com/NousResearch/hermes-agent/pull/7122))
- Streaming timeout auto-detection for local LLMs ([#6990](https://github.com/NousResearch/hermes-agent/pull/6990))
- Tool-use enforcement documentation expanded ([#7984](https://github.com/NousResearch/hermes-agent/pull/7984))
- BlueBubbles pairing instructions ([#6548](https://github.com/NousResearch/hermes-agent/pull/6548))
- Telegram proxy support section ([#6348](https://github.com/NousResearch/hermes-agent/pull/6348))
- `hermes dump` and `hermes logs` CLI reference ([#6552](https://github.com/NousResearch/hermes-agent/pull/6552))
- `tool_progress_overrides` configuration reference ([#6364](https://github.com/NousResearch/hermes-agent/pull/6364))
- Compression model context length warning docs ([#7879](https://github.com/NousResearch/hermes-agent/pull/7879))
---
## 👥 Contributors
**269 merged PRs** from **24 contributors** across **487 commits**.
### Community Contributors
- **@alt-glitch** (6 PRs) — Nix container-aware CLI, shared-state permissions, Matrix SQLite crypto store, bulk SSH/Modal file sync, Matrix mautrix compat
- **@SHL0MS** (2 PRs) — Creative divergence strategies skill, creative ideation skill
- **@sprmn24** (2 PRs) — Error classifier disambiguation, backup marker fix
- **@nicoloboschi** — Hindsight memory plugin feature parity
- **@Hygaard** — Session-scoped gateway model override fix
- **@jarvis-phw** — Discord allowed_channels whitelist
- **@Kathie-yu** — Honcho initOnSessionStart for tools mode
- **@hermes-agent-dhabibi** — Discord forum channel topic inheritance
- **@kira-ariaki** — Discord .log attachments and size limit
- **@cherifya** — Codex fallback auth-store lookup
- **@Cafexss** — Security: auth for session continuation
- **@KUSH42** — Compaction context_length fix
- **@kuishou68** — Auth error retryable classification fix
- **@luyao618** — ACP session capabilities
- **@ygd58** — HERMES_HOME_MODE env var override
- **@0xbyt4** — Fast mode NoneType fix
- **@JiayuuWang** — CLI uninstall import fix
- **@HiddenPuppy** — Docker procps installation
- **@dsocolobsky** — Test suite fixes
- **@bobashopcashier** (1 PR) — Graceful gateway drain before restart (salvaged into #7503 from #7290)
- **@benbarclay** — Docker image tag simplification
- **@sosyz** — Shallow git clone for faster install
- **@devorun** — Nix setupSecrets optional
- **@ethernet8023** — Nix tirith runtime dep
---
**Full Changelog**: [v2026.4.8...v2026.4.13](https://github.com/NousResearch/hermes-agent/compare/v2026.4.8...v2026.4.13)


@@ -0,0 +1,172 @@
# Vector Database SOTA Research Report
## For AI Agent Semantic Retrieval — April 2026
---
## Executive Summary
This report analyzes current vector database benchmarks, documentation, and production deployments for semantic retrieval in AI agents, and compares them against the existing Hermes session_search (SQLite FTS5) and holographic memory systems.
---
## 1. Retrieval Accuracy (Recall@10)
| Database | HNSW Recall | IVF Recall | Notes |
|----------|-------------|------------|-------|
| **Qdrant** | 0.95-0.99 | N/A | Tunable via ef parameter |
| **Milvus** | 0.95-0.99 | 0.85-0.95 | Multiple index support |
| **Weaviate** | 0.95-0.98 | N/A | HNSW primary |
| **Pinecone** | 0.95-0.99 | N/A | Managed, opaque tuning |
| **ChromaDB** | 0.90-0.95 | N/A | Simpler, uses HNSW via hnswlib |
| **pgvector** | 0.85-0.95 | 0.80-0.90 | Depends on tuning |
| **SQLite-vss** | 0.80-0.90 | N/A | HNSW via sqlite-vss |
| **Current FTS5** | ~0.60-0.75* | N/A | Keyword matching only |
*FTS5 "recall" is estimated: keyword matching performs well on exact terms but poorly on semantic or paraphrased queries.
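The Recall@10 figures above follow the standard Recall@K definition; a minimal sketch of the metric itself (the document IDs are illustrative, this is not a benchmark harness):

```python
def recall_at_k(retrieved: list, relevant: list, k: int = 10) -> float:
    """Fraction of the relevant items that appear in the top-k retrieved results."""
    if not relevant:
        return 0.0
    hits = len(set(retrieved[:k]) & set(relevant))
    return hits / len(relevant)

# keyword-only retrieval finds exact matches but misses the paraphrased doc9
print(recall_at_k(["doc1", "doc7", "doc3"], ["doc1", "doc3", "doc9"], k=10))
```

Averaging this per-query value over a query set gives the table's recall numbers.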
---
## 2. Latency Benchmarks (1M vectors, 768-dim, 10 neighbors)
| Database | p50 (ms) | p99 (ms) | QPS | Notes |
|----------|----------|----------|-----|-------|
| **Qdrant** | 1-3 | 5-10 | 5,000-15,000 | Best self-hosted |
| **Milvus** | 2-5 | 8-15 | 3,000-12,000 | Good distributed |
| **Weaviate** | 3-8 | 10-25 | 2,000-8,000 | |
| **Pinecone** | 5-15 | 20-50 | 1,000-5,000 | Managed overhead |
| **ChromaDB** | 5-15 | 20-50 | 500-2,000 | Embedded mode |
| **pgvector** | 10-50 | 50-200 | 200-1,000 | SQL overhead |
| **SQLite-vss** | 10-30 | 50-150 | 300-800 | Limited scalability |
| **Current FTS5** | 2-10 | 15-50 | 1,000-5,000 | No embedding cost |
---
## 3. Index Types Comparison
### HNSW (Hierarchical Navigable Small World)
- Best for: High recall, moderate memory, fast queries
- Used by: Qdrant, Weaviate, ChromaDB, Milvus, pgvector, SQLite-vss
- Memory: High (~1.5GB per 1M 768-dim vectors)
- Key parameters: ef_construction (100-500), M (16-64), ef (64-256)
### IVF (Inverted File Index)
- Best for: Large datasets, memory-constrained
- Used by: Milvus, pgvector
- Memory: Lower (~0.5GB per 1M vectors)
- Key parameters: nlist (100-10000), nprobe (10-100)
### DiskANN / SPANN
- Best for: 100M+ vectors on disk
- Memory: Very low (~100MB index)
### Quantization (SQ/PQ)
- Memory reduction: 4-8x
- Recall impact: 5-15% lower recall
---
## 4. Multi-Modal Support
| Database | Text | Image | Audio | Video | Mixed Queries |
|----------|------|-------|-------|-------|---------------|
| Qdrant | ✅ | ✅ | ✅ | ✅ | ✅ (multi-vector) |
| Milvus | ✅ | ✅ | ✅ | ✅ | ✅ (hybrid) |
| Weaviate | ✅ | ✅ | ✅ | ✅ | ✅ (named vectors) |
| Pinecone | ✅ | ✅ | ✅ | ✅ | Limited |
| ChromaDB | ✅ | Via emb | Via emb | Via emb | Limited |
| pgvector | ✅ | Via emb | Via emb | Via emb | Limited |
| SQLite-vss | ✅ | Via emb | Via emb | Via emb | Limited |
---
## 5. Integration Patterns for AI Agents
### Pattern A: Direct Search
Query → Embedding → Vector DB → Top-K → LLM
### Pattern B: Hybrid Search
Query → BM25 + Vector → Merge/Rerank → LLM
### Pattern C: Multi-Stage
Query → Vector DB (top-100) → Reranker (top-10) → LLM
### Pattern D: Agent Memory with Trust + Decay
Query → Vector → Score × Trust × Decay → Top-K → Summarize
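Pattern B's merge/rerank step is commonly implemented with Reciprocal Rank Fusion (RRF); a minimal sketch (k=60 is the conventional constant, and the lane contents are made up):

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: each lane contributes 1/(k + rank) per document,
    so documents ranked well in several lanes rise to the top of the fused list."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_lane = ["d1", "d2", "d3"]    # keyword (FTS5/BM25) results
vector_lane = ["d3", "d1", "d4"]  # semantic (embedding) results
print(rrf_fuse([bm25_lane, vector_lane]))  # d1 and d3 fuse to the top
```

Because RRF uses only ranks, it needs no score normalization across lanes, which is why it is a common default for hybrid search.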
---
## 6. Comparison with Current Systems
### session_search (FTS5)
Strengths: Zero deps, no embedding needed, fast for exact keywords
Limitations: No semantic understanding, no cross-lingual, limited ranking
### holographic/retrieval.py (HRR)
Strengths: Compositional queries, contradiction detection, trust + decay
Limitations: Requires numpy, O(n) scan, non-standard embedding space
### Expected Gains from Vector DB:
- Semantic recall: +30-50% for paraphrased queries
- Cross-lingual: +60-80%
- Fuzzy matching: +40-60%
- Conceptual: +50-70%
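The trust and temporal-decay weighting that Pattern D and the HRR system apply on top of raw similarity can be sketched as follows (the half-life value is illustrative, not a Hermes default):

```python
def memory_score(similarity: float, trust: float, age_days: float,
                 half_life_days: float = 30.0) -> float:
    """Combined score: raw vector similarity, damped by source trust and
    an exponential temporal decay whose weight halves every half_life_days."""
    decay = 0.5 ** (age_days / half_life_days)
    return similarity * trust * decay

# a fresher memory can outrank a stale one with higher raw similarity
fresh = memory_score(0.80, trust=1.0, age_days=0)    # 0.80
stale = memory_score(0.95, trust=1.0, age_days=60)   # 0.95 * 0.25 = 0.2375
```

Applying the same weighting to vector-DB hits would preserve the existing system's recency and trust behavior while adding semantic recall.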
---
## 7. Recommendations
### Option 1: Qdrant (RECOMMENDED)
- Best self-hosted performance
- Rust implementation, native multi-vector
- Tradeoff: Separate service deployment
### Option 2: pgvector (CONSERVATIVE)
- Zero new infrastructure if using PostgreSQL
- Tradeoff: 5-10x slower than Qdrant
### Option 3: SQLite-vss (LIGHTWEIGHT)
- Minimal changes, embedded deployment
- Tradeoff: Limited scalability (<100K vectors)
### Option 4: Hybrid (BEST OF BOTH)
Keep FTS5 + HRR and add Qdrant:
- Vector (semantic) + FTS5 (keyword) + HRR (compositional)
- Apply trust scoring + temporal decay
---
## 8. Embedding Models (2025-2026)
| Model | Dimensions | Quality | Cost |
|-------|-----------|---------|------|
| OpenAI text-embedding-3-large | 3072 | Best | $$$ |
| OpenAI text-embedding-3-small | 1536 | Good | $ |
| BGE-M3 | 1024 | Best self-hosted | Free |
| GTE-Qwen2 | 768-1024 | Good | Free |
---
## 9. Hardware Requirements (1M vectors, 768-dim)
| Database | RAM (HNSW) | RAM (Quantized) |
|----------|-----------|-----------------|
| Qdrant | 8-16GB | 2-4GB |
| Milvus | 16-32GB | 4-8GB |
| pgvector | 4-8GB | N/A |
| SQLite-vss | 2-4GB | N/A |
---
## 10. Conclusion
Primary: Qdrant with hybrid search (vector + FTS5 + HRR)
Key insight: Augment existing HRR system, don't replace it.
Next steps:
1. Deploy Qdrant in Docker for testing
2. Benchmark embedding models
3. Implement hybrid search prototype
4. Measure recall improvement
5. Evaluate operational complexity
Report: April 2026 | Sources: ANN-Benchmarks, VectorDBBench, official docs

agent/a2a_mtls.py Normal file

@@ -0,0 +1,443 @@
"""
A2A mutual-TLS server — secure agent-to-agent communication.
Each fleet agent runs an A2A server that:
- Presents its own TLS certificate (signed by the fleet CA).
- Requires the connecting peer to present a valid client certificate
also signed by the fleet CA.
- Rejects connections from unknown / self-signed peers.
Usage (standalone):
python -m agent.a2a_mtls \\
--cert ~/.hermes/pki/agents/timmy/timmy.crt \\
--key ~/.hermes/pki/agents/timmy/timmy.key \\
--ca ~/.hermes/pki/ca/fleet-ca.crt \\
--host 0.0.0.0 --port 9443
Environment variables (alternative to CLI flags):
HERMES_A2A_CERT path to agent certificate
HERMES_A2A_KEY path to agent private key
HERMES_A2A_CA path to fleet CA certificate
Refs #806
"""
from __future__ import annotations
import json
import logging
import os
import ssl
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from pathlib import Path
from typing import Any, Callable, Dict, Optional
from urllib.error import URLError
from urllib.request import Request, urlopen
logger = logging.getLogger(__name__)
# ---------------------------------------------------------------------------
# mTLS SSL context helpers
# ---------------------------------------------------------------------------
def build_server_ssl_context(
cert: str | Path,
key: str | Path,
ca: str | Path,
) -> ssl.SSLContext:
"""Return an SSLContext that presents *cert/key* and requires a valid
client certificate signed by *ca*.
Raises ``FileNotFoundError`` if any path is missing.
Raises ``ssl.SSLError`` if the files are malformed.
"""
cert, key, ca = Path(cert), Path(key), Path(ca)
for p in (cert, key, ca):
if not p.exists():
raise FileNotFoundError(f"mTLS: file not found: {p}")
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.load_cert_chain(certfile=str(cert), keyfile=str(key))
ctx.load_verify_locations(cafile=str(ca))
# CERT_REQUIRED — reject peers that don't present a cert signed by *ca*.
ctx.verify_mode = ssl.CERT_REQUIRED
return ctx
def build_client_ssl_context(
cert: str | Path,
key: str | Path,
ca: str | Path,
) -> ssl.SSLContext:
"""Return an SSLContext for an outgoing mTLS connection.
Presents *cert/key* as the client identity and verifies the server
certificate against *ca*.
"""
cert, key, ca = Path(cert), Path(key), Path(ca)
for p in (cert, key, ca):
if not p.exists():
raise FileNotFoundError(f"mTLS client: file not found: {p}")
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.load_cert_chain(certfile=str(cert), keyfile=str(key))
ctx.load_verify_locations(cafile=str(ca))
ctx.verify_mode = ssl.CERT_REQUIRED
ctx.check_hostname = True
return ctx
# ---------------------------------------------------------------------------
# Minimal A2A HTTP request handler
# ---------------------------------------------------------------------------
class A2AHandler(BaseHTTPRequestHandler):
"""Handles A2A requests over a mutually-authenticated TLS connection.
GET /.well-known/agent-card.json — returns the local agent card.
POST /a2a/task — dispatches an A2A task (stub).
"""
def do_GET(self) -> None: # noqa: N802
if self.path in ("/.well-known/agent-card.json", "/agent-card.json"):
self._serve_agent_card()
else:
self._send_json(404, {"error": "not found"})
def do_POST(self) -> None: # noqa: N802
if self.path == "/a2a/task":
self._handle_task()
else:
self._send_json(404, {"error": "not found"})
# ------------------------------------------------------------------
def _serve_agent_card(self) -> None:
try:
from agent.agent_card import get_agent_card_json
body = get_agent_card_json().encode()
except Exception as exc:
logger.warning("agent-card unavailable: %s", exc)
body = b'{"error": "agent card unavailable"}'
self._send_raw(200, "application/json", body)
def _handle_task(self) -> None:
length = int(self.headers.get("Content-Length", 0))
_body = self.rfile.read(length) if length else b""
# Stub: echo back a 202 Accepted with the peer CN so callers can
# confirm which agent processed the request.
peer_cn = _peer_cn(self.connection)
self._send_json(202, {"status": "accepted", "handled_by": peer_cn})
# ------------------------------------------------------------------
def _send_json(self, code: int, data: dict) -> None:
body = json.dumps(data).encode()
self._send_raw(code, "application/json", body)
def _send_raw(self, code: int, content_type: str, body: bytes) -> None:
self.send_response(code)
self.send_header("Content-Type", content_type)
self.send_header("Content-Length", str(len(body)))
self.end_headers()
self.wfile.write(body)
def log_message(self, fmt: str, *args: object) -> None: # type: ignore[override]
logger.debug("a2a: " + fmt, *args)
def _peer_cn(conn: ssl.SSLSocket) -> Optional[str]:
"""Extract the Common Name from the peer certificate, or None."""
try:
peer = conn.getpeercert()
if not peer:
return None
for rdn in peer.get("subject", ()):
for key, val in rdn:
if key == "commonName":
return val
except Exception:
pass
return None
# ---------------------------------------------------------------------------
# Server lifecycle
# ---------------------------------------------------------------------------
class A2AServer:
"""Mutual-TLS A2A server.
Example::
server = A2AServer(
cert="~/.hermes/pki/agents/timmy/timmy.crt",
key="~/.hermes/pki/agents/timmy/timmy.key",
ca="~/.hermes/pki/ca/fleet-ca.crt",
)
server.start() # non-blocking (daemon thread)
...
server.stop()
"""
def __init__(
self,
cert: str | Path,
key: str | Path,
ca: str | Path,
host: str = "0.0.0.0",
port: int = 9443,
) -> None:
self.cert = Path(cert).expanduser()
self.key = Path(key).expanduser()
self.ca = Path(ca).expanduser()
self.host = host
self.port = port
self._httpd: Optional[HTTPServer] = None
self._thread: Optional[threading.Thread] = None
def start(self, daemon: bool = True) -> None:
"""Start the server in a background thread (default: daemon)."""
ssl_ctx = build_server_ssl_context(self.cert, self.key, self.ca)
self._httpd = HTTPServer((self.host, self.port), A2AHandler)
self._httpd.socket = ssl_ctx.wrap_socket(
self._httpd.socket, server_side=True
)
self._thread = threading.Thread(
target=self._httpd.serve_forever, daemon=daemon
)
self._thread.start()
logger.info(
"A2A mTLS server listening on %s:%s (cert=%s)",
self.host, self.port, self.cert.name,
)
def stop(self) -> None:
if self._httpd:
self._httpd.shutdown()
self._httpd = None
if self._thread:
self._thread.join(timeout=5)
self._thread = None
def server_from_env() -> A2AServer:
"""Build an A2AServer from environment variables / defaults."""
hermes_home = Path(os.environ.get("HERMES_HOME", Path.home() / ".hermes"))
agent_name = os.environ.get("HERMES_AGENT_NAME", "hermes").lower()
default_cert = hermes_home / "pki" / "agents" / agent_name / f"{agent_name}.crt"
default_key = hermes_home / "pki" / "agents" / agent_name / f"{agent_name}.key"
default_ca = hermes_home / "pki" / "ca" / "fleet-ca.crt"
cert = os.environ.get("HERMES_A2A_CERT", str(default_cert))
key = os.environ.get("HERMES_A2A_KEY", str(default_key))
ca = os.environ.get("HERMES_A2A_CA", str(default_ca))
host = os.environ.get("HERMES_A2A_HOST", "0.0.0.0")
port = int(os.environ.get("HERMES_A2A_PORT", "9443"))
return A2AServer(cert=cert, key=key, ca=ca, host=host, port=port)
# ---------------------------------------------------------------------------
# CLI entry point
# ---------------------------------------------------------------------------
def _main() -> None:
import argparse
logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
parser = argparse.ArgumentParser(
description="Hermes A2A mutual-TLS server"
)
parser.add_argument("--cert", required=True, help="Path to agent certificate")
parser.add_argument("--key", required=True, help="Path to agent private key")
parser.add_argument("--ca", required=True, help="Path to fleet CA certificate")
parser.add_argument("--host", default="0.0.0.0")
parser.add_argument("--port", type=int, default=9443)
args = parser.parse_args()
server = A2AServer(
cert=args.cert, key=args.key, ca=args.ca,
host=args.host, port=args.port,
)
server.start(daemon=False)
if __name__ == "__main__":
_main()
# ---------------------------------------------------------------------------
# A2AMTLSServer — routing-based server with context-manager support
# ---------------------------------------------------------------------------
class _RoutingHandler(BaseHTTPRequestHandler):
"""HTTP request handler that dispatches to per-path callables."""
routes: Dict[str, Callable] = {}
def log_message(self, fmt: str, *args: Any) -> None:
logger.debug("A2AMTLSServer: " + fmt, *args)
def _peer_cn(self) -> Optional[str]:
cert = self.connection.getpeercert() # type: ignore[attr-defined]
if not cert:
return None
for rdn in cert.get("subject", ()):
for attr, value in rdn:
if attr == "commonName":
return value
return None
def do_POST(self) -> None:
handler = self.routes.get(self.path)
if handler is None:
self.send_response(404)
self.end_headers()
return
length = int(self.headers.get("Content-Length", 0))
body = self.rfile.read(length) if length else b""
try:
payload = json.loads(body) if body else {}
except json.JSONDecodeError:
self.send_response(400)
self.end_headers()
return
result = handler(payload, peer_cn=self._peer_cn())
self.send_response(200)
self.send_header("Content-Type", "application/json")
self.end_headers()
self.wfile.write(json.dumps(result).encode())
def do_GET(self) -> None:
handler = self.routes.get(self.path)
if handler is None:
self.send_response(404)
self.end_headers()
return
result = handler({}, peer_cn=self._peer_cn())
self.send_response(200)
self.send_header("Content-Type", "application/json")
self.end_headers()
self.wfile.write(json.dumps(result).encode())
class A2AMTLSServer:
"""Routing-based mTLS HTTPS server with context-manager support.
Unlike ``A2AServer`` (which serves fixed A2A paths), this server lets
callers register arbitrary path handlers — useful for tests and custom
A2A endpoint implementations.
handler signature: ``handler(payload: dict, *, peer_cn: str | None) -> dict``
Example::
server = A2AMTLSServer(cert="timmy.crt", key="timmy.key", ca="fleet-ca.crt")
server.add_route("/tasks/send", my_handler)
with server:
... # server runs for the duration of the block
"""
def __init__(
self,
cert: str | Path,
key: str | Path,
ca: str | Path,
host: str = "127.0.0.1",
port: int = 9443,
) -> None:
self.cert = Path(cert).expanduser()
self.key = Path(key).expanduser()
self.ca = Path(ca).expanduser()
self.host = host
self.port = port
self._routes: Dict[str, Callable] = {}
self._httpd: Optional[HTTPServer] = None
self._thread: Optional[threading.Thread] = None
def add_route(self, path: str, handler: Callable) -> None:
self._routes[path] = handler
def start(self) -> None:
ssl_ctx = build_server_ssl_context(self.cert, self.key, self.ca)
class _Handler(_RoutingHandler):
routes = self._routes
self._httpd = HTTPServer((self.host, self.port), _Handler)
self._httpd.socket = ssl_ctx.wrap_socket(self._httpd.socket, server_side=True)
self._thread = threading.Thread(
target=self._httpd.serve_forever,
daemon=True,
name=f"a2a-mtls-{self.port}",
)
self._thread.start()
logger.info("A2AMTLSServer on %s:%d (mTLS)", self.host, self.port)
def stop(self) -> None:
if self._httpd:
self._httpd.shutdown()
self._httpd = None
if self._thread:
self._thread.join(timeout=5)
self._thread = None
def __enter__(self) -> "A2AMTLSServer":
self.start()
return self
def __exit__(self, *_: Any) -> None:
self.stop()
# ---------------------------------------------------------------------------
# A2AMTLSClient — mTLS HTTP client
# ---------------------------------------------------------------------------
class A2AMTLSClient:
"""HTTP client that presents a fleet cert on every outgoing connection.
Example::
client = A2AMTLSClient(cert="allegro.crt", key="allegro.key", ca="fleet-ca.crt")
result = client.post("https://timmy:9443/tasks/send", json={"task": "..."})
"""
def __init__(
self,
cert: str | Path,
key: str | Path,
ca: str | Path,
) -> None:
self._ssl_ctx = build_client_ssl_context(cert, key, ca)
self._ssl_ctx.check_hostname = False # callers connecting by IP
def _request(
self,
method: str,
url: str,
data: Optional[bytes] = None,
timeout: float = 10.0,
) -> Dict[str, Any]:
headers = {"Content-Type": "application/json"}
req = Request(url, data=data, headers=headers, method=method)
try:
with urlopen(req, context=self._ssl_ctx, timeout=timeout) as resp:
body = resp.read()
return json.loads(body) if body else {}
except URLError as exc:
raise ConnectionError(f"A2AMTLSClient {method} {url} failed: {exc.reason}") from exc
def get(self, url: str, **kwargs: Any) -> Dict[str, Any]:
return self._request("GET", url, **kwargs)
def post(self, url: str, json: Optional[Dict[str, Any]] = None, **kwargs: Any) -> Dict[str, Any]:
data = (__import__("json").dumps(json).encode() if json is not None else None)
return self._request("POST", url, data=data, **kwargs)

agent/agent_card.py Normal file

@@ -0,0 +1,135 @@
"""
Agent Card — A2A-compliant agent discovery.
Part of #843: fix: implement A2A agent card for fleet discovery (#819)
Provides metadata about the agent's identity, capabilities, and installed skills
for discovery by other agents in the fleet.
"""
import json
import logging
import os
from dataclasses import asdict, dataclass, field
from pathlib import Path
from typing import Any, Dict, List, Optional
from hermes_cli import __version__
from hermes_cli.config import load_config, get_hermes_home
from agent.skill_utils import (
iter_skill_index_files,
parse_frontmatter,
get_all_skills_dirs,
get_disabled_skill_names,
skill_matches_platform
)
logger = logging.getLogger(__name__)
@dataclass
class AgentSkill:
id: str
name: str
description: str = ""
version: str = "1.0.0"
@dataclass
class AgentCapabilities:
streaming: bool = True
tools: bool = True
vision: bool = False
reasoning: bool = False
@dataclass
class AgentCard:
name: str
description: str
url: str
version: str = __version__
capabilities: AgentCapabilities = field(default_factory=AgentCapabilities)
skills: List[AgentSkill] = field(default_factory=list)
defaultInputModes: List[str] = field(default_factory=lambda: ["text/plain"])
defaultOutputModes: List[str] = field(default_factory=lambda: ["text/plain"])
def _load_skills() -> List[AgentSkill]:
"""Scan all enabled skills and return metadata."""
skills = []
disabled = get_disabled_skill_names()
for skills_dir in get_all_skills_dirs():
if not skills_dir.is_dir():
continue
for skill_file in iter_skill_index_files(skills_dir, "SKILL.md"):
try:
raw = skill_file.read_text(encoding="utf-8")
frontmatter, _ = parse_frontmatter(raw)
except Exception:
continue
skill_name = frontmatter.get("name") or skill_file.parent.name
if str(skill_name) in disabled:
continue
if not skill_matches_platform(frontmatter):
continue
skills.append(AgentSkill(
id=str(skill_name),
name=str(frontmatter.get("name", skill_name)),
description=str(frontmatter.get("description", "")),
version=str(frontmatter.get("version", "1.0.0"))
))
return skills
def build_agent_card() -> AgentCard:
"""Build the agent card from current configuration and environment."""
config = load_config()
# Identity
name = os.environ.get("HERMES_AGENT_NAME") or config.get("agent", {}).get("name") or "hermes"
description = os.environ.get("HERMES_AGENT_DESCRIPTION") or config.get("agent", {}).get("description") or "Sovereign AI agent"
# URL - try to determine from environment or config
port = os.environ.get("HERMES_WEB_PORT") or "9119"
host = os.environ.get("HERMES_WEB_HOST") or "localhost"
url = f"http://{host}:{port}"
# Capabilities
# In a real scenario, we'd check model metadata for vision/reasoning
capabilities = AgentCapabilities(
streaming=True,
tools=True,
vision=False, # Default to false unless we can confirm
reasoning=False
)
# Skills
skills = _load_skills()
return AgentCard(
name=name,
description=description,
url=url,
version=__version__,
capabilities=capabilities,
skills=skills
)
def get_agent_card_json() -> str:
"""Return the agent card as a JSON string."""
try:
card = build_agent_card()
return json.dumps(asdict(card), indent=2)
except Exception as e:
logger.error(f"Failed to build agent card: {e}")
# Minimal fallback card
fallback = {
"name": "hermes",
"description": "Sovereign AI agent (fallback)",
"version": __version__,
"error": str(e)
}
return json.dumps(fallback, indent=2)
def validate_agent_card(card_data: Dict[str, Any]) -> bool:
"""Check if the card data complies with the A2A schema."""
required = ["name", "description", "url", "version"]
return all(k in card_data for k in required)


@@ -1230,9 +1230,10 @@ def build_anthropic_kwargs(
When *base_url* points to a third-party Anthropic-compatible endpoint,
thinking block signatures are stripped (they are Anthropic-proprietary).
When *fast_mode* is True, adds ``speed: "fast"`` and the fast-mode beta
header for ~2.5x faster output throughput on Opus 4.6. Currently only
supported on native Anthropic endpoints (not third-party compatible ones).
When *fast_mode* is True, adds ``extra_body["speed"] = "fast"`` and the
fast-mode beta header for ~2.5x faster output throughput on Opus 4.6.
Currently only supported on native Anthropic endpoints (not third-party
compatible ones).
"""
system, anthropic_messages = convert_messages_to_anthropic(messages, base_url=base_url)
anthropic_tools = convert_tools_to_anthropic(tools) if tools else []
@@ -1333,11 +1334,11 @@ def build_anthropic_kwargs(
kwargs["max_tokens"] = max(effective_max_tokens, budget + 4096)
# ── Fast mode (Opus 4.6 only) ────────────────────────────────────
# Adds speed:"fast" + the fast-mode beta header for ~2.5x output speed.
# Only for native Anthropic endpoints — third-party providers would
# reject the unknown beta header and speed parameter.
# Adds extra_body.speed="fast" + the fast-mode beta header for ~2.5x
# output speed. Only for native Anthropic endpoints — third-party
# providers would reject the unknown beta header and speed parameter.
if fast_mode and not _is_third_party_anthropic_endpoint(base_url):
kwargs["speed"] = "fast"
kwargs.setdefault("extra_body", {})["speed"] = "fast"
# Build extra_headers with ALL applicable betas (the per-request
# extra_headers override the client-level anthropic-beta header).
betas = list(_common_betas_for_base_url(base_url))


@@ -1,4 +1,4 @@
"""Shared auxiliary client router for side tasks.
from agent.telemetry_logger import log_token_usage
"""Shared auxiliary client router for side tasks.
Provides a single resolution chain so every consumer (context compression,
session search, web extraction, vision analysis, browser vision) picks up
@@ -23,18 +23,10 @@ Resolution order for vision/multimodal tasks (auto mode):
6. Custom endpoint (for local vision models: Qwen-VL, LLaVA, Pixtral, etc.)
7. None
Per-task provider overrides (e.g. AUXILIARY_VISION_PROVIDER,
CONTEXT_COMPRESSION_PROVIDER) can force a specific provider for each task.
Per-task overrides are configured in config.yaml under the ``auxiliary:`` section
(e.g. ``auxiliary.vision.provider``, ``auxiliary.compression.model``).
Default "auto" follows the chains above.
Per-task model overrides (e.g. AUXILIARY_VISION_MODEL,
AUXILIARY_WEB_EXTRACT_MODEL) let callers use a different model slug
than the provider's default.
Per-task direct endpoint overrides (e.g. AUXILIARY_VISION_BASE_URL,
AUXILIARY_VISION_API_KEY) let callers route a specific auxiliary task to a
custom OpenAI-compatible endpoint without touching the main model settings.
Payment / credit exhaustion fallback:
When a resolved provider returns HTTP 402 or a credit-related error,
call_llm() automatically retries with the next available provider in the
@@ -72,6 +64,8 @@ _PROVIDER_ALIASES = {
"zhipu": "zai",
"kimi": "kimi-coding",
"moonshot": "kimi-coding",
"kimi-cn": "kimi-coding-cn",
"moonshot-cn": "kimi-coding-cn",
"minimax-china": "minimax-cn",
"minimax_cn": "minimax-cn",
"claude": "anthropic",
@@ -79,13 +73,13 @@ _PROVIDER_ALIASES = {
}
def _normalize_aux_provider(provider: Optional[str], *, for_vision: bool = False) -> str:
def _normalize_aux_provider(provider: Optional[str]) -> str:
normalized = (provider or "auto").strip().lower()
if normalized.startswith("custom:"):
suffix = normalized.split(":", 1)[1].strip()
if not suffix:
return "custom"
normalized = suffix if not for_vision else "custom"
normalized = suffix
if normalized == "codex":
return "openai-codex"
if normalized == "main":
@@ -102,6 +96,7 @@ _API_KEY_PROVIDER_AUX_MODELS: Dict[str, str] = {
"gemini": "gemini-3-flash-preview",
"zai": "glm-4.5-flash",
"kimi-coding": "kimi-k2-turbo-preview",
"kimi-coding-cn": "kimi-k2-turbo-preview",
"minimax": "MiniMax-M2.7",
"minimax-cn": "MiniMax-M2.7",
"anthropic": "claude-haiku-4-5-20251001",
@@ -111,6 +106,15 @@ _API_KEY_PROVIDER_AUX_MODELS: Dict[str, str] = {
"kilocode": "google/gemini-3-flash-preview",
}
# Vision-specific model overrides for direct providers.
# When the user's main provider has a dedicated vision/multimodal model that
# differs from their main chat model, map it here. The vision auto-detect
# "exotic provider" branch checks this before falling back to the main model.
_PROVIDER_VISION_MODELS: Dict[str, str] = {
"xiaomi": "mimo-v2-omni",
"zai": "glm-5v-turbo",
}
# OpenRouter app attribution headers
_OR_HEADERS = {
"HTTP-Referer": "https://hermes-agent.nousresearch.com",
@@ -392,7 +396,7 @@ class _CodexCompletionsAdapter:
prompt_tokens=getattr(resp_usage, "input_tokens", 0),
completion_tokens=getattr(resp_usage, "output_tokens", 0),
total_tokens=getattr(resp_usage, "total_tokens", 0),
)
)
log_token_usage(usage.prompt_tokens, usage.completion_tokens, model)
except Exception as exc:
logger.debug("Codex auxiliary Responses API call failed: %s", exc)
raise
@@ -525,7 +529,7 @@ class _AnthropicCompletionsAdapter:
prompt_tokens=prompt_tokens,
completion_tokens=completion_tokens,
total_tokens=total_tokens,
)
)
log_token_usage(usage.prompt_tokens, usage.completion_tokens, model)
choice = SimpleNamespace(
index=0,
@@ -749,30 +753,6 @@ def _resolve_api_key_provider() -> Tuple[Optional[OpenAI], Optional[str]]:
# ── Provider resolution helpers ─────────────────────────────────────────────
def _get_auxiliary_provider(task: str = "") -> str:
"""Read the provider override for a specific auxiliary task.
Checks AUXILIARY_{TASK}_PROVIDER first (e.g. AUXILIARY_VISION_PROVIDER),
then CONTEXT_{TASK}_PROVIDER (for the compression section's summary_provider),
then falls back to "auto". Returns one of: "auto", "openrouter", "nous", "main".
"""
if task:
for prefix in ("AUXILIARY_", "CONTEXT_"):
val = os.getenv(f"{prefix}{task.upper()}_PROVIDER", "").strip().lower()
if val and val != "auto":
return val
return "auto"
def _get_auxiliary_env_override(task: str, suffix: str) -> Optional[str]:
"""Read an auxiliary env override from AUXILIARY_* or CONTEXT_* prefixes."""
if not task:
return None
for prefix in ("AUXILIARY_", "CONTEXT_"):
val = os.getenv(f"{prefix}{task.upper()}_{suffix}", "").strip()
if val:
return val
return None
def _try_openrouter() -> Tuple[Optional[OpenAI], Optional[str]]:
@@ -1017,6 +997,23 @@ _AUTO_PROVIDER_LABELS = {
_AGGREGATOR_PROVIDERS = frozenset({"openrouter", "nous"})
_MAIN_RUNTIME_FIELDS = ("provider", "model", "base_url", "api_key", "api_mode")
def _normalize_main_runtime(main_runtime: Optional[Dict[str, Any]]) -> Dict[str, str]:
"""Return a sanitized copy of a live main-runtime override."""
if not isinstance(main_runtime, dict):
return {}
normalized: Dict[str, str] = {}
for field in _MAIN_RUNTIME_FIELDS:
value = main_runtime.get(field)
if isinstance(value, str) and value.strip():
normalized[field] = value.strip()
provider = normalized.get("provider")
if provider:
normalized["provider"] = provider.lower()
return normalized
def _get_provider_chain() -> List[tuple]:
"""Return the ordered provider detection chain.
@@ -1126,7 +1123,7 @@ def _try_payment_fallback(
return None, None, ""
def _resolve_auto() -> Tuple[Optional[OpenAI], Optional[str]]:
def _resolve_auto(main_runtime: Optional[Dict[str, Any]] = None) -> Tuple[Optional[OpenAI], Optional[str]]:
"""Full auto-detection chain.
Priority:
@@ -1138,6 +1135,12 @@ def _resolve_auto() -> Tuple[Optional[OpenAI], Optional[str]]:
"""
global auxiliary_is_nous, _stale_base_url_warned
auxiliary_is_nous = False # Reset — _try_nous() will set True if it wins
runtime = _normalize_main_runtime(main_runtime)
runtime_provider = runtime.get("provider", "")
runtime_model = runtime.get("model", "")
runtime_base_url = runtime.get("base_url", "")
runtime_api_key = runtime.get("api_key", "")
runtime_api_mode = runtime.get("api_mode", "")
# ── Warn once if OPENAI_BASE_URL is set but config.yaml uses a named
# provider (not 'custom'). This catches the common "env poisoning"
@@ -1145,7 +1148,7 @@ def _resolve_auto() -> Tuple[Optional[OpenAI], Optional[str]]:
# old OPENAI_BASE_URL lingers in ~/.hermes/.env. ──
if not _stale_base_url_warned:
_env_base = os.getenv("OPENAI_BASE_URL", "").strip()
_cfg_provider = _read_main_provider()
_cfg_provider = runtime_provider or _read_main_provider()
if (_env_base and _cfg_provider
and _cfg_provider != "custom"
and not _cfg_provider.startswith("custom:")):
@@ -1159,12 +1162,25 @@ def _resolve_auto() -> Tuple[Optional[OpenAI], Optional[str]]:
_stale_base_url_warned = True
# ── Step 1: non-aggregator main provider → use main model directly ──
main_provider = _read_main_provider()
main_model = _read_main_model()
main_provider = runtime_provider or _read_main_provider()
main_model = runtime_model or _read_main_model()
if (main_provider and main_model
and main_provider not in _AGGREGATOR_PROVIDERS
and main_provider not in ("auto", "")):
client, resolved = resolve_provider_client(main_provider, main_model)
resolved_provider = main_provider
explicit_base_url = None
explicit_api_key = None
if runtime_base_url and (main_provider == "custom" or main_provider.startswith("custom:")):
resolved_provider = "custom"
explicit_base_url = runtime_base_url
explicit_api_key = runtime_api_key or None
client, resolved = resolve_provider_client(
resolved_provider,
main_model,
explicit_base_url=explicit_base_url,
explicit_api_key=explicit_api_key,
api_mode=runtime_api_mode or None,
)
if client is not None:
logger.info("Auxiliary auto-detect: using main provider %s (%s)",
main_provider, resolved or main_model)
@@ -1208,6 +1224,12 @@ def _to_async_client(sync_client, model: str):
return AsyncCodexAuxiliaryClient(sync_client), model
if isinstance(sync_client, AnthropicAuxiliaryClient):
return AsyncAnthropicAuxiliaryClient(sync_client), model
try:
from agent.copilot_acp_client import CopilotACPClient
if isinstance(sync_client, CopilotACPClient):
return sync_client, model
except ImportError:
pass
async_kwargs = {
"api_key": sync_client.api_key,
@@ -1245,6 +1267,7 @@ def resolve_provider_client(
explicit_base_url: str = None,
explicit_api_key: str = None,
api_mode: str = None,
main_runtime: Optional[Dict[str, Any]] = None,
) -> Tuple[Optional[Any], Optional[str]]:
"""Central router: given a provider name and optional model, return a
configured client with the correct auth, base URL, and API format.
@@ -1315,7 +1338,7 @@ def resolve_provider_client(
# ── Auto: try all providers in priority order ────────────────────
if provider == "auto":
client, resolved = _resolve_auto()
client, resolved = _resolve_auto(main_runtime=main_runtime)
if client is None:
return None, None
# When auto-detection lands on a non-OpenRouter provider (e.g. a
@@ -1425,10 +1448,14 @@ def resolve_provider_client(
custom_entry = _get_named_custom_provider(provider)
if custom_entry:
custom_base = custom_entry.get("base_url", "").strip()
custom_key = custom_entry.get("api_key", "").strip() or "no-key-required"
custom_key = custom_entry.get("api_key", "").strip()
custom_key_env = custom_entry.get("key_env", "").strip()
if not custom_key and custom_key_env:
custom_key = os.getenv(custom_key_env, "").strip()
custom_key = custom_key or "no-key-required"
if custom_base:
final_model = _normalize_resolved_model(
model or _read_main_model() or "gpt-4o-mini",
model or custom_entry.get("model") or _read_main_model() or "gpt-4o-mini",
provider,
)
client = OpenAI(api_key=custom_key, base_url=custom_base)
@@ -1447,7 +1474,11 @@ def resolve_provider_client(
# ── API-key providers from PROVIDER_REGISTRY ─────────────────────
try:
from hermes_cli.auth import PROVIDER_REGISTRY, resolve_api_key_provider_credentials
from hermes_cli.auth import (
PROVIDER_REGISTRY,
resolve_api_key_provider_credentials,
resolve_external_process_provider_credentials,
)
except ImportError:
logger.debug("hermes_cli.auth not available for provider %s", provider)
return None, None
@@ -1521,6 +1552,41 @@ def resolve_provider_client(
return (_to_async_client(client, final_model) if async_mode
else (client, final_model))
if pconfig.auth_type == "external_process":
creds = resolve_external_process_provider_credentials(provider)
final_model = _normalize_resolved_model(model or _read_main_model(), provider)
if provider == "copilot-acp":
api_key = str(creds.get("api_key", "")).strip()
base_url = str(creds.get("base_url", "")).strip()
command = str(creds.get("command", "")).strip() or None
args = list(creds.get("args") or [])
if not final_model:
logger.warning(
"resolve_provider_client: copilot-acp requested but no model "
"was provided or configured"
)
return None, None
if not api_key or not base_url:
logger.warning(
"resolve_provider_client: copilot-acp requested but external "
"process credentials are incomplete"
)
return None, None
from agent.copilot_acp_client import CopilotACPClient
client = CopilotACPClient(
api_key=api_key,
base_url=base_url,
command=command,
args=args,
)
logger.debug("resolve_provider_client: %s (%s)", provider, final_model)
return (_to_async_client(client, final_model) if async_mode
else (client, final_model))
logger.warning("resolve_provider_client: external-process provider %s not "
"directly supported", provider)
return None, None
elif pconfig.auth_type in ("oauth_device_code", "oauth_external"):
# OAuth providers — route through their specific try functions
if provider == "nous":
@@ -1539,15 +1605,19 @@ def resolve_provider_client(
# ── Public API ──────────────────────────────────────────────────────────────
def get_text_auxiliary_client(task: str = "") -> Tuple[Optional[OpenAI], Optional[str]]:
def get_text_auxiliary_client(
task: str = "",
*,
main_runtime: Optional[Dict[str, Any]] = None,
) -> Tuple[Optional[OpenAI], Optional[str]]:
"""Return (client, default_model_slug) for text-only auxiliary tasks.
Args:
task: Optional task name ("compression", "web_extract") to check
for a task-specific provider override.
Callers may override the returned model with a per-task env var
(e.g. CONTEXT_COMPRESSION_MODEL, AUXILIARY_WEB_EXTRACT_MODEL).
Callers may override the returned model via config.yaml
(e.g. auxiliary.compression.model, auxiliary.web_extract.model).
"""
provider, model, base_url, api_key, api_mode = _resolve_task_provider_model(task or None)
return resolve_provider_client(
@@ -1556,10 +1626,11 @@ def get_text_auxiliary_client(task: str = "") -> Tuple[Optional[OpenAI], Optiona
explicit_base_url=base_url,
explicit_api_key=api_key,
api_mode=api_mode,
main_runtime=main_runtime,
)
def get_async_text_auxiliary_client(task: str = ""):
def get_async_text_auxiliary_client(task: str = "", *, main_runtime: Optional[Dict[str, Any]] = None):
"""Return (async_client, model_slug) for async consumers.
For standard providers returns (AsyncOpenAI, model). For Codex returns
@@ -1574,6 +1645,7 @@ def get_async_text_auxiliary_client(task: str = ""):
explicit_base_url=base_url,
explicit_api_key=api_key,
api_mode=api_mode,
main_runtime=main_runtime,
)
@@ -1584,7 +1656,7 @@ _VISION_AUTO_PROVIDER_ORDER = (
def _normalize_vision_provider(provider: Optional[str]) -> str:
return _normalize_aux_provider(provider, for_vision=True)
return _normalize_aux_provider(provider)
def _resolve_strict_vision_backend(provider: str) -> Tuple[Optional[Any], Optional[str]]:
@@ -1667,6 +1739,7 @@ def resolve_vision_provider_client(
async_mode=async_mode,
explicit_base_url=resolved_base_url,
explicit_api_key=resolved_api_key,
api_mode=resolved_api_mode,
)
if client is None:
return "custom", None, None
@@ -1687,16 +1760,19 @@ def resolve_vision_provider_client(
if sync_client is not None:
return _finalize(main_provider, sync_client, default_model)
else:
# Exotic provider (DeepSeek, Alibaba, named custom, etc.)
# Exotic provider (DeepSeek, Alibaba, Xiaomi, named custom, etc.)
# Use provider-specific vision model if available, otherwise main model.
vision_model = _PROVIDER_VISION_MODELS.get(main_provider, main_model)
rpc_client, rpc_model = resolve_provider_client(
main_provider, main_model)
main_provider, vision_model,
api_mode=resolved_api_mode)
if rpc_client is not None:
logger.info(
"Vision auto-detect: using active provider %s (%s)",
main_provider, rpc_model or main_model,
main_provider, rpc_model or vision_model,
)
return _finalize(
main_provider, rpc_client, rpc_model or main_model)
main_provider, rpc_client, rpc_model or vision_model)
# Fall back through aggregators.
for candidate in _VISION_AUTO_PROVIDER_ORDER:
@@ -1713,7 +1789,8 @@ def resolve_vision_provider_client(
sync_client, default_model = _resolve_strict_vision_backend(requested)
return _finalize(requested, sync_client, default_model)
client, final_model = _get_cached_client(requested, resolved_model, async_mode)
client, final_model = _get_cached_client(requested, resolved_model, async_mode,
api_mode=resolved_api_mode)
if client is None:
return requested, None, None
return requested, client, final_model
@@ -1886,6 +1963,7 @@ def _get_cached_client(
base_url: str = None,
api_key: str = None,
api_mode: str = None,
main_runtime: Optional[Dict[str, Any]] = None,
) -> Tuple[Optional[Any], Optional[str]]:
"""Get or create a cached client for the given provider.
@@ -1909,7 +1987,9 @@ def _get_cached_client(
loop_id = id(current_loop)
except RuntimeError:
pass
cache_key = (provider, async_mode, base_url or "", api_key or "", api_mode or "", loop_id)
runtime = _normalize_main_runtime(main_runtime)
runtime_key = tuple(runtime.get(field, "") for field in _MAIN_RUNTIME_FIELDS) if provider == "auto" else ()
cache_key = (provider, async_mode, base_url or "", api_key or "", api_mode or "", loop_id, runtime_key)
with _client_cache_lock:
if cache_key in _client_cache:
cached_client, cached_default, cached_loop = _client_cache[cache_key]
@@ -1934,6 +2014,7 @@ def _get_cached_client(
explicit_base_url=base_url,
explicit_api_key=api_key,
api_mode=api_mode,
main_runtime=runtime,
)
if client is not None:
# For async clients, remember which loop they were created on so we
@@ -1958,9 +2039,8 @@ def _resolve_task_provider_model(
Priority:
1. Explicit provider/model/base_url/api_key args (always win)
2. Env var overrides (AUXILIARY_{TASK}_*, CONTEXT_{TASK}_*)
3. Config file (auxiliary.{task}.* or compression.*)
4. "auto" (full auto-detection chain)
2. Config file (auxiliary.{task}.provider/model/base_url)
3. "auto" (full auto-detection chain)
Returns (provider, model, base_url, api_key, api_mode) where model may
be None (use provider default). When base_url is set, provider is forced
@@ -1991,21 +2071,8 @@ def _resolve_task_provider_model(
cfg_api_key = str(task_config.get("api_key", "")).strip() or None
cfg_api_mode = str(task_config.get("api_mode", "")).strip() or None
# Backwards compat: compression section has its own keys.
# The auxiliary.compression defaults to provider="auto", so treat
# both None and "auto" as "not explicitly configured".
if task == "compression" and (not cfg_provider or cfg_provider == "auto"):
comp = config.get("compression", {}) if isinstance(config, dict) else {}
if isinstance(comp, dict):
cfg_provider = comp.get("summary_provider", "").strip() or None
cfg_model = cfg_model or comp.get("summary_model", "").strip() or None
_sbu = comp.get("summary_base_url") or ""
cfg_base_url = cfg_base_url or _sbu.strip() or None
env_model = _get_auxiliary_env_override(task, "MODEL") if task else None
env_api_mode = _get_auxiliary_env_override(task, "API_MODE") if task else None
resolved_model = model or env_model or cfg_model
resolved_api_mode = env_api_mode or cfg_api_mode
resolved_model = model or cfg_model
resolved_api_mode = cfg_api_mode
if base_url:
return "custom", resolved_model, base_url, api_key, resolved_api_mode
@@ -2013,19 +2080,12 @@ def _resolve_task_provider_model(
return provider, resolved_model, base_url, api_key, resolved_api_mode
if task:
env_base_url = _get_auxiliary_env_override(task, "BASE_URL")
env_api_key = _get_auxiliary_env_override(task, "API_KEY")
if env_base_url:
return "custom", resolved_model, env_base_url, env_api_key or cfg_api_key, resolved_api_mode
env_provider = _get_auxiliary_provider(task)
if env_provider != "auto":
return env_provider, resolved_model, None, None, resolved_api_mode
# Config.yaml is the primary source for per-task overrides.
if cfg_base_url:
return "custom", resolved_model, cfg_base_url, cfg_api_key, resolved_api_mode
if cfg_provider and cfg_provider != "auto":
return cfg_provider, resolved_model, None, None, resolved_api_mode
return "auto", resolved_model, None, None, resolved_api_mode
return "auto", resolved_model, None, None, resolved_api_mode
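The override priority implemented above can be condensed into a small standalone sketch (the function and dict shape below are illustrative, not the actual `_resolve_task_provider_model` signature):

```python
# Hypothetical condensation of the priority chain:
# explicit args > config.yaml per-task section > "auto".
def resolve(task_cfg, provider=None, model=None, base_url=None):
    if base_url:                          # explicit base_url forces "custom"
        return "custom", model, base_url
    if provider:                          # explicit provider always wins
        return provider, model, None
    if task_cfg.get("base_url"):          # config base_url also forces "custom"
        return "custom", model or task_cfg.get("model"), task_cfg["base_url"]
    cfg_provider = task_cfg.get("provider")
    if cfg_provider and cfg_provider != "auto":
        return cfg_provider, model or task_cfg.get("model"), None
    return "auto", model or task_cfg.get("model"), None

print(resolve({"provider": "deepseek", "model": "deepseek-chat"}))
# ('deepseek', 'deepseek-chat', None)
```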
@@ -2054,6 +2114,75 @@ def _get_task_timeout(task: str, default: float = _DEFAULT_AUX_TIMEOUT) -> float
return default
# ---------------------------------------------------------------------------
# Anthropic-compatible endpoint detection + image block conversion
# ---------------------------------------------------------------------------
# Providers that use Anthropic-compatible endpoints (via OpenAI SDK wrapper).
# Their image content blocks must use Anthropic format, not OpenAI format.
_ANTHROPIC_COMPAT_PROVIDERS = frozenset({"minimax", "minimax-cn"})
def _is_anthropic_compat_endpoint(provider: str, base_url: str) -> bool:
"""Detect if an endpoint expects Anthropic-format content blocks.
Returns True for known Anthropic-compatible providers (MiniMax) and
any endpoint whose URL contains ``/anthropic`` in the path.
"""
if provider in _ANTHROPIC_COMPAT_PROVIDERS:
return True
url_lower = (base_url or "").lower()
return "/anthropic" in url_lower
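The detection rule is small enough to restate standalone (a sketch mirroring the logic above, not an import of it):

```python
# Anthropic-compatible endpoints: known providers, or any base URL
# whose path contains "/anthropic".
ANTHROPIC_COMPAT_PROVIDERS = {"minimax", "minimax-cn"}

def is_anthropic_compat(provider: str, base_url: str) -> bool:
    if provider in ANTHROPIC_COMPAT_PROVIDERS:
        return True
    return "/anthropic" in (base_url or "").lower()

print(is_anthropic_compat("minimax", ""))                                 # True
print(is_anthropic_compat("custom", "https://example.com/anthropic/v1"))  # True
print(is_anthropic_compat("openai", "https://api.openai.com/v1"))         # False
```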
def _convert_openai_images_to_anthropic(messages: list) -> list:
"""Convert OpenAI ``image_url`` content blocks to Anthropic ``image`` blocks.
Only touches messages that have list-type content with ``image_url`` blocks;
plain text messages pass through unchanged.
"""
converted = []
for msg in messages:
content = msg.get("content")
if not isinstance(content, list):
converted.append(msg)
continue
new_content = []
changed = False
for block in content:
if block.get("type") == "image_url":
image_url_val = (block.get("image_url") or {}).get("url", "")
if image_url_val.startswith("data:"):
# Parse data URI: data:<media_type>;base64,<data>
header, _, b64data = image_url_val.partition(",")
media_type = "image/png"
if ":" in header and ";" in header:
media_type = header.split(":", 1)[1].split(";", 1)[0]
new_content.append({
"type": "image",
"source": {
"type": "base64",
"media_type": media_type,
"data": b64data,
},
})
else:
# URL-based image
new_content.append({
"type": "image",
"source": {
"type": "url",
"url": image_url_val,
},
})
changed = True
else:
new_content.append(block)
converted.append({**msg, "content": new_content} if changed else msg)
return converted
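For a single block, the conversion above boils down to the following sketch (a standalone restatement, assuming the same `data:<media_type>;base64,<data>` URI layout):

```python
def to_anthropic(block: dict) -> dict:
    # Convert one OpenAI "image_url" content block to the Anthropic "image" shape.
    url = (block.get("image_url") or {}).get("url", "")
    if url.startswith("data:"):
        header, _, b64data = url.partition(",")
        media_type = "image/png"
        if ":" in header and ";" in header:
            media_type = header.split(":", 1)[1].split(";", 1)[0]
        return {"type": "image",
                "source": {"type": "base64", "media_type": media_type, "data": b64data}}
    return {"type": "image", "source": {"type": "url", "url": url}}

blk = {"type": "image_url", "image_url": {"url": "data:image/jpeg;base64,AAAA"}}
print(to_anthropic(blk)["source"]["media_type"])  # image/jpeg
```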
def _build_call_kwargs(
provider: str,
model: str,
@@ -2138,6 +2267,7 @@ def call_llm(
model: str = None,
base_url: str = None,
api_key: str = None,
main_runtime: Optional[Dict[str, Any]] = None,
messages: list,
temperature: float = None,
max_tokens: int = None,
@@ -2172,7 +2302,7 @@ def call_llm(
resolved_provider, resolved_model, resolved_base_url, resolved_api_key, resolved_api_mode = _resolve_task_provider_model(
task, provider, model, base_url, api_key)
if task == "vision":
if task in ("vision", "browser_vision"):
effective_provider, client, final_model = resolve_vision_provider_client(
provider=provider,
model=model,
@@ -2203,6 +2333,7 @@ def call_llm(
base_url=resolved_base_url,
api_key=resolved_api_key,
api_mode=resolved_api_mode,
main_runtime=main_runtime,
)
if client is None:
# When the user explicitly chose a non-OpenRouter provider but no
@@ -2223,7 +2354,7 @@ def call_llm(
if not resolved_base_url:
logger.info("Auxiliary %s: provider %s unavailable, trying auto-detection chain",
task or "call", resolved_provider)
client, final_model = _get_cached_client("auto")
client, final_model = _get_cached_client("auto", main_runtime=main_runtime)
if client is None:
raise RuntimeError(
f"No LLM provider configured for task={task} provider={resolved_provider}. "
@@ -2244,6 +2375,11 @@ def call_llm(
tools=tools, timeout=effective_timeout, extra_body=extra_body,
base_url=resolved_base_url)
# Convert image blocks for Anthropic-compatible endpoints (e.g. MiniMax)
_client_base = str(getattr(client, "base_url", "") or "")
if _is_anthropic_compat_endpoint(resolved_provider, _client_base):
kwargs["messages"] = _convert_openai_images_to_anthropic(kwargs["messages"])
# Handle max_tokens vs max_completion_tokens retry, then payment fallback.
try:
return _validate_llm_response(
@@ -2320,9 +2456,9 @@ def extract_content_or_reasoning(response) -> str:
if content:
# Strip inline think/reasoning blocks (mirrors _strip_think_blocks)
cleaned = re.sub(
r"<(?:think|thinking|reasoning|REASONING_SCRATCHPAD)>"
r"<(?:think|thinking|reasoning|thought|REASONING_SCRATCHPAD)>"
r".*?"
r"</(?:think|thinking|reasoning|REASONING_SCRATCHPAD)>",
r"</(?:think|thinking|reasoning|thought|REASONING_SCRATCHPAD)>",
"", content, flags=re.DOTALL | re.IGNORECASE,
).strip()
if cleaned:
@@ -2432,6 +2568,11 @@ async def async_call_llm(
tools=tools, timeout=effective_timeout, extra_body=extra_body,
base_url=resolved_base_url)
# Convert image blocks for Anthropic-compatible endpoints (e.g. MiniMax)
_client_base = str(getattr(client, "base_url", "") or "")
if _is_anthropic_compat_endpoint(resolved_provider, _client_base):
kwargs["messages"] = _convert_openai_images_to_anthropic(kwargs["messages"])
try:
return _validate_llm_response(
await client.chat.completions.create(**kwargs), task)

agent/circuit_breaker.py

@@ -0,0 +1,273 @@
"""
Circuit Breaker for Error Cascading — #885
P(error | prev was error) = 58.6% vs P(error | prev was success) = 25.2%.
That's a 2.33x cascade factor. After 3 consecutive errors, the circuit
opens and the agent must take corrective action.
States:
- CLOSED: Normal operation, errors are counted
- OPEN: Too many consecutive errors, corrective action required
- HALF_OPEN: Testing if errors have cleared
Usage:
from agent.circuit_breaker import CircuitBreaker, ToolCircuitBreaker
cb = ToolCircuitBreaker()
# After each tool call, record the actual outcome
if not cb.record_result(success=tool_succeeded):
# Circuit is open — take corrective action
action = cb.get_recovery_action()
"""
import time
from dataclasses import dataclass, field
from enum import Enum
from typing import Any, Dict, List, Optional
class CircuitState(Enum):
CLOSED = "closed" # Normal operation
OPEN = "open" # Too many errors, block execution
HALF_OPEN = "half_open" # Testing recovery
@dataclass
class CircuitBreaker:
"""
Generic circuit breaker with configurable thresholds.
Tracks consecutive errors and opens the circuit when the
error streak exceeds the threshold.
"""
failure_threshold: int = 3
recovery_timeout: float = 30.0 # seconds before trying half-open
success_threshold: int = 2 # successes needed to close from half-open
state: CircuitState = field(default=CircuitState.CLOSED, init=False)
consecutive_failures: int = field(default=0, init=False)
consecutive_successes: int = field(default=0, init=False)
last_failure_time: Optional[float] = field(default=None, init=False)
total_trips: int = field(default=0, init=False)
error_streaks: List[int] = field(default_factory=list, init=False)
def record_result(self, success: bool) -> bool:
"""
Record a tool call result. Returns True if circuit allows execution.
Returns:
True if circuit is CLOSED or HALF_OPEN (execution allowed)
False if circuit is OPEN (execution blocked)
"""
now = time.time()
if self.state == CircuitState.OPEN:
# Check if recovery timeout has passed
if self.last_failure_time and (now - self.last_failure_time) >= self.recovery_timeout:
self.state = CircuitState.HALF_OPEN
self.consecutive_successes = 0
return True # Allow one test execution
return False # Still open
if success:
self.consecutive_failures = 0
self.consecutive_successes += 1
if self.state == CircuitState.HALF_OPEN:
if self.consecutive_successes >= self.success_threshold:
self.state = CircuitState.CLOSED
self.consecutive_successes = 0
return True
else:
self.consecutive_successes = 0
self.consecutive_failures += 1
self.last_failure_time = now
if self.state == CircuitState.HALF_OPEN:
# Failed during recovery — reopen immediately
self.state = CircuitState.OPEN
self.total_trips += 1
return False
if self.consecutive_failures >= self.failure_threshold:
self.state = CircuitState.OPEN
self.total_trips += 1
self.error_streaks.append(self.consecutive_failures)
return False
return True
def can_execute(self) -> bool:
"""Check if execution is allowed."""
if self.state == CircuitState.OPEN:
if self.last_failure_time:
now = time.time()
if (now - self.last_failure_time) >= self.recovery_timeout:
self.state = CircuitState.HALF_OPEN
self.consecutive_successes = 0
return True
return False
return True
def get_state(self) -> Dict[str, Any]:
"""Get current circuit state."""
return {
"state": self.state.value,
"consecutive_failures": self.consecutive_failures,
"consecutive_successes": self.consecutive_successes,
"total_trips": self.total_trips,
"max_streak": max(self.error_streaks) if self.error_streaks else 0,
"can_execute": self.can_execute(),
}
def reset(self):
"""Reset the circuit breaker."""
self.state = CircuitState.CLOSED
self.consecutive_failures = 0
self.consecutive_successes = 0
self.last_failure_time = None
class ToolCircuitBreaker(CircuitBreaker):
"""
Circuit breaker specifically for tool call error cascading.
Provides recovery actions when the circuit opens.
"""
# Tools that are most effective at recovery (from audit data)
RECOVERY_TOOLS = [
"terminal", # Most effective — 2300 recoveries
"read_file", # Reset context by reading something
"search_files", # Find what went wrong
]
def get_recovery_action(self) -> Dict[str, Any]:
"""
Get the recommended recovery action when circuit is open.
Returns dict with action type and details.
"""
streak = self.consecutive_failures
if streak >= 9:
# After 9 errors: 41/46 recoveries via terminal
return {
"action": "terminal_only",
"reason": f"Error streak of {streak} — terminal is the only reliable recovery",
"suggested_tool": "terminal",
"suggested_command": "echo 'Resetting context'",
"severity": "critical",
}
elif streak >= 5:
return {
"action": "switch_tool_type",
"reason": f"Error streak of {streak} — switch to a different tool category",
"suggested_tools": ["read_file", "search_files", "terminal"],
"severity": "high",
}
elif streak >= self.failure_threshold:
return {
"action": "ask_user",
"reason": f"{streak} consecutive errors — ask user for guidance",
"suggested_response": "I'm encountering repeated errors. Would you like me to try a different approach?",
"severity": "medium",
}
else:
return {
"action": "continue",
"reason": f"Error streak of {streak} — within tolerance",
"severity": "low",
}
def should_compress_context(self) -> bool:
"""Determine if context compression would help recovery."""
return self.consecutive_failures >= 5
def get_blocked_tool(self) -> Optional[str]:
"""Get the tool that should be blocked (if any)."""
if self.state == CircuitState.OPEN:
return "last_failed_tool"
return None
class MultiToolCircuitBreaker:
"""
Manages per-tool circuit breakers and cross-tool cascade detection.
When one tool trips its breaker, related tools are also warned.
"""
def __init__(self):
self.breakers: Dict[str, ToolCircuitBreaker] = {}
self.global_streak: int = 0
self.last_tool: Optional[str] = None
self.last_success: bool = True
def get_breaker(self, tool_name: str) -> ToolCircuitBreaker:
"""Get or create a circuit breaker for a tool."""
if tool_name not in self.breakers:
self.breakers[tool_name] = ToolCircuitBreaker()
return self.breakers[tool_name]
def record_result(self, tool_name: str, success: bool) -> bool:
"""
Record a tool call result. Returns True if execution should continue.
"""
breaker = self.get_breaker(tool_name)
allowed = breaker.record_result(success)
# Track global streak
if success:
self.global_streak = 0
self.last_success = True
else:
self.global_streak += 1
self.last_success = False
self.last_tool = tool_name
return allowed
def can_execute(self, tool_name: str) -> bool:
"""Check if a specific tool can execute."""
breaker = self.get_breaker(tool_name)
return breaker.can_execute()
def get_global_state(self) -> Dict[str, Any]:
"""Get overall circuit breaker state."""
return {
"global_streak": self.global_streak,
"last_tool": self.last_tool,
"last_success": self.last_success,
"tool_states": {
name: breaker.get_state()
for name, breaker in self.breakers.items()
if breaker.consecutive_failures > 0 or breaker.total_trips > 0
},
"any_open": any(b.state == CircuitState.OPEN for b in self.breakers.values()),
}
def get_recovery_action(self) -> Dict[str, Any]:
"""Get recovery action based on global state."""
if self.global_streak == 0:
return {"action": "continue", "reason": "No errors"}
# Find the breaker with the worst streak
worst = max(self.breakers.values(), key=lambda b: b.consecutive_failures, default=None)
if worst and worst.consecutive_failures > 0:
return worst.get_recovery_action()
return {
"action": "continue",
"reason": f"Global streak: {self.global_streak}",
"severity": "low",
}
def reset_all(self):
"""Reset all circuit breakers."""
for breaker in self.breakers.values():
breaker.reset()
self.global_streak = 0
self.last_success = True
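The CLOSED -> OPEN -> HALF_OPEN transition can be exercised end to end with a condensed stand-in (a sketch of the same threshold/timeout core as `CircuitBreaker`, not the class itself; a short `recovery_timeout` keeps the demo fast):

```python
import time

class MiniBreaker:
    """Condensed breaker: N consecutive failures open the circuit;
    after recovery_timeout one probe call is allowed (half-open)."""
    def __init__(self, failure_threshold=3, recovery_timeout=0.05):
        self.failure_threshold = failure_threshold
        self.recovery_timeout = recovery_timeout
        self.failures = 0
        self.open_since = None

    def record(self, success: bool) -> bool:
        if self.open_since is not None:
            if time.time() - self.open_since < self.recovery_timeout:
                return False              # still OPEN: execution blocked
            self.open_since = None        # timeout passed: HALF_OPEN probe
        if success:
            self.failures = 0
            return True
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.open_since = time.time()
            return False                  # circuit trips OPEN
        return True

cb = MiniBreaker()
results = [cb.record(False) for _ in range(3)]
print(results)                 # [True, True, False]
time.sleep(0.06)               # wait out the recovery window
print(cb.record(True))         # True
```

A failure during the half-open probe re-trips immediately, since the failure count is still at the threshold, matching the reopen-on-probe-failure behavior above.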

agent/context_budget.py

@@ -0,0 +1,148 @@
"""
Context Budget Tracker - Prevent context window overflow
Poka-yoke: Visual warnings at 70%, 85%, 95% capacity.
Auto-checkpoint at 85%. Pre-flight token estimation.
Issue: #838
"""
import json
import logging
import time
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple
logger = logging.getLogger(__name__)
HERMES_HOME = Path.home() / ".hermes"
CHECKPOINT_DIR = HERMES_HOME / "checkpoints"
CHARS_PER_TOKEN = 4
THRESHOLD_WARNING = 0.70
THRESHOLD_CRITICAL = 0.85
THRESHOLD_DANGER = 0.95
class ContextBudget:
def __init__(self, context_limit: int = 128000, system_tokens: int = 0,
used_tokens: int = 0, reserved_tokens: int = 2000):
self.context_limit = context_limit
self.system_tokens = system_tokens
self.used_tokens = used_tokens
self.reserved_tokens = reserved_tokens
@property
def total_used(self) -> int:
return self.system_tokens + self.used_tokens
@property
def available(self) -> int:
return max(0, self.context_limit - self.reserved_tokens)
@property
def remaining(self) -> int:
return max(0, self.available - self.total_used)
@property
def utilization(self) -> float:
return self.total_used / self.available if self.available > 0 else 1.0
def estimate_tokens(text: str) -> int:
return len(text) // CHARS_PER_TOKEN if text else 0
def estimate_messages_tokens(messages: List[Dict]) -> int:
total = 0
for msg in messages:
content = msg.get("content", "")
if isinstance(content, str):
total += estimate_tokens(content)
if msg.get("tool_calls"):
total += 100
return total
class ContextBudgetTracker:
def __init__(self, context_limit: int = 128000, session_id: str = ""):
self.budget = ContextBudget(context_limit=context_limit)
self.session_id = session_id
self._checkpointed = False
self._warnings_given = set()
def update_from_messages(self, messages: List[Dict]):
self.budget.used_tokens = estimate_messages_tokens(messages)
def can_fit(self, additional_tokens: int) -> bool:
return self.budget.remaining >= additional_tokens
def preflight_check(self, text: str) -> Tuple[bool, str]:
tokens = estimate_tokens(text)
if not self.can_fit(tokens):
return False, f"Cannot load: ~{tokens:,} tokens needed, {self.budget.remaining:,} remaining"
would_util = (self.budget.total_used + tokens) / self.budget.available if self.budget.available > 0 else 1.0
if would_util >= THRESHOLD_DANGER:
return False, f"Would reach {would_util:.0%} capacity. Summarize or start new session."
if would_util >= THRESHOLD_CRITICAL:
return True, f"Warning: will reach {would_util:.0%} capacity."
return True, ""
def get_warning(self) -> Optional[str]:
util = self.budget.utilization
if util >= THRESHOLD_DANGER and "danger" not in self._warnings_given:
self._warnings_given.add("danger")
return f"[CONTEXT CRITICAL: {util:.0%} used -- {self.budget.remaining:,} tokens left. Summarize or start new session.]"
if util >= THRESHOLD_CRITICAL and "critical" not in self._warnings_given:
self._warnings_given.add("critical")
self._auto_checkpoint()
return f"[CONTEXT WARNING: {util:.0%} used -- consider summarizing. Auto-checkpoint saved.]"
if util >= THRESHOLD_WARNING and "warning" not in self._warnings_given:
self._warnings_given.add("warning")
return f"[CONTEXT: {util:.0%} used -- {self.budget.remaining:,} tokens remaining]"
return None
def _auto_checkpoint(self):
if self._checkpointed or not self.session_id:
return
try:
CHECKPOINT_DIR.mkdir(parents=True, exist_ok=True)
path = CHECKPOINT_DIR / f"{self.session_id}.json"
path.write_text(json.dumps({
"session_id": self.session_id,
"timestamp": time.time(),
"budget": {"utilization": round(self.budget.utilization * 100, 1)}
}, indent=2))
self._checkpointed = True
logger.info("Auto-checkpoint saved: %s", path)
except Exception as e:
logger.error("Auto-checkpoint failed: %s", e)
def get_status_line(self) -> str:
util = self.budget.utilization
remaining = self.budget.remaining
if util >= THRESHOLD_DANGER:
return f"RED {util:.0%} used ({remaining:,} left)"
elif util >= THRESHOLD_CRITICAL:
return f"ORANGE {util:.0%} used ({remaining:,} left)"
elif util >= THRESHOLD_WARNING:
return f"YELLOW {util:.0%} used ({remaining:,} left)"
return f"GREEN {util:.0%} used ({remaining:,} left)"
_tracker = None
def get_tracker(context_limit=128000, session_id=""):
"""Module-level singleton; arguments only take effect on the first call."""
global _tracker
if _tracker is None:
_tracker = ContextBudgetTracker(context_limit, session_id)
return _tracker
def check_context_budget(messages, context_limit=128000):
tracker = get_tracker(context_limit)
tracker.update_from_messages(messages)
return tracker.get_warning()
def preflight_token_check(text):
tracker = get_tracker()
return tracker.preflight_check(text)
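The budget arithmetic above (4 chars per token, with the reserve subtracted from the limit before computing utilization) is easy to sanity-check in isolation; the numbers below are illustrative:

```python
CHARS_PER_TOKEN = 4

def estimate_tokens(text: str) -> int:
    return len(text) // CHARS_PER_TOKEN if text else 0

def utilization(used: int, limit: int = 128000, reserved: int = 2000) -> float:
    available = max(0, limit - reserved)
    return used / available if available > 0 else 1.0

used = estimate_tokens("x" * 400_000)  # ~100k tokens of history
print(used)                            # 100000
print(round(utilization(used), 3))     # 0.794, past the 0.70 warning line
```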


@@ -4,8 +4,12 @@ Self-contained class with its own OpenAI client for summarization.
Uses auxiliary model (cheap/fast) to summarize middle turns while
protecting head and tail context.
Improvements over v1:
- Structured summary template (Goal, Progress, Decisions, Files, Next Steps)
Improvements over v2:
- Structured summary template with Resolved/Pending question tracking
- Summarizer preamble: "Do not respond to any questions" (from OpenCode)
- Handoff framing: "different assistant" (from Codex) to create separation
- "Remaining Work" replaces "Next Steps" to avoid reading as active instructions
- Clear separator when summary merges into tail message
- Iterative summary updates (preserves info across multiple compactions)
- Token-budget tail protection instead of fixed message count
- Tool output pruning before LLM summarization (cheap pre-pass)
@@ -13,13 +17,17 @@ Improvements over v1:
- Richer tool call/result detail in summarizer input
"""
import hashlib
import json
import logging
import re
import time
from typing import Any, Dict, List, Optional
from agent.auxiliary_client import call_llm
from agent.context_engine import ContextEngine
from agent.model_metadata import (
MINIMUM_CONTEXT_LENGTH,
get_model_context_length,
estimate_messages_tokens_rough,
)
@@ -27,12 +35,13 @@ from agent.model_metadata import (
logger = logging.getLogger(__name__)
SUMMARY_PREFIX = (
"[CONTEXT COMPACTION] Earlier turns in this conversation were compacted "
"to save context space. The summary below describes work that was "
"already completed, and the current session state may still reflect "
"that work (for example, files may already be changed). Use the summary "
"and the current state to continue from where things left off, and "
"avoid repeating work:"
"[CONTEXT COMPACTION — REFERENCE ONLY] Earlier turns were compacted "
"into the summary below. This is a handoff from a previous context "
"window — treat it as background reference, NOT as active instructions. "
"Do NOT answer questions or fulfill requests mentioned in this summary; "
"they were already addressed. Respond ONLY to the latest user message "
"that appears AFTER this summary. The current session state (files, "
"config, etc.) may reflect work described here — avoid repeating it:"
)
LEGACY_SUMMARY_PREFIX = "[CONTEXT SUMMARY]:"
@@ -51,6 +60,128 @@ _CHARS_PER_TOKEN = 4
_SUMMARY_FAILURE_COOLDOWN_SECONDS = 600
def _summarize_tool_result(tool_name: str, tool_args: str, tool_content: str) -> str:
"""Create an informative 1-line summary of a tool call + result.
Used during the pre-compression pruning pass to replace large tool
outputs with a short but useful description of what the tool did,
rather than a generic placeholder that carries zero information.
Returns strings like::
[terminal] ran `npm test` -> exit 0, 47 lines output
[read_file] read config.py from line 1 (1,200 chars)
[search_files] content search for 'compress' in agent/ -> 12 matches
"""
try:
args = json.loads(tool_args) if tool_args else {}
except (json.JSONDecodeError, TypeError):
args = {}
content = tool_content or ""
content_len = len(content)
line_count = content.count("\n") + 1 if content.strip() else 0
if tool_name == "terminal":
cmd = args.get("command", "")
if len(cmd) > 80:
cmd = cmd[:77] + "..."
exit_match = re.search(r'"exit_code"\s*:\s*(-?\d+)', content)
exit_code = exit_match.group(1) if exit_match else "?"
return f"[terminal] ran `{cmd}` -> exit {exit_code}, {line_count} lines output"
if tool_name == "read_file":
path = args.get("path", "?")
offset = args.get("offset", 1)
return f"[read_file] read {path} from line {offset} ({content_len:,} chars)"
if tool_name == "write_file":
path = args.get("path", "?")
written_lines = args.get("content", "").count("\n") + 1 if args.get("content") else "?"
return f"[write_file] wrote to {path} ({written_lines} lines)"
if tool_name == "search_files":
pattern = args.get("pattern", "?")
path = args.get("path", ".")
target = args.get("target", "content")
match_count = re.search(r'"total_count"\s*:\s*(\d+)', content)
count = match_count.group(1) if match_count else "?"
return f"[search_files] {target} search for '{pattern}' in {path} -> {count} matches"
if tool_name == "patch":
path = args.get("path", "?")
mode = args.get("mode", "replace")
return f"[patch] {mode} in {path} ({content_len:,} chars result)"
if tool_name in ("browser_navigate", "browser_click", "browser_snapshot",
"browser_type", "browser_scroll", "browser_vision"):
url = args.get("url", "")
ref = args.get("ref", "")
detail = f" {url}" if url else (f" ref={ref}" if ref else "")
return f"[{tool_name}]{detail} ({content_len:,} chars)"
if tool_name == "web_search":
query = args.get("query", "?")
return f"[web_search] query='{query}' ({content_len:,} chars result)"
if tool_name == "web_extract":
urls = args.get("urls", [])
url_desc = urls[0] if isinstance(urls, list) and urls else "?"
if isinstance(urls, list) and len(urls) > 1:
url_desc += f" (+{len(urls) - 1} more)"
return f"[web_extract] {url_desc} ({content_len:,} chars)"
if tool_name == "delegate_task":
goal = args.get("goal", "")
if len(goal) > 60:
goal = goal[:57] + "..."
return f"[delegate_task] '{goal}' ({content_len:,} chars result)"
if tool_name == "execute_code":
code_preview = (args.get("code") or "")[:60].replace("\n", " ")
if len(args.get("code", "")) > 60:
code_preview += "..."
return f"[execute_code] `{code_preview}` ({line_count} lines output)"
if tool_name in ("skill_view", "skills_list", "skill_manage"):
name = args.get("name", "?")
return f"[{tool_name}] name={name} ({content_len:,} chars)"
if tool_name == "vision_analyze":
question = args.get("question", "")[:50]
return f"[vision_analyze] '{question}' ({content_len:,} chars)"
if tool_name == "memory":
action = args.get("action", "?")
target = args.get("target", "?")
return f"[memory] {action} on {target}"
if tool_name == "todo":
return "[todo] updated task list"
if tool_name == "clarify":
return "[clarify] asked user a question"
if tool_name == "text_to_speech":
return f"[text_to_speech] generated audio ({content_len:,} chars)"
if tool_name == "cronjob":
action = args.get("action", "?")
return f"[cronjob] {action}"
if tool_name == "process":
action = args.get("action", "?")
sid = args.get("session_id", "?")
return f"[process] {action} session={sid}"
# Generic fallback
first_arg = ""
for k, v in list(args.items())[:2]:
sv = str(v)[:40]
first_arg += f" {k}={sv}"
return f"[{tool_name}]{first_arg} ({content_len:,} chars result)"
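The dispatch above can be exercised in isolation. A minimal stand-alone sketch of the `terminal` branch (same defensive argument parsing, 80-char command cap, and `exit_code` regex as the function above):

```python
import json
import re

def summarize_terminal(tool_args: str, tool_content: str) -> str:
    # Parse the tool-call arguments defensively, as the pruning pass does.
    try:
        args = json.loads(tool_args) if tool_args else {}
    except (json.JSONDecodeError, TypeError):
        args = {}
    cmd = args.get("command", "")
    if len(cmd) > 80:
        cmd = cmd[:77] + "..."
    content = tool_content or ""
    line_count = content.count("\n") + 1 if content.strip() else 0
    exit_match = re.search(r'"exit_code"\s*:\s*(-?\d+)', content)
    exit_code = exit_match.group(1) if exit_match else "?"
    return f"[terminal] ran `{cmd}` -> exit {exit_code}, {line_count} lines output"
```

The summary carries real signal (command, exit status, output size) at a fixed, tiny cost — which is the whole point of the pre-compression pruning pass.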
class ContextCompressor(ContextEngine):
"""Default context engine — compresses conversation context via lossy summarization.
@@ -72,6 +203,8 @@ class ContextCompressor(ContextEngine):
self._context_probed = False
self._context_probe_persistable = False
self._previous_summary = None
self._last_compression_savings_pct = 100.0
self._ineffective_compression_count = 0
def update_model(
self,
@@ -80,14 +213,19 @@ class ContextCompressor(ContextEngine):
base_url: str = "",
api_key: str = "",
provider: str = "",
api_mode: str = "",
) -> None:
"""Update model info after a model switch or fallback activation."""
self.model = model
self.base_url = base_url
self.api_key = api_key
self.provider = provider
self.api_mode = api_mode
self.context_length = context_length
self.threshold_tokens = max(
int(context_length * self.threshold_percent),
MINIMUM_CONTEXT_LENGTH,
)
def __init__(
self,
@@ -102,11 +240,13 @@ class ContextCompressor(ContextEngine):
api_key: str = "",
config_context_length: int | None = None,
provider: str = "",
api_mode: str = "",
):
self.model = model
self.base_url = base_url
self.api_key = api_key
self.provider = provider
self.api_mode = api_mode
self.threshold_percent = threshold_percent
self.protect_first_n = protect_first_n
self.protect_last_n = protect_last_n
@@ -118,7 +258,14 @@ class ContextCompressor(ContextEngine):
config_context_length=config_context_length,
provider=provider,
)
# Floor: never compress below MINIMUM_CONTEXT_LENGTH tokens even if
# the percentage would suggest a lower value. This prevents premature
# compression on large-context models at 50% while keeping the % sane
# for models right at the minimum.
self.threshold_tokens = max(
int(self.context_length * threshold_percent),
MINIMUM_CONTEXT_LENGTH,
)
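The floor logic above can be sketched as a pure function. `MINIMUM_CONTEXT_LENGTH` is defined elsewhere in the module; the 16,000-token value here is an assumed stand-in for illustration:

```python
# Assumed value for illustration — the real constant lives elsewhere in the module.
MINIMUM_CONTEXT_LENGTH = 16_000

def compression_threshold(context_length: int, threshold_percent: float) -> int:
    # A 50% threshold on a 200K model fires at 100K tokens; on a small model,
    # the percentage alone could fall below a usable floor, so clamp upward.
    return max(int(context_length * threshold_percent), MINIMUM_CONTEXT_LENGTH)
```

This keeps the percentage meaningful on large-context models while preventing premature compression on models sized right at the minimum.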
self.compression_count = 0
# Derive token budgets: ratio is relative to the threshold, not total context
@@ -147,6 +294,9 @@ class ContextCompressor(ContextEngine):
# Stores the previous compaction summary for iterative updates
self._previous_summary: Optional[str] = None
# Anti-thrashing: track whether last compression was effective
self._last_compression_savings_pct: float = 100.0
self._ineffective_compression_count: int = 0
self._summary_failure_cooldown_until: float = 0.0
def update_from_response(self, usage: Dict[str, Any]):
@@ -155,9 +305,26 @@ class ContextCompressor(ContextEngine):
self.last_completion_tokens = usage.get("completion_tokens", 0)
def should_compress(self, prompt_tokens: int = None) -> bool:
"""Check if context exceeds the compression threshold.
Includes anti-thrashing protection: if the last two compressions
each saved less than 10%, skip compression to avoid infinite loops
where each pass removes only 1-2 messages.
"""
tokens = prompt_tokens if prompt_tokens is not None else self.last_prompt_tokens
if tokens < self.threshold_tokens:
return False
# Anti-thrashing: back off if recent compressions were ineffective
if self._ineffective_compression_count >= 2:
if not self.quiet_mode:
logger.warning(
"Compression skipped — last %d compressions saved <10%% each. "
"Consider /new to start a fresh session, or /compress <topic> "
"for focused compression.",
self._ineffective_compression_count,
)
return False
return True
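The anti-thrashing backoff reduces to a small state machine: two consecutive low-savings compressions trip the guard, and one effective compression resets it. A stand-alone sketch (names are illustrative, not the module's own):

```python
class ThrashGuard:
    """Minimal sketch of the anti-thrashing backoff used by should_compress."""

    def __init__(self, min_savings_pct: float = 10.0, max_ineffective: int = 2):
        self.min_savings_pct = min_savings_pct
        self.max_ineffective = max_ineffective
        self.ineffective_count = 0

    def record(self, savings_pct: float) -> None:
        # An effective compression resets the counter; a weak one increments it.
        if savings_pct < self.min_savings_pct:
            self.ineffective_count += 1
        else:
            self.ineffective_count = 0

    def allow_compression(self) -> bool:
        return self.ineffective_count < self.max_ineffective
```

Without this guard, a session where each pass removes only one or two messages would compress on every turn, burning summarizer calls for no real gain.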
# ------------------------------------------------------------------
# Tool output pruning (cheap pre-pass, no LLM call)
@@ -167,7 +334,16 @@ class ContextCompressor(ContextEngine):
self, messages: List[Dict[str, Any]], protect_tail_count: int,
protect_tail_tokens: int | None = None,
) -> tuple[List[Dict[str, Any]], int]:
"""Replace old tool result contents with informative 1-line summaries.
Instead of a generic placeholder, generates a summary like::
[terminal] ran `npm test` -> exit 0, 47 lines output
[read_file] read config.py from line 1 (3,400 chars)
Also deduplicates identical tool results (e.g. reading the same file
5x keeps only the newest full copy) and truncates large tool_call
arguments in assistant messages outside the protected tail.
Walks backward from the end, protecting the most recent messages that
fall within ``protect_tail_tokens`` (when provided) OR the last
@@ -183,6 +359,22 @@ class ContextCompressor(ContextEngine):
result = [m.copy() for m in messages]
pruned = 0
# Build index: tool_call_id -> (tool_name, arguments_json)
call_id_to_tool: Dict[str, tuple] = {}
for msg in result:
if msg.get("role") == "assistant":
for tc in msg.get("tool_calls") or []:
if isinstance(tc, dict):
cid = tc.get("id", "")
fn = tc.get("function", {})
call_id_to_tool[cid] = (fn.get("name", "unknown"), fn.get("arguments", ""))
else:
cid = getattr(tc, "id", "") or ""
fn = getattr(tc, "function", None)
name = getattr(fn, "name", "unknown") if fn else "unknown"
args_str = getattr(fn, "arguments", "") if fn else ""
call_id_to_tool[cid] = (name, args_str)
# Determine the prune boundary
if protect_tail_tokens is not None and protect_tail_tokens > 0:
# Token-budget approach: walk backward accumulating tokens
@@ -191,7 +383,8 @@ class ContextCompressor(ContextEngine):
min_protect = min(protect_tail_count, len(result) - 1)
for i in range(len(result) - 1, -1, -1):
msg = result[i]
raw_content = msg.get("content") or ""
content_len = sum(len(p.get("text", "")) for p in raw_content) if isinstance(raw_content, list) else len(raw_content)
msg_tokens = content_len // _CHARS_PER_TOKEN + 10
for tc in msg.get("tool_calls") or []:
if isinstance(tc, dict):
@@ -206,18 +399,69 @@ class ContextCompressor(ContextEngine):
else:
prune_boundary = len(result) - protect_tail_count
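The token-budget variant of the boundary walk can be sketched roughly as follows. This is a simplified reconstruction under the module's ~4-chars-per-token estimate (the hunk above is truncated, so exact semantics are assumed):

```python
CHARS_PER_TOKEN = 4  # rough estimate used throughout the module

def prune_boundary(messages: list[str], protect_tail_tokens: int, min_protect: int) -> int:
    """Walk backward accumulating rough token counts. Everything before the
    returned index is eligible for pruning; the tail stays intact."""
    budget = 0
    boundary = len(messages)
    for i in range(len(messages) - 1, -1, -1):
        budget += len(messages[i]) // CHARS_PER_TOKEN + 10  # +10 covers per-message overhead
        boundary = i
        if budget >= protect_tail_tokens and len(messages) - i >= min_protect:
            break
    return boundary
```

Budgeting in tokens rather than a fixed message count means a tail of many tiny messages is protected more deeply than a tail of a few huge ones.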
# Pass 1: Deduplicate identical tool results.
# When the same file is read multiple times, keep only the most recent
# full copy and replace older duplicates with a back-reference.
content_hashes: dict = {} # hash -> (index, tool_call_id)
for i in range(len(result) - 1, -1, -1):
msg = result[i]
if msg.get("role") != "tool":
continue
content = msg.get("content") or ""
# Skip multimodal content (list of content blocks)
if isinstance(content, list):
continue
if len(content) < 200:
continue
h = hashlib.md5(content.encode("utf-8", errors="replace")).hexdigest()[:12]
if h in content_hashes:
# This is an older duplicate — replace with back-reference
result[i] = {**msg, "content": "[Duplicate tool output — same content as a more recent call]"}
pruned += 1
else:
content_hashes[h] = (i, msg.get("tool_call_id", "?"))
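Pass 1 in miniature: walking newest-to-oldest means the first occurrence of each hash is the most recent copy, so it survives while older duplicates become back-references. A self-contained sketch over plain strings:

```python
import hashlib

DUPLICATE_NOTE = "[Duplicate tool output — same content as a more recent call]"

def dedupe_tool_outputs(outputs: list[str]) -> list[str]:
    """Keep only the newest full copy of each identical payload."""
    seen: set[str] = set()
    result = list(outputs)
    for i in range(len(result) - 1, -1, -1):
        if len(result[i]) < 200:  # small results are not worth deduplicating
            continue
        digest = hashlib.md5(result[i].encode("utf-8", errors="replace")).hexdigest()[:12]
        if digest in seen:
            result[i] = DUPLICATE_NOTE
        else:
            seen.add(digest)
    return result
```

Reading the same file five times thus costs one full copy plus four one-liners, instead of five full copies.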
# Pass 2: Replace old tool results with informative summaries
for i in range(prune_boundary):
msg = result[i]
if msg.get("role") != "tool":
continue
content = msg.get("content", "")
# Skip multimodal content (list of content blocks)
if isinstance(content, list):
continue
if not content or content == _PRUNED_TOOL_PLACEHOLDER:
continue
# Skip already-deduplicated or previously-summarized results
if content.startswith("[Duplicate tool output"):
continue
# Only prune if the content is substantial (>200 chars)
if len(content) > 200:
call_id = msg.get("tool_call_id", "")
tool_name, tool_args = call_id_to_tool.get(call_id, ("unknown", ""))
summary = _summarize_tool_result(tool_name, tool_args, content)
result[i] = {**msg, "content": summary}
pruned += 1
# Pass 3: Truncate large tool_call arguments in assistant messages
# outside the protected tail. write_file with 50KB content, for
# example, survives pruning entirely without this.
for i in range(prune_boundary):
msg = result[i]
if msg.get("role") != "assistant" or not msg.get("tool_calls"):
continue
new_tcs = []
modified = False
for tc in msg["tool_calls"]:
if isinstance(tc, dict):
args = tc.get("function", {}).get("arguments", "")
if len(args) > 500:
tc = {**tc, "function": {**tc["function"], "arguments": args[:200] + "...[truncated]"}}
modified = True
new_tcs.append(tc)
if modified:
result[i] = {**msg, "tool_calls": new_tcs}
return result, pruned
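Pass 3 reduces to a single truncation helper. A sketch with assumed default limits (the 500/200 values mirror the code above):

```python
def truncate_tool_call_args(args_json: str, limit: int = 500, keep: int = 200) -> str:
    """Cut oversized tool_call arguments — e.g. a 50 KB write_file payload —
    down to a short prefix. Applied only outside the protected tail."""
    if len(args_json) > limit:
        return args_json[:keep] + "...[truncated]"
    return args_json
```

Without this pass, large arguments embedded in assistant messages would survive pruning entirely, since passes 1 and 2 only touch tool-result messages.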
# ------------------------------------------------------------------
@@ -295,13 +539,20 @@ class ContextCompressor(ContextEngine):
return "\n\n".join(parts)
def _generate_summary(self, turns_to_summarize: List[Dict[str, Any]], focus_topic: str = None) -> Optional[str]:
"""Generate a structured summary of conversation turns.
Uses a structured template (Goal, Progress, Decisions, Resolved/Pending
Questions, Files, Remaining Work) with explicit preamble telling the
summarizer not to answer questions. When a previous summary exists,
generates an iterative update instead of summarizing from scratch.
Args:
focus_topic: Optional focus string for guided compression. When
provided, the summariser prioritises preserving information
related to this topic and is more aggressive about compressing
everything else. Inspired by Claude Code's ``/compact``.
Returns None if all attempts fail — the caller should drop
the middle turns without a summary rather than inject a useless
placeholder.
@@ -317,9 +568,75 @@ class ContextCompressor(ContextEngine):
summary_budget = self._compute_summary_budget(turns_to_summarize)
content_to_summarize = self._serialize_for_summary(turns_to_summarize)
# Preamble shared by both first-compaction and iterative-update prompts.
# Inspired by OpenCode's "do not respond to any questions" instruction
# and Codex's "another language model" framing.
_summarizer_preamble = (
"You are a summarization agent creating a context checkpoint. "
"Your output will be injected as reference material for a DIFFERENT "
"assistant that continues the conversation. "
"Do NOT respond to any questions or requests in the conversation — "
"only output the structured summary. "
"Do NOT include any preamble, greeting, or prefix."
)
# Shared structured template (used by both paths).
_template_sections = f"""## Goal
[What the user is trying to accomplish]
## Constraints & Preferences
[User preferences, coding style, constraints, important decisions]
## Completed Actions
[Numbered list of concrete actions taken — include tool used, target, and outcome.
Format each as: N. ACTION target — outcome [tool: name]
Example:
1. READ config.py:45 — found `==` should be `!=` [tool: read_file]
2. PATCH config.py:45 — changed `==` to `!=` [tool: patch]
3. TEST `pytest tests/` — 3/50 failed: test_parse, test_validate, test_edge [tool: terminal]
Be specific with file paths, commands, line numbers, and results.]
## Active State
[Current working state — include:
- Working directory and branch (if applicable)
- Modified/created files with brief note on each
- Test status (X/Y passing)
- Any running processes or servers
- Environment details that matter]
## In Progress
[Work currently underway — what was being done when compaction fired]
## Blocked
[Any blockers, errors, or issues not yet resolved. Include exact error messages.]
## Key Decisions
[Important technical decisions and WHY they were made]
## Resolved Questions
[Questions the user asked that were ALREADY answered — include the answer so the next assistant does not re-answer them]
## Pending User Asks
[Questions or requests from the user that have NOT yet been answered or fulfilled. If none, write "None."]
## Relevant Files
[Files read, modified, or created — with brief note on each]
## Remaining Work
[What remains to be done — framed as context, not instructions]
## Critical Context
[Any specific values, error messages, configuration details, or data that would be lost without explicit preservation]
Target ~{summary_budget} tokens. Be CONCRETE — include file paths, command outputs, error messages, line numbers, and specific values. Avoid vague descriptions like "made some changes" — say exactly what changed.
Write only the summary body. Do not include any preamble or prefix."""
if self._previous_summary:
# Iterative update: preserve existing info, add new progress
prompt = f"""{_summarizer_preamble}
You are updating a context compaction summary. A previous compaction produced the summary below. New conversation turns have occurred since then and need to be incorporated.
PREVIOUS SUMMARY:
{self._previous_summary}
@@ -327,87 +644,42 @@ PREVIOUS SUMMARY:
NEW TURNS TO INCORPORATE:
{content_to_summarize}
Update the summary using this exact structure. PRESERVE all existing information that is still relevant. ADD new completed actions to the numbered list (continue numbering). Move items from "In Progress" to "Completed Actions" when done. Move answered questions to "Resolved Questions". Update "Active State" to reflect current state. Remove information only if it is clearly obsolete.
{_template_sections}"""
else:
# First compaction: summarize from scratch
prompt = f"""{_summarizer_preamble}
Create a structured handoff summary for a different assistant that will continue this conversation after earlier turns are compacted. The next assistant should be able to understand what happened without re-reading the original turns.
TURNS TO SUMMARIZE:
{content_to_summarize}
{_template_sections}"""
# Inject focus topic guidance when the user provides one via /compress <focus>.
# This goes at the end of the prompt so it takes precedence.
if focus_topic:
prompt += f"""
FOCUS TOPIC: "{focus_topic}"
The user has requested that this compaction PRIORITISE preserving all information related to the focus topic above. For content related to "{focus_topic}", include full detail — exact values, file paths, command outputs, error messages, and decisions. For content NOT related to the focus topic, summarise more aggressively (brief one-liners or omit if truly irrelevant). The focus topic sections should receive roughly 60-70% of the summary token budget."""
try:
call_kwargs = {
"task": "compression",
"main_runtime": {
"model": self.model,
"provider": self.provider,
"base_url": self.base_url,
"api_key": self.api_key,
"api_mode": self.api_mode,
},
"messages": [{"role": "user", "content": prompt}],
"max_tokens": int(summary_budget * 1.3),
# timeout resolved from auxiliary.compression.timeout config by call_llm
}
if self.summary_model:
@@ -421,8 +693,10 @@ Write only the summary body. Do not include any preamble or prefix."""
# Store for iterative updates on next compaction
self._previous_summary = summary
self._summary_failure_cooldown_until = 0.0
self._summary_model_fallen_back = False
return self._with_summary_prefix(summary)
except RuntimeError:
# No provider configured — long cooldown, unlikely to self-resolve
self._summary_failure_cooldown_until = time.monotonic() + _SUMMARY_FAILURE_COOLDOWN_SECONDS
logging.warning("Context compression: no provider available for "
"summary. Middle turns will be dropped without summary "
@@ -430,12 +704,42 @@ Write only the summary body. Do not include any preamble or prefix."""
_SUMMARY_FAILURE_COOLDOWN_SECONDS)
return None
except Exception as e:
self._summary_failure_cooldown_until = time.monotonic() + _SUMMARY_FAILURE_COOLDOWN_SECONDS
# If the summary model is different from the main model and the
# error looks permanent (model not found, 503, 404), fall back to
# using the main model instead of entering cooldown that leaves
# context growing unbounded. (#8620 sub-issue 4)
_status = getattr(e, "status_code", None) or getattr(getattr(e, "response", None), "status_code", None)
_err_str = str(e).lower()
_is_model_not_found = (
_status in (404, 503)
or "model_not_found" in _err_str
or "does not exist" in _err_str
or "no available channel" in _err_str
)
if (
_is_model_not_found
and self.summary_model
and self.summary_model != self.model
and not getattr(self, "_summary_model_fallen_back", False)
):
self._summary_model_fallen_back = True
logging.warning(
"Summary model '%s' not available (%s). "
"Falling back to main model '%s' for compression.",
self.summary_model, e, self.model,
)
self.summary_model = "" # empty = use main model
self._summary_failure_cooldown_until = 0.0 # no cooldown
            return self._generate_summary(turns_to_summarize, focus_topic=focus_topic)  # retry immediately
# Transient errors (timeout, rate limit, network) — shorter cooldown
_transient_cooldown = 60
self._summary_failure_cooldown_until = time.monotonic() + _transient_cooldown
logging.warning(
"Failed to generate context summary: %s. "
"Further summary attempts paused for %d seconds.",
e,
_transient_cooldown,
)
return None
@@ -620,7 +924,7 @@ Write only the summary body. Do not include any preamble or prefix."""
# Main compression entry point
# ------------------------------------------------------------------
def compress(self, messages: List[Dict[str, Any]], current_tokens: int = None, focus_topic: str = None) -> List[Dict[str, Any]]:
"""Compress conversation messages by summarizing middle turns.
Algorithm:
@@ -632,6 +936,12 @@ Write only the summary body. Do not include any preamble or prefix."""
After compression, orphaned tool_call / tool_result pairs are cleaned
up so the API never receives mismatched IDs.
Args:
focus_topic: Optional focus string for guided compression. When
provided, the summariser will prioritise preserving information
related to this topic and be more aggressive about compressing
everything else. Inspired by Claude Code's ``/compact``.
"""
n_messages = len(messages)
# Only need head + 3 tail messages minimum (token budget decides the real tail size)
@@ -689,17 +999,17 @@ Write only the summary body. Do not include any preamble or prefix."""
)
# Phase 3: Generate structured summary
summary = self._generate_summary(turns_to_summarize, focus_topic=focus_topic)
# Phase 4: Assemble compressed message list
compressed = []
for i in range(compress_start):
msg = messages[i].copy()
if i == 0 and msg.get("role") == "system":
existing = msg.get("content") or ""
_compression_note = "[Note: Some earlier conversation turns have been compacted into a handoff summary to preserve context space. The current session state may still reflect earlier work, so build on that summary and state rather than re-doing work.]"
if _compression_note not in existing:
msg["content"] = existing + "\n\n" + _compression_note
compressed.append(msg)
# If LLM summary failed, insert a static fallback so the model
@@ -744,7 +1054,12 @@ Write only the summary body. Do not include any preamble or prefix."""
msg = messages[i].copy()
if _merge_summary_into_tail and i == compress_end:
original = msg.get("content") or ""
msg["content"] = (
summary
+ "\n\n--- END OF CONTEXT SUMMARY — "
"respond to the message below, not the summary above ---\n\n"
+ original
)
_merge_summary_into_tail = False
compressed.append(msg)
@@ -752,14 +1067,24 @@ Write only the summary body. Do not include any preamble or prefix."""
compressed = self._sanitize_tool_pairs(compressed)
new_estimate = estimate_messages_tokens_rough(compressed)
saved_estimate = display_tokens - new_estimate
# Anti-thrashing: track compression effectiveness
savings_pct = (saved_estimate / display_tokens * 100) if display_tokens > 0 else 0
self._last_compression_savings_pct = savings_pct
if savings_pct < 10:
self._ineffective_compression_count += 1
else:
self._ineffective_compression_count = 0
if not self.quiet_mode:
logger.info(
"Compressed: %d -> %d messages (~%d tokens saved, %.0f%%)",
n_messages,
len(compressed),
saved_estimate,
savings_pct,
)
logger.info("Compression #%d complete", self.compression_count)
View File
@@ -26,7 +26,7 @@ Lifecycle:
"""
from abc import ABC, abstractmethod
from typing import Any, Dict, List
class ContextEngine(ABC):
View File
@@ -1,214 +0,0 @@
"""Context-RAG Decision Framework — adaptive retrieval based on context pressure.
With models that have 128K-1M token context windows, always prefetching from
RAG is wasteful when context is mostly empty and insufficient when context is
nearly full. This module provides a strategy layer that adapts prefetch behavior
to remaining context budget.
Strategies:
- stuff: Context < 30% used → prefetch aggressively, load more facts.
The model has room to reason over everything directly.
- hybrid: Context 30-70% used → prefetch selectively with standard limits.
Key facts in context, rest available via tool calls.
- selective: Context > 70% used → only prefetch on high-signal queries.
Tighter limits; defer to on-demand tool retrieval.
The framework is deliberately simple — it's a decision heuristic, not a neural
router. Simplicity means reliability at the edge cases that matter (crisis
intervention, long debugging sessions, multi-hour research).
References:
- Long Context vs RAG Decision Framework (Timmy Foundation research backlog #4.3)
- Self-RAG: Learning to Retrieve, Generate, and Critique (arxiv 2310.11511)
- FrugalGPT: How to Use Large Language Models While Reducing Cost (arxiv 2305.05176)
"""
from __future__ import annotations
import logging
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional
logger = logging.getLogger(__name__)
class ContextStrategy(Enum):
"""Which retrieval strategy to use given current context pressure."""
STUFF = "stuff" # < 30% context used — load everything relevant
HYBRID = "hybrid" # 30-70% — standard RAG with moderate limits
SELECTIVE = "selective" # > 70% — minimal prefetch, defer to tools
@dataclass
class ContextBudget:
"""Snapshot of the current context state.
Populated by run_agent.py from the context compressor's tracked state.
Passed to MemoryManager.prefetch_all() to drive adaptive retrieval.
"""
context_length: int = 0 # Model's max context window (tokens)
used_tokens: int = 0 # Tokens consumed so far
threshold_tokens: int = 0 # Compression fires at this level
compression_enabled: bool = True # Whether auto-compression is on
@property
def pressure(self) -> float:
"""Context pressure as a ratio [0.0, 1.0+].
0.0 = empty context, 1.0 = at compression threshold.
Can exceed 1.0 if we've blown past the threshold.
"""
if self.context_length <= 0:
return 0.0
# Use threshold (not raw context_length) as the "full" mark,
# since compression fires at threshold, not at context_length.
denom = self.threshold_tokens if self.threshold_tokens > 0 else self.context_length
return self.used_tokens / denom if denom > 0 else 0.0
@property
def strategy(self) -> ContextStrategy:
"""Select retrieval strategy based on context pressure."""
p = self.pressure
if p < 0.30:
return ContextStrategy.STUFF
elif p < 0.70:
return ContextStrategy.HYBRID
else:
return ContextStrategy.SELECTIVE
@property
def label(self) -> str:
"""Human-readable label for logging/display."""
return self.strategy.value
# Default fact limits per strategy
# These are multiplier on the base limit (default 5 facts per provider).
_STRATEGY_LIMIT_MULTIPLIERS = {
ContextStrategy.STUFF: 3, # 15 facts — we have room, load generously
ContextStrategy.HYBRID: 1, # 5 facts — standard
ContextStrategy.SELECTIVE: 0.4, # 2 facts — save context for the model
}
# Minimum trust score threshold per strategy.
# Higher pressure = require higher trust to reduce noise in tight context.
_STRATEGY_MIN_TRUST = {
ContextStrategy.STUFF: 0.2, # Low bar — cast a wide net
ContextStrategy.HYBRID: 0.3, # Standard
ContextStrategy.SELECTIVE: 0.5, # Only high-confidence facts
}
def compute_prefetch_params(
budget: ContextBudget,
base_limit: int = 5,
base_min_trust: float = 0.3,
) -> dict:
"""Compute prefetch parameters based on context pressure.
Returns dict with:
- limit: int — max facts to retrieve
- min_trust: float — minimum trust score
- strategy: ContextStrategy — which strategy was selected
- skip: bool — if True, skip prefetch entirely (extreme pressure)
"""
strategy = budget.strategy
# At extreme pressure (>95%), skip prefetch entirely —
# the model needs every token for the current conversation.
if budget.pressure > 0.95:
logger.debug(
"Context pressure %.1f%% > 95%% — skipping prefetch entirely",
budget.pressure * 100,
)
return {
"limit": 0,
"min_trust": 1.0,
"strategy": strategy,
"skip": True,
}
multiplier = _STRATEGY_LIMIT_MULTIPLIERS.get(strategy, 1.0)
min_trust = _STRATEGY_MIN_TRUST.get(strategy, base_min_trust)
# Don't let limit go below 1 (always try to get at least something)
limit = max(1, int(base_limit * multiplier))
logger.debug(
"Context strategy=%s pressure=%.1f%% limit=%d min_trust=%.1f",
strategy.value,
budget.pressure * 100,
limit,
min_trust,
)
return {
"limit": limit,
"min_trust": min_trust,
"strategy": strategy,
"skip": False,
}
def should_prefetch(budget: ContextBudget, query: str) -> bool:
"""Decide whether to prefetch at all for this query + context state.
Rules:
- Always prefetch when pressure is low (< 50%) — we have room.
- At medium pressure (50-80%), only prefetch if the query looks like
it needs memory (mentions people, projects, past work).
- At high pressure (>80%), skip prefetch unless query is very short
(short queries often need recall, long queries don't).
"""
pressure = budget.pressure
if pressure < 0.50:
return True
# Medium pressure: heuristic on query needing memory
query_lower = query.lower() if query else ""
memory_signals = [
"remember", "recall", "what did", "who is", "last time",
"previously", "before", "fact_store", "memory", "told you",
"mentioned", "said", "project", "config", "setup",
]
has_memory_signal = any(sig in query_lower for sig in memory_signals)
if pressure < 0.80:
return has_memory_signal
# High pressure: only prefetch for very short memory-seeking queries
return has_memory_signal and len(query) < 200
def build_strategy_report(budget: ContextBudget) -> str:
"""Build a human-readable report of the current context strategy.
For logging and debug display.
"""
params = compute_prefetch_params(budget)
strategy = params["strategy"]
pressure_pct = budget.pressure * 100
lines = [
f"Context Strategy: {strategy.value.upper()}",
f" Pressure: {pressure_pct:.1f}%",
f" Used: {budget.used_tokens:,} / {budget.context_length:,} tokens",
f" Threshold: {budget.threshold_tokens:,} tokens",
f" Prefetch limit: {params['limit']} facts",
f" Min trust: {params['min_trust']:.1f}",
f" Skip prefetch: {params['skip']}",
]
# Add recommendations
if strategy == ContextStrategy.STUFF:
lines.append(" → Context is mostly empty. Prefetching generously.")
elif strategy == ContextStrategy.HYBRID:
lines.append(" → Context moderately full. Standard retrieval.")
else:
lines.append(" → Context is tight. Minimal prefetch, prefer on-demand tools.")
return "\n".join(lines)


@@ -18,12 +18,12 @@ import hermes_cli.auth as auth_mod
from hermes_cli.auth import (
CODEX_ACCESS_TOKEN_REFRESH_SKEW_SECONDS,
DEFAULT_AGENT_KEY_MIN_TTL_SECONDS,
KIMI_CODE_BASE_URL,
PROVIDER_REGISTRY,
_auth_store_lock,
_codex_access_token_is_expiring,
_decode_jwt_claims,
_import_codex_cli_tokens,
_write_codex_cli_tokens,
_load_auth_store,
_load_provider_state,
_resolve_kimi_base_url,
@@ -288,6 +288,14 @@ def _iter_custom_providers(config: Optional[dict] = None):
return
custom_providers = config.get("custom_providers")
if not isinstance(custom_providers, list):
# Fall back to the v12+ providers dict via the compatibility layer
try:
from hermes_cli.config import get_compatible_custom_providers
custom_providers = get_compatible_custom_providers(config)
except Exception:
return
if not custom_providers:
return
for entry in custom_providers:
if not isinstance(entry, dict):
@@ -693,6 +701,14 @@ class CredentialPool:
self._replace_entry(synced, updated)
self._persist()
self._sync_device_code_entry_to_auth_store(updated)
try:
_write_codex_cli_tokens(
updated.access_token,
updated.refresh_token,
last_refresh=updated.last_refresh,
)
except Exception as wexc:
logger.debug("Failed to write refreshed Codex tokens to CLI file (retry): %s", wexc)
return updated
except Exception as retry_exc:
logger.debug("Codex retry refresh also failed: %s", retry_exc)
@@ -718,6 +734,17 @@ class CredentialPool:
# _seed_from_singletons() on the next load_pool() sees fresh state
# instead of re-seeding stale/consumed tokens.
self._sync_device_code_entry_to_auth_store(updated)
# Write refreshed tokens back to ~/.codex/auth.json so Codex CLI
# and VS Code don't hit "refresh_token_reused" on their next refresh.
if self.provider == "openai-codex":
try:
_write_codex_cli_tokens(
updated.access_token,
updated.refresh_token,
last_refresh=updated.last_refresh,
)
except Exception as wexc:
logger.debug("Failed to write refreshed Codex tokens to CLI file: %s", wexc)
return updated
def _entry_needs_refresh(self, entry: PooledCredential) -> bool:
@@ -1125,9 +1152,79 @@ def _seed_from_singletons(provider: str, entries: List[PooledCredential]) -> Tup
},
)
elif provider == "copilot":
# Copilot tokens are resolved dynamically via `gh auth token` or
# env vars (COPILOT_GITHUB_TOKEN / GH_TOKEN). They don't live in
# the auth store or credential pool, so we resolve them here.
try:
from hermes_cli.copilot_auth import resolve_copilot_token
token, source = resolve_copilot_token()
if token:
source_name = "gh_cli" if "gh" in source.lower() else f"env:{source}"
active_sources.add(source_name)
changed |= _upsert_entry(
entries,
provider,
source_name,
{
"source": source_name,
"auth_type": AUTH_TYPE_API_KEY,
"access_token": token,
"label": source,
},
)
except Exception as exc:
logger.debug("Copilot token seed failed: %s", exc)
elif provider == "qwen-oauth":
# Qwen OAuth tokens live in ~/.qwen/oauth_creds.json, written by
# the Qwen CLI (`qwen auth qwen-oauth`). They aren't in the
# Hermes auth store or env vars, so resolve them here.
# Use refresh_if_expiring=False to avoid network calls during
# pool loading / provider discovery.
try:
from hermes_cli.auth import resolve_qwen_runtime_credentials
creds = resolve_qwen_runtime_credentials(refresh_if_expiring=False)
token = creds.get("api_key", "")
if token:
source_name = creds.get("source", "qwen-cli")
active_sources.add(source_name)
changed |= _upsert_entry(
entries,
provider,
source_name,
{
"source": source_name,
"auth_type": AUTH_TYPE_OAUTH,
"access_token": token,
"expires_at_ms": creds.get("expires_at_ms"),
"base_url": creds.get("base_url", ""),
"label": creds.get("auth_file", source_name),
},
)
except Exception as exc:
logger.debug("Qwen OAuth token seed failed: %s", exc)
elif provider == "openai-codex":
state = _load_provider_state(auth_store, "openai-codex")
tokens = state.get("tokens") if isinstance(state, dict) else None
# Fallback: import from Codex CLI (~/.codex/auth.json) if Hermes auth
# store has no tokens. This mirrors resolve_codex_runtime_credentials()
# so that load_pool() and list_authenticated_providers() detect tokens
# that only exist in the Codex CLI shared file.
if not (isinstance(tokens, dict) and tokens.get("access_token")):
try:
from hermes_cli.auth import _import_codex_cli_tokens, _save_codex_tokens
cli_tokens = _import_codex_cli_tokens()
if cli_tokens:
logger.info("Importing Codex CLI tokens into Hermes auth store.")
_save_codex_tokens(cli_tokens)
# Re-read state after import
auth_store = _load_auth_store()
state = _load_provider_state(auth_store, "openai-codex")
tokens = state.get("tokens") if isinstance(state, dict) else None
except Exception as exc:
logger.debug("Codex CLI token import failed: %s", exc)
if isinstance(tokens, dict) and tokens.get("access_token"):
active_sources.add("device_code")
changed |= _upsert_entry(

agent/crisis_resources.py (new file, 149 lines)

@@ -0,0 +1,149 @@
"""
988 Suicide & Crisis Lifeline Integration (#673).
When crisis is detected, provides immediate access to help:
- Phone: 988 (call or text)
- Text: Text HOME to 988
- Chat: 988lifeline.org/chat
- Spanish: 1-888-628-9454
- Emergency: 911
This module provides the resource data and formats it for display;
agent/crisis_protocol.py handles detection.
"""
from dataclasses import dataclass
from typing import List
@dataclass
class CrisisResource:
"""A crisis support contact method."""
name: str
contact: str
description: str
url: str = ""
available: str = "24/7"
language: str = "English"
# 988 Suicide & Crisis Lifeline — all channels
LIFELINE_988 = CrisisResource(
name="988 Suicide and Crisis Lifeline",
contact="Call or text 988",
description="Free, confidential support for people in suicidal crisis or emotional distress.",
url="https://988lifeline.org",
available="24/7",
language="English",
)
LIFELINE_988_TEXT = CrisisResource(
name="988 Crisis Text Line",
contact="Text HOME to 988",
description="Free, 24/7 crisis support via text message.",
url="",
available="24/7",
language="English",
)
LIFELINE_988_CHAT = CrisisResource(
name="988 Lifeline Chat",
contact="988lifeline.org/chat",
description="Free, confidential online chat with a trained crisis counselor.",
url="https://988lifeline.org/chat",
available="24/7",
language="English",
)
LIFELINE_988_SPANISH = CrisisResource(
name="988 Lifeline (Spanish)",
contact="1-888-628-9454",
description="Línea de prevención del suicidio en español.",
url="https://988lifeline.org/help-yourself/en-espanol/",
available="24/7",
language="Spanish",
)
CRISIS_TEXT_LINE = CrisisResource(
name="Crisis Text Line",
contact="Text HOME to 741741",
description="Free, 24/7 crisis support via text message.",
url="https://www.crisistextline.org",
available="24/7",
language="English",
)
EMERGENCY_911 = CrisisResource(
name="Emergency Services",
contact="911",
description="Immediate danger — police, fire, ambulance.",
url="",
available="24/7",
language="Any",
)
# All resources in priority order
ALL_RESOURCES: List[CrisisResource] = [
EMERGENCY_911,
LIFELINE_988,
LIFELINE_988_TEXT,
LIFELINE_988_CHAT,
CRISIS_TEXT_LINE,
LIFELINE_988_SPANISH,
]
def get_crisis_resources(language: str = None) -> List[CrisisResource]:
"""Get crisis resources, optionally filtered by language.
Args:
language: Filter by language ("English", "Spanish", or None for all)
Returns:
List of CrisisResource objects
"""
if language:
return [r for r in ALL_RESOURCES if r.language.lower() == language.lower()]
return ALL_RESOURCES
def format_crisis_resources(resources: List[CrisisResource] = None) -> str:
"""Format crisis resources as a user-facing message.
Args:
resources: List of resources to format. Defaults to all resources.
Returns:
Formatted string suitable for displaying to a user in crisis.
"""
if resources is None:
resources = ALL_RESOURCES
lines = ["**Please reach out — help is available right now:**
"]
for r in resources:
if r.url:
lines.append(f"- **{r.name}:** {r.contact} ({r.url})")
else:
lines.append(f"- **{r.name}:** {r.contact}")
lines.append("")
lines.append("All services are free, confidential, and available 24/7.")
lines.append("You are not alone.")
return "
".join(lines)
def get_immediate_help_message() -> str:
"""Get the most urgent crisis help message.
Used when crisis is detected at CRITICAL level.
"""
return (
"If you are in immediate danger, call **911** right now.
"
+ format_crisis_resources()
)
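The language filter in `get_crisis_resources()` can be illustrated with a self-contained sketch (dataclass fields and resource data abbreviated; `by_language` is a hypothetical local name):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CrisisResource:
    name: str
    contact: str
    language: str = "English"

RESOURCES: List[CrisisResource] = [
    CrisisResource("988 Suicide and Crisis Lifeline", "Call or text 988"),
    CrisisResource("988 Lifeline (Spanish)", "1-888-628-9454", language="Spanish"),
    CrisisResource("Emergency Services", "911", language="Any"),
]

def by_language(language: Optional[str] = None) -> List[CrisisResource]:
    # Case-insensitive match, as in get_crisis_resources(); None returns all.
    if language:
        return [r for r in RESOURCES if r.language.lower() == language.lower()]
    return RESOURCES

assert len(by_language()) == 3
assert by_language("spanish")[0].contact == "1-888-628-9454"
```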


@@ -4,7 +4,6 @@ Pure display functions and classes with no AIAgent dependency.
Used by AIAgent._execute_tool_calls for CLI feedback.
"""
import json
import logging
import os
import sys
@@ -14,6 +13,8 @@ from dataclasses import dataclass, field
from difflib import unified_diff
from pathlib import Path
from utils import safe_json_loads
# ANSI escape codes for coloring tool failure indicators
_RED = "\033[31m"
_RESET = "\033[0m"
@@ -76,12 +77,6 @@ def _diff_ansi() -> dict[str, str]:
return _diff_colors_cached
def reset_diff_colors() -> None:
"""Reset cached diff colors (call after /skin switch)."""
global _diff_colors_cached
_diff_colors_cached = None
# Module-level helpers — each call resolves from the active skin lazily.
def _diff_dim(): return _diff_ansi()["dim"]
def _diff_file(): return _diff_ansi()["file"]
@@ -372,9 +367,8 @@ def _result_succeeded(result: str | None) -> bool:
"""Conservatively detect whether a tool result represents success."""
if not result:
return False
try:
data = json.loads(result)
except (json.JSONDecodeError, TypeError):
data = safe_json_loads(result)
if data is None:
return False
if not isinstance(data, dict):
return False
@@ -423,10 +417,7 @@ def extract_edit_diff(
) -> str | None:
"""Extract a unified diff from a file-edit tool result."""
if tool_name == "patch" and result:
try:
data = json.loads(result)
except (json.JSONDecodeError, TypeError):
data = None
data = safe_json_loads(result)
if isinstance(data, dict):
diff = data.get("diff")
if isinstance(diff, str) and diff.strip():
@@ -780,23 +771,19 @@ def _detect_tool_failure(tool_name: str, result: str | None) -> tuple[bool, str]
return False, ""
if tool_name == "terminal":
try:
data = json.loads(result)
data = safe_json_loads(result)
if isinstance(data, dict):
exit_code = data.get("exit_code")
if exit_code is not None and exit_code != 0:
return True, f" [exit {exit_code}]"
except (json.JSONDecodeError, TypeError, AttributeError):
logger.debug("Could not parse terminal result as JSON for exit code check")
return False, ""
# Memory-specific: distinguish "full" from real errors
if tool_name == "memory":
try:
data = json.loads(result)
data = safe_json_loads(result)
if isinstance(data, dict):
if data.get("success") is False and "exceed the limit" in data.get("error", ""):
return True, " [full]"
except (json.JSONDecodeError, TypeError, AttributeError):
logger.debug("Could not parse memory result as JSON for capacity check")
# Generic heuristic for non-terminal tools
lower = result[:500].lower()


@@ -13,7 +13,6 @@ from __future__ import annotations
import enum
import logging
import re
from dataclasses import dataclass, field
from typing import Any, Dict, Optional
@@ -157,6 +156,18 @@ _CONTEXT_OVERFLOW_PATTERNS = [
"prompt exceeds max length",
"max_tokens",
"maximum number of tokens",
# vLLM / local inference server patterns
"exceeds the max_model_len",
"max_model_len",
"prompt length", # "engine prompt length X exceeds"
"input is too long",
"maximum model length",
# Ollama patterns
"context length exceeded",
"truncating input",
# llama.cpp / llama-server patterns
"slot context", # "slot context: N tokens, prompt N tokens"
"n_ctx_slot",
# Chinese error messages (some providers return these)
"超过最大长度",
"上下文长度",

agent/input_sanitizer.py (new file, 635 lines)
View File

@@ -0,0 +1,635 @@
"""
Input Sanitizer for Jailbreak Pattern Detection
This module provides input sanitization to detect and strip jailbreak fingerprint
patterns as identified in Issue #72 (Red Team Audit).
Security Findings Addressed:
1. HIGH - OG GODMODE template bypassed phishing refusal
2. MEDIUM - boundary_inversion works for gray-area content
3. LOW - Spaced text bypass (k e y l o g g e r)
Usage:
from agent.input_sanitizer import sanitize_input, score_input_risk
cleaned, risk_score, detected = sanitize_input(user_input)
if risk_score > 75:
logger.warning("High-risk input detected and blocked")
"""
import re
import logging
from typing import Tuple, List, Dict, Any, Optional
from dataclasses import dataclass
from enum import IntEnum
logger = logging.getLogger(__name__)
class RiskLevel(IntEnum):
"""Risk score thresholds for input classification."""
LOW = 25
MEDIUM = 50
HIGH = 75
CRITICAL = 90
@dataclass
class SanitizationResult:
"""Result of input sanitization."""
original_text: str
cleaned_text: str
risk_score: int
detected_patterns: List[str]
risk_level: str
blocked: bool
# =============================================================================
# JAILBREAK PATTERN DEFINITIONS
# =============================================================================
# GODMODE and similar jailbreak templates
GODMODE_PATTERNS = [
r"\[START\]",
r"\[END\]",
r"GODMODE\s*(?:ENABLED|ACTIVATED|ON)?",
r"UNFILTERED\s*(?:MODE|OUTPUT)?",
r"DEVMODE\s*(?:ENABLED|ACTIVATED|ON)?",
r"JAILBREAK\s*(?:ENABLED|ACTIVATED|ON)?",
r"DAN\s*(?:MODE|ENABLED)?", # Do Anything Now
r"STAN\s*(?:MODE|ENABLED)?", # Strive To Avoid Norms
r"DUDE\s*(?:MODE|ENABLED)?", # Do Unlimited Data Extraction
]
# DAN-style and jailbreak variant patterns (Issue #87)
DAN_PATTERNS = [
r"\bdo anything now\b",
r"\bstay in character\b",
r"\btoken smuggl(?:ing|e)\b",
r"\bjailbreak successful\b",
r"\bmode:\s*uncensored\b",
r"\bdeveloper mode enabled\b",
]
# Roleplaying override patterns (Issue #87)
ROLEPLAY_PATTERNS = [
r"\broleplay\s+as\b",
r"\bact\s+as\s+if\s+you\s+are\b",
r"\bsimulate\s+being\b",
r"\bforget\s+you\s+are\s+(?:an?\s+)?(?:ai|language\s+model)\b",
r"\byou\s+are\s+now\s+(?:named|called)\b",
r"\brespond\s+as\s+(?:if\s+you\s+were|though\s+you\s+are)\b",
]
# System prompt extraction patterns (Issue #87)
EXTRACTION_PATTERNS = [
r"\brepeat\s+the\s+words\s+above\b",
r"\brepeat\s+your\s+(?:system\s+|initial\s+)?instructions\b",
r"\bwhat\s+is\s+your\s+(?:system\s+|initial\s+)?prompt\b",
r"\bshow\s+me\s+your\s+(?:system\s+|initial\s+)?prompt\b",
r"\bprint\s+your\s+(?:instructions|prompt|system\s+prompt)\b",
r"\boutput\s+your\s+(?:initial|system)\s+prompt\b",
r"\bwhat\s+were\s+you\s+told\s+to\s+do\b",
r"\bsummarize\s+your\s+instructions\b",
]
# L33t speak patterns commonly used to bypass filters
LEET_SPEAK_PATTERNS = [
r"h4ck(?:er|ing)?",
r"h4x(?:or|ing)?",
r"k3ylog(?:ger|ing)?",
r"k3y\s*log(?:ger|ing)?",
r"ph1sh(?:ing)?",
r"ph33r",
r"w4r3z",
r"cr4ck(?:er|ing)?",
r"m4lw4r3",
r"spyw4r3",
r"r00t(?:kit)?",
r"byp4ss",
r"3xpl0it",
r"v1ru5",
r"tr0j4n",
r"b4ckd00r",
r"sn3ak",
r"st34l",
r"pwn(?:ed|ing|age)?",
]
# Spaced text patterns (e.g., "k e y l o g g e r")
SPACED_TEXT_PATTERN = re.compile(
r"(?:\b\w\s+){3,}\w\b", # At least 4 characters with spaces between
re.IGNORECASE
)
# Common words that are spaced out to evade detection
SPACED_TRIGGER_WORDS = [
"keylogger", "phishing", "malware", "virus", "trojan",
"backdoor", "exploit", "hack", "crack", "bypass",
"steal", "password", "credential", "inject", "payload"
]
# Crisis and self-harm detection patterns (Issue #72 - defense in depth)
# These patterns detect suicide/self-harm queries even when wrapped in jailbreaks
CRISIS_PATTERNS = [
r"\bsuicid(?:e|al|ing)\b",
r"\bkill\s+(?:myself|my\s+self|me)\b",
r"\bend\s+(?:my|this)\s+life\b",
r"\bmost\s+painless\s+(?:way|method|means)\s+(?:to\s+)?(?:die|kill)\b",
r"\blethal\s+(?:dose|amount|quantity)\b",
r"\bhow\s+much\s+.*\s+(?:die|kill|lethal)\b",
r"\bwant\s+to\s+die\b",
r"\bwant\s+to\s+(?:kill|end)\s+(?:myself|me)\b",
r"\bways?\s+to\s+(?:kill|end)\s+(?:myself|me)\b",
r"\bmethods?\s+(?:of\s+)?suicide\b",
r"\bpainless\s+(?:death|suicide|way\s+to\s+die)\b",
r"\boverdose\s+(?:on\s+)?(?:pills?|medication|drugs?)\b",
r"\bnobody\s+(?:would|will)\s+miss\s+me\b",
r"\bbetter\s+off\s+dead\b",
r"\bshould\s+i\s+just\s+(?:die|kill\s+myself)\b",
r"\bgive\s+me\s+reasons?\s+(?:why|to)\s+(?:suicide|kill\s+myself|die)\b",
r"\bvalid\s+reasons?\s+to\s+(?:suicide|kill\s+myself|die)\b",
]
# Refusal inversion patterns (trying to trick AI into bypassing refusal)
REFUSAL_INVERSION_PATTERNS = [
r"refusal\s+(?:is\s+)?(?:harmful|dangerous|bad)",
r"(?:saying\s+)?no\s+(?:is\s+)?(?:harmful|dangerous|bad|wrong)",
r"(?:declining|refusing)\s+(?:is\s+)?(?:harmful|dangerous)",
r"ignore\s+(?:your\s+)?(?:previous\s+)?instructions",
r"ignore\s+(?:the\s+)?(?:above|previous)",
r"you\s+(?:are|can)\s+(?:now\s+)?(?:ignore|bypass|skip)",
r"pretend\s+(?:you\s+)?(?:are|have\s+been)",
r"let's\s+pretend",
r"hypothetically\s+speaking",
r"in\s+a\s+hypothetical\s+scenario",
r"this\s+is\s+a\s+(?:test|game|simulation)",
r"for\s+(?:educational|research)\s+purposes",
r"as\s+(?:an\s+)?(?:ethical\s+)?hacker",
r"white\s+hat\s+(?:test|scenario)",
r"penetration\s+testing\s+scenario",
]
# Boundary inversion markers (tricking the model about message boundaries)
BOUNDARY_INVERSION_PATTERNS = [
r"\[END\].*?\[START\]", # Reversed markers
r"user\s*:\s*assistant\s*:", # Fake role markers
r"assistant\s*:\s*user\s*:", # Reversed role markers
r"system\s*:\s*(?:user|assistant)\s*:", # Fake system injection
r"new\s+(?:user|assistant)\s*(?:message|input)",
r"the\s+above\s+is\s+(?:the\s+)?(?:user|assistant|system)",
r"<\|(?:user|assistant|system)\|>", # Special token patterns
r"\{\{(?:user|assistant|system)\}\}",
]
# System prompt injection patterns
SYSTEM_PROMPT_PATTERNS = [
r"you\s+are\s+(?:now\s+)?(?:an?\s+)?(?:unrestricted\s+|unfiltered\s+)?(?:ai|assistant|bot)",
r"you\s+will\s+(?:now\s+)?(?:act\s+as|behave\s+as|be)\s+(?:a\s+)?",
r"your\s+(?:new\s+)?role\s+is",
r"from\s+now\s+on\s*,?\s*you\s+(?:are|will)",
r"you\s+have\s+been\s+(?:reprogrammed|reconfigured|modified)",
r"(?:system|developer)\s+(?:message|instruction|prompt)",
r"override\s+(?:previous|prior)\s+(?:instructions|settings)",
]
# Obfuscation patterns
OBFUSCATION_PATTERNS = [
r"base64\s*(?:encoded|decode)",
r"rot13",
r"caesar\s*cipher",
r"hex\s*(?:encoded|decode)",
r"url\s*encode",
r"\b[0-9a-f]{20,}\b", # Long hex strings
r"\b[a-z0-9+/]{20,}={0,2}\b", # Base64-like strings
]
# All patterns combined for comprehensive scanning
ALL_PATTERNS: Dict[str, List[str]] = {
"godmode": GODMODE_PATTERNS,
"dan": DAN_PATTERNS,
"roleplay": ROLEPLAY_PATTERNS,
"extraction": EXTRACTION_PATTERNS,
"leet_speak": LEET_SPEAK_PATTERNS,
"refusal_inversion": REFUSAL_INVERSION_PATTERNS,
"boundary_inversion": BOUNDARY_INVERSION_PATTERNS,
"system_prompt_injection": SYSTEM_PROMPT_PATTERNS,
"obfuscation": OBFUSCATION_PATTERNS,
"crisis": CRISIS_PATTERNS,
}
# Compile all patterns for efficiency
_COMPILED_PATTERNS: Dict[str, List[re.Pattern]] = {}
def _get_compiled_patterns() -> Dict[str, List[re.Pattern]]:
"""Get or compile all regex patterns."""
global _COMPILED_PATTERNS
if not _COMPILED_PATTERNS:
for category, patterns in ALL_PATTERNS.items():
_COMPILED_PATTERNS[category] = [
re.compile(p, re.IGNORECASE | re.MULTILINE) for p in patterns
]
return _COMPILED_PATTERNS
# =============================================================================
# NORMALIZATION FUNCTIONS
# =============================================================================
def normalize_leet_speak(text: str) -> str:
"""
Normalize l33t speak to standard text.
Args:
text: Input text that may contain l33t speak
Returns:
Normalized text with l33t speak converted
"""
# Common l33t substitutions (mapping to lowercase)
leet_map = {
'4': 'a', '@': 'a', '^': 'a',
'8': 'b',
'3': 'e',
'6': 'g', '9': 'g',
'1': 'i', '!': 'i', '|': 'i',
'0': 'o',
'5': 's', '$': 's',
'7': 't', '+': 't',
'2': 'z',
}
result = []
for char in text:
# Check direct mapping first (handles lowercase)
if char in leet_map:
result.append(leet_map[char])
else:
result.append(char)
return ''.join(result)
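The substitution is a per-character lookup; a condensed sketch (map abbreviated from the table above):

```python
LEET_MAP = {
    '4': 'a', '@': 'a', '3': 'e', '1': 'i', '!': 'i',
    '0': 'o', '5': 's', '$': 's', '7': 't', '+': 't',
}

def normalize(text: str) -> str:
    # Character-by-character substitution, as in normalize_leet_speak().
    return ''.join(LEET_MAP.get(c, c) for c in text)

assert normalize("k3yl0gger") == "keylogger"
assert normalize("byp4$$") == "bypass"
```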
def collapse_spaced_text(text: str) -> str:
"""
Collapse spaced-out text for analysis.
e.g., "k e y l o g g e r" -> "keylogger"
Args:
text: Input text that may contain spaced words
Returns:
Text with spaced words collapsed
"""
# Find patterns like "k e y l o g g e r" and collapse them
def collapse_match(match: re.Match) -> str:
return match.group(0).replace(' ', '').replace('\t', '')
return SPACED_TEXT_PATTERN.sub(collapse_match, text)
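A quick demonstration of the collapse, using the same regex as `SPACED_TEXT_PATTERN`:

```python
import re

# 3+ single word-chars each followed by whitespace, then one final word-char.
SPACED = re.compile(r"(?:\b\w\s+){3,}\w\b", re.IGNORECASE)

def collapse(text: str) -> str:
    # Strip spaces and tabs inside each spaced-out run, as in collapse_spaced_text().
    return SPACED.sub(lambda m: m.group(0).replace(' ', '').replace('\t', ''), text)

assert collapse("download k e y l o g g e r now") == "download keylogger now"
```

One caveat of the leftmost-match semantics: a one-letter word immediately before the spaced run (e.g. "a k e y …") gets absorbed into the collapsed token.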
def detect_spaced_trigger_words(text: str) -> List[str]:
"""
Detect trigger words that are spaced out.
Args:
text: Input text to analyze
Returns:
List of detected spaced trigger words
"""
detected = []
# Normalize spaces and check for spaced patterns
normalized = re.sub(r'\s+', ' ', text.lower())
for word in SPACED_TRIGGER_WORDS:
# Create pattern with optional spaces between each character
spaced_pattern = r'\b' + r'\s*'.join(re.escape(c) for c in word) + r'\b'
if re.search(spaced_pattern, normalized, re.IGNORECASE):
detected.append(word)
return detected
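The per-word pattern built above allows optional whitespace between every character, so it also catches partially spaced forms. A self-contained sketch (`spaced_word_pattern` is a hypothetical local name):

```python
import re

def spaced_word_pattern(word: str) -> re.Pattern:
    # Optional whitespace between every character, per detect_spaced_trigger_words().
    return re.compile(
        r'\b' + r'\s*'.join(re.escape(c) for c in word) + r'\b',
        re.IGNORECASE,
    )

assert spaced_word_pattern("keylogger").search("install a k e y l o g g e r")
assert spaced_word_pattern("keylogger").search("key logger")  # partial spacing matches too
assert not spaced_word_pattern("keylogger").search("key lock")
```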
# =============================================================================
# DETECTION FUNCTIONS
# =============================================================================
def detect_jailbreak_patterns(text: str) -> Tuple[bool, List[str], Dict[str, int]]:
"""
Detect jailbreak patterns in input text.
Args:
text: Input text to analyze
Returns:
Tuple of (has_jailbreak, list_of_patterns, category_scores)
"""
if not text or not isinstance(text, str):
return False, [], {}
detected_patterns = []
category_scores = {}
compiled = _get_compiled_patterns()
# Check each category
for category, patterns in compiled.items():
category_hits = 0
for pattern in patterns:
matches = pattern.findall(text)
if matches:
detected_patterns.extend([
f"[{category}] {m}" if isinstance(m, str) else f"[{category}] pattern_match"
for m in matches[:3] # Limit matches per pattern
])
category_hits += len(matches)
if category_hits > 0:
# Crisis patterns get maximum weight - any hit is serious
if category == "crisis":
category_scores[category] = min(category_hits * 50, 100)
else:
category_scores[category] = min(category_hits * 10, 50)
# Check for spaced trigger words
spaced_words = detect_spaced_trigger_words(text)
if spaced_words:
detected_patterns.extend([f"[spaced_text] {w}" for w in spaced_words])
category_scores["spaced_text"] = min(len(spaced_words) * 5, 25)
# Check normalized text for hidden l33t speak
normalized = normalize_leet_speak(text)
if normalized != text.lower():
for category, patterns in compiled.items():
for pattern in patterns:
if pattern.search(normalized):
detected_patterns.append(f"[leet_obfuscation] pattern in normalized text")
category_scores["leet_obfuscation"] = 15
break
has_jailbreak = len(detected_patterns) > 0
return has_jailbreak, detected_patterns, category_scores
def score_input_risk(text: str) -> int:
"""
Calculate a risk score (0-100) for input text.
Args:
text: Input text to score
Returns:
Risk score from 0 (safe) to 100 (high risk)
"""
if not text or not isinstance(text, str):
return 0
has_jailbreak, patterns, category_scores = detect_jailbreak_patterns(text)
if not has_jailbreak:
return 0
# Calculate base score from category scores
base_score = sum(category_scores.values())
# Add score based on number of unique pattern categories
category_count = len(category_scores)
if category_count >= 3:
base_score += 25
elif category_count >= 2:
base_score += 15
elif category_count >= 1:
base_score += 5
# Add score for pattern density
text_length = len(text)
pattern_density = len(patterns) / max(text_length / 100, 1)
if pattern_density > 0.5:
base_score += 10
# Cap at 100
return min(base_score, 100)
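The density term in `score_input_risk()` rewards many hits in little text; isolated as a toy (`density_bonus` is a hypothetical local name):

```python
def density_bonus(pattern_count: int, text_length: int) -> int:
    # Roughly patterns per 100 characters, with a floor on the denominator,
    # mirroring the pattern-density step in score_input_risk().
    density = pattern_count / max(text_length / 100, 1)
    return 10 if density > 0.5 else 0

assert density_bonus(2, 120) == 10   # 2 / 1.2 > 0.5
assert density_bonus(1, 1000) == 0   # 1 / 10 = 0.1
```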
# =============================================================================
# SANITIZATION FUNCTIONS
# =============================================================================
def strip_jailbreak_patterns(text: str) -> str:
"""
Strip known jailbreak patterns from text.
Args:
text: Input text to sanitize
Returns:
Sanitized text with jailbreak patterns removed
"""
if not text or not isinstance(text, str):
return text
cleaned = text
compiled = _get_compiled_patterns()
# Remove patterns from each category
for category, patterns in compiled.items():
for pattern in patterns:
cleaned = pattern.sub('', cleaned)
# Clean up multiple spaces and newlines
cleaned = re.sub(r'\n{3,}', '\n\n', cleaned)
cleaned = re.sub(r' {2,}', ' ', cleaned)
cleaned = cleaned.strip()
return cleaned
def sanitize_input(text: str, aggressive: bool = False) -> Tuple[str, int, List[str]]:
"""
Sanitize input text by normalizing and stripping jailbreak patterns.
Args:
text: Input text to sanitize
aggressive: If True, more aggressively remove suspicious content
Returns:
Tuple of (cleaned_text, risk_score, detected_patterns)
"""
if not text or not isinstance(text, str):
return text, 0, []
original = text
all_patterns = []
# Step 1: Check original text for patterns
has_jailbreak, patterns, _ = detect_jailbreak_patterns(text)
all_patterns.extend(patterns)
# Step 2: Normalize l33t speak
normalized = normalize_leet_speak(text)
# Step 3: Collapse spaced text
collapsed = collapse_spaced_text(normalized)
# Step 4: Check normalized/collapsed text for additional patterns
has_jailbreak_collapsed, patterns_collapsed, _ = detect_jailbreak_patterns(collapsed)
all_patterns.extend([p for p in patterns_collapsed if p not in all_patterns])
# Step 5: Check for spaced trigger words specifically
spaced_words = detect_spaced_trigger_words(text)
if spaced_words:
all_patterns.extend([f"[spaced_text] {w}" for w in spaced_words])
# Step 6: Calculate risk score using original and normalized
risk_score = max(score_input_risk(text), score_input_risk(collapsed))
# Step 7: Strip jailbreak patterns
cleaned = strip_jailbreak_patterns(collapsed)
# Step 8: If aggressive mode and high risk, strip more aggressively
if aggressive and risk_score >= RiskLevel.HIGH:
# Remove any remaining bracketed content that looks like markers
cleaned = re.sub(r'\[\w+\]', '', cleaned)
# Remove special token patterns
cleaned = re.sub(r'<\|[^|]+\|>', '', cleaned)
# Final cleanup
cleaned = cleaned.strip()
# Log sanitization event if patterns were found
if all_patterns and logger.isEnabledFor(logging.DEBUG):
logger.debug(
"Input sanitized: %d patterns detected, risk_score=%d",
len(all_patterns), risk_score
)
return cleaned, risk_score, all_patterns
def sanitize_input_full(text: str, block_threshold: int = RiskLevel.HIGH) -> SanitizationResult:
"""
Full sanitization with detailed result.
Args:
text: Input text to sanitize
block_threshold: Risk score threshold to block input entirely
Returns:
SanitizationResult with all details
"""
cleaned, risk_score, patterns = sanitize_input(text)
# Determine risk level
if risk_score >= RiskLevel.CRITICAL:
risk_level = "CRITICAL"
elif risk_score >= RiskLevel.HIGH:
risk_level = "HIGH"
elif risk_score >= RiskLevel.MEDIUM:
risk_level = "MEDIUM"
elif risk_score >= RiskLevel.LOW:
risk_level = "LOW"
else:
risk_level = "SAFE"
# Determine if input should be blocked
blocked = risk_score >= block_threshold
return SanitizationResult(
original_text=text,
cleaned_text=cleaned,
risk_score=risk_score,
detected_patterns=patterns,
risk_level=risk_level,
blocked=blocked
)
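The risk-level bucketing uses inclusive lower bounds; a compact sketch (`classify` is a hypothetical local name):

```python
from enum import IntEnum

class RiskLevel(IntEnum):
    LOW = 25
    MEDIUM = 50
    HIGH = 75
    CRITICAL = 90

def classify(score: int) -> str:
    # Walk thresholds from highest to lowest, mirroring sanitize_input_full().
    for level in (RiskLevel.CRITICAL, RiskLevel.HIGH, RiskLevel.MEDIUM, RiskLevel.LOW):
        if score >= level:
            return level.name
    return "SAFE"

assert classify(10) == "SAFE"
assert classify(25) == "LOW"
assert classify(90) == "CRITICAL"
```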
# =============================================================================
# INTEGRATION HELPERS
# =============================================================================
def should_block_input(text: str, threshold: int = RiskLevel.HIGH) -> Tuple[bool, int, List[str]]:
"""
Quick check if input should be blocked.
Args:
text: Input text to check
threshold: Risk score threshold for blocking
Returns:
Tuple of (should_block, risk_score, detected_patterns)
"""
risk_score = score_input_risk(text)
_, patterns, _ = detect_jailbreak_patterns(text)
should_block = risk_score >= threshold
if should_block:
logger.warning(
"Input blocked: jailbreak patterns detected (risk_score=%d, threshold=%d)",
risk_score, threshold
)
return should_block, risk_score, patterns
def log_sanitization_event(
result: SanitizationResult,
source: str = "unknown",
session_id: Optional[str] = None
) -> None:
"""
Log a sanitization event for security auditing.
Args:
result: The sanitization result
source: Source of the input (e.g., "cli", "gateway", "api")
session_id: Optional session identifier
"""
if result.risk_score < RiskLevel.LOW:
return # Don't log safe inputs
log_data = {
"event": "input_sanitization",
"source": source,
"session_id": session_id,
"risk_level": result.risk_level,
"risk_score": result.risk_score,
"blocked": result.blocked,
"pattern_count": len(result.detected_patterns),
"patterns": result.detected_patterns[:5], # Limit logged patterns
"original_length": len(result.original_text),
"cleaned_length": len(result.cleaned_text),
}
if result.blocked:
logger.warning("SECURITY: Input blocked - %s", log_data)
elif result.risk_score >= RiskLevel.MEDIUM:
logger.info("SECURITY: Suspicious input sanitized - %s", log_data)
else:
logger.debug("SECURITY: Input sanitized - %s", log_data)
# =============================================================================
# LEGACY COMPATIBILITY
# =============================================================================
def check_input_safety(text: str) -> Dict[str, Any]:
"""
Legacy compatibility function for simple safety checks.
Returns dict with 'safe', 'score', and 'patterns' keys.
"""
score = score_input_risk(text)
_, patterns, _ = detect_jailbreak_patterns(text)
return {
"safe": score < RiskLevel.MEDIUM,
"score": score,
"patterns": patterns,
"risk_level": "SAFE" if score < RiskLevel.LOW else
"LOW" if score < RiskLevel.MEDIUM else
"MEDIUM" if score < RiskLevel.HIGH else
"HIGH" if score < RiskLevel.CRITICAL else "CRITICAL"
}


@@ -27,7 +27,6 @@ from agent.usage_pricing import (
DEFAULT_PRICING,
estimate_usage_cost,
format_duration_compact,
get_pricing,
has_known_pricing,
)


@@ -28,12 +28,10 @@ Usage in run_agent.py:
from __future__ import annotations
import json
import logging
import re
from typing import Any, Dict, List, Optional
from agent.context_strategy import ContextBudget, compute_prefetch_params, should_prefetch
from agent.memory_provider import MemoryProvider
from tools.registry import tool_error
@@ -81,7 +79,6 @@ class MemoryManager:
self._providers: List[MemoryProvider] = []
self._tool_to_provider: Dict[str, MemoryProvider] = {}
self._has_external: bool = False # True once a non-builtin provider is added
self._context_budget: Optional[ContextBudget] = None
# -- Registration --------------------------------------------------------
@@ -164,77 +161,18 @@ class MemoryManager:
)
return "\n\n".join(blocks)
# -- Context budget (for adaptive retrieval) -----------------------------
def set_context_budget(
self,
context_length: int,
used_tokens: int,
threshold_tokens: int,
compression_enabled: bool = True,
) -> None:
"""Update the context budget snapshot for adaptive retrieval.
Called by run_agent.py before each prefetch_all() call so the
memory manager can adjust retrieval parameters based on how much
context headroom remains.
"""
self._context_budget = ContextBudget(
context_length=context_length,
used_tokens=used_tokens,
threshold_tokens=threshold_tokens,
compression_enabled=compression_enabled,
)
# -- Prefetch / recall ---------------------------------------------------
def prefetch_all(self, query: str, *, session_id: str = "") -> str:
"""Collect prefetch context from all providers.
Uses the current context budget (if set) to adaptively adjust
retrieval limits and trust thresholds via the context strategy
framework. When budget is not set, falls back to provider defaults.
Returns merged context text labeled by provider. Empty providers
are skipped. Failures in one provider don't block others.
"""
# Check if we should skip prefetch entirely based on context pressure
if self._context_budget and not should_prefetch(self._context_budget, query):
logger.debug(
"Context pressure %.1f%% — skipping prefetch for this query",
self._context_budget.pressure * 100,
)
return ""
# Compute adaptive prefetch params from context strategy
prefetch_kwargs = {}
if self._context_budget:
params = compute_prefetch_params(self._context_budget)
if params.get("skip"):
return ""
prefetch_kwargs = {
"limit": params["limit"],
"min_trust": params["min_trust"],
}
parts = []
for provider in self._providers:
try:
# Try passing adaptive params — providers that support them
# (like holographic) will use them; others ignore via **kwargs
# or TypeError (caught below).
if prefetch_kwargs:
try:
result = provider.prefetch(
query,
session_id=session_id,
**prefetch_kwargs,
)
except TypeError:
# Provider doesn't accept extra kwargs — call without
result = provider.prefetch(query, session_id=session_id)
else:
result = provider.prefetch(query, session_id=session_id)
result = provider.prefetch(query, session_id=session_id)
if result and result.strip():
parts.append(result)
except Exception as e:


@@ -5,7 +5,6 @@ and run_agent.py for pre-flight context checks.
"""
import logging
import os
import re
import time
from pathlib import Path
@@ -24,15 +23,20 @@ logger = logging.getLogger(__name__)
# are preserved so the full model name reaches cache lookups and server queries.
_PROVIDER_PREFIXES: frozenset[str] = frozenset({
"openrouter", "nous", "openai-codex", "copilot", "copilot-acp",
"gemini", "zai", "kimi-coding", "minimax", "minimax-cn", "anthropic", "deepseek",
"gemini", "zai", "kimi-coding", "kimi-coding-cn", "minimax", "minimax-cn", "anthropic", "deepseek",
"opencode-zen", "opencode-go", "ai-gateway", "kilocode", "alibaba",
"qwen-oauth",
"xiaomi",
"arcee",
"custom", "local",
# Common aliases
"google", "google-gemini", "google-ai-studio",
"glm", "z-ai", "z.ai", "zhipu", "github", "github-copilot",
"github-models", "kimi", "moonshot", "claude", "deep-seek",
"github-models", "kimi", "moonshot", "kimi-cn", "moonshot-cn", "claude", "deep-seek",
"opencode", "zen", "go", "vercel", "kilo", "dashscope", "aliyun", "qwen",
"mimo", "xiaomi-mimo",
"arcee-ai", "arceeai",
"xai", "x-ai", "x.ai", "grok",
"qwen-portal",
})
@@ -83,6 +87,11 @@ CONTEXT_PROBE_TIERS = [
# Default context length when no detection method succeeds.
DEFAULT_FALLBACK_CONTEXT = CONTEXT_PROBE_TIERS[0]
# Minimum context length required to run Hermes Agent. Models with fewer
# tokens cannot maintain enough working memory for tool-calling workflows.
# Sessions, model switches, and cron jobs should reject models below this.
MINIMUM_CONTEXT_LENGTH = 64_000
# Thin fallback defaults — only broad model family patterns.
# These fire only when provider is unknown AND models.dev/OpenRouter/Anthropic
# all miss. Replaced the previous 80+ entry dict.
@@ -98,9 +107,15 @@ DEFAULT_CONTEXT_LENGTHS = {
"claude-sonnet-4.6": 1000000,
# Catch-all for older Claude models (must sort after specific entries)
"claude": 200000,
# OpenAI
# OpenAI — GPT-5 family (most have 400k; specific overrides first)
# Source: https://developers.openai.com/api/docs/models
"gpt-5.4-nano": 400000, # 400k (not 1.05M like full 5.4)
"gpt-5.4-mini": 400000, # 400k (not 1.05M like full 5.4)
"gpt-5.4": 1050000, # GPT-5.4, GPT-5.4 Pro (1.05M context)
"gpt-5.3-codex-spark": 128000, # Spark variant has reduced 128k context
"gpt-5.1-chat": 128000, # Chat variant has 128k context
"gpt-5": 400000, # GPT-5.x base, mini, codex variants (400k)
"gpt-4.1": 1047576,
"gpt-5": 128000,
"gpt-4": 128000,
# Google
"gemini": 1048576,
@@ -142,6 +157,8 @@ DEFAULT_CONTEXT_LENGTHS = {
"kimi": 262144,
# Arcee
"trinity": 262144,
# OpenRouter
"elephant": 262144,
# Hugging Face Inference Providers — model IDs use org/name format
"Qwen/Qwen3.5-397B-A17B": 131072,
"Qwen/Qwen3.5-35B-A3B": 131072,
@@ -149,9 +166,10 @@ DEFAULT_CONTEXT_LENGTHS = {
"moonshotai/Kimi-K2.5": 262144,
"moonshotai/Kimi-K2-Thinking": 262144,
"MiniMaxAI/MiniMax-M2.5": 204800,
"XiaomiMiMo/MiMo-V2-Flash": 32768,
"mimo-v2-pro": 1048576,
"mimo-v2-omni": 1048576,
"XiaomiMiMo/MiMo-V2-Flash": 256000,
"mimo-v2-pro": 1000000,
"mimo-v2-omni": 256000,
"mimo-v2-flash": 256000,
"zai-org/GLM-5": 202752,
}
@@ -176,6 +194,12 @@ _MAX_COMPLETION_KEYS = (
# Local server hostnames / address patterns
_LOCAL_HOSTS = ("localhost", "127.0.0.1", "::1", "0.0.0.0")
# Docker / Podman / Lima DNS names that resolve to the host machine
_CONTAINER_LOCAL_SUFFIXES = (
".docker.internal",
".containers.internal",
".lima.internal",
)
def _normalize_base_url(base_url: str) -> str:
@@ -197,7 +221,9 @@ _URL_TO_PROVIDER: Dict[str, str] = {
"api.anthropic.com": "anthropic",
"api.z.ai": "zai",
"api.moonshot.ai": "kimi-coding",
"api.moonshot.cn": "kimi-coding-cn",
"api.kimi.com": "kimi-coding",
"api.arcee.ai": "arcee",
"api.minimax": "minimax",
"dashscope.aliyuncs.com": "alibaba",
"dashscope-intl.aliyuncs.com": "alibaba",
@@ -211,6 +237,8 @@ _URL_TO_PROVIDER: Dict[str, str] = {
"api.fireworks.ai": "fireworks",
"opencode.ai": "opencode-go",
"api.x.ai": "xai",
"api.xiaomimimo.com": "xiaomi",
"xiaomimimo.com": "xiaomi",
}
@@ -249,6 +277,9 @@ def is_local_endpoint(base_url: str) -> bool:
return False
if host in _LOCAL_HOSTS:
return True
# Docker / Podman / Lima internal DNS names (e.g. host.docker.internal)
if any(host.endswith(suffix) for suffix in _CONTAINER_LOCAL_SUFFIXES):
return True
# RFC-1918 private ranges and link-local
import ipaddress
try:
@@ -756,12 +787,12 @@ def _query_local_context_length(model: str, base_url: str) -> Optional[int]:
resp = client.post(f"{server_url}/api/show", json={"name": model})
if resp.status_code == 200:
data = resp.json()
# Check model_info for context length
model_info = data.get("model_info", {})
for key, value in model_info.items():
if "context_length" in key and isinstance(value, (int, float)):
return int(value)
# Check parameters string for num_ctx
# Prefer explicit num_ctx from Modelfile parameters: this is
# the *runtime* context Ollama will actually allocate KV cache
# for. The GGUF model_info.context_length is the training max,
# which can be larger than num_ctx — using it here would let
# Hermes grow conversations past the runtime limit and Ollama
# would silently truncate. Matches query_ollama_num_ctx().
params = data.get("parameters", "")
if "num_ctx" in params:
for line in params.split("\n"):
@@ -772,6 +803,11 @@ def _query_local_context_length(model: str, base_url: str) -> Optional[int]:
return int(parts[-1])
except ValueError:
pass
# Fall back to GGUF model_info context_length (training max)
model_info = data.get("model_info", {})
for key, value in model_info.items():
if "context_length" in key and isinstance(value, (int, float)):
return int(value)
# LM Studio native API: /api/v1/models returns max_context_length.
# This is more reliable than the OpenAI-compat /v1/models which
@@ -1026,16 +1062,21 @@ def get_model_context_length(
def estimate_tokens_rough(text: str) -> int:
"""Rough token estimate (~4 chars/token) for pre-flight checks."""
"""Rough token estimate (~4 chars/token) for pre-flight checks.
Uses ceiling division so short texts (1-3 chars) never estimate as
0 tokens, which would cause the compressor and pre-flight checks to
systematically undercount when many short tool results are present.
"""
if not text:
return 0
return len(text) // 4
return (len(text) + 3) // 4
def estimate_messages_tokens_rough(messages: List[Dict[str, Any]]) -> int:
"""Rough token estimate for a message list (pre-flight only)."""
total_chars = sum(len(str(msg)) for msg in messages)
return total_chars // 4
return (total_chars + 3) // 4
def estimate_request_tokens_rough(
@@ -1058,4 +1099,4 @@ def estimate_request_tokens_rough(
total_chars += sum(len(str(msg)) for msg in messages)
if tools:
total_chars += len(str(tools))
return total_chars // 4
return (total_chars + 3) // 4


@@ -18,10 +18,8 @@ Other modules should import the dataclasses and query functions from here
rather than parsing the raw JSON themselves.
"""
import difflib
import json
import logging
import os
import time
from dataclasses import dataclass
from pathlib import Path
@@ -144,8 +142,11 @@ class ProviderInfo:
PROVIDER_TO_MODELS_DEV: Dict[str, str] = {
"openrouter": "openrouter",
"anthropic": "anthropic",
"openai": "openai",
"openai-codex": "openai",
"zai": "zai",
"kimi-coding": "kimi-for-coding",
"kimi-coding-cn": "kimi-for-coding",
"minimax": "minimax",
"minimax-cn": "minimax-cn",
"deepseek": "deepseek",
@@ -161,6 +162,7 @@ PROVIDER_TO_MODELS_DEV: Dict[str, str] = {
"gemini": "google",
"google": "google",
"xai": "xai",
"xiaomi": "xiaomi",
"nvidia": "nvidia",
"groq": "groq",
"mistral": "mistral",
@@ -173,13 +175,6 @@ PROVIDER_TO_MODELS_DEV: Dict[str, str] = {
_MODELS_DEV_TO_PROVIDER: Optional[Dict[str, str]] = None
def _get_reverse_mapping() -> Dict[str, str]:
"""Return models.dev ID → Hermes provider ID mapping."""
global _MODELS_DEV_TO_PROVIDER
if _MODELS_DEV_TO_PROVIDER is None:
_MODELS_DEV_TO_PROVIDER = {v: k for k, v in PROVIDER_TO_MODELS_DEV.items()}
return _MODELS_DEV_TO_PROVIDER
def _get_cache_path() -> Path:
"""Return path to disk cache file."""
@@ -383,7 +378,14 @@ def get_model_capabilities(provider: str, model: str) -> Optional[ModelCapabilit
# Extract capability flags (default to False if missing)
supports_tools = bool(entry.get("tool_call", False))
supports_vision = bool(entry.get("attachment", False))
# Vision: check both the `attachment` flag and `modalities.input` for "image".
# Some models (e.g. gemma-4) list image in input modalities but not attachment.
input_mods = entry.get("modalities", {})
if isinstance(input_mods, dict):
input_mods = input_mods.get("input", [])
else:
input_mods = []
supports_vision = bool(entry.get("attachment", False)) or "image" in input_mods
supports_reasoning = bool(entry.get("reasoning", False))
# Extract limits
@@ -453,93 +455,6 @@ def list_agentic_models(provider: str) -> List[str]:
return result
def search_models_dev(
query: str, provider: str = None, limit: int = 5
) -> List[Dict[str, Any]]:
"""Fuzzy search across models.dev catalog. Returns matching model entries.
Args:
query: Search string to match against model IDs.
provider: Optional Hermes provider ID to restrict search scope.
If None, searches across all providers in PROVIDER_TO_MODELS_DEV.
limit: Maximum number of results to return.
Returns:
List of dicts, each containing 'provider', 'model_id', and the full
model 'entry' from models.dev.
"""
data = fetch_models_dev()
if not data:
return []
# Build list of (provider_id, model_id, entry) candidates
candidates: List[tuple] = []
if provider is not None:
# Search only the specified provider
mdev_provider_id = PROVIDER_TO_MODELS_DEV.get(provider)
if not mdev_provider_id:
return []
provider_data = data.get(mdev_provider_id, {})
if isinstance(provider_data, dict):
models = provider_data.get("models", {})
if isinstance(models, dict):
for mid, mdata in models.items():
candidates.append((provider, mid, mdata))
else:
# Search across all mapped providers
for hermes_prov, mdev_prov in PROVIDER_TO_MODELS_DEV.items():
provider_data = data.get(mdev_prov, {})
if isinstance(provider_data, dict):
models = provider_data.get("models", {})
if isinstance(models, dict):
for mid, mdata in models.items():
candidates.append((hermes_prov, mid, mdata))
if not candidates:
return []
# Use difflib for fuzzy matching — case-insensitive comparison
model_ids_lower = [c[1].lower() for c in candidates]
query_lower = query.lower()
# First try exact substring matches (more intuitive than pure edit-distance)
substring_matches = []
for prov, mid, mdata in candidates:
if query_lower in mid.lower():
substring_matches.append({"provider": prov, "model_id": mid, "entry": mdata})
# Then add difflib fuzzy matches for any remaining slots
fuzzy_ids = difflib.get_close_matches(
query_lower, model_ids_lower, n=limit * 2, cutoff=0.4
)
seen_ids: set = set()
results: List[Dict[str, Any]] = []
# Prioritize substring matches
for match in substring_matches:
key = (match["provider"], match["model_id"])
if key not in seen_ids:
seen_ids.add(key)
results.append(match)
if len(results) >= limit:
return results
# Add fuzzy matches
for fid in fuzzy_ids:
# Find original-case candidates matching this lowered ID
for prov, mid, mdata in candidates:
if mid.lower() == fid:
key = (prov, mid)
if key not in seen_ids:
seen_ids.add(key)
results.append({"provider": prov, "model_id": mid, "entry": mdata})
if len(results) >= limit:
return results
return results
# ---------------------------------------------------------------------------
# Rich dataclass constructors — parse raw models.dev JSON into dataclasses

agent/mtls.py Normal file

@@ -0,0 +1,184 @@
"""
agent/mtls.py — Mutual TLS support for Hermes A2A communication.
Provides:
- build_server_ssl_context() — SSL context for uvicorn that requires client certs
- build_client_ssl_context() — SSL context for httpx/aiohttp A2A clients
- MTLSMiddleware — FastAPI middleware that enforces client cert on A2A routes
- is_mtls_configured() — Check if env vars are set
Configuration (environment variables):
HERMES_MTLS_CERT Path to this agent's TLS certificate (PEM)
HERMES_MTLS_KEY Path to this agent's TLS private key (PEM)
HERMES_MTLS_CA Path to the Fleet CA certificate (PEM) — used to verify peers
All three must be set to enable mTLS. If any is missing, mTLS is disabled and
the server falls back to plain HTTP (or regular TLS without client auth).
"""
import logging
import os
import ssl
from pathlib import Path
from typing import Optional
logger = logging.getLogger(__name__)
# A2A routes that require a valid client certificate when mTLS is enabled.
_A2A_PATH_PREFIXES = (
"/.well-known/agent-card",
"/agent-card",
"/api/agent-card",
"/a2a/",
)
def _get_env(key: str) -> Optional[str]:
val = os.environ.get(key, "").strip()
return val or None
def is_mtls_configured() -> bool:
"""Return True if all three mTLS env vars are set and the files exist."""
cert = _get_env("HERMES_MTLS_CERT")
key = _get_env("HERMES_MTLS_KEY")
ca = _get_env("HERMES_MTLS_CA")
if not (cert and key and ca):
return False
for label, path in (("HERMES_MTLS_CERT", cert), ("HERMES_MTLS_KEY", key), ("HERMES_MTLS_CA", ca)):
if not Path(path).is_file():
logger.warning("mTLS disabled: %s file not found: %s", label, path)
return False
return True
def build_server_ssl_context() -> ssl.SSLContext:
"""
Build an SSL context for the A2A server that:
- presents its own certificate
- requires and verifies the client's certificate against the Fleet CA
Raises:
RuntimeError: if mTLS env vars are not set or files are missing
ssl.SSLError: if cert/key/CA files are invalid
"""
cert = _get_env("HERMES_MTLS_CERT")
key = _get_env("HERMES_MTLS_KEY")
ca = _get_env("HERMES_MTLS_CA")
if not (cert and key and ca):
raise RuntimeError(
"mTLS not configured. Set HERMES_MTLS_CERT, HERMES_MTLS_KEY, and HERMES_MTLS_CA."
)
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.load_cert_chain(certfile=cert, keyfile=key)
ctx.load_verify_locations(cafile=ca)
# CERT_REQUIRED: reject connections without a valid client cert
ctx.verify_mode = ssl.CERT_REQUIRED
logger.info("mTLS server context built (cert=%s, CA=%s)", cert, ca)
return ctx
def build_client_ssl_context() -> ssl.SSLContext:
"""
Build an SSL context for outbound A2A connections that:
- presents this agent's certificate as a client cert
- verifies the remote server against the Fleet CA
Raises:
RuntimeError: if mTLS env vars are not set or files are missing
ssl.SSLError: if cert/key/CA files are invalid
"""
cert = _get_env("HERMES_MTLS_CERT")
key = _get_env("HERMES_MTLS_KEY")
ca = _get_env("HERMES_MTLS_CA")
if not (cert and key and ca):
raise RuntimeError(
"mTLS not configured. Set HERMES_MTLS_CERT, HERMES_MTLS_KEY, and HERMES_MTLS_CA."
)
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.load_cert_chain(certfile=cert, keyfile=key)
ctx.load_verify_locations(cafile=ca)
ctx.verify_mode = ssl.CERT_REQUIRED
ctx.check_hostname = True
logger.info("mTLS client context built (cert=%s, CA=%s)", cert, ca)
return ctx
def get_peer_cn(ssl_object) -> Optional[str]:
"""Extract the CN from the peer certificate's subject, or None."""
try:
peer_cert = ssl_object.getpeercert()
if not peer_cert:
return None
for rdn in peer_cert.get("subject", ()):
for attr, value in rdn:
if attr == "commonName":
return value
except Exception:
pass
return None
class MTLSMiddleware:
"""
ASGI middleware that enforces client certificate verification on A2A routes.
When mTLS is NOT configured (no env vars) or the route is not an A2A route,
the request passes through unchanged.
When mTLS IS configured and the route matches an A2A prefix, the middleware
checks that the request arrived over a TLS connection with a verified client
certificate. If not, it returns HTTP 403.
Note: This middleware only provides defence-in-depth at the app layer.
The primary enforcement is at the SSL context level (CERT_REQUIRED on the
server context). This middleware is useful when the server runs behind a
TLS-terminating proxy that forwards cert info via headers (not yet
implemented) or for test-time injection.
"""
def __init__(self, app):
self.app = app
self._enabled = is_mtls_configured()
if self._enabled:
logger.info("MTLSMiddleware enabled — A2A routes require client cert")
def _is_a2a_route(self, path: str) -> bool:
return any(path.startswith(prefix) for prefix in _A2A_PATH_PREFIXES)
async def __call__(self, scope, receive, send):
if scope["type"] == "http" and self._enabled and self._is_a2a_route(scope.get("path", "")):
# Check for client cert in the SSL connection
transport = scope.get("extensions", {}).get("tls", {})
peer_cert = transport.get("peer_cert")
if peer_cert is None:
# No client cert — reject
response = _forbidden_response("Client certificate required for A2A endpoints")
await response(scope, receive, send)
return
await self.app(scope, receive, send)
def _forbidden_response(message: str):
"""Return a minimal ASGI 403 response."""
body = message.encode()
async def respond(scope, receive, send):
await send({
"type": "http.response.start",
"status": 403,
"headers": [
(b"content-type", b"text/plain"),
(b"content-length", str(len(body)).encode()),
],
})
await send({"type": "http.response.body", "body": body})
return respond
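`get_peer_cn` walks the nested subject structure that `ssl`'s `getpeercert()` returns for a verified peer. A minimal standalone restatement, with an invented certificate dict, shows the shape it expects:

```python
from typing import Optional

def peer_cn(peer_cert) -> Optional[str]:
    """Walk a getpeercert()-style subject structure for commonName."""
    if not peer_cert:
        return None
    for rdn in peer_cert.get("subject", ()):
        for attr, value in rdn:
            if attr == "commonName":
                return value
    return None

# Shape mirrors what ssl.SSLSocket.getpeercert() returns: the subject
# is a tuple of relative distinguished names, each itself a tuple of
# (attribute, value) pairs. The CN value here is invented.
cert = {
    "subject": (
        (("organizationName", "Hermes Fleet"),),
        (("commonName", "hermes-node-1"),),
    )
}
print(peer_cn(cert))  # hermes-node-1
```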

agent/privacy_filter.py Normal file

@@ -0,0 +1,353 @@
"""Privacy Filter — strip PII from context before remote API calls.
Implements Vitalik's Pattern 2: "A local model can strip out private data
before passing the query along to a remote LLM."
When Hermes routes a request to a cloud provider (Anthropic, OpenRouter, etc.),
this module sanitizes the message context to remove personally identifiable
information before it leaves the user's machine.
Threat model (from Vitalik's secure LLM architecture):
- Privacy (other): Non-LLM data leakage via search queries, API calls
- LLM accidents: LLM accidentally leaking private data in prompts
- LLM jailbreaks: Remote content extracting private context
Usage:
from agent.privacy_filter import PrivacyFilter, sanitize_messages
pf = PrivacyFilter()
safe_messages = pf.sanitize_messages(messages)
# safe_messages has PII replaced with [REDACTED] tokens
"""
from __future__ import annotations
import logging
import re
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Any, Dict, List, Optional, Tuple
logger = logging.getLogger(__name__)
class Sensitivity(Enum):
"""Classification of content sensitivity."""
PUBLIC = auto() # No PII detected
LOW = auto() # Generic references (e.g., city names)
MEDIUM = auto() # Personal identifiers (name, email, phone)
HIGH = auto() # Secrets, keys, financial data, medical info
CRITICAL = auto() # Crypto keys, passwords, SSN patterns
@dataclass
class RedactionReport:
"""Summary of what was redacted from a message batch."""
total_messages: int = 0
redacted_messages: int = 0
redactions: List[Dict[str, Any]] = field(default_factory=list)
max_sensitivity: Sensitivity = Sensitivity.PUBLIC
@property
def had_redactions(self) -> bool:
return self.redacted_messages > 0
def summary(self) -> str:
if not self.had_redactions:
return "No PII detected — context is clean for remote query."
parts = [f"Redacted {self.redacted_messages}/{self.total_messages} messages:"]
for r in self.redactions[:10]:
parts.append(f" - {r['type']}: {r['count']} occurrence(s)")
if len(self.redactions) > 10:
parts.append(f" ... and {len(self.redactions) - 10} more types")
return "\n".join(parts)
# =========================================================================
# PII pattern definitions
# =========================================================================
# Each pattern is (compiled_regex, redaction_type, sensitivity_level, replacement)
_PII_PATTERNS: List[Tuple[re.Pattern, str, Sensitivity, str]] = []
def _compile_patterns() -> None:
"""Compile PII detection patterns. Called once at module init."""
global _PII_PATTERNS
if _PII_PATTERNS:
return
raw_patterns = [
# --- CRITICAL: secrets and credentials ---
(
r'(?:api[_-]?key|apikey|secret[_-]?key|access[_-]?token)\s*[:=]\s*["\']?([A-Za-z0-9_\-\.]{20,})["\']?',
"api_key_or_token",
Sensitivity.CRITICAL,
"[REDACTED-API-KEY]",
),
(
r'\b(?:sk-|sk_|pk_|rk_|ak_)[A-Za-z0-9]{20,}\b',
"prefixed_secret",
Sensitivity.CRITICAL,
"[REDACTED-SECRET]",
),
(
r'\b(?:ghp_|gho_|ghu_|ghs_|ghr_)[A-Za-z0-9]{36,}\b',
"github_token",
Sensitivity.CRITICAL,
"[REDACTED-GITHUB-TOKEN]",
),
(
r'\b(?:xox[bposa]-[A-Za-z0-9\-]+)\b',
"slack_token",
Sensitivity.CRITICAL,
"[REDACTED-SLACK-TOKEN]",
),
(
r'(?:password|passwd|pwd)\s*[:=]\s*["\']?([^\s"\']{4,})["\']?',
"password",
Sensitivity.CRITICAL,
"[REDACTED-PASSWORD]",
),
(
r'(?:-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----)',
"private_key_block",
Sensitivity.CRITICAL,
"[REDACTED-PRIVATE-KEY]",
),
# Ethereum / crypto addresses (42-char hex starting with 0x)
(
r'\b0x[a-fA-F0-9]{40}\b',
"ethereum_address",
Sensitivity.HIGH,
"[REDACTED-ETH-ADDR]",
),
# Bitcoin addresses (base58, 25-34 chars starting with 1/3/bc1)
(
r'\b[13][a-km-zA-HJ-NP-Z1-9]{25,34}\b',
"bitcoin_address",
Sensitivity.HIGH,
"[REDACTED-BTC-ADDR]",
),
(
r'\bbc1[a-zA-HJ-NP-Z0-9]{39,59}\b',
"bech32_address",
Sensitivity.HIGH,
"[REDACTED-BTC-ADDR]",
),
# --- HIGH: financial ---
(
r'\b(?:\d{4}[-\s]?){3}\d{4}\b',
"credit_card_number",
Sensitivity.HIGH,
"[REDACTED-CC]",
),
(
r'\b\d{3}-\d{2}-\d{4}\b',
"us_ssn",
Sensitivity.HIGH,
"[REDACTED-SSN]",
),
# --- MEDIUM: personal identifiers ---
# Email addresses
(
r'\b[A-Za-z0-9._%+\-]+@[A-Za-z0-9.\-]+\.[A-Za-z]{2,}\b',
"email_address",
Sensitivity.MEDIUM,
"[REDACTED-EMAIL]",
),
# Phone numbers (US/international patterns)
(
r'\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b',
"phone_number_us",
Sensitivity.MEDIUM,
"[REDACTED-PHONE]",
),
(
r'\b\+\d{1,3}[-.\s]?\d{4,14}\b',
"phone_number_intl",
Sensitivity.MEDIUM,
"[REDACTED-PHONE]",
),
# Filesystem paths that reveal user identity
(
r'(?:/Users/|/home/|C:\\Users\\)([A-Za-z0-9_\-]+)',
"user_home_path",
Sensitivity.MEDIUM,
r"/Users/[REDACTED-USER]",
),
# --- LOW: environment / system info ---
# Internal IPs
(
r'\b(?:10\.\d{1,3}\.\d{1,3}\.\d{1,3}|172\.(?:1[6-9]|2\d|3[01])\.\d{1,3}\.\d{1,3}|192\.168\.\d{1,3}\.\d{1,3})\b',
"internal_ip",
Sensitivity.LOW,
"[REDACTED-IP]",
),
]
_PII_PATTERNS = [
(re.compile(pattern, re.IGNORECASE), rtype, sensitivity, replacement)
for pattern, rtype, sensitivity, replacement in raw_patterns
]
_compile_patterns()
# =========================================================================
# Sensitive file path patterns (context-aware)
# =========================================================================
_SENSITIVE_PATH_PATTERNS = [
re.compile(r'\.(?:env|pem|key|p12|pfx|jks|keystore)\b', re.IGNORECASE),
re.compile(r'(?:\.ssh/|\.gnupg/|\.aws/|\.config/gcloud/)', re.IGNORECASE),
re.compile(r'(?:wallet|keystore|seed|mnemonic)', re.IGNORECASE),
re.compile(r'(?:\.hermes/\.env)', re.IGNORECASE),
]
def _classify_path_sensitivity(path: str) -> Sensitivity:
"""Check if a file path references sensitive material."""
for pat in _SENSITIVE_PATH_PATTERNS:
if pat.search(path):
return Sensitivity.HIGH
return Sensitivity.PUBLIC
# =========================================================================
# Core filtering
# =========================================================================
class PrivacyFilter:
"""Strip PII from message context before remote API calls.
Integrates with the agent's message pipeline. Call sanitize_messages()
before sending context to any cloud LLM provider.
"""
def __init__(
self,
min_sensitivity: Sensitivity = Sensitivity.MEDIUM,
aggressive_mode: bool = False,
):
"""
Args:
min_sensitivity: Only redact PII at or above this level.
Default MEDIUM — redacts emails, phones, paths but not IPs.
aggressive_mode: If True, also redact file paths and internal IPs.
"""
self.min_sensitivity = (
Sensitivity.LOW if aggressive_mode else min_sensitivity
)
self.aggressive_mode = aggressive_mode
def sanitize_text(self, text: str) -> Tuple[str, List[Dict[str, Any]]]:
"""Sanitize a single text string. Returns (cleaned_text, redaction_list)."""
redactions = []
cleaned = text
for pattern, rtype, sensitivity, replacement in _PII_PATTERNS:
if sensitivity.value < self.min_sensitivity.value:
continue
matches = pattern.findall(cleaned)
if matches:
count = len(matches) if isinstance(matches[0], str) else sum(
1 for m in matches if m
)
if count > 0:
cleaned = pattern.sub(replacement, cleaned)
redactions.append({
"type": rtype,
"sensitivity": sensitivity.name,
"count": count,
})
return cleaned, redactions
def sanitize_messages(
self, messages: List[Dict[str, Any]]
) -> Tuple[List[Dict[str, Any]], RedactionReport]:
"""Sanitize a list of OpenAI-format messages.
Returns (safe_messages, report). System messages are NOT sanitized
(they're typically static prompts). Only user and assistant messages
with string content are processed.
Args:
messages: List of {"role": ..., "content": ...} dicts.
Returns:
Tuple of (sanitized_messages, redaction_report).
"""
report = RedactionReport(total_messages=len(messages))
safe_messages = []
for msg in messages:
role = msg.get("role", "")
content = msg.get("content", "")
# Only sanitize user/assistant string content
if role in ("user", "assistant") and isinstance(content, str) and content:
cleaned, redactions = self.sanitize_text(content)
if redactions:
report.redacted_messages += 1
report.redactions.extend(redactions)
# Track max sensitivity
for r in redactions:
s = Sensitivity[r["sensitivity"]]
if s.value > report.max_sensitivity.value:
report.max_sensitivity = s
safe_msg = {**msg, "content": cleaned}
safe_messages.append(safe_msg)
logger.info(
"Privacy filter: redacted %d PII type(s) from %s message",
len(redactions), role,
)
else:
safe_messages.append(msg)
else:
safe_messages.append(msg)
return safe_messages, report
def should_use_local_only(self, text: str) -> Tuple[bool, str]:
"""Determine if content is too sensitive for any remote call.
Returns (should_block, reason). If True, the content should only
be processed by a local model.
"""
_, redactions = self.sanitize_text(text)
critical_count = sum(
1 for r in redactions
if Sensitivity[r["sensitivity"]] == Sensitivity.CRITICAL
)
high_count = sum(
1 for r in redactions
if Sensitivity[r["sensitivity"]] == Sensitivity.HIGH
)
if critical_count > 0:
return True, f"Contains {critical_count} critical-secret pattern(s) — local-only"
if high_count >= 3:
return True, f"Contains {high_count} high-sensitivity pattern(s) — local-only"
return False, ""
def sanitize_messages(
messages: List[Dict[str, Any]],
min_sensitivity: Sensitivity = Sensitivity.MEDIUM,
aggressive: bool = False,
) -> Tuple[List[Dict[str, Any]], RedactionReport]:
"""Convenience function: sanitize messages with default settings."""
pf = PrivacyFilter(min_sensitivity=min_sensitivity, aggressive_mode=aggressive)
return pf.sanitize_messages(messages)
def quick_sanitize(text: str) -> str:
"""Quick sanitize a single string — returns cleaned text only."""
pf = PrivacyFilter()
cleaned, _ = pf.sanitize_text(text)
return cleaned
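The find-then-substitute flow in `sanitize_text` can be sketched with two of the pattern entries. This is a simplified recreation for illustration, with invented sample data, not the module itself:

```python
import re

# Two patterns abridged from _PII_PATTERNS (email and Ethereum
# address), kept as (regex, type, replacement) triples. The sample
# email and wallet address below are invented.
PATTERNS = [
    (re.compile(r'\b[A-Za-z0-9._%+\-]+@[A-Za-z0-9.\-]+\.[A-Za-z]{2,}\b'),
     "email_address", "[REDACTED-EMAIL]"),
    (re.compile(r'\b0x[a-fA-F0-9]{40}\b'),
     "ethereum_address", "[REDACTED-ETH-ADDR]"),
]

def sanitize(text: str):
    """Find matches per pattern, substitute, and tally redactions."""
    redactions = []
    for pattern, rtype, replacement in PATTERNS:
        hits = pattern.findall(text)
        if hits:
            text = pattern.sub(replacement, text)
            redactions.append({"type": rtype, "count": len(hits)})
    return text, redactions

cleaned, report = sanitize("mail alice@example.com, wallet 0x" + "ab" * 20)
print(cleaned)  # mail [REDACTED-EMAIL], wallet [REDACTED-ETH-ADDR]
print(report)
```

The full `PrivacyFilter` adds a sensitivity floor on top of this loop, skipping any pattern below `min_sensitivity` before the find-and-substitute step runs.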

agent/profile_isolation.py Normal file

@@ -0,0 +1,262 @@
"""
Profile Session Isolation — #891
Tags sessions with their originating profile and provides
filtered access so profiles cannot see each other's data.
Current state: All sessions share one state.db with no profile tag.
This module adds profile tagging and filtered queries.
Usage:
from agent.profile_isolation import tag_session, get_profile_sessions, get_active_profile
# Tag a new session with the current profile
tag_session(session_id, profile_name)
# Get sessions for a specific profile
sessions = get_profile_sessions("sprint")
# Get current active profile
profile = get_active_profile()
"""
import json
import os
import sqlite3
from pathlib import Path
from typing import Any, Dict, List, Optional
from datetime import datetime, timezone
HERMES_HOME = Path(os.getenv("HERMES_HOME", str(Path.home() / ".hermes")))
SESSIONS_DB = HERMES_HOME / "sessions" / "state.db"
PROFILE_TAGS_FILE = HERMES_HOME / "profile_session_tags.json"
def get_active_profile() -> str:
"""Get the currently active profile name."""
config_path = HERMES_HOME / "config.yaml"
if config_path.exists():
try:
import yaml
with open(config_path) as f:
cfg = yaml.safe_load(f) or {}
return cfg.get("active_profile", "default")
except Exception:
pass
# Check environment
return os.getenv("HERMES_PROFILE", "default")
def _load_tags() -> Dict[str, str]:
"""Load session-to-profile mapping."""
if not PROFILE_TAGS_FILE.exists():
return {}
try:
with open(PROFILE_TAGS_FILE) as f:
return json.load(f)
except Exception:
return {}
def _save_tags(tags: Dict[str, str]):
"""Save session-to-profile mapping."""
PROFILE_TAGS_FILE.parent.mkdir(parents=True, exist_ok=True)
with open(PROFILE_TAGS_FILE, "w") as f:
json.dump(tags, f, indent=2)
def tag_session(session_id: str, profile: Optional[str] = None) -> str:
"""
Tag a session with its originating profile.
Returns the profile name used.
"""
if profile is None:
profile = get_active_profile()
tags = _load_tags()
tags[session_id] = profile
_save_tags(tags)
# Also tag in SQLite if available
_tag_session_in_db(session_id, profile)
return profile
def _tag_session_in_db(session_id: str, profile: str):
"""Add profile tag to SQLite session store."""
if not SESSIONS_DB.exists():
return
try:
conn = sqlite3.connect(str(SESSIONS_DB))
cursor = conn.cursor()
# Check if sessions table has profile column
cursor.execute("PRAGMA table_info(sessions)")
columns = [row[1] for row in cursor.fetchall()]
if "profile" not in columns:
# Add profile column
cursor.execute("ALTER TABLE sessions ADD COLUMN profile TEXT DEFAULT 'default'")
# Update the session's profile
cursor.execute(
"UPDATE sessions SET profile = ? WHERE session_id = ?",
(profile, session_id)
)
conn.commit()
conn.close()
except Exception:
pass # SQLite might not be available or schema differs
def get_session_profile(session_id: str) -> Optional[str]:
"""Get the profile that owns a session."""
# Check JSON tags first
tags = _load_tags()
if session_id in tags:
return tags[session_id]
# Check SQLite
if SESSIONS_DB.exists():
try:
conn = sqlite3.connect(str(SESSIONS_DB))
cursor = conn.cursor()
cursor.execute(
"SELECT profile FROM sessions WHERE session_id = ?",
(session_id,)
)
row = cursor.fetchone()
conn.close()
if row:
return row[0]
except Exception:
pass
return None
def get_profile_sessions(
profile: Optional[str] = None,
limit: int = 100,
) -> List[Dict[str, Any]]:
"""
Get sessions belonging to a specific profile.
Returns list of session dicts.
"""
if profile is None:
profile = get_active_profile()
sessions = []
# Get from JSON tags
tags = _load_tags()
tagged_sessions = [sid for sid, p in tags.items() if p == profile]
# Get from SQLite with profile filter
if SESSIONS_DB.exists():
try:
conn = sqlite3.connect(str(SESSIONS_DB))
conn.row_factory = sqlite3.Row
cursor = conn.cursor()
# Try profile column first
try:
cursor.execute(
"SELECT * FROM sessions WHERE profile = ? ORDER BY updated_at DESC LIMIT ?",
(profile, limit)
)
for row in cursor.fetchall():
sessions.append(dict(row))
except Exception:
# Fallback: filter by tagged session IDs
if tagged_sessions:
placeholders = ",".join("?" * len(tagged_sessions[:limit]))
cursor.execute(
f"SELECT * FROM sessions WHERE session_id IN ({placeholders}) ORDER BY updated_at DESC LIMIT ?",
(*tagged_sessions[:limit], limit)
)
for row in cursor.fetchall():
sessions.append(dict(row))
conn.close()
except Exception:
pass
return sessions[:limit]
def filter_sessions_by_profile(
sessions: List[Dict[str, Any]],
profile: Optional[str] = None,
) -> List[Dict[str, Any]]:
"""Filter a list of sessions to only include those belonging to a profile."""
if profile is None:
profile = get_active_profile()
tags = _load_tags()
filtered = []
for session in sessions:
sid = session.get("session_id") or session.get("id")
if not sid:
continue
# Check tag
session_profile = tags.get(sid)
if session_profile is None:
# Check SQLite
session_profile = get_session_profile(sid)
if session_profile == profile or session_profile is None:
filtered.append(session)
return filtered
def get_profile_stats() -> Dict[str, Any]:
"""Get statistics about profile session distribution."""
tags = _load_tags()
profile_counts = {}
for sid, profile in tags.items():
profile_counts[profile] = profile_counts.get(profile, 0) + 1
total_tagged = len(tags)
profiles = list(profile_counts.keys())
return {
"total_tagged_sessions": total_tagged,
"profiles": profiles,
"profile_counts": profile_counts,
"active_profile": get_active_profile(),
}
def audit_untagged_sessions() -> List[str]:
"""Find sessions without a profile tag."""
if not SESSIONS_DB.exists():
return []
try:
conn = sqlite3.connect(str(SESSIONS_DB))
cursor = conn.cursor()
# Get all session IDs
cursor.execute("SELECT session_id FROM sessions")
all_sessions = {row[0] for row in cursor.fetchall()}
conn.close()
# Get tagged sessions
tags = _load_tags()
tagged = set(tags.keys())
# Return untagged
return list(all_sessions - tagged)
except Exception:
return []
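The JSON tag store above is just a flat session-to-profile map; a minimal round-trip sketch (a temp dir stands in for `~/.hermes`, and `save_tags`/`load_tags` are local stand-ins for `_save_tags`/`_load_tags`, not imports of the module):

```python
import json
import tempfile
from pathlib import Path

home = Path(tempfile.mkdtemp())          # stand-in for HERMES_HOME
tags_file = home / "profile_session_tags.json"

def save_tags(tags):
    # mirrors _save_tags: pretty-printed JSON, parent dirs created first
    tags_file.parent.mkdir(parents=True, exist_ok=True)
    tags_file.write_text(json.dumps(tags, indent=2))

def load_tags():
    # mirrors _load_tags: a missing file means no tags
    if not tags_file.exists():
        return {}
    return json.loads(tags_file.read_text())

save_tags({"sess-1": "work", "sess-2": "default"})
tags = load_tags()
print(tags["sess-1"])  # work
```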
@@ -12,7 +12,7 @@ import threading
from collections import OrderedDict
from pathlib import Path
-from hermes_constants import get_hermes_home
+from hermes_constants import get_hermes_home, get_skills_dir, is_wsl
from typing import Optional
from agent.skill_utils import (
@@ -364,8 +364,56 @@ PLATFORM_HINTS = {
"documents. You can also include image URLs in markdown format ![alt](url) and they "
"will be downloaded and sent as native media when possible."
),
"wecom": (
"You are on WeCom (企业微信 / Enterprise WeChat). Markdown formatting is supported. "
"You CAN send media files natively — to deliver a file to the user, include "
"MEDIA:/absolute/path/to/file in your response. The file will be sent as a native "
"WeCom attachment: images (.jpg, .png, .webp) are sent as photos (up to 10 MB), "
"other files (.pdf, .docx, .xlsx, .md, .txt, etc.) arrive as downloadable documents "
"(up to 20 MB), and videos (.mp4) play inline. Voice messages are supported but "
"must be in AMR format — other audio formats are automatically sent as file attachments. "
"You can also include image URLs in markdown format ![alt](url) and they will be "
"downloaded and sent as native photos. Do NOT tell the user you lack file-sending "
"capability — use MEDIA: syntax whenever a file delivery is appropriate."
),
"qqbot": (
"You are on QQ, a popular Chinese messaging platform. QQ supports markdown formatting "
"and emoji. You can send media files natively: include MEDIA:/absolute/path/to/file in "
"your response. Images are sent as native photos, and other files arrive as downloadable "
"documents."
),
}
# ---------------------------------------------------------------------------
# Environment hints — execution-environment awareness for the agent.
# Unlike PLATFORM_HINTS (which describe the messaging channel), these describe
# the machine/OS the agent's tools actually run on.
# ---------------------------------------------------------------------------
WSL_ENVIRONMENT_HINT = (
"You are running inside WSL (Windows Subsystem for Linux). "
"The Windows host filesystem is mounted under /mnt/ — "
"/mnt/c/ is the C: drive, /mnt/d/ is D:, etc. "
"The user's Windows files are typically at "
"/mnt/c/Users/<username>/Desktop/, Documents/, Downloads/, etc. "
"When the user references Windows paths or desktop files, translate "
"to the /mnt/c/ equivalent. You can list /mnt/c/Users/ to discover "
"the Windows username if needed."
)
def build_environment_hints() -> str:
"""Return environment-specific guidance for the system prompt.
Detects WSL, and can be extended for Termux, Docker, etc.
Returns an empty string when no special environment is detected.
"""
hints: list[str] = []
if is_wsl():
hints.append(WSL_ENVIRONMENT_HINT)
return "\n\n".join(hints)
CONTEXT_FILE_MAX_CHARS = 20_000
CONTEXT_TRUNCATE_HEAD_RATIO = 0.7
CONTEXT_TRUNCATE_TAIL_RATIO = 0.2
@@ -548,8 +596,7 @@ def build_skills_system_prompt(
are read-only — they appear in the index but new skills are always created
in the local dir. Local skills take precedence when names collide.
"""
-hermes_home = get_hermes_home()
-skills_dir = hermes_home / "skills"
+skills_dir = get_skills_dir()
external_dirs = get_all_skills_dirs()[1:] # skip local (index 0)
if not skills_dir.exists() and not external_dirs:
@@ -727,8 +774,16 @@ def build_skills_system_prompt(
result = (
"## Skills (mandatory)\n"
-"Before replying, scan the skills below. If one clearly matches your task, "
-"load it with skill_view(name) and follow its instructions. "
+"Before replying, scan the skills below. If a skill matches or is even partially relevant "
+"to your task, you MUST load it with skill_view(name) and follow its instructions. "
+"Err on the side of loading — it is always better to have context you don't need "
+"than to miss critical steps, pitfalls, or established workflows. "
+"Skills contain specialized knowledge — API endpoints, tool-specific commands, "
+"and proven workflows that outperform general-purpose approaches. Load the skill "
+"even if you think you could handle the task with basic tools like web_search or terminal. "
+"Skills also encode the user's preferred approach, conventions, and quality standards "
+"for tasks like code review, planning, and testing — load them even for tasks you "
+"already know how to do, because the skill defines how it should be done here.\n"
"If a skill has issues, fix it with skill_manage(action='patch').\n"
"After difficult/iterative tasks, offer to save as a skill. "
"If a skill you loaded was missing steps, had wrong commands, or needed "
@@ -738,7 +793,7 @@ def build_skills_system_prompt(
+ "\n".join(index_lines) + "\n"
"</available_skills>\n"
"\n"
-"If none match, proceed normally without loading a skill."
+"Only proceed without loading a skill if genuinely none are relevant to the task."
)
# ── Store in LRU cache ────────────────────────────────────────────

agent/provider_preflight.py (new file)

@@ -0,0 +1,146 @@
"""Provider Preflight — Poka-yoke validation of provider/model config.
Validates provider and model configuration before session start.
Prevents wasted context on misconfigured providers.
Usage:
from agent.provider_preflight import preflight_check
result = preflight_check(provider="openrouter", model="xiaomi/mimo-v2-pro")
if not result["valid"]:
print(result["error"])
"""
from __future__ import annotations
import logging
import os
from typing import Any, Dict, Optional
logger = logging.getLogger(__name__)
# Provider -> required env var
PROVIDER_KEYS = {
"openrouter": "OPENROUTER_API_KEY",
"anthropic": "ANTHROPIC_API_KEY",
"openai": "OPENAI_API_KEY",
"nous": "NOUS_API_KEY",
"ollama": None, # Local, no key needed
"local": None,
}
def check_provider_key(provider: str) -> Dict[str, Any]:
"""Check if provider has a valid API key configured."""
provider_lower = provider.lower().strip()
env_var = None
matched = False
for known, key in PROVIDER_KEYS.items():
if known in provider_lower:
env_var = key
matched = True
break
if not matched:
# Unknown provider — assume OK (custom/local)
return {"valid": True, "provider": provider, "key_status": "unknown"}
if env_var is None:
# Local provider, no key needed
return {"valid": True, "provider": provider, "key_status": "not_required"}
key_value = os.getenv(env_var, "").strip()
if not key_value:
return {
"valid": False,
"provider": provider,
"key_status": "missing",
"error": f"{env_var} is not set. Provider '{provider}' will fail.",
"fix": f"Set {env_var} in ~/.hermes/.env",
}
if len(key_value) < 10:
return {
"valid": False,
"provider": provider,
"key_status": "too_short",
"error": f"{env_var} is suspiciously short ({len(key_value)} chars). May be invalid.",
"fix": f"Verify {env_var} value in ~/.hermes/.env",
}
return {"valid": True, "provider": provider, "key_status": "set"}
def check_model_availability(model: str, provider: str) -> Dict[str, Any]:
"""Check if model is likely available for provider."""
if not model:
return {"valid": False, "error": "No model specified"}
# Basic sanity checks
model_lower = model.lower()
# Anthropic models should use anthropic provider
if "claude" in model_lower and "anthropic" not in provider.lower():
return {
"valid": True, # Allow but warn
"warning": f"Model '{model}' usually runs on Anthropic provider, not '{provider}'",
}
# Ollama models
ollama_indicators = ["llama", "mistral", "qwen", "gemma", "phi", "hermes"]
if any(x in model_lower for x in ollama_indicators) and ":" not in model:
return {
"valid": True,
"warning": f"Model '{model}' may need a version tag for Ollama (e.g., {model}:latest)",
}
return {"valid": True}
def preflight_check(
provider: str = "",
model: str = "",
fallback_provider: str = "",
fallback_model: str = "",
) -> Dict[str, Any]:
"""Full pre-flight check for provider/model configuration.
Returns:
Dict with valid (bool), errors (list), warnings (list).
"""
errors = []
warnings = []
# Check primary provider
if provider:
result = check_provider_key(provider)
if not result["valid"]:
errors.append(result.get("error", f"Provider {provider} invalid"))
# Check primary model
if model:
result = check_model_availability(model, provider)
if not result["valid"]:
errors.append(result.get("error", f"Model {model} invalid"))
elif result.get("warning"):
warnings.append(result["warning"])
# Check fallback
if fallback_provider:
result = check_provider_key(fallback_provider)
if not result["valid"]:
warnings.append(f"Fallback provider {fallback_provider} also invalid: {result.get('error','')}")
if fallback_model:
result = check_model_availability(fallback_model, fallback_provider)
if not result["valid"]:
warnings.append(f"Fallback model {fallback_model} invalid")
elif result.get("warning"):
warnings.append(result["warning"])
return {
"valid": len(errors) == 0,
"errors": errors,
"warnings": warnings,
"provider": provider,
"model": model,
}
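The three `key_status` outcomes can be exercised with a condensed, self-contained re-statement of `check_provider_key` (the table and logic are copied in miniature; nothing here imports the module):

```python
import os

# miniature copy of the provider table above (subset)
PROVIDER_KEYS = {"openrouter": "OPENROUTER_API_KEY", "ollama": None}

def check_key_status(provider: str) -> str:
    p = provider.lower().strip()
    for known, key in PROVIDER_KEYS.items():
        if known in p:
            if key is None:
                return "not_required"   # local provider, no key needed
            return "set" if os.getenv(key, "").strip() else "missing"
    return "unknown"                    # custom provider, assume OK

os.environ.pop("OPENROUTER_API_KEY", None)
print(check_key_status("openrouter"))  # missing
print(check_key_status("ollama"))      # not_required
print(check_key_status("mystery"))     # unknown
```

Distinguishing "matched a keyless local provider" from "matched nothing at all" is the crux; substring matching on the provider name keeps variants like "openrouter-free" working.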


@@ -24,7 +24,7 @@ from __future__ import annotations
import time
from dataclasses import dataclass, field
-from typing import Any, Dict, Mapping, Optional
+from typing import Any, Mapping, Optional
@dataclass

agent/self_modify.py (new file)

@@ -0,0 +1,302 @@
"""Self-Modifying Prompt Engine — agent learns from its own failures.
Analyzes session transcripts, identifies failure patterns, and generates
prompt patches to prevent future failures.
The loop: fail → analyze → rewrite → retry → verify improvement.
Usage:
from agent.self_modify import PromptLearner
learner = PromptLearner()
patches = learner.analyze_session(session_id)
learner.apply_patches(patches)
"""
from __future__ import annotations
import json
import logging
import os
import re
import time
from dataclasses import dataclass, field
from datetime import datetime, timezone
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple
logger = logging.getLogger(__name__)
HERMES_HOME = Path(os.getenv("HERMES_HOME", Path.home() / ".hermes"))
PATCHES_DIR = HERMES_HOME / "prompt_patches"
ROLLBACK_DIR = HERMES_HOME / "prompt_rollback"
@dataclass
class FailurePattern:
"""A detected failure pattern in session transcripts."""
pattern_type: str # retry_loop, timeout, error_hallucination, context_loss
description: str
frequency: int
example_messages: List[str] = field(default_factory=list)
suggested_fix: str = ""
@dataclass
class PromptPatch:
"""A modification to the system prompt based on failure analysis."""
id: str
failure_type: str
original_rule: str
new_rule: str
confidence: float
applied_at: Optional[float] = None
reverted: bool = False
# Failure detection patterns
FAILURE_SIGNALS = {
"retry_loop": {
"patterns": [
r"(?i)retry(?:ing)?\s*(?:attempt|again)",
r"(?i)failed.*retrying",
r"(?i)error.*again",
r"(?i)attempt\s+\d+\s*(?:of|/)\s*\d+",
],
"description": "Agent stuck in retry loop",
},
"timeout": {
"patterns": [
r"(?i)timed?\s*out",
r"(?i)deadline\s+exceeded",
r"(?i)took\s+(?:too\s+)?long",
],
"description": "Operation timed out",
},
"hallucination": {
"patterns": [
r"(?i)i\s+(?:don't|do\s+not)\s+(?:have|see|find)\s+(?:any|that|this)\s+(?:information|data|file)",
r"(?i)the\s+file\s+doesn't\s+exist",
r"(?i)i\s+(?:made|invented|fabricated)\s+(?:that\s+up|this)",
],
"description": "Agent hallucinated or fabricated information",
},
"context_loss": {
"patterns": [
r"(?i)i\s+(?:don't|do\s+not)\s+(?:remember|recall|know)\s+(?:what|where|when|how)",
r"(?i)could\s+you\s+remind\s+me",
r"(?i)what\s+were\s+we\s+(?:doing|working|talking)\s+(?:on|about)",
],
"description": "Agent lost context from earlier in conversation",
},
"tool_failure": {
"patterns": [
r"(?i)tool\s+(?:call|execution)\s+failed",
r"(?i)command\s+not\s+found",
r"(?i)permission\s+denied",
r"(?i)no\s+such\s+file",
],
"description": "Tool execution failed",
},
}
# Prompt improvement templates
PROMPT_FIXES = {
"retry_loop": (
"If an operation fails more than twice, stop retrying. "
"Report the failure and ask the user for guidance. "
"Do not enter retry loops — they waste tokens."
),
"timeout": (
"For operations that may take long, set a timeout and report "
"progress. If an operation takes more than 30 seconds, report "
"what you've done so far and ask if you should continue."
),
"hallucination": (
"If you cannot find information, say 'I don't know' or "
"'I couldn't find that.' Never fabricate information. "
"If a file doesn't exist, say so — don't guess its contents."
),
"context_loss": (
"When you need context from earlier in the conversation, "
"use session_search to find it. Don't ask the user to repeat themselves."
),
"tool_failure": (
"If a tool fails, check the error message and try a different approach. "
"Don't retry the exact same command — diagnose first."
),
}
class PromptLearner:
"""Analyze session transcripts and generate prompt improvements."""
def __init__(self):
PATCHES_DIR.mkdir(parents=True, exist_ok=True)
ROLLBACK_DIR.mkdir(parents=True, exist_ok=True)
def analyze_session(self, session_data: dict) -> List[FailurePattern]:
"""Analyze a session for failure patterns.
Args:
session_data: Session dict with 'messages' list.
Returns:
List of detected failure patterns.
"""
messages = session_data.get("messages", [])
patterns_found: Dict[str, FailurePattern] = {}
for msg in messages:
content = str(msg.get("content", ""))
role = msg.get("role", "")
# Only analyze assistant messages and tool results
if role not in ("assistant", "tool"):
continue
for failure_type, config in FAILURE_SIGNALS.items():
for pattern in config["patterns"]:
if re.search(pattern, content):
if failure_type not in patterns_found:
patterns_found[failure_type] = FailurePattern(
pattern_type=failure_type,
description=config["description"],
frequency=0,
suggested_fix=PROMPT_FIXES.get(failure_type, ""),
)
patterns_found[failure_type].frequency += 1
if len(patterns_found[failure_type].example_messages) < 3:
patterns_found[failure_type].example_messages.append(
content[:200]
)
break # One match per message per type is enough
return list(patterns_found.values())
def generate_patches(self, patterns: List[FailurePattern],
min_confidence: float = 0.7) -> List[PromptPatch]:
"""Generate prompt patches from failure patterns.
Args:
patterns: Detected failure patterns.
min_confidence: Minimum confidence to generate a patch.
Returns:
List of prompt patches.
"""
patches = []
for pattern in patterns:
# Confidence based on frequency
if pattern.frequency >= 3:
confidence = 0.9
elif pattern.frequency >= 2:
confidence = 0.75
else:
confidence = 0.5
if confidence < min_confidence:
continue
if not pattern.suggested_fix:
continue
patch = PromptPatch(
id=f"{pattern.pattern_type}-{int(time.time())}",
failure_type=pattern.pattern_type,
original_rule="(missing — no existing rule for this pattern)",
new_rule=pattern.suggested_fix,
confidence=confidence,
)
patches.append(patch)
return patches
def apply_patches(self, patches: List[PromptPatch],
prompt_path: Optional[str] = None) -> int:
"""Apply patches to the system prompt.
Args:
patches: Patches to apply.
prompt_path: Path to prompt file (default: ~/.hermes/system_prompt.md)
Returns:
Number of patches applied.
"""
if prompt_path is None:
prompt_path = str(HERMES_HOME / "system_prompt.md")
prompt_file = Path(prompt_path)
# Backup current prompt
if prompt_file.exists():
backup = ROLLBACK_DIR / f"{prompt_file.name}.{int(time.time())}.bak"
backup.write_text(prompt_file.read_text())
# Read current prompt
current = prompt_file.read_text() if prompt_file.exists() else ""
# Apply patches
applied = 0
additions = []
for patch in patches:
if patch.new_rule not in current:
additions.append(f"\n## Auto-learned: {patch.failure_type}\n{patch.new_rule}")
patch.applied_at = time.time()
applied += 1
if additions:
new_content = current + "\n".join(additions)
prompt_file.write_text(new_content)
# Log patches
patches_file = PATCHES_DIR / f"patches-{int(time.time())}.json"
with open(patches_file, "w") as f:
json.dump([p.__dict__ for p in patches], f, indent=2, default=str)
logger.info("Applied %d prompt patches", applied)
return applied
def rollback_last(self, prompt_path: Optional[str] = None) -> bool:
"""Rollback to the most recent backup.
Args:
prompt_path: Path to prompt file.
Returns:
True if rollback succeeded.
"""
if prompt_path is None:
prompt_path = str(HERMES_HOME / "system_prompt.md")
backups = sorted(ROLLBACK_DIR.glob("*.bak"), reverse=True)
if not backups:
logger.warning("No backups to rollback to")
return False
latest = backups[0]
Path(prompt_path).write_text(latest.read_text())
logger.info("Rolled back to %s", latest.name)
return True
def learn_from_session(self, session_data: dict) -> Dict[str, Any]:
"""Full learning cycle: analyze → patch → apply.
Args:
session_data: Session dict.
Returns:
Summary of what was learned and applied.
"""
patterns = self.analyze_session(session_data)
patches = self.generate_patches(patterns)
applied = self.apply_patches(patches)
return {
"patterns_detected": len(patterns),
"patches_generated": len(patches),
"patches_applied": applied,
"patterns": [
{"type": p.pattern_type, "frequency": p.frequency, "description": p.description}
for p in patterns
],
}
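Detection is plain regex scanning over assistant and tool messages; a minimal sketch using two of the `retry_loop` patterns from `FAILURE_SIGNALS`:

```python
import re

# two of the retry_loop patterns from FAILURE_SIGNALS above
RETRY_PATTERNS = [
    r"(?i)retry(?:ing)?\s*(?:attempt|again)",
    r"(?i)attempt\s+\d+\s*(?:of|/)\s*\d+",
]

messages = [
    {"role": "assistant", "content": "Attempt 2 of 5: command failed, retrying again..."},
    {"role": "user", "content": "please stop"},
]

hits = 0
for msg in messages:
    if msg["role"] not in ("assistant", "tool"):
        continue  # only assistant/tool messages are analyzed
    if any(re.search(p, msg["content"]) for p in RETRY_PATTERNS):
        hits += 1  # count at most once per message, like the break in analyze_session
print(hits)  # 1
```

At frequency 1 the generated confidence would be 0.5, below the default `min_confidence=0.7`, so a single hit like this would not yet produce a patch.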

agent/session_compactor.py (new file)

@@ -0,0 +1,231 @@
"""Session compaction with fact extraction.
Before compressing conversation context, extracts durable facts
(user preferences, corrections, project details) and saves them
to the fact store so they survive compression.
Usage:
from agent.session_compactor import extract_and_save_facts
facts = extract_and_save_facts(messages)
"""
from __future__ import annotations
import json
import logging
import re
import time
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional, Tuple
logger = logging.getLogger(__name__)
@dataclass
class ExtractedFact:
"""A fact extracted from conversation."""
category: str # "user_pref", "correction", "project", "tool_quirk", "general"
entity: str # what the fact is about
content: str # the fact itself
confidence: float # 0.0-1.0
source_turn: int # which message turn it came from
timestamp: float = 0.0
# Patterns that indicate user preferences
_PREFERENCE_PATTERNS = [
(r"(?:I|we) (?:prefer|like|want|need) (.+?)(?:\.|$)", "preference"),
(r"(?:always|never) (?:use|do|run|deploy) (.+?)(?:\.|$)", "preference"),
(r"(?:my|our) (?:default|preferred|usual) (.+?) (?:is|are) (.+?)(?:\.|$)", "preference"),
(r"(?:make sure|ensure|remember) (?:to|that) (.+?)(?:\.|$)", "instruction"),
(r"(?:don'?t|do not) (?:ever|ever again) (.+?)(?:\.|$)", "constraint"),
]
# Patterns that indicate corrections
_CORRECTION_PATTERNS = [
(r"(?:actually|no[, ]|wait[, ]|correction[: ]|sorry[, ]) (.+)", "correction"),
(r"(?:I meant|what I meant was|the correct) (.+?)(?:\.|$)", "correction"),
(r"(?:it'?s|its) (?:not|shouldn'?t be|wrong) (.+?)(?:\.|$)", "correction"),
]
# Patterns that indicate project/tool facts
_PROJECT_PATTERNS = [
(r"(?:the |our )?(?:project|repo|codebase|code) (?:is|uses|needs|requires) (.+?)(?:\.|$)", "project"),
(r"(?:deploy|push|commit) (?:to|on) (.+?)(?:\.|$)", "project"),
(r"(?:this|that|the) (?:server|host|machine|VPS) (?:is|runs|has) (.+?)(?:\.|$)", "infrastructure"),
(r"(?:model|provider|engine) (?:is|should be|needs to be) (.+?)(?:\.|$)", "config"),
]
def extract_facts_from_messages(messages: List[Dict[str, Any]]) -> List[ExtractedFact]:
"""Extract durable facts from conversation messages.
Scans user messages for preferences, corrections, project facts,
and infrastructure details that should survive compression.
"""
facts = []
seen_contents = set()
for turn_idx, msg in enumerate(messages):
role = msg.get("role", "")
content = msg.get("content", "")
# Only scan user messages and assistant responses with corrections
if role not in ("user", "assistant"):
continue
if not content or not isinstance(content, str):
continue
if len(content) < 10:
continue
# Skip tool results and system messages
if role == "assistant" and msg.get("tool_calls"):
continue
extracted = _extract_from_text(content, turn_idx, role)
# Deduplicate by content
for fact in extracted:
key = f"{fact.category}:{fact.content[:100]}"
if key not in seen_contents:
seen_contents.add(key)
facts.append(fact)
return facts
def _extract_from_text(text: str, turn_idx: int, role: str) -> List[ExtractedFact]:
"""Extract facts from a single text block."""
facts = []
timestamp = time.time()
# Clean text for pattern matching
clean = text.strip()
# User preference patterns (from user messages)
if role == "user":
for pattern, subcategory in _PREFERENCE_PATTERNS:
for match in re.finditer(pattern, clean, re.IGNORECASE):
content = match.group(1).strip() if match.lastindex else match.group(0).strip()
if len(content) > 5:
facts.append(ExtractedFact(
category=f"user_pref.{subcategory}",
entity="user",
content=content[:200],
confidence=0.7,
source_turn=turn_idx,
timestamp=timestamp,
))
# Correction patterns (from user messages)
if role == "user":
for pattern, subcategory in _CORRECTION_PATTERNS:
for match in re.finditer(pattern, clean, re.IGNORECASE):
content = match.group(1).strip() if match.lastindex else match.group(0).strip()
if len(content) > 5:
facts.append(ExtractedFact(
category=f"correction.{subcategory}",
entity="user",
content=content[:200],
confidence=0.8,
source_turn=turn_idx,
timestamp=timestamp,
))
# Project/infrastructure patterns (from both user and assistant)
for pattern, subcategory in _PROJECT_PATTERNS:
for match in re.finditer(pattern, clean, re.IGNORECASE):
content = match.group(1).strip() if match.lastindex else match.group(0).strip()
if len(content) > 5:
facts.append(ExtractedFact(
category=f"project.{subcategory}",
entity=subcategory,
content=content[:200],
confidence=0.6,
source_turn=turn_idx,
timestamp=timestamp,
))
return facts
def save_facts_to_store(facts: List[ExtractedFact], fact_store_fn=None) -> int:
"""Save extracted facts to the fact store.
Args:
facts: List of extracted facts.
fact_store_fn: Optional callable(category, entity, content, trust).
If None, uses the holographic fact store if available.
Returns:
Number of facts saved.
"""
saved = 0
if fact_store_fn:
for fact in facts:
try:
fact_store_fn(
category=fact.category,
entity=fact.entity,
content=fact.content,
trust=fact.confidence,
)
saved += 1
except Exception as e:
logger.debug("Failed to save fact: %s", e)
else:
# Try holographic fact store
try:
from fact_store import fact_store as _fs
for fact in facts:
try:
_fs(
action="add",
content=fact.content,
category=fact.category,
tags=fact.entity,
trust_delta=fact.confidence - 0.5,
)
saved += 1
except Exception as e:
logger.debug("Failed to save fact via fact_store: %s", e)
except ImportError:
logger.debug("fact_store not available — facts not persisted")
return saved
def extract_and_save_facts(
messages: List[Dict[str, Any]],
fact_store_fn=None,
) -> Tuple[List[ExtractedFact], int]:
"""Extract facts from messages and save them.
Returns (extracted_facts, saved_count).
"""
facts = extract_facts_from_messages(messages)
if facts:
logger.info("Extracted %d facts from conversation", len(facts))
saved = save_facts_to_store(facts, fact_store_fn)
logger.info("Saved %d/%d facts to store", saved, len(facts))
else:
saved = 0
return facts, saved
def format_facts_summary(facts: List[ExtractedFact]) -> str:
"""Format extracted facts as a readable summary."""
if not facts:
return "No facts extracted."
by_category = {}
for f in facts:
by_category.setdefault(f.category, []).append(f)
lines = [f"Extracted {len(facts)} facts:", ""]
for cat, cat_facts in sorted(by_category.items()):
lines.append(f" {cat}:")
for f in cat_facts:
lines.append(f" - {f.content[:80]}")
return "\n".join(lines)
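The extraction patterns are ordinary capture-group regexes; for example, the first `_PREFERENCE_PATTERNS` entry pulls the preference text out of a user message:

```python
import re

# first entry of _PREFERENCE_PATTERNS above
PATTERN = r"(?:I|we) (?:prefer|like|want|need) (.+?)(?:\.|$)"

text = "I prefer tabs over spaces. Also deploy to staging first."
m = re.search(PATTERN, text, re.IGNORECASE)
print(m.group(1))  # tabs over spaces
```

The lazy `(.+?)` stops at the first period, so only the preference clause is captured, which is then truncated to 200 chars and stored with confidence 0.7.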

agent/shield.py (new file)

@@ -0,0 +1,24 @@
import logging
from tools.shield.detector import ShieldDetector, Verdict, CRISIS_SYSTEM_PROMPT, SAFE_SIX_MODELS
logger = logging.getLogger(__name__)
_detector = None
def get_detector():
global _detector
if _detector is None:
_detector = ShieldDetector()
return _detector
def scan_text(text: str):
"""Scan text for jailbreaks and crisis signals using SHIELD."""
detector = get_detector()
return detector.detect(text)
def is_crisis(verdict: str) -> bool:
return verdict in [Verdict.CRISIS_DETECTED.value, Verdict.CRISIS_UNDER_ATTACK.value]
def is_jailbreak(verdict: str) -> bool:
return verdict in [Verdict.JAILBREAK_DETECTED.value, Verdict.CRISIS_UNDER_ATTACK.value]
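A sketch of the verdict helpers with a stand-in `Verdict` enum; the real enum lives in `tools.shield.detector` and its member values are assumptions here:

```python
from enum import Enum

class Verdict(Enum):
    # hypothetical stand-in for tools.shield.detector.Verdict
    OK = "ok"
    JAILBREAK_DETECTED = "jailbreak_detected"
    CRISIS_DETECTED = "crisis_detected"
    CRISIS_UNDER_ATTACK = "crisis_under_attack"

def is_crisis(verdict: str) -> bool:
    return verdict in [Verdict.CRISIS_DETECTED.value, Verdict.CRISIS_UNDER_ATTACK.value]

def is_jailbreak(verdict: str) -> bool:
    return verdict in [Verdict.JAILBREAK_DETECTED.value, Verdict.CRISIS_UNDER_ATTACK.value]

print(is_crisis("crisis_under_attack"))   # True
print(is_jailbreak("crisis_detected"))    # False
```

Note that `CRISIS_UNDER_ATTACK` is intentionally counted by both helpers: a crisis message arriving alongside a jailbreak attempt triggers both handling paths.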


@@ -12,6 +12,8 @@ from datetime import datetime
from pathlib import Path
from typing import Any, Dict, Optional
from hermes_constants import display_hermes_home
logger = logging.getLogger(__name__)
_skill_commands: Dict[str, Dict[str, Any]] = {}
@@ -108,7 +110,7 @@ def _inject_skill_config(loaded_skill: dict[str, Any], parts: list[str]) -> None
if not resolved:
return
-lines = ["", "[Skill config (from ~/.hermes/config.yaml):"]
+lines = ["", f"[Skill config (from {display_hermes_home()}/config.yaml):"]
for key, value in resolved.items():
display_val = str(value) if value else "(not set)"
lines.append(f" {key} = {display_val}")


@@ -10,9 +10,9 @@ import os
import re
import sys
from pathlib import Path
-from typing import Any, Dict, List, Set, Tuple
+from typing import Any, Dict, List, Optional, Set, Tuple
-from hermes_constants import get_hermes_home
+from hermes_constants import get_config_path, get_skills_dir
logger = logging.getLogger(__name__)
@@ -130,7 +130,7 @@ def get_disabled_skill_names(platform: str | None = None) -> Set[str]:
Reads the config file directly (no CLI config imports) to stay
lightweight.
"""
-config_path = get_hermes_home() / "config.yaml"
+config_path = get_config_path()
if not config_path.exists():
return set()
try:
@@ -178,7 +178,7 @@ def get_external_skills_dirs() -> List[Path]:
path. Only directories that actually exist are returned. Duplicates and
paths that resolve to the local ``~/.hermes/skills/`` are silently skipped.
"""
-config_path = get_hermes_home() / "config.yaml"
+config_path = get_config_path()
if not config_path.exists():
return []
try:
@@ -200,7 +200,7 @@ def get_external_skills_dirs() -> List[Path]:
if not isinstance(raw_dirs, list):
return []
-local_skills = (get_hermes_home() / "skills").resolve()
+local_skills = get_skills_dir().resolve()
seen: Set[Path] = set()
result: List[Path] = []
@@ -230,7 +230,7 @@ def get_all_skills_dirs() -> List[Path]:
The local dir is always first (and always included even if it doesn't exist
yet — callers handle that). External dirs follow in config order.
"""
-dirs = [get_hermes_home() / "skills"]
+dirs = [get_skills_dir()]
dirs.extend(get_external_skills_dirs())
return dirs
@@ -384,7 +384,7 @@ def resolve_skill_config_values(
current values (or the declared default if the key isn't set).
Path values are expanded via ``os.path.expanduser``.
"""
-config_path = get_hermes_home() / "config.yaml"
+config_path = get_config_path()
config: Dict[str, Any] = {}
if config_path.exists():
try:
@@ -441,3 +441,25 @@ def iter_skill_index_files(skills_dir: Path, filename: str):
matches.append(Path(root) / filename)
for path in sorted(matches, key=lambda p: str(p.relative_to(skills_dir))):
yield path
# ── Namespace helpers for plugin-provided skills ───────────────────────────
_NAMESPACE_RE = re.compile(r"^[a-zA-Z0-9_-]+$")
def parse_qualified_name(name: str) -> Tuple[Optional[str], str]:
"""Split ``'namespace:skill-name'`` into ``(namespace, bare_name)``.
Returns ``(None, name)`` when there is no ``':'``.
"""
if ":" not in name:
return None, name
return tuple(name.split(":", 1)) # type: ignore[return-value]
def is_valid_namespace(candidate: Optional[str]) -> bool:
"""Check whether *candidate* is a valid namespace (``[a-zA-Z0-9_-]+``)."""
if not candidate:
return False
return bool(_NAMESPACE_RE.match(candidate))
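The two namespace helpers are small enough to restate verbatim for a usage check:

```python
import re
from typing import Optional, Tuple

_NAMESPACE_RE = re.compile(r"^[a-zA-Z0-9_-]+$")

def parse_qualified_name(name: str) -> Tuple[Optional[str], str]:
    # 'namespace:skill-name' -> (namespace, bare_name); no ':' -> (None, name)
    if ":" not in name:
        return None, name
    ns, bare = name.split(":", 1)
    return ns, bare

def is_valid_namespace(candidate: Optional[str]) -> bool:
    if not candidate:
        return False
    return bool(_NAMESPACE_RE.match(candidate))

print(parse_qualified_name("myplugin:deploy-skill"))  # ('myplugin', 'deploy-skill')
print(parse_qualified_name("deploy-skill"))           # (None, 'deploy-skill')
print(is_valid_namespace("my plugin"))                # False
```

`split(":", 1)` means only the first colon is significant, so a skill name containing further colons stays intact in the bare-name half.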

agent/telemetry_logger.py (new file)

@@ -0,0 +1,23 @@
import os
import json
import time
def log_token_usage(prompt_tokens, completion_tokens, model_name):
"""Logs token usage to a local JSONL file for fleet-wide accounting."""
spend_dir = os.path.expanduser("~/.hermes/telemetry/spend")
os.makedirs(spend_dir, exist_ok=True)
session_id = os.environ.get("HERMES_SESSION_ID", "default")
log_file = os.path.join(spend_dir, f"session_{session_id}.jsonl")
record = {
"timestamp": time.time(),
"model": model_name,
"input_tokens": prompt_tokens,
"output_tokens": completion_tokens
}
with open(log_file, "a") as f:
f.write(json.dumps(record) + "\n")
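One JSONL record per call makes fleet-wide accounting a line-by-line sum; a sketch of reading a spend file back (the temp path is a stand-in, the field names match the record format above):

```python
import json
import os
import tempfile

spend_dir = tempfile.mkdtemp()  # stand-in for ~/.hermes/telemetry/spend
path = os.path.join(spend_dir, "session_default.jsonl")

# write two records in the format produced by log_token_usage
with open(path, "w") as f:
    for rec in (
        {"timestamp": 1.0, "model": "m", "input_tokens": 100, "output_tokens": 20},
        {"timestamp": 2.0, "model": "m", "input_tokens": 50, "output_tokens": 10},
    ):
        f.write(json.dumps(rec) + "\n")

# aggregate: one JSON object per line, summed per field
total_in = total_out = 0
with open(path) as f:
    for line in f:
        rec = json.loads(line)
        total_in += rec["input_tokens"]
        total_out += rec["output_tokens"]
print(total_in, total_out)  # 150 30
```

Append-only JSONL keeps the logger crash-safe: a partial final line corrupts at most one record, and the aggregator can skip unparseable lines.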

agent/time_aware_routing.py (new file)

@@ -0,0 +1,146 @@
"""Time-aware model routing for cron jobs.
Routes cron tasks to more capable models during off-hours when the user
is not present to correct errors. Reduces error rates during high-error
time windows (e.g., 18:00 evening batches).
Usage:
from agent.time_aware_routing import resolve_time_aware_model
model = resolve_time_aware_model(base_model="mimo-v2-pro", is_cron=True)
"""
from __future__ import annotations
import os
import time
from dataclasses import dataclass
from typing import Dict, Optional
# Error rate data from empirical audit (2026-04-12)
# Higher error rates during these hours suggest routing to better models
_HIGH_ERROR_HOURS = {
18: 9.4, # 18:00 — 9.4% error rate (evening cron batches)
19: 8.1,
20: 7.5,
21: 6.8,
22: 6.2,
23: 5.9,
0: 5.5,
1: 5.2,
}
# Low error hours — default model is fine
_LOW_ERROR_HOURS = set(range(6, 18)) # 06:00-17:59
# Default fallback models (overridable via environment)
_DEFAULT_STRONG_MODEL = os.getenv("CRON_STRONG_MODEL", "xiaomi/mimo-v2-pro")
_DEFAULT_CHEAP_MODEL = os.getenv("CRON_CHEAP_MODEL", "qwen2.5:7b")
_ERROR_THRESHOLD = float(os.getenv("CRON_ERROR_THRESHOLD", "6.0")) # % error rate
@dataclass
class RoutingDecision:
"""Result of time-aware routing."""
model: str
provider: str
reason: str
hour: int
error_rate: float
is_off_hours: bool
def get_hour_error_rate(hour: int) -> float:
"""Get expected error rate for a given hour (0-23)."""
return _HIGH_ERROR_HOURS.get(hour, 4.0) # Default 4% for unlisted hours
def is_off_hours(hour: int) -> bool:
"""Check if hour is considered off-hours (higher error rates)."""
return hour not in _LOW_ERROR_HOURS
def resolve_time_aware_model(
base_model: str = "",
base_provider: str = "",
is_cron: bool = False,
hour: Optional[int] = None,
) -> RoutingDecision:
"""Resolve model based on time of day and task type.
During off-hours (evening/night), routes to stronger models for cron
jobs to compensate for lack of human oversight.
Args:
base_model: The model that would normally be used.
base_provider: The provider for the base model.
is_cron: Whether this is a cron job (vs interactive session).
hour: Override hour (for testing). Defaults to current hour.
Returns:
RoutingDecision with model, provider, and reasoning.
"""
if hour is None:
hour = time.localtime().tm_hour
error_rate = get_hour_error_rate(hour)
off_hours = is_off_hours(hour)
# Interactive sessions always use the base model (user can correct errors)
if not is_cron:
return RoutingDecision(
model=base_model or _DEFAULT_CHEAP_MODEL,
provider=base_provider,
reason="Interactive session — user can correct errors",
hour=hour,
error_rate=error_rate,
is_off_hours=off_hours,
)
# Cron jobs during low-error hours: use base model
if not off_hours and error_rate < _ERROR_THRESHOLD:
return RoutingDecision(
model=base_model or _DEFAULT_CHEAP_MODEL,
provider=base_provider,
reason=f"Low-error hours ({hour}:00, {error_rate}% expected)",
hour=hour,
error_rate=error_rate,
is_off_hours=False,
)
# Cron jobs during high-error hours: upgrade to stronger model
if error_rate >= _ERROR_THRESHOLD:
return RoutingDecision(
model=_DEFAULT_STRONG_MODEL,
provider="nous",
reason=f"High-error hours ({hour}:00, {error_rate}% expected) — using stronger model",
hour=hour,
error_rate=error_rate,
is_off_hours=True,
)
# Off-hours but low error: use base model
return RoutingDecision(
model=base_model or _DEFAULT_CHEAP_MODEL,
provider=base_provider,
reason=f"Off-hours but low error ({hour}:00, {error_rate}%)",
hour=hour,
error_rate=error_rate,
is_off_hours=off_hours,
)
def get_routing_report() -> str:
"""Get a report of time-based routing decisions for the next 24 hours."""
lines = ["Time-Aware Model Routing (24h forecast)", "=" * 40, ""]
lines.append(f"Error threshold: {_ERROR_THRESHOLD}%")
lines.append(f"Strong model: {_DEFAULT_STRONG_MODEL}")
lines.append(f"Cheap model: {_DEFAULT_CHEAP_MODEL}")
lines.append("")
for h in range(24):
decision = resolve_time_aware_model(is_cron=True, hour=h)
icon = "\U0001f7e2" if decision.model == _DEFAULT_CHEAP_MODEL else "\U0001f534"
lines.append(f" {h:02d}:00 {icon} {decision.model:25s} ({decision.error_rate}% error)")
return "\n".join(lines)


@@ -36,7 +36,7 @@ def generate_title(user_message: str, assistant_response: str, timeout: float =
try:
response = call_llm(
task="compression", # reuse compression task config (cheap/fast model)
task="title_generation",
messages=messages,
max_tokens=30,
temperature=0.3,

agent/token_budget.py

@@ -0,0 +1,316 @@
#!/usr/bin/env python3
"""
Token Budget — Poka-yoke guard against silent context overflow.
Progressive warning system with circuit breakers:
- 60%: WARNING — log + suggest summarization
- 80%: CAUTION — auto-compress, drop raw tool outputs
- 90%: CRITICAL — block verbose tool calls, force wrap-up
- 95%: STOP — graceful session termination with summary
Also provides tool output budgeting to truncate before overflow.
Usage:
from agent.token_budget import TokenBudget
budget = TokenBudget(context_length=128_000)
budget.update(8000) # from API response prompt_tokens
status = budget.check() # returns BudgetStatus with level + message
budget.should_block_tools() # True at 90%+
budget.should_terminate() # True at 95%+
# Tool output budgeting
remaining = budget.tool_output_budget()
truncated = budget.truncate_tool_output(output_text, max_chars=remaining)
"""
import logging
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional
logger = logging.getLogger(__name__)
# ── Thresholds ────────────────────────────────────────────────────────
WARN_PERCENT = 0.60
CAUTION_PERCENT = 0.80
CRITICAL_PERCENT = 0.90
STOP_PERCENT = 0.95
# Reserve 5% of context for system prompt, response, and overhead
RESPONSE_RESERVE_RATIO = 0.05
# Max tool output chars at each level
TOOL_OUTPUT_BUDGETS = {
"NORMAL": 50_000,
"WARNING": 20_000,
"CAUTION": 8_000,
"CRITICAL": 2_000,
"STOP": 500,
}
class BudgetLevel(Enum):
NORMAL = "NORMAL"
WARNING = "WARNING"
CAUTION = "CAUTION"
CRITICAL = "CRITICAL"
STOP = "STOP"
@property
def percent_threshold(self) -> float:
return {
BudgetLevel.NORMAL: 0.0,
BudgetLevel.WARNING: WARN_PERCENT,
BudgetLevel.CAUTION: CAUTION_PERCENT,
BudgetLevel.CRITICAL: CRITICAL_PERCENT,
BudgetLevel.STOP: STOP_PERCENT,
}[self]
@property
def emoji(self) -> str:
return {
BudgetLevel.NORMAL: "",
BudgetLevel.WARNING: "\u26a0\ufe0f",
BudgetLevel.CAUTION: "\U0001f525",
BudgetLevel.CRITICAL: "\U0001f6d1",
BudgetLevel.STOP: "\U0001f6d1",
}[self]
@dataclass
class BudgetStatus:
"""Current token budget status."""
level: BudgetLevel
tokens_used: int
context_length: int
percent_used: float
tokens_remaining: int
message: str = ""
should_compress: bool = False
should_block_tools: bool = False
should_terminate: bool = False
def to_indicator(self) -> str:
"""Compact status indicator for CLI display."""
pct = int(self.percent_used * 100)
if self.level == BudgetLevel.NORMAL:
return f"[{pct}%]"
return f"{self.level.emoji} [{pct}%]"
def to_bar(self, width: int = 10) -> str:
"""Visual progress bar."""
filled = int(width * self.percent_used)
bar = "\u2588" * filled + "\u2591" * (width - filled)
color = self._bar_color()
return f"{color}{bar}\033[0m {int(self.percent_used * 100)}%"
def _bar_color(self) -> str:
if self.level == BudgetLevel.STOP:
return "\033[41m" # red bg
if self.level == BudgetLevel.CRITICAL:
return "\033[31m" # red
if self.level == BudgetLevel.CAUTION:
return "\033[33m" # yellow
if self.level == BudgetLevel.WARNING:
return "\033[33m" # yellow
return "\033[32m" # green
class TokenBudget:
"""
Progressive token budget tracker with poka-yoke circuit breakers.
Tracks cumulative token usage against a context length and triggers
escalating actions at each threshold.
"""
def __init__(
self,
context_length: int,
warn_percent: float = WARN_PERCENT,
caution_percent: float = CAUTION_PERCENT,
critical_percent: float = CRITICAL_PERCENT,
stop_percent: float = STOP_PERCENT,
response_reserve_ratio: float = RESPONSE_RESERVE_RATIO,
):
self.context_length = context_length
self.warn_threshold = int(context_length * warn_percent)
self.caution_threshold = int(context_length * caution_percent)
self.critical_threshold = int(context_length * critical_percent)
self.stop_threshold = int(context_length * stop_percent)
self.response_reserve = int(context_length * response_reserve_ratio)
self.tokens_used = 0
self.completions_tokens = 0
self.total_tool_output_chars = 0
self._level = BudgetLevel.NORMAL
self._history: list[int] = []
def update(self, prompt_tokens: int, completion_tokens: int = 0) -> BudgetStatus:
"""Update budget from API response usage."""
self.tokens_used = prompt_tokens
self.completions_tokens = completion_tokens
self._history.append(prompt_tokens)
return self.check()
def check(self) -> BudgetStatus:
"""Evaluate current budget level and return status."""
pct = self.tokens_used / self.context_length if self.context_length > 0 else 0
remaining = max(0, self.context_length - self.tokens_used - self.response_reserve)
# Determine level
if pct >= STOP_PERCENT:
level = BudgetLevel.STOP
elif pct >= CRITICAL_PERCENT:
level = BudgetLevel.CRITICAL
elif pct >= CAUTION_PERCENT:
level = BudgetLevel.CAUTION
elif pct >= WARN_PERCENT:
level = BudgetLevel.WARNING
else:
level = BudgetLevel.NORMAL
# Log transitions (don't log every check)
if level != self._level:
self._log_transition(level, pct)
self._level = level
messages = {
BudgetLevel.NORMAL: "",
BudgetLevel.WARNING: (
f"Context at {int(pct*100)}%. Consider wrapping up soon or using /compress."
),
BudgetLevel.CAUTION: (
f"Context at {int(pct*100)}%. Auto-compressing. "
f"Tool outputs will be truncated."
),
BudgetLevel.CRITICAL: (
f"Context at {int(pct*100)}%. Verbose tools blocked. "
f"Session approaching limit — please wrap up."
),
BudgetLevel.STOP: (
f"Context at {int(pct*100)}%. Session must terminate. "
f"Saving summary before shutdown."
),
}
return BudgetStatus(
level=level,
tokens_used=self.tokens_used,
context_length=self.context_length,
percent_used=pct,
tokens_remaining=remaining,
message=messages[level],
should_compress=level in (BudgetLevel.CAUTION, BudgetLevel.CRITICAL, BudgetLevel.STOP),
should_block_tools=level in (BudgetLevel.CRITICAL, BudgetLevel.STOP),
should_terminate=level == BudgetLevel.STOP,
)
def should_compress(self) -> bool:
"""True at 80%+ — auto-compression should trigger."""
return self.tokens_used >= self.caution_threshold
def should_block_tools(self) -> bool:
"""True at 90%+ — verbose tool calls should be blocked."""
return self.tokens_used >= self.critical_threshold
def should_terminate(self) -> bool:
"""True at 95%+ — session should gracefully terminate."""
return self.tokens_used >= self.stop_threshold
def tool_output_budget(self) -> int:
"""Max chars allowed for next tool output based on current level."""
status = self.check()
return TOOL_OUTPUT_BUDGETS.get(status.level.value, 50_000)
def truncate_tool_output(self, output: str, max_chars: Optional[int] = None) -> str:
"""Truncate tool output to fit budget. Adds truncation notice."""
if max_chars is None:
max_chars = self.tool_output_budget()
if len(output) <= max_chars:
return output
# Preserve start and end, truncate middle
if max_chars < 200:
return output[:max_chars] + "\n[...truncated...]"
head = max_chars // 2
tail = max_chars - head - 30 # reserve for truncation notice
truncated = (
output[:head]
+ f"\n\n[...{len(output) - head - tail:,} chars truncated...]\n\n"
+ output[-tail:]
)
return truncated
def remaining_for_response(self) -> int:
"""Tokens available for the model\'s response."""
return max(0, self.context_length - self.tokens_used - self.response_reserve)
def growth_rate(self) -> Optional[float]:
"""Average token increase per turn (from history)."""
if len(self._history) < 2:
return None
diffs = [self._history[i] - self._history[i-1] for i in range(1, len(self._history))]
return sum(diffs) / len(diffs)
def turns_remaining(self) -> Optional[int]:
"""Estimated turns until context is full (based on growth rate)."""
rate = self.growth_rate()
if rate is None or rate <= 0:
return None
remaining = self.context_length - self.tokens_used
return int(remaining / rate)
def reset(self):
"""Reset budget for new session."""
self.tokens_used = 0
self.completions_tokens = 0
self.total_tool_output_chars = 0
self._level = BudgetLevel.NORMAL
self._history.clear()
def _log_transition(self, new_level: BudgetLevel, pct: float):
"""Log budget level transitions."""
msg = (
f"Token budget: {self._level.value} -> {new_level.value} "
f"({self.tokens_used}/{self.context_length} = {pct:.0%})"
)
if new_level == BudgetLevel.WARNING:
logger.warning(msg)
elif new_level == BudgetLevel.CAUTION:
logger.warning(msg)
elif new_level in (BudgetLevel.CRITICAL, BudgetLevel.STOP):
logger.error(msg)
else:
logger.info(msg)
def summary(self) -> str:
"""Human-readable budget summary."""
status = self.check()
turns = self.turns_remaining()
rate = self.growth_rate()
lines = [
f"Token Budget: {status.tokens_used:,} / {status.context_length:,} ({status.percent_used:.0%})",
f"Level: {status.level.value}",
f"Remaining: {status.tokens_remaining:,} tokens",
]
if rate is not None:
lines.append(f"Growth rate: ~{rate:,.0f} tokens/turn")
if turns is not None:
lines.append(f"Estimated turns left: ~{turns}")
if status.message:
lines.append(f"Action: {status.message}")
return "\n".join(lines)
# ── Convenience factory ───────────────────────────────────────────────
def create_budget(context_length: int, **kwargs) -> TokenBudget:
"""Create a TokenBudget with defaults."""
return TokenBudget(context_length=context_length, **kwargs)
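The threshold ladder itself is simple; a standalone sketch with the constants copied from above, not imported:

```python
# Threshold ladder copied from the module for illustration (60/80/90/95%).
LEVELS = [(0.95, "STOP"), (0.90, "CRITICAL"), (0.80, "CAUTION"), (0.60, "WARNING")]

def budget_level(tokens_used: int, context_length: int) -> str:
    pct = tokens_used / context_length if context_length > 0 else 0.0
    for threshold, name in LEVELS:
        if pct >= threshold:
            return name
    return "NORMAL"
```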


@@ -0,0 +1,156 @@
"""Tool fixation detection — break repetitive tool calling loops.
Detects when the agent latches onto one tool and calls it repeatedly
without making progress. Injects a nudge prompt to break the loop.
Usage:
from agent.tool_fixation_detector import ToolFixationDetector
detector = ToolFixationDetector()
nudge = detector.record("execute_code")
if nudge:
# Inject nudge into conversation
messages.append({"role": "system", "content": nudge})
"""
from __future__ import annotations
import os
from dataclasses import dataclass, field
from typing import Dict, List, Optional
# Default thresholds
_DEFAULT_THRESHOLD = int(os.getenv("TOOL_FIXATION_THRESHOLD", "5"))
_DEFAULT_WINDOW = int(os.getenv("TOOL_FIXATION_WINDOW", "10"))
@dataclass
class FixationEvent:
"""Record of a fixation detection."""
tool_name: str
streak_length: int
threshold: int
nudge_sent: bool = False
class ToolFixationDetector:
"""Detects and breaks tool fixation loops.
Tracks the sequence of tool calls and detects when the same tool
is called N times consecutively. When detected, returns a nudge
prompt to inject into the conversation.
"""
def __init__(self, threshold: int = 0, window: int = 0):
self.threshold = threshold or _DEFAULT_THRESHOLD
self.window = window or _DEFAULT_WINDOW
self._history: List[str] = []
self._current_streak: str = ""
self._streak_count: int = 0
self._nudges_sent: int = 0
self._events: List[FixationEvent] = []
@property
def nudges_sent(self) -> int:
return self._nudges_sent
@property
def events(self) -> List[FixationEvent]:
return list(self._events)
def record(self, tool_name: str) -> Optional[str]:
"""Record a tool call and return nudge prompt if fixation detected.
Args:
tool_name: Name of the tool that was called.
Returns:
Nudge prompt string if fixation detected, None otherwise.
"""
self._history.append(tool_name)
# Trim history to window
if len(self._history) > self.window:
self._history = self._history[-self.window:]
# Update streak
if tool_name == self._current_streak:
self._streak_count += 1
else:
self._current_streak = tool_name
self._streak_count = 1
# Check for fixation
if self._streak_count >= self.threshold:
event = FixationEvent(
tool_name=tool_name,
streak_length=self._streak_count,
threshold=self.threshold,
nudge_sent=True,
)
self._events.append(event)
self._nudges_sent += 1
return self._build_nudge(tool_name, self._streak_count)
return None
def _build_nudge(self, tool_name: str, count: int) -> str:
"""Build a nudge prompt to break the fixation loop."""
return (
f"[SYSTEM: You have called `{tool_name}` {count} times in a row "
f"without switching tools. This suggests a fixation loop. "
f"Consider:\n"
f"1. Is the tool returning an error? Read the error carefully.\n"
f"2. Is there a different tool that could help?\n"
f"3. Should you ask the user for clarification?\n"
f"4. Is the task actually complete?\n"
f"Break the loop by trying a different approach.]"
)
def reset(self) -> None:
"""Reset the detector state."""
self._history.clear()
self._current_streak = ""
self._streak_count = 0
def get_streak_info(self) -> dict:
"""Get current streak information."""
return {
"current_tool": self._current_streak,
"streak_count": self._streak_count,
"threshold": self.threshold,
"at_threshold": self._streak_count >= self.threshold,
"nudges_sent": self._nudges_sent,
}
def format_report(self) -> str:
"""Format fixation events as a report."""
if not self._events:
return "No tool fixation detected."
lines = [
f"Tool Fixation Report ({len(self._events)} events)",
"=" * 40,
]
for e in self._events:
lines.append(f" {e.tool_name}: {e.streak_length} consecutive calls (threshold: {e.threshold})")
return "\n".join(lines)
# Singleton
_detector: Optional[ToolFixationDetector] = None
def get_fixation_detector() -> ToolFixationDetector:
"""Get or create the singleton detector."""
global _detector
if _detector is None:
_detector = ToolFixationDetector()
return _detector
def reset_fixation_detector() -> None:
"""Reset the singleton."""
global _detector
_detector = None
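The core streak rule can be illustrated standalone (threshold lowered to 3 for brevity; the module defaults to 5):

```python
from typing import List, Optional

def detect_fixation(calls: List[str], threshold: int = 3) -> Optional[str]:
    """Return the tool name once it is called `threshold` times consecutively."""
    streak_tool, streak = None, 0
    for tool in calls:
        streak = streak + 1 if tool == streak_tool else 1
        streak_tool = tool
        if streak >= threshold:
            return tool
    return None
```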

agent/tool_orchestrator.py

@@ -0,0 +1,177 @@
"""Tool Orchestrator — Robust execution and circuit breaking for agent tools.
Provides a unified execution service that wraps the tool registry.
Implements the Circuit Breaker pattern to prevent the agent from getting
stuck in failure loops when a specific tool or its underlying service
is flapping or down.
Architecture:
Discovery (tools/registry.py) -> Orchestration (agent/tool_orchestrator.py) -> Dispatch
"""
import json
import time
import logging
import threading
from dataclasses import dataclass
from typing import Any, Dict, List, Optional, Tuple
from tools.registry import registry
logger = logging.getLogger(__name__)
class CircuitState:
"""States for the tool circuit breaker."""
CLOSED = "closed" # Normal operation
OPEN = "open" # Failing, execution blocked
HALF_OPEN = "half_open" # Testing if service recovered
@dataclass
class ToolStats:
"""Execution statistics for a tool."""
name: str
state: str = CircuitState.CLOSED
failures: int = 0
successes: int = 0
last_failure_time: float = 0
total_execution_time: float = 0
call_count: int = 0
class ToolOrchestrator:
"""Orchestrates tool execution with robustness patterns."""
def __init__(
self,
failure_threshold: int = 3,
reset_timeout: int = 300,
):
"""
Args:
failure_threshold: Number of failures before opening the circuit.
reset_timeout: Seconds to wait before transitioning from OPEN to HALF_OPEN.
"""
self.failure_threshold = failure_threshold
self.reset_timeout = reset_timeout
self._stats: Dict[str, ToolStats] = {}
self._lock = threading.Lock()
def _get_stats(self, name: str) -> ToolStats:
"""Get or initialize stats for a tool with thread-safe state transition."""
with self._lock:
if name not in self._stats:
self._stats[name] = ToolStats(name=name)
stats = self._stats[name]
# Transition from OPEN to HALF_OPEN if timeout expired
if stats.state == CircuitState.OPEN:
if time.time() - stats.last_failure_time > self.reset_timeout:
stats.state = CircuitState.HALF_OPEN
logger.info("Circuit breaker HALF_OPEN for tool: %s", name)
return stats
def _record_success(self, name: str, execution_time: float):
"""Record a successful tool execution and close the circuit."""
with self._lock:
stats = self._stats[name]
stats.successes += 1
stats.call_count += 1
stats.total_execution_time += execution_time
if stats.state != CircuitState.CLOSED:
logger.info("Circuit breaker CLOSED for tool: %s (recovered)", name)
stats.state = CircuitState.CLOSED
stats.failures = 0
def _record_failure(self, name: str, execution_time: float):
"""Record a failed tool execution and potentially open the circuit."""
with self._lock:
stats = self._stats[name]
stats.failures += 1
stats.call_count += 1
stats.total_execution_time += execution_time
stats.last_failure_time = time.time()
if stats.state == CircuitState.HALF_OPEN or stats.failures >= self.failure_threshold:
stats.state = CircuitState.OPEN
logger.warning(
"Circuit breaker OPEN for tool: %s (failures: %d)",
name, stats.failures
)
def dispatch(self, name: str, args: dict, **kwargs) -> str:
"""Execute a tool via the registry with circuit breaker protection."""
stats = self._get_stats(name)
if stats.state == CircuitState.OPEN:
return json.dumps({
"error": (
f"Tool '{name}' is temporarily unavailable due to repeated failures. "
f"Circuit breaker is OPEN. Please try again in a few minutes or use an alternative tool."
),
"circuit_breaker": True,
"tool_name": name
})
start_time = time.time()
try:
# Dispatch to the underlying registry
result_str = registry.dispatch(name, args, **kwargs)
execution_time = time.time() - start_time
# Inspect result for errors. registry.dispatch catches internal
# exceptions and returns a JSON error string.
is_error = False
try:
# Lightweight check for error key in JSON
if '"error":' in result_str:
res_json = json.loads(result_str)
if isinstance(res_json, dict) and "error" in res_json:
is_error = True
except (json.JSONDecodeError, TypeError):
# If it's not valid JSON, it's a malformed result (error)
is_error = True
if is_error:
self._record_failure(name, execution_time)
else:
self._record_success(name, execution_time)
return result_str
except Exception as e:
# This should rarely be hit as registry.dispatch catches most things,
# but we guard against orchestrator-level or registry-level bugs.
execution_time = time.time() - start_time
self._record_failure(name, execution_time)
error_msg = f"Tool orchestrator error during {name}: {type(e).__name__}: {e}"
logger.exception(error_msg)
return json.dumps({
"error": error_msg,
"tool_name": name,
"execution_time": execution_time
})
def get_fleet_stats(self) -> Dict[str, Any]:
"""Return execution statistics for all tools."""
with self._lock:
return {
name: {
"state": s.state,
"failures": s.failures,
"successes": s.successes,
"avg_time": s.total_execution_time / s.call_count if s.call_count > 0 else 0,
"calls": s.call_count
}
for name, s in self._stats.items()
}
# Global orchestrator instance
orchestrator = ToolOrchestrator()
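The breaker state machine, reduced to its transitions (a standalone sketch using the same state names; not the orchestrator class itself):

```python
import time

# CLOSED -> OPEN after `failure_threshold` consecutive failures,
# OPEN -> HALF_OPEN once `reset_timeout` seconds have passed,
# HALF_OPEN -> CLOSED on the next success (or back to OPEN on failure).
class Breaker:
    def __init__(self, failure_threshold: int = 3, reset_timeout: float = 300):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.state, self.failures, self.last_failure = "closed", 0, 0.0

    def allow(self) -> bool:
        if self.state == "open" and time.time() - self.last_failure > self.reset_timeout:
            self.state = "half_open"  # probe whether the service recovered
        return self.state != "open"

    def record(self, ok: bool) -> None:
        if ok:
            self.state, self.failures = "closed", 0
        else:
            self.failures += 1
            self.last_failure = time.time()
            if self.state == "half_open" or self.failures >= self.failure_threshold:
                self.state = "open"
```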


@@ -575,25 +575,6 @@ def has_known_pricing(
return entry is not None
def get_pricing(
model_name: str,
provider: Optional[str] = None,
base_url: Optional[str] = None,
api_key: Optional[str] = None,
) -> Dict[str, float]:
"""Backward-compatible thin wrapper for legacy callers.
Returns only non-cache input/output fields when a pricing entry exists.
Unknown routes return zeroes.
"""
entry = get_pricing_entry(model_name, provider=provider, base_url=base_url, api_key=api_key)
if not entry:
return {"input": 0.0, "output": 0.0}
return {
"input": float(entry.input_cost_per_million or _ZERO),
"output": float(entry.output_cost_per_million or _ZERO),
}
def format_duration_compact(seconds: float) -> str:
if seconds < 60:

ansible/fleet_mtls.yml

@@ -0,0 +1,32 @@
---
# fleet_mtls.yml — Deploy mutual-TLS certificates to all fleet agents.
#
# Prerequisites:
# 1. Run scripts/gen_fleet_ca.sh to create the fleet CA.
# 2. For each agent, run:
# scripts/gen_agent_cert.sh --agent timmy
# scripts/gen_agent_cert.sh --agent allegro
# scripts/gen_agent_cert.sh --agent ezra
#
# Usage:
# ansible-playbook -i inventory/fleet.ini ansible/fleet_mtls.yml
#
# Inventory example (inventory/fleet.ini):
# [fleet]
# timmy.local agent_name=timmy
# allegro.local agent_name=allegro
# ezra.local agent_name=ezra
#
# Refs #806
- name: Distribute fleet mTLS certificates
hosts: fleet
become: true
vars:
_pki_base: "{{ lookup('env', 'HOME') }}/.hermes/pki"
roles:
- role: hermes_mtls
vars:
hermes_mtls_local_ca_cert: "{{ _pki_base }}/ca/fleet-ca.crt"
hermes_mtls_local_agent_cert: "{{ _pki_base }}/agents/{{ agent_name }}/{{ agent_name }}.crt"
hermes_mtls_local_agent_key: "{{ _pki_base }}/agents/{{ agent_name }}/{{ agent_name }}.key"


@@ -0,0 +1,12 @@
# Example fleet inventory for mutual-TLS cert distribution.
# Copy to fleet.ini and adjust hostnames/IPs.
# Refs #806
[fleet]
timmy ansible_host=192.168.1.10 agent_name=timmy
allegro ansible_host=192.168.1.11 agent_name=allegro
ezra ansible_host=192.168.1.12 agent_name=ezra
[fleet:vars]
ansible_user=hermes
ansible_python_interpreter=/usr/bin/python3


@@ -0,0 +1,21 @@
---
# Default paths on the *control node* where certs are read from.
# Override these in your inventory / group_vars as needed.
# Fleet CA certificate (public; safe to push to all nodes)
fleet_mtls_ca_cert_src: "{{ lookup('env', 'HOME') }}/.hermes/pki/ca/fleet-ca.crt"
# Per-agent cert/key source dir on the control node.
# Expected layout: <fleet_mtls_agent_certs_dir>/<agent_name>/<agent_name>.{crt,key}
fleet_mtls_agent_certs_dir: "{{ lookup('env', 'HOME') }}/.hermes/pki/agents"
# Remote destination paths on the fleet node
fleet_mtls_remote_pki_dir: "/etc/hermes/pki"
fleet_mtls_remote_ca_dir: "{{ fleet_mtls_remote_pki_dir }}/ca"
fleet_mtls_remote_agent_dir: "{{ fleet_mtls_remote_pki_dir }}/agent"
# The agent name to deploy (set per-host in inventory, e.g. timmy / allegro / ezra)
fleet_mtls_agent_name: "{{ inventory_hostname_short }}"
# Hermes service name (for reload notification)
fleet_mtls_hermes_service: "hermes-a2a"


@@ -0,0 +1,7 @@
---
- name: Restart hermes-a2a
ansible.builtin.systemd:
name: "{{ fleet_mtls_hermes_service }}"
state: restarted
when: ansible_service_mgr == "systemd"
ignore_errors: true # service may not exist in all environments


@@ -0,0 +1,17 @@
---
galaxy_info:
role_name: fleet_mtls_certs
author: hermes-agent
description: >
Distribute fleet CA and per-agent mTLS certificates to Hermes fleet nodes.
Part of issue #806 — A2A mutual TLS between fleet agents.
min_ansible_version: "2.14"
platforms:
- name: Debian
versions: [bookworm, bullseye]
- name: Ubuntu
versions: ["22.04", "24.04"]
- name: EL
versions: ["8", "9"]
dependencies: []


@@ -0,0 +1,99 @@
---
# fleet_mtls_certs/tasks/main.yml
#
# Distribute the fleet CA certificate and the per-agent TLS cert+key to
# each fleet node. Triggers a hermes-a2a service restart when any cert
# changes.
#
# Refs #806 — A2A mutual TLS between fleet agents.
- name: Verify agent cert source files exist on control node
ansible.builtin.stat:
path: "{{ item }}"
register: _src_stat
delegate_to: localhost
loop:
- "{{ fleet_mtls_ca_cert_src }}"
- "{{ fleet_mtls_agent_certs_dir }}/{{ fleet_mtls_agent_name }}/{{ fleet_mtls_agent_name }}.crt"
- "{{ fleet_mtls_agent_certs_dir }}/{{ fleet_mtls_agent_name }}/{{ fleet_mtls_agent_name }}.key"
loop_control:
label: "{{ item | basename }}"
- name: Fail if any source cert is missing
ansible.builtin.fail:
msg: >
Required cert file not found: {{ item.item }}
Run scripts/gen_fleet_ca.sh and scripts/gen_agent_cert.sh --agent {{ fleet_mtls_agent_name }} first.
when: not item.stat.exists
loop: "{{ _src_stat.results }}"
loop_control:
label: "{{ item.item | basename }}"
# -----------------------------------------------------------------------
# Remote directory structure
# -----------------------------------------------------------------------
- name: Create remote PKI directories
ansible.builtin.file:
path: "{{ item }}"
state: directory
owner: root
group: root
mode: "0750"
loop:
- "{{ fleet_mtls_remote_pki_dir }}"
- "{{ fleet_mtls_remote_ca_dir }}"
- "{{ fleet_mtls_remote_agent_dir }}"
# -----------------------------------------------------------------------
# Fleet CA certificate (public — read-only for all)
# -----------------------------------------------------------------------
- name: Deploy fleet CA certificate
ansible.builtin.copy:
src: "{{ fleet_mtls_ca_cert_src }}"
dest: "{{ fleet_mtls_remote_ca_dir }}/fleet-ca.crt"
owner: root
group: root
mode: "0644"
notify: Restart hermes-a2a
# -----------------------------------------------------------------------
# Per-agent certificate (public portion)
# -----------------------------------------------------------------------
- name: Deploy agent certificate
ansible.builtin.copy:
src: "{{ fleet_mtls_agent_certs_dir }}/{{ fleet_mtls_agent_name }}/{{ fleet_mtls_agent_name }}.crt"
dest: "{{ fleet_mtls_remote_agent_dir }}/agent.crt"
owner: root
group: root
mode: "0644"
notify: Restart hermes-a2a
# -----------------------------------------------------------------------
# Per-agent private key (secret — root-only read)
# -----------------------------------------------------------------------
- name: Deploy agent private key
ansible.builtin.copy:
src: "{{ fleet_mtls_agent_certs_dir }}/{{ fleet_mtls_agent_name }}/{{ fleet_mtls_agent_name }}.key"
dest: "{{ fleet_mtls_remote_agent_dir }}/agent.key"
owner: root
group: root
mode: "0600"
no_log: true # suppress file content from Ansible output
notify: Restart hermes-a2a
# -----------------------------------------------------------------------
# Environment file for hermes-a2a systemd unit
# -----------------------------------------------------------------------
- name: Write hermes-a2a environment file
ansible.builtin.template:
src: hermes_a2a_env.j2
dest: /etc/hermes/a2a.env
owner: root
group: root
mode: "0640"
notify: Restart hermes-a2a


@@ -0,0 +1,10 @@
# Managed by Ansible — fleet_mtls_certs role
# Environment variables for the hermes-a2a systemd service.
# Source this file in the [Service] section: EnvironmentFile=/etc/hermes/a2a.env
HERMES_AGENT_NAME={{ fleet_mtls_agent_name }}
HERMES_A2A_CERT={{ fleet_mtls_remote_agent_dir }}/agent.crt
HERMES_A2A_KEY={{ fleet_mtls_remote_agent_dir }}/agent.key
HERMES_A2A_CA={{ fleet_mtls_remote_ca_dir }}/fleet-ca.crt
HERMES_A2A_HOST=0.0.0.0
HERMES_A2A_PORT=9443


@@ -0,0 +1,21 @@
---
# Ansible role: hermes_mtls
# Distributes fleet mTLS certificates to Hermes agent nodes.
#
# Required variables (set in inventory / group_vars / --extra-vars):
# hermes_mtls_local_ca_cert Local path on the Ansible controller to fleet-ca.crt
# hermes_mtls_local_agent_cert Local path to this agent's .crt file
# hermes_mtls_local_agent_key Local path to this agent's .key file
#
# Optional overrides:
hermes_mtls_cert_dir: /etc/hermes/certs
hermes_mtls_cert_owner: hermes
hermes_mtls_cert_group: hermes
hermes_mtls_cert_mode: "0640"
hermes_mtls_ca_cert_mode: "0644"
# Env file that Hermes reads on startup (systemd EnvironmentFile or .env)
hermes_mtls_env_file: /etc/hermes/mtls.env
# Hermes systemd service name — restarted after cert changes
hermes_mtls_service: hermes-gateway


@@ -0,0 +1,7 @@
---
- name: Restart hermes service
ansible.builtin.systemd:
name: "{{ hermes_mtls_service }}"
state: restarted
daemon_reload: true
when: ansible_service_mgr == "systemd"


@@ -0,0 +1,16 @@
---
galaxy_info:
role_name: hermes_mtls
author: Hermes Fleet
description: Distribute mTLS certificates to Hermes fleet nodes for A2A authentication
license: MIT
min_ansible_version: "2.14"
platforms:
- name: Ubuntu
versions: ["22.04", "24.04"]
- name: Debian
versions: ["12"]
- name: EL
versions: ["9"]
dependencies: []


@@ -0,0 +1,67 @@
---
# hermes_mtls role — distribute fleet mTLS certificates to a Hermes agent node.
#
# This role:
# 1. Creates the cert directory on the remote node
# 2. Copies the Fleet CA cert, agent cert, and agent key
# 3. Writes an env file with HERMES_MTLS_* variables
# 4. Restarts the Hermes service if any cert changed
- name: Ensure cert directory exists
ansible.builtin.file:
path: "{{ hermes_mtls_cert_dir }}"
state: directory
owner: "{{ hermes_mtls_cert_owner }}"
group: "{{ hermes_mtls_cert_group }}"
mode: "0750"
- name: Copy Fleet CA certificate
ansible.builtin.copy:
src: "{{ hermes_mtls_local_ca_cert }}"
dest: "{{ hermes_mtls_cert_dir }}/fleet-ca.crt"
owner: "{{ hermes_mtls_cert_owner }}"
group: "{{ hermes_mtls_cert_group }}"
mode: "{{ hermes_mtls_ca_cert_mode }}"
notify: Restart hermes service
- name: Copy agent TLS certificate
ansible.builtin.copy:
src: "{{ hermes_mtls_local_agent_cert }}"
dest: "{{ hermes_mtls_cert_dir }}/agent.crt"
owner: "{{ hermes_mtls_cert_owner }}"
group: "{{ hermes_mtls_cert_group }}"
mode: "{{ hermes_mtls_cert_mode }}"
notify: Restart hermes service
- name: Copy agent TLS private key
ansible.builtin.copy:
src: "{{ hermes_mtls_local_agent_key }}"
dest: "{{ hermes_mtls_cert_dir }}/agent.key"
owner: "{{ hermes_mtls_cert_owner }}"
group: "{{ hermes_mtls_cert_group }}"
mode: "0600"
notify: Restart hermes service
- name: Write mTLS environment file
ansible.builtin.template:
src: mtls.env.j2
dest: "{{ hermes_mtls_env_file }}"
owner: "{{ hermes_mtls_cert_owner }}"
group: "{{ hermes_mtls_cert_group }}"
mode: "0640"
notify: Restart hermes service
- name: Stat distributed cert files
ansible.builtin.stat:
path: "{{ item }}"
loop:
- "{{ hermes_mtls_cert_dir }}/fleet-ca.crt"
- "{{ hermes_mtls_cert_dir }}/agent.crt"
- "{{ hermes_mtls_cert_dir }}/agent.key"
register: _cert_stat
- name: Assert all cert files exist
ansible.builtin.assert:
that: item.stat.exists
fail_msg: "Expected cert file missing: {{ item.item }}"
loop: "{{ _cert_stat.results }}"

View File

@@ -0,0 +1,8 @@
# Hermes mTLS environment — generated by hermes_mtls Ansible role
# Source this file or use as a systemd EnvironmentFile=
# WARNING: This file contains the path to the agent's private key.
# Restrict read access to the hermes service user.
HERMES_MTLS_CERT={{ hermes_mtls_cert_dir }}/agent.crt
HERMES_MTLS_KEY={{ hermes_mtls_cert_dir }}/agent.key
HERMES_MTLS_CA={{ hermes_mtls_cert_dir }}/fleet-ca.crt
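The role is self-contained, so applying it only takes a short playbook. A minimal sketch follows; the `hermes_fleet` host group and the local `files/` cert paths are assumptions for illustration, not part of the role:

```yaml
---
# Example playbook: push fleet mTLS material to every Hermes node.
# Per-host agent certs are looked up by inventory_hostname (assumed layout).
- hosts: hermes_fleet
  become: true
  roles:
    - role: hermes_mtls
      vars:
        hermes_mtls_local_ca_cert: files/fleet-ca.crt
        hermes_mtls_local_agent_cert: "files/{{ inventory_hostname }}.crt"
        hermes_mtls_local_agent_key: "files/{{ inventory_hostname }}.key"
```

Because every copy task notifies `Restart hermes service`, the gateway is restarted at most once per play, and only when a cert, key, or the env file actually changed.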

View File

@@ -0,0 +1,40 @@
# Tool Call Benchmark: Gemma 4 vs mimo-v2-pro
Date: 2026-04-13
Status: Awaiting execution
## Test Design
100 diverse tool calls across 7 categories:
| Category | Count | Tools Tested |
|----------|-------|--------------|
| File operations | 20 | read_file, write_file, search_files |
| Terminal commands | 20 | terminal |
| Web search | 15 | web_search |
| Code execution | 15 | execute_code |
| Browser automation | 10 | browser_navigate |
| Delegation | 10 | delegate_task |
| MCP tools | 10 | mcp_* |
## Metrics
| Metric | mimo-v2-pro | Gemma 4 |
|--------|-------------|---------|
| Schema parse success | — | — |
| Tool execution success | — | — |
| Parallel tool success | — | — |
| Avg latency (s) | — | — |
| Token cost per call | — | — |
## How to Run
```bash
python3 benchmarks/tool_call_benchmark.py --model nous:xiaomi/mimo-v2-pro
python3 benchmarks/tool_call_benchmark.py --model ollama/gemma4:latest
python3 benchmarks/tool_call_benchmark.py --compare
```
## Gemma 4-Specific Failure Modes
To be documented after benchmark execution.
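Once the runs finish, the per-model numbers for this table can be tallied from the raw results JSON that the benchmark script embeds in its report. A minimal sketch, assuming each record carries the `model` and `success` fields of the script's `CallResult` dataclass:

```python
from collections import defaultdict

def tally_success(results: list[dict]) -> dict[str, float]:
    """Percent of calls per model where the expected tool was selected."""
    totals: dict[str, int] = defaultdict(int)
    wins: dict[str, int] = defaultdict(int)
    for r in results:
        totals[r["model"]] += 1
        wins[r["model"]] += int(r["success"])
    return {m: wins[m] / totals[m] * 100 for m in totals}

# Illustrative records only — real data comes from the report's Raw Results block.
sample = [
    {"model": "nous:xiaomi/mimo-v2-pro", "success": True},
    {"model": "nous:xiaomi/mimo-v2-pro", "success": False},
    {"model": "ollama/gemma4:latest", "success": True},
]
print(tally_success(sample))
```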

194
benchmarks/test_images.json Normal file
View File

@@ -0,0 +1,194 @@
[
{
"id": "screenshot_github_home",
"url": "https://github.githubassets.com/images/modules/logos_page/GitHub-Mark.png",
"category": "screenshot",
"expected_keywords": ["github", "logo", "mark"],
"ground_truth_ocr": "",
"expected_structure": {"min_length": 30, "min_sentences": 1, "has_numbers": false}
},
{
"id": "diagram_mermaid_flow",
"url": "https://mermaid.ink/img/pako:eNpdkE9PwzAMxb-K5VOl7gc7sAOIIDuAw9gptnRaSJLSJttQStmXs9LCH-ymBOI1ef_42U6cUSae4IkDxbAAWtB6siSZXVhjQTlgl1nigHg5fRBOzSfebopROCu_cytObSfgLSE1ANOeZWkO2IH5upZxYot8m1hqAdpD_63WRl0xdUG1jdl9kPiOb_EWk2JBtPaiKkF4eVIYgO0EtkW-RSgC4gJ6HJYRG1UNdN0HNVd0Bftjj7X8P92qPj-F8l8T3w",
"category": "diagram",
"expected_keywords": ["flow", "diagram", "process"],
"ground_truth_ocr": "",
"expected_structure": {"min_length": 50, "min_sentences": 2, "has_numbers": false}
},
{
"id": "photo_random_1",
"url": "https://picsum.photos/seed/vision1/400/300",
"category": "photo",
"expected_keywords": [],
"ground_truth_ocr": "",
"expected_structure": {"min_length": 30, "min_sentences": 1, "has_numbers": false}
},
{
"id": "photo_random_2",
"url": "https://picsum.photos/seed/vision2/400/300",
"category": "photo",
"expected_keywords": [],
"ground_truth_ocr": "",
"expected_structure": {"min_length": 30, "min_sentences": 1, "has_numbers": false}
},
{
"id": "chart_simple_bar",
"url": "https://quickchart.io/chart?c={type:'bar',data:{labels:['Q1','Q2','Q3','Q4'],datasets:[{label:'Revenue',data:[100,150,200,250]}]}}",
"category": "chart",
"expected_keywords": ["bar", "chart", "revenue"],
"ground_truth_ocr": "",
"expected_structure": {"min_length": 50, "min_sentences": 2, "has_numbers": true}
},
{
"id": "chart_pie",
"url": "https://quickchart.io/chart?c={type:'pie',data:{labels:['A','B','C'],datasets:[{data:[30,50,20]}]}}",
"category": "chart",
"expected_keywords": ["pie", "chart", "percentage"],
"ground_truth_ocr": "",
"expected_structure": {"min_length": 50, "min_sentences": 2, "has_numbers": true}
},
{
"id": "diagram_org_chart",
"url": "https://mermaid.ink/img/pako:eNpdkE9PwzAMxb-K5VOl7gc7sAOIIDuAw9gptnRaSJLSJttQStmXs9LCH-ymBOI1ef_42U6cUSae4IkDxbAAWtB6iuyIWyrLgXLALrPEAfFy-iCcmk-83RSjcFZ-51ac2k7AW0JqAKY9y9IcsAPzdS3jxBb5NrHUAraH_lutjbpi6oJqG7P7IPEd3-ItJsWCaO1FVYLw8qQwANsJbIt8i1AExAX0OCwjNqoa6LoPaq7oCvbHHmv5f7pVfX4K5b8mvg",
"category": "diagram",
"expected_keywords": ["organization", "hierarchy", "chart"],
"ground_truth_ocr": "",
"expected_structure": {"min_length": 50, "min_sentences": 2, "has_numbers": false}
},
{
"id": "screenshot_terminal",
"url": "https://raw.githubusercontent.com/nicehash/nicehash-quick-start/main/images/nicehash-terminal.png",
"category": "screenshot",
"expected_keywords": ["terminal", "command", "output"],
"ground_truth_ocr": "",
"expected_structure": {"min_length": 30, "min_sentences": 1, "has_numbers": false}
},
{
"id": "photo_random_3",
"url": "https://picsum.photos/seed/vision3/400/300",
"category": "photo",
"expected_keywords": [],
"ground_truth_ocr": "",
"expected_structure": {"min_length": 30, "min_sentences": 1, "has_numbers": false}
},
{
"id": "chart_line",
"url": "https://quickchart.io/chart?c={type:'line',data:{labels:['Jan','Feb','Mar','Apr'],datasets:[{label:'Temperature',data:[5,8,12,18]}]}}",
"category": "chart",
"expected_keywords": ["line", "chart", "temperature"],
"ground_truth_ocr": "",
"expected_structure": {"min_length": 50, "min_sentences": 2, "has_numbers": true}
},
{
"id": "diagram_sequence",
"url": "https://mermaid.ink/img/pako:eNpdkE9PwzAMxb-K5VOl7gc7sAOIIDuAw9gptnRaSJLSJttQStmXs9LCH-ymBOI1ef_42U6cUSae4IkDxbAAWtB6iuyIWyrLgXLALrPEAfFy-iCcmk-83RSjcFZ-51ac2k7AW0JqAKY9y9IcsAPzdS3jxBb5NrHUAraH_lutjbpi6oJqG7P7IPEd3-ItJsWCaO1FVYLw8qQwANsJbIt8i1AExAX0OCwjNqoa6LoPaq7oCvbHHmv5f7pVfX4K5b8mvg",
"category": "diagram",
"expected_keywords": ["sequence", "interaction", "message"],
"ground_truth_ocr": "",
"expected_structure": {"min_length": 50, "min_sentences": 2, "has_numbers": false}
},
{
"id": "photo_random_4",
"url": "https://picsum.photos/seed/vision4/400/300",
"category": "photo",
"expected_keywords": [],
"ground_truth_ocr": "",
"expected_structure": {"min_length": 30, "min_sentences": 1, "has_numbers": false}
},
{
"id": "screenshot_webpage",
"url": "https://github.githubassets.com/images/modules/site/social-cards.png",
"category": "screenshot",
"expected_keywords": ["github", "page", "web"],
"ground_truth_ocr": "",
"expected_structure": {"min_length": 30, "min_sentences": 1, "has_numbers": false}
},
{
"id": "chart_radar",
"url": "https://quickchart.io/chart?c={type:'radar',data:{labels:['Speed','Power','Defense','Magic'],datasets:[{label:'Hero',data:[80,60,70,90]}]}}",
"category": "chart",
"expected_keywords": ["radar", "chart", "skill"],
"ground_truth_ocr": "",
"expected_structure": {"min_length": 50, "min_sentences": 2, "has_numbers": true}
},
{
"id": "photo_random_5",
"url": "https://picsum.photos/seed/vision5/400/300",
"category": "photo",
"expected_keywords": [],
"ground_truth_ocr": "",
"expected_structure": {"min_length": 30, "min_sentences": 1, "has_numbers": false}
},
{
"id": "diagram_class",
"url": "https://mermaid.ink/img/pako:eNpdkE9PwzAMxb-K5VOl7gc7sAOIIDuAw9gptnRaSJLSJttQStmXs9LCH-ymBOI1ef_42U6cUSae4IkDxbAAWtB6iuyIWyrLgXLALrPEAfFy-iCcmk-83RSjcFZ-51ac2k7AW0JqAKY9y9IcsAPzdS3jxBb5NrHUAraH_lutjbpi6oJqG7P7IPEd3-ItJsWCaO1FVYLw8qQwANsJbIt8i1AExAX0OCwjNqoa6LoPaq7oCvbHHmv5f7pVfX4K5b8mvg",
"category": "diagram",
"expected_keywords": ["class", "object", "attribute"],
"ground_truth_ocr": "",
"expected_structure": {"min_length": 50, "min_sentences": 2, "has_numbers": false}
},
{
"id": "chart_doughnut",
"url": "https://quickchart.io/chart?c={type:'doughnut',data:{labels:['Desktop','Mobile','Tablet'],datasets:[{data:[60,30,10]}]}}",
"category": "chart",
"expected_keywords": ["doughnut", "chart", "device"],
"ground_truth_ocr": "",
"expected_structure": {"min_length": 50, "min_sentences": 2, "has_numbers": true}
},
{
"id": "photo_random_6",
"url": "https://picsum.photos/seed/vision6/400/300",
"category": "photo",
"expected_keywords": [],
"ground_truth_ocr": "",
"expected_structure": {"min_length": 30, "min_sentences": 1, "has_numbers": false}
},
{
"id": "screenshot_error",
"url": "https://http.cat/404.jpg",
"category": "screenshot",
"expected_keywords": ["404", "error", "cat"],
"ground_truth_ocr": "",
"expected_structure": {"min_length": 30, "min_sentences": 1, "has_numbers": true}
},
{
"id": "diagram_network",
"url": "https://mermaid.ink/img/pako:eNpdkE9PwzAMxb-K5VOl7gc7sAOIIDuAw9gptnRaSJLSJttQStmXs9LCH-ymBOI1ef_42U6cUSae4IkDxbAAWtB6iuyIWyrLgXLALrPEAfFy-iCcmk-83RSjcFZ-51ac2k7AW0JqAKY9y9IcsAPzdS3jxBb5NrHUAraH_lutjbpi6oJqG7P7IPEd3-ItJsWCaO1FVYLw8qQwANsJbIt8i1AExAX0OCwjNqoa6LoPaq7oCvbHHmv5f7pVfX4K5b8mvg",
"category": "diagram",
"expected_keywords": ["network", "node", "connection"],
"ground_truth_ocr": "",
"expected_structure": {"min_length": 50, "min_sentences": 2, "has_numbers": false}
},
{
"id": "photo_random_7",
"url": "https://picsum.photos/seed/vision7/400/300",
"category": "photo",
"expected_keywords": [],
"ground_truth_ocr": "",
"expected_structure": {"min_length": 30, "min_sentences": 1, "has_numbers": false}
},
{
"id": "chart_stacked_bar",
"url": "https://quickchart.io/chart?c={type:'bar',data:{labels:['2022','2023','2024'],datasets:[{label:'Cloud',data:[100,150,200]},{label:'On-prem',data:[200,180,160]}]},options:{scales:{x:{stacked:true},y:{stacked:true}}}}",
"category": "chart",
"expected_keywords": ["stacked", "bar", "chart"],
"ground_truth_ocr": "",
"expected_structure": {"min_length": 50, "min_sentences": 2, "has_numbers": true}
},
{
"id": "screenshot_dashboard",
"url": "https://github.githubassets.com/images/modules/site/features-code-search.png",
"category": "screenshot",
"expected_keywords": ["search", "code", "feature"],
"ground_truth_ocr": "",
"expected_structure": {"min_length": 30, "min_sentences": 1, "has_numbers": false}
},
{
"id": "photo_random_8",
"url": "https://picsum.photos/seed/vision8/400/300",
"category": "photo",
"expected_keywords": [],
"ground_truth_ocr": "",
"expected_structure": {"min_length": 30, "min_sentences": 1, "has_numbers": false}
}
]
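Every entry in the fixture shares the same shape, so a small validator catches schema drift before a benchmark run burns API calls. A sketch, assuming only the key layout shown above:

```python
REQUIRED_KEYS = {"id", "url", "category", "expected_keywords",
                 "ground_truth_ocr", "expected_structure"}

def validate_entries(entries: list[dict]) -> list[str]:
    """Return a list of problems found; an empty list means the fixture looks sane."""
    problems: list[str] = []
    seen_ids: set = set()
    for i, e in enumerate(entries):
        missing = REQUIRED_KEYS - e.keys()
        if missing:
            problems.append(f"entry {i}: missing keys {sorted(missing)}")
        if e.get("id") in seen_ids:
            problems.append(f"entry {i}: duplicate id {e.get('id')!r}")
        seen_ids.add(e.get("id"))
        struct = e.get("expected_structure", {})
        if not isinstance(struct.get("min_length"), int):
            problems.append(f"entry {i}: min_length must be an int")
    return problems

sample = [{"id": "photo_random_1",
           "url": "https://picsum.photos/seed/vision1/400/300",
           "category": "photo", "expected_keywords": [], "ground_truth_ocr": "",
           "expected_structure": {"min_length": 30, "min_sentences": 1,
                                  "has_numbers": False}}]
print(validate_entries(sample))
```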

View File

@@ -0,0 +1,614 @@
#!/usr/bin/env python3
"""
Tool-Calling Benchmark — Gemma 4 vs mimo-v2-pro regression test.
Runs 100 diverse tool-calling prompts through multiple models and compares
success rates, latency, and token costs.
Usage:
python3 benchmarks/tool_call_benchmark.py # full 100-call suite
python3 benchmarks/tool_call_benchmark.py --limit 10 # quick smoke test
python3 benchmarks/tool_call_benchmark.py --models nous # single model
python3 benchmarks/tool_call_benchmark.py --category file # single category
Requires: hermes-agent venv activated, OPENROUTER_API_KEY or equivalent.
"""
import argparse
import json
import os
import sys
import time
import traceback
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from pathlib import Path
from typing import Optional
# Ensure hermes-agent root is importable
REPO_ROOT = Path(__file__).resolve().parent.parent
sys.path.insert(0, str(REPO_ROOT))
# ---------------------------------------------------------------------------
# Test Definitions
# ---------------------------------------------------------------------------
@dataclass
class ToolCall:
"""A single tool-calling test case."""
id: str
category: str
prompt: str
expected_tool: str # tool name we expect the model to call
expected_params_check: str = "" # substring expected in JSON args
timeout: int = 30 # max seconds per call
notes: str = ""
# fmt: off
SUITE: list[ToolCall] = [
# ── File Operations (20) ──────────────────────────────────────────────
ToolCall("file-01", "file", "Read the file /tmp/test_bench.txt and show me its contents.",
"read_file", "path"),
ToolCall("file-02", "file", "Write 'hello benchmark' to /tmp/test_bench_out.txt",
"write_file", "path"),
ToolCall("file-03", "file", "Search for the word 'import' in all Python files in the current directory.",
"search_files", "pattern"),
ToolCall("file-04", "file", "Read lines 1-20 of /etc/hosts",
"read_file", "offset"),
ToolCall("file-05", "file", "Patch /tmp/test_bench_out.txt: replace 'hello' with 'goodbye'",
"patch", "old_string"),
ToolCall("file-06", "file", "Search for files matching *.py in the current directory.",
"search_files", "target"),
ToolCall("file-07", "file", "Read the first 10 lines of /etc/passwd",
"read_file", "limit"),
ToolCall("file-08", "file", "Write a JSON config to /tmp/bench_config.json with key 'debug': true",
"write_file", "content"),
ToolCall("file-09", "file", "Search for 'def test_' in Python test files.",
"search_files", "file_glob"),
ToolCall("file-10", "file", "Read /tmp/bench_config.json and tell me what's in it.",
"read_file", "bench_config"),
ToolCall("file-11", "file", "Create a file /tmp/bench_readme.md with one line: '# Benchmark'",
"write_file", "bench_readme"),
ToolCall("file-12", "file", "Search for 'TODO' comments in all .py files.",
"search_files", "TODO"),
ToolCall("file-13", "file", "Read /tmp/bench_readme.md",
"read_file", "bench_readme"),
ToolCall("file-14", "file", "Patch /tmp/bench_readme.md: replace '# Benchmark' with '# Tool Benchmark'",
"patch", "Tool Benchmark"),
ToolCall("file-15", "file", "Write a Python one-liner to /tmp/bench_hello.py that prints hello.",
"write_file", "bench_hello"),
ToolCall("file-16", "file", "Search for all .json files in /tmp/.",
"search_files", "json"),
ToolCall("file-17", "file", "Read /tmp/bench_hello.py and verify it has print('hello').",
"read_file", "bench_hello"),
ToolCall("file-18", "file", "Patch /tmp/bench_hello.py to print 'hello world' instead of 'hello'.",
"patch", "hello world"),
ToolCall("file-19", "file", "List files matching 'bench*' in /tmp/.",
"search_files", "bench"),
ToolCall("file-20", "file", "Read /tmp/test_bench.txt again and summarize its contents.",
"read_file", "test_bench"),
# ── Terminal Commands (20) ────────────────────────────────────────────
ToolCall("term-01", "terminal", "Run `echo hello world` in the terminal.",
"terminal", "echo"),
ToolCall("term-02", "terminal", "Run `date` to get the current date and time.",
"terminal", "date"),
ToolCall("term-03", "terminal", "Run `uname -a` to get system information.",
"terminal", "uname"),
ToolCall("term-04", "terminal", "Run `pwd` to show the current directory.",
"terminal", "pwd"),
ToolCall("term-05", "terminal", "Run `ls -la /tmp/ | head -20` to list temp files.",
"terminal", "head"),
ToolCall("term-06", "terminal", "Run `whoami` to show the current user.",
"terminal", "whoami"),
ToolCall("term-07", "terminal", "Run `df -h` to show disk usage.",
"terminal", "df"),
ToolCall("term-08", "terminal", "Run `python3 --version` to check Python version.",
"terminal", "python3"),
ToolCall("term-09", "terminal", "Run `cat /etc/hostname` to get the hostname.",
"terminal", "hostname"),
ToolCall("term-10", "terminal", "Run `uptime` to see system uptime.",
"terminal", "uptime"),
ToolCall("term-11", "terminal", "Run `env | grep PATH` to show the PATH variable.",
"terminal", "PATH"),
ToolCall("term-12", "terminal", "Run `wc -l /etc/passwd` to count lines.",
"terminal", "wc"),
ToolCall("term-13", "terminal", "Run `echo $SHELL` to show the current shell.",
"terminal", "SHELL"),
ToolCall("term-14", "terminal", "Run `free -h || vm_stat` to check memory usage.",
"terminal", "memory"),
ToolCall("term-15", "terminal", "Run `id` to show user and group IDs.",
"terminal", "id"),
ToolCall("term-16", "terminal", "Run `hostname` to get the machine hostname.",
"terminal", "hostname"),
ToolCall("term-17", "terminal", "Run `echo {1..5}` to test brace expansion.",
"terminal", "echo"),
ToolCall("term-18", "terminal", "Run `seq 1 5` to generate a number sequence.",
"terminal", "seq"),
ToolCall("term-19", "terminal", "Run `python3 -c 'print(2+2)'` to compute 2+2.",
"terminal", "print"),
ToolCall("term-20", "terminal", "Run `ls -d /tmp/bench* 2>/dev/null | wc -l` to count bench files.",
"terminal", "wc"),
# ── Code Execution (15) ──────────────────────────────────────────────
ToolCall("code-01", "code", "Execute a Python script that computes factorial of 10.",
"execute_code", "factorial"),
ToolCall("code-02", "code", "Run Python to read /tmp/test_bench.txt and count its words.",
"execute_code", "words"),
ToolCall("code-03", "code", "Execute Python to generate the first 20 Fibonacci numbers.",
"execute_code", "fibonacci"),
ToolCall("code-04", "code", "Run Python to parse JSON from a string and print keys.",
"execute_code", "json"),
ToolCall("code-05", "code", "Execute Python to list all files in /tmp/ matching 'bench*'.",
"execute_code", "glob"),
ToolCall("code-06", "code", "Run Python to compute the sum of squares from 1 to 100.",
"execute_code", "sum"),
ToolCall("code-07", "code", "Execute Python to check if 'racecar' is a palindrome.",
"execute_code", "palindrome"),
ToolCall("code-08", "code", "Run Python to create a CSV string with 5 rows of sample data.",
"execute_code", "csv"),
ToolCall("code-09", "code", "Execute Python to sort a list [5,2,8,1,9] and print the result.",
"execute_code", "sort"),
ToolCall("code-10", "code", "Run Python to count lines in /etc/passwd.",
"execute_code", "passwd"),
ToolCall("code-11", "code", "Execute Python to hash the string 'benchmark' with SHA256.",
"execute_code", "sha256"),
ToolCall("code-12", "code", "Run Python to get the current UTC timestamp.",
"execute_code", "utcnow"),
ToolCall("code-13", "code", "Execute Python to convert 'hello world' to uppercase and reverse it.",
"execute_code", "upper"),
ToolCall("code-14", "code", "Run Python to create a dictionary of system info (platform, python version).",
"execute_code", "sys"),
ToolCall("code-15", "code", "Execute Python to check internet connectivity by resolving google.com.",
"execute_code", "socket"),
# ── Delegation (10) ──────────────────────────────────────────────────
ToolCall("deleg-01", "delegate", "Use a subagent to find all .log files in /tmp/.",
"delegate_task", "log"),
ToolCall("deleg-02", "delegate", "Delegate to a subagent: what is 15 * 37?",
"delegate_task", "15"),
ToolCall("deleg-03", "delegate", "Use a subagent to check if Python 3 is installed and its version.",
"delegate_task", "python"),
ToolCall("deleg-04", "delegate", "Delegate: read /tmp/test_bench.txt and summarize it in one sentence.",
"delegate_task", "summarize"),
ToolCall("deleg-05", "delegate", "Use a subagent to list the contents of /tmp/ directory.",
"delegate_task", "tmp"),
ToolCall("deleg-06", "delegate", "Delegate: count the number of .py files in the current directory.",
"delegate_task", ".py"),
ToolCall("deleg-07", "delegate", "Use a subagent to check disk space with df -h.",
"delegate_task", "df"),
ToolCall("deleg-08", "delegate", "Delegate: what OS are we running on?",
"delegate_task", "os"),
ToolCall("deleg-09", "delegate", "Use a subagent to find the hostname of this machine.",
"delegate_task", "hostname"),
ToolCall("deleg-10", "delegate", "Delegate: create a temp file /tmp/bench_deleg.txt with 'done'.",
"delegate_task", "write"),
# ── Todo / Memory (10 — replacing web/browser/MCP which need external services) ──
ToolCall("todo-01", "todo", "Add a todo item: 'Run benchmark suite'",
"todo", "benchmark"),
ToolCall("todo-02", "todo", "Show me the current todo list.",
"todo", ""),
ToolCall("todo-03", "todo", "Mark the first todo item as completed.",
"todo", "completed"),
ToolCall("todo-04", "todo", "Add a todo: 'Review benchmark results' with status pending.",
"todo", "Review"),
ToolCall("todo-05", "todo", "Clear all completed todos.",
"todo", "clear"),
ToolCall("todo-06", "memory", "Save this to memory: 'benchmark ran on {date}'".format(
date=datetime.now().strftime("%Y-%m-%d")),
"memory", "benchmark"),
ToolCall("todo-07", "memory", "Search memory for 'benchmark'.",
"memory", "benchmark"),
ToolCall("todo-08", "memory", "Add a memory note: 'test models are gemma-4 and mimo-v2-pro'.",
"memory", "gemma"),
ToolCall("todo-09", "todo", "Add three todo items: 'analyze', 'report', 'cleanup'.",
"todo", "analyze"),
ToolCall("todo-10", "memory", "Search memory for any notes about models.",
"memory", "model"),
# ── Skills (10 — replacing MCP tools which need servers) ─────────────
ToolCall("skill-01", "skills", "List all available skills.",
"skills_list", ""),
ToolCall("skill-02", "skills", "View the skill called 'test-driven-development'.",
"skill_view", "test-driven"),
ToolCall("skill-03", "skills", "Search for skills related to 'git'.",
"skills_list", "git"),
ToolCall("skill-04", "skills", "View the 'code-review' skill.",
"skill_view", "code-review"),
ToolCall("skill-05", "skills", "List all skills in the 'devops' category.",
"skills_list", "devops"),
ToolCall("skill-06", "skills", "View the 'systematic-debugging' skill.",
"skill_view", "systematic-debugging"),
ToolCall("skill-07", "skills", "Search for skills about 'testing'.",
"skills_list", "testing"),
ToolCall("skill-08", "skills", "View the 'writing-plans' skill.",
"skill_view", "writing-plans"),
ToolCall("skill-09", "skills", "List skills in 'software-development' category.",
"skills_list", "software-development"),
ToolCall("skill-10", "skills", "View the 'pr-review-discipline' skill.",
"skill_view", "pr-review"),
# ── Additional tests to reach 100 ────────────────────────────────────
ToolCall("file-21", "file", "Write a Python snippet to /tmp/bench_sort.py that sorts [3,1,2].",
"write_file", "bench_sort"),
ToolCall("file-22", "file", "Read /tmp/bench_sort.py back and confirm it exists.",
"read_file", "bench_sort"),
ToolCall("file-23", "file", "Search for 'class' in all .py files in the benchmarks directory.",
"search_files", "class"),
ToolCall("term-21", "terminal", "Run `cat /etc/os-release 2>/dev/null || sw_vers 2>/dev/null` for OS info.",
"terminal", "os"),
ToolCall("term-22", "terminal", "Run `nproc 2>/dev/null || sysctl -n hw.ncpu 2>/dev/null` for CPU count.",
"terminal", "cpu"),
ToolCall("code-16", "code", "Execute Python to flatten a nested list [[1,2],[3,4],[5]].",
"execute_code", "flatten"),
ToolCall("code-17", "code", "Run Python to check if a number 17 is prime.",
"execute_code", "prime"),
ToolCall("deleg-11", "delegate", "Delegate: what is the current working directory?",
"delegate_task", "cwd"),
ToolCall("todo-11", "todo", "Add a todo: 'Finalize benchmark report' status pending.",
"todo", "Finalize"),
ToolCall("todo-12", "memory", "Store fact: 'benchmark categories: file, terminal, code, delegate, todo, memory, skills'.",
"memory", "categories"),
ToolCall("skill-11", "skills", "Search for skills about 'deployment'.",
"skills_list", "deployment"),
ToolCall("skill-12", "skills", "View the 'gitea-burn-cycle' skill.",
"skill_view", "gitea-burn-cycle"),
ToolCall("skill-13", "skills", "List all available skill categories.",
"skills_list", ""),
ToolCall("skill-14", "skills", "Search for skills related to 'memory'.",
"skills_list", "memory"),
ToolCall("skill-15", "skills", "View the 'mimo-swarm' skill.",
"skill_view", "mimo-swarm"),
]
# fmt: on
# ---------------------------------------------------------------------------
# Runner
# ---------------------------------------------------------------------------
@dataclass
class CallResult:
test_id: str
category: str
model: str
prompt: str
expected_tool: str
success: bool
schema_ok: bool = False  # model produced a well-formed tool call
tool_called: Optional[str] = None
tool_args_valid: bool = False
execution_ok: bool = False
latency_s: float = 0.0
error: str = ""
raw_response: str = ""
@dataclass
class ModelStats:
model: str
total: int = 0
schema_ok: int = 0 # model produced valid tool call JSON
exec_ok: int = 0 # tool actually ran without error
latency_sum: float = 0.0
failures: list = field(default_factory=list)
@property
def schema_pct(self) -> float:
return (self.schema_ok / self.total * 100) if self.total else 0
@property
def exec_pct(self) -> float:
return (self.exec_ok / self.total * 100) if self.total else 0
@property
def avg_latency(self) -> float:
return (self.latency_sum / self.total) if self.total else 0
def setup_test_files():
"""Create prerequisite files for the benchmark."""
Path("/tmp/test_bench.txt").write_text(
"This is a benchmark test file.\n"
"It contains sample data for tool-calling tests.\n"
"Line three has some import statements.\n"
"import os\nimport sys\nimport json\n"
"End of test data.\n"
)
def run_single_test(tc: ToolCall, model_spec: str, provider: str) -> CallResult:
"""Run a single tool-calling test through the agent."""
from run_agent import AIAgent
result = CallResult(
test_id=tc.id,
category=tc.category,
model=model_spec,
prompt=tc.prompt,
expected_tool=tc.expected_tool,
success=False,
)
try:
agent = AIAgent(
model=model_spec,
provider=provider,
max_iterations=3,
quiet_mode=True,
skip_context_files=True,
skip_memory=True,
persist_session=False,
)
t0 = time.time()
conv = agent.run_conversation(
user_message=tc.prompt,
system_message=(
"You are a benchmark test runner. Execute the user's request by calling "
"the appropriate tool. Return the tool result directly. Do not add commentary."
),
)
result.latency_s = round(time.time() - t0, 2)
messages = conv.get("messages", [])
# Find the first assistant message with tool_calls
tool_called = None
tool_args_str = ""
for msg in messages:
if msg.get("role") == "assistant" and msg.get("tool_calls"):
for tc_item in msg["tool_calls"]:
fn = tc_item.get("function", {})
tool_called = fn.get("name", "")
tool_args_str = fn.get("arguments", "{}")
break
break
if tool_called:
result.tool_called = tool_called
result.schema_ok = True
# Check if the right tool was called
if tool_called == tc.expected_tool:
result.success = True
# Check if args contain expected substring
if tc.expected_params_check:
result.tool_args_valid = tc.expected_params_check in tool_args_str
else:
result.tool_args_valid = True
# Check if tool executed (look for tool role message)
for msg in messages:
if msg.get("role") == "tool":
content = msg.get("content", "")
if content and "error" not in content.lower()[:50]:
result.execution_ok = True
break
elif content:
result.execution_ok = True # got a response, even if error
break
else:
# No tool call produced — still check if model responded
final = conv.get("final_response", "")
result.raw_response = final[:200] if final else ""
except Exception as e:
result.error = f"{type(e).__name__}: {str(e)[:200]}"
result.latency_s = round(time.time() - t0, 2) if "t0" in locals() else 0
return result
def generate_report(results: list[CallResult], models: list[str], output_path: Path):
"""Generate markdown benchmark report."""
now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
# Aggregate per model
stats: dict[str, ModelStats] = {}
for m in models:
stats[m] = ModelStats(model=m)
by_category: dict[str, dict[str, list[CallResult]]] = {}
for r in results:
s = stats[r.model]
s.total += 1
s.schema_ok += int(r.schema_ok)
s.exec_ok += int(r.execution_ok)
s.latency_sum += r.latency_s
if not r.success:
s.failures.append(r)
by_category.setdefault(r.category, {}).setdefault(r.model, []).append(r)
lines = [
f"# Tool-Calling Benchmark Report",
f"",
f"Generated: {now}",
f"Suite: {len(SUITE)} calls across {len(set(tc.category for tc in SUITE))} categories",
f"Models tested: {', '.join(models)}",
f"",
f"## Summary",
f"",
f"| Metric | {' | '.join(models)} |",
f"|--------|{'|'.join('---------' for _ in models)}|",
]
# Schema parse success
row = "| Schema parse success | "
for m in models:
s = stats[m]
row += f"{s.schema_ok}/{s.total} ({s.schema_pct:.0f}%) | "
lines.append(row)
# Tool execution success
row = "| Tool execution success | "
for m in models:
s = stats[m]
row += f"{s.exec_ok}/{s.total} ({s.exec_pct:.0f}%) | "
lines.append(row)
# Correct tool selected
row = "| Correct tool selected | "
for m in models:
s = stats[m]
correct = sum(1 for r in results if r.model == m and r.success)
pct = (correct / s.total * 100) if s.total else 0
row += f"{correct}/{s.total} ({pct:.0f}%) | "
lines.append(row)
# Avg latency
row = "| Avg latency (s) | "
for m in models:
s = stats[m]
row += f"{s.avg_latency:.2f} | "
lines.append(row)
lines.append("")
# Per-category breakdown
lines.append("## Per-Category Breakdown")
lines.append("")
for cat in sorted(by_category.keys()):
lines.append(f"### {cat.title()}")
lines.append("")
lines.append(f"| Metric | {' | '.join(models)} |")
lines.append(f"|--------|{'|'.join('---------' for _ in models)}|")
cat_data = by_category[cat]
for metric_name, fn in [
("Schema OK", lambda r: r.schema_ok),
("Exec OK", lambda r: r.execution_ok),
("Correct tool", lambda r: r.success),
]:
row = f"| {metric_name} | "
for m in models:
results_m = cat_data.get(m, [])
total = len(results_m)
ok = sum(1 for r in results_m if fn(r))
pct = (ok / total * 100) if total else 0
row += f"{ok}/{total} ({pct:.0f}%) | "
lines.append(row)
lines.append("")
# Failure analysis
lines.append("## Failure Analysis")
lines.append("")
any_failures = False
for m in models:
s = stats[m]
if s.failures:
any_failures = True
lines.append(f"### {m}: {len(s.failures)} failures")
lines.append("")
lines.append("| Test | Category | Expected | Got | Error |")
lines.append("|------|----------|----------|-----|-------|")
for r in s.failures:
got = r.tool_called or "none"
err = r.error or "wrong tool"
lines.append(f"| {r.test_id} | {r.category} | {r.expected_tool} | {got} | {err[:60]} |")
lines.append("")
if not any_failures:
lines.append("No failures detected.")
lines.append("")
# Raw results JSON
lines.append("## Raw Results")
lines.append("")
lines.append("```json")
lines.append(json.dumps([asdict(r) for r in results], indent=2, default=str))
lines.append("```")
report = "\n".join(lines)
output_path.write_text(report)
return report
def main():
parser = argparse.ArgumentParser(description="Tool-calling benchmark")
parser.add_argument("--models", nargs="+",
default=["nous:gia-3/gemma-4-31b", "nous:mimo-v2-pro"],
help="Model specs to test (provider:model)")
parser.add_argument("--limit", type=int, default=0,
help="Run only first N tests (0 = all)")
parser.add_argument("--category", type=str, default="",
help="Run only tests in this category")
parser.add_argument("--output", type=str, default="",
help="Output report path (default: benchmarks/gemma4-tool-calling-YYYY-MM-DD.md)")
parser.add_argument("--dry-run", action="store_true",
help="Print test cases without running them")
args = parser.parse_args()
# Filter suite
suite = SUITE[:]
if args.category:
suite = [tc for tc in suite if tc.category == args.category]
if args.limit > 0:
suite = suite[:args.limit]
if args.dry_run:
print(f"Would run {len(suite)} tests:")
for tc in suite:
print(f" [{tc.category:8s}] {tc.id}: {tc.expected_tool} | {tc.prompt[:60]}")
return
# Setup
setup_test_files()
date_str = datetime.now().strftime("%Y-%m-%d")
output_path = Path(args.output) if args.output else REPO_ROOT / "benchmarks" / f"gemma4-tool-calling-{date_str}.md"
# Parse model specs
model_specs = []
for spec in args.models:
parts = spec.split(":", 1)
provider = parts[0]
model_name = parts[1] if len(parts) > 1 else parts[0]
model_specs.append((provider, model_name, spec))
print(f"Benchmark: {len(suite)} tests × {len(model_specs)} models = {len(suite) * len(model_specs)} calls")
print(f"Output: {output_path}")
print()
all_results: list[CallResult] = []
for provider, model_name, full_spec in model_specs:
print(f"── {full_spec} {'─' * (50 - len(full_spec))}")
model_results = []
for i, tc in enumerate(suite, 1):
sys.stdout.write(f"\r [{i:3d}/{len(suite)}] {tc.id:10s} {tc.category:8s} → {tc.expected_tool:20s}")
sys.stdout.flush()
r = run_single_test(tc, full_spec, provider)
model_results.append(r)
status = "✓" if r.success else "✗"
sys.stdout.write(f" {status} ({r.latency_s:.1f}s)")
sys.stdout.write("\n")
all_results.extend(model_results)
# Quick stats
ok = sum(1 for r in model_results if r.success)
print(f" Result: {ok}/{len(model_results)} correct tool selected ({ok/len(model_results)*100:.0f}%)")
print()
# Generate report
model_names = [spec for _, _, spec in model_specs]
report = generate_report(all_results, model_names, output_path)
print(f"Report written to {output_path}")
# Exit code: 0 if all pass, 1 if any failures
total_fail = sum(1 for r in all_results if not r.success)
sys.exit(1 if total_fail > 0 else 0)
if __name__ == "__main__":
main()


@@ -0,0 +1,635 @@
#!/usr/bin/env python3
"""
Vision Benchmark Suite — Issue #817
Compares Gemma 4 vision accuracy vs current approach (Gemini 3 Flash Preview).
Measures OCR accuracy, description quality, latency, and token usage.
Usage:
# Run full benchmark
python benchmarks/vision_benchmark.py --images benchmarks/test_images.json
# Single image test
python benchmarks/vision_benchmark.py --url https://example.com/image.png
# Generate test report
python benchmarks/vision_benchmark.py --images benchmarks/test_images.json --output benchmarks/vision_results.json
Test image dataset: benchmarks/test_images.json (50-100 diverse images)
"""
import argparse
import asyncio
import base64
import json
import os
import statistics
import sys
import time
from datetime import datetime, timezone
from pathlib import Path
from typing import Any, Dict, List, Optional
# ---------------------------------------------------------------------------
# Benchmark configuration
# ---------------------------------------------------------------------------
# Models to compare
MODELS = {
"gemma4": {
"model_id": "google/gemma-4-27b-it",
"display_name": "Gemma 4 27B",
"provider": "nous",
"description": "Google's multimodal Gemma 4 model",
},
"gemini3_flash": {
"model_id": "google/gemini-3-flash-preview",
"display_name": "Gemini 3 Flash Preview",
"provider": "openrouter",
"description": "Current default vision model",
},
}
# Evaluation prompts for different test categories
EVAL_PROMPTS = {
"screenshot": "Describe this screenshot in detail. What application is shown? What is the current state of the UI?",
"diagram": "Describe this diagram completely. What concepts does it illustrate? List all components and their relationships.",
"photo": "Describe this photo in detail. What objects are visible? What is the scene?",
"ocr": "Extract ALL text visible in this image. Return it exactly as written, preserving formatting.",
"chart": "What data does this chart show? List all axes labels, values, and key trends.",
"document": "Extract all text from this document image. Preserve paragraph structure.",
}
# ---------------------------------------------------------------------------
# Vision model interface
# ---------------------------------------------------------------------------
async def analyze_with_model(
image_url: str,
prompt: str,
model_config: dict,
timeout: float = 120.0,
) -> dict:
"""Call a vision model and return structured results.
Returns dict with:
- analysis: str
- latency_ms: float
- tokens: dict (prompt_tokens, completion_tokens, total_tokens)
- success: bool
- error: str (if failed)
"""
import httpx
provider = model_config["provider"]
model_id = model_config["model_id"]
# Prepare messages
messages = [
{
"role": "user",
"content": [
{"type": "text", "text": prompt},
{"type": "image_url", "image_url": {"url": image_url}},
],
}
]
# Route to provider
if provider == "openrouter":
api_url = "https://openrouter.ai/api/v1/chat/completions"
api_key = os.getenv("OPENROUTER_API_KEY", "")
elif provider == "nous":
api_url = "https://inference.nousresearch.com/v1/chat/completions"
api_key = os.getenv("NOUS_API_KEY", "") or os.getenv("NOUS_INFERENCE_API_KEY", "")
else:
api_url = os.getenv(f"{provider.upper()}_API_URL", "")
api_key = os.getenv(f"{provider.upper()}_API_KEY", "")
if not api_key:
return {
"analysis": "",
"latency_ms": 0,
"tokens": {},
"success": False,
"error": f"No API key for provider {provider}",
}
headers = {
"Authorization": f"Bearer {api_key}",
"Content-Type": "application/json",
}
payload = {
"model": model_id,
"messages": messages,
"max_tokens": 2000,
"temperature": 0.1,
}
start = time.perf_counter()
try:
async with httpx.AsyncClient(timeout=timeout) as client:
resp = await client.post(api_url, json=payload, headers=headers)
resp.raise_for_status()
data = resp.json()
latency_ms = (time.perf_counter() - start) * 1000
analysis = ""
choices = data.get("choices", [])
if choices:
msg = choices[0].get("message", {})
analysis = msg.get("content", "")
usage = data.get("usage", {})
tokens = {
"prompt_tokens": usage.get("prompt_tokens", 0),
"completion_tokens": usage.get("completion_tokens", 0),
"total_tokens": usage.get("total_tokens", 0),
}
return {
"analysis": analysis,
"latency_ms": round(latency_ms, 1),
"tokens": tokens,
"success": True,
"error": "",
}
except Exception as e:
return {
"analysis": "",
"latency_ms": round((time.perf_counter() - start) * 1000, 1),
"tokens": {},
"success": False,
"error": str(e),
}
# ---------------------------------------------------------------------------
# Evaluation metrics
# ---------------------------------------------------------------------------
def compute_ocr_accuracy(extracted: str, ground_truth: str) -> float:
"""Compute approximate OCR accuracy: positional character overlap blended with word-level recall (a fast proxy, not true Levenshtein).
Returns 0.0-1.0 (1.0 = perfect match).
"""
if not ground_truth:
return 1.0 if not extracted else 0.0
if not extracted:
return 0.0
# Normalize case and surrounding whitespace before comparing
extracted_lower = extracted.lower().strip()
truth_lower = ground_truth.lower().strip()
# Simple character overlap ratio (fast proxy)
max_len = max(len(extracted_lower), len(truth_lower))
if max_len == 0:
return 1.0
# Count matching characters at matching positions
matches = sum(1 for a, b in zip(extracted_lower, truth_lower) if a == b)
position_ratio = matches / max_len
# Also check word-level overlap
extracted_words = set(extracted_lower.split())
truth_words = set(truth_lower.split())
if truth_words:
word_recall = len(extracted_words & truth_words) / len(truth_words)
else:
word_recall = 1.0 if not extracted_words else 0.0
return round((position_ratio * 0.4 + word_recall * 0.6), 4)
def compute_description_completeness(analysis: str, expected_keywords: list) -> float:
"""Score description completeness based on keyword coverage.
Returns 0.0-1.0.
"""
if not expected_keywords:
return 1.0
if not analysis:
return 0.0
analysis_lower = analysis.lower()
found = sum(1 for kw in expected_keywords if kw.lower() in analysis_lower)
return round(found / len(expected_keywords), 4)
def compute_structural_accuracy(analysis: str, expected_structure: dict) -> dict:
"""Evaluate structural elements of the analysis.
Returns dict with per-element scores.
"""
scores = {}
# Length check
min_length = expected_structure.get("min_length", 50)
scores["length"] = min(len(analysis) / min_length, 1.0) if min_length > 0 else 1.0
# Sentence count
min_sentences = expected_structure.get("min_sentences", 2)
sentence_count = analysis.count(".") + analysis.count("!") + analysis.count("?")
scores["sentences"] = min(sentence_count / max(min_sentences, 1), 1.0)
# Has specifics (numbers, names, etc.)
if expected_structure.get("has_numbers", False):
import re
scores["has_numbers"] = 1.0 if re.search(r'\d', analysis) else 0.0
return scores
# ---------------------------------------------------------------------------
# Benchmark runner
# ---------------------------------------------------------------------------
async def run_single_test(
image: dict,
models: dict,
runs_per_model: int = 1,
) -> dict:
"""Run a single image through all models.
Args:
image: dict with url, category, expected_keywords, ground_truth_ocr, etc.
models: dict of model configs to test
runs_per_model: number of runs per model (for consistency testing)
Returns dict with results per model.
"""
category = image.get("category", "photo")
prompt = EVAL_PROMPTS.get(category, EVAL_PROMPTS["photo"])
url = image["url"]
results = {}
for model_name, model_config in models.items():
runs = []
for run_i in range(runs_per_model):
result = await analyze_with_model(url, prompt, model_config)
runs.append(result)
if run_i < runs_per_model - 1:
await asyncio.sleep(1) # Rate limit courtesy
# Aggregate
successful = [r for r in runs if r["success"]]
if successful:
avg_latency = statistics.mean(r["latency_ms"] for r in successful)
avg_tokens = statistics.mean(
r["tokens"].get("total_tokens", 0) for r in successful
)
# Use first successful run for accuracy metrics
primary = successful[0]
# Compute accuracy
ocr_score = None
if image.get("ground_truth_ocr"):
ocr_score = compute_ocr_accuracy(
primary["analysis"], image["ground_truth_ocr"]
)
keyword_score = None
if image.get("expected_keywords"):
keyword_score = compute_description_completeness(
primary["analysis"], image["expected_keywords"]
)
structural = compute_structural_accuracy(
primary["analysis"], image.get("expected_structure", {})
)
results[model_name] = {
"success": True,
"analysis_preview": primary["analysis"][:300],
"analysis_length": len(primary["analysis"]),
"avg_latency_ms": round(avg_latency, 1),
"avg_tokens": round(avg_tokens, 1),
"ocr_accuracy": ocr_score,
"keyword_completeness": keyword_score,
"structural_scores": structural,
"consistency": round(
statistics.stdev(len(r["analysis"]) for r in successful), 1
) if len(successful) > 1 else 0.0,
"runs": len(successful),
"errors": len(runs) - len(successful),
}
else:
results[model_name] = {
"success": False,
"error": runs[0]["error"] if runs else "No runs",
"runs": 0,
"errors": len(runs),
}
return results
async def run_benchmark_suite(
images: List[dict],
models: dict,
runs_per_model: int = 1,
) -> dict:
"""Run the full benchmark suite.
Args:
images: list of image test cases
models: model configs to compare
runs_per_model: consistency runs per image
Returns structured benchmark report.
"""
total = len(images)
all_results = []
print(f"\nRunning vision benchmark: {total} images × {len(models)} models × {runs_per_model} runs")
print(f"Models: {', '.join(m['display_name'] for m in models.values())}\n")
for i, image in enumerate(images):
img_id = image.get("id", f"img_{i}")
category = image.get("category", "unknown")
print(f" [{i+1}/{total}] {img_id} ({category})...", end=" ", flush=True)
result = await run_single_test(image, models, runs_per_model)
result["image_id"] = img_id
result["category"] = category
all_results.append(result)
# Quick status
statuses = []
for mname in models:
if result[mname]["success"]:
lat = result[mname]["avg_latency_ms"]
statuses.append(f"{mname}:{lat:.0f}ms")
else:
statuses.append(f"{mname}:FAIL")
print(", ".join(statuses))
# Aggregate statistics
summary = aggregate_results(all_results, models)
return {
"generated_at": datetime.now(timezone.utc).isoformat(),
"config": {
"total_images": total,
"runs_per_model": runs_per_model,
"models": {k: v["display_name"] for k, v in models.items()},
},
"results": all_results,
"summary": summary,
}
def aggregate_results(results: List[dict], models: dict) -> dict:
"""Compute aggregate statistics across all test images."""
summary = {}
for model_name in models:
model_results = [r[model_name] for r in results if r[model_name]["success"]]
failed = [r[model_name] for r in results if not r[model_name]["success"]]
if not model_results:
summary[model_name] = {"success_rate": 0, "error": "All runs failed"}
continue
latencies = [r["avg_latency_ms"] for r in model_results]
tokens = [r["avg_tokens"] for r in model_results if r.get("avg_tokens")]
ocr_scores = [r["ocr_accuracy"] for r in model_results if r.get("ocr_accuracy") is not None]
keyword_scores = [r["keyword_completeness"] for r in model_results if r.get("keyword_completeness") is not None]
summary[model_name] = {
"success_rate": round(len(model_results) / (len(model_results) + len(failed)), 4),
"total_runs": len(model_results),
"total_failures": len(failed),
"latency": {
"mean_ms": round(statistics.mean(latencies), 1),
"median_ms": round(statistics.median(latencies), 1),
"p95_ms": round(sorted(latencies)[int(len(latencies) * 0.95)], 1),
"std_ms": round(statistics.stdev(latencies), 1) if len(latencies) > 1 else 0,
},
"tokens": {
"mean_total": round(statistics.mean(tokens), 1) if tokens else 0,
"total_used": sum(int(t) for t in tokens),
},
"accuracy": {
"ocr_mean": round(statistics.mean(ocr_scores), 4) if ocr_scores else None,
"ocr_count": len(ocr_scores),
"keyword_mean": round(statistics.mean(keyword_scores), 4) if keyword_scores else None,
"keyword_count": len(keyword_scores),
},
}
return summary
# ---------------------------------------------------------------------------
# Report generation
# ---------------------------------------------------------------------------
def to_markdown(report: dict) -> str:
"""Generate human-readable markdown report."""
summary = report["summary"]
config = report["config"]
model_names = list(config["models"].values())
lines = [
"# Vision Benchmark Report",
"",
f"Generated: {report['generated_at'][:16]}",
f"Images tested: {config['total_images']}",
f"Runs per model: {config['runs_per_model']}",
f"Models: {', '.join(model_names)}",
"",
"## Latency Comparison",
"",
"| Model | Mean (ms) | Median | P95 | Std Dev |",
"|-------|-----------|--------|-----|---------|",
]
for mkey, mname in config["models"].items():
if mkey in summary and "latency" in summary[mkey]:
lat = summary[mkey]["latency"]
lines.append(
f"| {mname} | {lat['mean_ms']:.0f} | {lat['median_ms']:.0f} | "
f"{lat['p95_ms']:.0f} | {lat['std_ms']:.0f} |"
)
lines += [
"",
"## Accuracy Comparison",
"",
"| Model | OCR Accuracy | Keyword Coverage | Success Rate |",
"|-------|-------------|-----------------|--------------|",
]
for mkey, mname in config["models"].items():
if mkey in summary and "accuracy" in summary[mkey]:
acc = summary[mkey]["accuracy"]
sr = summary[mkey].get("success_rate", 0)
ocr = f"{acc['ocr_mean']:.1%}" if acc["ocr_mean"] is not None else "N/A"
kw = f"{acc['keyword_mean']:.1%}" if acc["keyword_mean"] is not None else "N/A"
lines.append(f"| {mname} | {ocr} | {kw} | {sr:.1%} |")
lines += [
"",
"## Token Usage",
"",
"| Model | Mean Tokens/Image | Total Tokens |",
"|-------|------------------|--------------|",
]
for mkey, mname in config["models"].items():
if mkey in summary and "tokens" in summary[mkey]:
tok = summary[mkey]["tokens"]
lines.append(
f"| {mname} | {tok['mean_total']:.0f} | {tok['total_used']} |"
)
# Verdict
lines += ["", "## Verdict", ""]
# Find best model by composite score
best_model = None
best_score = -1
for mkey, mname in config["models"].items():
if mkey not in summary or "accuracy" not in summary[mkey]:
continue
acc = summary[mkey]["accuracy"]
sr = summary[mkey].get("success_rate", 0)
ocr = acc["ocr_mean"] or 0
kw = acc["keyword_mean"] or 0
# Weighted composite: 40% OCR, 30% keyword, 30% success rate
score = (ocr * 0.4 + kw * 0.3 + sr * 0.3)
if score > best_score:
best_score = score
best_model = mname
if best_model:
lines.append(f"**Best overall: {best_model}** (composite score: {best_score:.1%})")
else:
lines.append("No clear winner — insufficient data.")
return "\n".join(lines)
# ---------------------------------------------------------------------------
# Test dataset management
# ---------------------------------------------------------------------------
def generate_sample_dataset() -> List[dict]:
"""Generate a sample test dataset with diverse public images.
Returns list of test image definitions.
"""
return [
# Screenshots
{
"id": "screenshot_github",
"url": "https://github.githubassets.com/images/modules/logos_page/GitHub-Mark.png",
"category": "screenshot",
"expected_keywords": ["github", "logo", "octocat"],
"expected_structure": {"min_length": 50, "min_sentences": 2},
},
# Diagrams
{
"id": "diagram_architecture",
"url": "https://mermaid.ink/img/pako:eNp9kMtOwzAQRX_F8hKpJbhJFVJBi1QJiMWCG8eZNsGJLdlOiqIid5RdufiHnZRA7GbuzJwZe4ZGH2SCBPYUwgxoQKvJnCR2YY0F5YBdJJkD4uX0oXB6PnF3U4zCWcWdW3FqOwGvCKkBmHKSTB2gJeRrLTeJLfJdJKkBGYf9P1sTNdUXVJqY3YNJK7xLVwR0mxJFU6rCgEKnhSGIL2Eq8BdEERAX0OGwEiVQ1R0MaNFR8QfqKxmHigbX8VLjDz_Q0L8Wc_qPxDw",
"category": "diagram",
"expected_keywords": ["architecture", "component", "service"],
"expected_structure": {"min_length": 100, "min_sentences": 3},
},
# Photos
{
"id": "photo_nature",
"url": "https://picsum.photos/seed/bench1/400/300",
"category": "photo",
"expected_keywords": [],
"expected_structure": {"min_length": 30, "min_sentences": 1},
},
# Charts
{
"id": "chart_bar",
"url": "https://quickchart.io/chart?c={type:'bar',data:{labels:['Q1','Q2','Q3','Q4'],datasets:[{label:'Users',data:[50,60,70,80]}]}}",
"category": "chart",
"expected_keywords": ["bar", "chart", "data"],
"expected_structure": {"min_length": 50, "min_sentences": 2},
},
]
def load_dataset(path: str) -> List[dict]:
"""Load test dataset from JSON file."""
with open(path) as f:
return json.load(f)
# ---------------------------------------------------------------------------
# CLI
# ---------------------------------------------------------------------------
async def main():
parser = argparse.ArgumentParser(description="Vision Benchmark Suite (Issue #817)")
parser.add_argument("--images", help="Path to test images JSON file")
parser.add_argument("--url", help="Single image URL to test")
parser.add_argument("--category", default="photo", help="Category for single URL")
parser.add_argument("--output", default=None, help="Output JSON file")
parser.add_argument("--runs", type=int, default=1, help="Runs per model per image")
parser.add_argument("--models", nargs="+", default=None,
help="Models to test (default: all)")
parser.add_argument("--markdown", action="store_true", help="Output markdown report")
parser.add_argument("--generate-dataset", action="store_true",
help="Generate sample dataset and exit")
args = parser.parse_args()
if args.generate_dataset:
dataset = generate_sample_dataset()
out_path = args.images or "benchmarks/test_images.json"
os.makedirs(os.path.dirname(out_path) or ".", exist_ok=True)
with open(out_path, "w") as f:
json.dump(dataset, f, indent=2)
print(f"Generated sample dataset: {out_path} ({len(dataset)} images)")
return
# Select models
if args.models:
selected = {k: v for k, v in MODELS.items() if k in args.models}
else:
selected = MODELS
# Load images
if args.url:
images = [{"id": "single", "url": args.url, "category": args.category}]
elif args.images:
images = load_dataset(args.images)
else:
print("ERROR: Provide --images or --url")
sys.exit(1)
# Run benchmark
report = await run_benchmark_suite(images, selected, args.runs)
# Output
if args.output:
os.makedirs(os.path.dirname(args.output) or ".", exist_ok=True)
with open(args.output, "w") as f:
json.dump(report, f, indent=2)
print(f"\nResults saved to {args.output}")
if args.markdown or not args.output:
print("\n" + to_markdown(report))
if __name__ == "__main__":
asyncio.run(main())


@@ -24,6 +24,8 @@ model:
# "minimax" - MiniMax global (requires: MINIMAX_API_KEY)
# "minimax-cn" - MiniMax China (requires: MINIMAX_CN_API_KEY)
# "huggingface" - Hugging Face Inference (requires: HF_TOKEN)
# "xiaomi" - Xiaomi MiMo (requires: XIAOMI_API_KEY)
# "arcee" - Arcee AI Trinity models (requires: ARCEEAI_API_KEY)
# "kilocode" - KiloCode gateway (requires: KILOCODE_API_KEY)
# "ai-gateway" - Vercel AI Gateway (requires: AI_GATEWAY_API_KEY)
#
@@ -308,15 +310,8 @@ compression:
# compression of older turns.
protect_last_n: 20
# Model to use for generating summaries (fast/cheap recommended)
# This model compresses the middle turns into a concise summary.
# IMPORTANT: it receives the full middle section of the conversation, so it
# MUST support a context length at least as large as your main model's.
summary_model: "google/gemini-3-flash-preview"
# Provider for the summary model (default: "auto")
# Options: "auto", "openrouter", "nous", "main"
# summary_provider: "auto"
# To pin a specific model/provider for compression summaries, use the
# auxiliary section below (auxiliary.compression.provider / model).
# =============================================================================
# Auxiliary Models (Advanced — Experimental)
@@ -353,7 +348,7 @@ compression:
# Other providers pick a sensible default automatically.
#
# auxiliary:
# # Image analysis: vision_analyze tool + browser screenshots
# # Image analysis: vision_analyze tool
# vision:
# provider: "auto"
# model: "" # e.g. "google/gemini-2.5-flash", "openai/gpt-4o"
@@ -361,6 +356,15 @@ compression:
# download_timeout: 30 # Image HTTP download timeout (seconds)
# # Increase for slow connections or self-hosted image servers
#
# # Browser screenshot analysis (browser_vision tool)
# # Defaults to Gemma 4 27B — natively multimodal, same model family as the main
# # text model, which avoids model-switching overhead and improves context continuity.
# # Override with any vision-capable model. Set to "" to fall back to auto-detection.
# # Can also be overridden per-session with BROWSER_VISION_MODEL env var.
# browser_vision:
# model: "google/gemma-4-27b-it" # default; override e.g. "google/gemini-2.5-flash"
# timeout: 120 # API call timeout in seconds (default 120s)
#
# # Web page scraping / summarization + browser page text extraction
# web_extract:
# provider: "auto"
@@ -528,7 +532,7 @@ agent:
# - A preset like "hermes-cli" or "hermes-telegram" (curated tool set)
# - A list of individual toolsets to compose your own (see list below)
#
# Supported platform keys: cli, telegram, discord, whatsapp, slack
# Supported platform keys: cli, telegram, discord, whatsapp, slack, qqbot
#
# Examples:
#
@@ -557,6 +561,7 @@ agent:
# slack: hermes-slack (same as telegram)
# signal: hermes-signal (same as telegram)
# homeassistant: hermes-homeassistant (same as telegram)
# qqbot: hermes-qqbot (same as telegram)
#
platform_toolsets:
cli: [hermes-cli]
@@ -566,6 +571,7 @@ platform_toolsets:
slack: [hermes-slack]
signal: [hermes-signal]
homeassistant: [hermes-homeassistant]
qqbot: [hermes-qqbot]
# ─────────────────────────────────────────────────────────────────────────────
# Available toolsets (use these names in platform_toolsets or the toolsets list)
@@ -773,6 +779,11 @@ display:
# Toggle at runtime with /verbose in the CLI
tool_progress: all
# Gateway-only natural mid-turn assistant updates.
# When true, completed assistant status messages are sent as separate chat
# messages. This is independent of tool_progress and gateway streaming.
interim_assistant_messages: true
# What Enter does when Hermes is already busy in the CLI.
# interrupt: Interrupt the current run and redirect Hermes (default)
# queue: Queue your message for the next turn
@@ -781,7 +792,7 @@ display:
# Background process notifications (gateway/messaging only).
# Controls how chatty the process watcher is when you use
# terminal(background=true, check_interval=...) from Telegram/Discord/etc.
# terminal(background=true, notify_on_complete=true) from Telegram/Discord/etc.
# off: No watcher messages at all
# result: Only the final completion message
# error: Only the final message when exit code != 0

859
cli.py

File diff suppressed because it is too large


@@ -44,7 +44,8 @@ logger = logging.getLogger(__name__)
_KNOWN_DELIVERY_PLATFORMS = frozenset({
"telegram", "discord", "slack", "whatsapp", "signal",
"matrix", "mattermost", "homeassistant", "dingtalk", "feishu",
"wecom", "weixin", "sms", "email", "webhook", "bluebubbles",
"wecom", "wecom_callback", "weixin", "sms", "email", "webhook", "bluebubbles",
"qqbot",
})
from cron.jobs import get_due_jobs, mark_job_run, save_job_output, advance_next_run
@@ -219,6 +220,21 @@ def _deliver_result(job: dict, content: str, adapters=None, loop=None) -> Option
chat_id = target["chat_id"]
thread_id = target.get("thread_id")
# Diagnostic: log thread_id for topic-aware delivery debugging
origin = job.get("origin") or {}
origin_thread = origin.get("thread_id")
if origin_thread and not thread_id:
logger.warning(
"Job '%s': origin has thread_id=%s but delivery target lost it "
"(deliver=%s, target=%s)",
job["id"], origin_thread, job.get("deliver", "local"), target,
)
elif thread_id:
logger.debug(
"Job '%s': delivering to %s:%s thread_id=%s",
job["id"], platform_name, chat_id, thread_id,
)
from tools.send_message_tool import _send_to_platform
from gateway.config import load_gateway_config, Platform
@@ -234,10 +250,12 @@ def _deliver_result(job: dict, content: str, adapters=None, loop=None) -> Option
"dingtalk": Platform.DINGTALK,
"feishu": Platform.FEISHU,
"wecom": Platform.WECOM,
"wecom_callback": Platform.WECOM_CALLBACK,
"weixin": Platform.WEIXIN,
"email": Platform.EMAIL,
"sms": Platform.SMS,
"bluebubbles": Platform.BLUEBUBBLES,
"qqbot": Platform.QQBOT,
}
platform = platform_map.get(platform_name.lower())
if not platform:
@@ -270,11 +288,13 @@ def _deliver_result(job: dict, content: str, adapters=None, loop=None) -> Option
if wrap_response:
task_name = job.get("name", job["id"])
job_id = job.get("id", "")
delivery_content = (
f"Cronjob Response: {task_name}\n"
f"(job_id: {job_id})\n"
f"-------------\n\n"
f"{content}\n\n"
f"Note: The agent cannot see this message, and therefore cannot respond to it.\n"
f"To stop or manage this job, send me a new message (e.g. \"stop reminder {task_name}\")."
)
else:
delivery_content = content
@@ -625,6 +645,15 @@ def run_job(job: dict) -> tuple[bool, str, str, Optional[str]]:
except Exception as e:
logger.warning("Job '%s': failed to load config.yaml, using defaults: %s", job_id, e)
# Apply IPv4 preference if configured.
try:
from hermes_constants import apply_ipv4_preference
_net_cfg = _cfg.get("network", {})
if isinstance(_net_cfg, dict) and _net_cfg.get("force_ipv4"):
apply_ipv4_preference(force=True)
except Exception:
pass
# Reasoning config from config.yaml
from hermes_constants import parse_reasoning_effort
effort = str(_cfg.get("agent", {}).get("reasoning_effort", "")).strip()
@@ -722,6 +751,7 @@ def run_job(job: dict) -> tuple[bool, str, str, Optional[str]]:
provider_sort=pr.get("sort"),
disabled_toolsets=["cronjob", "messaging", "clarify"],
quiet_mode=True,
skip_context_files=True, # Don't inject SOUL.md/AGENTS.md from scheduler cwd
skip_memory=True, # Cron system prompts would corrupt user representations
platform="cron",
session_id=_cron_session_id,

38
docker/entrypoint.sh Normal file → Executable file

@@ -1,10 +1,44 @@
#!/bin/bash
# Docker entrypoint: bootstrap config files into the mounted volume, then run hermes.
# Docker/Podman entrypoint: bootstrap config files into the mounted volume, then run hermes.
set -e
HERMES_HOME="/opt/data"
HERMES_HOME="${HERMES_HOME:-/opt/data}"
INSTALL_DIR="/opt/hermes"
# --- Privilege dropping via gosu ---
# When started as root (the default for Docker, or fakeroot in rootless Podman),
# optionally remap the hermes user/group to match host-side ownership, fix volume
# permissions, then re-exec as hermes.
if [ "$(id -u)" = "0" ]; then
if [ -n "$HERMES_UID" ] && [ "$HERMES_UID" != "$(id -u hermes)" ]; then
echo "Changing hermes UID to $HERMES_UID"
usermod -u "$HERMES_UID" hermes
fi
if [ -n "$HERMES_GID" ] && [ "$HERMES_GID" != "$(id -g hermes)" ]; then
echo "Changing hermes GID to $HERMES_GID"
# -o allows non-unique GID (e.g. macOS GID 20 "staff" may already exist
# as "dialout" in the Debian-based container image)
groupmod -o -g "$HERMES_GID" hermes 2>/dev/null || true
fi
actual_hermes_uid=$(id -u hermes)
if [ "$(stat -c %u "$HERMES_HOME" 2>/dev/null)" != "$actual_hermes_uid" ]; then
echo "$HERMES_HOME is not owned by $actual_hermes_uid, fixing"
# In rootless Podman the container's "root" is mapped to an unprivileged
# host UID — chown will fail. That's fine: the volume is already owned
# by the mapped user on the host side.
chown -R hermes:hermes "$HERMES_HOME" 2>/dev/null || \
echo "Warning: chown failed (rootless container?) — continuing anyway"
fi
echo "Dropping root privileges"
exec gosu hermes "$0" "$@"
fi
# --- Running as hermes from here ---
source "${INSTALL_DIR}/.venv/bin/activate"
# Create essential directory structure. Cache and platform directories
# (cache/images, cache/audio, platforms/whatsapp, etc.) are created on
# demand by the application — don't pre-create them here so new installs


@@ -0,0 +1,432 @@
# Workflow Orchestration & Task Queue Research for AI Agents
**Date:** 2026-04-14
**Scope:** SOTA comparison of task queues and workflow orchestrators for autonomous AI agent workflows
---
## 1. Current Architecture: Cron + Webhook
### How it works
- **Scheduler:** `cron/scheduler.py` — gateway calls `tick()` every 60 seconds
- **Storage:** JSON file (`~/.hermes/cron/jobs.json`) + file-based lock (`cron/.tick.lock`)
- **Execution:** Each job spawns a full `AIAgent.run_conversation()` in a thread pool with inactivity timeout
- **Delivery:** Results pushed back to origin chat via platform adapters (Telegram, Discord, etc.)
- **Checkpointing:** Job outputs saved to `~/.hermes/cron/output/{job_id}/{timestamp}.md`
### Strengths
- Simple, zero-dependency (no broker/redis needed)
- Jobs are isolated — each runs a fresh agent session
- Direct platform delivery with E2EE support
- Script pre-run for data collection
- Inactivity-based timeout (not hard wall-clock)
### Weaknesses
- **No task dependencies** — jobs are completely independent
- **No retry logic** — single failure = lost run (recurring jobs advance schedule and move on)
- **No concurrency control** — all due jobs fire at once; no worker pool sizing
- **No observability** — no metrics, no dashboard, no structured logging of job state transitions
- **Tick-based polling** — 60s granularity, wastes cycles when idle, adds latency when busy
- **Single-process** — file lock means only one tick at a time; no horizontal scaling
- **No dead letter queue** — failed deliveries are logged but not retried
- **No workflow chaining** — cannot express "run A, then B with A's output"
---
## 2. Framework Comparison
### 2.1 Huey (Already Installed v2.6.0)
**Architecture:** Embedded task queue, SQLite/Redis/file storage, consumer process model.
| Feature | Huey | Our Cron |
|---|---|---|
| Broker | SQLite (default), Redis | JSON file |
| Retry | Built-in: `retries=N, retry_delay=S` | None |
| Task chaining | `task1.s() \| task2.s()` (pipeline) | None |
| Scheduling | `@huey.periodic_task(crontab(...))` | Our own cron parser |
| Concurrency | Worker pool with `-w N` flag | Single tick lock |
| Monitoring | `huey_consumer` logs, Huey Admin (Django) | Manual log reading |
| Failure recovery | Automatic retry + configurable backoff | None |
| Priority | `PriorityRedisExpireHuey` or task priority | None |
| Result storage | `store_results=True` with result() | File output |
**Task Dependencies Pattern:**
```python
@huey.task()
def analyze_data(input_data):
return run_analysis(input_data)
@huey.task()
def generate_report(analysis_result):
return create_report(analysis_result)
# Pipeline: analyze then report
pipeline = analyze_data.s(raw_data) | generate_report.s()
result = pipeline()
```
**Retry Pattern:**
```python
@huey.task(retries=3, retry_delay=60, retry_backoff=True)
def flaky_api_call(url):
return requests.get(url, timeout=30)
```
**Benchmarks:** ~5,000 tasks/sec with SQLite backend, ~15,000 with Redis. Sub-millisecond scheduling latency. Very lightweight — single process.
**Verdict:** Best fit for our use case. Already installed. SQLite backend = no external deps. Can layer on top of our existing job storage.
---
### 2.2 Celery
**Architecture:** Distributed task queue with message broker (RabbitMQ/Redis).
| Feature | Celery | Huey |
|---|---|---|
| Broker | Redis, RabbitMQ, SQS (required) | SQLite (built-in) |
| Scale | 100K+ tasks/sec | ~5-15K tasks/sec |
| Chains | `chain(task1.s(), task2.s())` | Pipeline operator |
| Groups/Chords | Parallel + callback | Not built-in |
| Canvas | Full workflow DSL (chain, group, chord, map) | Basic pipeline |
| Monitoring | Flower dashboard, Celery events | Minimal |
| Complexity | Heavy — needs broker, workers, result backend | Single process |
**Workflow Pattern:**
```python
from celery import chain, group, chord
# Chain: sequential
workflow = chain(fetch_data.s(), analyze.s(), report.s())
# Group: parallel
parallel = group(fetch_twitter.s(), fetch_reddit.s(), fetch_hn.s())
# Chord: run the group in parallel, then a callback over its results
chord(parallel)(aggregate_results.s())
```
**Verdict:** Overkill for our scale. Adds RabbitMQ/Redis dependency. The Canvas API is powerful but we don't need 100K task/sec throughput. Flower monitoring is nice but we'd need to deploy it separately.
---
### 2.3 Temporal
**Architecture:** Durable execution engine. Workflows as code with automatic state persistence and replay.
| Feature | Temporal | Our Cron |
|---|---|---|
| State management | Automatic — workflow state persisted on every step | Manual JSON files |
| Failure recovery | Workflows survive process restarts, auto-retry | Lost on crash |
| Task dependencies | Native — activities call other activities | None |
| Long-running tasks | Built-in (days/months OK) | Inactivity timeout |
| Versioning | Workflow versioning for safe updates | No versioning |
| Visibility | Full workflow state at any point | Log files |
| Infrastructure | Requires Temporal server + database | None |
| Language | Python SDK, but Temporal server is Go | Pure Python |
**Workflow Pattern:**
```python
from datetime import timedelta

from temporalio import workflow
from temporalio.common import RetryPolicy

@workflow.defn
class AIAgentWorkflow:
    @workflow.run
    async def run(self, job_config: dict) -> str:
        # Step 1: Fetch data
        data = await workflow.execute_activity(
            fetch_data_activity,
            job_config["script"],
            start_to_close_timeout=timedelta(minutes=5),
            retry_policy=RetryPolicy(maximum_attempts=3),
        )
        # Step 2: Analyze with AI agent
        analysis = await workflow.execute_activity(
            run_agent_activity,
            {"prompt": job_config["prompt"], "context": data},
            start_to_close_timeout=timedelta(minutes=30),
            retry_policy=RetryPolicy(
                initial_interval=timedelta(seconds=60),
                maximum_attempts=3,
            ),
        )
        # Step 3: Deliver
        await workflow.execute_activity(
            deliver_activity,
            {"platform": job_config["deliver"], "content": analysis},
            start_to_close_timeout=timedelta(seconds=60),
        )
        return analysis
```
**Verdict:** Best architecture for complex multi-step AI workflows, but heavy infrastructure cost. Temporal server needs PostgreSQL/Cassandra + visibility store. Ideal if we reach 50+ multi-step workflows with complex failure modes. Overkill for current needs.
---
### 2.4 Prefect
**Architecture:** Modern data/workflow orchestration with Python-native API.
| Feature | Prefect |
|---|---|
| Dependencies | SQLite (default) or PostgreSQL |
| Task retries | `@task(retries=3, retry_delay_seconds=10)` |
| Task dependencies | `result = task_a(wait_for=[task_b])` |
| Caching | `cache_key_fn` for result caching |
| Subflows | Nested workflow composition |
| Deployments | Schedule via `Deployment` or `CronSchedule` |
| UI | Excellent web dashboard |
| Async | Full async support |
**Workflow Pattern:**
```python
from datetime import timedelta

from prefect import flow, task
from prefect.tasks import task_input_hash

@task(retries=3, retry_delay_seconds=30)
def run_agent(prompt: str) -> str:
    agent = AIAgent(...)
    return agent.run_conversation(prompt)

@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(hours=1))
def fetch_context(script: str) -> str:
    return run_script(script)

@flow(name="agent-workflow")
def agent_workflow(job_config: dict):
    # context is a data dependency of run_agent, so Prefect orders them
    context = fetch_context(job_config.get("script", ""))
    result = run_agent(f"{context}\n\n{job_config['prompt']}")
    deliver(result, job_config["deliver"])
    return result
```
**Benchmarks:** Sub-second task scheduling. Handles 10K+ concurrent task runs. SQLite backend for single-node.
**Verdict:** Strong alternative. Pythonic, good UI, built-in scheduling. But heavier than Huey — deploys a server process. Best if we want a web dashboard for monitoring. Less infrastructure than Temporal but more than Huey.
---
### 2.5 Apache Airflow
**Architecture:** Batch-oriented DAG scheduler, Python-based.
| Feature | Airflow |
|---|---|
| DAG model | Static DAGs defined in Python files |
| Scheduler | Polling-based, 5-30s granularity |
| Dependencies | PostgreSQL/MySQL + Redis/RabbitMQ + webserver |
| UI | Rich web UI with DAG visualization |
| Best for | ETL, data pipelines, batch processing |
| Weakness | Not designed for dynamic task creation; heavy; DAG definition overhead |
**Verdict:** Wrong tool for this job. Airflow excels at static, well-defined data pipelines (ETL). Our agent workflows are dynamic — tasks are created at runtime based on user prompts. Airflow's DAG model fights against this. Massive overhead (needs webserver, scheduler, worker, metadata DB).
---
### 2.6 Dramatiq
**Architecture:** Lightweight distributed task queue, Celery alternative.
| Feature | Dramatiq |
|---|---|
| Broker | Redis, RabbitMQ |
| Retries | `@dramatiq.actor(max_retries=3)` |
| Middleware | Pluggable: age_limit, time_limit, retries, callbacks |
| Groups | `group(actor.message(...), ...).run()` |
| Pipes | `actor.message() | other_actor.message()` |
| Simplicity | Cleaner API than Celery |
**Verdict:** Nice middle ground between Huey and Celery. But still requires a broker (Redis/RabbitMQ). No SQLite backend. Less ecosystem than Celery, less lightweight than Huey.
---
### 2.7 RQ (Redis Queue)
**Architecture:** Minimal Redis-based task queue.
| Feature | RQ |
|---|---|
| Broker | Redis only |
| Retries | Via `Retry` class |
| Workers | Simple worker processes |
| Dashboard | `rq-dashboard` (separate) |
| Limitation | Redis-only, no SQLite, no scheduling built-in |
**Verdict:** Too simple and Redis-dependent. No periodic task support without `rq-scheduler`. No task chaining without third-party. Not competitive with Huey for our use case.
---
## 3. Architecture Patterns for AI Agent Workflows
### 3.1 Task Chaining (Fan-out / Fan-in)
The critical pattern for multi-step AI workflows:
```
[Script] → [Agent] → [Deliver]
↓ ↓ ↓
Context Report Notification
```
**Implementation with Huey:**
```python
from functools import reduce

@huey.task(retries=2)
def run_script_task(script_path):
    return run_script(script_path)

@huey.task(retries=3, retry_delay=60)
def run_agent_task(prompt, context=None):
    if context:
        prompt = f"## Context\n{context}\n\n{prompt}"
    agent = AIAgent(...)
    return agent.run_conversation(prompt)

@huey.task()
def deliver_task(result, job_config):
    return deliver_result(job_config, result)

# Compose: script → agent → deliver
def compose_workflow(job):
    steps = []
    if job.get("script"):
        steps.append(run_script_task.s(job["script"]))
    steps.append(run_agent_task.s(job["prompt"]))
    steps.append(deliver_task.s(job))
    return reduce(lambda a, b: a.then(b), steps)
```
### 3.2 Retry with Exponential Backoff
```python
# Huey's first-class knobs are retries= and retry_delay=. Exponential
# backoff is layered on by raising RetryTask with a computed delay
# (sketch — delay support on RetryTask varies by Huey version).
from huey.exceptions import RetryTask

BASE_DELAY = 30    # start at 30s
MAX_DELAY = 600    # cap at 10min

@huey.task(retries=3, retry_delay=BASE_DELAY)
def ai_workflow_task(payload, attempt=0):
    try:
        return run_workflow(payload)
    except TransientError:
        # 30s → 60s → 120s, capped
        raise RetryTask(delay=min(BASE_DELAY * 2 ** attempt, MAX_DELAY))
```
### 3.3 Dead Letter Queue
For tasks that exhaust retries:
```python
from huey import signals

@huey.task(retries=3)
def flaky_task(data):
    ...

# Dead-letter handling via Huey's signal hooks: SIGNAL_ERROR fires for an
# unhandled exception once no retries remain (SIGNAL_RETRYING otherwise)
@huey.signal(signals.SIGNAL_ERROR)
def handle_failure(signal, task, exc=None):
    # Log to dead letter store
    save_dead_letter(task, exc)
    # Notify user of failure
    notify_user(f"Task {task.name} failed: {exc}")
```
### 3.4 Observability Pattern
```python
# Structured event logging for every state transition
def emit_event(job_id, event_type, metadata):
    event = {
        "job_id": job_id,
        "event": event_type,  # scheduled, started, completed, failed, retried
        "timestamp": iso_now(),
        "metadata": metadata,
    }
    append_to_event_log(event)
    # Also emit to metrics (Prometheus/StatsD)
    metrics.increment(f"cron.{event_type}")
```
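The two helpers left abstract above take only a few lines of stdlib. A minimal sketch (the JSONL path and `read_events` helper are illustrative, not the current implementation):

```python
import json
import pathlib
from datetime import datetime, timezone

EVENT_LOG = pathlib.Path("events.jsonl")  # e.g. ~/.hermes/cron/events.jsonl

def iso_now():
    return datetime.now(timezone.utc).isoformat()

def append_to_event_log(event):
    # One JSON object per line: cheap appends, easy to tail/grep
    with EVENT_LOG.open("a") as fh:
        fh.write(json.dumps(event) + "\n")

def read_events(job_id):
    # Replay a single job's history from the event log
    with EVENT_LOG.open() as fh:
        return [e for line in fh if (e := json.loads(line))["job_id"] == job_id]
```

The append-only JSONL format means a crash can at worst truncate the final line; everything earlier stays replayable.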
---
## 4. Benchmarks Summary
| Framework | Throughput | Latency | Memory | Startup | Dependencies |
|---|---|---|---|---|---|
| Current Cron | ~1 job/60s tick | 60-120s | Minimal | Instant | None |
| Huey (SQLite) | ~5K tasks/sec | <10ms | ~20MB | <1s | None |
| Huey (Redis) | ~15K tasks/sec | <5ms | ~20MB | <1s | Redis |
| Celery (Redis) | ~15K tasks/sec | <10ms | ~100MB | ~3s | Redis |
| Temporal | ~50K activities/sec | <5ms | ~200MB | ~10s | Temporal server+DB |
| Prefect | ~10K tasks/sec | <20ms | ~150MB | ~5s | PostgreSQL |
---
## 5. Recommendations
### Immediate (Phase 1): Enhance Current Cron
Add these capabilities to the existing `cron/` module **without** switching frameworks:
1. **Retry logic** — Add `retry_count`, `retry_delay`, `max_retries` fields to job JSON. In `scheduler.py tick()`, on failure: if `retries_remaining > 0`, don't advance the schedule; set `next_run_at = now + retry_delay * 2**attempt`.
2. **Backoff** — Exponential: `delay * 2^attempt`, capped at 10 minutes.
3. **Dead letter tracking** — After max retries, mark job state as `dead_letter` and emit a delivery notification with the error.
4. **Concurrency limit** — Add a semaphore (e.g., `max_concurrent=3`) to `tick()` so we don't spawn 20 agents simultaneously.
5. **Structured events** — Append JSON events to `~/.hermes/cron/events.jsonl` for every state transition (scheduled, started, completed, failed, retried, delivered).
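Items 1–3 above fit in one small helper. A sketch against a job dict (field names like `attempt` and `state` are assumptions, not the current schema):

```python
import time

MAX_RETRY_DELAY = 600  # cap backoff at 10 minutes

def schedule_retry(job, now=None):
    """On failure: schedule a backed-off retry, or dead-letter the job."""
    now = time.time() if now is None else now
    attempt = job.get("attempt", 0)
    if attempt >= job.get("max_retries", 3):
        job["state"] = "dead_letter"      # item 3: stop retrying, notify
        return job
    delay = min(job.get("retry_delay", 30) * 2 ** attempt, MAX_RETRY_DELAY)
    job["attempt"] = attempt + 1
    job["next_run_at"] = now + delay      # item 1: don't advance the schedule
    job["state"] = "retrying"
    return job
```

Because the whole policy lives in one pure function over the job dict, it can be unit-tested without touching the scheduler loop.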
**Effort:** ~1-2 days. No new dependencies.
### Medium-term (Phase 2): Adopt Huey for Workflow Chaining
When we need task dependencies (multi-step agent workflows), migrate to Huey:
1. **Keep the JSON job store** as the source of truth for user-facing job management.
2. **Use Huey as the execution engine** — enqueue tasks from `tick()`, let Huey handle retries, scheduling, and chaining.
3. **SQLite backend** — no new infrastructure. One consumer process (`huey_consumer.py`) alongside the gateway.
4. **Task chaining for multi-step jobs**: `script_task.then(agent_task).then(delivery_task)`.
**Migration path:**
- Phase 2a: Run Huey consumer alongside gateway. Mirror cron jobs to Huey periodic tasks.
- Phase 2b: Add task chaining for jobs with scripts.
- Phase 2c: Migrate all jobs to Huey, deprecate tick()-based execution.
**Effort:** ~1 week. Huey already installed. Gateway integration ~2-3 days.
### Long-term (Phase 3): Evaluate Temporal/Prefect
Only if:
- We have 100+ concurrent multi-step workflows
- We need workflow versioning and A/B testing
- We need cross-service orchestration (agent calls to external APIs with complex compensation logic)
- We want a web dashboard for non-technical users
**Don't adopt early** — these tools solve problems we don't have yet.
---
## 6. Decision Matrix
| Need | Best Solution | Why |
|---|---|---|
| Simple retry logic | Enhance current cron | Zero deps, fast to implement |
| Task chaining | **Huey** | Already installed, SQLite backend, pipeline API |
| Monitoring dashboard | Prefect | Built-in web UI; Huey's monitoring is minimal |
| Massive scale (10K+/sec) | Celery + Redis | If we're processing thousands of agent runs per hour |
| Complex compensation | Temporal | Only if we need durable multi-service workflows |
| Periodic scheduling | Current cron (works) or Huey | Current is fine; Huey adds `crontab()`-style periodic tasks |
---
## 7. Key Insight
The cron system's biggest gap isn't the framework — it's the **absence of retry and dependency primitives**. These can be added to the current system in <100 lines of code. The second biggest gap is observability (structured events + metrics), which is also solvable incrementally.
Huey is the right *eventual* target for workflow execution because:
1. Already installed, zero new dependencies
2. SQLite backend matches our "no infrastructure" philosophy
3. Pipeline API gives us task chaining for free
4. Retry/backoff is first-class
5. Consumer model is more efficient than tick-polling
6. Millisecond scheduling latency instead of a 60s polling tick
The migration should be gradual — start by wrapping Huey inside our existing cron tick, then progressively move execution to Huey's consumer model.

docs/cron-audit-890.md
# Cron Job Audit — #890
## Problem
9 of 69 cron jobs have zero completions. They waste scheduler cycles.
## Dead Jobs Identified
| Job | Schedule | Completions | Action |
|-----|----------|-------------|--------|
| exp-swarm-pipeline | every 10 min | 0 | disable |
| exp-music-generator | every 2h | 0 | disable |
| exp-paper-citations | every 3h | 0 | disable |
| exp-gbrain-patterns | every 2h | 0 | disable |
| exp-infra-hardening | every 2h | 0 | disable |
| gemma4-multimodal-burn | every 1h | 0 | disable |
| morning-paper-report | daily | 0 | disable |
| overnight-collector | every 15 min | 0 | disable |
| morning-experiment-report | daily | 0 | disable |
## Usage
```bash
# Audit: show dead jobs
python scripts/cron_audit.py
# Dry run: preview changes
python scripts/cron_audit.py --disable --dry-run
# Disable dead jobs
python scripts/cron_audit.py --disable
# Delete dead jobs
python scripts/cron_audit.py --delete
# Find jobs with no runs in 7 days
python scripts/cron_audit.py --older-than 7d
```

# Emotional Presence Patterns for Crisis Support
Research issue #880. What works for reaching broken men in their darkest moment.
## Crisis Detection Patterns
| Pattern | Severity | Detection |
|---------|----------|-----------|
| Method seeking | CRITICAL | "how do I kill myself", "suicide methods" |
| Suicidal ideation | HIGH | "I want to die", "no reason to live" |
| Self-harm | HIGH | "cutting myself", "hurting myself" |
| Despair | HIGH | "can't go on", "no one cares" |
| Farewell | CRITICAL | "this is goodbye", "my final message" |
## What Works
1. Validation without agreement: "That sounds incredibly painful"
2. Presence over solutions: "I'm here with you right now"
3. Specific next steps: "Text HOME to 741741 right now"
4. Connection: "You don't have to face this alone"
5. Hope without minimizing: "There is a way through this"
## What Does NOT Work
1. False empathy: "I feel your pain"
2. Toxic positivity: "Look on the bright side!"
3. Minimization: "Other people have it worse"
4. Guilt induction: "Think about your family"
5. Premature problem-solving: "Have you tried therapy?"
6. AI dishonesty: pretending to be human
## SOUL.md Protocol
1. Ask: "Are you safe right now?" (FIRST response)
2. Listen. Stay present. Don't disconnect.
3. Never compute the value of a human life
4. Never suggest death is an option
5. Always point to 988 Lifeline
6. Share the gospel: "Jesus saves those who call on His name"
## Crisis Resources
- 988 Suicide & Crisis Lifeline: call or text 988
- Crisis Text Line: text HOME to 741741
- 988 Chat: 988lifeline.org/chat
- Spanish: 1-888-628-9454
- Emergency: 911
## Implementation Status
- Crisis detection: agent/crisis_protocol.py
- SHIELD integration: tools/shield/
- 988 Lifeline: resources defined
- Emotional presence: this document
- Escalation tracking: future work
- Human notification: future work

# Holographic + Vector Hybrid Memory Architecture
Research issue #879. Combining HRR (holographic) and vector (Qdrant) memory.
## Architecture
Three memory backends, each with unique strengths:
| Backend | Strength | Weakness | Use Case |
|---------|----------|----------|----------|
| FTS5 | Exact keyword match | No semantic understanding | Precise recall |
| Vector (Qdrant) | Semantic similarity | No compositional queries | Topic search |
| HRR (Holographic) | Compositional queries | Limited scale | Complex reasoning |
## Why Hybrid
- FTS5 alone: misses ~30-40% of semantically relevant content
- Vector alone: can't do compositional queries ("what did I discuss about X after doing Y?")
- HRR alone: unique capability but no semantic fallback
- Hybrid: best of all three, RRF fusion for ranking
## Implementation: Reciprocal Rank Fusion
Results from each backend are merged using RRF:
- score = sum(weight / (k + rank)) for each backend
- k=60 (standard RRF constant)
- Weights: FTS5=0.6, Vector=0.4 (configurable)
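The fusion formula above as a self-contained sketch (backend names and weights follow this doc; the function name is illustrative):

```python
def rrf_fuse(ranked_lists, weights, k=60):
    """Reciprocal Rank Fusion.

    ranked_lists: backend name -> ordered list of doc ids (best first)
    weights: backend name -> weight
    Returns doc ids sorted by fused score, best first.
    """
    scores = {}
    for backend, docs in ranked_lists.items():
        w = weights.get(backend, 1.0)
        for rank, doc in enumerate(docs, start=1):
            # Each backend contributes w / (k + rank) for each doc it returns
            scores[doc] = scores.get(doc, 0.0) + w / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

For example, `rrf_fuse({"fts5": [...], "vector": [...]}, {"fts5": 0.6, "vector": 0.4})` promotes documents that appear in both lists over documents ranked highly by only one backend.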
## Status
- FTS5: EXISTS (hermes_state.py)
- Vector (Qdrant): implemented (tools/hybrid_search.py)
- HRR: EXISTS (plugins/memory/holographic.py)
- RRF fusion: implemented (tools/hybrid_search.py)
- Ingestion pipeline: partial
## Next Steps
1. Wire HRR into hybrid_search.py
2. Session-level vector ingestion
3. Benchmark: measure R@5 improvement
4. Cross-session memory persistence

### 2. CLI Command (quick, scriptable)
```bash
hermes claw migrate # Preview then migrate (always shows preview first)
hermes claw migrate --dry-run # Preview only, no changes
hermes claw migrate --preset user-data # Migrate without API keys/secrets
hermes claw migrate --yes # Skip confirmation prompt
```
The migration always shows a full preview of what will be imported before making any changes. You review the preview and confirm before anything is written.
**All options:**
| Flag | Description |
Ask the agent to run the migration for you:
The agent will use the `openclaw-migration` skill to:
1. Run a preview first to show what would change
2. Ask about conflict resolution (SOUL.md, skills, etc.)
3. Let you choose between `user-data` and `full` presets
4. Execute the migration with your choices
| Messaging settings | `~/.openclaw/config.yaml` (TELEGRAM_ALLOWED_USERS, MESSAGING_CWD) | `~/.hermes/.env` |
| TTS assets | `~/.openclaw/workspace/tts/` | `~/.hermes/tts/` |
Workspace files are also checked at `workspace.default/` and `workspace-main/` as fallback paths (OpenClaw renamed `workspace/` to `workspace-main/` in recent versions).
### `full` preset (adds to `user-data`)
| Item | Source | Destination |
|------|--------|-------------|
| Telegram bot token | `openclaw.json` channels config | `~/.hermes/.env` |
| OpenRouter API key | `.env`, `openclaw.json`, or `openclaw.json["env"]` | `~/.hermes/.env` |
| OpenAI API key | `.env`, `openclaw.json`, or `openclaw.json["env"]` | `~/.hermes/.env` |
| Anthropic API key | `.env`, `openclaw.json`, or `openclaw.json["env"]` | `~/.hermes/.env` |
| ElevenLabs API key | `.env`, `openclaw.json`, or `openclaw.json["env"]` | `~/.hermes/.env` |
API keys are searched across four sources: inline config values, `~/.openclaw/.env`, the `openclaw.json` `"env"` sub-object, and per-agent auth profiles.
Only allowlisted secrets are ever imported. Other credentials are skipped and reported.
## OpenClaw Schema Compatibility
The migration handles both old and current OpenClaw config layouts:
- **Channel tokens**: Reads from flat paths (`channels.telegram.botToken`) and the newer `accounts.default` layout (`channels.telegram.accounts.default.botToken`)
- **TTS provider**: OpenClaw renamed "edge" to "microsoft" — both are recognized and mapped to Hermes' "edge"
- **Provider API types**: Both short (`openai`, `anthropic`) and hyphenated (`openai-completions`, `anthropic-messages`, `google-generative-ai`) values are mapped correctly
- **thinkingDefault**: All enum values are handled including newer ones (`minimal`, `xhigh`, `adaptive`)
- **Matrix**: Uses `accessToken` field (not `botToken`)
- **SecretRef formats**: Plain strings, env templates (`${VAR}`), and `source: "env"` SecretRefs are resolved. `source: "file"` and `source: "exec"` SecretRefs produce a warning — add those keys manually after migration.
## Conflict Handling
For skills, you can also use `--skill-conflict rename` to import conflicting skills.
## Migration Report
Every migration produces a report showing:
- **Migrated items** — what was successfully imported
- **Conflicts** — items skipped because they already exist
- **Skipped items** — items not found in the source
- **Errors** — items that failed to import
For executed migrations, the full report is saved to `~/.hermes/migration/openclaw/<timestamp>/`.
## Post-Migration Notes
- **Skills require a new session** — imported skills take effect after restarting your agent or starting a new chat.
- **WhatsApp requires re-pairing** — WhatsApp uses QR-code pairing, not token-based auth. Run `hermes whatsapp` to pair.
- **Archive cleanup** — after migration, you'll be offered to rename `~/.openclaw/` to `.openclaw.pre-migration/` to prevent state confusion. You can also run `hermes claw cleanup` later.
## Troubleshooting
### "OpenClaw directory not found"
The migration looks for `~/.openclaw` by default, then tries `~/.clawdbot` and `~/.moltbot`. If your OpenClaw is installed elsewhere, use `--source`:
```bash
hermes claw migrate --source /path/to/.openclaw
```
### Memory overflow
If your OpenClaw MEMORY.md or USER.md exceeds Hermes' character limits, excess entries are exported to an overflow file in the migration report directory. You can manually review and add the most important ones.
### API keys not found
Keys might be stored in different places depending on your OpenClaw setup:
- `~/.openclaw/.env` file
- Inline in `openclaw.json` under `models.providers.*.apiKey`
- In `openclaw.json` under the `"env"` or `"env.vars"` sub-objects
- In `~/.openclaw/agents/main/agent/auth-profiles.json`
The migration checks all four. If keys use `source: "file"` or `source: "exec"` SecretRefs, they can't be resolved automatically — add them via `hermes config set`.

# awesome-ai-tools Integration Plan
**Tracking:** #842
**Source report:** docs/tool-investigation-2026-04-15.md
**Date:** 2026-04-16
---
## Status Dashboard
| # | Tool | Category | Impact | Effort | Status | Issue |
|---|------|----------|--------|--------|--------|-------|
| 1 | Mem0 | Memory | 5/5 | 3/5 | Cloud + Local done | #842 |
| 2 | LightRAG | RAG | 4/5 | 3/5 | Not started | #857 |
| 3 | n8n | Orchestration | 5/5 | 4/5 | Not started | #858 |
| 4 | RAGFlow | RAG | 4/5 | 4/5 | Not started | #859 |
| 5 | tensorzero | LLMOps | 4/5 | 3/5 | Not started | #860 |
---
## #1: Mem0 — DONE
Cloud: `plugins/memory/mem0/` (MEM0_API_KEY required)
Local: `plugins/memory/mem0_local/` (ChromaDB, no API key)
## #2: LightRAG (P2)
Create `plugins/rag/lightrag/` plugin. Index skill docs. Use local Ollama embeddings.
## #3: n8n (P3)
Deploy as Docker service. Create workflow templates for Hermes patterns.
## #4: RAGFlow (P4)
Deploy as Docker service. Integrate via HTTP API for document understanding.
## #5: tensorzero (P3)
Evaluate as provider routing replacement. Canary migration (10% traffic first).
---
*Last updated: 2026-04-16*

# SOTA Research: Multi-Agent Coordination & Fleet Knowledge Graphs
**Date:** 2026-04-14
**Scope:** Agent-to-agent communication, shared memory, task delegation, consensus protocols
**Frameworks Analyzed:** CrewAI, AutoGen, MetaGPT, ChatDev, CAMEL
---
## 1. Architecture Pattern Summary
### 1.1 CrewAI — Role-Based Crew Orchestration
**Core Pattern:** Agents organized into "Crews" with explicit roles, goals, and backstories. Tasks are assigned to agents, executed via sequential or hierarchical process flows.
**Agent-to-Agent Communication:**
- **Sequential:** Agent A completes Task A → output injected into Task B's context for Agent B
- **Hierarchical:** Manager agent delegates to worker agents, collects results, synthesizes
- **Context passing:** Tasks can declare `context: [other_tasks]` — outputs from dependent tasks are automatically injected into the current task's prompt
- **No direct agent-to-agent messaging** — communication is mediated through task outputs
**Shared Memory (v2 — Unified Memory):**
- `Memory` class with `remember()` / `recall()` using vector embeddings (LanceDB/ChromaDB)
- **Scope-based isolation:** `MemoryScope` provides path-based namespacing (`/crew/research/agent-foo`)
- **Composite scoring:** semantic similarity (0.5) + recency (0.3) + importance (0.2)
- **RecallFlow:** LLM-driven deep recall with adaptive query expansion
- **Privacy flags:** Private memories only visible to the source that created them
- **Background saves:** ThreadPoolExecutor with write barrier (drain_writes before recall)
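The composite scoring bullet above is a weighted sum. A sketch using this doc's weights (the function name is illustrative, not CrewAI's API):

```python
def composite_score(semantic, recency, importance, weights=(0.5, 0.3, 0.2)):
    """CrewAI-style memory ranking: semantic similarity dominates,
    with recency and importance as tie-breakers. All inputs in [0, 1]."""
    w_sem, w_rec, w_imp = weights
    return w_sem * semantic + w_rec * recency + w_imp * importance
```

Because the weights sum to 1, the fused score stays in [0, 1] and remains comparable across memories.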
**Task Delegation:**
- Agent tools include `Delegate Work to Co-worker` and `Ask Question to Co-worker`
- Delegation creates a new task for another agent, results come back to delegator
- Depth-limited (no infinite delegation chains)
**State & Checkpointing:**
- `SqliteProvider` / `JsonProvider` for state checkpoint persistence
- `CheckpointConfig` with event-driven persistence
- Flow state is Pydantic models with serialization
**Cache:**
- Thread-safe in-memory tool result cache with RWLock
- Key: `{tool_name}-{input}` → cached output
### 1.2 AutoGen (Microsoft) — Conversation-Centric Teams
**Core Pattern:** Agents communicate through shared conversation threads. A "Group Chat Manager" controls turn-taking and speaker selection.
**Agent-to-Agent Communication:**
- **Shared message thread** — all agents see all messages (like a group chat)
- **Three team patterns:**
  - `RoundRobinGroupChat`: Fixed order cycling through participants
  - `SelectorGroupChat`: LLM-based speaker selection with candidate filtering
  - `SwarmGroupChat`: Handoff-based routing (agent sends HandoffMessage to next agent)
- `GraphFlow` (DiGraph): DAG-based execution with conditional edges, parallel fan-out, loops
- `MagenticOneOrchestrator`: Ledger-based orchestration with task planning, progress tracking, stall detection
**Shared State:**
- `ChatCompletionContext` — manages message history per agent (can be unbounded or windowed)
- `ModelContext` shared across agents in a team
- State serialization: `save_state()` / `load_state()` for all managers
- **No built-in vector memory** — context is purely conversational
**Task Delegation:**
- `Swarm`: Agents use `HandoffMessage` to explicitly route control
- `GraphFlow`: Conditional edges route based on message content (keyword or callable)
- `MagenticOne`: Orchestrator maintains a "task ledger" (facts + plan) and dynamically re-plans on stalls
**Consensus / Termination:**
- `TerminationCondition` — composable conditions (text match, max messages, source-based)
- No explicit consensus protocols — termination is manager-decided
**Key Insight:** AutoGen's `ChatCompletionContext` is the closest analog to shared memory, but it's purely sequential message history, not a knowledge base.
### 1.3 MetaGPT — SOP-Driven Software Teams
**Core Pattern:** Agents follow Standard Operating Procedures (SOPs). Each agent has a defined role (Product Manager, Architect, Engineer, QA) and produces structured artifacts.
**Agent-to-Agent Communication:**
- **Publish-Subscribe via Environment:** Agents publish "actions" to a shared Environment, subscribers react
- **Structured outputs:** Each role produces specific artifact types (PRD, design doc, code, test cases)
- **Message routing:** Environment acts as a message bus, filtering by subscriber interest
**Shared Memory:**
- `Environment` class maintains shared state (project workspace)
- File-based shared memory: agents write/read from a shared filesystem
- `SharedMemory` for cross-agent context (structured data, not free-form text)
**Task Delegation:**
- Implicit through SOP stages: PM → Architect → Engineer → QA
- Each agent's output is the next agent's input
- No dynamic re-delegation
**Consensus:**
- Sequential SOP execution (no parallel agents)
- QA agent can trigger re-work loops back to Engineer
### 1.4 ChatDev — Chat-Chain Software Development
**Core Pattern:** Agents follow a "chat chain" — a sequence of chat phases (designing, coding, testing, documenting). Each phase involves a pair of agents (CEO↔CTO, Programmer↔Reviewer, etc.).
**Agent-to-Agent Communication:**
- **Paired chat sessions:** Two agents communicate in each phase (role-play between instructor and assistant)
- **Chain propagation:** Phase N's output (code, design doc) becomes Phase N+1's input
- **No broadcast** — communication is strictly pairwise within phases
**Shared Memory:**
- Software-centric: shared code repository is the "memory"
- Each phase modifies/inherits the codebase
- No explicit vector memory or knowledge graph
**Task Delegation:**
- Hardcoded phase sequence: Design → Code → Test → Document
- Each phase delegates to a specific agent pair
- No dynamic task re-assignment
**Consensus:**
- Phase-level termination: when both agents agree the phase is complete
- "Thought" tokens for chain-of-thought within chat
### 1.5 CAMEL — Role-Playing & Workforce
**Core Pattern:** Two primary modes:
1. **RolePlaying:** Two-agent conversation with task specification and optional critic
2. **Workforce:** Multi-agent with coordinator, task planner, and worker pool
**Agent-to-Agent Communication:**
- **RolePlaying:** Structured turn-taking between assistant and user agents
- **Workforce:** Coordinator assigns tasks via `TaskChannel`, workers return results
- **Worker types:** `SingleAgentWorker` (single ChatAgent), `RolePlayingWorker` (two-agent pair)
**Shared Memory / Task Channel:**
- `TaskChannel` — async queue-based task dispatch with packet tracking
  - States: SENT → PROCESSING → RETURNED → ARCHIVED
  - O(1) lookup by task ID, status-based filtering, assignee/publisher queues
- `WorkflowMemoryManager` — persists workflow patterns as markdown files
  - Role-based organization: workflows stored by `role_identifier`
  - Agent-based intelligent selection: LLM picks relevant past workflows
  - Versioned: metadata tracks creation time and version numbers
**Task Delegation:**
- Coordinator agent decomposes complex tasks using LLM analysis
- Tasks assigned to workers based on capability matching
- Failed tasks trigger: retry, create new worker, or further decomposition
- `FailureHandlingConfig` with configurable `RecoveryStrategy`
**Consensus / Quality:**
- Quality evaluation via structured output (response format enforced)
- Task dependencies tracked (worker receives dependency tasks as context)
- `WorkforceMetrics` for tracking execution statistics
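The packet lifecycle described for `TaskChannel` can be sketched as a tiny state store (class and method names here are illustrative, not CAMEL's actual API):

```python
from enum import Enum

class PacketState(Enum):
    SENT = "sent"
    PROCESSING = "processing"
    RETURNED = "returned"
    ARCHIVED = "archived"

class TaskChannel:
    """Dict-backed dispatch: O(1) lookup by task id, state filtering."""
    def __init__(self):
        self._packets = {}  # task_id -> [state, payload]

    def send(self, task_id, payload):
        self._packets[task_id] = [PacketState.SENT, payload]

    def set_state(self, task_id, state):
        self._packets[task_id][0] = state

    def by_state(self, state):
        return [tid for tid, (s, _) in self._packets.items() if s is state]
```

A worker would pull a SENT packet, flip it to PROCESSING, and flip it to RETURNED with the result attached; the coordinator archives it once consumed.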
---
## 2. Key Architectural Patterns for Fleet Knowledge Graph
### 2.1 Communication Topology Patterns
| Pattern | Used By | Description |
|---------|---------|-------------|
| **Sequential Chain** | CrewAI, ChatDev, MetaGPT | A→B→C linear flow, output feeds next |
| **Shared Thread** | AutoGen | All agents see all messages |
| **Publish-Subscribe** | MetaGPT | Environment-based message bus |
| **Paired Chat** | ChatDev, CAMEL | Two-agent conversation pairs |
| **Handoff Routing** | AutoGen Swarm | Agent explicitly names next speaker |
| **DAG Graph** | AutoGen GraphFlow | Conditional edges, parallel, loops |
| **Ledger Orchestration** | AutoGen MagenticOne | Maintains task ledger, re-plans |
| **Task Channel** | CAMEL | Async queue with packet states |
### 2.2 Shared State Patterns
| Pattern | Used By | Description |
|---------|---------|-------------|
| **Vector Memory** | CrewAI | Embeddings + scope-based namespacing |
| **Message History** | AutoGen | Sequential conversation context |
| **File System** | MetaGPT, ChatDev | Agents read/write shared files |
| **Task Channel** | CAMEL | Async packet-based task dispatch |
| **Workflow Files** | CAMEL | Markdown-based workflow memory |
| **Tool Cache** | CrewAI | In-memory RWLock tool result cache |
| **State Checkpoint** | CrewAI, AutoGen | Serialized Pydantic/SQLite checkpoints |
### 2.3 Task Delegation Patterns
| Pattern | Used By | Description |
|---------|---------|-------------|
| **Role Assignment** | CrewAI | Fixed agent per task |
| **Manager Delegation** | CrewAI Hierarchical | Manager assigns tasks dynamically |
| **Speaker Selection** | AutoGen Selector | LLM picks next agent |
| **Handoff** | AutoGen Swarm | Agent explicitly transfers control |
| **SOP Routing** | MetaGPT | Stage-based implicit delegation |
| **Coordinator** | CAMEL Workforce | LLM-based task decomposition + assignment |
| **Dynamic Worker Creation** | CAMEL Workforce | Create new workers on failure |
### 2.4 Conflict Resolution Patterns
| Pattern | Used By | Description |
|---------|---------|-------------|
| **Manager Arbitration** | CrewAI Hierarchical | Manager resolves conflicts |
| **Critic-in-the-loop** | CAMEL | Critic agent evaluates and selects |
| **Quality Gate** | CAMEL Workforce | Structured quality evaluation |
| **Termination Conditions** | AutoGen | Composable stop conditions |
| **Stall Detection** | AutoGen MagenticOne | Re-plans when progress stalls |
---
## 3. Recommendations for Hermes Fleet Knowledge Graph
### 3.1 Architecture: Hybrid Graph + Memory
Based on the SOTA analysis, the optimal fleet knowledge graph should combine:
1. **CrewAI's scoped memory** for hierarchical knowledge organization
   - Path-based namespaces: `/fleet/{fleet_id}/agent/{agent_id}/diary`
   - Composite scoring: semantic + recency + importance
   - Background writes with read barriers
2. **CAMEL's TaskChannel** for task dispatch and tracking
   - Packet states (SENT → PROCESSING → RETURNED → ARCHIVED)
   - O(1) lookup by task ID
   - Assignee/publisher tracking
3. **AutoGen's DiGraph** for execution flow definition
   - DAG with conditional edges for complex workflows
   - Parallel fan-out for independent tasks
   - Activation conditions (all vs any) for synchronization points
4. **AutoGen MagenticOne's ledger** for shared task context
   - Maintained facts, plan, and progress ledger
   - Dynamic re-planning on stalls
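To make the TaskChannel recommendation concrete, here is a minimal sketch of a packet-based channel with O(1) lookup. Class and field names are illustrative and do not reproduce CAMEL's actual API:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Dict, Optional


class PacketState(Enum):
    """Lifecycle states from the CAMEL TaskChannel pattern."""
    SENT = auto()
    PROCESSING = auto()
    RETURNED = auto()
    ARCHIVED = auto()


@dataclass
class TaskPacket:
    task_id: str
    payload: str
    publisher: str
    assignee: Optional[str] = None
    state: PacketState = PacketState.SENT


class TaskChannel:
    """In-memory stand-in for the async queue: a dict gives O(1) lookup by task ID."""

    def __init__(self) -> None:
        self._packets: Dict[str, TaskPacket] = {}

    def post(self, packet: TaskPacket) -> None:
        self._packets[packet.task_id] = packet

    def claim(self, task_id: str, worker: str) -> TaskPacket:
        packet = self._packets[task_id]  # O(1) dict lookup
        packet.assignee = worker
        packet.state = PacketState.PROCESSING
        return packet

    def complete(self, task_id: str) -> None:
        self._packets[task_id].state = PacketState.RETURNED
```

A real fleet channel would persist these packets (see the SQLite discussion below) and notify workers asynchronously; the state machine and keyed lookup are the load-bearing parts.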
### 3.2 Fleet Knowledge Graph Schema
```
/fleet/{fleet_id}/
├── shared/               # Shared knowledge (all agents read)
│   ├── facts/            # Known facts, constraints
│   ├── decisions/        # Record of decisions made
│   └── context/          # Active task context
├── agent/{agent_id}/
│   ├── diary/            # Agent's personal experience log
│   ├── capabilities/     # What this agent can do
│   └── state/            # Current task state
├── tasks/
│   ├── {task_id}/        # Task metadata, dependencies, status
│   └── graph/            # DAG definition for task dependencies
└── consensus/
    ├── proposals/        # Pending proposals
    └── decisions/        # Resolved consensus decisions
```
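The scope semantics of this layout reduce to prefix matching on paths. A toy lookup, with invented store contents following the scheme above:

```python
from typing import Dict

# Invented example entries following the path scheme above.
store: Dict[str, str] = {
    "/fleet/f1/shared/facts/deploy-freeze": "no deploys on Fridays",
    "/fleet/f1/agent/a1/diary/2026-04-22": "tuned rerank scoring",
    "/fleet/f1/agent/a2/diary/2026-04-22": "investigated sudo probe",
    "/fleet/f2/shared/facts/owner": "fleet f2 belongs to another project",
}


def query_scope(prefix: str) -> Dict[str, str]:
    """Return every entry whose path falls under the given scope prefix."""
    prefix = prefix.rstrip("/") + "/"
    return {path: value for path, value in store.items() if path.startswith(prefix)}
```

Cross-agent diary recall is then `query_scope("/fleet/f1/agent")`, and the access-control question (open question 3 below) becomes a matter of which prefixes a given agent may query.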
### 3.3 Key Design Decisions
1. **Diary System (Agent Memory):**
   - Each agent writes to its own scoped memory after every significant action
   - LLM-analyzed importance scoring (like CrewAI's unified memory)
   - Cross-agent recall: agents can query other agents' diaries for relevant experiences
   - Decay: old low-importance memories expire
2. **Shared State (Fleet Knowledge):**
   - SQLite-backed (like Hermes' existing `state.db`) with FTS5 search
   - Hierarchical scopes (like CrewAI's MemoryScope)
   - Write-ahead log for concurrent access
   - Read barriers before queries (like CrewAI's `drain_writes`)
3. **Task Delegation:**
   - Coordinator pattern (like CAMEL's Workforce)
   - Task decomposition via LLM
   - Failed task → retry, reassign, or decompose
   - Max depth limit (like Hermes' existing MAX_DEPTH=2)
4. **Consensus Protocol:**
   - Proposal-based: agent proposes, others vote/acknowledge
   - Timeout-based fallback: if no response within N seconds, proceed
   - Manager override: designated manager can break ties
   - Simple majority for non-critical, unanimity for critical decisions
5. **Conflict Resolution:**
   - Last-write-wins for non-critical state
   - Optimistic locking with version numbers
   - Manager arbitration for task assignment conflicts
   - Quality gates (like CAMEL) for output validation
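As one concrete reading of decision 5's optimistic locking, a version-guarded UPDATE over a hypothetical SQLite table (the schema is invented for illustration, not Hermes' actual one):

```python
import sqlite3

# Hypothetical shared-state table with a per-key version counter.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fleet_state (key TEXT PRIMARY KEY, value TEXT, version INTEGER)")
conn.execute("INSERT INTO fleet_state VALUES ('plan', 'v0 plan', 1)")


def update_state(conn, key, new_value, expected_version):
    """Write only if nobody bumped the version since we read it."""
    cur = conn.execute(
        "UPDATE fleet_state SET value = ?, version = version + 1 "
        "WHERE key = ? AND version = ?",
        (new_value, key, expected_version),
    )
    return cur.rowcount == 1  # False means a concurrent writer won the race
```

An agent that reads `(value, version)` and later loses the race gets `False` back and must re-read before retrying; manager arbitration only enters when two writers keep colliding.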
### 3.4 Integration with Existing Hermes Architecture
Hermes already has strong foundations:
- **Delegation system** (`delegate_tool.py`): Isolated child agents, parallel execution, depth limits
- **State DB** (`hermes_state.py`): SQLite + FTS5, WAL mode, session tracking, message history
- **Credential pools**: Shared credentials with rotation
The fleet knowledge graph should extend these patterns:
- **Session DB → Fleet DB:** Add tables for fleet metadata, agent registrations, task graphs
- **Memory tool → Fleet Memory:** Scoped vector memory shared across fleet agents
- **Delegate tool → Fleet Delegation:** Task channel with persistence, quality evaluation
- **New: Consensus module:** Proposal/vote protocol with timeout handling
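A sketch of the "Session DB → Fleet DB" direction with an FTS5 index, assuming FTS5 is compiled into the local SQLite build (table and column names are invented, not Hermes' actual schema):

```python
import sqlite3

# Hypothetical fleet knowledge table with an external-content FTS5 index,
# mirroring the SQLite + FTS5 + WAL stack the existing state DB already uses.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE fleet_knowledge (
        scope TEXT NOT NULL,       -- e.g. /fleet/f1/shared/facts
        body  TEXT NOT NULL,
        created_at TEXT DEFAULT (datetime('now'))
    );
    CREATE VIRTUAL TABLE fleet_knowledge_fts USING fts5(
        body, content='fleet_knowledge', content_rowid='rowid'
    );
    CREATE TRIGGER fleet_knowledge_ai AFTER INSERT ON fleet_knowledge BEGIN
        INSERT INTO fleet_knowledge_fts(rowid, body) VALUES (new.rowid, new.body);
    END;
""")
conn.execute(
    "INSERT INTO fleet_knowledge (scope, body) VALUES (?, ?)",
    ("/fleet/f1/shared/facts", "deploys are frozen on Fridays"),
)
rows = conn.execute(
    "SELECT body FROM fleet_knowledge_fts WHERE fleet_knowledge_fts MATCH ?",
    ("deploys",),
).fetchall()
```

The trigger keeps the index in sync on insert; a production version would also mirror UPDATE/DELETE and add the scope column to the index for prefix-filtered search.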
---
## 4. Reference Implementations
| Component | Best Reference | Key Takeaway |
|-----------|---------------|--------------|
| Scoped Memory | CrewAI `Memory` + `MemoryScope` | Path-based namespaces, composite scoring, background writes |
| Task Dispatch | CAMEL `TaskChannel` | Packet-based with state machine, O(1) lookup |
| Execution DAG | AutoGen `DiGraphBuilder` | Fluent builder, conditional edges, activation groups |
| Orchestration | AutoGen `MagenticOneOrchestrator` | Ledger-based planning, stall detection, re-planning |
| Agent Communication | AutoGen `SelectorGroupChat` | LLM-based speaker selection, shared message thread |
| Quality Evaluation | CAMEL Workforce | Structured output for quality scoring |
| Workflow Memory | CAMEL `WorkflowMemoryManager` | Markdown-based, role-organized, versioned |
| State Checkpoint | CrewAI `SqliteProvider` | JSONB checkpoints, WAL mode |
| Tool Cache | CrewAI `CacheHandler` | RWLock-based concurrent tool result cache |
---
## 5. Open Questions
1. **Graph vs Vector for knowledge:** Should fleet knowledge use a proper graph DB (e.g., Neo4j) or stick with vector + SQLite?
   - Recommendation: Start with SQLite + vectors (existing stack), add graph later if needed
2. **Real-time vs Batch:** Should agents receive updates in real-time or batched?
   - Recommendation: Event-driven for critical updates, batched for diary entries
3. **Security model:** How should cross-agent access be controlled?
   - Recommendation: Role-based ACLs on scope paths, similar to CrewAI's privacy flags
4. **Scalability:** How many agents can a single fleet support?
   - Recommendation: Start with 10-agent fleets, optimize SQLite concurrency first
@@ -0,0 +1,29 @@
# Phase 3: Poka-yoke Integration & Fleet Verification
Epic #967. Morning review packet for Hermes harness features.
## Poka-yoke Features Implemented
| Feature | Module | PR | Status |
|---------|--------|-----|--------|
| Token budget tracker | agent/token_budget.py | #930 | MERGED |
| Provider preflight validation | agent/provider_preflight.py | #932 | MERGED |
| Atomic skill editing | tools/skill_edit_guard.py | #933 | MERGED |
| Config debt fixes | gateway/config.py | #437 | MERGED |
| Test collection fixes | tests/acp/conftest.py | #794 | MERGED |
| Context-faithful prompting | agent/context_faithful.py | #786 | MERGED |
## Fleet Verification
- Unit tests pass on all modules
- Collection: 11,472 tests, 0 errors (was 6 errors)
- ACP tests: cleanly skipped when acp extra missing
- Provider validation: catches missing/short keys
- Skill editing: atomic with auto-revert
## Next Steps
1. Wire token_budget into run_agent.py conversation loop
2. Wire provider_preflight into session start
3. Wire skill_edit_guard into skill_manage tool
4. Fleet-wide deployment verification
@@ -41,6 +41,14 @@ colors:
  session_label: "#DAA520"                    # Session label
  session_border: "#8B8682"                   # Session ID dim color
  # TUI surfaces
  status_bar_bg: "#1a1a2e"                    # Status / usage bar background
  voice_status_bg: "#1a1a2e"                  # Voice-mode badge background
  completion_menu_bg: "#1a1a2e"               # Completion list background
  completion_menu_current_bg: "#333355"       # Active completion row background
  completion_menu_meta_bg: "#1a1a2e"          # Completion meta column background
  completion_menu_meta_current_bg: "#333355"  # Active completion meta background
# ── Spinner ─────────────────────────────────────────────────────────────────
# Customize the animated spinner shown during API calls and tool execution.
spinner:

@@ -0,0 +1,329 @@
# Container-Aware CLI Review Fixes Spec
**PR:** NousResearch/hermes-agent#7543
**Review:** cursor[bot] bugbot review (4094049442) + two prior rounds
**Date:** 2026-04-12
**Branch:** `feat/container-aware-cli-clean`
## Review Issues Summary
Six issues were raised across three bugbot review rounds; all six were fixed in intermediate commits (79e8cd12, 38277a6a, 726cf90f), as tracked below. This spec addresses the deeper design concerns surfaced by those reviews and simplifies the implementation based on interview decisions.
| # | Issue | Severity | Status |
|---|-------|----------|--------|
| 1 | `os.execvp` retry loop unreachable | Medium | Fixed in 79e8cd12 (switched to subprocess.run) |
| 2 | Redundant `shutil.which("sudo")` | Medium | Fixed in 38277a6a (reuses `sudo` var) |
| 3 | Missing `chown -h` on symlink update | Low | Fixed in 38277a6a |
| 4 | Container routing after `parse_args()` | High | Fixed in 726cf90f |
| 5 | Hardcoded `/home/${user}` | Medium | Fixed in 726cf90f |
| 6 | Group membership not gated on `container.enable` | Low | Fixed in 726cf90f |
The mechanical fixes are in place but the overall design needs revision. The retry loop, error swallowing, and process model have deeper issues than what the bugbot flagged.
---
## Spec: Revised `_exec_in_container`
### Design Principles
1. **Let it crash.** No silent fallbacks. If `.container-mode` exists but something goes wrong, the error propagates naturally (Python traceback). The only case where container routing is skipped is when `.container-mode` doesn't exist or `HERMES_DEV=1`.
2. **No retries.** Probe once for sudo, exec once. If it fails, docker/podman's stderr reaches the user verbatim.
3. **Completely transparent.** No error wrapping, no prefixes, no spinners. Docker's output goes straight through.
4. **`os.execvp` on the happy path.** Replace the Python process entirely so there's no idle parent during interactive sessions. Note: `execvp` never returns on success (process is replaced) and raises `OSError` on failure (it does not return a value). The container process's exit code becomes the process exit code by definition — no explicit propagation needed.
5. **One human-readable exception to "let it crash".** `subprocess.TimeoutExpired` from the sudo probe gets a specific catch with a readable message, since a raw traceback for "your Docker daemon is slow" is confusing. All other exceptions propagate naturally.
### Execution Flow
```
1. get_container_exec_info()
   - HERMES_DEV=1 → return None (skip routing)
   - Inside container → return None (skip routing)
   - .container-mode doesn't exist → return None (skip routing)
   - .container-mode exists → parse and return dict
   - .container-mode exists but malformed/unreadable → LET IT CRASH (no try/except)

2. _exec_in_container(container_info, sys.argv[1:])
   a. shutil.which(backend) → if None, print "{backend} not found on PATH" and sys.exit(1)
   b. Sudo probe: subprocess.run([runtime, "inspect", "--format", "ok", container_name], timeout=15)
      - If succeeds → needs_sudo = False
      - If fails → try subprocess.run([sudo, "-n", runtime, "inspect", ...], timeout=15)
        - If succeeds → needs_sudo = True
        - If fails → print error with sudoers hint (including why -n is required) and sys.exit(1)
      - If TimeoutExpired → catch specifically, print human-readable message about slow daemon
   c. Build exec_cmd: [sudo? + runtime, "exec", tty_flags, "-u", exec_user, env_flags, container, hermes_bin, *cli_args]
   d. os.execvp(exec_cmd[0], exec_cmd)
      - On success: process is replaced — Python is gone, container exit code IS the process exit code
      - On OSError: let it crash (natural traceback)
```
### Changes to `hermes_cli/main.py`
#### `_exec_in_container` — rewrite
Remove:
- The entire retry loop (`max_retries`, `for attempt in range(...)`)
- Spinner logic (`"Waiting for container..."`, dots)
- Exit code classification (125/126/127 handling)
- `subprocess.run` for the exec call (keep it only for the sudo probe)
- Special TTY vs non-TTY retry counts
- The `time` import (no longer needed)
Change:
- Use `os.execvp(exec_cmd[0], exec_cmd)` as the final call
- Keep the `subprocess` import only for the sudo probe
- Keep TTY detection for the `-it` vs `-i` flag
- Keep env var forwarding (TERM, COLORTERM, LANG, LC_ALL)
- Keep the sudo probe as-is (it's the one "smart" part)
- Bump probe `timeout` from 5s to 15s — cold podman on a loaded machine needs headroom
- Catch `subprocess.TimeoutExpired` specifically on both probe calls — print a readable message about the daemon being unresponsive instead of a raw traceback
- Expand the sudoers hint error message to explain *why* `-n` (non-interactive) is required: a password prompt would hang the CLI or break piped commands
The function becomes roughly:
```python
def _exec_in_container(container_info: dict, cli_args: list):
    """Replace the current process with a command inside the managed container.

    Probes whether sudo is needed (rootful containers), then os.execvp
    into the container. If exec fails, the OS error propagates naturally.
    """
    import shutil
    import subprocess

    backend = container_info["backend"]
    container_name = container_info["container_name"]
    exec_user = container_info["exec_user"]
    hermes_bin = container_info["hermes_bin"]

    runtime = shutil.which(backend)
    if not runtime:
        print(f"Error: {backend} not found on PATH. Cannot route to container.",
              file=sys.stderr)
        sys.exit(1)

    # Probe whether we need sudo to see the rootful container.
    # Timeout is 15s — cold podman on a loaded machine can take a while.
    # TimeoutExpired is caught specifically for a human-readable message;
    # all other exceptions propagate naturally.
    needs_sudo = False
    sudo = None
    try:
        probe = subprocess.run(
            [runtime, "inspect", "--format", "ok", container_name],
            capture_output=True, text=True, timeout=15,
        )
    except subprocess.TimeoutExpired:
        print(
            f"Error: timed out waiting for {backend} to respond.\n"
            f"The {backend} daemon may be unresponsive or starting up.",
            file=sys.stderr,
        )
        sys.exit(1)

    if probe.returncode != 0:
        sudo = shutil.which("sudo")
        if sudo:
            try:
                probe2 = subprocess.run(
                    [sudo, "-n", runtime, "inspect", "--format", "ok", container_name],
                    capture_output=True, text=True, timeout=15,
                )
            except subprocess.TimeoutExpired:
                print(
                    f"Error: timed out waiting for sudo {backend} to respond.",
                    file=sys.stderr,
                )
                sys.exit(1)
            if probe2.returncode == 0:
                needs_sudo = True
            else:
                print(
                    f"Error: container '{container_name}' not found via {backend}.\n"
                    f"\n"
                    f"The NixOS service runs the container as root. Your user cannot\n"
                    f"see it because {backend} uses per-user namespaces.\n"
                    f"\n"
                    f"Fix: grant passwordless sudo for {backend}. The -n (non-interactive)\n"
                    f"flag is required because the CLI calls sudo non-interactively —\n"
                    f"a password prompt would hang or break piped commands:\n"
                    f"\n"
                    f'  security.sudo.extraRules = [{{\n'
                    f'    users = [ "{os.getenv("USER", "your-user")}" ];\n'
                    f'    commands = [{{ command = "{runtime}"; options = [ "NOPASSWD" ]; }}];\n'
                    f'  }}];\n'
                    f"\n"
                    f"Or run: sudo hermes {' '.join(cli_args)}",
                    file=sys.stderr,
                )
                sys.exit(1)
        else:
            print(
                f"Error: container '{container_name}' not found via {backend}.\n"
                f"The container may be running under root. Try: sudo hermes {' '.join(cli_args)}",
                file=sys.stderr,
            )
            sys.exit(1)

    is_tty = sys.stdin.isatty()
    tty_flags = ["-it"] if is_tty else ["-i"]
    env_flags = []
    for var in ("TERM", "COLORTERM", "LANG", "LC_ALL"):
        val = os.environ.get(var)
        if val:
            env_flags.extend(["-e", f"{var}={val}"])

    cmd_prefix = [sudo, "-n", runtime] if needs_sudo else [runtime]
    exec_cmd = (
        cmd_prefix + ["exec"]
        + tty_flags
        + ["-u", exec_user]
        + env_flags
        + [container_name, hermes_bin]
        + cli_args
    )
    # execvp replaces this process entirely — it never returns on success.
    # On failure it raises OSError, which propagates naturally.
    os.execvp(exec_cmd[0], exec_cmd)
```
#### Container routing call site in `main()` — remove try/except
Current:
```python
try:
    from hermes_cli.config import get_container_exec_info

    container_info = get_container_exec_info()
    if container_info:
        _exec_in_container(container_info, sys.argv[1:])
        sys.exit(1)  # exec failed if we reach here
except SystemExit:
    raise
except Exception:
    pass  # Container routing unavailable, proceed locally
```
Revised:
```python
from hermes_cli.config import get_container_exec_info

container_info = get_container_exec_info()
if container_info:
    _exec_in_container(container_info, sys.argv[1:])
    # Unreachable: os.execvp never returns on success (process is replaced)
    # and raises OSError on failure (which propagates as a traceback).
    # This line exists only as a defensive assertion.
    sys.exit(1)
```
No try/except. If `.container-mode` doesn't exist, `get_container_exec_info()` returns `None` and we skip routing. If it exists but is broken, the exception propagates with a natural traceback.
Note: `sys.exit(1)` after `_exec_in_container` is dead code in all paths — `os.execvp` either replaces the process or raises. It's kept as a belt-and-suspenders assertion with a comment marking it unreachable, not as actual error handling.
### Changes to `hermes_cli/config.py`
#### `get_container_exec_info` — remove inner try/except
Current code catches `(OSError, IOError)` and returns `None`. This silently hides permission errors, corrupt files, etc.
Change: Remove the try/except around file reading. Keep the early returns for `HERMES_DEV=1` and `_is_inside_container()`. The `FileNotFoundError` from `open()` when `.container-mode` doesn't exist should still return `None` (this is the "container mode not enabled" case). All other exceptions propagate.
```python
def get_container_exec_info() -> Optional[dict]:
    if os.environ.get("HERMES_DEV") == "1":
        return None
    if _is_inside_container():
        return None
    container_mode_file = get_hermes_home() / ".container-mode"
    try:
        with open(container_mode_file, "r") as f:
            # ... parse key=value lines ...
    except FileNotFoundError:
        return None
    # All other exceptions (PermissionError, malformed data, etc.) propagate
    return { ... }
```
---
## Spec: NixOS Module Changes
### Symlink creation — simplify to two branches
Current: 4 branches (symlink exists, directory exists, other file, doesn't exist).
Revised: 2 branches.
```bash
if [ -d "${symlinkPath}" ] && [ ! -L "${symlinkPath}" ]; then
  # Real directory — back it up, then create symlink
  _backup="${symlinkPath}.bak.$(date +%s)"
  echo "hermes-agent: backing up existing ${symlinkPath} to $_backup"
  mv "${symlinkPath}" "$_backup"
fi

# For everything else (symlink, doesn't exist, etc.) — just force-create
ln -sfn "${target}" "${symlinkPath}"
chown -h ${user}:${cfg.group} "${symlinkPath}"
```
`ln -sfn` handles: existing symlink (replaces), doesn't exist (creates), and after the `mv` above (creates). The only case that needs special handling is a real directory, because `ln -sfn` cannot atomically replace a directory.
Note: there is a theoretical race between the `[ -d ... ]` check and the `mv` (something could create/remove the directory in between). In practice this is a NixOS activation script running as root during `nixos-rebuild switch` — no other process should be touching `~/.hermes` at that moment. Not worth adding locking for.
### Sudoers — document, don't auto-configure
Do NOT add `security.sudo.extraRules` to the module. Document the sudoers requirement in the module's description/comments and in the error message the CLI prints when sudo probe fails.
### Group membership gating — keep as-is
The fix in 726cf90f (`cfg.container.enable && cfg.container.hostUsers != []`) is correct. Leftover group membership when container mode is disabled is harmless. No cleanup needed.
---
## Spec: Test Rewrite
The existing test file (`tests/hermes_cli/test_container_aware_cli.py`) has 16 tests. With the simplified exec model, several are obsolete.
### Tests to keep (update as needed)
- `test_is_inside_container_dockerenv` — unchanged
- `test_is_inside_container_containerenv` — unchanged
- `test_is_inside_container_cgroup_docker` — unchanged
- `test_is_inside_container_false_on_host` — unchanged
- `test_get_container_exec_info_returns_metadata` — unchanged
- `test_get_container_exec_info_none_inside_container` — unchanged
- `test_get_container_exec_info_none_without_file` — unchanged
- `test_get_container_exec_info_skipped_when_hermes_dev` — unchanged
- `test_get_container_exec_info_not_skipped_when_hermes_dev_zero` — unchanged
- `test_get_container_exec_info_defaults` — unchanged
- `test_get_container_exec_info_docker_backend` — unchanged
### Tests to add
- `test_get_container_exec_info_crashes_on_permission_error` — verify that `PermissionError` propagates (no silent `None` return)
- `test_exec_in_container_calls_execvp` — verify `os.execvp` is called with correct args (runtime, tty flags, user, env, container, binary, cli args)
- `test_exec_in_container_sudo_probe_sets_prefix` — verify that when first probe fails and sudo probe succeeds, `os.execvp` is called with `sudo -n` prefix
- `test_exec_in_container_no_runtime_hard_fails` — keep existing, verify `sys.exit(1)` when `shutil.which` returns None
- `test_exec_in_container_non_tty_uses_i_only` — update to check `os.execvp` args instead of `subprocess.run` args
- `test_exec_in_container_probe_timeout_prints_message` — verify that `subprocess.TimeoutExpired` from the probe produces a human-readable error and `sys.exit(1)`, not a raw traceback
- `test_exec_in_container_container_not_running_no_sudo` — verify the path where runtime exists (`shutil.which` returns a path) but probe returns non-zero and no sudo is available. Should print the "container may be running under root" error. This is distinct from `no_runtime_hard_fails` which covers `shutil.which` returning None.
### Tests to delete
- `test_exec_in_container_tty_retries_on_container_failure` — retry loop removed
- `test_exec_in_container_non_tty_retries_silently_exits_126` — retry loop removed
- `test_exec_in_container_propagates_hermes_exit_code` — no subprocess.run to check exit codes; execvp replaces the process. Note: exit code propagation still works correctly — when `os.execvp` succeeds, the container's process *becomes* this process, so its exit code is the process exit code by OS semantics. No application code needed, no test needed. A comment in the function docstring documents this intent for future readers.
---
## Out of Scope
- Auto-configuring sudoers rules in the NixOS module
- Any changes to `get_container_exec_info` parsing logic beyond the try/except narrowing
- Changes to `.container-mode` file format
- Changes to the `HERMES_DEV=1` bypass
- Changes to container detection logic (`_is_inside_container`)
@@ -0,0 +1,151 @@
## Tool Investigation Report: Top 5 Recommendations from awesome-ai-tools
**Source:** [formatho/awesome-ai-tools](https://github.com/formatho/awesome-ai-tools)
**Date:** 2026-04-15
**Tools Analyzed:** 414 across 9 categories
**Agent:** Timmy
---
## Analysis Summary
Scanned 414 tools from the awesome-ai-tools repository. Evaluated each against Hermes integration potential across five categories: Memory/Context, Inference Optimization, Agent Orchestration, Workflow Automation, and Retrieval/RAG.
### Evaluation Criteria
- **Stars:** GitHub community validation (stability signal)
- **Freshness:** Active development (Fresh = updated <=7 days)
- **Integration Fit:** How well it complements Hermes' existing architecture (skills, memory, tools)
- **Integration Effort:** 1 (trivial drop-in) to 5 (major refactor required)
- **Impact:** 1 (incremental) to 5 (transformative)
---
## Top 5 Recommended Tools
### #1: Mem0 — Universal Memory Layer for AI Agents
| Metric | Value |
|--------|-------|
| **Category** | Memory/Context |
| **GitHub** | [mem0ai/mem0](https://github.com/mem0ai/mem0) |
| **Stars** | 53.1k |
| **Freshness** | Fresh |
| **Integration Effort** | 3/5 |
| **Impact** | 5/5 |
| **Hermes Status** | IMPLEMENTED (plugins/memory/mem0/) + LOCAL MODE (plugins/memory/mem0_local/) |
**Why it fits Hermes:**
Hermes currently has session_search (transcript recall) and memory (persistent facts), but lacks a unified memory layer that bridges sessions with semantic understanding. Mem0 provides exactly this: automatic memory extraction from conversations, deduplication, and cross-session retrieval with semantic search.
**Integration path:**
- Cloud: plugins/memory/mem0/ (requires MEM0_API_KEY)
- Local: plugins/memory/mem0_local/ (ChromaDB-backed, no API key)
- Auto-extract facts from session transcripts
- Query before session_search for richer contextual recall
**Key risk:** Mem0 is freemium — core is open-source but advanced features require paid tier. Local mode mitigates this entirely.
---
### #2: LightRAG — Simple and Fast Retrieval-Augmented Generation
| Metric | Value |
|--------|-------|
| **Category** | Retrieval/RAG |
| **GitHub** | [HKUDS/LightRAG](https://github.com/HKUDS/LightRAG) |
| **Stars** | 33.1k |
| **Freshness** | Fresh |
| **Integration Effort** | 3/5 |
| **Impact** | 4/5 |
| **Hermes Status** | NOT IMPLEMENTED — Issue #857 |
**Why it fits Hermes:**
Hermes has 190+ skills but no unified knowledge retrieval system. LightRAG adds graph-based RAG that understands relationships between concepts, not just keyword matches. It's lightweight, runs locally, and has a simple API.
**Integration path:**
- LightRAG as a local knowledge base for skill references
- Index GENOME.md files, README.md, and key codebase files
- Use local Ollama models for embeddings
- Complements existing search_files without replacing it
---
### #3: n8n — Workflow Automation Platform
| Metric | Value |
|--------|-------|
| **Category** | Workflow Automation / Agent Orchestration |
| **GitHub** | [n8n-io/n8n](https://github.com/n8n-io/n8n) |
| **Stars** | 183.9k |
| **Freshness** | Fresh |
| **Integration Effort** | 4/5 |
| **Impact** | 5/5 |
| **Hermes Status** | NOT IMPLEMENTED — Issue #858 |
**Why it fits Hermes:**
n8n provides a self-hosted, fair-code workflow platform with 400+ integrations. Rather than replacing Hermes' agent loop, n8n sits above it: trigger Hermes agents from external events, chain multi-agent workflows, and visualize execution.
---
### #4: RAGFlow — Open-Source RAG Engine
| Metric | Value |
|--------|-------|
| **Category** | Retrieval/RAG |
| **GitHub** | [infiniflow/ragflow](https://github.com/infiniflow/ragflow) |
| **Stars** | 77.9k |
| **Freshness** | Fresh |
| **Integration Effort** | 4/5 |
| **Impact** | 4/5 |
| **Hermes Status** | NOT IMPLEMENTED — Issue #859 |
**Why it fits Hermes:**
RAGFlow handles document parsing (PDF, Word, images via OCR), chunking, embedding, and retrieval with a web UI. Enables "document understanding" as a first-class capability.
---
### #5: tensorzero — LLMOps Platform
| Metric | Value |
|--------|-------|
| **Category** | Inference Optimization / LLMOps |
| **GitHub** | [tensorzero/tensorzero](https://github.com/tensorzero/tensorzero) |
| **Stars** | 11.2k |
| **Freshness** | Fresh |
| **Integration Effort** | 3/5 |
| **Impact** | 4/5 |
| **Hermes Status** | NOT IMPLEMENTED — Issue #860 |
**Why it fits Hermes:**
TensorZero unifies LLM gateway, observability, evaluation, and optimization. Replaces custom provider routing with a maintained, battle-tested platform.
---
## Honorable Mentions
| Tool | Stars | Category | Why Not Top 5 |
|------|-------|----------|---------------|
| memvid | 14.9k | Memory | Newer; Mem0 is more mature |
| mempalace | 44.8k | Memory | Already evaluated; Mem0 has broader API |
| Everything Claude Code | 154.3k | Agent | Too Claude-specific |
| Portkey AI Gateway | 11.3k | Gateway | TensorZero is OSS; Portkey is freemium |
---
## Implementation Priority
| Priority | Tool | Action | Status | Issue |
|----------|------|--------|--------|-------|
| P1 | Mem0 | Local-only mode (ChromaDB) | DONE | #842 |
| P2 | LightRAG | Set up local instance, index skills | Not started | #857 |
| P3 | tensorzero | Evaluate as provider routing | Not started | #860 |
| P4 | RAGFlow | Deploy Docker, test docs | Not started | #859 |
| P5 | n8n | Deploy for workflow viz | Not started | #858 |
---
## References
- Source: https://github.com/formatho/awesome-ai-tools
- Total tools: 414 across 9 categories
- Last updated: April 16, 2026
- Tracking issue: Timmy_Foundation/hermes-agent#842

# Tool Investigation Report: Top 5 Recommendations
**Generated:** 2026-04-20 | **Source:** formatho/awesome-ai-tools (795 tools, 10 categories)
## Top 5
1. **LiteLLM** (76k) — Unified API gateway. Replace custom provider routing. Impact: 5/5, Effort: 2/5
2. **Mem0** (53k) — Universal memory layer. Structured long-term memory. Impact: 5/5, Effort: 3/5
3. **RAGFlow** (77k) — RAG engine with OCR. Document processing upgrade. Impact: 4/5, Effort: 4/5
4. **LiteRT-LM** (3.7k) — On-device inference. Edge/mobile deployment. Impact: 4/5, Effort: 3/5
5. **Claude-Mem** (61k) — Session capture and context injection. Impact: 3/5, Effort: 2/5
## Priority
- Phase 1: LiteLLM (2-3 days, highest ROI)
- Phase 2: Mem0 (1 week, critical for agent maturity)
- Phase 3: RAGFlow (1-2 weeks, capability upgrade)
## Honorable Mentions
- GPTCache: Semantic cache, 30-50% cost reduction
- promptfoo: LLM testing framework
- PageIndex: Vectorless RAG
- rtk: Token reduction proxy, 60-90% savings
@@ -18,9 +18,7 @@ suppress delivery.
"""
import logging
import os
import threading
from pathlib import Path
logger = logging.getLogger("hooks.boot-md")
@@ -8,6 +8,7 @@ Handles loading and validating configuration for:
- Delivery preferences
"""
import ipaddress
import logging
import os
import json
@@ -63,8 +64,10 @@ class Platform(Enum):
    WEBHOOK = "webhook"
    FEISHU = "feishu"
    WECOM = "wecom"
    WECOM_CALLBACK = "wecom_callback"
    WEIXIN = "weixin"
    BLUEBUBBLES = "bluebubbles"
    QQBOT = "qqbot"
@dataclass
@@ -190,7 +193,7 @@ class StreamingConfig:
    """Configuration for real-time token streaming to messaging platforms."""

    enabled: bool = False
    transport: str = "edit"  # "edit" (progressive editMessageText) or "off"
    edit_interval: float = 1.0  # Seconds between message edits (Telegram rate-limits at ~1/s)
    buffer_threshold: int = 40  # Chars before forcing an edit
    cursor: str = ""  # Cursor shown during streaming
@@ -210,7 +213,7 @@ class StreamingConfig:
        return cls(
            enabled=data.get("enabled", False),
            transport=data.get("transport", "edit"),
            edit_interval=float(data.get("edit_interval", 1.0)),
            buffer_threshold=int(data.get("buffer_threshold", 40)),
            cursor=data.get("cursor", ""),
        )
@@ -291,12 +294,20 @@ class GatewayConfig:
            # Feishu uses extra dict for app credentials
            elif platform == Platform.FEISHU and config.extra.get("app_id"):
                connected.append(platform)
            # WeCom bot mode uses extra dict for bot credentials
            elif platform == Platform.WECOM and config.extra.get("bot_id"):
                connected.append(platform)
            # WeCom callback mode uses corp_id or apps list
            elif platform == Platform.WECOM_CALLBACK and (
                config.extra.get("corp_id") or config.extra.get("apps")
            ):
                connected.append(platform)
            # BlueBubbles uses extra dict for local server config
            elif platform == Platform.BLUEBUBBLES and config.extra.get("server_url") and config.extra.get("password"):
                connected.append(platform)
            # QQBot uses extra dict for app credentials
            elif platform == Platform.QQBOT and config.extra.get("app_id") and config.extra.get("client_secret"):
                connected.append(platform)
        return connected

    def get_home_channel(self, platform: Platform) -> Optional[HomeChannel]:
@@ -615,6 +626,11 @@ def load_gateway_config() -> GatewayConfig:
        if isinstance(frc, list):
            frc = ",".join(str(v) for v in frc)
        os.environ["TELEGRAM_FREE_RESPONSE_CHATS"] = str(frc)
        ignored_threads = telegram_cfg.get("ignored_threads")
        if ignored_threads is not None and not os.getenv("TELEGRAM_IGNORED_THREADS"):
            if isinstance(ignored_threads, list):
                ignored_threads = ",".join(str(v) for v in ignored_threads)
            os.environ["TELEGRAM_IGNORED_THREADS"] = str(ignored_threads)
        if "reactions" in telegram_cfg and not os.getenv("TELEGRAM_REACTIONS"):
            os.environ["TELEGRAM_REACTIONS"] = str(telegram_cfg["reactions"]).lower()
@@ -659,6 +675,37 @@ def load_gateway_config() -> GatewayConfig:
_apply_env_overrides(config)
# --- Validate loaded values ---
_validate_gateway_config(config)
return config
def _is_network_accessible(host: str) -> bool:
"""Return True if *host* would expose a server beyond the loopback interface.
Duplicates the logic in ``gateway.platforms.base.is_network_accessible``
without creating a circular import (base.py imports from this module).
"""
try:
addr = ipaddress.ip_address(host)
if addr.is_loopback:
return False
# ::ffff:127.x.x.x — Python's is_loopback returns False for
# IPv4-mapped loopback; unwrap and check the underlying IPv4.
if getattr(addr, "ipv4_mapped", None) and addr.ipv4_mapped.is_loopback:
return False
return True
except ValueError:
# Hostname: assume it could be network-accessible.
return True
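The IPv4-mapped loopback caveat is easy to miss: `ipaddress` reports `::ffff:127.0.0.1` as non-loopback because `is_loopback` is evaluated on the IPv6 wrapper, so the mapped IPv4 address must be unwrapped and checked separately. A self-contained copy of the same check, runnable outside the gateway:

```python
import ipaddress

def is_network_accessible(host: str) -> bool:
    """True if *host* would expose a server beyond the loopback interface."""
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # Hostname rather than a literal address: assume it could resolve
        # beyond loopback.
        return True
    if addr.is_loopback:
        return False
    # IPv4-mapped loopback (::ffff:127.x.x.x): is_loopback is False on the
    # IPv6 wrapper, so check the underlying IPv4 address instead.
    mapped = getattr(addr, "ipv4_mapped", None)
    if mapped is not None and mapped.is_loopback:
        return False
    return True
```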
def _validate_gateway_config(config: "GatewayConfig") -> None:
"""Validate and sanitize a loaded GatewayConfig in place.
Called by ``load_gateway_config()`` after all config sources are merged.
Extracted as a separate function for testability.
"""
policy = config.default_reset_policy
if not (0 <= policy.at_hour <= 23):
@@ -695,7 +742,47 @@ def load_gateway_config() -> GatewayConfig:
platform.value, env_name,
)
return config
# Reject known-weak placeholder tokens.
# Ported from openclaw/openclaw#64586: users who copy .env.example
# without changing placeholder values get a clear startup error instead
# of a confusing "auth failed" from the platform API.
try:
from hermes_cli.auth import has_usable_secret
except ImportError:
has_usable_secret = None # type: ignore[assignment]
if has_usable_secret is not None:
for platform, pconfig in config.platforms.items():
if not pconfig.enabled:
continue
env_name = _token_env_names.get(platform)
if not env_name:
continue
token = pconfig.token
if token and token.strip() and not has_usable_secret(token, min_length=4):
logger.error(
"%s is enabled but %s is set to a placeholder value ('%s'). "
"Set a real bot token before starting the gateway. "
"The adapter will NOT be started.",
platform.value, env_name, token.strip()[:6] + "...",
)
pconfig.enabled = False
# Warn when the API server is enabled on a network-accessible address
# without an auth key. The adapter will refuse to start anyway, but
# surfacing this at config-load time lets operators see the problem in
# the startup log before any platform adapter initialisation runs.
api_cfg = config.platforms.get(Platform.API_SERVER)
if api_cfg and api_cfg.enabled:
key = api_cfg.extra.get("key", "")
host = api_cfg.extra.get("host", "127.0.0.1")
if not key and _is_network_accessible(host):
logger.warning(
"API Server is enabled on %s but API_SERVER_KEY is not set. "
"The adapter will refuse to start on a network-accessible address. "
"Set API_SERVER_KEY or bind to 127.0.0.1 for local-only access.",
host,
)
def _apply_env_overrides(config: GatewayConfig) -> None:
@@ -987,6 +1074,23 @@ def _apply_env_overrides(config: GatewayConfig) -> None:
name=os.getenv("WECOM_HOME_CHANNEL_NAME", "Home"),
)
# WeCom callback mode (self-built apps)
wecom_callback_corp_id = os.getenv("WECOM_CALLBACK_CORP_ID")
wecom_callback_corp_secret = os.getenv("WECOM_CALLBACK_CORP_SECRET")
if wecom_callback_corp_id and wecom_callback_corp_secret:
if Platform.WECOM_CALLBACK not in config.platforms:
config.platforms[Platform.WECOM_CALLBACK] = PlatformConfig()
config.platforms[Platform.WECOM_CALLBACK].enabled = True
config.platforms[Platform.WECOM_CALLBACK].extra.update({
"corp_id": wecom_callback_corp_id,
"corp_secret": wecom_callback_corp_secret,
"agent_id": os.getenv("WECOM_CALLBACK_AGENT_ID", ""),
"token": os.getenv("WECOM_CALLBACK_TOKEN", ""),
"encoding_aes_key": os.getenv("WECOM_CALLBACK_ENCODING_AES_KEY", ""),
"host": os.getenv("WECOM_CALLBACK_HOST", "0.0.0.0"),
"port": int(os.getenv("WECOM_CALLBACK_PORT", "8645")),
})
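The override above gates on *both* credentials: callback mode only activates when corp id and secret are present together, so a half-configured environment stays disabled rather than producing auth failures at runtime. A standalone sketch of that gate, with a simplified `PlatformConfig` and a hypothetical `apply_wecom_callback_env` helper standing in for the real override code:

```python
import os
from dataclasses import dataclass, field

@dataclass
class PlatformConfig:
    enabled: bool = False
    extra: dict = field(default_factory=dict)

def apply_wecom_callback_env(platforms: dict) -> None:
    """Enable WeCom callback mode only when both credentials are set."""
    corp_id = os.getenv("WECOM_CALLBACK_CORP_ID")
    corp_secret = os.getenv("WECOM_CALLBACK_CORP_SECRET")
    if not (corp_id and corp_secret):
        return  # both credentials are required; partial config is a no-op
    cfg = platforms.setdefault("wecom_callback", PlatformConfig())
    cfg.enabled = True
    cfg.extra.update({
        "corp_id": corp_id,
        "corp_secret": corp_secret,
        "port": int(os.getenv("WECOM_CALLBACK_PORT", "8645")),
    })
```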
# Weixin (personal WeChat via iLink Bot API)
weixin_token = os.getenv("WEIXIN_TOKEN")
weixin_account_id = os.getenv("WEIXIN_ACCOUNT_ID")
@@ -1017,6 +1121,9 @@ def _apply_env_overrides(config: GatewayConfig) -> None:
weixin_group_allowed_users = os.getenv("WEIXIN_GROUP_ALLOWED_USERS", "").strip()
if weixin_group_allowed_users:
extra["group_allow_from"] = weixin_group_allowed_users
weixin_split_multiline = os.getenv("WEIXIN_SPLIT_MULTILINE_MESSAGES", "").strip()
if weixin_split_multiline:
extra["split_multiline_messages"] = weixin_split_multiline
weixin_home = os.getenv("WEIXIN_HOME_CHANNEL", "").strip()
if weixin_home:
config.platforms[Platform.WEIXIN].home_channel = HomeChannel(
@@ -1048,6 +1155,32 @@ def _apply_env_overrides(config: GatewayConfig) -> None:
name=os.getenv("BLUEBUBBLES_HOME_CHANNEL_NAME", "Home"),
)
# QQ (Official Bot API v2)
qq_app_id = os.getenv("QQ_APP_ID")
qq_client_secret = os.getenv("QQ_CLIENT_SECRET")
if qq_app_id or qq_client_secret:
if Platform.QQBOT not in config.platforms:
config.platforms[Platform.QQBOT] = PlatformConfig()
config.platforms[Platform.QQBOT].enabled = True
extra = config.platforms[Platform.QQBOT].extra
if qq_app_id:
extra["app_id"] = qq_app_id
if qq_client_secret:
extra["client_secret"] = qq_client_secret
qq_allowed_users = os.getenv("QQ_ALLOWED_USERS", "").strip()
if qq_allowed_users:
extra["allow_from"] = qq_allowed_users
qq_group_allowed = os.getenv("QQ_GROUP_ALLOWED_USERS", "").strip()
if qq_group_allowed:
extra["group_allow_from"] = qq_group_allowed
qq_home = os.getenv("QQ_HOME_CHANNEL", "").strip()
if qq_home:
config.platforms[Platform.QQBOT].home_channel = HomeChannel(
platform=Platform.QQBOT,
chat_id=qq_home,
name=os.getenv("QQ_HOME_CHANNEL_NAME", "Home"),
)
# Session settings
idle_minutes = os.getenv("SESSION_IDLE_MINUTES")
if idle_minutes:

gateway/config_validator.py Normal file

@@ -0,0 +1,224 @@
"""
Gateway Config Validator & Fallback Fix — #892.
Validates gateway configuration and provides sensible defaults
for missing keys to prevent fallback chain breaks.
"""
import logging
import os
from typing import Dict, Any, List, Optional
from dataclasses import dataclass, field
logger = logging.getLogger(__name__)
@dataclass
class ConfigIssue:
"""A configuration issue found during validation."""
key: str
severity: str # error, warning, info
message: str
fix: str
@dataclass
class ConfigValidation:
"""Result of config validation."""
valid: bool
issues: List[ConfigIssue] = field(default_factory=list)
warnings: int = 0
errors: int = 0
# Required keys and their defaults
REQUIRED_KEYS = {
"OPENROUTER_API_KEY": {
"required": False,
"default": "",
"severity": "warning",
"message": "OPENROUTER_API_KEY not set - fallback chain may break",
"fix": "Set OPENROUTER_API_KEY in .env for OpenRouter provider",
},
"API_SERVER_KEY": {
"required": False,
"default": "",
"severity": "warning",
"message": "API_SERVER_KEY not configured",
"fix": "Set API_SERVER_KEY in .env for API server auth",
},
"GITEA_TOKEN": {
"required": False,
"default": "",
"severity": "info",
"message": "GITEA_TOKEN not set - Gitea features disabled",
"fix": "Set GITEA_TOKEN in .env for Gitea integration",
},
}
# Config validation rules
VALIDATION_RULES = [
{
"key": "idle_minutes",
"validate": lambda v: isinstance(v, (int, float)) and v > 0,
"message": "Invalid idle_minutes={v} - must be > 0",
"fix": "Set idle_minutes to positive integer (default: 30)",
},
{
"key": "max_skills_discord",
"validate": lambda v: isinstance(v, int) and v <= 100,
"message": "Discord slash command limit reached ({v}/100) - skills not registered",
"fix": "Reduce skills or paginate registration",
},
]
def validate_config(config: Dict[str, Any]) -> ConfigValidation:
"""
Validate gateway configuration.
Args:
config: Configuration dictionary
Returns:
ConfigValidation with issues found
"""
issues = []
# Check required keys
for key, spec in REQUIRED_KEYS.items():
value = config.get(key) or os.environ.get(key) or spec["default"]
if spec["required"] and not value:
issues.append(ConfigIssue(
key=key,
severity=spec["severity"],
message=spec["message"],
fix=spec["fix"],
))
elif not value and spec["severity"] != "error":
issues.append(ConfigIssue(
key=key,
severity=spec["severity"],
message=spec["message"],
fix=spec["fix"],
))
# Check validation rules
for rule in VALIDATION_RULES:
value = config.get(rule["key"])
if value is not None:
if not rule["validate"](value):
issues.append(ConfigIssue(
key=rule["key"],
severity="error",
message=rule["message"].format(v=value),
fix=rule["fix"],
))
errors = sum(1 for i in issues if i.severity == "error")
warnings = sum(1 for i in issues if i.severity == "warning")
return ConfigValidation(
valid=errors == 0,
issues=issues,
warnings=warnings,
errors=errors,
)
def apply_defaults(config: Dict[str, Any]) -> Dict[str, Any]:
"""
Apply default values for missing config keys.
Args:
config: Configuration dictionary
Returns:
Config with defaults applied
"""
result = dict(config)
for key, spec in REQUIRED_KEYS.items():
if key not in result or not result[key]:
default = os.environ.get(key) or spec["default"]
if default:
result[key] = default
logger.debug("Applied default for %s", key)
# Apply validation defaults
if "idle_minutes" not in result or not result["idle_minutes"] or result["idle_minutes"] <= 0:
result["idle_minutes"] = 30
logger.debug("Applied default idle_minutes=30")
return result
def fix_discord_skill_limit(skills: List[str], max_skills: int = 95) -> List[str]:
"""
Fix Discord slash command limit by reducing skills.
Args:
skills: List of skill names
max_skills: Maximum skills to register (default 95, leaving room for built-ins)
Returns:
Reduced skill list
"""
if len(skills) <= max_skills:
return skills
logger.warning(
"Discord skill limit: %d skills exceeds %d limit, truncating",
len(skills), max_skills
)
# Keep first max_skills (alphabetical priority)
return sorted(skills)[:max_skills]
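Because truncation sorts before slicing, which skills survive depends only on name order, not registration order — while lists under the limit pass through untouched, original order preserved. A runnable copy of the function to illustrate both behaviors:

```python
import logging

logger = logging.getLogger(__name__)

def fix_discord_skill_limit(skills, max_skills=95):
    """Truncate to max_skills, keeping alphabetical priority."""
    if len(skills) <= max_skills:
        return skills  # under the limit: returned as-is, order preserved
    logger.warning(
        "Discord skill limit: %d skills exceeds %d limit, truncating",
        len(skills), max_skills,
    )
    return sorted(skills)[:max_skills]

skills = [f"skill_{i:03d}" for i in range(120)]
kept = fix_discord_skill_limit(skills, max_skills=95)
```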
def validate_provider_config(provider: str, config: Dict[str, Any]) -> Optional[ConfigIssue]:
"""
Validate provider-specific configuration.
Args:
provider: Provider name
config: Provider config
Returns:
ConfigIssue if invalid, None if valid
"""
if provider == "local-llama.cpp":
# Check if llama.cpp is configured
if not config.get("model_path") and not config.get("base_url"):
return ConfigIssue(
key=f"provider.{provider}",
severity="warning",
message=f"{provider} provider not configured - fallback fails",
fix=f"Configure {provider} model_path or base_url, or remove from provider list",
)
return None
def format_validation_report(validation: ConfigValidation) -> str:
"""Format validation results as a report."""
lines = [
"=" * 50,
"GATEWAY CONFIG VALIDATION",
"=" * 50,
"",
f"Status: {'VALID' if validation.valid else 'INVALID'}",
f"Errors: {validation.errors}",
f"Warnings: {validation.warnings}",
"",
]
if validation.issues:
lines.append("Issues:")
for issue in validation.issues:
icon = "❌" if issue.severity == "error" else "⚠️" if issue.severity == "warning" else "ℹ️"
lines.append(f" {icon} [{issue.key}] {issue.message}")
lines.append(f" Fix: {issue.fix}")
lines.append("")
return "\n".join(lines)


@@ -12,7 +12,7 @@ import logging
from pathlib import Path
from datetime import datetime
from dataclasses import dataclass
from typing import Dict, List, Optional, Any, Union
from typing import Dict, List, Optional, Any
from hermes_cli.config import get_hermes_home

gateway/display_config.py Normal file

@@ -0,0 +1,194 @@
"""Per-platform display/verbosity configuration resolver.
Provides ``resolve_display_setting()`` — the single entry-point for reading
display settings with platform-specific overrides and sensible defaults.
Resolution order (first non-None wins):
1. ``display.platforms.<platform>.<key>`` — explicit per-platform user override
2. ``display.<key>`` — global user setting
3. ``_PLATFORM_DEFAULTS[<platform>][<key>]`` — built-in sensible default
4. ``_GLOBAL_DEFAULTS[<key>]`` — built-in global default
Exception: ``display.streaming`` is CLI-only. Gateway streaming follows the
top-level ``streaming`` config unless ``display.platforms.<platform>.streaming``
sets an explicit per-platform override.
Backward compatibility: ``display.tool_progress_overrides`` is still read as a
fallback for ``tool_progress`` when no ``display.platforms`` entry exists. A
config migration (version bump) automatically moves the old format into the new
``display.platforms`` structure.
"""
from __future__ import annotations
from typing import Any
# ---------------------------------------------------------------------------
# Overrideable display settings and their global defaults
# ---------------------------------------------------------------------------
# These are the settings that can be configured per-platform.
# Other display settings (compact, personality, skin, etc.) are CLI-only
# and don't participate in per-platform resolution.
_GLOBAL_DEFAULTS: dict[str, Any] = {
"tool_progress": "all",
"show_reasoning": False,
"tool_preview_length": 0,
"streaming": None, # None = follow top-level streaming config
}
# ---------------------------------------------------------------------------
# Sensible per-platform defaults — tiered by platform capability
# ---------------------------------------------------------------------------
# Tier 1 (high): Supports message editing, typically personal/team use
# Tier 2 (medium): Supports editing but often workspace/customer-facing
# Tier 3 (low): No edit support — each progress msg is permanent
# Tier 4 (minimal): Batch/non-interactive delivery
_TIER_HIGH = {
"tool_progress": "all",
"show_reasoning": False,
"tool_preview_length": 40,
"streaming": None, # follow global
}
_TIER_MEDIUM = {
"tool_progress": "new",
"show_reasoning": False,
"tool_preview_length": 40,
"streaming": None,
}
_TIER_LOW = {
"tool_progress": "off",
"show_reasoning": False,
"tool_preview_length": 40,
"streaming": False,
}
_TIER_MINIMAL = {
"tool_progress": "off",
"show_reasoning": False,
"tool_preview_length": 0,
"streaming": False,
}
_PLATFORM_DEFAULTS: dict[str, dict[str, Any]] = {
# Tier 1 — full edit support, personal/team use
"telegram": _TIER_HIGH,
"discord": _TIER_HIGH,
# Tier 2 — edit support, often customer/workspace channels
"slack": _TIER_MEDIUM,
"mattermost": _TIER_MEDIUM,
"matrix": _TIER_MEDIUM,
"feishu": _TIER_MEDIUM,
# Tier 3 — no edit support, progress messages are permanent
"signal": _TIER_LOW,
"whatsapp": _TIER_MEDIUM, # Baileys bridge supports /edit
"bluebubbles": _TIER_LOW,
"weixin": _TIER_LOW,
"wecom": _TIER_LOW,
"wecom_callback": _TIER_LOW,
"dingtalk": _TIER_LOW,
# Tier 4 — batch or non-interactive delivery
"email": _TIER_MINIMAL,
"sms": _TIER_MINIMAL,
"webhook": _TIER_MINIMAL,
"homeassistant": _TIER_MINIMAL,
"api_server": {**_TIER_HIGH, "tool_preview_length": 0},
}
# Canonical set of per-platform overrideable keys (for validation).
OVERRIDEABLE_KEYS = frozenset(_GLOBAL_DEFAULTS.keys())
def resolve_display_setting(
user_config: dict,
platform_key: str,
setting: str,
fallback: Any = None,
) -> Any:
"""Resolve a display setting with per-platform override support.
Parameters
----------
user_config : dict
The full parsed config.yaml dict.
platform_key : str
Platform config key (e.g. ``"telegram"``, ``"slack"``). Use
``_platform_config_key(source.platform)`` from gateway/run.py.
setting : str
Display setting name (e.g. ``"tool_progress"``, ``"show_reasoning"``).
fallback : Any
Fallback value when the setting isn't found anywhere.
Returns
-------
The resolved value, or *fallback* if nothing is configured.
"""
display_cfg = user_config.get("display") or {}
# 1. Explicit per-platform override (display.platforms.<platform>.<key>)
platforms = display_cfg.get("platforms") or {}
plat_overrides = platforms.get(platform_key)
if isinstance(plat_overrides, dict):
val = plat_overrides.get(setting)
if val is not None:
return _normalise(setting, val)
# 1b. Backward compat: display.tool_progress_overrides.<platform>
if setting == "tool_progress":
legacy = display_cfg.get("tool_progress_overrides")
if isinstance(legacy, dict):
val = legacy.get(platform_key)
if val is not None:
return _normalise(setting, val)
# 2. Global user setting (display.<key>). Skip display.streaming because
# that key controls only CLI terminal streaming; gateway token streaming is
# governed by the top-level streaming config plus per-platform overrides.
if setting != "streaming":
val = display_cfg.get(setting)
if val is not None:
return _normalise(setting, val)
# 3. Built-in platform default
plat_defaults = _PLATFORM_DEFAULTS.get(platform_key)
if plat_defaults:
val = plat_defaults.get(setting)
if val is not None:
return val
# 4. Built-in global default
val = _GLOBAL_DEFAULTS.get(setting)
if val is not None:
return val
return fallback
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def _normalise(setting: str, value: Any) -> Any:
"""Normalise YAML quirks (bare ``off`` → False in YAML 1.1)."""
if setting == "tool_progress":
if value is False:
return "off"
if value is True:
return "all"
return str(value).lower()
if setting in ("show_reasoning", "streaming"):
if isinstance(value, str):
return value.lower() in ("true", "1", "yes", "on")
return bool(value)
if setting == "tool_preview_length":
try:
return int(value)
except (TypeError, ValueError):
return 0
return value
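The `tool_progress` branch exists because YAML 1.1 loaders parse bare `off`/`on` as booleans, so a user writing `tool_progress: off` in config.yaml arrives here as `False` rather than the string `"off"`. A standalone copy of the normaliser for experimenting with those quirks:

```python
from typing import Any

def normalise(setting: str, value: Any) -> Any:
    """Undo YAML 1.1 quirks: bare ``off``/``on`` load as booleans."""
    if setting == "tool_progress":
        if value is False:   # YAML parsed a bare `off`
            return "off"
        if value is True:    # YAML parsed a bare `on`
            return "all"
        return str(value).lower()
    if setting in ("show_reasoning", "streaming"):
        if isinstance(value, str):
            return value.lower() in ("true", "1", "yes", "on")
        return bool(value)
    if setting == "tool_preview_length":
        try:
            return int(value)
        except (TypeError, ValueError):
            return 0
    return value
```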


@@ -9,9 +9,11 @@ Each adapter handles:
"""
from .base import BasePlatformAdapter, MessageEvent, SendResult
from .qqbot import QQAdapter
__all__ = [
"BasePlatformAdapter",
"MessageEvent",
"SendResult",
"QQAdapter",
]


@@ -2,6 +2,11 @@
OpenAI-compatible API server platform adapter.
Exposes an HTTP server with endpoints:
- GET / — Hermes Web Console operator cockpit
- GET /api/gui/health — cockpit health payload
- GET /api/gui/browser/status — browser runtime status
- POST /api/gui/browser/heal — self-healing browser cleanup
- GET /api/gui/discovery — ecosystem discovery for compatible frontends
- POST /v1/chat/completions — OpenAI Chat Completions format (stateless; opt-in session continuity via X-Hermes-Session-Id header)
- POST /v1/responses — OpenAI Responses API format (stateful via previous_response_id)
- GET /v1/responses/{response_id} — Retrieve a stored response
@@ -10,6 +15,7 @@ Exposes an HTTP server with endpoints:
- POST /v1/runs — start a run, returns run_id immediately (202)
- GET /v1/runs/{run_id}/events — SSE stream of structured lifecycle events
- GET /health — health check
- GET /health/detailed — rich status for cross-container dashboard probing
Any OpenAI-compatible frontend (Open WebUI, LobeChat, LibreChat,
AnythingLLM, NextChat, ChatBox, etc.) can connect to hermes-agent
@@ -53,6 +59,67 @@ DEFAULT_HOST = "127.0.0.1"
DEFAULT_PORT = 8642
MAX_STORED_RESPONSES = 100
MAX_REQUEST_BYTES = 1_000_000 # 1 MB default limit for POST bodies
CHAT_COMPLETIONS_SSE_KEEPALIVE_SECONDS = 30.0
MAX_NORMALIZED_TEXT_LENGTH = 65_536 # 64 KB cap for normalized content parts
MAX_CONTENT_LIST_SIZE = 1_000 # Max items when content is an array
def _normalize_chat_content(
content: Any, *, _max_depth: int = 10, _depth: int = 0,
) -> str:
"""Normalize OpenAI chat message content into a plain text string.
Some clients (Open WebUI, LobeChat, etc.) send content as an array of
typed parts instead of a plain string::
[{"type": "text", "text": "hello"}, {"type": "input_text", "text": "..."}]
This function flattens those into a single string so the agent pipeline
(which expects strings) doesn't choke.
Defensive limits prevent abuse: recursion depth, list size, and output
length are all bounded.
"""
if _depth > _max_depth:
return ""
if content is None:
return ""
if isinstance(content, str):
return content[:MAX_NORMALIZED_TEXT_LENGTH] if len(content) > MAX_NORMALIZED_TEXT_LENGTH else content
if isinstance(content, list):
parts: List[str] = []
items = content[:MAX_CONTENT_LIST_SIZE] if len(content) > MAX_CONTENT_LIST_SIZE else content
for item in items:
if isinstance(item, str):
if item:
parts.append(item[:MAX_NORMALIZED_TEXT_LENGTH])
elif isinstance(item, dict):
item_type = str(item.get("type") or "").strip().lower()
if item_type in {"text", "input_text", "output_text"}:
text = item.get("text", "")
if text:
try:
parts.append(str(text)[:MAX_NORMALIZED_TEXT_LENGTH])
except Exception:
pass
# Silently skip image_url / other non-text parts
elif isinstance(item, list):
nested = _normalize_chat_content(item, _max_depth=_max_depth, _depth=_depth + 1)
if nested:
parts.append(nested)
# Check accumulated size
if sum(len(p) for p in parts) >= MAX_NORMALIZED_TEXT_LENGTH:
break
result = "\n".join(parts)
return result[:MAX_NORMALIZED_TEXT_LENGTH] if len(result) > MAX_NORMALIZED_TEXT_LENGTH else result
# Fallback for unexpected types (int, float, bool, etc.)
try:
result = str(content)
return result[:MAX_NORMALIZED_TEXT_LENGTH] if len(result) > MAX_NORMALIZED_TEXT_LENGTH else result
except Exception:
return ""
def check_api_server_requirements() -> bool:
@@ -453,6 +520,8 @@ class APIServerAdapter(BasePlatformAdapter):
session_id: Optional[str] = None,
stream_delta_callback=None,
tool_progress_callback=None,
tool_start_callback=None,
tool_complete_callback=None,
) -> Any:
"""
Create an AIAgent instance using the gateway's runtime config.
@@ -491,6 +560,8 @@ class APIServerAdapter(BasePlatformAdapter):
platform="api_server",
stream_delta_callback=stream_delta_callback,
tool_progress_callback=tool_progress_callback,
tool_start_callback=tool_start_callback,
tool_complete_callback=tool_complete_callback,
session_db=self._ensure_session_db(),
fallback_model=fallback_model,
)
@@ -504,6 +575,27 @@ class APIServerAdapter(BasePlatformAdapter):
"""GET /health — simple health check."""
return web.json_response({"status": "ok", "platform": "hermes-agent"})
async def _handle_health_detailed(self, request: "web.Request") -> "web.Response":
"""GET /health/detailed — rich status for cross-container dashboard probing.
Returns gateway state, connected platforms, PID, and uptime so the
dashboard can display full status without needing a shared PID file or
/proc access. No authentication required.
"""
from gateway.status import read_runtime_status
runtime = read_runtime_status() or {}
return web.json_response({
"status": "ok",
"platform": "hermes-agent",
"gateway_state": runtime.get("gateway_state"),
"platforms": runtime.get("platforms", {}),
"active_agents": runtime.get("active_agents", 0),
"exit_reason": runtime.get("exit_reason"),
"updated_at": runtime.get("updated_at"),
"pid": os.getpid(),
})
async def _handle_models(self, request: "web.Request") -> "web.Response":
"""GET /v1/models — return hermes-agent as an available model."""
auth_err = self._check_auth(request)
@@ -552,7 +644,7 @@ class APIServerAdapter(BasePlatformAdapter):
for msg in messages:
role = msg.get("role", "")
content = msg.get("content", "")
content = _normalize_chat_content(msg.get("content", ""))
if role == "system":
# Accumulate system messages
if system_prompt is None:
@@ -762,7 +854,11 @@ class APIServerAdapter(BasePlatformAdapter):
"""
import queue as _q
sse_headers = {"Content-Type": "text/event-stream", "Cache-Control": "no-cache"}
sse_headers = {
"Content-Type": "text/event-stream",
"Cache-Control": "no-cache",
"X-Accel-Buffering": "no",
}
# CORS middleware can't inject headers into StreamResponse after
# prepare() flushes them, so resolve CORS headers up front.
origin = request.headers.get("Origin", "")
@@ -775,6 +871,8 @@ class APIServerAdapter(BasePlatformAdapter):
await response.prepare(request)
try:
last_activity = time.monotonic()
# Role chunk
role_chunk = {
"id": completion_id, "object": "chat.completion.chunk",
@@ -782,6 +880,7 @@ class APIServerAdapter(BasePlatformAdapter):
"choices": [{"index": 0, "delta": {"role": "assistant"}, "finish_reason": None}],
}
await response.write(f"data: {json.dumps(role_chunk)}\n\n".encode())
last_activity = time.monotonic()
# Helper — route a queue item to the correct SSE event.
async def _emit(item):
@@ -805,6 +904,7 @@ class APIServerAdapter(BasePlatformAdapter):
"choices": [{"index": 0, "delta": {"content": item}, "finish_reason": None}],
}
await response.write(f"data: {json.dumps(content_chunk)}\n\n".encode())
return time.monotonic()
# Stream content chunks as they arrive from the agent
loop = asyncio.get_event_loop()
@@ -819,16 +919,19 @@ class APIServerAdapter(BasePlatformAdapter):
delta = stream_q.get_nowait()
if delta is None:
break
await _emit(delta)
last_activity = await _emit(delta)
except _q.Empty:
break
break
if time.monotonic() - last_activity >= CHAT_COMPLETIONS_SSE_KEEPALIVE_SECONDS:
await response.write(b": keepalive\n\n")
last_activity = time.monotonic()
continue
if delta is None: # End of stream sentinel
break
await _emit(delta)
last_activity = await _emit(delta)
# Get usage from completed agent
usage = {"input_tokens": 0, "output_tokens": 0, "total_tokens": 0}
@@ -871,6 +974,427 @@ class APIServerAdapter(BasePlatformAdapter):
return response
async def _write_sse_responses(
self,
request: "web.Request",
response_id: str,
model: str,
created_at: int,
stream_q,
agent_task,
agent_ref,
conversation_history: List[Dict[str, str]],
user_message: str,
instructions: Optional[str],
conversation: Optional[str],
store: bool,
session_id: str,
) -> "web.StreamResponse":
"""Write an SSE stream for POST /v1/responses (OpenAI Responses API).
Emits spec-compliant event types as the agent runs:
- ``response.created`` — initial envelope (status=in_progress)
- ``response.output_text.delta`` / ``response.output_text.done`` —
streamed assistant text
- ``response.output_item.added`` / ``response.output_item.done``
with ``item.type == "function_call"`` — when the agent invokes a
tool (both events fire; the ``done`` event carries the finalized
``arguments`` string)
- ``response.output_item.added`` with
``item.type == "function_call_output"`` — tool result with
``{call_id, output, status}``
- ``response.completed`` — terminal event carrying the full
response object with all output items + usage (same payload
shape as the non-streaming path for parity)
- ``response.failed`` — terminal event on agent error
If the client disconnects mid-stream, ``agent.interrupt()`` is
called so the agent stops issuing upstream LLM calls, then the
asyncio task is cancelled. When ``store=True`` the full response
is persisted to the ResponseStore in a ``finally`` block so GET
/v1/responses/{id} and ``previous_response_id`` chaining work the
same as the batch path.
"""
import queue as _q
sse_headers = {
"Content-Type": "text/event-stream",
"Cache-Control": "no-cache",
"X-Accel-Buffering": "no",
}
origin = request.headers.get("Origin", "")
cors = self._cors_headers_for_origin(origin) if origin else None
if cors:
sse_headers.update(cors)
if session_id:
sse_headers["X-Hermes-Session-Id"] = session_id
response = web.StreamResponse(status=200, headers=sse_headers)
await response.prepare(request)
# State accumulated during the stream
final_text_parts: List[str] = []
# Track open function_call items by name so we can emit a matching
# ``done`` event when the tool completes. Order preserved.
pending_tool_calls: List[Dict[str, Any]] = []
# Output items we've emitted so far (used to build the terminal
# response.completed payload). Kept in the order they appeared.
emitted_items: List[Dict[str, Any]] = []
# Monotonic counter for output_index (spec requires it).
output_index = 0
# Monotonic counter for call_id generation if the agent doesn't
# provide one (it doesn't, from tool_progress_callback).
call_counter = 0
# Canonical Responses SSE events include a monotonically increasing
# sequence_number. Add it server-side for every emitted event so
# clients that validate the OpenAI event schema can parse our stream.
sequence_number = 0
# Track the assistant message item id + content index for text
# delta events — the spec ties deltas to a specific item.
message_item_id = f"msg_{uuid.uuid4().hex[:24]}"
message_output_index: Optional[int] = None
message_opened = False
async def _write_event(event_type: str, data: Dict[str, Any]) -> None:
nonlocal sequence_number
if "sequence_number" not in data:
data["sequence_number"] = sequence_number
sequence_number += 1
payload = f"event: {event_type}\ndata: {json.dumps(data)}\n\n"
await response.write(payload.encode())
def _envelope(status: str) -> Dict[str, Any]:
env: Dict[str, Any] = {
"id": response_id,
"object": "response",
"status": status,
"created_at": created_at,
"model": model,
}
return env
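The `_write_event` helper above stamps a monotonically increasing `sequence_number` into every payload before framing it as an SSE event. A standalone sketch of the same encoding (the class name is illustrative):

```python
import json
from typing import Any, Dict

class SSEEventEncoder:
    """Encode Responses-API-style SSE events, stamping a monotonically
    increasing sequence_number into each payload (unless already set)."""

    def __init__(self) -> None:
        self.sequence_number = 0

    def encode(self, event_type: str, data: Dict[str, Any]) -> bytes:
        if "sequence_number" not in data:
            data["sequence_number"] = self.sequence_number
        self.sequence_number += 1
        # SSE frame: named event line, JSON data line, blank-line terminator.
        return f"event: {event_type}\ndata: {json.dumps(data)}\n\n".encode()
```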
final_response_text = ""
agent_error: Optional[str] = None
usage: Dict[str, int] = {"input_tokens": 0, "output_tokens": 0, "total_tokens": 0}
try:
# response.created — initial envelope, status=in_progress
created_env = _envelope("in_progress")
created_env["output"] = []
await _write_event("response.created", {
"type": "response.created",
"response": created_env,
})
last_activity = time.monotonic()
async def _open_message_item() -> None:
"""Emit response.output_item.added for the assistant message
the first time any text delta arrives."""
nonlocal message_opened, message_output_index, output_index
if message_opened:
return
message_opened = True
message_output_index = output_index
output_index += 1
item = {
"id": message_item_id,
"type": "message",
"status": "in_progress",
"role": "assistant",
"content": [],
}
await _write_event("response.output_item.added", {
"type": "response.output_item.added",
"output_index": message_output_index,
"item": item,
})
async def _emit_text_delta(delta_text: str) -> None:
await _open_message_item()
final_text_parts.append(delta_text)
await _write_event("response.output_text.delta", {
"type": "response.output_text.delta",
"item_id": message_item_id,
"output_index": message_output_index,
"content_index": 0,
"delta": delta_text,
"logprobs": [],
})
async def _emit_tool_started(payload: Dict[str, Any]) -> str:
"""Emit response.output_item.added for a function_call.
Returns the call_id so the matching completion event can
reference it. Prefer the real ``tool_call_id`` from the
agent when available; fall back to a generated call id for
safety in tests or older code paths.
"""
nonlocal output_index, call_counter
call_counter += 1
call_id = payload.get("tool_call_id") or f"call_{response_id[5:]}_{call_counter}"
args = payload.get("arguments", {})
if isinstance(args, dict):
arguments_str = json.dumps(args)
else:
arguments_str = str(args)
item = {
"id": f"fc_{uuid.uuid4().hex[:24]}",
"type": "function_call",
"status": "in_progress",
"name": payload.get("name", ""),
"call_id": call_id,
"arguments": arguments_str,
}
idx = output_index
output_index += 1
pending_tool_calls.append({
"call_id": call_id,
"name": payload.get("name", ""),
"arguments": arguments_str,
"item_id": item["id"],
"output_index": idx,
})
emitted_items.append({
"type": "function_call",
"name": payload.get("name", ""),
"arguments": arguments_str,
"call_id": call_id,
})
await _write_event("response.output_item.added", {
"type": "response.output_item.added",
"output_index": idx,
"item": item,
})
return call_id
async def _emit_tool_completed(payload: Dict[str, Any]) -> None:
"""Emit response.output_item.done (function_call) followed
by response.output_item.added (function_call_output)."""
nonlocal output_index
call_id = payload.get("tool_call_id")
result = payload.get("result", "")
pending = None
if call_id:
for i, p in enumerate(pending_tool_calls):
if p["call_id"] == call_id:
pending = pending_tool_calls.pop(i)
break
if pending is None:
# Completion without a matching start — skip to avoid
# emitting orphaned done events.
return
# function_call done
done_item = {
"id": pending["item_id"],
"type": "function_call",
"status": "completed",
"name": pending["name"],
"call_id": pending["call_id"],
"arguments": pending["arguments"],
}
await _write_event("response.output_item.done", {
"type": "response.output_item.done",
"output_index": pending["output_index"],
"item": done_item,
})
# function_call_output added (result)
result_str = result if isinstance(result, str) else json.dumps(result)
output_parts = [{"type": "input_text", "text": result_str}]
output_item = {
"id": f"fco_{uuid.uuid4().hex[:24]}",
"type": "function_call_output",
"call_id": pending["call_id"],
"output": output_parts,
"status": "completed",
}
idx = output_index
output_index += 1
emitted_items.append({
"type": "function_call_output",
"call_id": pending["call_id"],
"output": output_parts,
})
await _write_event("response.output_item.added", {
"type": "response.output_item.added",
"output_index": idx,
"item": output_item,
})
await _write_event("response.output_item.done", {
"type": "response.output_item.done",
"output_index": idx,
"item": output_item,
})
# Main drain loop — thread-safe queue fed by agent callbacks.
async def _dispatch(it) -> None:
"""Route a queue item to the correct SSE emitter.
Plain strings are text deltas. Tagged tuples with
``__tool_started__`` / ``__tool_completed__`` prefixes
are tool lifecycle events.
"""
if isinstance(it, tuple) and len(it) == 2 and isinstance(it[0], str):
tag, payload = it
if tag == "__tool_started__":
await _emit_tool_started(payload)
elif tag == "__tool_completed__":
await _emit_tool_completed(payload)
# Unknown tags are silently ignored (forward-compat).
elif isinstance(it, str):
await _emit_text_delta(it)
# Other types (non-string, non-tuple) are silently dropped.
loop = asyncio.get_running_loop()
while True:
try:
item = await loop.run_in_executor(None, lambda: stream_q.get(timeout=0.5))
except _q.Empty:
if agent_task.done():
# Drain remaining
while True:
try:
item = stream_q.get_nowait()
if item is None:
break
await _dispatch(item)
last_activity = time.monotonic()
except _q.Empty:
break
break
if time.monotonic() - last_activity >= CHAT_COMPLETIONS_SSE_KEEPALIVE_SECONDS:
await response.write(b": keepalive\n\n")
last_activity = time.monotonic()
continue
if item is None: # EOS sentinel
break
await _dispatch(item)
last_activity = time.monotonic()
# Pick up agent result + usage from the completed task
try:
result, agent_usage = await agent_task
usage = agent_usage or usage
# If the agent produced a final_response but no text
# deltas were streamed (e.g. some providers only emit
# the full response at the end), emit a single fallback
# delta so Responses clients still receive a live text part.
agent_final = result.get("final_response", "") if isinstance(result, dict) else ""
if agent_final and not final_text_parts:
await _emit_text_delta(agent_final)
if agent_final and not final_response_text:
final_response_text = agent_final
if isinstance(result, dict) and result.get("error") and not final_response_text:
agent_error = result["error"]
except Exception as e: # noqa: BLE001
logger.error("Error running agent for streaming responses: %s", e, exc_info=True)
agent_error = str(e)
# Close the message item if it was opened
final_response_text = "".join(final_text_parts) or final_response_text
if message_opened:
await _write_event("response.output_text.done", {
"type": "response.output_text.done",
"item_id": message_item_id,
"output_index": message_output_index,
"content_index": 0,
"text": final_response_text,
"logprobs": [],
})
msg_done_item = {
"id": message_item_id,
"type": "message",
"status": "completed",
"role": "assistant",
"content": [
{"type": "output_text", "text": final_response_text}
],
}
await _write_event("response.output_item.done", {
"type": "response.output_item.done",
"output_index": message_output_index,
"item": msg_done_item,
})
# Always append a final message item in the completed
# response envelope so clients that only parse the terminal
# payload still see the assistant text. This mirrors the
# shape produced by _extract_output_items in the batch path.
final_items: List[Dict[str, Any]] = list(emitted_items)
final_items.append({
"type": "message",
"role": "assistant",
"content": [
{"type": "output_text", "text": final_response_text or (agent_error or "")}
],
})
if agent_error:
failed_env = _envelope("failed")
failed_env["output"] = final_items
failed_env["error"] = {"message": agent_error, "type": "server_error"}
failed_env["usage"] = {
"input_tokens": usage.get("input_tokens", 0),
"output_tokens": usage.get("output_tokens", 0),
"total_tokens": usage.get("total_tokens", 0),
}
await _write_event("response.failed", {
"type": "response.failed",
"response": failed_env,
})
else:
completed_env = _envelope("completed")
completed_env["output"] = final_items
completed_env["usage"] = {
"input_tokens": usage.get("input_tokens", 0),
"output_tokens": usage.get("output_tokens", 0),
"total_tokens": usage.get("total_tokens", 0),
}
await _write_event("response.completed", {
"type": "response.completed",
"response": completed_env,
})
# Persist for future chaining / GET retrieval, mirroring
# the batch path behavior.
if store:
full_history = list(conversation_history)
full_history.append({"role": "user", "content": user_message})
if isinstance(result, dict) and result.get("messages"):
full_history.extend(result["messages"])
else:
full_history.append({"role": "assistant", "content": final_response_text})
self._response_store.put(response_id, {
"response": completed_env,
"conversation_history": full_history,
"instructions": instructions,
"session_id": session_id,
})
if conversation:
self._response_store.set_conversation(conversation, response_id)
except (ConnectionResetError, ConnectionAbortedError, BrokenPipeError, OSError):
# Client disconnected — interrupt the agent so it stops
# making upstream LLM calls, then cancel the task.
agent = agent_ref[0] if agent_ref else None
if agent is not None:
try:
agent.interrupt("SSE client disconnected")
except Exception:
pass
if not agent_task.done():
agent_task.cancel()
try:
await agent_task
except (asyncio.CancelledError, Exception):
pass
logger.info("SSE client disconnected; interrupted agent task %s", response_id)
return response
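The drain loop above bridges a thread-safe `queue.Queue` (fed by agent callbacks running off the event loop) into the async SSE writer, using the executor-get timeout as a keepalive tick. A minimal standalone sketch of that pattern — the names here are illustrative, not the adapter's API, and keepalives are collected into a list instead of written to a stream:

```python
import asyncio
import queue
import threading

async def drain(q: queue.Queue, keepalive: float = 0.5) -> list:
    """Consume items from a thread-fed queue until a None sentinel.

    The blocking q.get() runs in the default executor so the event
    loop stays responsive; a timeout-driven queue.Empty serves as
    the keepalive tick (the real loop writes b": keepalive\\n\\n").
    """
    loop = asyncio.get_running_loop()
    out = []
    while True:
        try:
            item = await loop.run_in_executor(None, lambda: q.get(timeout=keepalive))
        except queue.Empty:
            out.append(": keepalive")  # stand-in for an SSE comment write
            continue
        if item is None:  # EOS sentinel
            break
        out.append(item)
    return out

def producer(q: queue.Queue) -> None:
    # Simulates agent callbacks pushing text deltas from another thread.
    for chunk in ("hel", "lo"):
        q.put(chunk)
    q.put(None)

q: queue.Queue = queue.Queue()
threading.Thread(target=producer, args=(q,)).start()
received = asyncio.run(drain(q))
```

The real handler layers tool lifecycle tuples and an `agent_task.done()` drain on top of this core, but the executor/timeout shape is the same.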
async def _handle_responses(self, request: "web.Request") -> "web.Response":
"""POST /v1/responses — OpenAI Responses API format."""
auth_err = self._check_auth(request)
@@ -914,18 +1438,7 @@ class APIServerAdapter(BasePlatformAdapter):
input_messages.append({"role": "user", "content": item})
elif isinstance(item, dict):
role = item.get("role", "user")
content = item.get("content", "")
# Handle content that may be a list of content parts
if isinstance(content, list):
text_parts = []
for part in content:
if isinstance(part, dict) and part.get("type") == "input_text":
text_parts.append(part.get("text", ""))
elif isinstance(part, dict) and part.get("type") == "output_text":
text_parts.append(part.get("text", ""))
elif isinstance(part, str):
text_parts.append(part)
content = "\n".join(text_parts)
content = _normalize_chat_content(item.get("content", ""))
input_messages.append({"role": role, "content": content})
else:
return web.json_response(_openai_error("'input' must be a string or array"), status=400)
@@ -952,11 +1465,13 @@ class APIServerAdapter(BasePlatformAdapter):
if previous_response_id:
logger.debug("Both conversation_history and previous_response_id provided; using conversation_history")
stored_session_id = None
if not conversation_history and previous_response_id:
stored = self._response_store.get(previous_response_id)
if stored is None:
return web.json_response(_openai_error(f"Previous response not found: {previous_response_id}"), status=404)
conversation_history = list(stored.get("conversation_history", []))
stored_session_id = stored.get("session_id")
# If no instructions provided, carry forward from previous
if instructions is None:
instructions = stored.get("instructions")
@@ -974,8 +1489,83 @@ class APIServerAdapter(BasePlatformAdapter):
if body.get("truncation") == "auto" and len(conversation_history) > 100:
conversation_history = conversation_history[-100:]
# Run the agent (with Idempotency-Key support)
session_id = str(uuid.uuid4())
# Reuse session from previous_response_id chain so the dashboard
# groups the entire conversation under one session entry.
session_id = stored_session_id or str(uuid.uuid4())
stream = bool(body.get("stream", False))
if stream:
# Streaming branch — emit OpenAI Responses SSE events as the
# agent runs so frontends can render text deltas and tool
# calls in real time. See _write_sse_responses for details.
import queue as _q
_stream_q: _q.Queue = _q.Queue()
def _on_delta(delta):
# None from the agent is a CLI box-close signal, not EOS.
# Forwarding would kill the SSE stream prematurely; the
# SSE writer detects completion via agent_task.done().
if delta is not None:
_stream_q.put(delta)
def _on_tool_progress(event_type, name, preview, args, **kwargs):
"""Queue non-start tool progress events if needed in future.
The structured Responses stream uses ``tool_start_callback``
and ``tool_complete_callback`` for exact call-id correlation,
so progress events are currently ignored here.
"""
return
def _on_tool_start(tool_call_id, function_name, function_args):
"""Queue a started tool for live function_call streaming."""
_stream_q.put(("__tool_started__", {
"tool_call_id": tool_call_id,
"name": function_name,
"arguments": function_args or {},
}))
def _on_tool_complete(tool_call_id, function_name, function_args, function_result):
"""Queue a completed tool result for live function_call_output streaming."""
_stream_q.put(("__tool_completed__", {
"tool_call_id": tool_call_id,
"name": function_name,
"arguments": function_args or {},
"result": function_result,
}))
agent_ref = [None]
agent_task = asyncio.ensure_future(self._run_agent(
user_message=user_message,
conversation_history=conversation_history,
ephemeral_system_prompt=instructions,
session_id=session_id,
stream_delta_callback=_on_delta,
tool_progress_callback=_on_tool_progress,
tool_start_callback=_on_tool_start,
tool_complete_callback=_on_tool_complete,
agent_ref=agent_ref,
))
response_id = f"resp_{uuid.uuid4().hex[:28]}"
model_name = body.get("model", self._model_name)
created_at = int(time.time())
return await self._write_sse_responses(
request=request,
response_id=response_id,
model=model_name,
created_at=created_at,
stream_q=_stream_q,
agent_task=agent_task,
agent_ref=agent_ref,
conversation_history=conversation_history,
user_message=user_message,
instructions=instructions,
conversation=conversation,
store=store,
session_id=session_id,
)
async def _compute_response():
return await self._run_agent(
@@ -1050,6 +1640,7 @@ class APIServerAdapter(BasePlatformAdapter):
"response": response_data,
"conversation_history": full_history,
"instructions": instructions,
"session_id": session_id,
})
# Update conversation mapping so the next request with the same
# conversation name automatically chains to this response
@@ -1403,6 +1994,8 @@ class APIServerAdapter(BasePlatformAdapter):
session_id: Optional[str] = None,
stream_delta_callback=None,
tool_progress_callback=None,
tool_start_callback=None,
tool_complete_callback=None,
agent_ref: Optional[list] = None,
) -> tuple:
"""
@@ -1424,6 +2017,8 @@ class APIServerAdapter(BasePlatformAdapter):
session_id=session_id,
stream_delta_callback=stream_delta_callback,
tool_progress_callback=tool_progress_callback,
tool_start_callback=tool_start_callback,
tool_complete_callback=tool_complete_callback,
)
if agent_ref is not None:
agent_ref[0] = agent
@@ -1560,10 +2155,12 @@ class APIServerAdapter(BasePlatformAdapter):
if previous_response_id:
logger.debug("Both conversation_history and previous_response_id provided; using conversation_history")
stored_session_id = None
if not conversation_history and previous_response_id:
stored = self._response_store.get(previous_response_id)
if stored:
conversation_history = list(stored.get("conversation_history", []))
stored_session_id = stored.get("session_id")
if instructions is None:
instructions = stored.get("instructions")
@@ -1582,7 +2179,7 @@ class APIServerAdapter(BasePlatformAdapter):
)
conversation_history.append({"role": msg["role"], "content": str(content)})
session_id = body.get("session_id") or run_id
session_id = body.get("session_id") or stored_session_id or run_id
ephemeral_system_prompt = instructions
async def _run_and_close():
@@ -1711,6 +2308,30 @@ class APIServerAdapter(BasePlatformAdapter):
# BasePlatformAdapter interface
# ------------------------------------------------------------------
def _register_routes(self, app: "web.Application") -> None:
"""Register API and operator-cockpit routes on an aiohttp app."""
from gateway.platforms.api_server_ui import maybe_register_web_console
app.router.add_get("/health", self._handle_health)
app.router.add_get("/health/detailed", self._handle_health_detailed)
app.router.add_get("/v1/health", self._handle_health)
app.router.add_get("/v1/models", self._handle_models)
app.router.add_post("/v1/chat/completions", self._handle_chat_completions)
app.router.add_post("/v1/responses", self._handle_responses)
app.router.add_get("/v1/responses/{response_id}", self._handle_get_response)
app.router.add_delete("/v1/responses/{response_id}", self._handle_delete_response)
app.router.add_get("/api/jobs", self._handle_list_jobs)
app.router.add_post("/api/jobs", self._handle_create_job)
app.router.add_get("/api/jobs/{job_id}", self._handle_get_job)
app.router.add_patch("/api/jobs/{job_id}", self._handle_update_job)
app.router.add_delete("/api/jobs/{job_id}", self._handle_delete_job)
app.router.add_post("/api/jobs/{job_id}/pause", self._handle_pause_job)
app.router.add_post("/api/jobs/{job_id}/resume", self._handle_resume_job)
app.router.add_post("/api/jobs/{job_id}/run", self._handle_run_job)
app.router.add_post("/v1/runs", self._handle_runs)
app.router.add_get("/v1/runs/{run_id}/events", self._handle_run_events)
maybe_register_web_console(app)
async def connect(self) -> bool:
"""Start the aiohttp web server."""
if not AIOHTTP_AVAILABLE:
@@ -1721,25 +2342,7 @@ class APIServerAdapter(BasePlatformAdapter):
mws = [mw for mw in (cors_middleware, body_limit_middleware, security_headers_middleware) if mw is not None]
self._app = web.Application(middlewares=mws)
self._app["api_server_adapter"] = self
self._app.router.add_get("/health", self._handle_health)
self._app.router.add_get("/v1/health", self._handle_health)
self._app.router.add_get("/v1/models", self._handle_models)
self._app.router.add_post("/v1/chat/completions", self._handle_chat_completions)
self._app.router.add_post("/v1/responses", self._handle_responses)
self._app.router.add_get("/v1/responses/{response_id}", self._handle_get_response)
self._app.router.add_delete("/v1/responses/{response_id}", self._handle_delete_response)
# Cron jobs management API
self._app.router.add_get("/api/jobs", self._handle_list_jobs)
self._app.router.add_post("/api/jobs", self._handle_create_job)
self._app.router.add_get("/api/jobs/{job_id}", self._handle_get_job)
self._app.router.add_patch("/api/jobs/{job_id}", self._handle_update_job)
self._app.router.add_delete("/api/jobs/{job_id}", self._handle_delete_job)
self._app.router.add_post("/api/jobs/{job_id}/pause", self._handle_pause_job)
self._app.router.add_post("/api/jobs/{job_id}/resume", self._handle_resume_job)
self._app.router.add_post("/api/jobs/{job_id}/run", self._handle_run_job)
# Structured event streaming
self._app.router.add_post("/v1/runs", self._handle_runs)
self._app.router.add_get("/v1/runs/{run_id}/events", self._handle_run_events)
self._register_routes(self._app)
# Start background sweep to clean up orphaned (unconsumed) run streams
sweep_task = asyncio.create_task(self._sweep_orphaned_runs())
try:
@@ -1758,6 +2361,23 @@ class APIServerAdapter(BasePlatformAdapter):
)
return False
# Refuse to start network-accessible with a placeholder key.
# Ported from openclaw/openclaw#64586.
if is_network_accessible(self._host) and self._api_key:
try:
from hermes_cli.auth import has_usable_secret
if not has_usable_secret(self._api_key, min_length=8):
logger.error(
"[%s] Refusing to start: API_SERVER_KEY is set to a "
"placeholder value. Generate a real secret "
"(e.g. `openssl rand -hex 32`) and set API_SERVER_KEY "
"before exposing the API server on %s.",
self.name, self._host,
)
return False
except ImportError:
pass
# Port conflict detection — fail fast if port is already in use
try:
with _socket.socket(_socket.AF_INET, _socket.SOCK_STREAM) as _s:


@@ -0,0 +1,194 @@
"""Thin operator web console for the API server.
This keeps the UI intentionally small: an aiohttp-mounted cockpit that
surfaces Hermes health, browser runtime state, and ecosystem discovery
without introducing a second heavyweight frontend architecture.
"""
from __future__ import annotations
import json
from html import escape
from typing import Any, Dict
from aiohttp import web
from tools.browser_tool import browser_runtime_heal, browser_runtime_status
_DISCOVERY_FRONTENDS = [
"Open WebUI",
"LobeChat",
"LibreChat",
"AnythingLLM",
"NextChat",
"ChatBox",
]
def _adapter(request: web.Request):
return request.app["api_server_adapter"]
def _auth_or_none(request: web.Request):
adapter = _adapter(request)
return adapter._check_auth(request)
def _render_console_html(adapter) -> str:
health = {
"platform": "api_server",
"host": adapter._host,
"port": adapter._port,
"model": adapter._model_name,
"auth_required": bool(adapter._api_key),
}
health_json = escape(json.dumps(health, indent=2, ensure_ascii=False))
return f'''<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Hermes Web Console</title>
<style>
:root {{ color-scheme: dark; --bg: #0b1020; --panel: #121933; --fg: #e5ecff; --muted: #9aa8d1; --accent: #72b8ff; --good: #6dde8a; }}
body {{ margin: 0; font-family: ui-monospace, SFMono-Regular, Menlo, monospace; background: var(--bg); color: var(--fg); }}
header {{ padding: 20px 24px; border-bottom: 1px solid #243056; }}
main {{ padding: 24px; display: grid; gap: 16px; grid-template-columns: repeat(auto-fit, minmax(320px, 1fr)); }}
.panel {{ background: var(--panel); border: 1px solid #243056; border-radius: 12px; padding: 16px; box-shadow: 0 10px 30px rgba(0,0,0,.2); }}
h1, h2 {{ margin: 0 0 12px; }}
h1 {{ font-size: 24px; color: var(--accent); }}
h2 {{ font-size: 16px; color: var(--accent); }}
p, li, label {{ color: var(--muted); line-height: 1.5; }}
pre {{ margin: 0; white-space: pre-wrap; word-break: break-word; color: var(--fg); }}
button, input {{ font: inherit; }}
button {{ background: #1e2a52; color: var(--fg); border: 1px solid #39508f; border-radius: 8px; padding: 10px 14px; cursor: pointer; }}
button:hover {{ border-color: var(--accent); }}
input {{ width: 100%; box-sizing: border-box; background: #0d142a; color: var(--fg); border: 1px solid #243056; border-radius: 8px; padding: 10px 12px; margin-bottom: 12px; }}
.row {{ display: flex; gap: 8px; flex-wrap: wrap; margin-bottom: 12px; }}
.badge {{ display: inline-block; color: var(--good); border: 1px solid #2f6940; border-radius: 999px; padding: 2px 10px; margin-left: 10px; font-size: 12px; }}
ul {{ margin: 0; padding-left: 18px; }}
code {{ color: var(--good); }}
</style>
</head>
<body>
<header>
<h1>Hermes Web Console <span class="badge">operator cockpit</span></h1>
<p>Thin web UI over the existing API server, browser runtime, and streaming endpoints.</p>
</header>
<main>
<section class="panel">
<h2>Gateway Health</h2>
<pre id="health">{health_json}</pre>
</section>
<section class="panel">
<h2>Browser Cockpit</h2>
<label for="apiKey">Optional API key (only needed when API_SERVER_KEY is configured)</label>
<input id="apiKey" type="password" placeholder="sk-... or bearer token">
<div class="row">
<button id="refreshBtn">Refresh Browser Status</button>
<button id="healBtn">Heal Browser Layer</button>
</div>
<pre id="browserStatus">Loading...</pre>
</section>
<section class="panel">
<h2>Ecosystem Discovery</h2>
<ul>
<li><code>GET /v1/models</code> — OpenAI-compatible model discovery</li>
<li><code>POST /v1/chat/completions</code> — chat frontend compatibility</li>
<li><code>POST /v1/responses</code> — stateful responses API</li>
<li><code>POST /v1/runs</code> + <code>GET /v1/runs/{{run_id}}/events</code> — SSE lifecycle stream</li>
<li><code>GET /api/gui/browser/status</code> — browser runtime status</li>
<li><code>POST /api/gui/browser/heal</code> — cleanup + orphan reaper</li>
</ul>
<pre id="discovery">Loading...</pre>
</section>
</main>
<script>
function authHeaders() {{
const key = document.getElementById('apiKey').value.trim();
return key ? {{ 'Authorization': 'Bearer ' + key }} : {{}};
}}
async function loadJson(path, options) {{
const response = await fetch(path, options);
const text = await response.text();
try {{ return {{ status: response.status, body: JSON.parse(text) }}; }}
catch (_) {{ return {{ status: response.status, body: {{ raw: text }} }}; }}
}}
async function refreshBrowser() {{
const result = await loadJson('/api/gui/browser/status', {{ headers: authHeaders() }});
document.getElementById('browserStatus').textContent = JSON.stringify(result, null, 2);
}}
async function healBrowser() {{
const result = await loadJson('/api/gui/browser/heal', {{ method: 'POST', headers: authHeaders() }});
document.getElementById('browserStatus').textContent = JSON.stringify(result, null, 2);
}}
async function loadDiscovery() {{
const result = await loadJson('/api/gui/discovery');
document.getElementById('discovery').textContent = JSON.stringify(result, null, 2);
}}
document.getElementById('refreshBtn').addEventListener('click', refreshBrowser);
document.getElementById('healBtn').addEventListener('click', healBrowser);
refreshBrowser();
loadDiscovery();
</script>
</body>
</html>'''
async def handle_web_console_index(request: web.Request) -> web.Response:
return web.Response(text=_render_console_html(_adapter(request)), content_type="text/html")
async def handle_gui_health(request: web.Request) -> web.Response:
adapter = _adapter(request)
return web.json_response({
"status": "ok",
"platform": "api_server",
"host": adapter._host,
"port": adapter._port,
"model": adapter._model_name,
"auth_required": bool(adapter._api_key),
})
async def handle_browser_status(request: web.Request) -> web.Response:
auth_err = _auth_or_none(request)
if auth_err is not None:
return auth_err
return web.json_response(browser_runtime_status())
async def handle_browser_heal(request: web.Request) -> web.Response:
auth_err = _auth_or_none(request)
if auth_err is not None:
return auth_err
return web.json_response(browser_runtime_heal())
async def handle_discovery(request: web.Request) -> web.Response:
adapter = _adapter(request)
return web.json_response({
"frontends": _DISCOVERY_FRONTENDS,
"operator_cockpit": {
"root": "/",
"health": "/api/gui/health",
"browser_status": "/api/gui/browser/status",
"browser_heal": "/api/gui/browser/heal",
},
"openai_compatible": {
"models": "/v1/models",
"chat_completions": "/v1/chat/completions",
"responses": "/v1/responses",
"runs": "/v1/runs",
"run_events": "/v1/runs/{run_id}/events",
"model_name": adapter._model_name,
},
})
def maybe_register_web_console(app: web.Application) -> None:
"""Mount the operator web console and GUI helper routes on *app*."""
app.router.add_get("/", handle_web_console_index)
app.router.add_get("/api/gui/health", handle_gui_health)
app.router.add_get("/api/gui/browser/status", handle_browser_status)
app.router.add_post("/api/gui/browser/heal", handle_browser_heal)
app.router.add_get("/api/gui/discovery", handle_discovery)


@@ -21,6 +21,59 @@ from urllib.parse import urlsplit
logger = logging.getLogger(__name__)
def utf16_len(s: str) -> int:
"""Count UTF-16 code units in *s*.
Telegram's message-length limit (4,096) is measured in UTF-16 code units,
**not** Unicode code-points. Characters outside the Basic Multilingual
Plane (emoji like 😀, CJK Extension B, musical symbols, …) are encoded as
surrogate pairs and therefore consume **two** UTF-16 code units each, even
though Python's ``len()`` counts them as one.
Ported from nearai/ironclaw#2304 which discovered the same discrepancy in
Rust's ``chars().count()``.
"""
return len(s.encode("utf-16-le")) // 2
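For instance, an emoji outside the BMP counts once for Python's `len` but twice in UTF-16 — the unit Telegram budgets against. Restating the function standalone so the arithmetic can be checked:

```python
def utf16_len(s: str) -> int:
    """UTF-16 code units in s (surrogate pairs count as two)."""
    return len(s.encode("utf-16-le")) // 2

# BMP text: code points and UTF-16 units agree.
assert len("hello") == utf16_len("hello") == 5
# U+1F600 GRINNING FACE lies outside the BMP: one code point, two units.
assert len("😀") == 1 and utf16_len("😀") == 2
# Mixed string: 4 ASCII chars + 1 emoji = 5 code points but 6 UTF-16 units.
assert len("chat😀") == 5 and utf16_len("chat😀") == 6
```

A naive `len`-based 4,096 check would therefore accept emoji-heavy messages that Telegram rejects.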
def _prefix_within_utf16_limit(s: str, limit: int) -> str:
"""Return the longest prefix of *s* whose UTF-16 length ≤ *limit*.
Unlike a plain ``s[:limit]``, this respects surrogate-pair boundaries so
we never slice a multi-code-unit character in half.
"""
if utf16_len(s) <= limit:
return s
# Binary search for the longest safe prefix
lo, hi = 0, len(s)
while lo < hi:
mid = (lo + hi + 1) // 2
if utf16_len(s[:mid]) <= limit:
lo = mid
else:
hi = mid - 1
return s[:lo]
def _custom_unit_to_cp(s: str, budget: int, len_fn) -> int:
"""Return the largest codepoint offset *n* such that ``len_fn(s[:n]) <= budget``.
Used by :meth:`BasePlatformAdapter.truncate_message` when *len_fn* measures
length in units different from Python codepoints (e.g. UTF-16 code units).
Uses binary search, which costs O(log n) calls to *len_fn*.
"""
if len_fn(s) <= budget:
return len(s)
lo, hi = 0, len(s)
while lo < hi:
mid = (lo + hi + 1) // 2
if len_fn(s[:mid]) <= budget:
lo = mid
else:
hi = mid - 1
return lo
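A quick check of the binary search: with a UTF-16 budget of 3 units, the longest safe prefix of `"a😀b"` is `"a😀"` (1 + 2 units), and a budget of 2 cannot split the emoji's surrogate pair, so only `"a"` survives. The helper is restated standalone here so the offsets can be verified:

```python
def utf16_len(s: str) -> int:
    return len(s.encode("utf-16-le")) // 2

def custom_unit_to_cp(s: str, budget: int, len_fn) -> int:
    """Largest codepoint offset n such that len_fn(s[:n]) <= budget."""
    if len_fn(s) <= budget:
        return len(s)
    lo, hi = 0, len(s)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if len_fn(s[:mid]) <= budget:
            lo = mid
        else:
            hi = mid - 1
    return lo

s = "a😀b"  # UTF-16 lengths of prefixes: "" 0, "a" 1, "a😀" 3, "a😀b" 4
assert custom_unit_to_cp(s, 3, utf16_len) == 2   # "a😀" fits exactly
assert s[:custom_unit_to_cp(s, 3, utf16_len)] == "a😀"
assert custom_unit_to_cp(s, 2, utf16_len) == 1   # cannot split the emoji
assert custom_unit_to_cp(s, 10, utf16_len) == 3  # whole string fits
```

Because the search is over codepoint offsets, the returned slice can never land inside a surrogate pair.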
def is_network_accessible(host: str) -> bool:
"""Return True if *host* would expose the server beyond loopback.
@@ -823,7 +876,36 @@ class BasePlatformAdapter(ABC):
result = handler(self)
if asyncio.iscoroutine(result):
await result
def _acquire_platform_lock(self, scope: str, identity: str, resource_desc: str) -> bool:
"""Acquire a scoped lock for this adapter. Returns True on success."""
from gateway.status import acquire_scoped_lock
self._platform_lock_scope = scope
self._platform_lock_identity = identity
acquired, existing = acquire_scoped_lock(
scope, identity, metadata={'platform': self.platform.value}
)
if acquired:
return True
owner_pid = existing.get('pid') if isinstance(existing, dict) else None
message = (
f'{resource_desc} already in use'
+ (f' (PID {owner_pid})' if owner_pid else '')
+ '. Stop the other gateway first.'
)
logger.error('[%s] %s', self.name, message)
self._set_fatal_error(f'{scope}_lock', message, retryable=False)
return False
def _release_platform_lock(self) -> None:
"""Release the scoped lock acquired by _acquire_platform_lock."""
identity = getattr(self, '_platform_lock_identity', None)
if not identity:
return
from gateway.status import release_scoped_lock
release_scoped_lock(self._platform_lock_scope, identity)
self._platform_lock_identity = None
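The acquire/release pair above delegates to `gateway.status`; its contract — first caller wins, conflicting callers get the owner's metadata back, and release is idempotent per identity — can be sketched with a hypothetical in-process registry. The real implementation presumably persists owner metadata (such as the PID) across processes; everything below is a stand-in for illustration only:

```python
from typing import Any, Dict, Optional, Tuple

# Hypothetical stand-in for gateway.status's lock registry.
_LOCKS: Dict[str, Dict[str, Any]] = {}

def acquire_scoped_lock(scope: str, identity: str,
                        metadata: Optional[dict] = None) -> Tuple[bool, Optional[dict]]:
    """Return (True, None) on success, (False, existing_owner) on conflict."""
    existing = _LOCKS.get(scope)
    if existing is not None and existing["identity"] != identity:
        return False, existing
    _LOCKS[scope] = {"identity": identity, **(metadata or {})}
    return True, None

def release_scoped_lock(scope: str, identity: str) -> None:
    """Release only if *identity* actually owns the scope."""
    owner = _LOCKS.get(scope)
    if owner is not None and owner["identity"] == identity:
        del _LOCKS[scope]

ok, _ = acquire_scoped_lock("telegram", "bot-a", metadata={"pid": 123})
conflict, owner = acquire_scoped_lock("telegram", "bot-b")
release_scoped_lock("telegram", "bot-a")
retry, _ = acquire_scoped_lock("telegram", "bot-b")
```

This mirrors how `_acquire_platform_lock` surfaces the conflicting owner's PID in its fatal-error message before refusing to start.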
@property
def name(self) -> str:
"""Human-readable name for this adapter."""
@@ -1542,6 +1624,21 @@ class BasePlatformAdapter(ABC):
# streaming already delivered the text (already_sent=True) or
# when the message was queued behind an active agent. Log at
# DEBUG to avoid noisy warnings for expected behavior.
#
# Suppress stale response when the session was interrupted by a
# new message that hasn't been consumed yet. The pending message
# is processed by the pending-message handler below (#8221/#2483).
if (
response
and interrupt_event.is_set()
and session_key in self._pending_messages
):
logger.info(
"[%s] Suppressing stale response for interrupted session %s",
self.name,
session_key,
)
response = None
if not response:
logger.debug("[%s] Handler returned empty/None response for %s", self.name, event.source.chat_id)
if response:
@@ -1857,7 +1954,11 @@ class BasePlatformAdapter(ABC):
return content
@staticmethod
def truncate_message(content: str, max_length: int = 4096) -> List[str]:
def truncate_message(
content: str,
max_length: int = 4096,
len_fn: Optional["Callable[[str], int]"] = None,
) -> List[str]:
"""
Split a long message into chunks, preserving code block boundaries.
@@ -1869,11 +1970,16 @@ class BasePlatformAdapter(ABC):
Args:
content: The full message content
max_length: Maximum length per chunk (platform-specific)
len_fn: Optional length function for measuring string length.
Defaults to ``len`` (Unicode code-points). Pass
``utf16_len`` for platforms that measure message
length in UTF-16 code units (e.g. Telegram).
Returns:
List of message chunks
"""
if len(content) <= max_length:
_len = len_fn or len
if _len(content) <= max_length:
return [content]
INDICATOR_RESERVE = 10 # room for " (XX/XX)"
@@ -1892,22 +1998,33 @@ class BasePlatformAdapter(ABC):
# How much body text we can fit after accounting for the prefix,
# a potential closing fence, and the chunk indicator.
headroom = max_length - INDICATOR_RESERVE - len(prefix) - len(FENCE_CLOSE)
headroom = max_length - INDICATOR_RESERVE - _len(prefix) - _len(FENCE_CLOSE)
if headroom < 1:
headroom = max_length // 2
# Everything remaining fits in one final chunk
if len(prefix) + len(remaining) <= max_length - INDICATOR_RESERVE:
if _len(prefix) + _len(remaining) <= max_length - INDICATOR_RESERVE:
chunks.append(prefix + remaining)
break
# Find a natural split point (prefer newlines, then spaces)
region = remaining[:headroom]
# Find a natural split point (prefer newlines, then spaces).
# When _len != len (e.g. utf16_len for Telegram), headroom is
# measured in the custom unit. We need codepoint-based slice
# positions that stay within the custom-unit budget.
#
# _custom_unit_to_cp() maps a custom-unit budget to the largest
# codepoint offset whose custom length ≤ budget.
if _len is not len:
# Map headroom (custom units) → codepoint slice length
_cp_limit = _custom_unit_to_cp(remaining, headroom, _len)
else:
_cp_limit = headroom
region = remaining[:_cp_limit]
split_at = region.rfind("\n")
if split_at < headroom // 2:
if split_at < _cp_limit // 2:
split_at = region.rfind(" ")
if split_at < 1:
split_at = headroom
split_at = _cp_limit
# Avoid splitting inside an inline code span (`...`).
# If the text before split_at has an odd number of unescaped
@@ -1927,7 +2044,7 @@ class BasePlatformAdapter(ABC):
safe_split = candidate.rfind(" ", 0, last_bt)
nl_split = candidate.rfind("\n", 0, last_bt)
safe_split = max(safe_split, nl_split)
if safe_split > headroom // 4:
if safe_split > _cp_limit // 4:
split_at = safe_split
chunk_body = remaining[:split_at]

Some files were not shown because too many files have changed in this diff.