- Update __version__ to 0.2.0 (was 0.1.0)
- Update pyproject.toml to match
- Add RELEASE_v0.2.0.md with comprehensive changelog covering:
  - All 231 merged PRs
  - 120 resolved issues
  - 74+ contributors credited
  - Organized by feature area with PR links
- Fix version mismatch: __init__.py had 'v1.0.0', pyproject.toml had '0.1.0'
Now both use '0.1.0' (no v prefix — added in display code only)
- Add __release_date__ for CalVer date tracking alongside SemVer version
- Fix double-v bug in cmd_version (was printing 'vv1.0.0')
- Update banner title to show 'Hermes Agent v0.1.0 (2026.3.12)' format
- Update cli.py banner to match new format
- Add scripts/release.py: full release automation tool
  - Generates categorized changelogs from git history
  - Maps git authors to GitHub @mentions (70+ contributors)
  - Supports dry-run preview and --publish mode
  - Creates annotated CalVer git tags + GitHub Releases
  - Bumps semver in source files automatically
  - Usage: python scripts/release.py --bump minor --publish
- Add .release_notes.md to .gitignore
Versioning scheme: CalVer tags (v2026.3.12) + SemVer display (v0.1.0)
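A minimal sketch of how the two schemes combine in the banner, assuming
the attribute names from this changelog (the format string mirrors the
'Hermes Agent v0.1.0 (2026.3.12)' example above):

    # Sketch only: attribute names from this changelog, format assumed.
    __version__ = "0.1.0"           # SemVer, no 'v' prefix in source
    __release_date__ = "2026.3.12"  # CalVer, matches the v2026.3.12 git tag

    def banner_title() -> str:
        # The 'v' prefix is added in display code only
        return f"Hermes Agent v{__version__} ({__release_date__})"

    print(banner_title())  # -> Hermes Agent v0.1.0 (2026.3.12)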
- Remove test_nous_api_setup_preserves_model_provider_metadata (nous-api
provider no longer exists, test selected Nous OAuth which hangs waiting
for browser login)
- Fix test_nous_oauth_setup prompt_choice index: 1→0 (Nous Portal is
now first option after nous-api removal)
- gateway/run.py: Take main's _resolve_gateway_model() helper
- hermes_cli/setup.py: Re-apply nous-api removal after merge brought
it back. Fix provider_idx offset (Custom is now index 3, not 4).
- tests/hermes_cli/test_setup.py: Fix custom setup test index (3→4)
- Custom endpoints can serve any model, so skip validation for
provider='custom' in validate_requested_model(). Previously it
rejected every model name, since there's no static catalog or
live API to check against (sketch after this list).
- Show clear setup instructions when switching to custom endpoint
without OPENAI_BASE_URL/OPENAI_API_KEY configured.
- Added curated model lists for Nous Portal and OpenAI Codex to
_PROVIDER_MODELS so /model shows their available models.
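A minimal sketch of the validation skip, assuming this signature for
validate_requested_model() and a flat _PROVIDER_MODELS catalog (both
names appear in this log; the exact shapes are assumptions):

    # Sketch only: real signature and catalog layout may differ.
    _PROVIDER_MODELS = {
        "openrouter": ["anthropic/claude-opus-4.6", "anthropic/claude-sonnet-4.5"],
        "nous": ["claude-opus-4-6", "gemini-3-flash"],
    }

    def validate_requested_model(provider: str, model: str) -> None:
        if provider == "custom":
            # Custom endpoints can serve any model: there is no static
            # catalog and no live API to check against, so accept as-is.
            return
        known = _PROVIDER_MODELS.get(provider, [])
        if known and model not in known:
            raise ValueError(f"Unknown model {model!r} for provider {provider!r}")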
Both /model and /provider now show the same unified display:
  Current: anthropic/claude-opus-4.6 via OpenRouter
  Authenticated providers & models:
    [openrouter] ← active
      anthropic/claude-opus-4.6 ← current
      anthropic/claude-sonnet-4.5
      ...
    [nous]
      claude-opus-4-6
      gemini-3-flash
      ...
    [openai-codex]
      gpt-5.2-codex
      gpt-5.1-codex-mini
      ...
  Not configured: Z.AI / GLM, Kimi / Moonshot, ...
  Switch model:    /model <model-name>
  Switch provider: /model <provider>:<model-name>
  Example:         /model nous:claude-opus-4-6
Users can see all authenticated providers and their models at a glance,
making it easy to switch mid-conversation.
Also added curated model lists for Nous Portal and OpenAI Codex to
hermes_cli/models.py.
Nous Portal backend will become a transparent proxy for OpenRouter-
specific parameters (provider preferences, etc.), so keep sending them
to all providers. The reasoning disabled fix is kept (that's a real
constraint of the Nous endpoint).
Two bugs in _build_api_kwargs that broke Nous Portal:
1. Provider preferences (only, ignore, order, sort) are OpenRouter-
specific routing features. They were being sent in extra_body to ALL
providers, including Nous Portal. When the config had
providers_only=['google-vertex'], Nous Portal returned 404 'Inference
host not found' because it doesn't have a google-vertex backend.
Fix: Only include provider preferences when _is_openrouter is True.
2. Reasoning config with enabled=false was being sent to Nous Portal,
which requires reasoning and returns 400 'Reasoning is mandatory for
this endpoint and cannot be disabled.'
Fix: Omit the reasoning parameter for Nous when enabled=false.
Root cause found via HERMES_DUMP_REQUESTS=1 which showed the exact
request payload being sent to Nous Portal's inference API.
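A minimal sketch of both guards, assuming dict-shaped config and kwargs
(the field names come from this log; everything else is illustrative):

    # Sketch only: config/kwargs shapes are assumptions.
    def _build_api_kwargs(config: dict, *, is_openrouter: bool, is_nous: bool) -> dict:
        extra_body: dict = {}

        # Fix 1: provider preferences are OpenRouter routing features;
        # other backends (e.g. Nous Portal) 404 on unknown hosts.
        if is_openrouter and config.get("providers_only"):
            extra_body["provider"] = {"only": config["providers_only"]}

        # Fix 2: Nous Portal returns 400 when reasoning is disabled,
        # so omit the parameter entirely instead of sending enabled=false.
        reasoning = config.get("reasoning")
        if reasoning is not None:
            if not (is_nous and not reasoning.get("enabled", True)):
                extra_body["reasoning"] = reasoning

        return {"extra_body": extra_body}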
Model selection now comes exclusively from config.yaml (set via
'hermes model' or 'hermes setup'). The LLM_MODEL env var is no longer
read or written anywhere in production code.
Why: env vars are per-process/per-user and would conflict in
multi-agent or multi-tenant setups. Config.yaml is file-based and
can be scoped per-user or eventually per-session.
Changes:
- cli.py: Read model from CLI_CONFIG only, not LLM_MODEL/OPENAI_MODEL
(sketch after this list)
- hermes_cli/auth.py: _save_model_choice() no longer writes LLM_MODEL
to .env
- hermes_cli/setup.py: Remove 12 save_env_value('LLM_MODEL', ...)
calls from all provider setup flows
- gateway/run.py: Remove LLM_MODEL fallback (HERMES_MODEL still works
for gateway process runtime)
- cron/scheduler.py: Same
- agent/auxiliary_client.py: Remove LLM_MODEL from custom endpoint
model detection
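A minimal sketch of the config-only read, assuming a YAML config file
and a top-level 'model' key (CLI_CONFIG appears in this log; the path
and key layout are assumptions):

    # Sketch only: config path and key names are assumptions.
    from pathlib import Path
    import yaml

    CLI_CONFIG = Path.home() / ".hermes" / "config.yaml"

    def resolve_model() -> str | None:
        # No LLM_MODEL / OPENAI_MODEL env fallback anymore:
        # config.yaml is the single source of truth.
        if not CLI_CONFIG.exists():
            return None
        data = yaml.safe_load(CLI_CONFIG.read_text()) or {}
        return data.get("model")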
Phase 2 of the provider router migration — route the main agent's
client construction and fallback activation through
resolve_provider_client() instead of duplicated ad-hoc logic.
run_agent.py:
- __init__: When no explicit api_key/base_url, use
resolve_provider_client(provider, raw_codex=True) for client
construction (sketch below). Explicit creds (from the CLI or gateway
runtime provider) still construct the client directly.
- _try_activate_fallback: Replace _resolve_fallback_credentials and
its duplicated _FALLBACK_API_KEY_PROVIDERS / _FALLBACK_OAUTH_PROVIDERS
dicts with a single resolve_provider_client() call. The router
handles all provider types (API-key, OAuth, Codex) centrally.
- Remove _resolve_fallback_credentials method and both fallback dicts.
agent/auxiliary_client.py:
- Add raw_codex parameter to resolve_provider_client(). When True,
returns the raw OpenAI client for Codex providers instead of wrapping
in CodexAuxiliaryClient. The main agent needs this for direct
responses.stream() access.
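A minimal sketch of the new construction path, assuming the import path
from this log and a simplified __init__ (the real method has more
branches):

    # Sketch only: surrounding __init__ logic is simplified.
    from openai import OpenAI
    from agent.auxiliary_client import resolve_provider_client  # path from this log

    def _make_client(provider: str, api_key=None, base_url=None):
        if api_key or base_url:
            # Explicit creds from the CLI / gateway runtime provider:
            # construct directly, bypassing the router.
            return OpenAI(api_key=api_key, base_url=base_url)
        # Central path: raw_codex=True yields the bare OpenAI client for
        # Codex providers so the main agent can call responses.stream().
        return resolve_provider_client(provider, raw_codex=True)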
3251 passed, 2 pre-existing unrelated failures.
Update 14 test files to use the new call_llm/async_call_llm mock
patterns instead of the old get_text_auxiliary_client/
get_vision_auxiliary_client tuple returns.
- vision_tools tests: mock async_call_llm instead of _aux_async_client
- browser tests: mock call_llm instead of _aux_vision_client
- flush_memories tests: mock call_llm instead of get_text_auxiliary_client
- session_search tests: mock async_call_llm with RuntimeError
- mcp_tool tests: fix whitelist model config, use side_effect for
multi-response tests
- auxiliary_config_bridge: update for model=None (resolved in router)
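A minimal sketch of the new mock pattern; the patch targets depend on
where each module imports call_llm / async_call_llm from, so the module
paths below are assumptions:

    # Sketch only: patch targets are assumptions.
    from unittest.mock import AsyncMock, patch

    def test_vision_tool_mocks_async_call_llm():
        with patch("tools.vision_tools.async_call_llm",
                   new=AsyncMock(return_value="a description")):
            ...  # exercise the tool; the mock owns the request lifecycle

    def test_mcp_tool_multi_response():
        # side_effect feeds one canned response per call
        with patch("tools.mcp_tool.call_llm", side_effect=["first", "second"]):
            ...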
3251 passed, 2 pre-existing unrelated failures.
Add centralized call_llm() and async_call_llm() functions that own the
full LLM request lifecycle:
1. Resolve provider + model from task config or explicit args
2. Get or create a cached client for that provider
3. Format request args (max_tokens handling, provider extra_body)
4. Make the API call with max_tokens/max_completion_tokens retry
5. Return the response
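A minimal sketch of the entry point built from the five steps above;
the signature and helper names are assumptions:

    # Sketch only: helpers are hypothetical stand-ins for the real steps.
    def call_llm(*, task=None, provider=None, model=None, messages=None, **kw):
        provider, model = _resolve_task_config(task, provider, model)  # step 1
        client = _get_cached_client(provider)                          # step 2
        req = _format_request(provider, model, messages, **kw)         # step 3
        try:                                                           # step 4
            return client.chat.completions.create(**req)               # step 5
        except Exception:
            # Real code matches the specific 'unsupported parameter'
            # error before retrying with max_completion_tokens.
            req["max_completion_tokens"] = req.pop("max_tokens", None)
            return client.chat.completions.create(**req)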
Config: expanded auxiliary section with provider:model slots for all
tasks (compression, vision, web_extract, session_search, skills_hub,
mcp, flush_memories). Config version bumped to 7.
Migrated all auxiliary consumers:
- context_compressor.py: uses call_llm(task='compression')
- vision_tools.py: uses async_call_llm(task='vision')
- web_tools.py: uses async_call_llm(task='web_extract')
- session_search_tool.py: uses async_call_llm(task='session_search')
- browser_tool.py: uses call_llm(task='vision'/'web_extract')
- mcp_tool.py: uses call_llm(task='mcp')
- skills_guard.py: uses call_llm(provider='openrouter')
- run_agent.py flush_memories: uses call_llm(task='flush_memories')
Tests updated for context_compressor and MCP tool. Some test mocks
still need updating (15 remaining failures from mock pattern changes,
2 pre-existing).
Route all remaining ad-hoc auxiliary LLM call sites through
resolve_provider_client() so auth, headers, and API format (Chat
Completions vs Responses API) are handled consistently in one place.
Files changed:
- tools/openrouter_client.py: Replace manual AsyncOpenAI construction
with resolve_provider_client('openrouter', async_mode=True). The
shared client module now delegates entirely to the router.
- tools/skills_guard.py: Replace inline OpenAI client construction
(hardcoded OpenRouter base_url, manual api_key lookup, manual
headers) with resolve_provider_client('openrouter'). Remove unused
OPENROUTER_BASE_URL import.
- trajectory_compressor.py: Add _detect_provider() to map the config
base_url to a provider name, then route through
resolve_provider_client(). Falls back to raw construction for
unrecognized custom endpoints (sketch after this list).
- mini_swe_runner.py: Route default case (no explicit api_key/base_url)
through resolve_provider_client('openrouter') with auto-detection
fallback. Preserves direct construction when explicit creds are
passed via CLI args.
- agent/auxiliary_client.py: Fix stale module docstring — vision auto
mode now correctly documents that Codex and custom endpoints are
tried (not skipped).
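A minimal sketch of _detect_provider(), assuming substring matching on
the configured base_url (the mapping entries are illustrative):

    # Sketch only: the URL-to-provider mapping is an assumption.
    def _detect_provider(base_url: str | None) -> str | None:
        if not base_url:
            return "openrouter"
        for needle, provider in (
            ("openrouter.ai", "openrouter"),
            ("api.openai.com", "openai"),
        ):
            if needle in base_url:
                return provider
        return None  # unrecognized custom endpoint: caller builds a raw client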
Three interconnected fixes for auxiliary client infrastructure:
1. CENTRALIZED PROVIDER ROUTER (auxiliary_client.py)
Add resolve_provider_client(provider, model, async_mode) — a single
entry point for creating properly configured clients. Given a provider
name and optional model, it handles auth lookup (env vars, OAuth
tokens, auth.json), base URL resolution, provider-specific headers,
and API format differences (Chat Completions vs Responses API for
Codex). All auxiliary consumers should route through this instead of
ad-hoc env var lookups.
Refactored get_text_auxiliary_client, get_async_text_auxiliary_client,
and get_vision_auxiliary_client to use the router internally.
2. FIX CODEX VISION BYPASS (vision_tools.py)
vision_tools.py was constructing a raw AsyncOpenAI client from the
sync vision client's api_key/base_url, completely bypassing the Codex
Responses API adapter. When the vision provider resolved to Codex,
the raw client would hit chatgpt.com/backend-api/codex with
chat.completions.create(), but that endpoint only supports the
Responses API.
Fix: Added get_async_vision_auxiliary_client() which properly wraps
Codex into AsyncCodexAuxiliaryClient. vision_tools.py now uses this
instead of manual client construction.
3. FIX COMPRESSION FALLBACK + VISION ERROR HANDLING
- context_compressor.py: Removed _get_fallback_client() which blindly
looked for OPENAI_API_KEY + OPENAI_BASE_URL (fails for Codex OAuth,
API-key providers, users without OPENAI_BASE_URL set). Replaced
with a fallback loop through resolve_provider_client() for each
known provider, with same-provider dedup (sketch below).
- vision_tools.py: Added error detection for vision capability
failures. Returns clear message to the model when the configured
model doesn't support vision, instead of a generic error.
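A minimal sketch of the fallback loop from fix 3, assuming a known-
provider list and that resolve_provider_client() raises when a provider
has no usable credentials (both assumptions):

    # Sketch only: provider list and error handling are assumptions.
    from agent.auxiliary_client import resolve_provider_client  # path from this log

    _KNOWN_PROVIDERS = ["openrouter", "nous", "openai-codex"]

    def _fallback_client(primary_provider: str):
        for provider in _KNOWN_PROVIDERS:
            if provider == primary_provider:
                continue  # same-provider dedup: the primary already failed
            try:
                return resolve_provider_client(provider)
            except Exception:
                continue  # not configured for this user; try the next one
        return None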
Addresses #886
When hermes-agent runs as a systemd service, Docker container, or
headless daemon, the stdout pipe can become unavailable (idle timeout,
buffer exhaustion, socket reset). Any print() call then raises
OSError: [Errno 5] Input/output error, crashing run_conversation()
and causing cron jobs to fail.
Rather than wrapping individual print() calls (68 in run_conversation
alone), this adds a transparent _SafeWriter wrapper installed once at
the start of run_conversation(). It delegates all writes to the real
stdout and silently catches OSError. Zero overhead on the happy path,
comprehensive coverage of all print calls including future ones.
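A minimal sketch of the wrapper; the class name comes from this log,
the exact method set is an assumption:

    # Sketch only: the real class may proxy more of the stream API.
    import sys

    class _SafeWriter:
        def __init__(self, stream):
            self._stream = stream

        def write(self, data):
            try:
                return self._stream.write(data)
            except OSError:
                return len(data)  # stdout gone (Errno 5): swallow and continue

        def flush(self):
            try:
                self._stream.flush()
            except OSError:
                pass

        def __getattr__(self, name):
            return getattr(self._stream, name)  # delegate everything else

    # Installed once at the start of run_conversation():
    if not isinstance(sys.stdout, _SafeWriter):
        sys.stdout = _SafeWriter(sys.stdout)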
Fixes #845
Co-authored-by: J0hnLawMississippi <J0hnLawMississippi@users.noreply.github.com>
Adds full stack traces to error logs in _upscale_image() and
image_generate_tool() for better debugging. Matches the pattern
used across the rest of the codebase.
Cherry-picked from PR #868 by aydnOktay.
Co-authored-by: aydnOktay <aydnOktay@users.noreply.github.com>
- Forum parent channel IDs are now matched against the free-response
list (add a forum channel ID and all of its threads respond without
a mention)
- Better thread chat names: 'Guild / forum / thread' for forum threads
- Add discord.require_mention and discord.free_response_channels to
config.yaml (bridged to env vars, env vars still override)
- Keep require_mention defaulting to true (safe for shared servers)
Cherry-picked from PR #867 by insecurejezza with default fix and
config.yaml integration.
Co-authored-by: insecurejezza <insecurejezza@users.noreply.github.com>
- Log command, return code, and stderr on non-zero exit
- Add exc_info=True to timeout, FileNotFoundError, and catch-all handlers
- Add debug field to restore() error responses with raw git output
- Keeps user-facing error messages clean while preserving detail for debugging
Inspired by PR #843 (aydnOktay).
Replace two inline copies of the env/config model resolution pattern
(in _run_agent_sync and _run_agent) with the _resolve_gateway_model()
helper introduced in PR #830.
Left untouched:
- Session hygiene block: different default (sonnet vs opus) + reads
compression config from the same YAML load
- /model command: also reads provider from same config block
Previously, the early return for an unconfigured vision model was
silent. It now logs an error so the failure is visible in logs for
debugging.
Inspired by PR #839 by aydnOktay.
Co-authored-by: aydnOktay <aydnOktay@users.noreply.github.com>
save_env_value() used bare open('w') which truncates .env immediately.
A crash or OOM kill between truncation and completed write silently
wipes every credential in the file.
Write now goes to a temp file first, then os.replace() swaps it
atomically. Either the old .env exists or the new one does — never
a truncated half-write. Same pattern used in cron/jobs.py.
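A minimal sketch of the atomic pattern, assuming this simplified shape
for save_env_value()'s write path (permission handling via the
_secure_file refactor is elided):

    # Sketch only: real code also restores file permissions.
    import os
    import tempfile

    def _atomic_write(path: str, contents: str) -> None:
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
        try:
            with os.fdopen(fd, "w") as f:
                f.write(contents)
                f.flush()
                os.fsync(f.fileno())
            os.replace(tmp, path)  # atomic swap: old file or new, never half
        except BaseException:
            os.unlink(tmp)
            raise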
Cherry-picked from PR #842 by alireza78a, rebased onto current main
with conflict resolution (_secure_file refactor).
Co-authored-by: alireza78a <alireza78a@users.noreply.github.com>
Memory flush, /compress, and session hygiene create AIAgent without
model=, falling back to the hardcoded default "anthropic/claude-opus-4.6".
This fails with a 400 error when the active provider is openai-codex
(Codex only accepts its own model names like gpt-5.1-codex-mini).
Add _resolve_gateway_model() that mirrors the env/config resolution
already used by _run_agent_sync, and wire it into all three temporary
agent creation sites.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Tag duckduckgo-search skill with fallback_for_toolsets: [web] so it
auto-hides when Firecrawl is available and auto-shows when it isn't
- Add 'Conditional Activation' section to CONTRIBUTING.md with full
spec, semantics, and examples for all 4 frontmatter fields
- Add 'Conditional Activation (Fallback Skills)' section to the user-
facing skills docs with field reference table and practical example
- Update SKILL.md format examples in both docs to show the new fields
Follow-up to PR #785 (conditional skill activation feature).
Three new tests for the naive timestamp fix (PR #807):
- test_ensure_aware_naive_preserves_absolute_time: verifies UTC equivalent
is preserved when interpreting naive datetimes as system-local time
- test_ensure_aware_normalizes_aware_to_hermes_tz: verifies already-aware
datetimes are normalized to Hermes tz without shifting the instant
- test_ensure_aware_due_job_not_skipped_when_system_ahead: end-to-end
regression test for the original bug scenario
Legacy cron job rows may store next_run_at without timezone info.
_ensure_aware() previously stamped the Hermes-configured tz directly
via replace(tzinfo=...), which shifts absolute time when system-local
tz differs from Hermes tz — causing overdue jobs to appear not due.
Now: naive datetimes are interpreted as system-local wall time first,
then converted to Hermes tz. Aware datetimes are normalized to Hermes
tz for consistency.
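A minimal sketch of the corrected conversion (the helper name comes
from this log; the tz plumbing around it is assumed):

    # Sketch only: how hermes_tz is obtained is elided.
    from datetime import datetime, tzinfo

    def _ensure_aware(dt: datetime, hermes_tz: tzinfo) -> datetime:
        if dt.tzinfo is None:
            # Old bug: dt.replace(tzinfo=hermes_tz) relabeled the wall
            # time, shifting the absolute instant. Interpret naive values
            # as system-local wall time first:
            dt = dt.astimezone()  # naive -> aware via the system-local tz
        return dt.astimezone(hermes_tz)  # normalize; instant is preserved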
Cherry-picked from PR #807, rebased onto current main.
Fixes #806
Co-authored-by: 0xNyk <0xNyk@users.noreply.github.com>
MiniMax APIs (global and China) don't support /v1/models, causing
hermes doctor to always show HTTP 404 even with valid API keys.
Skip the HTTP check for these providers and show '(key configured)'
when the API key is present.
Cherry-picked from PR #822 by Bartok9, rebased onto current main.
Fixes#811
Co-authored-by: Bartok9 <259807879+Bartok9@users.noreply.github.com>
_discover_one() caught all exceptions and returned [], making
asyncio.gather(return_exceptions=True) redundant. The
isinstance(result, Exception) branch in _discover_all() was dead
code, so failed_count was always 0. This caused:
- No summary printed when all servers fail (silent failure)
- ok_servers always equaling total_servers (misleading count)
- Unused variables transport_desc and transport_type
Fix: let exceptions propagate to gather() so failed_count increments
correctly. Move per-server failure logging to _discover_all(). Remove
dead variables.
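A minimal sketch of the corrected flow (function names come from this
log; the server/tool shapes are assumptions):

    # Sketch only: discovery details elided.
    import asyncio

    async def _discover_one(server):
        # No blanket try/except anymore: real failures must reach
        # gather() so failed_count can actually increment.
        return await server.list_tools()  # hypothetical server API

    async def _discover_all(servers):
        results = await asyncio.gather(
            *(_discover_one(s) for s in servers), return_exceptions=True
        )
        failures = [r for r in results if isinstance(r, Exception)]
        for exc in failures:  # per-server failure logging lives here now
            print(f"MCP discovery failed: {exc}")
        print(f"{len(results) - len(failures)}/{len(results)} servers ok")
        return [r for r in results if not isinstance(r, Exception)]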
Two fixes for context overflow handling:
1. Proactive compression after tool execution: The compression check now
estimates the next prompt size using real token counts from the last API
response (prompt_tokens + completion_tokens) plus a conservative estimate
of newly appended tool results (chars // 3 for JSON-heavy content).
Previously, should_compress() only checked last_prompt_tokens, which
didn't account for tool results: a 130k prompt plus 100k chars of tool
output would pass the 140k threshold check but exceed the 200k API
limit (sketch below).
2. Safety net: Added 'prompt is too long' to context-length error detection
phrases. Anthropic returns 'prompt is too long: N tokens > M maximum'
on HTTP 400, which wasn't matched by existing phrases. This ensures
compression fires even if the proactive check underestimates.
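A minimal sketch of the estimate from fix 1 (threshold and divisor come
from this log; the usage object and names are assumptions):

    # Sketch only: the real threshold comes from config.
    COMPRESS_THRESHOLD = 140_000  # tokens

    def should_compress(last_usage, new_tool_chars: int) -> bool:
        # Real token counts from the last API response...
        base = last_usage.prompt_tokens + last_usage.completion_tokens
        # ...plus a deliberately high estimate for appended tool results
        # (chars // 3 overestimates for JSON-heavy content, which is the
        # safe direction here).
        return base + new_tool_chars // 3 > COMPRESS_THRESHOLD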
Fixes #813
The old flow blindly asked for an OpenRouter API key after ANY non-OR
provider selection, even for Nous Portal and Codex which already
support vision natively. This was confusing and annoying.
New behavior:
- OpenRouter: skip — vision uses Gemini via their OR key
- Nous Portal OAuth: skip — vision uses Gemini via Nous
- OpenAI Codex: skip — gpt-5.3-codex supports vision
- Custom endpoint (api.openai.com): shows an OpenAI vision model picker
(gpt-4o, gpt-4o-mini, gpt-4.1, etc.) and saves AUXILIARY_VISION_MODEL
- Custom (other) / z.ai / kimi / minimax / nous-api:
  - First checks whether existing OR/Nous creds already cover vision
  - If not, offers a friendly choice: OpenRouter / OpenAI / Skip
  - No more 'enter OpenRouter key' thrown in your face
Also fixes the setup summary to check actual vision availability
across all providers instead of hardcoding 'requires OPENROUTER_API_KEY'.
MoA still correctly requires OpenRouter (calls multiple frontier models).
Show users the specific commands for each config area (hermes model,
hermes tools, hermes config set, hermes gateway setup) and then
present 'hermes setup' as the option to configure everything at once.
The installer already handles full setup (provider config, etc.), so
telling users to run 'hermes setup' post-install is redundant and
confusing. Updated all docs to reflect the correct flow:
1. Run the installer (handles everything including provider setup)
2. Use 'hermes model', 'hermes tools', 'hermes gateway setup' to
reconfigure individual settings later
Files updated:
- README.md: removed setup from quick install & getting started
- installation.md: updated post-install, manual step 9, troubleshooting
- quickstart.md: updated provider section & quick reference table
- cli-commands.md: updated hermes setup description
- faq.md: replaced hermes setup references with specific commands
- max_retries reduced from 6 to 3 — 6 retries with exponential backoff
could stall for ~275s total on persistent errors
- ValueError and TypeError now detected as non-retryable client errors
and abort immediately instead of being retried with backoff (these are
local validation/programming errors that will never succeed on retry)
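A minimal sketch of the classification (retry count and error types
come from this log; the loop shape is an assumption):

    # Sketch only: the real loop uses the project's backoff helper.
    import time

    MAX_RETRIES = 3  # was 6, which could stall ~275s on persistent errors

    def call_with_retry(fn):
        for attempt in range(MAX_RETRIES):
            try:
                return fn()
            except (ValueError, TypeError):
                # Local validation/programming errors: no retry can fix
                # these, so abort immediately.
                raise
            except Exception:
                if attempt == MAX_RETRIES - 1:
                    raise
                time.sleep(2 ** attempt)  # exponential backoff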
_preflight_codex_api_kwargs rejected these three fields as unsupported,
but _build_api_kwargs adds them to every codex request. This caused a
ValueError before _interruptible_api_call was reached, which was caught
by the retry loop and retried with exponential backoff — appearing as
an infinite hang in tests (275s total backoff across 6 retries).
The fix adds these keys to allowed_keys and passes them through to the
normalized request dict.
This fixes the hanging test_cron_run_job_codex_path_handles_internal_401_refresh
test (now passes in 2.6s instead of timing out).
- test_agent_loop_tool_calling.py: import atroposlib at module level
to trigger skip (environments.agent_loop is now importable without
atroposlib due to __init__.py graceful fallback)
- test_modal_sandbox_fixes.py: skip TestToolResolution tests when
minisweagent not installed
- environments/__init__.py: try/except on atroposlib imports so
submodules like tool_call_parsers remain importable standalone
- test_agent_loop.py, test_tool_call_parsers.py,
test_managed_server_tool_support.py: skip at module level when
atroposlib is missing
Guard all test files that import from environments/ or atroposlib
with try/except + pytest.skip(allow_module_level=True) so they
gracefully skip instead of crashing when deps aren't available.
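A minimal sketch of the guard, placed at the top of each affected test
module (atroposlib is named in this log; which imports sit inside the
try block varies per file):

    # Sketch only: the guarded imports vary per test file.
    import pytest

    try:
        import atroposlib  # noqa: F401
    except ImportError:
        pytest.skip("atroposlib not installed", allow_module_level=True)

    # Regular test imports follow only after the guard.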