fix: stop rejecting unlisted models, accept with warning instead

* fix: use session_key instead of chat_id for adapter interrupt lookups

monitor_for_interrupt() in _run_agent was using source.chat_id to query
the adapter's has_pending_interrupt() and get_pending_message() methods.
But the adapter stores interrupt events under build_session_key(source),
which produces a different string (e.g. 'agent:main:telegram:dm' vs '123456').

This key mismatch meant the interrupt was never detected through the
adapter path, which is the only active interrupt path for all adapter-based
platforms (Telegram, Discord, Slack, etc.). The gateway-level interrupt
path (in dispatch_message) is unreachable because the adapter intercepts
the 2nd message in handle_message() before it reaches dispatch_message().

Result: sending a new message while subagents were running had no effect —
the interrupt was silently lost.

Fix: replace all source.chat_id references in the interrupt-related code
within _run_agent() with the session_key parameter, which matches the
adapter's storage keys.

Also adds regression tests verifying session_key vs chat_id consistency.
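
A minimal sketch of the corrected lookup (the adapter methods and
build_session_key come from the description above; the surrounding
structure is illustrative, not the actual _run_agent code):

    # Before: lookups were keyed by source.chat_id ('123456'), but the
    # adapter stores interrupts under build_session_key(source)
    # ('agent:main:telegram:dm'), so nothing was ever found.
    async def monitor_for_interrupt(adapter, source, session_key):
        # session_key is the same string the adapter used when it stored
        # the pending interrupt, so the lookup now actually matches.
        if adapter.has_pending_interrupt(session_key):  # was: source.chat_id
            return adapter.get_pending_message(session_key)  # was: source.chat_id
        return None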

* debug: add file-based logging to CLI interrupt path

Temporary instrumentation to diagnose why message-based interrupts
don't seem to work during subagent execution. Logs to
~/.hermes/interrupt_debug.log (immune to redirect_stdout).

Two log points:
1. When Enter handler puts message into _interrupt_queue
2. When chat() reads it and calls agent.interrupt()

This will reveal whether the message reaches the queue and
whether the interrupt is actually fired.
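
For reference, a self-contained sketch of this kind of instrumentation
(the log path is the one named above; the helper name and the commented
call sites are illustrative, not the exact code):

    import datetime
    import os

    DEBUG_LOG = os.path.expanduser("~/.hermes/interrupt_debug.log")

    def debug_log(msg: str) -> None:
        # Appends straight to a file, so contextlib.redirect_stdout (active
        # while subagents run) cannot swallow the output the way print() would.
        os.makedirs(os.path.dirname(DEBUG_LOG), exist_ok=True)
        with open(DEBUG_LOG, "a") as fh:
            fh.write(f"{datetime.datetime.now().isoformat()} {msg}\n")

    # Log point 1 sits in the Enter handler, roughly:
    #     debug_log(f"enqueued interrupt message: {text!r}")
    # Log point 2 sits where chat() drains _interrupt_queue, roughly:
    #     debug_log("dequeued message; calling agent.interrupt()")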

* fix: accept unlisted models with warning instead of rejecting

validate_requested_model() previously hard-rejected any model not found
in the provider's API listing. This was too aggressive — users on higher
plan tiers (e.g. Z.AI Pro/Max) may have access to models not shown in
the public listing (like glm-5 on coding endpoints).

Changes:
- validate_requested_model: accept unlisted models with a warning note
  instead of blocking. The model is saved to config and used immediately.
- Z.AI setup: always offer glm-5 in the model list regardless of whether
  a coding endpoint was detected. Pro/Max plans support it.
- Z.AI setup detection message: softened from 'GLM-5 is not available'
  to 'GLM-5 may still be available depending on your plan tier'.
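
A hedged sketch of the new accept-with-warning behavior (the function
name is from the change list above; the parameters and the plain print
call are assumptions, not the real signature or warning helper):

    def validate_requested_model(requested: str, listed_models: list[str]) -> str:
        if requested not in listed_models:
            # Previously this was a hard rejection; now we warn and proceed,
            # letting the provider's API be the final arbiter at request time.
            print(f"Note: '{requested}' is not in the provider's model listing; "
                  f"it may still be available on higher plan tiers. Using it anyway.")
        return requested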

Author: Teknium
Date: 2026-03-12 16:02:35 -07:00 (committed by GitHub)
Commit: 4a8cd6f856 (parent: def7b84a12)
4 changed files with 48 additions and 62 deletions


@@ -889,7 +889,8 @@ def setup_model_provider(config: dict):
             print_info(f" URL: {detected['base_url']}")
             if detected["id"].startswith("coding"):
                 print_info(
-                    f" Note: Coding Plan detected — GLM-5 is not available, using {detected['model']}"
+                    f" Note: Coding Plan endpoint detected (default model: {detected['model']}). "
+                    f"GLM-5 may still be available depending on your plan tier."
                 )
             save_env_value("GLM_BASE_URL", zai_base_url)
         else:
@@ -1107,14 +1108,11 @@ def setup_model_provider(config: dict):
         _update_config_for_provider("openai-codex", DEFAULT_CODEX_BASE_URL)
         _set_model_provider(config, "openai-codex", DEFAULT_CODEX_BASE_URL)
     elif selected_provider == "zai":
-        # Coding Plan endpoints don't have GLM-5
-        is_coding_plan = get_env_value("GLM_BASE_URL") and "coding" in (
-            get_env_value("GLM_BASE_URL") or ""
-        )
-        if is_coding_plan:
-            zai_models = ["glm-4.7", "glm-4.5", "glm-4.5-flash"]
-        else:
-            zai_models = ["glm-5", "glm-4.7", "glm-4.5", "glm-4.5-flash"]
+        # Always offer all models — Pro/Max plans support GLM-5 even
+        # on coding endpoints. If the user's plan doesn't support a
+        # model, the API will return an error at runtime (not our job
+        # to gatekeep).
+        zai_models = ["glm-5", "glm-4.7", "glm-4.5", "glm-4.5-flash"]
         model_choices = list(zai_models)
         model_choices.append("Custom model")
         model_choices.append(f"Keep current ({current_model})")