PR #4918 fixed the double-`/v1` bug at fresh agent init by stripping the trailing `/v1` from OpenCode base URLs when `api_mode` is `anthropic_messages`, so the Anthropic SDK's own `/v1/messages` path doesn't land on `/v1/v1/messages`. The same logic was missing from the `/model` mid-session switch path.

Repro: start a session on opencode-go with GLM-5 (or any `chat_completions` model), then run `/model minimax-m2.7`. `switch_model()` correctly sets `api_mode=anthropic_messages` via `opencode_model_api_mode()`, but `base_url` passes through as `https://opencode.ai/zen/go/v1`. The Anthropic SDK then POSTs to `https://opencode.ai/zen/go/v1/v1/messages`, which returns the OpenCode website's HTML 404 page (title "Not Found | opencode"). The same bug affects `/model claude-sonnet-4-6` on opencode-zen.

Verified upstream: `POST /v1/messages` returns a clean JSON 401 with `x-api-key` auth (the route works), while `POST /v1/v1/messages` returns the exact HTML 404 users reported.

Fix mirrors `runtime_provider.resolve_runtime_provider`:
- `hermes_cli/model_switch.py::switch_model()` strips `/v1` after the OpenCode `api_mode` override when the resolved mode is `anthropic_messages`.
- `run_agent.py::AIAgent.switch_model()` applies the same strip as defense-in-depth, so any direct caller can't reintroduce the double-`/v1`.

Tests: 9 new regression tests in `tests/hermes_cli/test_model_switch_opencode_anthropic.py` covering minimax on opencode-go, claude on opencode-zen, `chat_completions` models (GLM/Kimi/Gemini) keeping `/v1` intact, `codex_responses` (GPT) keeping `/v1` intact, trailing-slash handling, and the agent-level defense-in-depth.
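A minimal sketch of the strip logic described above. The helper name `strip_opencode_v1` is hypothetical (the real code lives inline in `switch_model()`); it captures the three cases the regression tests exercise: `anthropic_messages` drops the trailing `/v1` (including trailing-slash variants), while `chat_completions` and `codex_responses` keep `/v1` intact.

```python
def strip_opencode_v1(base_url: str, api_mode: str) -> str:
    """Drop a trailing /v1 for anthropic_messages so the Anthropic SDK's
    own /v1/messages suffix doesn't produce a double-/v1 path.

    Hypothetical helper name; illustrates the behavior, not the exact code.
    """
    if api_mode != "anthropic_messages":
        # chat_completions / codex_responses endpoints expect the /v1 prefix.
        return base_url
    trimmed = base_url.rstrip("/")  # normalize ".../v1/" to ".../v1"
    if trimmed.endswith("/v1"):
        return trimmed[: -len("/v1")]
    return base_url
```

With this in place, `https://opencode.ai/zen/go/v1` resolves to `https://opencode.ai/zen/go`, and the SDK's appended `/v1/messages` hits the working route instead of the HTML 404.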