refactor: remove redundant 'openai' auxiliary provider, clean up docs

The 'openai' provider was redundant: setting OPENAI_BASE_URL +
OPENAI_API_KEY with provider: 'main' already covers the direct OpenAI API.

Provider options are now: auto, openrouter, nous, codex, main.
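For illustration, the 'main' replacement for the removed provider can be
sketched as follows. The env var names come from this commit message; the
key value is a placeholder and the config placement is an assumption:

```shell
# Sketch: replicating the removed 'openai' provider via 'main'.
export OPENAI_BASE_URL="https://api.openai.com/v1"   # direct OpenAI API
export OPENAI_API_KEY="sk-your-key-here"             # placeholder key
# Then point the auxiliary provider at the main endpoint's credentials,
# e.g. in the config (assumed shape):
#   provider: "main"
echo "aux provider will use: $OPENAI_BASE_URL"
```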

- Removed _try_openai(), _OPENAI_AUX_MODEL, _OPENAI_BASE_URL
- Replaced openai tests with codex provider tests
- Updated all docs to remove 'openai' option and clarify 'main'
- 'main' description now explicitly mentions it works with OpenAI API,
  local models, and any OpenAI-compatible endpoint

Tests: 2467 passed.
Author: teknium1
Date:   2026-03-08 18:50:26 -07:00
Parent: 71e81728ac
Commit: 2d1a1c1c47

5 changed files with 34 additions and 59 deletions


@@ -241,14 +241,11 @@ compression:
 # "auto" - Best available: OpenRouter → Nous Portal → main endpoint (default)
 # "openrouter" - Force OpenRouter (requires OPENROUTER_API_KEY)
 # "nous" - Force Nous Portal (requires: hermes login)
-# "openai" - Force OpenAI direct API (requires OPENAI_API_KEY).
-# Uses api.openai.com/v1 with models like gpt-4o, gpt-4o-mini.
-# Great for vision since GPT-4o supports image input.
-# "main" - Use the same provider & credentials as your main chat model.
-# Skips OpenRouter/Nous and uses your custom endpoint
-# (OPENAI_BASE_URL), Codex OAuth, or API-key provider directly.
-# Useful if you run a local model and want auxiliary tasks to
-# use it too.
+# "codex" - Force Codex OAuth (requires: hermes model → Codex).
+# Uses gpt-5.3-codex which supports vision.
+# "main" - Use your custom endpoint (OPENAI_BASE_URL + OPENAI_API_KEY).
+# Works with OpenAI API, local models, or any OpenAI-compatible
+# endpoint. Also falls back to Codex OAuth and API-key providers.
 #
 # Model: leave empty to use the provider's default. When empty, OpenRouter
 # uses "google/gemini-3-flash-preview" and Nous uses "gemini-3-flash".