[claude] feat: add vLLM as alternative inference backend (#1281) #1300

Closed
claude wants to merge 1 commit from claude/issue-1281 into main
Collaborator

Fixes #1281

Summary

  • Adds vllm as a selectable backend in timmy_model_backend (alongside ollama, grok, claude, auto)
  • New VllmBackend support in infrastructure/router/cascade.py: _check_provider_available hits /health, _call_vllm uses OpenAI-compatible /v1/chat/completions
  • config.py: VLLM_URL (default http://localhost:8001) and VLLM_MODEL settings
  • config/providers.yaml: disabled-by-default vllm-local provider at priority 3; cloud providers bumped to 4/5
  • dashboard/routes/health.py: _check_vllm with 30-second TTL cache; /health and /health/sovereignty report vLLM status when it is the active backend
  • docker-compose.yml: vllm service behind --profile vllm; GPU passthrough template included (commented)
  • CLAUDE.md: vLLM added to Service Fallback Matrix
  • 26 new unit tests; tox -e unit passes (520 tests)
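A minimal sketch of what the OpenAI-compatible call in `_call_vllm` might look like. This is illustrative only: the `post` hook is a hypothetical injection point for testing, and the default model name is a placeholder; only the URL default and the `/v1/chat/completions` path come from the PR description.

```python
import json
import urllib.request

VLLM_URL = "http://localhost:8001"  # default from config.py per the PR
VLLM_MODEL = "placeholder-model"    # hypothetical; real value comes from VLLM_MODEL


def call_vllm(prompt: str, post=None) -> str:
    """Send a chat completion to vLLM's OpenAI-compatible endpoint.

    `post` is an injectable (url, payload) -> dict hook so the function
    can be exercised without a running vLLM server.
    """
    payload = {
        "model": VLLM_MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    if post is None:
        def post(url, body):
            req = urllib.request.Request(
                url,
                data=json.dumps(body).encode(),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req, timeout=30) as resp:
                return json.load(resp)
    data = post(f"{VLLM_URL}/v1/chat/completions", payload)
    # OpenAI-compatible response shape: choices[0].message.content
    return data["choices"][0]["message"]["content"]
```

Because the transport is injected, a unit test can stub `post` with a canned OpenAI-style response and never touch the network.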

Graceful fallback: if vLLM is unreachable the cascade router automatically falls back to Ollama — no crash, logs at DEBUG level.
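The fallback behaviour could be sketched roughly like this (the function names and signatures here are assumptions based on the summary, not the actual `cascade.py` code):

```python
import logging

logger = logging.getLogger("cascade")


def check_provider_available(probe) -> bool:
    """Return True if the provider's /health probe answers 200."""
    try:
        return probe() == 200
    except OSError:
        return False


def route(prompt: str, call_vllm, call_ollama, vllm_probe) -> str:
    """Try vLLM first; on any failure, log at DEBUG and fall back to Ollama."""
    if check_provider_available(vllm_probe):
        try:
            return call_vllm(prompt)
        except OSError:
            logger.debug("vLLM call failed, falling back to Ollama")
    else:
        logger.debug("vLLM unreachable, falling back to Ollama")
    return call_ollama(prompt)
```

Catching the failure and logging at DEBUG (rather than raising) is what keeps the app from crashing when the local vLLM service is down.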

claude added 1 commit 2026-03-24 01:53:18 +00:00
feat: add vLLM as alternative inference backend (#1281)
Some checks failed
Tests / lint (pull_request) Failing after 31s
Tests / test (pull_request) Has been skipped
28d1905df4
Adds vLLM (high-throughput OpenAI-compatible inference server) as a
selectable backend alongside the existing Ollama and vllm-mlx backends.
vLLM's continuous batching gives 3-10x throughput for agentic workloads.

Changes:
- config.py: add `vllm` to timmy_model_backend Literal; add vllm_url /
  vllm_model settings (VLLM_URL / VLLM_MODEL env vars)
- cascade.py: add vllm provider type with _check_provider_available
  (hits /health) and _call_vllm (OpenAI-compatible completions)
- providers.yaml: add disabled-by-default vllm-local provider (priority 3,
  port 8001); bump OpenAI/Anthropic backup priorities to 4/5
- health.py: add _check_vllm/_check_vllm_sync with 30-second TTL cache;
  /health and /health/sovereignty reflect vLLM status when it is the
  active backend
- docker-compose.yml: add vllm service behind 'vllm' profile (GPU
  passthrough commented-out template included); add vllm-cache volume
- CLAUDE.md: add vLLM row to Service Fallback Matrix
- tests: 26 new unit tests covering availability checks, _call_vllm,
  providers.yaml validation, config options, and health helpers
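The 30-second TTL cache in `_check_vllm` could work along these lines (a sketch under assumed names; the actual cache shape in `health.py` may differ — only the 30-second TTL comes from the PR):

```python
import time

_TTL = 30.0  # seconds, per the PR description
_cache = {"at": 0.0, "value": None}


def check_vllm(probe, now=time.monotonic) -> bool:
    """Return cached vLLM health status, re-probing at most every 30 s.

    `probe` is a zero-arg callable returning the /health status code;
    `now` is injectable so tests can control the clock.
    """
    t = now()
    if _cache["value"] is None or t - _cache["at"] >= _TTL:
        try:
            _cache["value"] = probe() == 200
        except OSError:
            _cache["value"] = False
        _cache["at"] = t
    return _cache["value"]
```

Caching like this keeps `/health` and `/health/sovereignty` cheap to poll: repeated dashboard requests within the TTL window hit the cache instead of re-probing the vLLM server.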

Graceful fallback: if vLLM is unavailable the cascade router automatically
falls back to Ollama. The app never crashes.
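The priority layout described above might look roughly like this in `providers.yaml` (field names and the Ollama entry are illustrative; only the `vllm-local` name, port 8001, priority 3, the disabled default, and the 4/5 backup priorities come from this PR):

```yaml
providers:
  - name: ollama
    priority: 1             # local-first; assumed placement
  - name: vllm-local
    enabled: false          # disabled by default
    priority: 3
    url: http://localhost:8001
  - name: openai-backup
    priority: 4             # bumped to 4 by this PR
  - name: anthropic-backup
    priority: 5             # bumped to 5 by this PR
```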

Fixes #1281

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Timmy closed this pull request 2026-03-24 01:58:45 +00:00
Owner

Closing: issue #1281 is already closed and this PR has unresolvable merge conflicts. Will re-implement if needed.


Pull request closed

Reference: Rockachopa/Timmy-time-dashboard#1300