Adds vLLM (high-throughput OpenAI-compatible inference server) as a
selectable backend alongside the existing Ollama and vllm-mlx backends.
vLLM's continuous batching yields roughly 3-10x higher throughput on agentic workloads.
Changes:
- config.py: add `vllm` to timmy_model_backend Literal; add vllm_url /
vllm_model settings (VLLM_URL / VLLM_MODEL env vars)
- cascade.py: add vllm provider type with _check_provider_available
(hits /health) and _call_vllm (OpenAI-compatible completions)
- providers.yaml: add disabled-by-default vllm-local provider (priority 3,
port 8001); bump OpenAI/Anthropic backup priorities to 4/5
- health.py: add _check_vllm/_check_vllm_sync with 30-second TTL cache;
/health and /health/sovereignty reflect vLLM status when it is the
active backend
- docker-compose.yml: add vllm service behind 'vllm' profile (GPU
passthrough commented-out template included); add vllm-cache volume
- CLAUDE.md: add vLLM row to Service Fallback Matrix
- tests: 26 new unit tests covering availability checks, _call_vllm,
providers.yaml validation, config options, and health helpers
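The providers.yaml entry might look like the fragment below; the key names are hypothetical and depend on the project's schema, while the values (disabled by default, priority 3, port 8001) come from the change description:

```yaml
# Hypothetical entry for providers.yaml
- name: vllm-local
  type: vllm
  enabled: false          # disabled by default
  priority: 3             # OpenAI/Anthropic backups move to 4/5
  base_url: http://localhost:8001
```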
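The docker-compose.yml service could be shaped like this; the image tag and mount paths are illustrative, with the profile name, host port, volume name, and commented-out GPU template taken from the description:

```yaml
# Hypothetical compose fragment
services:
  vllm:
    image: vllm/vllm-openai:latest
    profiles: ["vllm"]
    ports:
      - "8001:8000"
    volumes:
      - vllm-cache:/root/.cache/huggingface
    # GPU passthrough template (left commented out):
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: all
    #           capabilities: [gpu]
volumes:
  vllm-cache:
```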
Graceful fallback: if vLLM is unavailable, the cascade router automatically
falls back to Ollama, so requests keep being served instead of erroring out.
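The fallback behavior can be sketched as follows; the `Provider` class and `route` function are hypothetical stand-ins for the cascade router, not the project's actual API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Provider:
    name: str
    priority: int
    available: Callable[[], bool]   # health check, e.g. hits /health
    call: Callable[[str], str]      # sends the completion request

def route(prompt: str, providers: list[Provider]) -> str:
    """Try providers in priority order, skipping any that are unhealthy.

    If vLLM's health check fails, the next provider (Ollama) serves
    the request instead of the request failing.
    """
    for p in sorted(providers, key=lambda p: p.priority):
        if not p.available():
            continue
        try:
            return p.call(prompt)
        except Exception:
            continue  # fall through to the next provider
    raise RuntimeError("no provider available")
```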
Fixes #1281
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>