Features:
- YAML-based provider configuration (config/providers.yaml)
- Priority-ordered provider routing
- Circuit breaker pattern for failing providers
- Health check and availability monitoring
- Metrics tracking (latency, errors, success rates)
- Support for Ollama, OpenAI, Anthropic, AirLLM providers
- Automatic failover on rate limits or errors
- REST API endpoints for monitoring and control
- 41 comprehensive tests
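The priority-ordered routing and circuit breaker features above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the router's actual implementation: the class and function names, thresholds, and provider stubs are all assumptions.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: opens after `threshold` consecutive
    failures and stays open for `cooldown` seconds (illustrative values)."""
    def __init__(self, threshold=3, cooldown=30.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None

    def available(self):
        if self.opened_at is None:
            return True
        # Half-open: allow a retry once the cooldown has elapsed.
        return time.monotonic() - self.opened_at >= self.cooldown

    def record_success(self):
        self.failures = 0
        self.opened_at = None

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.threshold:
            self.opened_at = time.monotonic()

def complete(providers, prompt):
    """Try providers in priority order, skipping open circuits."""
    for name, call, breaker in providers:
        if not breaker.available():
            continue
        try:
            result = call(prompt)
            breaker.record_success()
            return name, result
        except Exception:
            breaker.record_failure()
    raise RuntimeError("all providers unavailable")

# Demo with stub providers: the first always fails (simulating a rate
# limit), the second succeeds, so the router fails over automatically.
def flaky(prompt):
    raise TimeoutError("rate limited")

def stable(prompt):
    return f"echo: {prompt}"

providers = [
    ("ollama", flaky, CircuitBreaker(threshold=1)),
    ("openai", stable, CircuitBreaker()),
]
print(complete(providers, "hi"))  # → ('openai', 'echo: hi')
```

After the first failure, the `ollama` stub's breaker opens and subsequent calls skip it until the cooldown elapses, which is the behavior the automatic-failover bullet describes.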
API Endpoints:
- POST /api/v1/router/complete - Chat completion with failover
- GET /api/v1/router/status - Provider health status
- GET /api/v1/router/metrics - Detailed metrics
- GET /api/v1/router/providers - List all providers
- POST /api/v1/router/providers/{name}/control - Enable/disable/reset
- POST /api/v1/router/health-check - Run health checks
- GET /api/v1/router/config - View configuration
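The completion endpoint presumably accepts a chat-style JSON body. The field names below (`model`, `messages`) follow the common OpenAI-style schema and are an assumption, as are the host and port; the endpoint list above does not specify the payload shape.

```python
import json
from urllib import request

# Hypothetical request body -- field names are assumed, not confirmed.
payload = {
    "model": "llama3",
    "messages": [{"role": "user", "content": "Hello"}],
}

req = request.Request(
    "http://localhost:8000/api/v1/router/complete",  # assumed base URL
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# request.urlopen(req) would send it; it is left out here so the
# sketch runs without a live server.
print(json.dumps(payload))
```

On a rate limit or error from the first provider, the router is described as retrying the same request against the next provider in priority order, so the client sees a single response either way.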
Self-Modify Report: 20260225_232403
- Instruction: Fix foo
- Target files: src/foo.py
- Dry run: False
- Backend: ollama
- Branch: N/A
- Result: SUCCESS
- Error: none
- Commit: abc123
- Attempts: 2
- Autonomous cycles: 0
Attempt 1 -- syntax_validation
Error: src/foo.py: line 1: '(' was never closed

LLM Response:
bad llm

Edits Written (src/foo.py):

```python
def foo(
```
Attempt 2 -- complete

LLM Response:
good llm

Edits Written (src/foo.py):

```python
def foo():
    pass
```
Test Result: PASSED (test output: `passed`)