Timmy-time-dashboard/.env.example
commit 19af4ae540 (Claude)
feat: integrate AirLLM as optional high-performance backend
Adds the `bigbrain` optional dependency group (airllm>=2.9.0) and a
complete second inference path that runs 8B / 70B / 405B Llama models
locally via layer-by-layer loading — no GPU required, no cloud, fully
sovereign.

Key changes:
- src/timmy/backends.py   — TimmyAirLLMAgent (same print_response interface
                            as Agno Agent); auto-selects AirLLMMLX on Apple
                            Silicon, AutoModel (PyTorch) everywhere else
- src/timmy/agent.py      — _resolve_backend() routing with explicit override,
                            env-config, and 'auto' Apple-Silicon detection
                            (see the sketch after this list)
- src/timmy/cli.py        — --backend / --model-size flags on all commands
- src/config.py           — timmy_model_backend + airllm_model_size settings
- src/timmy/prompts.py    — mentions AirLLM "even bigger brains, still fully
                            sovereign"
- pyproject.toml          — bigbrain optional dep; updated wheel includes
- .env.example            — TIMMY_MODEL_BACKEND + AIRLLM_MODEL_SIZE docs
- tests/conftest.py       — stubs 'airllm' module so tests run without GPU
                            (stub sketch below)
- tests/test_backends.py  — 13 new tests covering helpers + TimmyAirLLMAgent
- tests/test_agent.py     — 7 new tests for backend routing
- README.md               — Big Brain section with one-line install
- activate_self_tdd.sh    — bootstrap script (venv + install + tests +
                            watchdog + dashboard); --big-brain flag
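
For illustration, a minimal sketch of how the auto-detection and routing
described above could fit together. All helper names here are hypothetical,
and the airllm import paths are assumptions based on the class names this
commit mentions:

```python
# Illustrative sketch only -- not the actual implementation.
import os
import platform


def _is_apple_silicon() -> bool:
    # M-series Macs report Darwin on arm64.
    return platform.system() == "Darwin" and platform.machine() == "arm64"


def _airllm_model_class():
    # Per the commit: MLX-backed loader on Apple Silicon, PyTorch AutoModel
    # everywhere else. Import paths are assumed, not verified.
    if _is_apple_silicon():
        from airllm import AirLLMMLX
        return AirLLMMLX
    from airllm import AutoModel
    return AutoModel


def _resolve_backend(override: str | None = None) -> str:
    # Precedence: explicit override > TIMMY_MODEL_BACKEND env var > default.
    backend = override or os.getenv("TIMMY_MODEL_BACKEND", "ollama")
    if backend == "auto":
        try:
            import airllm  # noqa: F401
        except ImportError:
            return "ollama"
        return "airllm" if _is_apple_silicon() else "ollama"
    return backend
```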

All 61 tests pass. Self-TDD watchdog unaffected.
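
The conftest stub mentioned in the file list might look roughly like the
following; the commit does not show the implementation, so treat this as a
sketch of the general sys.modules-stubbing approach:

```python
# conftest.py sketch (assumed shape) -- fake the 'airllm' module so that
# importing timmy.backends succeeds on machines without the dependency.
import sys
import types


def pytest_configure(config):
    if "airllm" not in sys.modules:
        stub = types.ModuleType("airllm")
        stub.AutoModel = object    # placeholder attributes; the real class
        stub.AirLLMMLX = object    # names come from the commit description
        sys.modules["airllm"] = stub
```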

https://claude.ai/code/session_01DMjQ5qMZ8iHeyix1j3GS7c
2026-02-21 16:53:16 +00:00

# Timmy Time — Mission Control
# Copy this file to .env and uncomment lines you want to override.
# .env is gitignored and never committed.
# Ollama host (default: http://localhost:11434)
# Override if Ollama is running on another machine or port.
# OLLAMA_URL=http://localhost:11434
# LLM model to use via Ollama (default: llama3.2)
# OLLAMA_MODEL=llama3.2
# Enable FastAPI interactive docs at /docs and /redoc (default: false)
# DEBUG=true
# ── AirLLM / big-brain backend ───────────────────────────────────────────────
# Inference backend: "ollama" (default) | "airllm" | "auto"
# "auto" → uses AirLLM on Apple Silicon if installed, otherwise Ollama.
# Requires: pip install ".[bigbrain]"
# TIMMY_MODEL_BACKEND=ollama
# AirLLM model size (default: 70b).
# 8b ~16 GB RAM | 70b ~140 GB RAM | 405b ~810 GB RAM
# AIRLLM_MODEL_SIZE=70b
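
As a rough illustration of how src/config.py could consume these variables:
the commit names the timmy_model_backend and airllm_model_size settings but
not the loading mechanism, so the pydantic-settings usage below is an
assumption.

```python
# Hypothetical settings sketch; field names per the commit, loader assumed.
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env", extra="ignore")

    timmy_model_backend: str = "ollama"  # "ollama" | "airllm" | "auto"
    airllm_model_size: str = "70b"       # "8b" | "70b" | "405b"


settings = Settings()  # TIMMY_MODEL_BACKEND / AIRLLM_MODEL_SIZE override defaults
```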