Model upgrade: - qwen2.5:14b → qwen3.5:latest across config, tools, and docs - Added qwen3.5 to multimodal model registry Self-hosted Gitea CI: - .gitea/workflows/tests.yml: lint + test jobs via act_runner - Unified Dockerfile: pre-baked deps from poetry.lock for fast CI - sitepackages=true in tox for ~2s dep resolution (was ~40s) - OLLAMA_URL set to dead port in CI to prevent real LLM calls Test isolation fixes: - Smoke test fixture mocks create_timmy (was hitting real Ollama) - WebSocket sends initial_state before joining broadcast pool (race fix) - Tests use settings.ollama_model/url instead of hardcoded values - skip_ci marker for Ollama-dependent tests, excluded in CI tox envs Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
101 lines
5.3 KiB
Plaintext
101 lines
5.3 KiB
Plaintext
# Timmy Time — Mission Control
|
|
# Copy this file to .env and uncomment lines you want to override.
|
|
# .env is gitignored and never committed.
|
|
#
|
|
# For cloud deployment, deploy/setup.sh generates this automatically.
|
|
|
|
# ── Cloud / Production ──────────────────────────────────────────────────────
|
|
# Your domain for automatic HTTPS via Let's Encrypt.
|
|
# Set to your actual domain (e.g., timmy.example.com) for HTTPS.
|
|
# Leave as "localhost" for IP-only HTTP access.
|
|
# DOMAIN=localhost
|
|
|
|
# Ollama host (default: http://localhost:11434)
|
|
# In production (docker-compose.prod.yml), this is set to http://ollama:11434 automatically.
|
|
# OLLAMA_URL=http://localhost:11434
|
|
|
|
# LLM model to use via Ollama (default: qwen3.5:latest)
|
|
# OLLAMA_MODEL=qwen3.5:latest
|
|
|
|
# Enable FastAPI interactive docs at /docs and /redoc (default: false)
|
|
# DEBUG=true
|
|
|
|
# ── AirLLM / big-brain backend ───────────────────────────────────────────────
|
|
# Inference backend: "ollama" (default) | "airllm" | "auto"
|
|
# "auto" → uses AirLLM on Apple Silicon if installed, otherwise Ollama.
|
|
# Requires: pip install ".[bigbrain]"
|
|
# TIMMY_MODEL_BACKEND=ollama
|
|
|
|
# AirLLM model size (default: 70b).
|
|
# 8b ~16 GB RAM | 70b ~140 GB RAM | 405b ~810 GB RAM
|
|
# AIRLLM_MODEL_SIZE=70b
|
|
|
|
# ── Grok (xAI) — premium cloud augmentation ──────────────────────────────────
|
|
# Enable Grok as an opt-in premium backend for frontier reasoning.
|
|
# Local-first ethos is preserved — Grok only activates when explicitly enabled.
|
|
# GROK_ENABLED=false
|
|
# XAI_API_KEY=xai-...
|
|
# GROK_DEFAULT_MODEL=grok-3-fast
|
|
# GROK_MAX_SATS_PER_QUERY=200
|
|
# GROK_FREE=false
|
|
|
|
# ── L402 Lightning secrets ───────────────────────────────────────────────────
|
|
# HMAC secret for invoice verification. MUST be changed in production.
|
|
# Generate with: python3 -c "import secrets; print(secrets.token_hex(32))"
|
|
# L402_HMAC_SECRET=<your-secret-here>
|
|
|
|
# HMAC secret for macaroon signing. MUST be changed in production.
|
|
# L402_MACAROON_SECRET=<your-secret-here>
|
|
|
|
# Lightning backend: "mock" (default) | "lnd"
|
|
# LIGHTNING_BACKEND=mock
|
|
|
|
# ── Environment & Privacy ───────────────────────────────────────────────────
|
|
# Environment mode: "development" (default) | "production"
|
|
# In production, security secrets MUST be set or the app will refuse to start.
|
|
# TIMMY_ENV=development
|
|
|
|
# Disable Agno telemetry for sovereign/air-gapped deployments.
|
|
# Default is false (disabled) to align with local-first AI vision.
|
|
# TELEMETRY_ENABLED=false
|
|
|
|
# ── Telegram bot ──────────────────────────────────────────────────────────────
|
|
# Bot token from @BotFather on Telegram.
|
|
# Alternatively, configure via the /telegram/setup dashboard endpoint at runtime.
|
|
# Requires: pip install ".[telegram]"
|
|
# TELEGRAM_TOKEN=
|
|
|
|
# ── Discord bot ──────────────────────────────────────────────────────────────
|
|
# Bot token from https://discord.com/developers/applications
|
|
# Alternatively, configure via the /discord/setup dashboard endpoint at runtime.
|
|
# Requires: pip install ".[discord]"
|
|
# Optional: pip install pyzbar Pillow (for QR code invite detection from screenshots)
|
|
# DISCORD_TOKEN=
|
|
|
|
# ── Autoresearch — autonomous ML experiment loops ────────────────────────────
|
|
# Enable autonomous experiment loops (Karpathy autoresearch pattern).
|
|
# AUTORESEARCH_ENABLED=false
|
|
# AUTORESEARCH_WORKSPACE=data/experiments
|
|
# AUTORESEARCH_TIME_BUDGET=300
|
|
# AUTORESEARCH_MAX_ITERATIONS=100
|
|
# AUTORESEARCH_METRIC=val_bpb
|
|
|
|
# ── Auth Gate (nginx auth_request) ─────────────────────────────────────────
|
|
# Required when running auth-gate.py for nginx auth_request.
|
|
# Generate secret with: python3 -c "import secrets; print(secrets.token_hex(32))"
|
|
# AUTH_GATE_SECRET=<your-secret-here>
|
|
# AUTH_GATE_USER=<your-username>
|
|
# AUTH_GATE_PASS=<your-password>
|
|
|
|
# ── Docker Production ────────────────────────────────────────────────────────
|
|
# When deploying with docker-compose.prod.yml:
|
|
# - Containers run as non-root user "timmy" (defined in Dockerfile)
|
|
# - No source bind mounts — code is baked into the image
|
|
# - Set TIMMY_ENV=production to enforce security checks
|
|
# - All secrets below MUST be set before production deployment
|
|
#
|
|
# Taskosaur secrets (change from dev defaults):
|
|
# TASKOSAUR_JWT_SECRET=<generate with: python3 -c "import secrets; print(secrets.token_hex(32))">
|
|
# TASKOSAUR_JWT_REFRESH_SECRET=<generate with: python3 -c "import secrets; print(secrets.token_hex(32))">
|
|
# TASKOSAUR_ENCRYPTION_KEY=<generate with: python3 -c "import secrets; print(secrets.token_hex(32))">
|