# Timmy Time — Mission Control
# Copy this file to .env and uncomment lines you want to override.
# .env is gitignored and never committed.
#
# For cloud deployment, deploy/setup.sh generates this automatically.
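The copy step described above can be sketched as a shell snippet (a sketch assuming you run it from the repository root; the `[ -f .env ]` guard is an added safety check, not part of the project's tooling):

```shell
# Create a local .env from this template unless one already exists,
# so any overrides you have made are never clobbered.
[ -f .env ] || cp .env.example .env
```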
# ── Cloud / Production ──────────────────────────────────────────────────────
# Your domain for automatic HTTPS via Let's Encrypt.
# Set to your actual domain (e.g., timmy.example.com) for HTTPS.
# Leave as "localhost" for IP-only HTTP access.
# DOMAIN=localhost
# Ollama host (default: http://localhost:11434)
# In production (docker-compose.prod.yml), this is set to http://ollama:11434 automatically.
# OLLAMA_URL=http://localhost:11434
# LLM model to use via Ollama (default: qwen3.5:latest)
# OLLAMA_MODEL=qwen3.5:latest
# Ollama context window size (default: 4096 tokens)
# Set higher for more context, lower to save RAM. 0 = model default.
# qwen3:30b + 4096 ctx ≈ 19GB VRAM; default ctx ≈ 45GB.
# OLLAMA_NUM_CTX=4096
# Enable FastAPI interactive docs at /docs and /redoc (default: false)
# DEBUG=true
# ── AirLLM / big-brain backend ───────────────────────────────────────────────
# Inference backend: "ollama" (default) | "airllm" | "auto"
# "auto" → uses AirLLM on Apple Silicon if installed, otherwise Ollama.
# Requires: pip install ".[bigbrain]"
# TIMMY_MODEL_BACKEND=ollama
# AirLLM model size (default: 70b).
# 8b ~16 GB RAM | 70b ~140 GB RAM | 405b ~810 GB RAM
# AIRLLM_MODEL_SIZE=70b
# ── Grok (xAI) — premium cloud augmentation ──────────────────────────────────
# Enable Grok as an opt-in premium backend for frontier reasoning.
# Local-first ethos is preserved — Grok only activates when explicitly enabled.
# GROK_ENABLED=false
# XAI_API_KEY=xai-...
# GROK_DEFAULT_MODEL=grok-3-fast
# GROK_MAX_SATS_PER_QUERY=200
# GROK_FREE=false
# ── L402 Lightning secrets ───────────────────────────────────────────────────
# HMAC secret for invoice verification. MUST be changed in production.
# Generate with: python3 -c "import secrets; print(secrets.token_hex(32))"
# L402_HMAC_SECRET=<your-secret-here>
# HMAC secret for macaroon signing. MUST be changed in production.
# L402_MACAROON_SECRET=<your-secret-here>
# Lightning backend: "mock" (default) | "lnd"
# LIGHTNING_BACKEND=mock
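Both secrets above can be generated in one pass; a minimal sketch (assumes python3 is on PATH and that a .env file already exists to append to — variable names come from this file, and token_hex(32) yields the 64 hex characters suggested above):

```shell
# Generate a fresh HMAC secret for each L402 variable and
# append the resulting KEY=value lines to .env.
for name in L402_HMAC_SECRET L402_MACAROON_SECRET; do
  echo "$name=$(python3 -c 'import secrets; print(secrets.token_hex(32))')"
done >> .env
```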
# ── Environment & Privacy ───────────────────────────────────────────────────
# Environment mode: "development" (default) | "production"
# In production, security secrets MUST be set or the app will refuse to start.
# TIMMY_ENV=development
# Agno telemetry toggle for sovereign/air-gapped deployments.
# Defaults to false (disabled) to align with the local-first AI vision;
# set to true only if you want to opt in.
# TELEMETRY_ENABLED=false
# ── Telegram bot ──────────────────────────────────────────────────────────────
# Bot token from @BotFather on Telegram.
# Alternatively, configure via the /telegram/setup dashboard endpoint at runtime.
# Requires: pip install ".[telegram]"
# TELEGRAM_TOKEN=
# ── Discord bot ──────────────────────────────────────────────────────────────
# Bot token from https://discord.com/developers/applications
# Alternatively, configure via the /discord/setup dashboard endpoint at runtime.
# Requires: pip install ".[discord]"
# Optional: pip install pyzbar Pillow (for QR code invite detection from screenshots)
# DISCORD_TOKEN=
# ── Autoresearch — autonomous ML experiment loops ────────────────────────────
# Enable autonomous experiment loops (Karpathy autoresearch pattern).
# AUTORESEARCH_ENABLED=false
# AUTORESEARCH_WORKSPACE=data/experiments
# AUTORESEARCH_TIME_BUDGET=300
# AUTORESEARCH_MAX_ITERATIONS=100
# AUTORESEARCH_METRIC=val_bpb
# ── Auth Gate (nginx auth_request) ─────────────────────────────────────────
# Required when running auth-gate.py for nginx auth_request.
# Generate secret with: python3 -c "import secrets; print(secrets.token_hex(32))"
# AUTH_GATE_SECRET=<your-secret-here>
# AUTH_GATE_USER=<your-username>
# AUTH_GATE_PASS=<your-password>
# ── Docker Production ────────────────────────────────────────────────────────
# When deploying with docker-compose.prod.yml:
# - Containers run as non-root user "timmy" (defined in Dockerfile)
# - No source bind mounts — code is baked into the image
# - Set TIMMY_ENV=production to enforce security checks
# - All secrets below MUST be set before production deployment