Add complete production deployment stack so Timmy can be deployed to any cloud provider (DigitalOcean, AWS, Hetzner, etc.) with a single command.

New files:
- docker-compose.prod.yml: production stack (Caddy auto-HTTPS, Ollama LLM, Dashboard, Timmy agent, Watchtower auto-updates)
- deploy/Caddyfile: reverse proxy with security headers and WebSocket support
- deploy/setup.sh: interactive one-click setup script for any Ubuntu/Debian server
- deploy/cloud-init.yaml: paste as User Data when creating a cloud VM
- deploy/timmy.service: systemd unit for auto-start on boot
- deploy/digitalocean/create-droplet.sh: create a DO droplet via doctl CLI

Updated:
- Dockerfile: non-root user, healthcheck, missing deps (GitPython, moviepy, redis)
- Makefile: cloud-deploy, cloud-up/down/logs/status/update/scale targets
- .env.example: DOMAIN setting for HTTPS
- .dockerignore: exclude deploy configs from image

https://claude.ai/code/session_018CduUZoEJzFynBwMsxaP8T
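Deployment, in brief (a sketch of the intended flow; <repo-url> and the checkout directory are placeholders, and the exact prompts come from deploy/setup.sh):

    # Option A: on a fresh Ubuntu/Debian server
    git clone <repo-url> && cd <checkout-dir>
    bash deploy/setup.sh        # interactive one-click setup; generates .env for you

    # Option B: paste deploy/cloud-init.yaml as User Data when creating a cloud VM,
    # or create a DigitalOcean droplet with doctl via:
    bash deploy/digitalocean/create-droplet.sh

    # Day-2 operations through the Makefile targets added here
    make cloud-status
    make cloud-logs
    make cloud-update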
.env.example (49 lines, 2.3 KiB, Plaintext)
# Timmy Time — Mission Control
# Copy this file to .env and uncomment lines you want to override.
# .env is gitignored and never committed.
#
# For cloud deployment, deploy/setup.sh generates this automatically.

# ── Cloud / Production ──────────────────────────────────────────────────────
# Your domain for automatic HTTPS via Let's Encrypt.
# Set to your actual domain (e.g., timmy.example.com) for HTTPS.
# Leave as "localhost" for IP-only HTTP access.
# DOMAIN=localhost

# Ollama host (default: http://localhost:11434)
# In production (docker-compose.prod.yml), this is set to http://ollama:11434 automatically.
# OLLAMA_URL=http://localhost:11434

# LLM model to use via Ollama (default: llama3.2)
# OLLAMA_MODEL=llama3.2

# Enable FastAPI interactive docs at /docs and /redoc (default: false)
# DEBUG=true

# ── AirLLM / big-brain backend ───────────────────────────────────────────────
# Inference backend: "ollama" (default) | "airllm" | "auto"
# "auto" → uses AirLLM on Apple Silicon if installed, otherwise Ollama.
# Requires: pip install ".[bigbrain]"
# TIMMY_MODEL_BACKEND=ollama

# AirLLM model size (default: 70b).
# 8b ~16 GB RAM | 70b ~140 GB RAM | 405b ~810 GB RAM
# AIRLLM_MODEL_SIZE=70b

# ── L402 Lightning secrets ───────────────────────────────────────────────────
# HMAC secret for invoice verification. MUST be changed in production.
# Generate with: python3 -c "import secrets; print(secrets.token_hex(32))"
# L402_HMAC_SECRET=<your-secret-here>

# HMAC secret for macaroon signing. MUST be changed in production.
# L402_MACAROON_SECRET=<your-secret-here>

# Lightning backend: "mock" (default) | "lnd"
# LIGHTNING_BACKEND=mock

# ── Telegram bot ──────────────────────────────────────────────────────────────
# Bot token from @BotFather on Telegram.
# Alternatively, configure via the /telegram/setup dashboard endpoint at runtime.
# Requires: pip install ".[telegram]"
# TELEGRAM_TOKEN=
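For reference, a filled-in production .env might look like this (a sketch only: the domain and all secret values are placeholders, and OLLAMA_URL is omitted because docker-compose.prod.yml sets it to http://ollama:11434):

    DOMAIN=timmy.example.com
    OLLAMA_MODEL=llama3.2
    TIMMY_MODEL_BACKEND=ollama
    LIGHTNING_BACKEND=mock
    L402_HMAC_SECRET=<output of python3 -c "import secrets; print(secrets.token_hex(32))">
    L402_MACAROON_SECRET=<a second, separately generated secret>
    TELEGRAM_TOKEN=<token from @BotFather, optional>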