Timmy-time-dashboard/.env.example
Alexander Payne 4961c610f2 Security, privacy, and agent intelligence hardening
## Security (Workset A)
- XSS: Verified templates use safe DOM methods (textContent, createElement)
- Secrets: Fail fast in production mode when L402 secrets are not set
- Environment mode: Add TIMMY_ENV (development|production) validation
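A fail-fast startup check like the one described above can be sketched as follows. This is an illustrative sketch, not the project's actual code: the function name and the exact exception types are assumptions; only the variable names (`TIMMY_ENV`, `L402_HMAC_SECRET`, `L402_MACAROON_SECRET`) come from the commit.

```python
def validate_environment(env: dict) -> None:
    """Refuse to start in production when L402 secrets are missing.

    Illustrative sketch; the real project's validation may differ.
    """
    mode = env.get("TIMMY_ENV", "development")
    if mode not in ("development", "production"):
        raise ValueError(
            f"TIMMY_ENV must be 'development' or 'production', got {mode!r}"
        )
    if mode == "production":
        # Fail fast: better to refuse startup than to sign invoices
        # and macaroons with missing or placeholder secrets.
        for key in ("L402_HMAC_SECRET", "L402_MACAROON_SECRET"):
            if not env.get(key):
                raise RuntimeError(f"{key} must be set when TIMMY_ENV=production")
```

In development mode the check passes with no secrets set, so local runs stay frictionless.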

## Privacy (Workset C)
- Add telemetry_enabled config (default: False for sovereign AI)
- Pass telemetry setting to Agno Agent
- Update .env.example with TELEMETRY_ENABLED and TIMMY_ENV docs
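Parsing `TELEMETRY_ENABLED` with a safe default of off might look like the sketch below. The helper name is hypothetical; the default-false behavior is what the commit describes.

```python
import os

def telemetry_enabled(env=None) -> bool:
    """Return True only when TELEMETRY_ENABLED is explicitly truthy.

    Defaults to False so sovereign/air-gapped deployments send nothing
    unless the operator opts in. Illustrative sketch, not project code.
    """
    env = env if env is not None else os.environ
    return env.get("TELEMETRY_ENABLED", "false").strip().lower() in ("1", "true", "yes")
```

If Agno's `Agent` constructor accepts a telemetry flag, the result can be passed straight through, e.g. `Agent(..., telemetry=telemetry_enabled())`; treat that keyword argument as an assumption about the Agno API rather than confirmed usage.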

## Agent Intelligence (Workset D)
- Enhanced TIMMY_SYSTEM_PROMPT with:
  - Tool usage guidelines (when to use, when not to)
  - Memory awareness documentation
  - Operating mode documentation
- Helps reduce unnecessary tool calls for simple queries

All 895 tests pass.
Telemetry disabled by default aligns with sovereign AI vision.
2026-02-25 15:32:19 -05:00


# Timmy Time — Mission Control
# Copy this file to .env and uncomment lines you want to override.
# .env is gitignored and never committed.
#
# For cloud deployment, deploy/setup.sh generates this automatically.
# ── Cloud / Production ──────────────────────────────────────────────────────
# Your domain for automatic HTTPS via Let's Encrypt.
# Set to your actual domain (e.g., timmy.example.com) for HTTPS.
# Leave as "localhost" for IP-only HTTP access.
# DOMAIN=localhost
# Ollama host (default: http://localhost:11434)
# In production (docker-compose.prod.yml), this is set to http://ollama:11434 automatically.
# OLLAMA_URL=http://localhost:11434
# LLM model to use via Ollama (default: llama3.2)
# OLLAMA_MODEL=llama3.2
# Enable FastAPI interactive docs at /docs and /redoc (default: false)
# DEBUG=true
# ── AirLLM / big-brain backend ───────────────────────────────────────────────
# Inference backend: "ollama" (default) | "airllm" | "auto"
# "auto" → uses AirLLM on Apple Silicon if installed, otherwise Ollama.
# Requires: pip install ".[bigbrain]"
# TIMMY_MODEL_BACKEND=ollama
# AirLLM model size (default: 70b).
# 8b ~16 GB RAM | 70b ~140 GB RAM | 405b ~810 GB RAM
# AIRLLM_MODEL_SIZE=70b
# ── L402 Lightning secrets ───────────────────────────────────────────────────
# HMAC secret for invoice verification. MUST be changed in production.
# Generate with: python3 -c "import secrets; print(secrets.token_hex(32))"
# L402_HMAC_SECRET=<your-secret-here>
# HMAC secret for macaroon signing. MUST be changed in production.
# L402_MACAROON_SECRET=<your-secret-here>
# Lightning backend: "mock" (default) | "lnd"
# LIGHTNING_BACKEND=mock
# ── Environment & Privacy ───────────────────────────────────────────────────
# Environment mode: "development" (default) | "production"
# In production, security secrets MUST be set or the app will refuse to start.
# TIMMY_ENV=development
# Agno telemetry toggle (true enables telemetry; leave false for sovereign/air-gapped deployments).
# Default: false (disabled), to align with the local-first AI vision.
# TELEMETRY_ENABLED=false
# ── Telegram bot ──────────────────────────────────────────────────────────────
# Bot token from @BotFather on Telegram.
# Alternatively, configure via the /telegram/setup dashboard endpoint at runtime.
# Requires: pip install ".[telegram]"
# TELEGRAM_TOKEN=
# ── Discord bot ──────────────────────────────────────────────────────────────
# Bot token from https://discord.com/developers/applications
# Alternatively, configure via the /discord/setup dashboard endpoint at runtime.
# Requires: pip install ".[discord]"
# Optional: pip install pyzbar Pillow (for QR code invite detection from screenshots)
# DISCORD_TOKEN=