This repository was archived on 2026-03-24. You can view and clone its files, but you cannot open issues or pull requests, or push commits.
Latest commit (Claude, 3b7fcc5ebc): feat: add in-browser local model support for iPhone via WebLLM
Enable Timmy to run directly on iPhone by loading a small LLM into
the browser via WebGPU (Safari 26+ / iOS 26+). No server connection
required — fully sovereign, fully offline.

New files:
- static/local_llm.js: WebLLM wrapper with model catalogue, WebGPU
  detection, streaming chat, and progress callbacks
- templates/mobile_local.html: Mobile-optimized UI with model
  selector, download progress, LOCAL/SERVER badge, and chat
- tests/dashboard/test_local_models.py: 31 tests covering routes,
  config, template UX, JS asset, and XSS prevention

Changes:
- config.py: browser_model_enabled, browser_model_id,
  browser_model_fallback settings
- routes/mobile.py: /mobile/local page, /mobile/local-models API
- base.html: LOCAL AI nav link

Supported models: SmolLM2-360M (~200MB), Qwen2.5-0.5B (~350MB),
SmolLM2-1.7B (~1GB), Llama-3.2-1B (~700MB). Falls back to
server-side Ollama when local model is unavailable.
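The model selection and fallback behaviour described above can be sketched roughly as follows. The catalogue names and sizes come from the commit message; `pick_model` and its download-budget parameter are hypothetical illustrations, not the actual static/local_llm.js API:

```python
# Hypothetical sketch of the browser-model catalogue and fallback logic.
# Model names and approximate sizes mirror the commit message above.
CATALOGUE = {
    "SmolLM2-360M": 200,   # approx. download size in MB
    "Qwen2.5-0.5B": 350,
    "Llama-3.2-1B": 700,
    "SmolLM2-1.7B": 1024,
}

def pick_model(webgpu_available: bool, budget_mb: int,
               fallback: str = "ollama") -> str:
    """Pick the largest local model within budget, else fall back to the server."""
    if not webgpu_available:
        return fallback  # no WebGPU (pre-Safari 26): use server-side Ollama
    fitting = [m for m, size in CATALOGUE.items() if size <= budget_mb]
    if not fitting:
        return fallback  # nothing fits the download budget
    return max(fitting, key=CATALOGUE.__getitem__)
```

The real code runs in the browser (JavaScript); this Python version only illustrates the decision shape.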

Committed 2026-02-27 00:03:05 +00:00

Timmy Time — Mission Control


A local-first, sovereign AI agent system. Talk to Timmy, watch his swarm, gate API access with Bitcoin Lightning — all from a browser, no cloud AI required.

Live Docs →


Quick Start

git clone https://github.com/AlexanderWhitestone/Timmy-time-dashboard.git
cd Timmy-time-dashboard
make install              # create venv + install deps
cp .env.example .env      # configure environment

ollama serve              # separate terminal
ollama pull llama3.2

make dev                  # http://localhost:8000
make test                 # no Ollama needed

What's Here

Subsystem Description
Timmy Agent Agno-powered agent (Ollama default, AirLLM optional for 70B/405B)
Mission Control FastAPI + HTMX dashboard — chat, health, swarm, marketplace
Swarm Multi-agent coordinator — spawn agents, post tasks, Lightning auctions
L402 / Lightning Bitcoin Lightning payment gating for API access
Spark Event capture, predictions, memory consolidation, advisory
Creative Studio Multi-persona pipeline — image, music, video generation
Hands 6 autonomous scheduled agents — Oracle, Sentinel, Scout, Scribe, Ledger, Weaver
Self-Coding Codebase-aware self-modification with git safety
Integrations Telegram bridge, Siri Shortcuts, voice NLU, mobile layout
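The L402 / Lightning row above gates API access behind a Bitcoin Lightning payment. One common shape for such gating is an HMAC-signed token tied to an invoice's payment hash; the sketch below assumes `L402_HMAC_SECRET` from the configuration and is illustrative, not the repository's actual token scheme:

```python
import hashlib
import hmac

def mint_token(secret: bytes, payment_hash: str) -> str:
    """Issue a token after payment: HMAC-SHA256 of the invoice payment hash."""
    return hmac.new(secret, payment_hash.encode(), hashlib.sha256).hexdigest()

def verify_token(secret: bytes, payment_hash: str, token: str) -> bool:
    """Constant-time check that a presented token matches the payment hash."""
    return hmac.compare_digest(mint_token(secret, payment_hash), token)
```

In a real L402 flow the server responds 402 with an invoice, the client pays, then presents proof of payment on subsequent requests; this sketch covers only the token check.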

Commands

make dev            # start dashboard (http://localhost:8000)
make test           # run all tests
make test-cov       # tests + coverage report
make lint           # run ruff/flake8
make docker-up      # start via Docker
make help           # see all commands

CLI tools: timmy, timmy-serve, self-tdd, self-modify


Documentation

Document Purpose
CLAUDE.md AI assistant development guide
AGENTS.md Multi-agent development standards
.env.example Configuration reference
docs/ Architecture docs, ADRs, audits

Configuration

cp .env.example .env

Key variables: OLLAMA_URL, OLLAMA_MODEL, TIMMY_MODEL_BACKEND, L402_HMAC_SECRET, LIGHTNING_BACKEND, DEBUG. Full list in .env.example.
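A minimal sketch of how these variables might be read at startup. The variable names come from the list above; the defaults and the `load_settings` helper are illustrative guesses, not the repository's actual config.py:

```python
import os

def load_settings(env=os.environ) -> dict:
    """Read key settings from the environment with illustrative defaults."""
    return {
        "ollama_url": env.get("OLLAMA_URL", "http://localhost:11434"),
        "ollama_model": env.get("OLLAMA_MODEL", "llama3.2"),
        "backend": env.get("TIMMY_MODEL_BACKEND", "ollama"),
        # booleans in .env files usually arrive as strings
        "debug": env.get("DEBUG", "false").lower() in {"1", "true", "yes"},
    }
```

Passing a plain dict instead of `os.environ` makes the loader easy to test.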


Troubleshooting

  • ollama: command not found — brew install ollama or download from ollama.com
  • connection refused — run ollama serve first
  • ModuleNotFoundError — source .venv/bin/activate && make install
  • Health panel shows DOWN — Ollama isn't running; chat returns offline message

Roadmap

Version Name Status
1.0 Genesis Complete — Agno + Ollama + SQLite + Dashboard
2.0 Exodus In progress — Swarm + L402 + Voice + Marketplace + Hands
3.0 Revelation Planned — Lightning treasury + single .app bundle