## 1. MCP (Model Context Protocol) Implementation

### Registry (src/mcp/registry.py)
- Tool registration with JSON schemas
- Dynamic tool discovery
- Health tracking per tool
- Metrics collection (latency, error rates)
- `@register_tool` decorator for easy registration

### Server (src/mcp/server.py)
- `MCPServer` class implementing the MCP protocol
- `MCPHTTPServer` for FastAPI integration
- Standard endpoints: `list_tools`, `call_tool`, `get_schema`

### Schemas (src/mcp/schemas/base.py)
- `create_tool_schema()` helper
- Common parameter types
- Standard return types

### Bootstrap (src/mcp/bootstrap.py)
- Automatic tool module loading
- Status reporting

## 2. MCP-Compliant Tools (src/tools/)

| Tool | Purpose | Category |
|------|---------|----------|
| web_search | DuckDuckGo search | research |
| read_file | File reading | files |
| write_file | File writing (confirmation) | files |
| list_directory | Directory listing | files |
| python | Python code execution | code |
| memory_search | Vector memory search | memory |

All tools have proper schemas, error handling, and MCP registration.

## 3. Event Bus (src/events/bus.py)
- Async publish/subscribe pattern
- Pattern matching with wildcards (`agent.task.*`)
- Event history tracking
- Concurrent handler execution
- Module-level singleton for system-wide use

## 4. Sub-Agents (src/agents/)

All agents inherit from `BaseAgent` with:
- Agno Agent integration
- MCP tool registry access
- Event bus connectivity
- Structured logging

### Agent Roster

| Agent | Role | Tools | Purpose |
|-------|------|-------|---------|
| Seer | Research | web_search, read_file, memory_search | Information gathering |
| Forge | Code | python, write_file, read_file | Code generation |
| Quill | Writing | write_file, read_file, memory_search | Content creation |
| Echo | Memory | memory_search, read_file, write_file | Context retrieval |
| Helm | Routing | memory_search | Task routing decisions |
| Timmy | Orchestrator | All tools | Coordination & user interface |

### Timmy Orchestrator
- Analyzes user requests
- Routes to the appropriate sub-agent
- Handles direct queries
- Manages swarm coordination
- `create_timmy_swarm()` factory function

## 5. Integration

All components are wired together:
- Tools auto-register on import
- Agents connect to the event bus
- MCP server provides an HTTP API
- Ready for dashboard integration

## Tests
- All 973 existing tests pass
- New components tested manually
- Import verification successful

Next steps: Cascade Router, Self-Upgrade Loop, Dashboard integration
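The async pub/sub event bus described above (wildcard patterns like `agent.task.*`, concurrent handler execution, history tracking) can be sketched roughly as follows. This is a minimal illustration, not the actual `src/events/bus.py` API; all names here are hypothetical:

```python
import asyncio
import fnmatch
from collections import defaultdict

class EventBus:
    """Minimal async pub/sub bus with wildcard topic matching."""

    def __init__(self):
        self._subs = defaultdict(list)  # pattern -> [handler, ...]
        self.history = []               # (topic, payload) tuples

    def subscribe(self, pattern, handler):
        self._subs[pattern].append(handler)

    async def publish(self, topic, payload):
        self.history.append((topic, payload))
        # collect every handler whose pattern matches, then run them concurrently
        tasks = [
            handler(topic, payload)
            for pattern, handlers in self._subs.items()
            if fnmatch.fnmatch(topic, pattern)
            for handler in handlers
        ]
        if tasks:
            await asyncio.gather(*tasks)

bus = EventBus()  # module-level singleton, as in the real bus

received = []

async def on_task(topic, payload):
    received.append((topic, payload))

bus.subscribe("agent.task.*", on_task)
asyncio.run(bus.publish("agent.task.created", {"id": 1}))
```

`fnmatch` gives shell-style wildcard semantics, which is one simple way to get `agent.task.*`-style patterns without a custom matcher.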
# Timmy Time — Mission Control

A local-first, sovereign AI agent system. Talk to Timmy, watch his swarm, gate API access with Bitcoin Lightning — all from a browser, no cloud AI required.
## What's built
| Subsystem | Description |
|---|---|
| Timmy Agent | Agno-powered agent (Ollama default, AirLLM optional for 70B/405B) |
| Mission Control | FastAPI + HTMX dashboard — chat, health, swarm, marketplace |
| Swarm | Multi-agent coordinator — spawn agents, post tasks, run Lightning auctions |
| L402 / Lightning | Bitcoin Lightning payment gating for API access (mock backend; LND scaffolded) |
| Spark Intelligence | Event capture, predictions, memory consolidation, advisory engine |
| Creative Studio | Multi-persona creative pipeline — image, music, video generation |
| Tools | Git, image, music, and video tools accessible by persona agents |
| Voice | NLU intent detection + TTS (pyttsx3, no cloud) |
| WebSocket | Real-time swarm live feed |
| Mobile | Responsive layout with full iOS safe-area and touch support |
| Telegram | Bridge Telegram messages to Timmy |
| CLI | timmy, timmy-serve, self-tdd entry points |
Full test suite, 100% passing.
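The Swarm row above mentions Lightning auctions: agents post sat-denominated bids for a task and the cheapest bid wins. A minimal sketch of that idea (illustrative names only, not the real `AuctionManager` API):

```python
from dataclasses import dataclass

@dataclass
class Bid:
    agent: str
    sats: int  # bid price in satoshis

def run_auction(bids):
    """Return the winning (cheapest) bid, or None if nobody bid."""
    return min(bids, key=lambda b: b.sats, default=None)

winner = run_auction([Bid("seer", 50), Bid("forge", 30), Bid("quill", 45)])
# winner.agent == "forge"
```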
## Prerequisites

**Python 3.11+**

```bash
python3 --version   # must be 3.11+
```

If not: `brew install python@3.11`

**Ollama** — runs the local LLM

```bash
brew install ollama
# or download from https://ollama.com
```
## Quickstart

```bash
# 1. Clone
git clone https://github.com/AlexanderWhitestone/Timmy-time-dashboard.git
cd Timmy-time-dashboard

# 2. Install
make install
# or manually: python3 -m venv .venv && source .venv/bin/activate && pip install -e ".[dev]"

# 3. Start Ollama (separate terminal)
ollama serve
ollama pull llama3.2

# 4. Launch dashboard
make dev
# opens at http://localhost:8000
```
## Common commands

```bash
make test       # run all tests (no Ollama needed)
make test-cov   # tests + coverage report
make dev        # start dashboard (http://localhost:8000)
make watch      # self-TDD watchdog (60s poll, alerts on regressions)
```

Or with the bootstrap script (creates venv, tests, watchdog, server in one shot):

```bash
bash activate_self_tdd.sh
bash activate_self_tdd.sh --big-brain   # also installs AirLLM
```
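The self-TDD watchdog polls the test suite and alerts when the pass count drops. A rough sketch of that loop, with the test runner injected so the polling logic stands alone (hypothetical names, not the actual `src/self_tdd` code):

```python
import time

def watch(run_tests, cycles, alert, interval=60):
    """Poll run_tests() `cycles` times; call alert() whenever the pass count drops."""
    prev = None
    for _ in range(cycles):
        passed = run_tests()
        if prev is not None and passed < prev:
            alert(f"regression: {prev} -> {passed} tests passing")
        prev = passed
        time.sleep(interval)
    return prev

# Simulate three polls where the third run loses a test
alerts = []
runs = iter([10, 10, 9])
watch(lambda: next(runs), cycles=3, alert=alerts.append, interval=0)
```

Injecting `run_tests` as a callable keeps the loop trivially testable; the real watchdog presumably shells out to the test runner instead.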
## CLI

```bash
timmy chat "What is sovereignty?"
timmy think "Bitcoin and self-custody"
timmy status

timmy-serve start     # L402-gated API server (port 8402)
timmy-serve invoice   # generate a Lightning invoice
timmy-serve status
```
## Mobile access

The dashboard is fully mobile-optimized (iOS safe area, 44px touch targets, 16px inputs to prevent zoom, momentum scrolling).

```bash
# Bind to your local network
uvicorn dashboard.app:app --host 0.0.0.0 --port 8000 --reload

# Find your IP
ipconfig getifaddr en0   # Wi-Fi on macOS
```

Open `http://<your-ip>:8000` on your phone (same Wi-Fi network).

Mobile-specific routes:

- `/mobile` — single-column optimized layout
- `/mobile-test` — 21-scenario HITL test harness (layout, touch, scroll, notch)
## AirLLM — big brain backend

Run 70B or 405B models locally with no GPU, using AirLLM's layer-by-layer loading. Apple Silicon uses MLX automatically.

```bash
pip install ".[bigbrain]"
pip install "airllm[mlx]"   # Apple Silicon only

timmy chat "Explain self-custody" --backend airllm --model-size 70b
```

Or set once in `.env`:

```bash
TIMMY_MODEL_BACKEND=auto
AIRLLM_MODEL_SIZE=70b
```
| Flag | Parameters | RAM needed |
|---|---|---|
| `8b` | 8 billion | ~16 GB |
| `70b` | 70 billion | ~140 GB |
| `405b` | 405 billion | ~810 GB |
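The RAM column follows the common fp16 rule of thumb of roughly 2 bytes per parameter, i.e. GB ≈ 2 × billions of parameters. As a quick sanity check:

```python
def ram_gb(params_billions: int) -> int:
    """Approximate fp16 weight footprint: ~2 bytes per parameter."""
    return params_billions * 2  # GB per billion params, matching the table

estimates = {size: ram_gb(n) for size, n in [("8b", 8), ("70b", 70), ("405b", 405)]}
```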
## Configuration

```bash
cp .env.example .env
# edit .env
```

| Variable | Default | Purpose |
|---|---|---|
| `OLLAMA_URL` | `http://localhost:11434` | Ollama host |
| `OLLAMA_MODEL` | `llama3.2` | Model served by Ollama |
| `DEBUG` | `false` | Enable /docs and /redoc |
| `TIMMY_MODEL_BACKEND` | `ollama` | `ollama` \| `airllm` \| `auto` |
| `AIRLLM_MODEL_SIZE` | `70b` | `8b` \| `70b` \| `405b` |
| `L402_HMAC_SECRET` | (default — change in prod) | HMAC signing key for macaroons |
| `L402_MACAROON_SECRET` | (default — change in prod) | Macaroon secret |
| `LIGHTNING_BACKEND` | `mock` | `mock` (production-ready) \| `lnd` (scaffolded, not yet functional) |
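To illustrate what `L402_HMAC_SECRET` is for, here is a stdlib-only sketch of HMAC-signed token minting and verification. The names `mint_macaroon` and `verify_macaroon` are hypothetical, and real L402 macaroons also carry caveats and a Lightning payment hash; this only shows the signing idea:

```python
import base64
import hashlib
import hmac

HMAC_SECRET = b"change-me-in-prod"  # stands in for L402_HMAC_SECRET

def mint_macaroon(identifier: str) -> str:
    """Sign an identifier with the HMAC key and return a bearer token."""
    sig = hmac.new(HMAC_SECRET, identifier.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(f"{identifier}:{sig}".encode()).decode()

def verify_macaroon(token: str) -> bool:
    """Recompute the signature and compare in constant time."""
    identifier, sig = base64.urlsafe_b64decode(token).decode().rsplit(":", 1)
    expected = hmac.new(HMAC_SECRET, identifier.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = mint_macaroon("user-1")
forged = base64.urlsafe_b64encode(b"user-1:deadbeef").decode()
```

`hmac.compare_digest` avoids timing side channels when comparing signatures, which is why it is preferred over `==` here.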
## Architecture

```
Browser / Phone
      │ HTTP + HTMX + WebSocket
      ▼
┌─────────────────────────────────────────┐
│ FastAPI (dashboard.app)                 │
│ routes: agents, health, swarm,          │
│         marketplace, voice, mobile      │
└───┬─────────────┬──────────┬────────────┘
    │             │          │
    ▼             ▼          ▼
 Jinja2        Timmy       Swarm
 Templates     Agent       Coordinator
 (HTMX)          │          ├─ Registry (SQLite)
                 ├─ Ollama  ├─ AuctionManager (L402 bids)
                 └─ AirLLM  ├─ SwarmComms (Redis / in-memory)
                            └─ SwarmManager (subprocess)
    │
    ├── Voice NLU + TTS (pyttsx3, local)
    ├── WebSocket live feed (ws_manager)
    ├── L402 Lightning proxy (macaroon + invoice)
    ├── Push notifications (local + macOS native)
    └── Siri Shortcuts API endpoints
```

Persistence: `timmy.db` (Agno memory), `data/swarm.db` (registry + tasks)
External: Ollama `:11434`, optional Redis, optional LND gRPC
## Project layout

```
src/
  config.py        # pydantic-settings — all env vars live here
  timmy/           # Core agent (agent.py, backends.py, cli.py, prompts.py)
  dashboard/       # FastAPI app, routes, Jinja2 templates
  swarm/           # Multi-agent: coordinator, registry, bidder, tasks, comms
  timmy_serve/     # L402 proxy, payment handler, TTS, serve CLI
  spark/           # Intelligence engine — events, predictions, advisory
  creative/        # Creative director + video assembler pipeline
  tools/           # Git, image, music, video tools for persona agents
  lightning/       # Lightning backend abstraction (mock + LND)
  agent_core/      # Substrate-agnostic agent interface
  voice/           # NLU intent detection
  ws_manager/      # WebSocket connection manager
  notifications/   # Push notification store
  shortcuts/       # Siri Shortcuts endpoints
  telegram_bot/    # Telegram bridge
  self_tdd/        # Continuous test watchdog
tests/             # one test file per module, all mocked
static/style.css   # Dark mission-control theme (JetBrains Mono)
docs/              # GitHub Pages landing page
AGENTS.md          # AI agent development standards ← read this
.env.example       # Environment variable reference
Makefile           # Common dev commands
```
## Troubleshooting

- `ollama: command not found` — install with `brew install ollama` or from ollama.com
- `connection refused` in chat — run `ollama serve` in a separate terminal
- `ModuleNotFoundError: No module named 'sqlalchemy'` — re-run `make install` to pick up the updated `agno[sqlite]` dependency
- `ModuleNotFoundError: No module named 'dashboard'` — activate the venv: `source .venv/bin/activate && pip install -e ".[dev]"`
- Health panel shows DOWN — Ollama isn't running; chat still works but returns the offline error message
- L402 startup warnings — set `L402_HMAC_SECRET` and `L402_MACAROON_SECRET` in `.env` to silence them (required for production)
## For AI agents contributing to this repo

Read `AGENTS.md`. It covers per-agent assignments, architecture patterns, coding conventions, and the v2→v3 roadmap.
## Roadmap
| Version | Name | Status | Milestone |
|---|---|---|---|
| 1.0.0 | Genesis | ✅ Complete | Agno + Ollama + SQLite + Dashboard |
| 2.0.0 | Exodus | 🔄 In progress | Swarm + L402 + Voice + Marketplace |
| 3.0.0 | Revelation | 📋 Planned | Lightning treasury + single .app bundle |