> **Latest change** — Inspired by OpenClaw-RL's multi-model orchestration, this fork adds four features for custom model management:
>
> 1. **Custom model registry** (`infrastructure/models/registry.py`) — SQLite-backed registry for GGUF, safetensors, HF checkpoint, and Ollama models with role-based lookups (general, reward, teacher, judge).
> 2. **Per-agent model assignment** — each swarm persona can use a different model instead of sharing the global default; resolved via registry assignment > persona default > global default.
> 3. **Runtime model management API** (`/api/v1/models`) — REST endpoints to register, list, assign, enable/disable, and remove custom models without a restart, plus a dashboard page at `/models`.
> 4. **Reward model scoring (PRM-style)** — majority-vote quality evaluation of agent outputs using a configurable reward model; scores persist in SQLite and feed into the swarm learner.
>
> New config settings: `custom_weights_dir`, `reward_model_enabled`, `reward_model_name`, `reward_model_votes`. 54 new tests cover registry CRUD, API endpoints, agent assignments, role lookups, and reward scoring.
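The model resolution order described above (registry assignment > persona default > global default) can be sketched as a simple fallback chain. This is an illustration only — the `resolve_model` helper and the dict shape are hypothetical, not the repo's actual registry API:

```python
def resolve_model(registry: dict, persona: str, role: str = "general",
                  global_default: str = "llama3.2") -> str:
    """Pick a model for a persona using the documented fallback order:
    explicit (persona, role) assignment > persona default > global default.
    (Hypothetical sketch; the real registry is SQLite-backed.)"""
    assignment = registry.get("assignments", {}).get((persona, role))
    if assignment:
        return assignment
    persona_default = registry.get("personas", {}).get(persona)
    if persona_default:
        return persona_default
    return global_default
```

The point of the chain is that an operator can override a single persona/role pair without touching the defaults every other agent shares.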
# Timmy Time — Mission Control

A local-first, sovereign AI agent system. Talk to Timmy, watch his swarm, gate API access with Bitcoin Lightning — all from a browser, no cloud AI required.
## Quick Start

```sh
git clone https://github.com/AlexanderWhitestone/Timmy-time-dashboard.git
cd Timmy-time-dashboard
make install          # create venv + install deps
cp .env.example .env  # configure environment
ollama serve          # separate terminal
ollama pull llama3.2
make dev              # http://localhost:8000
make test             # no Ollama needed
```
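Before chatting, it can help to confirm both local services answer: Ollama on its default port 11434 and the dashboard on 8000. A minimal stdlib-only smoke check (this helper is a sketch, not a script shipped with the repo):

```python
import urllib.error
import urllib.request


def service_up(url: str, timeout: float = 2.0) -> bool:
    """Return True if an HTTP server answers at `url` (any status counts as up)."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The server responded with an error status -- it is still up.
        return True
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, timeout, etc.
        return False


if __name__ == "__main__":
    for name, url in [("Ollama", "http://localhost:11434"),
                      ("Dashboard", "http://localhost:8000")]:
        print(f"{name}: {'UP' if service_up(url, timeout=1.0) else 'DOWN'}")
```

If Ollama is down, the dashboard still starts — the health panel just shows DOWN and chat returns an offline message (see Troubleshooting below).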
## What's Here

| Subsystem | Description |
|---|---|
| Timmy Agent | Agno-powered agent (Ollama default, AirLLM optional for 70B/405B) |
| Mission Control | FastAPI + HTMX dashboard — chat, health, swarm, marketplace |
| Swarm | Multi-agent coordinator — spawn agents, post tasks, Lightning auctions |
| L402 / Lightning | Bitcoin Lightning payment gating for API access |
| Spark | Event capture, predictions, memory consolidation, advisory |
| Creative Studio | Multi-persona pipeline — image, music, video generation |
| Hands | 6 autonomous scheduled agents — Oracle, Sentinel, Scout, Scribe, Ledger, Weaver |
| Self-Coding | Codebase-aware self-modification with git safety |
| Integrations | Telegram bridge, Siri Shortcuts, voice NLU, mobile layout |
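The L402 subsystem in the table gates endpoints behind a paid Lightning invoice: the server hands out a token tied to an invoice's payment hash, and the client proves payment by presenting the invoice's preimage. A minimal sketch of that verification, assuming an HMAC-signed token keyed by `L402_HMAC_SECRET` (the `mint_macaroon`/`verify_l402` helpers are hypothetical, not the repo's actual scheme):

```python
import hashlib
import hmac


def mint_macaroon(secret: bytes, payment_hash: str) -> str:
    """Bind a token to one invoice by HMAC-signing its payment hash (sketch)."""
    return hmac.new(secret, payment_hash.encode(), hashlib.sha256).hexdigest()


def verify_l402(secret: bytes, macaroon: str, preimage_hex: str) -> bool:
    """Accept only if the presented preimage hashes to the payment hash
    that this macaroon was minted for -- i.e. the invoice was actually paid."""
    payment_hash = hashlib.sha256(bytes.fromhex(preimage_hex)).hexdigest()
    expected = mint_macaroon(secret, payment_hash)
    return hmac.compare_digest(expected, macaroon)
```

In the L402 flow, an unauthenticated request gets a 402 response carrying the macaroon and a Lightning invoice; paying the invoice reveals the preimage, which the client then sends back alongside the macaroon.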
## Commands

```sh
make dev        # start dashboard (http://localhost:8000)
make test       # run all tests
make test-cov   # tests + coverage report
make lint       # run ruff/flake8
make docker-up  # start via Docker
make help       # see all commands
```
CLI tools: `timmy`, `timmy-serve`, `self-tdd`, `self-modify`
## Documentation

| Document | Purpose |
|---|---|
| CLAUDE.md | AI assistant development guide |
| AGENTS.md | Multi-agent development standards |
| .env.example | Configuration reference |
| docs/ | Architecture docs, ADRs, audits |
## Configuration

```sh
cp .env.example .env
```

Key variables: `OLLAMA_URL`, `OLLAMA_MODEL`, `TIMMY_MODEL_BACKEND`, `L402_HMAC_SECRET`, `LIGHTNING_BACKEND`, `DEBUG`. The full list is in `.env.example`.
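Settings like these are typically read from the environment with sensible fallbacks. A stdlib-only sketch of that pattern — the variable names come from this README, but the default values and the `load_settings` helper are assumptions, not the repo's actual config loader:

```python
import os

# Fallbacks used when a variable is absent from the environment (assumed values).
DEFAULTS = {
    "OLLAMA_URL": "http://localhost:11434",
    "OLLAMA_MODEL": "llama3.2",
    "TIMMY_MODEL_BACKEND": "ollama",
    "DEBUG": "false",
}


def load_settings() -> dict:
    """Merge environment variables over DEFAULTS and coerce DEBUG to a bool."""
    cfg = {key: os.environ.get(key, default) for key, default in DEFAULTS.items()}
    cfg["DEBUG"] = str(cfg["DEBUG"]).lower() in {"1", "true", "yes"}
    return cfg
```

Because `.env` is loaded into the process environment, the same lookup works whether values come from the shell, Docker, or the `.env` file.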
## Troubleshooting

- `ollama: command not found` — `brew install ollama` or download from ollama.com
- `connection refused` — run `ollama serve` first
- `ModuleNotFoundError` — `source .venv/bin/activate && make install`
- Health panel shows DOWN — Ollama isn't running; chat returns an offline message
## Roadmap

| Version | Name | Status |
|---|---|---|
| 1.0 | Genesis | Complete — Agno + Ollama + SQLite + Dashboard |
| 2.0 | Exodus | In progress — Swarm + L402 + Voice + Marketplace + Hands |
| 3.0 | Revelation | Planned — Lightning treasury + single .app bundle |