* Remove persona system, identity, and all Timmy references
Strip the codebase to pure orchestration logic:
- Delete TIMMY_IDENTITY.md and memory/self/identity.md
- Gut brain/identity.py to no-op stubs (empty returns; see the sketch below)
- Remove all system prompts reinforcing Timmy's character, faith,
sovereignty, sign-off ("Sir, affirmative"), and agent roster
- Replace identity-laden prompts with generic local-AI-assistant prompts
- Remove "You work for Timmy" from all sub-agent system prompts
- Rename PersonaTools → AgentTools, PERSONA_TOOLKITS → AGENT_TOOLKITS
- Replace "timmy" agent ID with "orchestrator" across routes, marketplace,
tools catalog, and orchestrator class
- Strip Timmy references from config comments, templates, telegram bot,
chat API, and dashboard UI
- Delete tests/brain/test_identity.py entirely
- Fix all test assertions that checked for persona identity content
729 tests pass (2 pre-existing, unrelated failures remain in test_calm.py).
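The gutted identity module keeps its public surface but returns empty values; a minimal sketch, assuming hypothetical function names (the real names in brain/identity.py may differ):

```python
# Hypothetical stubs for the gutted brain/identity.py -- function names are
# illustrative. Each former persona hook now returns an empty value so that
# existing callers keep working without injecting identity content.

def get_identity_prompt() -> str:
    """Previously returned Timmy's persona block; now a no-op."""
    return ""


def load_identity_memory() -> dict:
    """Previously loaded memory/self/identity.md; now returns nothing."""
    return {}
```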
https://claude.ai/code/session_01LjQGUE6nk9W9674zaxrYxy
* Add Taskosaur (PM + AI task execution) to docker-compose
Spins up Taskosaur alongside the dashboard on `docker compose up`:
- postgres:16-alpine (port 5432, Taskosaur DB)
- redis:7-alpine (Bull queue backend)
- taskosaur (ports 3000 API / 3001 UI)
- dashboard now depends_on taskosaur with condition: service_healthy
- TASKOSAUR_API_URL injected into dashboard environment
Dashboard can reach Taskosaur at http://taskosaur:3000/api on the
internal network. Frontend UI accessible at http://localhost:3001.
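As a rough illustration (not the actual dashboard code), the service can be reached from Python with nothing more than the injected variable; the /health path is an assumption:

```python
import os

import httpx

# TASKOSAUR_API_URL is injected by docker-compose; the default mirrors the
# internal network address above. The /health endpoint path is an assumption.
TASKOSAUR_API_URL = os.environ.get("TASKOSAUR_API_URL", "http://taskosaur:3000/api")


def taskosaur_is_up() -> bool:
    try:
        return httpx.get(f"{TASKOSAUR_API_URL}/health", timeout=5).status_code == 200
    except httpx.HTTPError:
        return False
```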
https://claude.ai/code/session_01LjQGUE6nk9W9674zaxrYxy
---------
Co-authored-by: Claude <noreply@anthropic.com>
Replace the stub `handle_bug_report` handler with a real implementation
that logs a decision trail and dispatches code_fix tasks to Forge for
automated fixing. Add `POST /api/bugs/submit` endpoint and `timmy
ingest-report` CLI command so AI test runners (Comet) can submit
structured bug reports without manual copy-paste.
- POST /api/bugs/submit: accepts JSON reports, creates bug_report tasks
- timmy ingest-report: CLI for file/stdin JSON ingestion with --dry-run
- handle_bug_report: logs decision trail to event_log, dispatches
code_fix task to Forge with parent_task_id linking back to the bug
- 18 TDD tests covering endpoint, handler, and CLI
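A rough sketch of the endpoint's shape, with illustrative field names and a placeholder for the project's task queue (neither is the actual implementation):

```python
from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter()


class BugReport(BaseModel):
    # Illustrative fields only; the real report schema may differ.
    title: str
    description: str
    reporter: str = "comet"


def enqueue_bug_report(payload: dict) -> str:
    """Placeholder for the existing task queue; creates a bug_report task."""
    raise NotImplementedError


@router.post("/api/bugs/submit")
async def submit_bug(report: BugReport):
    task_id = enqueue_bug_report(report.model_dump())
    return {"status": "queued", "task_id": task_id}
```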
Co-authored-by: Alexander Payne <apayne@MM.local>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The chat WebSocket return path was broken by two bugs that prevented
Timmy's responses from appearing in the live chat feed:
1. Frontend checked msg.type instead of msg.event for 'timmy_response'
events — the WSEvent dataclass uses 'event' as the field name.
2. Frontend accessed msg.response instead of msg.data.response — the
response payload is nested in the data field.
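For context, the event shape the frontend must match looks roughly like this (the dataclass body is an assumption; the field names come from the bugs above):

```python
from dataclasses import asdict, dataclass, field
from typing import Any


@dataclass
class WSEvent:
    # Field names match what the frontend reads: msg.event and msg.data.response.
    event: str                                           # e.g. "timmy_response"
    data: dict[str, Any] = field(default_factory=dict)   # nested payload


# What goes over the wire -- the response text is nested under "data":
asdict(WSEvent(event="timmy_response", data={"response": "Hello", "task_id": "42"}))
# {"event": "timmy_response", "data": {"response": "Hello", "task_id": "42"}}
```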
Additional fixes:
- Queue acknowledgment ("Message queued...") no longer logged as an
agent message in chat history; the real response is logged by the
task processor when it completes, eliminating duplicate messages.
- Chat message template now carries data-task-id so the WS handler
can find and replace the placeholder with the actual response.
- appendMessage() uses DOM APIs (textContent) instead of innerHTML
for safer content insertion before markdown rendering.
- Fixed chat_message.html script targeting when queue-status div is
present between the agent message and the inline script.
https://claude.ai/code/session_011cJfexqBBuGhSRQU8qwKcR
Co-authored-by: Claude <noreply@anthropic.com>
Errors and uncaught exceptions are now automatically captured, deduplicated,
persisted to a rotating log file, and filed as bug report tasks in the
existing task queue — giving Timmy a sovereign, local issue tracker with
zero new dependencies.
- Add RotatingFileHandler writing errors to logs/errors.log (5MB rotate, 5 backups)
- Add error capture module with stack-trace hashing and a 5-min dedup window (sketched below)
- Add FastAPI exception middleware + global exception handler
- Instrument all background loops (briefing, thinking, task processor) with capture_error()
- Extend task queue with bug_report task type and auto-approve rule
- Fix auto-approve type matching (was ignoring task_type field entirely)
- Add /bugs dashboard page and /api/bugs JSON endpoints
- Add ERROR_CAPTURED and BUG_REPORT_CREATED event types for real-time feed
- Add BUGS nav link to desktop and mobile navigation
- Add 16 tests covering error capture, deduplication, and bug report routes
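A minimal sketch of the dedup idea referenced above (hash the formatted stack trace, suppress repeats inside a 5-minute window); the real module's names and storage may differ:

```python
import hashlib
import time
import traceback

DEDUP_WINDOW_SECONDS = 5 * 60
_last_seen: dict[str, float] = {}


def should_file_bug(exc: BaseException) -> bool:
    """Return True if this error is new (or outside the window) and should be filed."""
    trace = "".join(traceback.format_exception(type(exc), exc, exc.__traceback__))
    digest = hashlib.sha256(trace.encode()).hexdigest()
    now = time.monotonic()
    if now - _last_seen.get(digest, float("-inf")) < DEDUP_WINDOW_SECONDS:
        return False  # duplicate within the window; skip filing another report
    _last_seen[digest] = now
    return True
```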
Co-authored-by: Alexander Payne <apayne@MM.local>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add /api/chat, /api/upload, and /api/chat/history endpoints to the
FastAPI dashboard so the Expo mobile app talks directly to Timmy's
brain (Ollama) instead of a non-existent Node.js server.
Backend:
- New src/dashboard/routes/chat_api.py with 4 endpoints
- Mount /uploads/ for serving chat attachments
- Same context injection and session management as HTMX chat
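Roughly the shape of the new router; the request fields and the brain call are illustrative placeholders, not the real code:

```python
from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter(prefix="/api")


class ChatRequest(BaseModel):
    message: str                    # illustrative fields; real schema may differ
    session_id: str | None = None


async def ask_brain(message: str, session_id: str | None) -> str:
    """Placeholder for the existing Ollama-backed chat path (context injection, sessions)."""
    raise NotImplementedError


@router.post("/chat")
async def chat(req: ChatRequest):
    reply = await ask_brain(req.message, req.session_id)
    return {"response": reply}
```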
Mobile app fixes:
- Point API base URL at port 8000 (FastAPI) instead of 3000
- Create lib/_core/theme.ts (was referenced but never created)
- Fix shared/types.ts (remove broken drizzle/errors re-exports)
- Remove broken server/chat.ts and 1,235-line template README
- Clean package.json (remove express, mysql2, drizzle, tRPC deps)
- Remove debug console.log from theme-provider
Tests: 13 new tests covering all API endpoints (all passing).
https://claude.ai/code/session_01XqErDoh2rVsPY8oTj21Lz2
Co-authored-by: Claude <noreply@anthropic.com>
* test: remove hardcoded sleeps, add pytest-timeout
- Replace fixed time.sleep() calls with condition-based polling or WebDriverWait (see the sketch below)
- Add pytest-timeout dependency and --timeout=30 to prevent hangs
- Fixes test flakiness and improves test suite speed
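For example (element id and driver fixture are illustrative; WebDriverWait and pytest-timeout are used as documented):

```python
import time

from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait


def wait_for_chat_feed(driver):
    """Instead of time.sleep(5): wait only as long as the element actually takes."""
    WebDriverWait(driver, 30).until(
        EC.presence_of_element_located((By.ID, "chat-feed"))
    )


def wait_for(condition, timeout=10.0, interval=0.1):
    """Generic polling helper for non-browser assertions."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError("condition not met within timeout")
```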
* feat: add Aider AI tool to Forge's toolkit
- Add Aider tool that calls local Ollama (qwen2.5:14b) for AI coding assistance
- Register tool in Forge's code toolkit
- Add functional tests for the Aider tool
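A hedged sketch of how such a tool can shell out to aider against the local model; the flags shown (--model ollama/..., --message, --yes) should be verified against the installed aider version, and the project's tool-registration wrapper is omitted:

```python
import subprocess


def run_aider(instruction: str, files: list[str]) -> str:
    """Invoke aider non-interactively against the local Ollama model (illustrative)."""
    cmd = [
        "aider",
        "--model", "ollama/qwen2.5:14b",
        "--yes",
        "--message", instruction,
        *files,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=False)
    return result.stdout
```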
* config: add opencode.json with local Ollama provider for sovereign AI
* feat: Timmy fixes and improvements
## Bug Fixes
- Fix read_file path resolution: add ~ expansion, proper relative path handling
- Add repo_root to config.py with auto-detection from .git location
- Fix hardcoded llama3.2 model name - now read dynamically from settings.ollama_model
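A minimal sketch of the path-resolution logic described above (helper names are illustrative):

```python
from pathlib import Path


def find_repo_root(start: Path | None = None) -> Path:
    """Walk upward until a .git directory is found (the auto-detection config.py uses)."""
    current = (start or Path.cwd()).resolve()
    for candidate in [current, *current.parents]:
        if (candidate / ".git").exists():
            return candidate
    return current  # fallback when not inside a git checkout


def resolve_read_path(raw: str, repo_root: Path) -> Path:
    """Expand ~ and anchor relative paths at the repo root instead of the CWD."""
    path = Path(raw).expanduser()
    return path if path.is_absolute() else (repo_root / path).resolve()
```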
## Timmy's Requests
- Add communication protocol to AGENTS.md (read context first, explain changes)
- Create DECISIONS.md for architectural decision documentation
- Add reasoning guidance to system prompts (step-by-step, state uncertainty)
- Update tests to reflect correct model name (llama3.1:8b-instruct)
## Testing
- All 177 dashboard tests pass
- All 32 prompt/tool tests pass
---------
Co-authored-by: Alexander Payne <apayne@MM.local>
Enable Timmy to run directly on iPhone by loading a small LLM into
the browser via WebGPU (Safari 26+ / iOS 26+). No server connection
required — fully sovereign, fully offline.
New files:
- static/local_llm.js: WebLLM wrapper with model catalogue, WebGPU
detection, streaming chat, and progress callbacks
- templates/mobile_local.html: Mobile-optimized UI with model
selector, download progress, LOCAL/SERVER badge, and chat
- tests/dashboard/test_local_models.py: 31 tests covering routes,
config, template UX, JS asset, and XSS prevention
Changes:
- config.py: browser_model_enabled, browser_model_id,
browser_model_fallback settings
- routes/mobile.py: /mobile/local page, /mobile/local-models API (sketched below)
- base.html: LOCAL AI nav link
Supported models: SmolLM2-360M (~200MB), Qwen2.5-0.5B (~350MB),
SmolLM2-1.7B (~1GB), Llama-3.2-1B (~700MB). Falls back to
server-side Ollama when local model is unavailable.
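The server-side pieces are small; a sketch of the catalogue route (route path and setting names per the list above, model ids and defaults are assumptions):

```python
# routes/mobile.py sketch -- exact WebLLM model ids and sizes are illustrative.
from fastapi import APIRouter

router = APIRouter(prefix="/mobile")

LOCAL_MODELS = [
    {"id": "SmolLM2-360M", "approx_size": "200MB"},
    {"id": "Qwen2.5-0.5B", "approx_size": "350MB"},
    {"id": "SmolLM2-1.7B", "approx_size": "1GB"},
    {"id": "Llama-3.2-1B", "approx_size": "700MB"},
]


@router.get("/local-models")
async def local_models():
    # static/local_llm.js reads this catalogue to populate the model selector;
    # the UI falls back to server-side Ollama when WebGPU is unavailable.
    return {"enabled": True, "models": LOCAL_MODELS}
```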
https://claude.ai/code/session_01Cqkvr4sZbED7T3iDu1rwSD