* test: remove hardcoded sleeps, add pytest-timeout
- Replace fixed time.sleep() calls with condition-based polling or WebDriverWait (see the sketch after this list)
- Add pytest-timeout dependency and --timeout=30 to prevent hangs
- Fixes test flakiness and improves test suite speed
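A minimal sketch of the polling pattern, assuming Selenium-based UI tests; the locator and timeout values here are illustrative:

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def wait_for_result(driver, timeout=10):
    """Poll until the element renders instead of sleeping a fixed interval."""
    # Replaces: time.sleep(5)
    return WebDriverWait(driver, timeout).until(
        EC.presence_of_element_located((By.ID, "result"))  # hypothetical locator
    )
```

With pytest-timeout installed, `--timeout=30` on the command line (or `timeout = 30` under `[pytest]` in pytest.ini) aborts any test that still hangs.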
* feat: add Aider AI tool to Forge's toolkit
- Add Aider tool that calls local Ollama (qwen2.5:14b) for AI coding assistance (sketched below)
- Register tool in Forge's code toolkit
- Add functional tests for the Aider tool
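A sketch of the shape such a tool might take, assuming Ollama's standard HTTP API on its default port; the function name is illustrative, not Forge's actual tool interface:

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def aider_assist(prompt: str, model: str = "qwen2.5:14b") -> str:
    """Send a coding prompt to the local Ollama model and return its reply."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,  # local 14B models can be slow on first load
    )
    resp.raise_for_status()
    return resp.json()["response"]
```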
* config: add opencode.json with local Ollama provider for sovereign AI
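The exact opencode.json schema depends on the OpenCode release, so treat this as a loose illustration only (every field name here is an assumption); the idea is to point a provider at Ollama's OpenAI-compatible endpoint:

```json
{
  "provider": {
    "ollama": {
      "options": {
        "baseURL": "http://localhost:11434/v1"
      }
    }
  }
}
```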
* feat: Timmy fixes and improvements
## Bug Fixes
- Fix read_file path resolution: add ~ expansion and proper relative-path handling
- Add repo_root to config.py, auto-detected from the .git location (both fixes sketched below)
- Fix hardcoded llama3.2 model name; now read from settings.ollama_model
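A condensed sketch of both fixes, assuming pathlib; the function names are illustrative rather than the actual config.py API:

```python
from pathlib import Path
from typing import Optional

def resolve_path(raw: str, base: Path) -> Path:
    """Expand ~ and anchor relative paths to a known base directory."""
    p = Path(raw).expanduser()
    return p if p.is_absolute() else (base / p).resolve()

def find_repo_root(start: Optional[Path] = None) -> Path:
    """Walk upward from start until a .git entry marks the repository root."""
    here = (start or Path.cwd()).resolve()
    for candidate in (here, *here.parents):
        if (candidate / ".git").exists():
            return candidate
    raise RuntimeError(f"no .git found above {here}")
```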
## Timmy's Requests
- Add communication protocol to AGENTS.md (read context first, explain changes)
- Create DECISIONS.md for architectural decision documentation
- Add reasoning guidance to system prompts (step-by-step, state uncertainty)
- Update tests to reflect correct model name (llama3.1:8b-instruct)
## Testing
- All 177 dashboard tests pass
- All 32 prompt/tool tests pass
---------
Co-authored-by: Alexander Payne <apayne@MM.local>
* feat: run Timmy fully in-browser via WebLLM over WebGPU
Enable Timmy to run directly on iPhone by loading a small LLM into
the browser via WebGPU (Safari 26+ / iOS 26+). No server connection
is required: fully sovereign, fully offline.
New files:
- static/local_llm.js: WebLLM wrapper with model catalogue, WebGPU
detection, streaming chat, and progress callbacks
- templates/mobile_local.html: Mobile-optimized UI with model
selector, download progress, LOCAL/SERVER badge, and chat
- tests/dashboard/test_local_models.py: 31 tests covering routes,
config, template UX, JS asset, and XSS prevention
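A representative test of that shape, assuming a Flask-style test client fixture; the route comes from this commit, but the response shape asserted here is hypothetical:

```python
def test_local_models_api_returns_catalogue(client):
    """/mobile/local-models should list the WebLLM model catalogue."""
    resp = client.get("/mobile/local-models")
    assert resp.status_code == 200
    models = resp.get_json()  # Flask test-client helper
    assert any("SmolLM2" in str(m) for m in models)  # hypothetical payload check
```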
Changes:
- config.py: new browser_model_enabled, browser_model_id, and
  browser_model_fallback settings (sketched below)
- routes/mobile.py: /mobile/local page, /mobile/local-models API
- base.html: LOCAL AI nav link
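A sketch of the three settings, assuming a pydantic-style Settings class; config.py's actual base class and defaults are not shown in the commit:

```python
from pydantic_settings import BaseSettings

class Settings(BaseSettings):
    # Toggle the in-browser WebLLM path entirely
    browser_model_enabled: bool = True
    # Default entry in the model selector (illustrative id)
    browser_model_id: str = "SmolLM2-360M"
    # Use server-side Ollama when WebGPU or the model is unavailable
    browser_model_fallback: bool = True
```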
Supported models: SmolLM2-360M (~200MB), Qwen2.5-0.5B (~350MB),
SmolLM2-1.7B (~1GB), and Llama-3.2-1B (~700MB). Falls back to
server-side Ollama when the local model is unavailable.
https://claude.ai/code/session_01Cqkvr4sZbED7T3iDu1rwSD