Major:
- Extract all inline <style> blocks from 22 Jinja2 templates into
  static/css/mission-control.css, a single cacheable stylesheet
- Add tox lint check that fails on inline <style> in templates (see the
  sketch below)
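A minimal sketch of what that check can look like; the template path and
script shape are assumptions, and the real tox environment may implement
it differently:

```python
# Hypothetical check_no_inline_css.py: exit nonzero if any Jinja2
# template still contains an inline <style> block.
import re
import sys
from pathlib import Path

STYLE_RE = re.compile(r"<style\b", re.IGNORECASE)

def main() -> int:
    offenders = [
        str(path)
        for path in Path("templates").rglob("*.html")
        if STYLE_RE.search(path.read_text(encoding="utf-8"))
    ]
    for path in offenders:
        print(f"inline <style> found: {path}", file=sys.stderr)
    return 1 if offenders else 0

if __name__ == "__main__":
    sys.exit(main())
```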
Minor:
1. Connection status indicator in topbar (green/amber/red dot) reflecting
WebSocket + Ollama reachability, with auto-reconnect
2. Jinja2 {% macro panel(title) %} in macros.html, eliminating repeated
   .card.mc-panel markup; index.html converted as an example (macro
   pattern sketched after this list)
3. SVG favicon (purple T + orange dot)
4. 30-second TTL cache on _check_ollama() so that each health poll no
   longer triggers a fresh Ollama probe (asyncio.to_thread already kept
   the blocking check off the event loop); sketched after this list
5. Toast notification system (McToast.show) for transient status messages,
   wired into connection status for Ollama/WebSocket state changes
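For illustration, a self-contained version of the panel() macro pattern
from item 2; the real macro lives in macros.html, and the project's exact
.card.mc-panel markup will differ:

```python
# Standalone Jinja2 environment demonstrating a panel(title) macro
# invoked with {% call %}, so the panel body is passed via caller().
from jinja2 import Environment, DictLoader

env = Environment(loader=DictLoader({
    "macros.html": (
        "{% macro panel(title) %}"
        '<div class="card mc-panel">'
        '<div class="mc-panel-title">{{ title }}</div>'
        '<div class="mc-panel-body">{{ caller() }}</div>'
        "</div>"
        "{% endmacro %}"
    ),
    "index.html": (
        '{% from "macros.html" import panel %}'
        "{% call panel('Agents') %}<p>panel body</p>{% endcall %}"
    ),
}), autoescape=True)

print(env.get_template("index.html").render())
```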
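And a sketch of the TTL cache from item 4, assuming _check_ollama() is a
blocking reachability probe run via asyncio.to_thread; every name except
_check_ollama is hypothetical:

```python
import asyncio
import time
import urllib.request

_TTL_SECONDS = 30.0
_cache = {"at": 0.0, "ok": None}
_lock = asyncio.Lock()

def _check_ollama() -> bool:
    """Blocking probe (placeholder endpoint and timeout)."""
    try:
        urllib.request.urlopen("http://localhost:11434/api/tags", timeout=2)
        return True
    except OSError:
        return False

async def check_ollama_cached() -> bool:
    # The lock keeps concurrent health polls from stampeding the probe.
    async with _lock:
        now = time.monotonic()
        if _cache["ok"] is None or now - _cache["at"] > _TTL_SECONDS:
            _cache["ok"] = await asyncio.to_thread(_check_ollama)
            _cache["at"] = now
        return _cache["ok"]
```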
Enforcement:
- CLAUDE.md updated with conventions 11-14 (no inline CSS, use panel macro,
use toasts, never block the event loop)
- tox lint + pre-push environments now fail on inline <style> blocks
https://claude.ai/code/session_014FQ785MQdyJQ4BAXrRSo9w
Co-authored-by: Claude <noreply@anthropic.com>

Enable Timmy to run directly on iPhone by loading a small LLM into
the browser via WebGPU (Safari 26+ / iOS 26+). No server connection
required: fully sovereign, fully offline.
New files:
- static/local_llm.js: WebLLM wrapper with model catalogue, WebGPU
detection, streaming chat, and progress callbacks
- templates/mobile_local.html: Mobile-optimized UI with model
selector, download progress, LOCAL/SERVER badge, and chat
- tests/dashboard/test_local_models.py: 31 tests covering routes,
config, template UX, JS asset, and XSS prevention
Changes:
- config.py: browser_model_enabled, browser_model_id,
  browser_model_fallback settings (sketched after this list)
- routes/mobile.py: /mobile/local page, /mobile/local-models API
  (route sketched below)
- base.html: LOCAL AI nav link
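A hedged sketch of the new settings; the commit doesn't say whether
config.py uses pydantic, a dataclass, or module globals, so a dataclass
stands in, and the default model id is an assumption:

```python
from dataclasses import dataclass

@dataclass
class BrowserModelSettings:
    # Toggle the in-browser (WebGPU) model path.
    browser_model_enabled: bool = True
    # Default catalogue entry; assumed here, any supported id works.
    browser_model_id: str = "SmolLM2-360M"
    # Fall back to server-side Ollama when WebGPU or the model
    # is unavailable.
    browser_model_fallback: bool = True
```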
Supported models: SmolLM2-360M (~200MB), Qwen2.5-0.5B (~350MB),
SmolLM2-1.7B (~1GB), Llama-3.2-1B (~700MB). Falls back to
server-side Ollama when the local model is unavailable.
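A sketch of the /mobile/local-models API, assuming a FastAPI router (the
framework isn't named here); the catalogue mirrors the model list above,
though the exact WebLLM model ids may differ:

```python
from fastapi import APIRouter

router = APIRouter(prefix="/mobile")

# Approximate download sizes as listed above.
LOCAL_MODELS = [
    {"id": "SmolLM2-360M", "size": "~200MB"},
    {"id": "Qwen2.5-0.5B", "size": "~350MB"},
    {"id": "SmolLM2-1.7B", "size": "~1GB"},
    {"id": "Llama-3.2-1B", "size": "~700MB"},
]

@router.get("/local-models")
async def local_models():
    # The browser picks a model from this list; if loading fails it
    # falls back to the server-side Ollama chat path.
    return {"models": LOCAL_MODELS, "fallback": "ollama"}
```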
https://claude.ai/code/session_01Cqkvr4sZbED7T3iDu1rwSD