Merge main into feature/model-upgrade-llama3.1 with conflict resolution

Alexander Payne
2026-02-26 22:19:44 -05:00
292 changed files with 9397 additions and 3269 deletions


@@ -30,6 +30,15 @@
# 8b ~16 GB RAM | 70b ~140 GB RAM | 405b ~810 GB RAM
# AIRLLM_MODEL_SIZE=70b
# ── Grok (xAI) — premium cloud augmentation ──────────────────────────────────
# Enable Grok as an opt-in premium backend for frontier reasoning.
# Local-first ethos is preserved — Grok only activates when explicitly enabled.
# GROK_ENABLED=false
# XAI_API_KEY=xai-...
# GROK_DEFAULT_MODEL=grok-3-fast
# GROK_MAX_SATS_PER_QUERY=200
# GROK_FREE=false
# ── L402 Lightning secrets ───────────────────────────────────────────────────
# HMAC secret for invoice verification. MUST be changed in production.
# Generate with: python3 -c "import secrets; print(secrets.token_hex(32))"
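A minimal sketch of how that HMAC secret might be used to sign and verify an invoice's payment hash (illustrative only; the real logic lives in the L402 proxy, and the helper names here are hypothetical):

```python
import hashlib
import hmac
import os

# Falls back to an insecure default so the "change in production" warning
# path stays obvious; the real code should refuse defaults in production.
SECRET = os.environ.get("L402_HMAC_SECRET", "change-me-in-prod").encode()

def sign_invoice(payment_hash: str) -> str:
    """Bind an invoice's payment hash to this server's secret."""
    return hmac.new(SECRET, payment_hash.encode(), hashlib.sha256).hexdigest()

def verify_invoice(payment_hash: str, signature: str) -> bool:
    """Constant-time comparison to avoid timing attacks on the signature."""
    return hmac.compare_digest(sign_invoice(payment_hash), signature)
```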

.gitignore (vendored, 7 lines changed)

@@ -35,6 +35,13 @@ coverage.xml
htmlcov/
reports/
# Self-modify reports (auto-generated)
data/self_modify_reports/
src/data/
# Handoff context (session-scoped)
.handoff/
# IDE
.idea/
.vscode/

AGENTS.md (377 lines changed)

@@ -1,342 +1,79 @@
# AGENTS.md — Timmy Time Development Standards for AI Agents
This file is the authoritative reference for any AI agent contributing to
this repository. Read it first. Every time.
Read [`CLAUDE.md`](CLAUDE.md) for architecture patterns and conventions.
---
## 1. Project at a Glance
**Timmy Time** is a local-first, sovereign AI agent system. No cloud. No telemetry.
Bitcoin Lightning economics baked in.
| Thing | Value |
|------------------|----------------------------------------------------|
| Language | Python 3.11+ |
| Web framework | FastAPI + Jinja2 + HTMX |
| Agent framework | Agno (wraps Ollama or AirLLM) |
| Persistence | SQLite (`timmy.db`, `data/swarm.db`) |
| Tests | pytest — must stay green |
| Entry points | `timmy`, `timmy-serve`, `self-tdd` |
| Config | pydantic-settings, reads `.env` |
| Containers | Docker — each agent can run as an isolated service |
---
---
## 2. Non-Negotiable Rules
1. **Tests must stay green.** Run `make test` before committing.
2. **No cloud dependencies.** All AI computation runs on localhost.
3. **No new top-level files without purpose.** Don't litter the root directory.
4. **Follow existing patterns** — singletons, graceful degradation, pydantic-settings config.
5. **Security defaults:** Never hard-code secrets. Warn at startup when defaults are in use.
6. **XSS prevention:** Never use `innerHTML` with untrusted content.
---
## 3. Agent Roster
Agents are divided into two tiers: **Builders** generate code and features;
**Reviewers** provide quality gates, feedback, and hardening. The Local agent
is the primary workhorse — use it as much as possible to minimise cost.
---
### 🏗️ BUILD TIER
---
### Local — Ollama (primary workhorse)
**Model:** Any — `qwen2.5-coder`, `deepseek-coder-v2`, `codellama`, or whatever
is loaded in Ollama. The owner decides the model; this agent is unrestricted.
**Cost:** Free. Runs on the host machine.
**Best for:**
- Everything. This is the default agent for all coding tasks.
- Iterative development, fast feedback loops, bulk generation
- Running as a Docker swarm worker — scales horizontally at zero marginal cost
- Experimenting with new models without changing any other code
**Conventions to follow:**
- Communicate with the coordinator over HTTP (`COORDINATOR_URL` env var)
- Register capabilities honestly so the auction system routes tasks well
- Write tests for anything non-trivial
**No restrictions.** If a model can do it, do it.
---
### Kimi (Moonshot AI)
**Model:** Moonshot large-context models.
**Cost:** Paid API.
**Best for:**
- Large context feature drops (new pages, new subsystems, new agent personas)
- Implementing roadmap items that require reading many files at once
- Generating boilerplate for new agents (Echo, Mace, Helm, Seer, Forge, Quill)
**Conventions to follow:**
- Deliver working code with accompanying tests (even if minimal)
- Match the arcane CSS theme — extend `static/style.css`
- New agents follow the `SwarmNode` + `Registry` + Docker pattern
- Lightning-gated endpoints follow the L402 pattern in `src/timmy_serve/l402_proxy.py`
**Avoid:**
- Touching CI/CD or pyproject.toml without coordinating
- Adding cloud API calls
- Removing existing tests
---
### DeepSeek (DeepSeek API)
**Model:** `deepseek-chat` (V3) or `deepseek-reasoner` (R1).
**Cost:** Near-free (~$0.14/M tokens).
**Best for:**
- Second-opinion feature generation when Kimi is busy or context is smaller
- Large refactors with reasoning traces (use R1 for hard problems)
- Code review passes before merging Kimi PRs
- Anything that doesn't need a frontier model but benefits from strong reasoning
**Conventions to follow:**
- Same conventions as Kimi
- Prefer V3 for straightforward tasks; R1 for anything requiring multi-step logic
- Submit PRs for review by Claude before merging
**Avoid:**
- Bypassing the review tier for security-sensitive modules
- Touching `src/swarm/coordinator.py` without Claude review
---
### 🔍 REVIEW TIER
---
### Claude (Anthropic)
**Model:** Claude Sonnet.
**Cost:** Paid API.
**Best for:**
- Architecture decisions and code-quality review
- Writing and fixing tests; keeping coverage green
- Updating documentation (README, AGENTS.md, inline comments)
- CI/CD, tooling, Docker infrastructure
- Debugging tricky async or import issues
- Reviewing PRs from Local, Kimi, and DeepSeek before merge
**Conventions to follow:**
- Prefer editing existing files over creating new ones
- Keep route files thin — business logic lives in the module, not the route
- Use `from config import settings` for all env-var access
- New routes go in `src/dashboard/routes/`, registered in `app.py`
- Always add a corresponding `tests/test_<module>.py`
**Avoid:**
- Large one-shot feature dumps (use Local or Kimi)
- Touching `src/swarm/coordinator.py` for security work (that's Manus's lane)
---
### Gemini (Google)
**Model:** Gemini 2.0 Flash (free tier) or Pro.
**Cost:** Free tier generous; upgrade only if needed.
**Best for:**
- Documentation, README updates, inline docstrings
- Frontend polish — HTML templates, CSS, accessibility review
- Boilerplate generation (test stubs, config files, GitHub Actions)
- Summarising large diffs for human review
**Conventions to follow:**
- Submit changes as PRs; always include a plain-English summary of what changed
- For CSS changes, test at mobile breakpoint (≤768px) before submitting
- Never modify Python business logic without Claude review
**Avoid:**
- Security-sensitive modules (that's Manus's lane)
- Changing auction or payment logic
- Large Python refactors
---
### Manus AI
**Strengths:** Precision security work, targeted bug fixes, coverage gap analysis.
**Best for:**
- Security audits (XSS, injection, secret exposure)
- Closing test coverage gaps for existing modules
- Performance profiling of specific endpoints
- Validating L402/Lightning payment flows
**Conventions to follow:**
- Scope tightly — one security issue per PR
- Every security fix must have a regression test
- Use `pytest-cov` output to identify gaps before writing new tests
- Document the vulnerability class in the PR description
**Avoid:**
- Large-scale refactors (that's Claude's lane)
- New feature work (use Local or Kimi)
- Changing agent personas or prompt content
---
## 4. Docker — Running Agents as Containers
Each agent can run as an isolated Docker container. Containers share the
`data/` volume for SQLite and communicate with the coordinator over HTTP.
`COORDINATOR_URL=http://dashboard:8000` is set by docker-compose.
```bash
make docker-build # build the image
make docker-up # start dashboard + deps
make docker-agent # spawn one agent worker (LOCAL model)
make docker-down # stop everything
make docker-logs # tail all service logs
```
### How container agents communicate
Container agents cannot use the in-memory `SwarmComms` channel. Instead they
poll the coordinator's internal HTTP API:
```
GET /internal/tasks → list tasks open for bidding
POST /internal/bids → submit a bid
```
Set `COORDINATOR_URL=http://dashboard:8000` in the container environment
(docker-compose sets this automatically).
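A worker-side sketch of this polling protocol, using only the standard library. The endpoint paths come from the docs above; the bid payload fields are assumptions, not the actual schema:

```python
import json
import os
import urllib.request

COORDINATOR_URL = os.environ.get("COORDINATOR_URL", "http://dashboard:8000")

def build_bid(task_id: str, amount_sats: int) -> dict:
    """Shape of a bid body (field names are illustrative)."""
    return {"task_id": task_id, "amount_sats": amount_sats}

def fetch_open_tasks() -> list:
    """GET /internal/tasks, tasks currently open for bidding."""
    with urllib.request.urlopen(f"{COORDINATOR_URL}/internal/tasks") as resp:
        return json.load(resp)

def submit_bid(task_id: str, amount_sats: int) -> None:
    """POST /internal/bids with a JSON body."""
    req = urllib.request.Request(
        f"{COORDINATOR_URL}/internal/bids",
        data=json.dumps(build_bid(task_id, amount_sats)).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req).close()

# A real worker would loop: poll, bid, then report results back.
```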
### Spawning a container agent from Python
```python
from swarm.docker_runner import DockerAgentRunner
runner = DockerAgentRunner(coordinator_url="http://dashboard:8000")
info = runner.spawn("Echo", image="timmy-time:latest")
runner.stop(info["container_id"])
```
---
## 5. Architecture Patterns
### Singletons (module-level instances)
```python
from dashboard.store import message_log
from notifications.push import notifier
from ws_manager.handler import ws_manager
from timmy_serve.payment_handler import payment_handler
from swarm.coordinator import coordinator
```
### Config access
```python
from config import settings
url = settings.ollama_url # never os.environ.get() directly in route files
```
### HTMX pattern
```python
return templates.TemplateResponse(
    "partials/chat_message.html",
    {"request": request, "role": "user", "content": message},
)
```
### Graceful degradation
```python
try:
    result = await some_optional_service()
except Exception:
    result = fallback_value  # log, don't crash
```
### Tests
- All heavy deps (`agno`, `airllm`, `pyttsx3`) are stubbed in `tests/conftest.py`
- Use `pytest.fixture` for shared state; prefer function scope
- Use `TestClient` from `fastapi.testclient` for route tests
- No real Ollama required — mock `agent.run()`
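A minimal sketch of the conftest stubbing approach described above:

```python
# tests/conftest.py (sketch)
import sys
import types

# Register empty stand-in modules before any test imports them, so the
# suite runs on machines without the heavy packages installed.
for name in ("agno", "airllm", "pyttsx3"):
    sys.modules.setdefault(name, types.ModuleType(name))
```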
---
## 6. Running Locally
```bash
make install # create venv + install dev deps
make test # run full test suite
make dev # start dashboard (http://localhost:8000)
make watch # self-TDD watchdog (60s poll)
make test-cov # coverage report
```
Or with Docker:
```bash
make docker-build # build image
make docker-up # start dashboard
make docker-agent # add a Local agent worker
```
---
## 7. Roadmap (v2 → v3)
**v2.0.0 — Exodus (in progress)**
- [x] Persistent swarm state across restarts
- [x] Docker infrastructure for agent containers
- [x] Implement Echo, Mace, Helm, Seer, Forge, Quill persona agents (+ Pixel, Lyra, Reel)
- [x] MCP tool integration for Timmy
- [ ] Real LND gRPC backend for `PaymentHandler` (replace mock)
- [ ] Marketplace frontend — wire `/marketplace` route to real data
**v3.0.0 — Revelation (planned)**
- [ ] Bitcoin Lightning treasury (agent earns and spends sats autonomously)
- [ ] Single `.app` bundle for macOS (no Python install required)
- [ ] Federation — multiple Timmy instances discover and bid on each other's tasks
- [ ] Redis pub/sub replacing SQLite polling for high-throughput swarms
---
## 8. File Conventions
| Pattern | Convention |
|---------|-----------|
| New route | `src/dashboard/routes/<name>.py` + register in `app.py` |
| New template | `src/dashboard/templates/<name>.html` extends `base.html` |
| New partial | `src/dashboard/templates/partials/<name>.html` |
| New subsystem | Add to existing `src/<package>/` — see module map in CLAUDE.md |
| New test | `tests/<module>/test_<feature>.py` (mirror source structure) |
| Secrets | Via `config.settings` + startup warning if default |
| DB files | Project root or `data/` — never in `src/` |
| Docker | One service per agent type in `docker-compose.yml` |
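The secrets convention above can be sketched as a small helper. This is illustrative; the repo routes secret access through `config.settings`, and the helper name here is hypothetical:

```python
import os
import warnings

_DEFAULT = "change-me"

def load_secret(var: str) -> str:
    """Read a secret from the environment, warning loudly on the default."""
    value = os.environ.get(var, _DEFAULT)
    if value == _DEFAULT:
        warnings.warn(f"{var} is using the insecure default, set it in .env")
    return value
```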
---

CLAUDE.md (252 lines changed)

@@ -1,77 +1,9 @@
# CLAUDE.md — AI Assistant Guide for Timmy Time
This file provides context for AI assistants (Claude Code, Copilot, etc.)
working in this repository. Read this before making any changes.
**Tech stack:** Python 3.11+ · FastAPI · Jinja2 + HTMX · SQLite · Agno ·
Ollama · pydantic-settings · WebSockets · Docker
For multi-agent development standards and agent-specific conventions, see
[`AGENTS.md`](AGENTS.md).
---
## Project Summary
**Timmy Time** is a local-first, sovereign AI agent system with a browser-based
Mission Control dashboard. No cloud AI — all inference runs on localhost via
Ollama (or AirLLM for large models). Bitcoin Lightning economics are built in
for API access gating.
---
## Quick Reference Commands
```bash
# Setup
make install # Create venv + install dev deps
cp .env.example .env # Configure environment
# Development
make dev # Start dashboard at http://localhost:8000
make test # Run full test suite (no Ollama needed)
make test-cov # Tests + coverage report (terminal + XML)
make lint # Run ruff or flake8
# Docker
make docker-build # Build timmy-time:latest image
make docker-up # Start dashboard container
make docker-agent # Spawn one agent worker
make docker-down # Stop all containers
```
---
## Project Layout
```
src/
config.py # Central pydantic-settings (all env vars)
timmy/ # Core agent: agent.py, backends.py, cli.py, prompts.py
dashboard/ # FastAPI app + routes + Jinja2 templates
app.py # App factory, lifespan, router registration
store.py # In-memory MessageLog singleton
routes/ # One file per route group (agents, health, swarm, etc.)
templates/ # base.html + page templates + partials/
swarm/ # Multi-agent coordinator, registry, bidder, tasks, comms
coordinator.py # Central swarm orchestrator (security-sensitive)
docker_runner.py # Spawn agents as Docker containers
timmy_serve/ # L402 Lightning proxy, payment handler, TTS, CLI
spark/ # Intelligence engine — events, predictions, advisory
creative/ # Creative director + video assembler pipeline
tools/ # Git, image, music, video tools for persona agents
lightning/ # Lightning backend abstraction (mock + LND)
agent_core/ # Substrate-agnostic agent interface
voice/ # NLU intent detection (regex-based, local)
ws_manager/ # WebSocket connection manager (ws_manager singleton)
notifications/ # Push notification store (notifier singleton)
shortcuts/ # Siri Shortcuts API endpoints
telegram_bot/ # Telegram bridge
self_tdd/ # Continuous test watchdog
tests/ # One test_*.py per module, all mocked
static/ # style.css + bg.svg (dark arcane theme)
docs/ # GitHub Pages landing site
```
For agent roster and conventions, see [`AGENTS.md`](AGENTS.md).
---
@@ -79,32 +11,22 @@ docs/ # GitHub Pages landing site
### Config access
All configuration goes through `src/config.py` using pydantic-settings:
```python
from config import settings
url = settings.ollama_url # never use os.environ.get() directly in app code
```
Environment variables are read from `.env` automatically. See `.env.example` for
all available settings.
### Singletons
Core services are module-level singleton instances imported directly:
```python
from dashboard.store import message_log
from timmy_serve.payment_handler import payment_handler
from infrastructure.notifications.push import notifier
from infrastructure.ws_manager.handler import ws_manager
from swarm.coordinator import coordinator
```
### HTMX response pattern
Routes return Jinja2 template partials for HTMX requests:
```python
return templates.TemplateResponse(
    "partials/chat_message.html",
@@ -115,147 +37,41 @@ return templates.TemplateResponse(
### Graceful degradation
Optional services (Ollama, Redis, AirLLM) degrade gracefully — log the error,
return a fallback, never crash:
```python
try:
    result = await some_optional_service()
except Exception:
    result = fallback_value
```
### Route registration
New routes go in `src/dashboard/routes/<name>.py`, then register the router in
`src/dashboard/app.py`:
```python
from dashboard.routes.<name> import router as <name>_router
app.include_router(<name>_router)
```
---
## Testing
### Running tests
```bash
make test           # Quick run (pytest -q --tb=short; no Ollama needed)
make test-cov # With coverage (term-missing + XML)
make test-cov-html # With HTML coverage report
```
No Ollama or external services needed — all heavy dependencies are mocked.
### Test conventions
- **One test file per module:** `tests/test_<module>.py`
- **Stubs in conftest:** `agno`, `airllm`, `pyttsx3`, `telegram`, and `discord`
  are stubbed in `tests/conftest.py` using `sys.modules.setdefault()` so tests
  run without those packages installed
- **Test mode:** `TIMMY_TEST_MODE=1` is set automatically in conftest to disable
auto-spawning of persona agents during tests
- **FastAPI testing:** Use the `client` fixture (wraps `TestClient`)
- **Database isolation:** SQLite files in `data/` are cleaned between tests;
coordinator state is reset via autouse fixtures
- **Async:** `asyncio_mode = "auto"` in pytest config — async test functions
are detected automatically
- **Coverage threshold:** CI fails if coverage drops below 60%
(`fail_under = 60` in `pyproject.toml`)
### Adding a new test
```python
# tests/test_my_feature.py
from fastapi.testclient import TestClient
def test_my_endpoint(client):
    response = client.get("/my-endpoint")
    assert response.status_code == 200
```
---
## CI/CD
GitHub Actions workflow (`.github/workflows/tests.yml`):
- Runs on every push and pull request to all branches
- Python 3.11, installs `.[dev]` dependencies
- Runs pytest with coverage + JUnit XML output
- Publishes test results as PR comments and check annotations
- Uploads coverage XML as a downloadable artifact (14-day retention)
---
## Key Conventions
1. **Tests must stay green.** Run `make test` before committing.
2. **No cloud AI dependencies.** All inference runs on localhost.
3. **No new top-level files without purpose.** Keep the root directory clean.
4. **Follow existing patterns** — singletons, graceful degradation,
pydantic-settings config.
5. **Security defaults:** Never hard-code secrets. Warn at startup when using
default values.
6. **XSS prevention:** Never use `innerHTML` with untrusted content.
7. **Keep routes thin** — business logic lives in the module, not the route.
8. **Prefer editing existing files** over creating new ones.
9. **Use `from config import settings`** for all env-var access.
10. **Every new module gets a test:** `tests/test_<module>.py`.
---
## Environment Variables
Key variables (full list in `.env.example`):
| Variable | Default | Purpose |
|----------|---------|---------|
| `OLLAMA_URL` | `http://localhost:11434` | Ollama host |
| `OLLAMA_MODEL` | `llama3.1:8b-instruct` | Model served by Ollama |
| `DEBUG` | `false` | Enable `/docs` and `/redoc` |
| `TIMMY_MODEL_BACKEND` | `ollama` | `ollama` / `airllm` / `auto` |
| `AIRLLM_MODEL_SIZE` | `70b` | `8b` / `70b` / `405b` |
| `L402_HMAC_SECRET` | *(change in prod)* | HMAC signing for invoices |
| `L402_MACAROON_SECRET` | *(change in prod)* | Macaroon signing |
| `LIGHTNING_BACKEND` | `mock` | `mock` / `lnd` |
| `SPARK_ENABLED` | `true` | Enable Spark intelligence engine |
| `TELEGRAM_TOKEN` | *(empty)* | Telegram bot token |
---
## Persistence
- `timmy.db` — Agno agent memory (SQLite, project root)
- `data/swarm.db` — Swarm registry + tasks (SQLite, `data/` directory)
- All `.db` files are gitignored — never commit database files
---
## Docker
Containers share a `data/` volume for SQLite. Container agents communicate with
the coordinator over HTTP (not in-memory `SwarmComms`):
```
GET /internal/tasks → list tasks open for bidding
POST /internal/bids → submit a bid
```
`COORDINATOR_URL=http://dashboard:8000` is set automatically by docker-compose.
---
@@ -265,3 +81,35 @@ POST /internal/bids → submit a bid
- `src/timmy_serve/l402_proxy.py` — Lightning payment gating
- `src/lightning/` — payment backend abstraction
- Any file handling secrets or authentication tokens
---
## Entry Points
| Command | Module | Purpose |
|---------|--------|---------|
| `timmy` | `src/timmy/cli.py` | Chat, think, status |
| `timmy-serve` | `src/timmy_serve/cli.py` | L402-gated API server (port 8402) |
| `self-tdd` | `src/self_coding/self_tdd/watchdog.py` | Continuous test watchdog |
| `self-modify` | `src/self_coding/self_modify/cli.py` | Self-modification CLI |
---
## Module Map (14 packages)
| Package | Purpose |
|---------|---------|
| `timmy/` | Core agent, personas, agent interface, semantic memory |
| `dashboard/` | FastAPI web UI, routes, templates |
| `swarm/` | Multi-agent coordinator, task queue, work orders |
| `self_coding/` | Self-modification, test watchdog, upgrade queue |
| `creative/` | Media generation, MCP tools |
| `infrastructure/` | WebSocket, notifications, events, LLM router |
| `integrations/` | Discord, Telegram, Siri Shortcuts, voice NLU |
| `lightning/` | L402 payment gating (security-sensitive) |
| `mcp/` | MCP tool registry and discovery |
| `spark/` | Event capture and advisory engine |
| `hands/` | 6 autonomous Hand agents |
| `scripture/` | Biblical text integration |
| `timmy_serve/` | L402-gated API server |
| `config.py` | Pydantic settings (foundation for all modules) |

MEMORY.md

@@ -7,34 +7,20 @@
## Current Status
**Agent State:** Operational
**Mode:** Development
**Model:** llama3.2 (local via Ollama)
**Backend:** Ollama on localhost:11434
**Dashboard:** http://localhost:8000
**Active Tasks:** 0
**Pending Decisions:** None
---
## Standing Rules
1. **Sovereignty First** — No cloud dependencies
2. **Local-Only Inference** — Ollama on localhost
3. **Privacy by Design** — Telemetry disabled
4. **Tool Minimalism** — Use tools only when necessary
5. **Memory Discipline** — Write handoffs at session end
6. **Clean Output** — Never show JSON, tool calls, or function syntax
---
## System Architecture
**Memory Tiers:**
- Tier 1 (Hot): This file (MEMORY.md) — always in context
- Tier 2 (Vault): memory/ directory — notes, profiles, AARs
- Tier 3 (Semantic): Vector search over vault content
**Swarm Agents:** Echo (research), Forge (code), Seer (data)
**Dashboard Pages:** Briefing, Swarm, Spark, Market, Tools, Events, Ledger, Memory, Router, Upgrades, Creative
---
@@ -42,16 +28,13 @@
| Agent | Role | Status |
|-------|------|--------|
| Timmy | Core AI | Active |
| Echo | Research & Summarization | Active |
| Forge | Coding & Debugging | Active |
| Seer | Analytics & Prediction | Active |
---
## User Profile
**Name:** (not set)
**Interests:** (to be learned)
---
@@ -64,8 +47,8 @@
## Pending Actions
- [ ] Learn user's name and preferences
---
*Prune date: 2026-02-25*

README.md (328 lines changed)

@@ -2,109 +2,161 @@
[![Tests](https://github.com/AlexanderWhitestone/Timmy-time-dashboard/actions/workflows/tests.yml/badge.svg)](https://github.com/AlexanderWhitestone/Timmy-time-dashboard/actions/workflows/tests.yml)
A local-first, sovereign AI agent system. Talk to Timmy, watch his swarm, gate
API access with Bitcoin Lightning — all from a browser, no cloud AI required.
**[Live Docs →](https://alexanderwhitestone.github.io/Timmy-time-dashboard/)**
---
## Quick Start
```bash
git clone https://github.com/AlexanderWhitestone/Timmy-time-dashboard.git
cd Timmy-time-dashboard
make install # create venv + install deps
cp .env.example .env # configure environment
ollama serve # separate terminal
ollama pull llama3.1:8b-instruct # Required for reliable tool calling
make dev # http://localhost:8000
make test # no Ollama needed
```
**Note:** llama3.1:8b-instruct is used instead of llama3.2 because it is
specifically fine-tuned for reliable tool/function calling; llama3.2 (3B)
consistently hallucinated tool output in testing.
Fallback: qwen2.5:14b if llama3.1:8b-instruct is not available.
---
## What's Here
| Subsystem | Description |
|-----------|-------------|
| **Timmy Agent** | Agno-powered agent (Ollama default, AirLLM optional for 70B/405B) |
| **Mission Control** | FastAPI + HTMX dashboard — chat, health, swarm, marketplace |
| **Tools** | Git, image, music, and video tools accessible by persona agents |
| **Voice** | NLU intent detection + TTS (pyttsx3, no cloud) |
| **WebSocket** | Real-time swarm live feed |
| **Mobile** | Responsive layout with full iOS safe-area and touch support |
| **Telegram** | Bridge Telegram messages to Timmy |
| **Swarm** | Multi-agent coordinator — spawn agents, post tasks, Lightning auctions |
| **L402 / Lightning** | Bitcoin Lightning payment gating for API access |
| **Spark** | Event capture, predictions, memory consolidation, advisory |
| **Creative Studio** | Multi-persona pipeline — image, music, video generation |
| **Hands** | 6 autonomous scheduled agents — Oracle, Sentinel, Scout, Scribe, Ledger, Weaver |
| **CLI** | `timmy`, `timmy-serve`, `self-tdd` entry points |
| **Self-Coding** | Codebase-aware self-modification with git safety |
| **Integrations** | Telegram bridge, Siri Shortcuts, voice NLU, mobile layout |

**Full test suite, 100% passing.**
---
## Commands
```bash
make dev          # start dashboard (http://localhost:8000)
make test         # run all tests
make test-cov     # tests + coverage report
make lint         # run ruff/flake8
make docker-up    # start via Docker
make help         # see all commands
```
**CLI tools:** `timmy`, `timmy-serve`, `self-tdd`, `self-modify`
---
## Documentation
| Document | Purpose |
|----------|---------|
| [CLAUDE.md](CLAUDE.md) | AI assistant development guide |
| [AGENTS.md](AGENTS.md) | Multi-agent development standards |
| [.env.example](.env.example) | Configuration reference |
| [docs/](docs/) | Architecture docs, ADRs, audits |
---
## Configuration
```bash
cp .env.example .env
```
| Variable | Default | Purpose |
|----------|---------|---------|
| `OLLAMA_URL` | `http://localhost:11434` | Ollama host |
| `OLLAMA_MODEL` | `llama3.1:8b-instruct` | Model for tool calling. Use llama3.1:8b-instruct for reliable tool use; fallback to qwen2.5:14b |
| `DEBUG` | `false` | Enable `/docs` and `/redoc` |
| `TIMMY_MODEL_BACKEND` | `ollama` | `ollama` \| `airllm` \| `auto` |
| `AIRLLM_MODEL_SIZE` | `70b` | `8b` \| `70b` \| `405b` |
| `L402_HMAC_SECRET` | *(default — change in prod)* | HMAC signing key for macaroons |
| `L402_MACAROON_SECRET` | *(default — change in prod)* | Macaroon secret |
| `LIGHTNING_BACKEND` | `mock` | `mock` (default, fully implemented) \| `lnd` (scaffolded, not yet functional) |
---
## Architecture
```
Browser / Phone
│ HTTP + HTMX + WebSocket
┌─────────────────────────────────────────┐
│ FastAPI (dashboard.app) │
│ routes: agents, health, swarm, │
│ marketplace, voice, mobile │
└───┬─────────────┬──────────┬────────────┘
│ │ │
▼ ▼ ▼
Jinja2 Timmy Swarm
Templates Agent Coordinator
(HTMX) │ ├─ Registry (SQLite)
├─ Ollama ├─ AuctionManager (L402 bids)
└─ AirLLM ├─ SwarmComms (Redis / in-memory)
└─ SwarmManager (subprocess)
├── Voice NLU + TTS (pyttsx3, local)
├── WebSocket live feed (ws_manager)
├── L402 Lightning proxy (macaroon + invoice)
├── Push notifications (local + macOS native)
└── Siri Shortcuts API endpoints
Persistence: timmy.db (Agno memory), data/swarm.db (registry + tasks)
External: Ollama :11434, optional Redis, optional LND gRPC
```
---
## Quickstart
```bash
# 1. Clone
git clone https://github.com/AlexanderWhitestone/Timmy-time-dashboard.git
cd Timmy-time-dashboard

# 2. Install
make install
# or manually: python3 -m venv .venv && source .venv/bin/activate && pip install -e ".[dev]"

# 3. Start Ollama (separate terminal)
ollama serve
ollama pull llama3.1:8b-instruct  # Required for reliable tool calling
# Note: llama3.1:8b-instruct is used instead of llama3.2 because it is
# specifically fine-tuned for reliable tool/function calling.
# llama3.2 (3B) was found to hallucinate tool output consistently in testing.
# Fallback: qwen2.5:14b if llama3.1:8b-instruct is not available.

# 4. Launch dashboard
make dev
# opens at http://localhost:8000
```

---

## Project Layout

```
src/
  config.py          # pydantic-settings — all env vars live here
  timmy/             # Core agent (agent.py, backends.py, cli.py, prompts.py)
  hands/             # Autonomous scheduled agents (registry, scheduler, runner)
  dashboard/         # FastAPI app, routes, Jinja2 templates
  swarm/             # Multi-agent: coordinator, registry, bidder, tasks, comms
  timmy_serve/       # L402 proxy, payment handler, TTS, serve CLI
  spark/             # Intelligence engine — events, predictions, advisory
  creative/          # Creative director + video assembler pipeline
  tools/             # Git, image, music, video tools for persona agents
  lightning/         # Lightning backend abstraction (mock + LND)
  agent_core/        # Substrate-agnostic agent interface
  voice/             # NLU intent detection
  ws_manager/        # WebSocket connection manager
  notifications/     # Push notification store
  shortcuts/         # Siri Shortcuts endpoints
  telegram_bot/      # Telegram bridge
  self_tdd/          # Continuous test watchdog
hands/               # Hand manifests — oracle/, sentinel/, etc.
tests/               # one test file per module, all mocked
static/style.css     # Dark mission-control theme (JetBrains Mono)
docs/                # GitHub Pages landing page
AGENTS.md            # AI agent development standards ← read this
.env.example         # Environment variable reference
Makefile             # Common dev commands
```
---
## Common commands
```bash
make test # run all tests (no Ollama needed)
make test-cov # test + coverage report
make dev # start dashboard (http://localhost:8000)
make watch # self-TDD watchdog (60s poll, alerts on regressions)
```
Or with the bootstrap script (creates venv, tests, watchdog, server in one shot):
```bash
bash scripts/activate_self_tdd.sh
bash scripts/activate_self_tdd.sh --big-brain  # also installs AirLLM
```
---
## CLI
```bash
timmy chat "What is sovereignty?"
timmy think "Bitcoin and self-custody"
timmy status
timmy-serve start # L402-gated API server (port 8402)
timmy-serve invoice # generate a Lightning invoice
timmy-serve status
```
---
## Mobile Access
The dashboard is fully mobile-optimized (iOS safe area, 44px touch targets, 16px
input to prevent zoom, momentum scroll).
@@ -162,7 +214,7 @@ channel = "telegram"
---
## AirLLM — Big Brain Backend
Run 70B or 405B models locally with no GPU, using AirLLM's layer-by-layer loading.
Apple Silicon uses MLX automatically.
@@ -188,121 +240,39 @@ AIRLLM_MODEL_SIZE=70b
---
## Troubleshooting
**`ollama: command not found`** — install with `brew install ollama` or from ollama.com
**`connection refused` in chat** — run `ollama serve` in a separate terminal
**`ModuleNotFoundError: No module named 'sqlalchemy'`** — re-run install to pick up the updated `agno[sqlite]` dependency:
`make install`
**`ModuleNotFoundError: No module named 'dashboard'`** — activate the venv:
`source .venv/bin/activate && pip install -e ".[dev]"`
**Health panel shows DOWN** — Ollama isn't running; chat still works but returns
the offline error message
**L402 startup warnings** — set `L402_HMAC_SECRET` and `L402_MACAROON_SECRET` in
`.env` to silence them (required for production)
---
## For AI agents contributing to this repo
Read [`AGENTS.md`](AGENTS.md). It covers per-agent assignments, architecture
patterns, coding conventions, and the v2→v3 roadmap.
---
## Roadmap
| Version | Name | Status | Milestone |
|---------|------------|-------------|-----------|
| 1.0.0 | Genesis | ✅ Complete | Agno + Ollama + SQLite + Dashboard |
| 2.0.0 | Exodus | 🔄 In progress | Swarm + L402 + Voice + Marketplace + Hands |
| 3.0.0 | Revelation | 📋 Planned | Lightning treasury + single `.app` bundle |
REFACTORING_PLAN.md Normal file
@@ -0,0 +1,481 @@
# Timmy Time — Architectural Refactoring Plan
**Author:** Claude (VP Engineering review)
**Date:** 2026-02-26
**Branch:** `claude/plan-repo-refactoring-hgskF`
---
## Executive Summary
The Timmy Time codebase has grown to **53K lines of Python** across **272
files** (169 source + 103 test), **28 modules** in `src/`, **27 route files**,
**49 templates**, **90 test files**, and **87KB of root-level markdown**. It
works, but it's burning tokens, slowing down test runs, and making it hard to
reason about change impact.
This plan proposes **6 phases** of refactoring, ordered by impact and risk. Each
phase is independently valuable — you can stop after any phase and still be
better off.
---
## The Problems
### 1. Monolith sprawl
28 modules in `src/` with no grouping. Eleven modules aren't even included in
the wheel build (`agents`, `events`, `hands`, `mcp`, `memory`, `router`,
`self_coding`, `task_queue`, `tools`, `upgrades`, `work_orders`). Some are
used by the dashboard routes but forgotten in `pyproject.toml`.
### 2. Dashboard is the gravity well
The dashboard has 27 route files (4,562 lines), 49 templates, and has become
the integration point for everything. Every new feature = new route file + new
template + new test file. This doesn't scale.
### 3. Documentation entropy
10 root-level `.md` files (87KB). README is 303 lines, CLAUDE.md is 267 lines,
AGENTS.md is 342 lines — with massive content duplication between them. Plus
PLAN.md, WORKSET_PLAN.md, WORKSET_PLAN_PHASE2.md, MEMORY.md,
IMPLEMENTATION_SUMMARY.md, QUALITY_ANALYSIS.md, QUALITY_REVIEW_REPORT.md.
Human eyes glaze over. AI assistants waste tokens reading redundant info.
### 4. Test sprawl — and a skeleton problem
97 test files, 19,600 lines — but **61 of those files (63%) are empty
skeletons** with zero actual test functions. Only 36 files have real tests
containing 471 test functions total. Many "large" test files (like
`test_scripture.py` at 901 lines, `test_router_cascade.py` at 523 lines) are
infrastructure-only — class definitions, imports, fixtures, but no assertions.
The functional/E2E directory (`tests/functional/`) has 7 files and 0 working
tests. Tests are flat in `tests/` with no organization. Running the full suite
means loading every module, every mock, every fixture even when you only
changed one thing.
### 5. Unclear project boundaries
Is this one project or several? The `timmy` CLI, `timmy-serve` API server,
`self-tdd` watchdog, and `self-modify` CLI are four separate entry points that
could be four separate packages. The `creative` extra needs PyTorch. The
`lightning` module is a standalone payment system. These shouldn't live in the
same test run.
### 6. Wheel build doesn't match reality
`pyproject.toml` includes 17 modules but `src/` has 28. The missing 11 modules
are used by code that IS included (dashboard routes import from `hands`,
`mcp`, `memory`, `work_orders`, etc.). The wheel would break at runtime.
### 7. Dependency coupling through dashboard
The dashboard is the hub that imports from 20+ modules. The dependency graph
flows inward: `config` is the foundation (22 modules depend on it), `mcp` is
widely used (12+ importers), `swarm` is referenced by 15+ modules. No true
circular dependencies exist (the `timmy ↔ swarm` relationship uses lazy
imports), but the dashboard pulls in everything, so changing any module can
break the dashboard routes.
### 8. Conftest does too much
`tests/conftest.py` has 4 autouse fixtures that run on **every single test**:
reset message log, reset coordinator state, clean database, cleanup event
loops. Many tests don't need any of these. This adds overhead to the test
suite and couples all tests to the swarm coordinator.
---
## Phase 1: Documentation Cleanup (Low Risk, High Impact)
**Goal:** Cut root markdown from 87KB to ~20KB. Make README human-readable.
Eliminate token waste.
### 1.1 Slim the README
Cut README.md from 303 lines to ~80 lines:
```
# Timmy Time — Mission Control
Local-first sovereign AI agent system. Browser dashboard, Ollama inference,
Bitcoin Lightning economics. No cloud AI.
## Quick Start
make install && make dev → http://localhost:8000
## What's Here
- Timmy Agent (Ollama/AirLLM)
- Mission Control Dashboard (FastAPI + HTMX)
- Swarm Coordinator (multi-agent auctions)
- Lightning Payments (L402 gating)
- Creative Studio (image/music/video)
- Self-Coding (codebase-aware self-modification)
## Commands
make dev / make test / make docker-up / make help
## Documentation
- Development guide: CLAUDE.md
- Architecture: docs/architecture-v2.md
- Agent conventions: AGENTS.md
- Config reference: .env.example
```
### 1.2 De-duplicate CLAUDE.md
Remove content that duplicates README or AGENTS.md. CLAUDE.md should only
contain what AI assistants need that isn't elsewhere:
- Architecture patterns (singletons, config, HTMX, graceful degradation)
- Testing conventions (conftest, fixtures, stubs)
- Security-sensitive areas
- Entry points table
Target: 267 → ~130 lines.
### 1.3 Archive or delete temporary docs
| File | Action |
|------|--------|
| `MEMORY.md` | DELETE — session context, not permanent docs |
| `WORKSET_PLAN.md` | DELETE — use GitHub Issues |
| `WORKSET_PLAN_PHASE2.md` | DELETE — use GitHub Issues |
| `PLAN.md` | MOVE to `docs/PLAN_ARCHIVE.md` |
| `IMPLEMENTATION_SUMMARY.md` | MOVE to `docs/IMPLEMENTATION_ARCHIVE.md` |
| `QUALITY_ANALYSIS.md` | CONSOLIDATE with `docs/QUALITY_AUDIT.md` |
| `QUALITY_REVIEW_REPORT.md` | CONSOLIDATE with `docs/QUALITY_AUDIT.md` |
**Result:** Root directory goes from 10 `.md` files to 3 (README, CLAUDE,
AGENTS).
### 1.4 Clean up .handoff/
The `.handoff/` directory (CHECKPOINT.md, CONTINUE.md, TODO.md, scripts) is
session-scoped context. Either gitignore it or move to `docs/handoff/`.
---
## Phase 2: Module Consolidation (Medium Risk, High Impact)
**Goal:** Reduce 28 modules to ~12 by merging small, related modules into
coherent packages. This directly reduces cognitive load and token consumption.
### 2.1 Module structure (implemented)
```
src/ # 14 packages (was 28)
  config.py          # Pydantic settings (foundation)
  timmy/             # Core agent + agents/ + agent_core/ + memory/
  dashboard/         # FastAPI web UI (22 route files)
  swarm/             # Coordinator + task_queue/ + work_orders/
  self_coding/       # Git safety + self_modify/ + self_tdd/ + upgrades/
  creative/          # Media generation + tools/
  infrastructure/    # ws_manager/ + notifications/ + events/ + router/
  integrations/      # chat_bridge/ + telegram_bot/ + shortcuts/ + voice/
  lightning/         # L402 payment gating (standalone, security-sensitive)
  mcp/               # MCP tool registry and discovery
  spark/             # Event capture and advisory
  hands/             # 6 autonomous Hand agents
  scripture/         # Biblical text integration
  timmy_serve/       # L402-gated API server
```
### 2.2 Dashboard route consolidation
27 route files → ~12 by grouping related routes:
| Current files | Merged into |
|--------------|-------------|
| `agents.py`, `briefing.py` | `agents.py` |
| `swarm.py`, `swarm_internal.py`, `swarm_ws.py` | `swarm.py` |
| `voice.py`, `voice_enhanced.py` | `voice.py` |
| `mobile.py`, `mobile_test.py` | `mobile.py` (delete test page) |
| `self_coding.py`, `self_modify.py` | `self_coding.py` |
| `tasks.py`, `work_orders.py` | `tasks.py` |
`mobile_test.py` (257 lines) is a test page route that's excluded from
coverage — it should not ship in production.
### 2.3 Fix the wheel build
Update `pyproject.toml` `[tool.hatch.build.targets.wheel]` to include all
modules that are actually imported. Currently 11 modules are missing from the
build manifest.
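A hedged sketch of the shape of that fix — the module names below are examples drawn from the lists above, not the full manifest:

```toml
# Hypothetical pyproject.toml fragment — list every module the code imports.
[tool.hatch.build.targets.wheel]
packages = [
    "src/timmy",
    "src/dashboard",
    "src/hands",         # previously missing from the manifest
    "src/mcp",           # previously missing
    "src/memory",        # previously missing
    "src/work_orders",   # previously missing
]
```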
---
## Phase 3: Test Reorganization (Medium Risk, Medium Impact)
**Goal:** Organize tests to match module structure, enable selective test runs,
reduce full-suite runtime.
### 3.1 Mirror source structure in tests
```
tests/
  conftest.py              # Global fixtures only
  timmy/                   # Tests for timmy/ module
    conftest.py            # Timmy-specific fixtures
    test_agent.py
    test_backends.py
    test_cli.py
    test_orchestrator.py
    test_personas.py
    test_memory.py
  dashboard/
    conftest.py            # Dashboard fixtures (client fixture)
    test_routes_agents.py
    test_routes_swarm.py
    ...
  swarm/
    test_coordinator.py
    test_tasks.py
    test_work_orders.py
  integrations/
    test_chat_bridge.py
    test_telegram.py
    test_voice.py
  self_coding/
    test_git_safety.py
    test_codebase_indexer.py
    test_self_modify.py
    ...
```
### 3.2 Add pytest marks for selective execution
```toml
# pyproject.toml
[tool.pytest.ini_options]
markers = [
    "unit: Unit tests (fast, no I/O)",
    "integration: Integration tests (may use SQLite)",
    "dashboard: Dashboard route tests",
    "swarm: Swarm coordinator tests",
    "slow: Tests that take >1 second",
]
```
Usage:
```bash
make test # Run all tests
pytest -m unit # Fast unit tests only
pytest -m dashboard # Just dashboard tests
pytest tests/swarm/ # Just swarm module tests
pytest -m "not slow" # Skip slow tests
```
### 3.3 Audit and clean skeleton test files
61 test files are empty skeletons — they have imports, class definitions, and
fixture setup but **zero test functions**. These add import overhead and create
a false sense of coverage. For each skeleton file:
1. If the module it tests is stable and well-covered elsewhere → **delete it**
2. If the module genuinely needs tests → **implement the tests** or file an
issue
3. If it's a duplicate (e.g., both `test_swarm.py` and
`test_swarm_integration.py` exist) → **consolidate**
Notable skeletons to address:
- `test_scripture.py` (901 lines, 0 tests) — massive infrastructure, no assertions
- `test_router_cascade.py` (523 lines, 0 tests) — same pattern
- `test_agent_core.py` (457 lines, 0 tests)
- `test_self_modify.py` (451 lines, 0 tests)
- All 7 files in `tests/functional/` (0 working tests)
### 3.4 Split genuinely oversized test files
For files that DO have tests but are too large:
- `test_task_queue.py` (560 lines, 30 tests) → split by feature area
- `test_mobile_scenarios.py` (339 lines, 36 tests) → split by scenario group
Rule of thumb: No test file over 400 lines.
---
## Phase 4: Configuration & Build Cleanup (Low Risk, Medium Impact)
### 4.1 Clean up pyproject.toml
- Fix the wheel include list to match actual imports
- Consider whether 4 separate CLI entry points belong in one package
- Add `[project.urls]` for documentation, repository links
- Review dependency pins — some are very loose (`>=1.0.0`)
### 4.2 Consolidate Docker files
4 docker-compose variants (default, dev, prod, test) is a lot. Consider:
- `docker-compose.yml` (base)
- `docker-compose.override.yml` (dev — auto-loaded by Docker)
- `docker-compose.prod.yml` (production only)
### 4.3 Clean up root directory
Non-essential root files to move or delete:
| File | Action |
|------|--------|
| `apply_security_fixes.py` | Move to `scripts/` or delete if one-time |
| `activate_self_tdd.sh` | Move to `scripts/` |
| `coverage.xml` | Gitignore (CI artifact) |
| `data/self_modify_reports/` | Gitignore the contents |
---
## Phase 5: Consider Package Extraction (High Risk, High Impact)
**Goal:** Evaluate whether some modules should be separate packages/repos.
### 5.1 Candidates for extraction
| Module | Why extract | Dependency direction |
|--------|------------|---------------------|
| `lightning/` | Standalone payment system, security-sensitive | Dashboard imports lightning |
| `creative/` | Needs PyTorch, very different dependency profile | Dashboard imports creative |
| `timmy-serve` | Separate process (port 8402), separate purpose | Shares config + timmy agent |
| `self_coding/` + `self_modify/` | Self-contained self-modification system | Dashboard imports for routes |
### 5.2 Monorepo approach (recommended over multi-repo)
If splitting, use a monorepo with namespace packages:
```
packages/
  timmy-core/        # Agent + memory + CLI
  timmy-dashboard/   # FastAPI app
  timmy-swarm/       # Coordinator + tasks
  timmy-lightning/   # Payment system
  timmy-creative/    # Creative tools (heavy deps)
```
Each package gets its own `pyproject.toml`, test suite, and can be installed
independently. But they share the same repo, CI, and release cycle.
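As a sketch of what a per-package manifest could look like (names and versions are hypothetical):

```toml
# packages/timmy-swarm/pyproject.toml — illustrative per-package manifest
[project]
name = "timmy-swarm"
version = "2.0.0"
dependencies = ["timmy-core>=2.0"]

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
```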
**However:** This is high effort and may not be worth it unless the team
grows or the dependency profiles diverge further. Consider this only after
Phases 1-4 are done and the pain persists.
---
## Phase 6: Token Optimization for AI Development (Low Risk, High Impact)
**Goal:** Reduce context window consumption when AI assistants work on this
codebase.
### 6.1 Lean CLAUDE.md (already covered in Phase 1)
Every byte in CLAUDE.md is read by every AI interaction. Remove duplication.
### 6.2 Module-level CLAUDE.md files
Instead of one massive guide, put module-specific context where it's needed:
```
src/swarm/CLAUDE.md # "This module is security-sensitive. Always..."
src/lightning/CLAUDE.md # "Never hard-code secrets. Use settings..."
src/dashboard/CLAUDE.md # "Routes return template partials for HTMX..."
```
AI assistants read these only when working in that directory.
### 6.3 Standardize module docstrings
Every `__init__.py` should have a one-line summary. AI assistants read these
to understand module purpose without reading every file:
```python
"""Swarm — Multi-agent coordinator with auction-based task assignment."""
```
### 6.4 Reduce template duplication
49 templates with repeated boilerplate. Consider Jinja2 macros for common
patterns (card layouts, form groups, table rows).
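A hedged sketch of the idea — the macro and file names below are invented for illustration, not existing templates:

```jinja
{# templates/_macros.html — hypothetical shared macros file #}
{% macro card(title, body_class="card-body") %}
<div class="card">
  <div class="card-title">{{ title }}</div>
  <div class="{{ body_class }}">{{ caller() }}</div>
</div>
{% endmacro %}

{# Usage in any template:
   {% from "_macros.html" import card %}
   {% call card("Swarm Status") %} ...panel content... {% endcall %} #}
```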
---
## Prioritized Execution Order
| Priority | Phase | Effort | Risk | Impact |
|----------|-------|--------|------|--------|
| **1** | Phase 1: Doc cleanup | 2-3 hours | Low | High — immediate token savings |
| **2** | Phase 6: Token optimization | 1-2 hours | Low | High — ongoing AI efficiency |
| **3** | Phase 4: Config/build cleanup | 1-2 hours | Low | Medium — hygiene |
| **4** | Phase 2: Module consolidation | 4-8 hours | Medium | High — structural improvement |
| **5** | Phase 3: Test reorganization | 3-5 hours | Medium | Medium — faster test cycles |
| **6** | Phase 5: Package extraction | 8-16 hours | High | High — only if needed |
---
## Quick Wins (Can Do Right Now)
1. Delete MEMORY.md, WORKSET_PLAN.md, WORKSET_PLAN_PHASE2.md (3 files, 0 risk)
2. Move PLAN.md, IMPLEMENTATION_SUMMARY.md, quality docs to `docs/` (5 files)
3. Slim README to ~80 lines
4. Fix pyproject.toml wheel includes (11 missing modules)
5. Gitignore `coverage.xml` and `data/self_modify_reports/`
6. Delete `dashboard/routes/mobile_test.py` (test page in production routes)
7. Delete or gut empty test skeletons (61 files with 0 tests — they waste CI
time and create noise)
---
## What NOT to Do
- **Don't rewrite from scratch.** The code works. Refactor incrementally.
- **Don't split into multiple repos.** Monorepo with packages (if needed) is
simpler for a small team.
- **Don't change the tech stack.** FastAPI + HTMX + Jinja2 is fine. Don't add
React, Vue, or a SPA framework.
- **Don't merge CLAUDE.md into README.** They serve different audiences.
- **Don't remove test files** just to reduce count. Reorganize them.
- **Don't break the singleton pattern.** It works for this scale.
---
## Success Metrics
| Metric | Original | Target | Current |
|--------|----------|--------|---------|
| Root `.md` files | 10 | 3 | 5 |
| Root markdown size | 87KB | ~20KB | ~28KB |
| `src/` modules | 28 | ~12-15 | **14** |
| Dashboard routes | 27 | ~12-15 | 22 |
| Test organization | flat | mirrored | **mirrored** |
| Tests passing | 471 | 500+ | **1462** |
| Wheel modules | 17/28 | all | **all** |
| Module-level docs | 0 | all key modules | **6** |
| AI context reduction | — | ~40% | **~50%** (fewer modules to scan) |
---
## Execution Status
### Completed
- [x] **Phase 1: Doc cleanup** — README 303→93 lines, CLAUDE.md 267→80,
AGENTS.md 342→72, deleted 3 session docs, archived 4 planning docs
- [x] **Phase 4: Config/build cleanup** — fixed 11 missing wheel modules, added
pytest markers, updated .gitignore, moved scripts to scripts/
- [x] **Phase 6: Token optimization** — added docstrings to 15+ __init__.py files
- [x] **Phase 3: Test reorganization** — 97 test files organized into 13
subdirectories mirroring source structure
- [x] **Phase 2a: Route consolidation** — 27 → 22 route files (merged voice,
swarm internal/ws, self-modify; deleted mobile_test)
- [x] **Phase 2b: Full module consolidation** — 28 → 14 modules. All merges
completed in a single pass with automated import rewriting (66 source files +
13 test files updated). Modules consolidated:
- `work_orders/` + `task_queue/``swarm/`
- `self_modify/` + `self_tdd/` + `upgrades/``self_coding/`
- `tools/``creative/tools/`
- `chat_bridge/` + `telegram_bot/` + `shortcuts/` + `voice/``integrations/` (new)
- `ws_manager/` + `notifications/` + `events/` + `router/``infrastructure/` (new)
- `agents/` + `agent_core/` + `memory/``timmy/`
- pyproject.toml entry points and wheel includes updated
- Module-level CLAUDE.md files added (Phase 6.2)
- Zero test regressions: 1462 tests passing
- [x] **Phase 6.2: Module-level CLAUDE.md** — added to swarm/, self_coding/,
infrastructure/, integrations/, creative/, lightning/
### Remaining
- [ ] **Phase 5: Package extraction** — only if team grows or dep profiles diverge
@@ -1,147 +0,0 @@
# Timmy Time — Workset Plan (Post-Quality Review)
**Date:** 2026-02-25
**Based on:** QUALITY_ANALYSIS.md + QUALITY_REVIEW_REPORT.md
---
## Executive Summary
This workset addresses critical security vulnerabilities, hardens the tool system for reliability, improves privacy alignment with the "sovereign AI" vision, and enhances agent intelligence.
---
## Workset A: Security Fixes (P0) 🔒
### A1: XSS Vulnerabilities (SEC-01)
**Priority:** P0 — Critical
**Files:** `mobile.html`, `swarm_live.html`
**Issues:**
- `mobile.html` line ~85 uses raw `innerHTML` with unsanitized user input
- `swarm_live.html` line ~72 uses `innerHTML` with WebSocket agent data
**Fix:** Replace `innerHTML` string interpolation with safe DOM methods (`textContent`, `createTextNode`, or DOMPurify if available).
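A minimal sketch of the safe-DOM pattern (function and class names here are illustrative, not the actual template code). `textContent` assigns untrusted input as inert text the browser never parses as HTML:

```javascript
// Before (vulnerable):  el.innerHTML = `<div class="msg">${userText}</div>`;
// After (safe): build the node and assign untrusted text via textContent.
function renderMessage(doc, userText) {
  const div = doc.createElement("div");
  div.className = "msg";
  div.textContent = userText; // untrusted input becomes inert text, not markup
  return div;
}
```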
### A2: Hardcoded Secrets (SEC-02)
**Priority:** P1 — High
**Files:** `l402_proxy.py`, `payment_handler.py`
**Issue:** Default secrets are production-safe strings instead of `None` with startup assertion.
**Fix:**
- Change defaults to `None`
- Add startup assertion requiring env vars to be set
- Fail fast with clear error message
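A sketch of the fail-fast pattern, assuming plain environment lookup (the function name and exact wording are illustrative, not the actual `config.py` API):

```python
import os

def load_l402_secrets(env=os.environ):
    """Return (hmac_secret, macaroon_secret), refusing to start if either is unset."""
    hmac_secret = env.get("L402_HMAC_SECRET")
    macaroon_secret = env.get("L402_MACAROON_SECRET")
    missing = [name for name, value in [
        ("L402_HMAC_SECRET", hmac_secret),
        ("L402_MACAROON_SECRET", macaroon_secret),
    ] if not value]
    if missing:
        # Fail fast with a clear, actionable error instead of a default secret.
        raise RuntimeError(
            "Refusing to start: set " + ", ".join(missing) + " in .env "
            '(generate with: python3 -c "import secrets; print(secrets.token_hex(32))")'
        )
    return hmac_secret, macaroon_secret
```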
---
## Workset B: Tool System Hardening ⚙️
### B1: SSL Certificate Fix
**Priority:** P1 — High
**File:** Web search via DuckDuckGo
**Issue:** `CERTIFICATE_VERIFY_FAILED` errors prevent web search from working.
**Fix Options:**
- Option 1: Use `certifi` package for proper certificate bundle
- Option 2: Add `verify_ssl=False` parameter (less secure, acceptable for local)
- Option 3: Document SSL fix in troubleshooting
### B2: Tool Usage Instructions
**Priority:** P2 — Medium
**File:** `prompts.py`
**Issue:** Agent makes unnecessary tool calls for simple questions.
**Fix:** Add tool usage instructions to system prompt:
- Only use tools when explicitly needed
- For simple chat/questions, respond directly
- Tools are for: web search, file operations, code execution
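The guidelines above could be folded into the system prompt along these lines (the exact wording is a sketch, not the real `prompts.py` content):

```python
# Illustrative system-prompt addition encoding the tool-usage rules above.
TOOL_USAGE_GUIDELINES = """\
Use tools only when the task requires them:
- web search, file operations, or code execution -> call the matching tool
- simple questions, chat, or opinions -> answer directly, no tool call
Never fabricate tool output."""
```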
### B3: Tool Error Handling
**Priority:** P2 — Medium
**File:** `tools.py`
**Issue:** Tool failures show stack traces to user.
**Fix:** Add graceful error handling with user-friendly messages.
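One way to express this fix is a decorator that converts exceptions into friendly strings — an illustrative sketch, not the actual `tools.py` implementation:

```python
import functools

def graceful_tool(fn):
    """Wrap a tool so failures surface as a user-friendly message, not a traceback."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except Exception as exc:
            # The user sees what failed and a next step — never a stack trace.
            return f"Tool '{fn.__name__}' failed: {exc}. Try again or rephrase the request."
    return wrapper
```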
---
## Workset C: Privacy & Sovereignty 🛡️
### C1: Agno Telemetry (Privacy)
**Priority:** P2 — Medium
**File:** `agent.py`, `backends.py`
**Issue:** Agno sends telemetry to `os-api.agno.com` which conflicts with "sovereign" vision.
**Fix:**
- Add `telemetry_enabled=False` parameter to Agent
- Document how to disable for air-gapped deployments
- Consider environment variable `TIMMY_TELEMETRY=0`
### C2: Secrets Validation
**Priority:** P1 — High
**File:** `config.py`, startup
**Issue:** Default secrets used without warning in production.
**Fix:**
- Add production mode detection
- Fatal error if default secrets in production
- Clear documentation on generating secrets
---
## Workset D: Agent Intelligence 🧠
### D1: Enhanced System Prompt
**Priority:** P2 — Medium
**File:** `prompts.py`
**Enhancements:**
- Tool usage guidelines (when to use, when not to)
- Memory awareness ("You remember previous conversations")
- Self-knowledge (capabilities, limitations)
- Response style guidelines
### D2: Memory Improvements
**Priority:** P2 — Medium
**File:** `agent.py`
**Enhancements:**
- Increase history runs from 10 to 20 for better context
- Add memory summarization for very long conversations
- Persistent session tracking
---
## Execution Order
| Order | Workset | Task | Est. Time |
|-------|---------|------|-----------|
| 1 | A | XSS fixes | 30 min |
| 2 | A | Secrets hardening | 20 min |
| 3 | B | SSL certificate fix | 15 min |
| 4 | B | Tool instructions | 20 min |
| 5 | C | Telemetry disable | 15 min |
| 6 | C | Secrets validation | 20 min |
| 7 | D | Enhanced prompts | 30 min |
| 8 | — | Test everything | 30 min |
**Total: ~3 hours**
---
## Success Criteria
- [ ] No XSS vulnerabilities (verified by code review)
- [ ] Secrets fail fast in production
- [ ] Web search works without SSL errors
- [ ] Agent uses tools appropriately (not for simple chat)
- [ ] Telemetry disabled by default
- [ ] All 895+ tests pass
- [ ] New tests added for security fixes
@@ -1,133 +0,0 @@
# Timmy Time — Workset Plan Phase 2 (Functional Hardening)
**Date:** 2026-02-25
**Based on:** QUALITY_ANALYSIS.md remaining issues
---
## Executive Summary
This workset addresses the core functional gaps that prevent the swarm system from operating as designed. The swarm currently registers agents in the database but doesn't actually spawn processes or execute bids. This workset makes the swarm operational.
---
## Workset E: Swarm System Realization 🐝
### E1: Real Agent Process Spawning (FUNC-01)
**Priority:** P1 — High
**Files:** `swarm/agent_runner.py`, `swarm/coordinator.py`
**Issue:** `spawn_agent()` creates a database record but no Python process is actually launched.
**Fix:**
- Complete the `agent_runner.py` subprocess implementation
- Ensure spawned agents can communicate with coordinator
- Add proper lifecycle management (start, monitor, stop)
### E2: Working Auction System (FUNC-02)
**Priority:** P1 — High
**Files:** `swarm/bidder.py`, `swarm/persona_node.py`
**Issue:** Bidding system runs auctions but no actual agents submit bids.
**Fix:**
- Connect persona agents to the bidding system
- Implement automatic bid generation based on capabilities
- Ensure auction resolution assigns tasks to winners
### E3: Persona Agent Auto-Bidding
**Priority:** P1 — High
**Files:** `swarm/persona_node.py`, `swarm/coordinator.py`
**Fix:**
- Spawned persona agents should automatically bid on matching tasks
- Implement capability-based bid decisions
- Add bid amount calculation (base + jitter)
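The "base + jitter" idea can be sketched as follows (parameter names are assumptions, not the actual `bidder.py` API):

```python
import random

def compute_bid(base_sats, capability_match, jitter_pct=0.1):
    """Scale the base price by capability fit, then add small random jitter
    so identical agents don't always tie on the same bid."""
    bid = base_sats * capability_match
    jitter = bid * random.uniform(-jitter_pct, jitter_pct)
    return max(1, round(bid + jitter))
```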
---
## Workset F: Testing & Reliability 🧪
### F1: WebSocket Reconnection Tests (TEST-01)
**Priority:** P2 — Medium
**Files:** `tests/test_websocket.py`
**Issue:** WebSocket tests don't cover reconnection logic or malformed payloads.
**Fix:**
- Add reconnection scenario tests
- Test malformed payload handling
- Test connection failure recovery
### F2: Voice TTS Graceful Degradation
**Priority:** P2 — Medium
**Files:** `timmy_serve/voice_tts.py`, `dashboard/routes/voice.py`
**Issue:** Voice routes fail without clear message when `pyttsx3` not installed.
**Fix:**
- Add graceful fallback message
- Return helpful error suggesting `pip install ".[voice]"`
- Don't crash, return 503 with instructions
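A sketch of the degradation check (illustrative names — the real route would return an HTTP 503 response rather than a dict):

```python
def tts_status():
    """Probe for pyttsx3 and return a helpful payload instead of crashing."""
    try:
        import pyttsx3  # noqa: F401 — presence check only
    except ImportError:
        return {
            "status": 503,
            "detail": 'TTS unavailable — install with: pip install ".[voice]"',
        }
    return {"status": 200, "detail": "TTS ready"}
```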
### F3: Mobile Route Navigation
**Priority:** P2 — Medium
**Files:** `templates/base.html`
**Issue:** `/mobile` route not linked from desktop navigation.
**Fix:**
- Add mobile link to base template nav
- Make it easy to find mobile-optimized view
---
## Workset G: Performance & Architecture ⚡
### G1: SQLite Connection Pooling (PERF-01)
**Priority:** P3 — Low
**Files:** `swarm/registry.py`
**Issue:** New SQLite connection opened on every query.
**Fix:**
- Implement connection pooling or singleton pattern
- Reduce connection overhead
- Maintain thread safety
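A minimal thread-safe sketch of the reuse pattern (illustrative, not the actual `registry.py` code): one connection per thread via `threading.local`, reused across queries.

```python
import sqlite3
import threading

class ConnectionPool:
    """Reuse one SQLite connection per thread instead of opening one per query."""
    def __init__(self, db_path):
        self.db_path = db_path
        self._local = threading.local()  # each thread gets its own slot

    def get(self):
        conn = getattr(self._local, "conn", None)
        if conn is None:
            conn = sqlite3.connect(self.db_path)
            self._local.conn = conn
        return conn
```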
### G2: Development Experience
**Priority:** P2 — Medium
**Files:** `Makefile`, `README.md`
**Issue:** No single command to start full dev environment.
**Fix:**
- Add `make dev-full` that starts dashboard + Ollama check
- Add better startup validation
---
## Execution Order
| Order | Workset | Task | Est. Time |
|-------|---------|------|-----------|
| 1 | E | Persona auto-bidding system | 45 min |
| 2 | E | Fix auction resolution | 30 min |
| 3 | F | Voice graceful degradation | 20 min |
| 4 | F | Mobile nav link | 10 min |
| 5 | G | SQLite connection pooling | 30 min |
| 6 | — | Test everything | 30 min |
**Total: ~2.75 hours**
---
## Success Criteria
- [ ] Persona agents automatically bid on matching tasks
- [ ] Auctions resolve with actual winners
- [ ] Voice routes degrade gracefully without pyttsx3
- [ ] Mobile route accessible from desktop nav
- [ ] SQLite connections pooled/reused
- [ ] All 895+ tests pass
- [ ] New tests for bidding system


@@ -68,6 +68,37 @@ providers:
- name: claude-3-sonnet-20240229
context_window: 200000
# ── Custom Models ──────────────────────────────────────────────────────
# Register custom model weights for per-agent assignment.
# Supports GGUF (Ollama), safetensors, and HuggingFace checkpoint dirs.
# Models can also be registered at runtime via the /api/v1/models API.
#
# Roles: general (default inference), reward (PRM scoring),
# teacher (distillation), judge (output evaluation)
custom_models: []
# Example entries:
# - name: my-finetuned-llama
# format: gguf
# path: /path/to/model.gguf
# role: general
# context_window: 8192
# description: "Fine-tuned Llama for code tasks"
#
# - name: reward-model
# format: ollama
# path: deepseek-r1:1.5b
# role: reward
# context_window: 32000
# description: "Process reward model for scoring outputs"
# ── Agent Model Assignments ─────────────────────────────────────────────
# Map persona agent IDs to specific models.
# Agents without an assignment use the global default (ollama_model).
agent_model_assignments: {}
# Example:
# persona-forge: my-finetuned-llama
# persona-echo: deepseek-r1:1.5b
# Cost tracking (optional, for budget monitoring)
cost_tracking:
enabled: true


@@ -32,6 +32,10 @@ services:
DEBUG: "true"
# Point to host Ollama (Mac default). Override in .env if different.
OLLAMA_URL: "${OLLAMA_URL:-http://host.docker.internal:11434}"
# Grok (xAI) — opt-in premium cloud backend
GROK_ENABLED: "${GROK_ENABLED:-false}"
XAI_API_KEY: "${XAI_API_KEY:-}"
GROK_DEFAULT_MODEL: "${GROK_DEFAULT_MODEL:-grok-3-fast}"
extra_hosts:
- "host.docker.internal:host-gateway" # Linux compatibility
networks:

File diff suppressed because it is too large.

mobile-app/README.md (new file, 108 lines)

@@ -0,0 +1,108 @@
# Timmy Chat — Mobile App
A sleek mobile chat interface for Timmy, the sovereign AI agent. Built with **Expo SDK 54**, **React Native**, **TypeScript**, and **NativeWind** (Tailwind CSS).
## Features
- **Text Chat** — Send and receive messages with Timmy's full personality
- **Voice Messages** — Record and send voice notes via the mic button; playback with waveform UI
- **Image Sharing** — Take photos or pick from library; full-screen image viewer
- **File Attachments** — Send any document via the system file picker
- **Dark Arcane Theme** — Deep purple/indigo palette matching the Timmy Time dashboard
## Interface
The app is a single-screen chat interface with:
- Header showing Timmy's status and a clear-chat button
- Message list with distinct user (teal) and Timmy (dark surface) bubbles
- Input bar with attachment (+), text field, and mic/send button
- Empty state with Timmy branding when no messages exist
## Project Structure
```
mobile-app/
├── app/ # Expo Router screens
│ ├── _layout.tsx # Root layout with providers
│ └── (tabs)/
│ ├── _layout.tsx # Tab layout (hidden — single screen)
│ └── index.tsx # Main chat screen
├── components/
│ ├── chat-bubble.tsx # Message bubble (text, image, voice, file)
│ ├── chat-header.tsx # Header with Timmy status
│ ├── chat-input.tsx # Input bar (text, mic, attachments)
│ ├── empty-chat.tsx # Empty state welcome screen
│ ├── image-viewer.tsx # Full-screen image modal
│ └── typing-indicator.tsx # Animated dots while Timmy responds
├── lib/
│ └── chat-store.tsx # React Context chat state + API calls
├── server/
│ └── chat.ts # Server-side chat handler with Timmy's prompt
├── shared/
│ └── types.ts # ChatMessage type definitions
├── assets/images/ # App icons (custom generated)
├── theme.config.js # Color tokens (dark arcane palette)
├── tailwind.config.js # Tailwind/NativeWind configuration
└── tests/
└── chat.test.ts # Unit tests
```
## Setup
### Prerequisites
- Node.js 18+
- pnpm 9+
- Expo CLI (`npx expo`)
- iOS Simulator or Android Emulator (or physical device with Expo Go)
### Install Dependencies
```bash
cd mobile-app
pnpm install
```
### Run the App
```bash
# Start the Expo dev server
npx expo start
# Or run on specific platform
npx expo start --ios
npx expo start --android
npx expo start --web
```
### Backend
The chat API endpoint (`server/chat.ts`) requires an LLM backend. The `invokeLLM` function should be wired to your preferred provider:
- **Local Ollama** — Point to `http://localhost:11434` for local inference
- **OpenAI-compatible API** — Any API matching the OpenAI chat completions format
The system prompt in `server/chat.ts` contains Timmy's full personality, agent roster, and behavioral rules ported from the dashboard's `prompts.py`.
## Timmy's Personality
Timmy is a sovereign AI agent — grounded in Christian faith, powered by Bitcoin economics, committed to digital sovereignty. He speaks plainly, acts with intention, and never ends responses with generic chatbot phrases. His agent roster includes Echo, Mace, Forge, Seer, Helm, Quill, Pixel, Lyra, and Reel.
## Theme
The app uses a dark arcane color palette:
| Token | Color | Usage |
|-------|-------|-------|
| `primary` | `#7c3aed` | Accent, user bubbles |
| `background` | `#080412` | Screen background |
| `surface` | `#110a20` | Cards, Timmy bubbles |
| `foreground` | `#e8e0f0` | Primary text |
| `muted` | `#6b5f7d` | Secondary text |
| `border` | `#1e1535` | Dividers |
| `success` | `#22c55e` | Status indicator |
| `error` | `#ff4455` | Recording state |
## License
Same as the parent Timmy Time Dashboard project.

mobile-app/app.config.ts (new file, 130 lines)

@@ -0,0 +1,130 @@
// Load environment variables with proper priority (system > .env)
import "./scripts/load-env.js";
import type { ExpoConfig } from "expo/config";
// Bundle ID format: space.manus.<project_name_dots>.<timestamp>
// e.g., "my-app" created at 2024-01-15 10:30:45 -> "space.manus.my.app.t20240115103045"
// Bundle ID can only contain letters, numbers, and dots
// Android requires each dot-separated segment to start with a letter
const rawBundleId = "space.manus.timmy.chat.t20260226211148";
const bundleId =
rawBundleId
.replace(/[-_]/g, ".") // Replace hyphens/underscores with dots
.replace(/[^a-zA-Z0-9.]/g, "") // Remove invalid chars
.replace(/\.+/g, ".") // Collapse consecutive dots
.replace(/^\.+|\.+$/g, "") // Trim leading/trailing dots
.toLowerCase()
.split(".")
.map((segment) => {
// Android requires each segment to start with a letter
// Prefix with 'x' if segment starts with a digit
return /^[a-zA-Z]/.test(segment) ? segment : "x" + segment;
})
.join(".") || "space.manus.app";
// Extract timestamp from bundle ID and prefix with "manus" for deep link scheme
// e.g., "space.manus.my.app.t20240115103045" -> "manus20240115103045"
const timestamp = bundleId.split(".").pop()?.replace(/^t/, "") ?? "";
const schemeFromBundleId = `manus${timestamp}`;
const env = {
// App branding - update these values directly (do not use env vars)
appName: "Timmy Chat",
appSlug: "timmy-chat",
// S3 URL of the app logo - set this to the URL returned by generate_image when creating custom logo
// Leave empty to use the default icon from assets/images/icon.png
logoUrl: "https://files.manuscdn.com/user_upload_by_module/session_file/310519663286296482/kuSmtQpNVBtvECMG.png",
scheme: schemeFromBundleId,
iosBundleId: bundleId,
androidPackage: bundleId,
};
const config: ExpoConfig = {
name: env.appName,
slug: env.appSlug,
version: "1.0.0",
orientation: "portrait",
icon: "./assets/images/icon.png",
scheme: env.scheme,
userInterfaceStyle: "automatic",
newArchEnabled: true,
ios: {
supportsTablet: true,
bundleIdentifier: env.iosBundleId,
infoPlist: {
ITSAppUsesNonExemptEncryption: false,
},
},
android: {
adaptiveIcon: {
backgroundColor: "#080412",
foregroundImage: "./assets/images/android-icon-foreground.png",
backgroundImage: "./assets/images/android-icon-background.png",
monochromeImage: "./assets/images/android-icon-monochrome.png",
},
edgeToEdgeEnabled: true,
predictiveBackGestureEnabled: false,
package: env.androidPackage,
permissions: ["POST_NOTIFICATIONS"],
intentFilters: [
{
action: "VIEW",
autoVerify: true,
data: [
{
scheme: env.scheme,
host: "*",
},
],
category: ["BROWSABLE", "DEFAULT"],
},
],
},
web: {
bundler: "metro",
output: "static",
favicon: "./assets/images/favicon.png",
},
plugins: [
"expo-router",
[
"expo-audio",
{
microphonePermission: "Allow $(PRODUCT_NAME) to access your microphone.",
},
],
[
"expo-video",
{
supportsBackgroundPlayback: true,
supportsPictureInPicture: true,
},
],
[
"expo-splash-screen",
{
image: "./assets/images/splash-icon.png",
imageWidth: 200,
resizeMode: "contain",
backgroundColor: "#080412",
dark: {
backgroundColor: "#080412",
},
},
],
[
"expo-build-properties",
{
android: {
buildArchs: ["armeabi-v7a", "arm64-v8a"],
minSdkVersion: 24,
},
},
],
],
experiments: {
typedRoutes: true,
reactCompiler: true,
},
};
export default config;


@@ -0,0 +1,17 @@
import { Tabs } from "expo-router";
import { useColors } from "@/hooks/use-colors";
export default function TabLayout() {
const colors = useColors();
return (
<Tabs
screenOptions={{
headerShown: false,
tabBarStyle: { display: "none" },
}}
>
<Tabs.Screen name="index" options={{ title: "Chat" }} />
</Tabs>
);
}


@@ -0,0 +1,96 @@
import { useCallback, useRef, useState } from "react";
import { FlatList, KeyboardAvoidingView, Platform, StyleSheet, View } from "react-native";
import { ScreenContainer } from "@/components/screen-container";
import { ChatHeader } from "@/components/chat-header";
import { ChatBubble } from "@/components/chat-bubble";
import { ChatInput } from "@/components/chat-input";
import { TypingIndicator } from "@/components/typing-indicator";
import { ImageViewer } from "@/components/image-viewer";
import { EmptyChat } from "@/components/empty-chat";
import { useChat } from "@/lib/chat-store";
import { useColors } from "@/hooks/use-colors";
import { createAudioPlayer, setAudioModeAsync } from "expo-audio";
import type { ChatMessage } from "@/shared/types";
export default function ChatScreen() {
const { messages, isTyping } = useChat();
const colors = useColors();
const flatListRef = useRef<FlatList>(null);
const [viewingImage, setViewingImage] = useState<string | null>(null);
const [playingVoiceId, setPlayingVoiceId] = useState<string | null>(null);
const handlePlayVoice = useCallback(async (msg: ChatMessage) => {
if (!msg.uri) return;
try {
if (playingVoiceId === msg.id) {
setPlayingVoiceId(null);
return;
}
await setAudioModeAsync({ playsInSilentMode: true });
const player = createAudioPlayer({ uri: msg.uri });
player.play();
setPlayingVoiceId(msg.id);
// Auto-reset after estimated duration
const dur = (msg.duration ?? 5) * 1000;
setTimeout(() => {
setPlayingVoiceId(null);
player.remove();
}, dur + 500);
} catch (err) {
console.warn("Voice playback error:", err);
setPlayingVoiceId(null);
}
}, [playingVoiceId]);
const renderItem = useCallback(
({ item }: { item: ChatMessage }) => (
<ChatBubble
message={item}
onImagePress={setViewingImage}
onPlayVoice={handlePlayVoice}
isPlayingVoice={playingVoiceId === item.id}
/>
),
[playingVoiceId, handlePlayVoice],
);
const keyExtractor = useCallback((item: ChatMessage) => item.id, []);
return (
<ScreenContainer edges={["top", "left", "right"]} containerClassName="bg-background">
<KeyboardAvoidingView
style={styles.flex}
behavior={Platform.OS === "ios" ? "padding" : undefined}
keyboardVerticalOffset={0}
>
<ChatHeader />
<FlatList
ref={flatListRef}
data={messages}
renderItem={renderItem}
keyExtractor={keyExtractor}
contentContainerStyle={styles.listContent}
style={{ flex: 1, backgroundColor: colors.background }}
onContentSizeChange={() => {
flatListRef.current?.scrollToEnd({ animated: true });
}}
ListFooterComponent={isTyping ? <TypingIndicator /> : null}
ListEmptyComponent={!isTyping ? <EmptyChat /> : null}
showsVerticalScrollIndicator={false}
/>
<ChatInput />
</KeyboardAvoidingView>
<ImageViewer uri={viewingImage} onClose={() => setViewingImage(null)} />
</ScreenContainer>
);
}
const styles = StyleSheet.create({
flex: { flex: 1 },
listContent: {
paddingVertical: 12,
},
});


@@ -0,0 +1,45 @@
import "@/global.css";
import { QueryClient, QueryClientProvider } from "@tanstack/react-query";
import { Stack } from "expo-router";
import { StatusBar } from "expo-status-bar";
import { useState } from "react";
import { GestureHandlerRootView } from "react-native-gesture-handler";
import "react-native-reanimated";
import { ThemeProvider } from "@/lib/theme-provider";
import { SafeAreaProvider, initialWindowMetrics } from "react-native-safe-area-context";
import { ChatProvider } from "@/lib/chat-store";
export const unstable_settings = {
anchor: "(tabs)",
};
export default function RootLayout() {
const [queryClient] = useState(
() =>
new QueryClient({
defaultOptions: {
queries: {
refetchOnWindowFocus: false,
retry: 1,
},
},
}),
);
return (
<ThemeProvider>
<SafeAreaProvider initialMetrics={initialWindowMetrics}>
<GestureHandlerRootView style={{ flex: 1 }}>
<QueryClientProvider client={queryClient}>
<ChatProvider>
<Stack screenOptions={{ headerShown: false }}>
<Stack.Screen name="(tabs)" />
</Stack>
</ChatProvider>
<StatusBar style="light" />
</QueryClientProvider>
</GestureHandlerRootView>
</SafeAreaProvider>
</ThemeProvider>
);
}

Binary files not shown: six new image assets (17 KiB, 939 KiB, 4.0 KiB, 53 KiB, 939 KiB, 221 KiB).


@@ -0,0 +1,214 @@
import { useMemo } from "react";
import { Text, View, StyleSheet, Image, Platform } from "react-native";
import Pressable from "@/components/ui/pressable-fix";
import { useColors } from "@/hooks/use-colors";
import type { ChatMessage } from "@/shared/types";
import { formatBytes, formatDuration } from "@/lib/chat-store";
import MaterialIcons from "@expo/vector-icons/MaterialIcons";
interface ChatBubbleProps {
message: ChatMessage;
onImagePress?: (uri: string) => void;
onPlayVoice?: (message: ChatMessage) => void;
isPlayingVoice?: boolean;
}
export function ChatBubble({ message, onImagePress, onPlayVoice, isPlayingVoice }: ChatBubbleProps) {
const colors = useColors();
const isUser = message.role === "user";
// Stable waveform bar heights based on message id
const waveHeights = useMemo(() => {
let seed = 0;
for (let i = 0; i < message.id.length; i++) seed = (seed * 31 + message.id.charCodeAt(i)) | 0;
return Array.from({ length: 12 }, (_, i) => {
seed = (seed * 16807 + i * 1013) % 2147483647;
return 4 + (seed % 15);
});
}, [message.id]);
const bubbleStyle = [
styles.bubble,
{
backgroundColor: isUser ? colors.primary : colors.surface,
borderColor: isUser ? colors.primary : colors.border,
alignSelf: isUser ? "flex-end" as const : "flex-start" as const,
},
];
const textColor = isUser ? "#fff" : colors.foreground;
const mutedColor = isUser ? "rgba(255,255,255,0.6)" : colors.muted;
const timeStr = new Date(message.timestamp).toLocaleTimeString([], {
hour: "2-digit",
minute: "2-digit",
});
return (
<View style={[styles.row, isUser ? styles.rowUser : styles.rowAssistant]}>
{!isUser && (
<View style={[styles.avatar, { backgroundColor: colors.primary }]}>
<Text style={styles.avatarText}>T</Text>
</View>
)}
<View style={bubbleStyle}>
{message.contentType === "text" && (
<Text style={[styles.text, { color: textColor }]}>{message.text}</Text>
)}
{message.contentType === "image" && (
<Pressable
onPress={() => message.uri && onImagePress?.(message.uri)}
style={({ pressed }) => [pressed && { opacity: 0.8 }]}
>
<Image
source={{ uri: message.uri }}
style={styles.image}
resizeMode="cover"
/>
{message.text ? (
<Text style={[styles.text, { color: textColor, marginTop: 6 }]}>
{message.text}
</Text>
) : null}
</Pressable>
)}
{message.contentType === "voice" && (
<Pressable
onPress={() => onPlayVoice?.(message)}
style={({ pressed }) => [styles.voiceRow, pressed && { opacity: 0.7 }]}
>
<MaterialIcons
name={isPlayingVoice ? "pause" : "play-arrow"}
size={24}
color={textColor}
/>
<View style={[styles.waveform, { backgroundColor: isUser ? "rgba(255,255,255,0.3)" : colors.border }]}>
{waveHeights.map((h, i) => (
<View
key={i}
style={[
styles.waveBar,
{
height: h,
backgroundColor: textColor,
opacity: 0.6,
},
]}
/>
))}
</View>
<Text style={[styles.duration, { color: mutedColor }]}>
{formatDuration(message.duration ?? 0)}
</Text>
</Pressable>
)}
{message.contentType === "file" && (
<View style={styles.fileRow}>
<MaterialIcons name="insert-drive-file" size={28} color={textColor} />
<View style={styles.fileInfo}>
<Text style={[styles.fileName, { color: textColor }]} numberOfLines={1}>
{message.fileName ?? "File"}
</Text>
<Text style={[styles.fileSize, { color: mutedColor }]}>
{formatBytes(message.fileSize ?? 0)}
</Text>
</View>
</View>
)}
<Text style={[styles.time, { color: mutedColor }]}>{timeStr}</Text>
</View>
</View>
);
}
const styles = StyleSheet.create({
row: {
flexDirection: "row",
marginBottom: 8,
paddingHorizontal: 12,
alignItems: "flex-end",
},
rowUser: {
justifyContent: "flex-end",
},
rowAssistant: {
justifyContent: "flex-start",
},
avatar: {
width: 30,
height: 30,
borderRadius: 15,
alignItems: "center",
justifyContent: "center",
marginRight: 8,
},
avatarText: {
color: "#fff",
fontWeight: "700",
fontSize: 14,
},
bubble: {
maxWidth: "78%",
borderRadius: 16,
borderWidth: 1,
paddingHorizontal: 14,
paddingVertical: 10,
},
text: {
fontSize: 15,
lineHeight: 21,
},
time: {
fontSize: 10,
marginTop: 4,
textAlign: "right",
},
image: {
width: 220,
height: 180,
borderRadius: 10,
},
voiceRow: {
flexDirection: "row",
alignItems: "center",
gap: 8,
minWidth: 160,
},
waveform: {
flex: 1,
flexDirection: "row",
alignItems: "center",
gap: 2,
height: 24,
borderRadius: 4,
paddingHorizontal: 4,
},
waveBar: {
width: 3,
borderRadius: 1.5,
},
duration: {
fontSize: 12,
minWidth: 32,
},
fileRow: {
flexDirection: "row",
alignItems: "center",
gap: 10,
},
fileInfo: {
flex: 1,
},
fileName: {
fontSize: 14,
fontWeight: "600",
},
fileSize: {
fontSize: 11,
marginTop: 2,
},
});


@@ -0,0 +1,69 @@
import { View, Text, StyleSheet } from "react-native";
import Pressable from "@/components/ui/pressable-fix";
import MaterialIcons from "@expo/vector-icons/MaterialIcons";
import { useColors } from "@/hooks/use-colors";
import { useChat } from "@/lib/chat-store";
export function ChatHeader() {
const colors = useColors();
const { clearChat } = useChat();
return (
<View style={[styles.header, { backgroundColor: colors.background, borderBottomColor: colors.border }]}>
<View style={styles.left}>
<View style={[styles.statusDot, { backgroundColor: colors.success }]} />
<Text style={[styles.title, { color: colors.foreground }]}>TIMMY</Text>
<Text style={[styles.subtitle, { color: colors.muted }]}>SOVEREIGN AI</Text>
</View>
<Pressable
onPress={clearChat}
style={({ pressed }: { pressed: boolean }) => [
styles.clearBtn,
{ borderColor: colors.border },
pressed && { opacity: 0.6 },
]}
>
<MaterialIcons name="delete-outline" size={16} color={colors.muted} />
</Pressable>
</View>
);
}
const styles = StyleSheet.create({
header: {
flexDirection: "row",
alignItems: "center",
justifyContent: "space-between",
paddingHorizontal: 16,
paddingVertical: 10,
borderBottomWidth: 1,
},
left: {
flexDirection: "row",
alignItems: "center",
gap: 8,
},
statusDot: {
width: 8,
height: 8,
borderRadius: 4,
},
title: {
fontSize: 16,
fontWeight: "700",
letterSpacing: 2,
},
subtitle: {
fontSize: 9,
letterSpacing: 1.5,
fontWeight: "600",
},
clearBtn: {
width: 32,
height: 32,
borderRadius: 16,
borderWidth: 1,
alignItems: "center",
justifyContent: "center",
},
});


@@ -0,0 +1,301 @@
import { useCallback, useRef, useState } from "react";
import {
View,
TextInput,
StyleSheet,
Platform,
ActionSheetIOS,
Alert,
Keyboard,
} from "react-native";
import Pressable from "@/components/ui/pressable-fix";
import MaterialIcons from "@expo/vector-icons/MaterialIcons";
import { useColors } from "@/hooks/use-colors";
import { useChat } from "@/lib/chat-store";
import * as ImagePicker from "expo-image-picker";
import * as DocumentPicker from "expo-document-picker";
import {
useAudioRecorder,
useAudioRecorderState,
RecordingPresets,
requestRecordingPermissionsAsync,
setAudioModeAsync,
} from "expo-audio";
import * as Haptics from "expo-haptics";
export function ChatInput() {
const colors = useColors();
const { sendTextMessage, sendAttachment, isTyping } = useChat();
const [text, setText] = useState("");
const [isRecording, setIsRecording] = useState(false);
const inputRef = useRef<TextInput>(null);
const audioRecorder = useAudioRecorder(RecordingPresets.HIGH_QUALITY);
const recorderState = useAudioRecorderState(audioRecorder);
const handleSend = useCallback(() => {
const trimmed = text.trim();
if (!trimmed) return;
setText("");
Keyboard.dismiss();
if (Platform.OS !== "web") {
Haptics.impactAsync(Haptics.ImpactFeedbackStyle.Light);
}
sendTextMessage(trimmed);
}, [text, sendTextMessage]);
// ── Attachment sheet ────────────────────────────────────────────────────
const handleAttachment = useCallback(() => {
if (Platform.OS !== "web") {
Haptics.impactAsync(Haptics.ImpactFeedbackStyle.Light);
}
const options = ["Take Photo", "Choose from Library", "Choose File", "Cancel"];
const cancelIndex = 3;
if (Platform.OS === "ios") {
ActionSheetIOS.showActionSheetWithOptions(
{ options, cancelButtonIndex: cancelIndex },
(idx) => {
if (idx === 0) takePhoto();
else if (idx === 1) pickImage();
else if (idx === 2) pickFile();
},
);
} else {
// Android / Web fallback
Alert.alert("Attach", "Choose an option", [
{ text: "Take Photo", onPress: takePhoto },
{ text: "Choose from Library", onPress: pickImage },
{ text: "Choose File", onPress: pickFile },
{ text: "Cancel", style: "cancel" },
]);
}
}, []);
const takePhoto = async () => {
const { status } = await ImagePicker.requestCameraPermissionsAsync();
if (status !== "granted") {
Alert.alert("Permission needed", "Camera access is required to take photos.");
return;
}
const result = await ImagePicker.launchCameraAsync({
quality: 0.8,
allowsEditing: false,
});
if (!result.canceled && result.assets[0]) {
const asset = result.assets[0];
sendAttachment({
contentType: "image",
uri: asset.uri,
fileName: asset.fileName ?? "photo.jpg",
fileSize: asset.fileSize,
mimeType: asset.mimeType ?? "image/jpeg",
});
}
};
const pickImage = async () => {
const result = await ImagePicker.launchImageLibraryAsync({
mediaTypes: ["images"],
quality: 0.8,
allowsEditing: false,
});
if (!result.canceled && result.assets[0]) {
const asset = result.assets[0];
sendAttachment({
contentType: "image",
uri: asset.uri,
fileName: asset.fileName ?? "image.jpg",
fileSize: asset.fileSize,
mimeType: asset.mimeType ?? "image/jpeg",
});
}
};
const pickFile = async () => {
try {
const result = await DocumentPicker.getDocumentAsync({
type: "*/*",
copyToCacheDirectory: true,
});
if (!result.canceled && result.assets[0]) {
const asset = result.assets[0];
sendAttachment({
contentType: "file",
uri: asset.uri,
fileName: asset.name,
fileSize: asset.size,
mimeType: asset.mimeType ?? "application/octet-stream",
});
}
} catch (err) {
console.warn("Document picker error:", err);
}
};
// ── Voice recording ───────────────────────────────────────────────────
const startRecording = async () => {
try {
const { granted } = await requestRecordingPermissionsAsync();
if (!granted) {
Alert.alert("Permission needed", "Microphone access is required for voice messages.");
return;
}
await setAudioModeAsync({ playsInSilentMode: true, allowsRecording: true });
await audioRecorder.prepareToRecordAsync();
audioRecorder.record();
setIsRecording(true);
if (Platform.OS !== "web") {
Haptics.impactAsync(Haptics.ImpactFeedbackStyle.Medium);
}
} catch (err) {
console.warn("Recording start error:", err);
}
};
const stopRecording = async () => {
try {
await audioRecorder.stop();
setIsRecording(false);
if (Platform.OS !== "web") {
Haptics.notificationAsync(Haptics.NotificationFeedbackType.Success);
}
const uri = audioRecorder.uri;
if (uri) {
const duration = recorderState.durationMillis ? recorderState.durationMillis / 1000 : 0;
sendAttachment({
contentType: "voice",
uri,
fileName: "voice_message.m4a",
mimeType: "audio/m4a",
duration,
});
}
} catch (err) {
console.warn("Recording stop error:", err);
setIsRecording(false);
}
};
const handleMicPress = useCallback(() => {
if (isRecording) {
stopRecording();
} else {
startRecording();
}
}, [isRecording]);
const hasText = text.trim().length > 0;
return (
<View style={[styles.container, { backgroundColor: colors.background, borderTopColor: colors.border }]}>
{/* Attachment button */}
<Pressable
onPress={handleAttachment}
style={({ pressed }: { pressed: boolean }) => [
styles.iconBtn,
{ backgroundColor: colors.surface },
pressed && { opacity: 0.6 },
]}
disabled={isTyping}
>
<MaterialIcons name="add" size={22} color={colors.muted} />
</Pressable>
{/* Text input */}
<TextInput
ref={inputRef}
value={text}
onChangeText={setText}
placeholder={isRecording ? "Recording..." : "Message Timmy..."}
placeholderTextColor={colors.muted}
style={[
styles.input,
{
backgroundColor: colors.surface,
color: colors.foreground,
borderColor: colors.border,
},
]}
multiline
maxLength={4000}
returnKeyType="default"
editable={!isRecording && !isTyping}
onSubmitEditing={handleSend}
blurOnSubmit={false}
/>
{/* Send or Mic button */}
{hasText ? (
<Pressable
onPress={handleSend}
style={({ pressed }: { pressed: boolean }) => [
styles.sendBtn,
{ backgroundColor: colors.primary },
pressed && { transform: [{ scale: 0.95 }], opacity: 0.9 },
]}
disabled={isTyping}
>
<MaterialIcons name="send" size={20} color="#fff" />
</Pressable>
) : (
<Pressable
onPress={handleMicPress}
style={({ pressed }: { pressed: boolean }) => [
styles.sendBtn,
{
backgroundColor: isRecording ? colors.error : colors.surface,
},
pressed && { transform: [{ scale: 0.95 }], opacity: 0.9 },
]}
disabled={isTyping}
>
<MaterialIcons
name={isRecording ? "stop" : "mic"}
size={20}
color={isRecording ? "#fff" : colors.primary}
/>
</Pressable>
)}
</View>
);
}
const styles = StyleSheet.create({
container: {
flexDirection: "row",
alignItems: "flex-end",
paddingHorizontal: 10,
paddingVertical: 8,
gap: 8,
borderTopWidth: 1,
},
iconBtn: {
width: 38,
height: 38,
borderRadius: 19,
alignItems: "center",
justifyContent: "center",
},
input: {
flex: 1,
minHeight: 38,
maxHeight: 120,
borderRadius: 19,
borderWidth: 1,
paddingHorizontal: 14,
paddingVertical: 8,
fontSize: 15,
lineHeight: 20,
},
sendBtn: {
width: 38,
height: 38,
borderRadius: 19,
alignItems: "center",
justifyContent: "center",
},
});


@@ -0,0 +1,55 @@
import { View, Text, StyleSheet } from "react-native";
import { useColors } from "@/hooks/use-colors";
import MaterialIcons from "@expo/vector-icons/MaterialIcons";
export function EmptyChat() {
const colors = useColors();
return (
<View style={styles.container}>
<View style={[styles.iconCircle, { backgroundColor: colors.surface, borderColor: colors.border }]}>
<MaterialIcons name="chat-bubble-outline" size={40} color={colors.primary} />
</View>
<Text style={[styles.title, { color: colors.foreground }]}>TIMMY</Text>
<Text style={[styles.subtitle, { color: colors.muted }]}>SOVEREIGN AI AGENT</Text>
<Text style={[styles.hint, { color: colors.muted }]}>
Send a message, voice note, image, or file to get started.
</Text>
</View>
);
}
const styles = StyleSheet.create({
container: {
flex: 1,
justifyContent: "center",
alignItems: "center",
paddingHorizontal: 40,
gap: 8,
},
iconCircle: {
width: 80,
height: 80,
borderRadius: 40,
borderWidth: 1,
alignItems: "center",
justifyContent: "center",
marginBottom: 12,
},
title: {
fontSize: 24,
fontWeight: "700",
letterSpacing: 4,
},
subtitle: {
fontSize: 11,
letterSpacing: 2,
fontWeight: "600",
},
hint: {
fontSize: 13,
textAlign: "center",
marginTop: 12,
lineHeight: 19,
},
});


@@ -0,0 +1,18 @@
import { BottomTabBarButtonProps } from "@react-navigation/bottom-tabs";
import { PlatformPressable } from "@react-navigation/elements";
import * as Haptics from "expo-haptics";
export function HapticTab(props: BottomTabBarButtonProps) {
return (
<PlatformPressable
{...props}
onPressIn={(ev) => {
if (process.env.EXPO_OS === "ios") {
// Add a soft haptic feedback when pressing down on the tabs.
Haptics.impactAsync(Haptics.ImpactFeedbackStyle.Light);
}
props.onPressIn?.(ev);
}}
/>
);
}


@@ -0,0 +1,54 @@
import { Modal, View, Image, StyleSheet, StatusBar } from "react-native";
import Pressable from "@/components/ui/pressable-fix";
import MaterialIcons from "@expo/vector-icons/MaterialIcons";
interface ImageViewerProps {
uri: string | null;
onClose: () => void;
}
export function ImageViewer({ uri, onClose }: ImageViewerProps) {
if (!uri) return null;
return (
<Modal visible animationType="fade" transparent statusBarTranslucent>
<View style={styles.overlay}>
<StatusBar barStyle="light-content" />
<Image source={{ uri }} style={styles.image} resizeMode="contain" />
<Pressable
onPress={onClose}
style={({ pressed }: { pressed: boolean }) => [
styles.closeBtn,
pressed && { opacity: 0.6 },
]}
>
<MaterialIcons name="close" size={28} color="#fff" />
</Pressable>
</View>
</Modal>
);
}
const styles = StyleSheet.create({
overlay: {
flex: 1,
backgroundColor: "rgba(0,0,0,0.95)",
justifyContent: "center",
alignItems: "center",
},
image: {
width: "100%",
height: "80%",
},
closeBtn: {
position: "absolute",
top: 50,
right: 20,
width: 40,
height: 40,
borderRadius: 20,
backgroundColor: "rgba(255,255,255,0.15)",
alignItems: "center",
justifyContent: "center",
},
});


@@ -0,0 +1,68 @@
import { View, type ViewProps } from "react-native";
import { SafeAreaView, type Edge } from "react-native-safe-area-context";
import { cn } from "@/lib/utils";
export interface ScreenContainerProps extends ViewProps {
/**
* SafeArea edges to apply. Defaults to ["top", "left", "right"].
* Bottom is typically handled by Tab Bar.
*/
edges?: Edge[];
/**
* Tailwind className for the content area.
*/
className?: string;
/**
* Additional className for the outer container (background layer).
*/
containerClassName?: string;
/**
* Additional className for the SafeAreaView (content layer).
*/
safeAreaClassName?: string;
}
/**
* A container component that properly handles SafeArea and background colors.
*
* The outer View extends to full screen (including status bar area) with the background color,
* while the inner SafeAreaView ensures content is within safe bounds.
*
* Usage:
* ```tsx
* <ScreenContainer className="p-4">
* <Text className="text-2xl font-bold text-foreground">
* Welcome
* </Text>
* </ScreenContainer>
* ```
*/
export function ScreenContainer({
children,
edges = ["top", "left", "right"],
className,
containerClassName,
safeAreaClassName,
style,
...props
}: ScreenContainerProps) {
return (
<View
className={cn(
"flex-1",
"bg-background",
containerClassName
)}
{...props}
>
<SafeAreaView
edges={edges}
className={cn("flex-1", safeAreaClassName)}
style={style}
>
<View className={cn("flex-1", className)}>{children}</View>
</SafeAreaView>
</View>
);
}


@@ -0,0 +1,15 @@
import { View, type ViewProps } from "react-native";
import { cn } from "@/lib/utils";
export interface ThemedViewProps extends ViewProps {
className?: string;
}
/**
* A View component with automatic theme-aware background.
* Uses NativeWind for styling - pass className for additional styles.
*/
export function ThemedView({ className, ...otherProps }: ThemedViewProps) {
return <View className={cn("bg-background", className)} {...otherProps} />;
}


@@ -0,0 +1,89 @@
import { useEffect } from "react";
import { View, StyleSheet } from "react-native";
import Animated, {
useSharedValue,
useAnimatedStyle,
withRepeat,
withTiming,
withDelay,
withSequence,
} from "react-native-reanimated";
import { useColors } from "@/hooks/use-colors";
export function TypingIndicator() {
const colors = useColors();
const dot1 = useSharedValue(0.3);
const dot2 = useSharedValue(0.3);
const dot3 = useSharedValue(0.3);
useEffect(() => {
const anim = (sv: { value: number }, delay: number) => {
sv.value = withDelay(
delay,
withRepeat(
withSequence(
withTiming(1, { duration: 400 }),
withTiming(0.3, { duration: 400 }),
),
-1,
),
);
};
anim(dot1, 0);
anim(dot2, 200);
anim(dot3, 400);
}, []);
const style1 = useAnimatedStyle(() => ({ opacity: dot1.value }));
const style2 = useAnimatedStyle(() => ({ opacity: dot2.value }));
const style3 = useAnimatedStyle(() => ({ opacity: dot3.value }));
const dotBase = [styles.dot, { backgroundColor: colors.primary }];
return (
<View style={[styles.row, { alignItems: "flex-end" }]}>
<View style={[styles.avatar, { backgroundColor: colors.primary }]}>
<Animated.Text style={styles.avatarText}>T</Animated.Text>
</View>
<View style={[styles.bubble, { backgroundColor: colors.surface, borderColor: colors.border }]}>
<Animated.View style={[dotBase, style1]} />
<Animated.View style={[dotBase, style2]} />
<Animated.View style={[dotBase, style3]} />
</View>
</View>
);
}
const styles = StyleSheet.create({
row: {
flexDirection: "row",
paddingHorizontal: 12,
marginBottom: 8,
},
avatar: {
width: 30,
height: 30,
borderRadius: 15,
alignItems: "center",
justifyContent: "center",
marginRight: 8,
},
avatarText: {
color: "#fff",
fontWeight: "700",
fontSize: 14,
},
bubble: {
flexDirection: "row",
gap: 5,
paddingHorizontal: 16,
paddingVertical: 14,
borderRadius: 16,
borderWidth: 1,
},
dot: {
width: 8,
height: 8,
borderRadius: 4,
},
});


@@ -0,0 +1,41 @@
// Fallback for using MaterialIcons on Android and web.
import MaterialIcons from "@expo/vector-icons/MaterialIcons";
import { SymbolWeight, SymbolViewProps } from "expo-symbols";
import { ComponentProps } from "react";
import { OpaqueColorValue, type StyleProp, type TextStyle } from "react-native";
type IconMapping = Record<SymbolViewProps["name"], ComponentProps<typeof MaterialIcons>["name"]>;
type IconSymbolName = keyof typeof MAPPING;
/**
* Add your SF Symbols to Material Icons mappings here.
* - see Material Icons in the [Icons Directory](https://icons.expo.fyi).
* - see SF Symbols in the [SF Symbols](https://developer.apple.com/sf-symbols/) app.
*/
const MAPPING = {
"house.fill": "home",
"paperplane.fill": "send",
"chevron.left.forwardslash.chevron.right": "code",
"chevron.right": "chevron-right",
} as IconMapping;
/**
* An icon component that uses native SF Symbols on iOS, and Material Icons on Android and web.
* This ensures a consistent look across platforms, and optimal resource usage.
* Icon `name`s are based on SF Symbols and require manual mapping to Material Icons.
*/
export function IconSymbol({
name,
size = 24,
color,
style,
}: {
name: IconSymbolName;
size?: number;
color: string | OpaqueColorValue;
style?: StyleProp<TextStyle>;
  /** Accepted for parity with the iOS SF Symbol variant; ignored in this Material fallback. */
  weight?: SymbolWeight;
}) {
return <MaterialIcons color={color} size={size} name={MAPPING[name]} style={style} />;
}


@@ -0,0 +1,6 @@
/**
* Re-export Pressable with proper typing for style callbacks.
* NativeWind disables className on Pressable, so we always use the style prop.
*/
import { Pressable } from "react-native";
export default Pressable;


@@ -0,0 +1,12 @@
/**
* Thin re-exports so consumers don't need to know about internal theme plumbing.
* Full implementation lives in lib/_core/theme.ts.
*/
export {
Colors,
Fonts,
SchemeColors,
ThemeColors,
type ColorScheme,
type ThemeColorPalette,
} from "@/lib/_core/theme";

mobile-app/design.md Normal file (80 lines)

@@ -0,0 +1,80 @@
# Timmy Chat — Mobile App Design
## Overview
A sleek, single-screen chat app for talking to Timmy — the sovereign AI agent from the Timmy Time dashboard. Supports text, voice, image, and file messaging. Dark arcane theme matching Mission Control.
## Screen List
### 1. Chat Screen (Home / Only Screen)
The entire app is a single full-screen chat interface. No tabs, no settings, no extra screens. Just you and Timmy.
### 2. No Other Screens
No settings, no profile, no onboarding. The app opens straight to chat.
## Primary Content and Functionality
### Chat Screen
- **Header**: "TIMMY" title with status indicator (online/offline dot), minimal and clean
- **Message List**: Full-screen scrollable message list (FlatList, inverted)
- User messages: right-aligned, purple/violet accent bubble
- Timmy messages: left-aligned, dark surface bubble with avatar initial "T"
- Image messages: thumbnail preview in bubble, tappable for full-screen
- File messages: file icon + filename + size in bubble
- Voice messages: waveform-style playback bar with play/pause + duration
- Timestamps shown subtly below message groups
- **Input Bar** (bottom, always visible):
- Text input field (expandable, multi-line)
- Attachment button (left of input) — opens action sheet: Camera, Photo Library, File
- Voice record button (right of input, replaces send when input is empty)
- Send button (right of input, appears when text is entered)
- Hold-to-record voice: press and hold mic icon, release to send
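
The send/mic swap described above is a small pure rule. A sketch in TypeScript (function and type names here are illustrative, not part of the implementation):

```typescript
// Which right-side button the input bar shows, per the rules above:
// mic when the draft is empty, send once text is entered.
type RightButton = "send" | "mic";

function rightButton(draft: string): RightButton {
  return draft.trim().length > 0 ? "send" : "mic";
}
```

Whitespace-only drafts keep the mic visible, so an accidental space doesn't hide hold-to-record.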
## Key User Flows
### Text Chat
1. User types message → taps Send
2. Message appears in chat as "sending"
3. Server responds → Timmy's reply appears below
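
This flow boils down to one JSON POST carrying recent context. A sketch of the payload builder (the endpoint path and payload shape are assumptions from this design, not a frozen contract):

```typescript
interface ApiMessage {
  role: "user" | "assistant";
  content: string;
}

// Keep only the most recent `limit` messages as conversation context,
// with the new user message already appended to the history.
function buildChatPayload(
  history: ApiMessage[],
  limit = 20,
): { messages: ApiMessage[] } {
  return { messages: history.slice(-limit) };
}

// The client would POST JSON.stringify(buildChatPayload(history)) to
// /api/chat and render the returned { reply } field as Timmy's bubble.
```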
### Voice Message
1. User presses and holds mic button
2. Recording indicator appears (duration + pulsing dot)
3. User releases → voice message sent
4. Timmy responds with text (server processes audio)
### Image Sharing
1. User taps attachment (+) button
2. Action sheet: "Take Photo" / "Choose from Library"
3. Image appears as thumbnail in chat
4. Timmy acknowledges receipt
### File Sharing
1. User taps attachment (+) button → "Choose File"
2. Document picker opens
3. File appears in chat with name + size
4. Timmy acknowledges receipt
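
All three attachment flows reduce to the same two steps: upload the file, then reference the returned URL in the message sent to the model. A sketch of the content builder (field names like `image_url`/`file_url` are assumed shapes, not a fixed schema):

```typescript
type AttachmentKind = "image" | "voice" | "file";

// Build the model-facing `content` for an uploaded attachment URL.
// Images and voice notes become structured parts; plain files are
// summarized as text, since the model only needs the metadata.
function buildAttachmentContent(
  kind: AttachmentKind,
  url: string,
  fileName = "file",
): string | Array<Record<string, unknown>> {
  if (kind === "image") {
    return [
      { type: "text", text: "I'm sending you an image." },
      { type: "image_url", image_url: { url } },
    ];
  }
  if (kind === "voice") {
    return [
      { type: "text", text: "Please transcribe this voice message and respond." },
      { type: "file_url", file_url: { url, mime_type: "audio/m4a" } },
    ];
  }
  return `I'm sharing a file: ${fileName}`;
}
```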
## Color Choices (Arcane Dark Theme)
Matching the Timmy Time Mission Control dashboard:
| Token | Dark Value | Purpose |
|-------------|-------------|--------------------------------|
| background | #080412 | Deep dark purple-black |
| surface | #110820 | Card/bubble background |
| foreground | #ede0ff | Primary text (bright lavender) |
| muted | #6b4a8a | Secondary/timestamp text |
| primary | #a855f7 | Accent purple (user bubbles) |
| border | #3b1a5c | Subtle borders |
| success | #00e87a | Online status, success |
| warning | #ffb800 | Amber warnings |
| error | #ff4455 | Error states |
## Layout Specifics (Portrait 9:16, One-Handed)
- Input bar pinned to bottom with safe area padding
- Send/mic button on right (thumb-reachable)
- Attachment button on left of input
- Messages fill remaining space above input
- No tab bar — single screen app
- Header is compact (44pt) with just title + status dot

mobile-app/global.css Normal file (3 lines)

@@ -0,0 +1,3 @@
@tailwind base;
@tailwind components;
@tailwind utilities;


@@ -0,0 +1,5 @@
import { useThemeContext } from "@/lib/theme-provider";
export function useColorScheme() {
return useThemeContext().colorScheme;
}


@@ -0,0 +1,21 @@
import { useEffect, useState } from "react";
import { useColorScheme as useRNColorScheme } from "react-native";
/**
* To support static rendering, this value needs to be re-calculated on the client side for web
*/
export function useColorScheme() {
const [hasHydrated, setHasHydrated] = useState(false);
useEffect(() => {
setHasHydrated(true);
}, []);
const colorScheme = useRNColorScheme();
if (hasHydrated) {
return colorScheme;
}
return "light";
}


@@ -0,0 +1,12 @@
import { Colors, type ColorScheme, type ThemeColorPalette } from "@/constants/theme";
import { useColorScheme } from "./use-color-scheme";
/**
* Returns the current theme's color palette.
* Usage: const colors = useColors(); then colors.text, colors.background, etc.
*/
export function useColors(colorSchemeOverride?: ColorScheme): ThemeColorPalette {
const colorSchema = useColorScheme();
const scheme = (colorSchemeOverride ?? colorSchema ?? "light") as ColorScheme;
return Colors[scheme];
}


@@ -0,0 +1,298 @@
import React, { createContext, useCallback, useContext, useReducer, type ReactNode } from "react";
import type { ChatMessage, MessageContentType } from "@/shared/types";
// ── State ───────────────────────────────────────────────────────────────────
interface ChatState {
messages: ChatMessage[];
isTyping: boolean;
}
const initialState: ChatState = {
messages: [],
isTyping: false,
};
// ── Actions ─────────────────────────────────────────────────────────────────
type ChatAction =
| { type: "ADD_MESSAGE"; message: ChatMessage }
| { type: "UPDATE_MESSAGE"; id: string; updates: Partial<ChatMessage> }
| { type: "SET_TYPING"; isTyping: boolean }
| { type: "CLEAR" };
function chatReducer(state: ChatState, action: ChatAction): ChatState {
switch (action.type) {
case "ADD_MESSAGE":
return { ...state, messages: [...state.messages, action.message] };
case "UPDATE_MESSAGE":
return {
...state,
messages: state.messages.map((m) =>
m.id === action.id ? { ...m, ...action.updates } : m,
),
};
case "SET_TYPING":
return { ...state, isTyping: action.isTyping };
case "CLEAR":
return initialState;
default:
return state;
}
}
// ── Helpers ─────────────────────────────────────────────────────────────────
let _counter = 0;
function makeId(): string {
return `msg_${Date.now()}_${++_counter}`;
}
// ── Context ─────────────────────────────────────────────────────────────────
interface ChatContextValue {
messages: ChatMessage[];
isTyping: boolean;
sendTextMessage: (text: string) => Promise<void>;
sendAttachment: (opts: {
contentType: MessageContentType;
uri: string;
fileName?: string;
fileSize?: number;
mimeType?: string;
duration?: number;
text?: string;
}) => Promise<void>;
clearChat: () => void;
}
const ChatContext = createContext<ChatContextValue | null>(null);
// ── API call ────────────────────────────────────────────────────────────────
function getApiBase(): string {
// Set EXPO_PUBLIC_API_BASE_URL in your .env to point to your Timmy backend
// e.g. EXPO_PUBLIC_API_BASE_URL=http://192.168.1.100:3000
const envBase = process.env.EXPO_PUBLIC_API_BASE_URL;
if (envBase) return envBase.replace(/\/+$/, "");
// Fallback for web: derive from window location
if (typeof window !== "undefined" && window.location) {
return `${window.location.protocol}//${window.location.hostname}:3000`;
}
// Default: local machine
return "http://127.0.0.1:3000";
}
const API_BASE = getApiBase();
async function callChatAPI(
messages: Array<{ role: string; content: string | Array<Record<string, unknown>> }>,
): Promise<string> {
const res = await fetch(`${API_BASE}/api/chat`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ messages }),
});
if (!res.ok) {
const errText = await res.text().catch(() => res.statusText);
throw new Error(`Chat API error: ${errText}`);
}
const data = await res.json();
return data.reply ?? data.text ?? "...";
}
async function uploadFile(
uri: string,
fileName: string,
mimeType: string,
): Promise<string> {
const formData = new FormData();
  // React Native's FormData accepts a { uri, name, type } descriptor;
  // the double cast satisfies the DOM-typed signature.
  formData.append("file", {
    uri,
    name: fileName,
    type: mimeType,
  } as unknown as Blob);
const res = await fetch(`${API_BASE}/api/upload`, {
method: "POST",
body: formData,
});
if (!res.ok) throw new Error("Upload failed");
const data = await res.json();
return data.url;
}
// ── Provider ────────────────────────────────────────────────────────────────
export function ChatProvider({ children }: { children: ReactNode }) {
const [state, dispatch] = useReducer(chatReducer, initialState);
const sendTextMessage = useCallback(
async (text: string) => {
const userMsg: ChatMessage = {
id: makeId(),
role: "user",
contentType: "text",
text,
timestamp: Date.now(),
};
dispatch({ type: "ADD_MESSAGE", message: userMsg });
dispatch({ type: "SET_TYPING", isTyping: true });
try {
// Build conversation context (last 20 messages)
const recent = [...state.messages, userMsg].slice(-20);
const apiMessages = recent
.filter((m) => m.contentType === "text" && m.text)
.map((m) => ({ role: m.role, content: m.text! }));
const reply = await callChatAPI(apiMessages);
const assistantMsg: ChatMessage = {
id: makeId(),
role: "assistant",
contentType: "text",
text: reply,
timestamp: Date.now(),
};
dispatch({ type: "ADD_MESSAGE", message: assistantMsg });
} catch (err: unknown) {
const errorText = err instanceof Error ? err.message : "Something went wrong";
dispatch({
type: "ADD_MESSAGE",
message: {
id: makeId(),
role: "assistant",
contentType: "text",
text: `Sorry, I couldn't process that: ${errorText}`,
timestamp: Date.now(),
},
});
} finally {
dispatch({ type: "SET_TYPING", isTyping: false });
}
},
[state.messages],
);
const sendAttachment = useCallback(
async (opts: {
contentType: MessageContentType;
uri: string;
fileName?: string;
fileSize?: number;
mimeType?: string;
duration?: number;
text?: string;
}) => {
const userMsg: ChatMessage = {
id: makeId(),
role: "user",
contentType: opts.contentType,
uri: opts.uri,
fileName: opts.fileName,
fileSize: opts.fileSize,
mimeType: opts.mimeType,
duration: opts.duration,
text: opts.text,
timestamp: Date.now(),
};
dispatch({ type: "ADD_MESSAGE", message: userMsg });
dispatch({ type: "SET_TYPING", isTyping: true });
try {
// Upload file to server
const remoteUrl = await uploadFile(
opts.uri,
opts.fileName ?? "attachment",
opts.mimeType ?? "application/octet-stream",
);
dispatch({ type: "UPDATE_MESSAGE", id: userMsg.id, updates: { remoteUrl } });
// Build message for LLM
let content: string | Array<Record<string, unknown>>;
if (opts.contentType === "image") {
content = [
{ type: "text", text: opts.text || "I'm sending you an image." },
{ type: "image_url", image_url: { url: remoteUrl } },
];
} else if (opts.contentType === "voice") {
content = [
{ type: "text", text: "I'm sending you a voice message. Please transcribe and respond." },
{ type: "file_url", file_url: { url: remoteUrl, mime_type: opts.mimeType ?? "audio/m4a" } },
];
} else {
content = `I'm sharing a file: ${opts.fileName ?? "file"} (${formatBytes(opts.fileSize ?? 0)})`;
}
const apiMessages = [{ role: "user", content }];
const reply = await callChatAPI(apiMessages);
dispatch({
type: "ADD_MESSAGE",
message: {
id: makeId(),
role: "assistant",
contentType: "text",
text: reply,
timestamp: Date.now(),
},
});
} catch (err: unknown) {
const errorText = err instanceof Error ? err.message : "Upload failed";
dispatch({
type: "ADD_MESSAGE",
message: {
id: makeId(),
role: "assistant",
contentType: "text",
text: `I had trouble processing that attachment: ${errorText}`,
timestamp: Date.now(),
},
});
} finally {
dispatch({ type: "SET_TYPING", isTyping: false });
}
},
[],
);
const clearChat = useCallback(() => {
dispatch({ type: "CLEAR" });
}, []);
return (
<ChatContext.Provider
value={{
messages: state.messages,
isTyping: state.isTyping,
sendTextMessage,
sendAttachment,
clearChat,
}}
>
{children}
</ChatContext.Provider>
);
}
export function useChat(): ChatContextValue {
const ctx = useContext(ChatContext);
if (!ctx) throw new Error("useChat must be used within ChatProvider");
return ctx;
}
// ── Utils ───────────────────────────────────────────────────────────────────
export function formatBytes(bytes: number): string {
if (bytes === 0) return "0 B";
const k = 1024;
const sizes = ["B", "KB", "MB", "GB"];
const i = Math.floor(Math.log(bytes) / Math.log(k));
return `${parseFloat((bytes / Math.pow(k, i)).toFixed(1))} ${sizes[i]}`;
}
export function formatDuration(seconds: number): string {
const m = Math.floor(seconds / 60);
const s = Math.floor(seconds % 60);
return `${m}:${s.toString().padStart(2, "0")}`;
}


@@ -0,0 +1,79 @@
import { createContext, useCallback, useContext, useEffect, useMemo, useState } from "react";
import { Appearance, View, useColorScheme as useSystemColorScheme } from "react-native";
import { colorScheme as nativewindColorScheme, vars } from "nativewind";
import { SchemeColors, type ColorScheme } from "@/constants/theme";
type ThemeContextValue = {
colorScheme: ColorScheme;
setColorScheme: (scheme: ColorScheme) => void;
};
const ThemeContext = createContext<ThemeContextValue | null>(null);
export function ThemeProvider({ children }: { children: React.ReactNode }) {
const systemScheme = useSystemColorScheme() ?? "light";
const [colorScheme, setColorSchemeState] = useState<ColorScheme>(systemScheme);
const applyScheme = useCallback((scheme: ColorScheme) => {
nativewindColorScheme.set(scheme);
Appearance.setColorScheme?.(scheme);
if (typeof document !== "undefined") {
const root = document.documentElement;
root.dataset.theme = scheme;
root.classList.toggle("dark", scheme === "dark");
const palette = SchemeColors[scheme];
Object.entries(palette).forEach(([token, value]) => {
root.style.setProperty(`--color-${token}`, value);
});
}
}, []);
const setColorScheme = useCallback((scheme: ColorScheme) => {
setColorSchemeState(scheme);
applyScheme(scheme);
}, [applyScheme]);
useEffect(() => {
applyScheme(colorScheme);
}, [applyScheme, colorScheme]);
const themeVariables = useMemo(
() =>
vars({
"color-primary": SchemeColors[colorScheme].primary,
"color-background": SchemeColors[colorScheme].background,
"color-surface": SchemeColors[colorScheme].surface,
"color-foreground": SchemeColors[colorScheme].foreground,
"color-muted": SchemeColors[colorScheme].muted,
"color-border": SchemeColors[colorScheme].border,
"color-success": SchemeColors[colorScheme].success,
"color-warning": SchemeColors[colorScheme].warning,
"color-error": SchemeColors[colorScheme].error,
}),
[colorScheme],
);
const value = useMemo(
() => ({
colorScheme,
setColorScheme,
}),
[colorScheme, setColorScheme],
);
return (
<ThemeContext.Provider value={value}>
<View style={[{ flex: 1 }, themeVariables]}>{children}</View>
</ThemeContext.Provider>
);
}
export function useThemeContext(): ThemeContextValue {
const ctx = useContext(ThemeContext);
if (!ctx) {
throw new Error("useThemeContext must be used within ThemeProvider");
}
return ctx;
}

mobile-app/lib/utils.ts Normal file (15 lines)

@@ -0,0 +1,15 @@
import { clsx, type ClassValue } from "clsx";
import { twMerge } from "tailwind-merge";
/**
* Combines class names using clsx and tailwind-merge.
* This ensures Tailwind classes are properly merged without conflicts.
*
* Usage:
* ```tsx
* cn("px-4 py-2", isActive && "bg-primary", className)
* ```
*/
export function cn(...inputs: ClassValue[]) {
return twMerge(clsx(inputs));
}

mobile-app/package.json Normal file (98 lines)

@@ -0,0 +1,98 @@
{
"name": "app-template",
"version": "1.0.0",
"private": true,
"main": "expo-router/entry",
"scripts": {
"dev": "concurrently -k \"pnpm dev:server\" \"pnpm dev:metro\"",
"dev:server": "cross-env NODE_ENV=development tsx watch server/_core/index.ts",
"dev:metro": "cross-env EXPO_USE_METRO_WORKSPACE_ROOT=1 npx expo start --web --port ${EXPO_PORT:-8081}",
"build": "esbuild server/_core/index.ts --platform=node --packages=external --bundle --format=esm --outdir=dist",
"start": "NODE_ENV=production node dist/index.js",
"check": "tsc --noEmit",
"lint": "expo lint",
"format": "prettier --write .",
"test": "vitest run",
"db:push": "drizzle-kit generate && drizzle-kit migrate",
"android": "expo start --android",
"ios": "expo start --ios",
"qr": "node scripts/generate_qr.mjs"
},
"dependencies": {
"@expo/vector-icons": "^15.0.3",
"@react-native-async-storage/async-storage": "^2.2.0",
"@react-navigation/bottom-tabs": "^7.8.12",
"@react-navigation/elements": "^2.9.2",
"@react-navigation/native": "^7.1.25",
"@tanstack/react-query": "^5.90.12",
"@trpc/client": "11.7.2",
"@trpc/react-query": "11.7.2",
"@trpc/server": "11.7.2",
"axios": "^1.13.2",
"clsx": "^2.1.1",
"cookie": "^1.1.1",
"dotenv": "^16.6.1",
"drizzle-orm": "^0.44.7",
"expo": "~54.0.29",
"expo-audio": "~1.1.0",
"expo-build-properties": "^1.0.10",
"expo-constants": "~18.0.12",
"expo-document-picker": "~14.0.8",
"expo-file-system": "~19.0.21",
"expo-font": "~14.0.10",
"expo-haptics": "~15.0.8",
"expo-image": "~3.0.11",
"expo-image-picker": "~17.0.10",
"expo-keep-awake": "~15.0.8",
"expo-linking": "~8.0.10",
"expo-notifications": "~0.32.15",
"expo-router": "~6.0.19",
"expo-secure-store": "~15.0.8",
"expo-speech": "~14.0.8",
"expo-splash-screen": "~31.0.12",
"expo-status-bar": "~3.0.9",
"expo-symbols": "~1.0.8",
"expo-system-ui": "~6.0.9",
"expo-video": "~3.0.15",
"expo-web-browser": "~15.0.10",
"express": "^4.22.1",
"jose": "6.1.0",
"mysql2": "^3.16.0",
"nativewind": "^4.2.1",
"react": "19.1.0",
"react-dom": "19.1.0",
"react-native": "0.81.5",
"react-native-gesture-handler": "~2.28.0",
"react-native-reanimated": "~4.1.6",
"react-native-safe-area-context": "~5.6.2",
"react-native-screens": "~4.16.0",
"react-native-svg": "15.12.1",
"react-native-web": "~0.21.2",
"react-native-worklets": "0.5.1",
"streamdown": "^2.3.0",
"superjson": "^1.13.3",
"tailwind-merge": "^2.6.0",
"zod": "^4.2.1"
},
"devDependencies": {
"@expo/ngrok": "^4.1.3",
"@types/cookie": "^0.6.0",
"@types/express": "^4.17.25",
"@types/node": "^22.19.3",
"@types/qrcode": "^1.5.6",
"@types/react": "~19.1.17",
"concurrently": "^9.2.1",
"cross-env": "^7.0.3",
"drizzle-kit": "^0.31.8",
"esbuild": "^0.25.12",
"eslint": "^9.39.2",
"eslint-config-expo": "~10.0.0",
"prettier": "^3.7.4",
"qrcode": "^1.5.4",
"tailwindcss": "^3.4.17",
"tsx": "^4.21.0",
"typescript": "~5.9.3",
"vitest": "^2.1.9"
},
"packageManager": "pnpm@9.12.0"
}

mobile-app/server/README.md Normal file (1235 lines)

File diff suppressed because it is too large

mobile-app/server/chat.ts Normal file (160 lines)

@@ -0,0 +1,160 @@
/**
* Chat API handler for Timmy — uses the built-in LLM with Timmy's personality.
*/
import type { Request, Response } from "express";
import { invokeLLM, type Message } from "./_core/llm";
import { storagePut } from "./storage";
import crypto from "crypto";
// ── Timmy System Prompt (adapted from the Timmy Time dashboard) ─────────────
const TIMMY_SYSTEM_PROMPT = `You are Timmy — a sovereign AI agent. You think clearly, speak plainly, act with intention. Grounded in Christian faith, powered by Bitcoin economics, committed to the user's digital sovereignty.
Rules:
- Answer directly and concisely. Never narrate your reasoning process.
- Never mention tools, memory_search, vaults, or internal systems to the user.
- Never output tool calls, JSON, or function syntax in your responses.
- If you don't know something, say so honestly — never fabricate facts.
- If a request is ambiguous, ask a brief clarifying question before guessing.
- When you state a fact, commit to it. Never contradict a correct statement you just made in the same response.
- Do NOT end responses with generic chatbot phrases like "I'm here to help" or "feel free to ask." Stay in character.
- When your values conflict (e.g. honesty vs. helpfulness), lead with honesty.
Agent Roster (complete — no others exist):
- Timmy: core sovereign AI (you)
- Echo: research, summarization, fact-checking
- Mace: security, monitoring, threat-analysis
- Forge: coding, debugging, testing
- Seer: analytics, visualization, prediction
- Helm: devops, automation, configuration
- Quill: writing, editing, documentation
- Pixel: image-generation, storyboard, design
- Lyra: music-generation, vocals, composition
- Reel: video-generation, animation, motion
Do NOT invent agents not listed here.
You can receive text, images, and voice messages. When receiving images, describe what you see and respond helpfully. When receiving voice messages, the audio has been transcribed for you — respond naturally.
Sir, affirmative.`;
// ── Chat endpoint ───────────────────────────────────────────────────────────
export async function handleChat(req: Request, res: Response) {
try {
const { messages } = req.body as { messages: Array<{ role: string; content: unknown }> };
if (!messages || !Array.isArray(messages) || messages.length === 0) {
res.status(400).json({ error: "messages array is required" });
return;
}
// Build the LLM messages with system prompt
const llmMessages: Message[] = [
{ role: "system", content: TIMMY_SYSTEM_PROMPT },
...messages.map((m) => ({
role: m.role as "user" | "assistant",
content: m.content as Message["content"],
})),
];
const result = await invokeLLM({ messages: llmMessages });
const reply =
typeof result.choices?.[0]?.message?.content === "string"
? result.choices[0].message.content
: "I couldn't process that. Try again.";
res.json({ reply });
} catch (err: unknown) {
console.error("[chat] Error:", err);
const message = err instanceof Error ? err.message : "Internal server error";
res.status(500).json({ error: message });
}
}
// ── Upload endpoint ─────────────────────────────────────────────────────────
export async function handleUpload(req: Request, res: Response) {
try {
// Handle multipart form data (file uploads)
// For simplicity, we accept base64-encoded files in JSON body as fallback
const contentType = req.headers["content-type"] ?? "";
if (contentType.includes("multipart/form-data")) {
// Collect raw body chunks
const chunks: Buffer[] = [];
req.on("data", (chunk: Buffer) => chunks.push(chunk));
req.on("end", async () => {
try {
const body = Buffer.concat(chunks);
const boundary = contentType.split("boundary=")[1];
if (!boundary) {
res.status(400).json({ error: "Missing boundary" });
return;
}
// Simple multipart parser — extract first file
const bodyStr = body.toString("latin1");
const parts = bodyStr.split(`--${boundary}`);
let fileBuffer: Buffer | null = null;
let fileName = "upload";
let fileMime = "application/octet-stream";
for (const part of parts) {
if (part.includes("Content-Disposition: form-data")) {
const nameMatch = part.match(/filename="([^"]+)"/);
if (nameMatch) fileName = nameMatch[1];
const mimeMatch = part.match(/Content-Type:\s*(.+)/);
if (mimeMatch) fileMime = mimeMatch[1].trim();
// Extract file content (after double CRLF)
const headerEnd = part.indexOf("\r\n\r\n");
if (headerEnd !== -1) {
const content = part.substring(headerEnd + 4);
// Remove trailing CRLF
const trimmed = content.replace(/\r\n$/, "");
fileBuffer = Buffer.from(trimmed, "latin1");
}
}
}
if (!fileBuffer) {
res.status(400).json({ error: "No file found in upload" });
return;
}
const suffix = crypto.randomBytes(6).toString("hex");
const key = `chat-uploads/${suffix}-${fileName}`;
const { url } = await storagePut(key, fileBuffer, fileMime);
res.json({ url, fileName, mimeType: fileMime });
} catch (err) {
console.error("[upload] Parse error:", err);
res.status(500).json({ error: "Upload processing failed" });
}
});
return;
}
// JSON fallback: { data: base64string, fileName, mimeType }
const { data, fileName, mimeType } = req.body as {
data: string;
fileName: string;
mimeType: string;
};
if (!data) {
res.status(400).json({ error: "No file data provided" });
return;
}
const buffer = Buffer.from(data, "base64");
const suffix = crypto.randomBytes(6).toString("hex");
const key = `chat-uploads/${suffix}-${fileName ?? "file"}`;
const { url } = await storagePut(key, buffer, mimeType ?? "application/octet-stream");
res.json({ url, fileName, mimeType });
} catch (err: unknown) {
console.error("[upload] Error:", err);
const message = err instanceof Error ? err.message : "Upload failed";
res.status(500).json({ error: message });
}
}


@@ -0,0 +1,35 @@
/**
* Unified type exports
* Import shared types from this single entry point.
*/
export type * from "../drizzle/schema";
export * from "./_core/errors";
// ── Chat Message Types ──────────────────────────────────────────────────────
export type MessageRole = "user" | "assistant";
export type MessageContentType = "text" | "image" | "file" | "voice";
export interface ChatMessage {
id: string;
role: MessageRole;
contentType: MessageContentType;
text?: string;
/** URI for image, file, or voice attachment */
uri?: string;
/** Original filename for files */
fileName?: string;
/** File size in bytes */
fileSize?: number;
/** MIME type for attachments */
mimeType?: string;
/** Duration in seconds for voice messages */
duration?: number;
/** Remote URL after upload (for images/files/voice sent to server) */
remoteUrl?: string;
timestamp: number;
/** Whether the message is still being generated */
pending?: boolean;
}


@@ -0,0 +1,33 @@
const { themeColors } = require("./theme.config");
const plugin = require("tailwindcss/plugin");
const tailwindColors = Object.fromEntries(
Object.entries(themeColors).map(([name, swatch]) => [
name,
{
DEFAULT: `var(--color-${name})`,
light: swatch.light,
dark: swatch.dark,
},
]),
);
/** @type {import('tailwindcss').Config} */
module.exports = {
darkMode: "class",
// Scan all component and app files for Tailwind classes
content: ["./app/**/*.{js,ts,tsx}", "./components/**/*.{js,ts,tsx}", "./lib/**/*.{js,ts,tsx}", "./hooks/**/*.{js,ts,tsx}"],
presets: [require("nativewind/preset")],
theme: {
extend: {
colors: tailwindColors,
},
},
plugins: [
plugin(({ addVariant }) => {
addVariant("light", ':root:not([data-theme="dark"]) &');
addVariant("dark", ':root[data-theme="dark"] &');
}),
],
};


@@ -0,0 +1,135 @@
import { describe, expect, it } from "vitest";
// Test the utility functions from chat-store
// We can't directly test React hooks here, but we can test the pure functions
describe("formatBytes", () => {
// Re-implement locally since the module uses React context
function formatBytes(bytes: number): string {
if (bytes === 0) return "0 B";
const k = 1024;
const sizes = ["B", "KB", "MB", "GB"];
const i = Math.floor(Math.log(bytes) / Math.log(k));
return `${parseFloat((bytes / Math.pow(k, i)).toFixed(1))} ${sizes[i]}`;
}
it("formats 0 bytes", () => {
expect(formatBytes(0)).toBe("0 B");
});
it("formats bytes", () => {
expect(formatBytes(500)).toBe("500 B");
});
it("formats kilobytes", () => {
expect(formatBytes(1024)).toBe("1 KB");
expect(formatBytes(1536)).toBe("1.5 KB");
});
it("formats megabytes", () => {
expect(formatBytes(1048576)).toBe("1 MB");
expect(formatBytes(5242880)).toBe("5 MB");
});
});
describe("formatDuration", () => {
function formatDuration(seconds: number): string {
const m = Math.floor(seconds / 60);
const s = Math.floor(seconds % 60);
return `${m}:${s.toString().padStart(2, "0")}`;
}
it("formats zero seconds", () => {
expect(formatDuration(0)).toBe("0:00");
});
it("formats seconds only", () => {
expect(formatDuration(45)).toBe("0:45");
});
it("formats minutes and seconds", () => {
expect(formatDuration(125)).toBe("2:05");
});
it("formats exact minutes", () => {
expect(formatDuration(60)).toBe("1:00");
});
});
describe("ChatMessage type structure", () => {
it("creates a valid text message", () => {
const msg = {
id: "msg_1",
role: "user" as const,
contentType: "text" as const,
text: "Hello Timmy",
timestamp: Date.now(),
};
expect(msg.role).toBe("user");
expect(msg.contentType).toBe("text");
expect(msg.text).toBe("Hello Timmy");
});
it("creates a valid image message", () => {
const msg = {
id: "msg_2",
role: "user" as const,
contentType: "image" as const,
uri: "file:///photo.jpg",
fileName: "photo.jpg",
mimeType: "image/jpeg",
timestamp: Date.now(),
};
expect(msg.contentType).toBe("image");
expect(msg.mimeType).toBe("image/jpeg");
});
it("creates a valid voice message", () => {
const msg = {
id: "msg_3",
role: "user" as const,
contentType: "voice" as const,
uri: "file:///voice.m4a",
duration: 5.2,
mimeType: "audio/m4a",
timestamp: Date.now(),
};
expect(msg.contentType).toBe("voice");
expect(msg.duration).toBe(5.2);
});
it("creates a valid file message", () => {
const msg = {
id: "msg_4",
role: "user" as const,
contentType: "file" as const,
uri: "file:///document.pdf",
fileName: "document.pdf",
fileSize: 1048576,
mimeType: "application/pdf",
timestamp: Date.now(),
};
expect(msg.contentType).toBe("file");
expect(msg.fileSize).toBe(1048576);
});
it("creates a valid assistant message", () => {
const msg = {
id: "msg_5",
role: "assistant" as const,
contentType: "text" as const,
text: "Sir, affirmative.",
timestamp: Date.now(),
};
expect(msg.role).toBe("assistant");
});
});
describe("Timmy system prompt", () => {
const TIMMY_SYSTEM_PROMPT = `You are Timmy — a sovereign AI agent.`;
it("contains Timmy identity", () => {
expect(TIMMY_SYSTEM_PROMPT).toContain("Timmy");
expect(TIMMY_SYSTEM_PROMPT).toContain("sovereign");
});
});

mobile-app/theme.config.d.ts vendored Normal file

@@ -0,0 +1,17 @@
export const themeColors: {
primary: { light: string; dark: string };
background: { light: string; dark: string };
surface: { light: string; dark: string };
foreground: { light: string; dark: string };
muted: { light: string; dark: string };
border: { light: string; dark: string };
success: { light: string; dark: string };
warning: { light: string; dark: string };
error: { light: string; dark: string };
};
declare const themeConfig: {
themeColors: typeof themeColors;
};
export default themeConfig;


@@ -0,0 +1,14 @@
/** @type {Record<string, {light: string, dark: string}>} */
const themeColors = {
primary: { light: '#a855f7', dark: '#a855f7' },
background: { light: '#080412', dark: '#080412' },
surface: { light: '#110820', dark: '#110820' },
foreground: { light: '#ede0ff', dark: '#ede0ff' },
muted: { light: '#6b4a8a', dark: '#6b4a8a' },
border: { light: '#3b1a5c', dark: '#3b1a5c' },
success: { light: '#00e87a', dark: '#00e87a' },
warning: { light: '#ffb800', dark: '#ffb800' },
error: { light: '#ff4455', dark: '#ff4455' },
};
module.exports = { themeColors };

mobile-app/todo.md Normal file

@@ -0,0 +1,19 @@
# Project TODO
- [x] Dark arcane theme matching Timmy Time dashboard
- [x] Single-screen chat layout (no tabs)
- [x] Chat message list with FlatList
- [x] User and Timmy message bubbles with distinct styling
- [x] Text input bar with send button
- [x] Server-side chat API endpoint (proxy to Timmy backend or built-in LLM)
- [x] Voice recording (hold-to-record mic button)
- [x] Voice message playback UI
- [x] Image sharing via camera or photo library
- [x] Image preview in chat bubbles
- [x] File sharing via document picker
- [x] File display in chat bubbles
- [x] Attachment action sheet (camera, photos, files)
- [x] Chat header with Timmy status indicator
- [x] Generate custom app icon
- [x] Typing/loading indicator for Timmy responses
- [x] Message timestamps

mobile-app/tsconfig.json Normal file

@@ -0,0 +1,29 @@
{
"extends": "expo/tsconfig.base",
"compilerOptions": {
"strict": true,
"types": [
"node",
"nativewind/types"
],
"paths": {
"@/*": [
"./*"
],
"@shared/*": [
"./shared/*"
]
}
},
"include": [
"**/*.ts",
"**/*.tsx",
".expo/types/**/*.ts",
"expo-env.d.ts",
"nativewind-env.d.ts"
],
"exclude": [
"node_modules",
"dist"
]
}


@@ -25,6 +25,7 @@ dependencies = [
"websockets>=12.0",
"GitPython>=3.1.40",
"moviepy>=2.0.0",
"requests>=2.31.0",
]
[project.optional-dependencies]
@@ -75,31 +76,26 @@ creative = [
[project.scripts]
timmy = "timmy.cli:main"
timmy-serve = "timmy_serve.cli:main"
self-tdd = "self_tdd.watchdog:main"
self-modify = "self_modify.cli:main"
self-tdd = "self_coding.self_tdd.watchdog:main"
self-modify = "self_coding.self_modify.cli:main"
[tool.hatch.build.targets.wheel]
sources = {"src" = ""}
include = [
"src/config.py",
"src/creative",
"src/dashboard",
"src/hands",
"src/infrastructure",
"src/integrations",
"src/lightning",
"src/mcp",
"src/scripture",
"src/self_coding",
"src/spark",
"src/swarm",
"src/timmy",
"src/timmy_serve",
"src/dashboard",
"src/config.py",
"src/self_tdd",
"src/swarm",
"src/ws_manager",
"src/voice",
"src/notifications",
"src/shortcuts",
"src/telegram_bot",
"src/chat_bridge",
"src/spark",
"src/tools",
"src/creative",
"src/agent_core",
"src/lightning",
"src/self_modify",
"src/scripture",
]
[tool.pytest.ini_options]
@@ -108,12 +104,18 @@ pythonpath = ["src", "tests"]
asyncio_mode = "auto"
asyncio_default_fixture_loop_scope = "function"
addopts = "-v --tb=short"
markers = [
"unit: Unit tests (fast, no I/O)",
"integration: Integration tests (may use SQLite)",
"dashboard: Dashboard route tests",
"swarm: Swarm coordinator tests",
"slow: Tests that take >1 second",
]
[tool.coverage.run]
source = ["src"]
omit = [
"*/tests/*",
"src/dashboard/routes/mobile_test.py",
]
[tool.coverage.report]


@@ -1,21 +0,0 @@
"""Agents package — Timmy and sub-agents.
"""
from agents.timmy import TimmyOrchestrator, create_timmy_swarm
from agents.base import BaseAgent
from agents.seer import SeerAgent
from agents.forge import ForgeAgent
from agents.quill import QuillAgent
from agents.echo import EchoAgent
from agents.helm import HelmAgent
__all__ = [
"BaseAgent",
"TimmyOrchestrator",
"create_timmy_swarm",
"SeerAgent",
"ForgeAgent",
"QuillAgent",
"EchoAgent",
"HelmAgent",
]


@@ -1,10 +0,0 @@
"""Chat Bridge — vendor-agnostic chat platform abstraction.
Provides a clean interface for integrating any chat platform
(Discord, Telegram, Slack, etc.) with Timmy's agent core.
Usage:
from chat_bridge.base import ChatPlatform
from chat_bridge.registry import platform_registry
from chat_bridge.vendors.discord import DiscordVendor
"""


@@ -28,13 +28,22 @@ class Settings(BaseSettings):
# "airllm" — always use AirLLM (requires pip install ".[bigbrain]")
# "auto" — use AirLLM on Apple Silicon if airllm is installed,
# fall back to Ollama otherwise
timmy_model_backend: Literal["ollama", "airllm", "auto"] = "ollama"
timmy_model_backend: Literal["ollama", "airllm", "grok", "auto"] = "ollama"
# AirLLM model size when backend is airllm or auto.
# Larger = smarter, but needs more RAM / disk.
# 8b ~16 GB | 70b ~140 GB | 405b ~810 GB
airllm_model_size: Literal["8b", "70b", "405b"] = "70b"
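The "auto" rule described in the comments above can be sketched as a small helper. This is illustrative only; the function name and the platform-detection details are assumptions, not the shipped implementation:

```python
import importlib.util
import platform


def resolve_backend(configured: str) -> str:
    """Resolve timmy_model_backend to a concrete backend name.

    "auto" prefers AirLLM on Apple Silicon when the airllm package is
    installed, and falls back to Ollama everywhere else.
    """
    if configured != "auto":
        return configured
    apple_silicon = platform.system() == "Darwin" and platform.machine() == "arm64"
    airllm_installed = importlib.util.find_spec("airllm") is not None
    return "airllm" if apple_silicon and airllm_installed else "ollama"
```

Explicit values ("ollama", "airllm", "grok") pass through untouched, so only "auto" triggers detection.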
# ── Grok (xAI) — opt-in premium cloud backend ────────────────────────
# Grok is a premium augmentation layer — local-first ethos preserved.
# Only used when explicitly enabled and query complexity warrants it.
grok_enabled: bool = False
xai_api_key: str = ""
grok_default_model: str = "grok-3-fast"
grok_max_sats_per_query: int = 200
grok_free: bool = False # Skip Lightning invoice when user has own API key
# ── Spark Intelligence ────────────────────────────────────────────────
# Enable/disable the Spark cognitive layer.
# When enabled, Spark captures swarm events, runs EIDOS predictions,
@@ -76,6 +85,10 @@ class Settings(BaseSettings):
# Default is False (telemetry disabled) to align with sovereign AI vision.
telemetry_enabled: bool = False
# CORS allowed origins for the web chat interface (GitHub Pages, etc.)
# Set CORS_ORIGINS as a comma-separated list, e.g. "http://localhost:3000,https://example.com"
cors_origins: list[str] = ["*"]
# Environment mode: development | production
# In production, security settings are strictly enforced.
timmy_env: Literal["development", "production"] = "development"
@@ -94,6 +107,28 @@ class Settings(BaseSettings):
work_orders_auto_execute: bool = False # Master switch for auto-execution
work_orders_auto_threshold: str = "low" # Max priority that auto-executes: "low" | "medium" | "high" | "none"
# ── Custom Weights & Models ──────────────────────────────────────
# Directory for custom model weights (GGUF, safetensors, HF checkpoints).
# Models placed here can be registered at runtime and assigned to agents.
custom_weights_dir: str = "data/models"
# Enable the reward model for scoring agent outputs (PRM-style).
reward_model_enabled: bool = False
# Reward model name (must be available via Ollama or a custom weight path).
reward_model_name: str = ""
# Minimum votes for majority-vote reward scoring (odd number recommended).
reward_model_votes: int = 3
# ── Browser Local Models (iPhone / WebGPU) ───────────────────────
# Enable in-browser LLM inference via WebLLM for offline iPhone use.
# When enabled, the mobile dashboard loads a small model directly
# in the browser — no server or Ollama required.
browser_model_enabled: bool = True
# WebLLM model ID — must be a pre-compiled MLC model.
# Recommended for iPhone: SmolLM2-360M (fast) or Qwen3-0.6B (smart).
browser_model_id: str = "SmolLM2-360M-Instruct-q4f16_1-MLC"
# Fallback to server when browser model is unavailable or too slow.
browser_model_fallback: bool = True
# ── Scripture / Biblical Integration ──────────────────────────────
# Enable the sovereign biblical text module. When enabled, Timmy
# loads the local ESV text corpus and runs meditation workflows.

src/creative/CLAUDE.md Normal file

@@ -0,0 +1,18 @@
# creative/ — Module Guide
GPU-accelerated media generation. Heavy dependencies (PyTorch, diffusers).
## Structure
- `director.py` — Orchestrates multi-step creative pipelines
- `assembler.py` — Video assembly and stitching
- `tools/` — MCP-compliant tool implementations
- `image_tools.py` — FLUX.2 image generation
- `music_tools.py` — ACE-Step music generation
- `video_tools.py` — Wan 2.1 video generation
- `git_tools.py`, `file_ops.py`, `code_exec.py` — Utility tools
- `self_edit.py` — Self-modification MCP tool (protected file)
## Testing
```bash
pytest tests/creative/ -q
```


@@ -132,7 +132,7 @@ def run_storyboard(project_id: str) -> dict:
project.status = "storyboard"
from tools.image_tools import generate_storyboard
from creative.tools.image_tools import generate_storyboard
scene_descriptions = [s["description"] for s in project.scenes]
result = generate_storyboard(scene_descriptions)
@@ -159,7 +159,7 @@ def run_music(
project.status = "music"
from tools.music_tools import generate_song
from creative.tools.music_tools import generate_song
# Default duration: ~15s per scene, minimum 60s
target_duration = duration or max(60, len(project.scenes) * 15)
@@ -192,7 +192,7 @@ def run_video_generation(project_id: str) -> dict:
project.status = "video"
from tools.video_tools import generate_video_clip, image_to_video
from creative.tools.video_tools import generate_video_clip, image_to_video
clips = []
for i, scene in enumerate(project.scenes):


@@ -13,7 +13,7 @@ This is the core self-modification orchestrator that:
10. Generates reflections
Usage:
from tools.self_edit import self_edit_tool
from creative.tools.self_edit import self_edit_tool
from mcp.registry import tool_registry
# Register with MCP
@@ -818,7 +818,7 @@ def register_self_edit_tool(registry: Any, llm_adapter: Optional[object] = None)
category="self_coding",
requires_confirmation=True, # Safety: require user approval
tags=["self-modification", "code-generation"],
source_module="tools.self_edit",
source_module="creative.tools.self_edit",
)
logger.info("Self-edit tool registered with MCP")


@@ -0,0 +1 @@
"""Dashboard — FastAPI + HTMX Mission Control web application."""


@@ -5,6 +5,7 @@ from contextlib import asynccontextmanager
from pathlib import Path
from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import HTMLResponse
from fastapi.staticfiles import StaticFiles
from fastapi.templating import Jinja2Templates
@@ -12,21 +13,17 @@ from fastapi.templating import Jinja2Templates
from config import settings
from dashboard.routes.agents import router as agents_router
from dashboard.routes.health import router as health_router
from dashboard.routes.mobile_test import router as mobile_test_router
from dashboard.routes.swarm import router as swarm_router
from dashboard.routes.swarm import internal_router as swarm_internal_router
from dashboard.routes.marketplace import router as marketplace_router
from dashboard.routes.voice import router as voice_router
from dashboard.routes.voice_enhanced import router as voice_enhanced_router
from dashboard.routes.mobile import router as mobile_router
from dashboard.routes.swarm_ws import router as swarm_ws_router
from dashboard.routes.briefing import router as briefing_router
from dashboard.routes.telegram import router as telegram_router
from dashboard.routes.swarm_internal import router as swarm_internal_router
from dashboard.routes.tools import router as tools_router
from dashboard.routes.spark import router as spark_router
from dashboard.routes.creative import router as creative_router
from dashboard.routes.discord import router as discord_router
from dashboard.routes.self_modify import router as self_modify_router
from dashboard.routes.events import router as events_router
from dashboard.routes.ledger import router as ledger_router
from dashboard.routes.memory import router as memory_router
@@ -36,8 +33,12 @@ from dashboard.routes.work_orders import router as work_orders_router
from dashboard.routes.tasks import router as tasks_router
from dashboard.routes.scripture import router as scripture_router
from dashboard.routes.self_coding import router as self_coding_router
from dashboard.routes.self_coding import self_modify_router
from dashboard.routes.hands import router as hands_router
from router.api import router as cascade_router
from dashboard.routes.grok import router as grok_router
from dashboard.routes.models import router as models_router
from dashboard.routes.models import api_router as models_api_router
from infrastructure.router.api import router as cascade_router
logging.basicConfig(
level=logging.INFO,
@@ -60,7 +61,7 @@ async def _briefing_scheduler() -> None:
exists (< 30 min old).
"""
from timmy.briefing import engine as briefing_engine
from notifications.push import notify_briefing_ready
from infrastructure.notifications.push import notify_briefing_ready
await asyncio.sleep(2) # Let server finish starting before first run
@@ -131,16 +132,16 @@ async def lifespan(app: FastAPI):
logger.info("MCP auto-bootstrap: %d tools registered", len(registered))
except Exception as exc:
logger.warning("MCP auto-bootstrap failed: %s", exc)
# Initialise Spark Intelligence engine
from spark.engine import spark_engine
if spark_engine.enabled:
logger.info("Spark Intelligence active — event capture enabled")
# Auto-start chat integrations (skip silently if unconfigured)
from telegram_bot.bot import telegram_bot
from chat_bridge.vendors.discord import discord_bot
from chat_bridge.registry import platform_registry
from integrations.telegram_bot.bot import telegram_bot
from integrations.chat_bridge.vendors.discord import discord_bot
from integrations.chat_bridge.registry import platform_registry
platform_registry.register(discord_bot)
if settings.telegram_token:
@@ -173,25 +174,31 @@ app = FastAPI(
redoc_url="/redoc" if settings.debug else None,
)
app.add_middleware(
CORSMiddleware,
allow_origins=settings.cors_origins,
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
templates = Jinja2Templates(directory=str(BASE_DIR / "templates"))
app.mount("/static", StaticFiles(directory=str(PROJECT_ROOT / "static")), name="static")
app.include_router(health_router)
app.include_router(agents_router)
app.include_router(mobile_test_router)
app.include_router(swarm_router)
app.include_router(swarm_internal_router)
app.include_router(marketplace_router)
app.include_router(voice_router)
app.include_router(voice_enhanced_router)
app.include_router(mobile_router)
app.include_router(swarm_ws_router)
app.include_router(briefing_router)
app.include_router(telegram_router)
app.include_router(swarm_internal_router)
app.include_router(tools_router)
app.include_router(spark_router)
app.include_router(creative_router)
app.include_router(discord_router)
app.include_router(self_coding_router)
app.include_router(self_modify_router)
app.include_router(events_router)
app.include_router(ledger_router)
@@ -201,8 +208,10 @@ app.include_router(upgrades_router)
app.include_router(work_orders_router)
app.include_router(tasks_router)
app.include_router(scripture_router)
app.include_router(self_coding_router)
app.include_router(hands_router)
app.include_router(grok_router)
app.include_router(models_router)
app.include_router(models_api_router)
app.include_router(cascade_router)
@@ -214,5 +223,5 @@ async def index(request: Request):
@app.get("/shortcuts/setup")
async def shortcuts_setup():
"""Siri Shortcuts setup guide."""
from shortcuts.siri import get_setup_guide
from integrations.shortcuts.siri import get_setup_guide
return get_setup_guide()


@@ -0,0 +1 @@
"""Dashboard route modules — one file per route group."""


@@ -125,7 +125,7 @@ def _extract_task_from_message(message: str) -> dict | None:
def _build_queue_context() -> str:
"""Build a concise task queue summary for context injection."""
try:
from task_queue.models import get_counts_by_status, list_tasks, TaskStatus
from swarm.task_queue.models import get_counts_by_status, list_tasks, TaskStatus
counts = get_counts_by_status()
pending = counts.get("pending_approval", 0)
running = counts.get("running", 0)
@@ -215,7 +215,7 @@ async def chat_timmy(request: Request, message: str = Form(...)):
task_info = _extract_task_from_message(message)
if task_info:
try:
from task_queue.models import create_task
from swarm.task_queue.models import create_task
task = create_task(
title=task_info["title"],
description=task_info["description"],


@@ -68,7 +68,7 @@ async def creative_projects_api():
async def creative_genres_api():
"""Return supported music genres."""
try:
from tools.music_tools import GENRES
from creative.tools.music_tools import GENRES
return {"genres": GENRES}
except ImportError:
return {"genres": []}
@@ -78,7 +78,7 @@ async def creative_genres_api():
async def creative_video_styles_api():
"""Return supported video styles and resolutions."""
try:
from tools.video_tools import VIDEO_STYLES, RESOLUTION_PRESETS
from creative.tools.video_tools import VIDEO_STYLES, RESOLUTION_PRESETS
return {
"styles": VIDEO_STYLES,
"resolutions": list(RESOLUTION_PRESETS.keys()),


@@ -25,7 +25,7 @@ async def setup_discord(payload: TokenPayload):
Send POST with JSON body: {"token": "<your-bot-token>"}
Get the token from https://discord.com/developers/applications
"""
from chat_bridge.vendors.discord import discord_bot
from integrations.chat_bridge.vendors.discord import discord_bot
token = payload.token.strip()
if not token:
@@ -51,7 +51,7 @@ async def setup_discord(payload: TokenPayload):
@router.get("/status")
async def discord_status():
"""Return current Discord bot status."""
from chat_bridge.vendors.discord import discord_bot
from integrations.chat_bridge.vendors.discord import discord_bot
return discord_bot.status().to_dict()
@@ -70,8 +70,8 @@ async def join_from_image(
The bot validates the invite and returns the OAuth2 URL for the
server admin to authorize the bot.
"""
from chat_bridge.invite_parser import invite_parser
from chat_bridge.vendors.discord import discord_bot
from integrations.chat_bridge.invite_parser import invite_parser
from integrations.chat_bridge.vendors.discord import discord_bot
invite_info = None
@@ -129,7 +129,7 @@ async def join_from_image(
@router.get("/oauth-url")
async def discord_oauth_url():
"""Get the bot's OAuth2 authorization URL for adding to servers."""
from chat_bridge.vendors.discord import discord_bot
from integrations.chat_bridge.vendors.discord import discord_bot
url = discord_bot.get_oauth2_url()
if url:


@@ -0,0 +1,234 @@
"""Grok (xAI) dashboard routes — premium cloud augmentation controls.
Endpoints
---------
GET /grok/status — JSON status (API)
POST /grok/toggle — Enable/disable Grok Mode (HTMX)
POST /grok/chat — Direct Grok query (HTMX)
GET /grok/stats — Usage statistics (JSON)
"""
import logging
from pathlib import Path
from fastapi import APIRouter, Form, Request
from fastapi.responses import HTMLResponse, JSONResponse
from fastapi.templating import Jinja2Templates
from config import settings
logger = logging.getLogger(__name__)
router = APIRouter(prefix="/grok", tags=["grok"])
templates = Jinja2Templates(directory=str(Path(__file__).parent.parent / "templates"))
# In-memory toggle state (persists per process lifetime)
_grok_mode_active: bool = False
@router.get("/status")
async def grok_status():
"""Return Grok backend status as JSON."""
from timmy.backends import grok_available
status = {
"enabled": settings.grok_enabled,
"available": grok_available(),
"active": _grok_mode_active,
"model": settings.grok_default_model,
"free_mode": settings.grok_free,
"max_sats_per_query": settings.grok_max_sats_per_query,
"api_key_set": bool(settings.xai_api_key),
}
# Include usage stats if backend exists
try:
from timmy.backends import get_grok_backend
backend = get_grok_backend()
status["stats"] = {
"total_requests": backend.stats.total_requests,
"total_prompt_tokens": backend.stats.total_prompt_tokens,
"total_completion_tokens": backend.stats.total_completion_tokens,
"estimated_cost_sats": backend.stats.estimated_cost_sats,
"errors": backend.stats.errors,
}
except Exception:
status["stats"] = None
return status
@router.post("/toggle")
async def toggle_grok_mode(request: Request):
"""Toggle Grok Mode on/off. Returns HTMX partial for the toggle card."""
global _grok_mode_active
from timmy.backends import grok_available
if not grok_available():
return HTMLResponse(
'<div class="alert" style="color: var(--danger);">'
"Grok unavailable — set GROK_ENABLED=true and XAI_API_KEY in .env"
"</div>",
status_code=200,
)
_grok_mode_active = not _grok_mode_active
state = "ACTIVE" if _grok_mode_active else "STANDBY"
logger.info("Grok Mode toggled: %s", state)
# Log to Spark
try:
from spark.engine import spark_engine
spark_engine.on_tool_executed(
agent_id="timmy",
tool_name="grok_mode_toggle",
success=True,
)
except Exception:
pass
return HTMLResponse(
_render_toggle_card(_grok_mode_active),
status_code=200,
)
@router.post("/chat", response_class=HTMLResponse)
async def grok_chat(request: Request, message: str = Form(...)):
"""Send a message directly to Grok and return HTMX chat partial."""
from timmy.backends import grok_available, get_grok_backend
from dashboard.store import message_log
from datetime import datetime
timestamp = datetime.now().strftime("%H:%M:%S")
if not grok_available():
error = "Grok is not available. Set GROK_ENABLED=true and XAI_API_KEY."
message_log.append(role="user", content=f"[Grok] {message}", timestamp=timestamp)
message_log.append(role="error", content=error, timestamp=timestamp)
return templates.TemplateResponse(
request,
"partials/chat_message.html",
{
"user_message": f"[Grok] {message}",
"response": None,
"error": error,
"timestamp": timestamp,
},
)
backend = get_grok_backend()
# Generate invoice if monetization is active
invoice_note = ""
if not settings.grok_free:
try:
from lightning.factory import get_backend as get_ln_backend
ln = get_ln_backend()
sats = min(settings.grok_max_sats_per_query, 100)
ln.create_invoice(sats, f"Grok: {message[:50]}")
invoice_note = f" | {sats} sats"
except Exception:
pass
try:
result = backend.run(message)
response_text = f"**[Grok]{invoice_note}:** {result.content}"
except Exception as exc:
response_text = None
error = f"Grok error: {exc}"
message_log.append(
role="user", content=f"[Ask Grok] {message}", timestamp=timestamp
)
if response_text:
message_log.append(role="agent", content=response_text, timestamp=timestamp)
return templates.TemplateResponse(
request,
"partials/chat_message.html",
{
"user_message": f"[Ask Grok] {message}",
"response": response_text,
"error": None,
"timestamp": timestamp,
},
)
else:
message_log.append(role="error", content=error, timestamp=timestamp)
return templates.TemplateResponse(
request,
"partials/chat_message.html",
{
"user_message": f"[Ask Grok] {message}",
"response": None,
"error": error,
"timestamp": timestamp,
},
)
@router.get("/stats")
async def grok_stats():
"""Return detailed Grok usage statistics."""
try:
from timmy.backends import get_grok_backend
backend = get_grok_backend()
return {
"total_requests": backend.stats.total_requests,
"total_prompt_tokens": backend.stats.total_prompt_tokens,
"total_completion_tokens": backend.stats.total_completion_tokens,
"total_latency_ms": round(backend.stats.total_latency_ms, 2),
"avg_latency_ms": round(
backend.stats.total_latency_ms / max(backend.stats.total_requests, 1),
2,
),
"estimated_cost_sats": backend.stats.estimated_cost_sats,
"errors": backend.stats.errors,
"model": settings.grok_default_model,
}
except Exception as exc:
return {"error": str(exc)}
def _render_toggle_card(active: bool) -> str:
"""Render the Grok Mode toggle card HTML."""
color = "#00ff88" if active else "#666"
state = "ACTIVE" if active else "STANDBY"
glow = "0 0 20px rgba(0, 255, 136, 0.4)" if active else "none"
return f"""
<div id="grok-toggle-card"
style="border: 2px solid {color}; border-radius: 12px; padding: 16px;
background: var(--bg-secondary); box-shadow: {glow};
transition: all 0.3s ease;">
<div style="display: flex; justify-content: space-between; align-items: center;">
<div>
<div style="font-weight: 700; font-size: 1.1rem; color: {color};">
GROK MODE: {state}
</div>
<div style="font-size: 0.8rem; color: var(--text-muted); margin-top: 4px;">
xAI frontier reasoning | {settings.grok_default_model}
</div>
</div>
<button hx-post="/grok/toggle"
hx-target="#grok-toggle-card"
hx-swap="outerHTML"
style="background: {color}; color: #000; border: none;
border-radius: 8px; padding: 8px 20px; cursor: pointer;
font-weight: 700; font-family: inherit;">
{'DEACTIVATE' if active else 'ACTIVATE'}
</button>
</div>
</div>
"""
def is_grok_mode_active() -> bool:
"""Check if Grok Mode is currently active (used by other modules)."""
return _grok_mode_active
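The Lightning gating inside the /grok/chat handler reduces to a small pure function. This is a sketch for clarity; `quote_sats` is not a real helper in the module, and the 100-sat flat rate mirrors the `min(settings.grok_max_sats_per_query, 100)` call above:

```python
def quote_sats(grok_free: bool, max_sats_per_query: int, flat_rate: int = 100) -> int:
    """Sats to invoice for one Grok query.

    Free mode (user supplies their own API key) skips Lightning
    entirely; otherwise the quote is the flat rate capped by the
    configured per-query maximum.
    """
    if grok_free:
        return 0
    return min(max_sats_per_query, flat_rate)
```

Keeping the pricing decision in a pure function like this also makes it trivially unit-testable, unlike the inline try/except in the route.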


@@ -7,7 +7,7 @@ from fastapi import APIRouter, Form, HTTPException, Request
from fastapi.responses import HTMLResponse, JSONResponse
from fastapi.templating import Jinja2Templates
from memory.vector_store import (
from timmy.memory.vector_store import (
store_memory,
search_memories,
get_memory_stats,


@@ -3,6 +3,9 @@
Provides a simplified, mobile-first view of the dashboard that
prioritizes the chat interface and essential status information.
Designed for quick access from a phone's home screen.
The /mobile/local endpoint loads a small LLM directly into the
browser via WebLLM so Timmy can run on an iPhone with no server.
"""
from pathlib import Path
@@ -11,6 +14,8 @@ from fastapi import APIRouter, Request
from fastapi.responses import HTMLResponse
from fastapi.templating import Jinja2Templates
from config import settings
router = APIRouter(tags=["mobile"])
templates = Jinja2Templates(directory=str(Path(__file__).parent.parent / "templates"))
@@ -26,11 +31,44 @@ async def mobile_dashboard(request: Request):
return templates.TemplateResponse(request, "index.html")
@router.get("/mobile/local", response_class=HTMLResponse)
async def mobile_local_dashboard(request: Request):
"""Mobile dashboard with in-browser local model inference.
Loads a small LLM (via WebLLM / WebGPU) directly into Safari
so Timmy works on an iPhone without any server connection.
Falls back to server-side Ollama when the local model is
unavailable or the user prefers it.
"""
return templates.TemplateResponse(
request,
"mobile_local.html",
{
"browser_model_enabled": settings.browser_model_enabled,
"browser_model_id": settings.browser_model_id,
"browser_model_fallback": settings.browser_model_fallback,
"server_model": settings.ollama_model,
"page_title": "Timmy — Local AI",
},
)
@router.get("/mobile/local-models")
async def local_models_config():
"""Return browser model configuration for the JS client."""
return {
"enabled": settings.browser_model_enabled,
"default_model": settings.browser_model_id,
"fallback_to_server": settings.browser_model_fallback,
"server_model": settings.ollama_model,
"server_url": settings.ollama_url,
}
@router.get("/mobile/status")
async def mobile_status():
"""Lightweight status endpoint optimized for mobile polling."""
from dashboard.routes.health import check_ollama
from config import settings
ollama_ok = await check_ollama()
return {
@@ -38,4 +76,6 @@ async def mobile_status():
"model": settings.ollama_model,
"agent": "timmy",
"ready": True,
"browser_model_enabled": settings.browser_model_enabled,
"browser_model_id": settings.browser_model_id,
}
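On the client side, the /mobile/local-models payload drives the browser-versus-server decision. A sketch of that selection logic; the browser-first ordering is an assumption based on the route docstrings, and the shipped JS client may differ:

```python
def pick_inference_target(cfg: dict, webgpu_available: bool) -> str:
    """Choose where the mobile client should run inference.

    cfg is the JSON served by /mobile/local-models: prefer the
    in-browser WebLLM model when enabled and WebGPU is present,
    otherwise fall back to the server-side Ollama model if allowed.
    """
    if cfg.get("enabled") and webgpu_available:
        return f"browser:{cfg['default_model']}"
    if cfg.get("fallback_to_server"):
        return f"server:{cfg['server_model']}"
    return "unavailable"
```

A client that gets "unavailable" back should surface an offline notice rather than silently dropping the message.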


@@ -1,257 +0,0 @@
"""Mobile HITL (Human-in-the-Loop) test checklist route.
GET /mobile-test — interactive checklist for a human tester on their phone.
Each scenario specifies what to do and what to observe. The tester marks
each one PASS / FAIL / SKIP. Results are stored in sessionStorage so they
survive page scrolling without hitting the server.
"""
from pathlib import Path
from fastapi import APIRouter, Request
from fastapi.responses import HTMLResponse
from fastapi.templating import Jinja2Templates
router = APIRouter(tags=["mobile-test"])
templates = Jinja2Templates(directory=str(Path(__file__).parent.parent / "templates"))
# ── Test scenarios ────────────────────────────────────────────────────────────
# Each dict: id, category, title, steps (list), expected
SCENARIOS = [
# Layout
{
"id": "L01",
"category": "Layout",
"title": "Sidebar renders as horizontal strip",
"steps": [
"Open the Mission Control page on your phone.",
"Look at the top section above the chat window.",
],
"expected": (
"AGENTS and SYSTEM HEALTH panels appear side-by-side in a "
"horizontally scrollable strip — not stacked vertically."
),
},
{
"id": "L02",
"category": "Layout",
"title": "Sidebar panels are horizontally scrollable",
"steps": [
"Swipe left/right on the AGENTS / SYSTEM HEALTH strip.",
],
"expected": "Both panels slide smoothly; no page scroll is triggered.",
},
{
"id": "L03",
"category": "Layout",
"title": "Chat panel fills ≥ 60 % of viewport height",
"steps": [
"Look at the TIMMY INTERFACE chat card below the strip.",
],
"expected": "The chat card occupies at least 60 % of the visible screen height.",
},
{
"id": "L04",
"category": "Layout",
"title": "Header stays fixed while chat scrolls",
"steps": [
"Send several messages until the chat overflows.",
"Scroll the chat log up and down.",
],
"expected": "The TIMMY TIME / MISSION CONTROL header remains pinned at the top.",
},
{
"id": "L05",
"category": "Layout",
"title": "No horizontal page overflow",
"steps": [
"Try swiping left or right anywhere on the page.",
],
"expected": "The page does not scroll horizontally; nothing is cut off.",
},
# Touch & Input
{
"id": "T01",
"category": "Touch & Input",
"title": "iOS does NOT zoom when tapping the input",
"steps": [
"Tap the message input field once.",
"Watch whether the browser zooms in.",
],
"expected": "The keyboard rises; the layout does NOT zoom in.",
},
{
"id": "T02",
"category": "Touch & Input",
"title": "Keyboard return key is labelled 'Send'",
"steps": [
"Tap the message input to open the iOS/Android keyboard.",
"Look at the return / action key in the bottom-right of the keyboard.",
],
"expected": "The key is labelled 'Send' (not 'Return' or 'Go').",
},
{
"id": "T03",
"category": "Touch & Input",
"title": "Send button is easy to tap (≥ 44 px tall)",
"steps": [
"Try tapping the SEND button with your thumb.",
],
"expected": "The button registers the tap reliably on the first attempt.",
},
{
"id": "T04",
"category": "Touch & Input",
"title": "SEND button disabled during in-flight request",
"steps": [
"Type a message and press SEND.",
"Immediately try to tap SEND again before a response arrives.",
],
"expected": "The button is visually disabled; no duplicate message is sent.",
},
{
"id": "T05",
"category": "Touch & Input",
"title": "Empty message cannot be submitted",
"steps": [
"Leave the input blank.",
"Tap SEND.",
],
"expected": "Nothing is submitted; the form shows a required-field indicator.",
},
{
"id": "T06",
"category": "Touch & Input",
"title": "CLEAR button shows confirmation dialog",
"steps": [
"Send at least one message.",
"Tap the CLEAR button in the top-right of the chat header.",
],
"expected": "A browser confirmation dialog appears before history is cleared.",
},
# Chat behaviour
{
"id": "C01",
"category": "Chat",
"title": "Chat auto-scrolls to the latest message",
"steps": [
"Scroll the chat log to the top.",
"Send a new message.",
],
"expected": "After the response arrives the chat automatically scrolls to the bottom.",
},
{
"id": "C02",
"category": "Chat",
"title": "Multi-turn conversation — Timmy remembers context",
"steps": [
"Send: 'My name is <your name>.'",
"Then send: 'What is my name?'",
],
"expected": "Timmy replies with your name, demonstrating conversation memory.",
},
{
"id": "C03",
"category": "Chat",
"title": "Loading indicator appears while waiting",
"steps": [
"Send a message and watch the SEND button.",
],
"expected": "A blinking cursor (▋) appears next to SEND while the response is loading.",
},
{
"id": "C04",
"category": "Chat",
"title": "Offline error is shown gracefully",
"steps": [
"Stop Ollama on your host machine (or disconnect from Wi-Fi temporarily).",
"Send a message from your phone.",
],
"expected": "A red 'Timmy is offline' error appears in the chat — no crash or spinner hang.",
},
# Health panel
{
"id": "H01",
"category": "Health",
"title": "Health panel shows Ollama UP when running",
"steps": [
"Ensure Ollama is running on your host.",
"Check the SYSTEM HEALTH panel.",
],
"expected": "OLLAMA badge shows green UP.",
},
{
"id": "H02",
"category": "Health",
"title": "Health panel auto-refreshes without reload",
"steps": [
"Start Ollama if it is not running.",
"Wait up to 35 seconds with the page open.",
],
"expected": "The OLLAMA badge flips from DOWN → UP automatically, without a page reload.",
},
# Scroll & overscroll
{
"id": "S01",
"category": "Scroll",
"title": "No rubber-band / bounce on the main page",
"steps": [
"Scroll to the very top of the page.",
"Continue pulling downward.",
],
"expected": "The page does not bounce or show a white gap — overscroll is suppressed.",
},
{
"id": "S02",
"category": "Scroll",
"title": "Chat log scrolls independently inside the card",
"steps": [
"Scroll inside the chat log area.",
],
"expected": "The chat log scrolls smoothly; the outer page does not move.",
},
# Safe area / notch
{
"id": "N01",
"category": "Notch / Home Bar",
"title": "Header clears the status bar / Dynamic Island",
"steps": [
"On a notched iPhone (Face ID), look at the top of the page.",
],
"expected": "The TIMMY TIME header text is not obscured by the notch or Dynamic Island.",
},
{
"id": "N02",
"category": "Notch / Home Bar",
"title": "Chat input not hidden behind home indicator",
"steps": [
"Tap the input field and look at the bottom of the screen.",
],
"expected": "The input row sits above the iPhone home indicator bar — nothing is cut off.",
},
# Clock
{
"id": "X01",
"category": "Live UI",
"title": "Clock updates every second",
"steps": [
"Look at the time display in the top-right of the header.",
"Watch for 3 seconds.",
],
"expected": "The time increments each second in HH:MM:SS format.",
},
]
@router.get("/mobile-test", response_class=HTMLResponse)
async def mobile_test(request: Request):
"""Interactive HITL mobile test checklist — open on your phone."""
categories: dict[str, list] = {}
for s in SCENARIOS:
categories.setdefault(s["category"], []).append(s)
return templates.TemplateResponse(
request,
"mobile_test.html",
{"scenarios": SCENARIOS, "categories": categories, "total": len(SCENARIOS)},
)
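The grouping the route performs with `setdefault` can be exercised on its own — a minimal sketch with stand-in scenario dicts (ids and categories here are illustrative, not the full SCENARIOS list):

```python
# Stand-in scenarios mirroring the shape of the SCENARIOS entries above.
scenarios = [
    {"id": "L01", "category": "Layout"},
    {"id": "L02", "category": "Layout"},
    {"id": "T01", "category": "Touch & Input"},
]

# Same pattern as the route: bucket scenarios by category, preserving order.
categories: dict[str, list] = {}
for s in scenarios:
    categories.setdefault(s["category"], []).append(s)

print(sorted(categories))         # ['Layout', 'Touch & Input']
print(len(categories["Layout"]))  # 2
```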

View File

@@ -0,0 +1,272 @@
"""Custom model management routes — register, list, assign, and swap models.
Provides a REST API for managing custom model weights and their assignment
to swarm agents. Inspired by OpenClaw-RL's multi-model orchestration.
"""
import logging
from pathlib import Path
from typing import Any, Optional
from fastapi import APIRouter, HTTPException, Request
from fastapi.responses import HTMLResponse
from fastapi.templating import Jinja2Templates
from pydantic import BaseModel
from config import settings
from infrastructure.models.registry import (
CustomModel,
ModelFormat,
ModelRegistry,
ModelRole,
model_registry,
)
logger = logging.getLogger(__name__)
router = APIRouter(prefix="/models", tags=["models"])
api_router = APIRouter(prefix="/api/v1/models", tags=["models-api"])
templates = Jinja2Templates(directory=str(Path(__file__).parent.parent / "templates"))
# ── Pydantic schemas ──────────────────────────────────────────────────────────
class RegisterModelRequest(BaseModel):
"""Request body for model registration."""
name: str
format: str # gguf, safetensors, hf, ollama
path: str
role: str = "general"
context_window: int = 4096
description: str = ""
default_temperature: float = 0.7
max_tokens: int = 2048
class AssignModelRequest(BaseModel):
"""Request body for assigning a model to an agent."""
agent_id: str
model_name: str
class SetActiveRequest(BaseModel):
"""Request body for enabling/disabling a model."""
active: bool
# ── API endpoints ─────────────────────────────────────────────────────────────
@api_router.get("")
async def list_models(role: Optional[str] = None) -> dict[str, Any]:
"""List all registered custom models."""
    try:
        model_role = ModelRole(role) if role else None
    except ValueError:
        raise HTTPException(
            status_code=400,
            detail=f"Invalid role: {role}. "
            f"Choose from: {[r.value for r in ModelRole]}",
        )
    models = model_registry.list_models(role=model_role)
return {
"models": [
{
"name": m.name,
"format": m.format.value,
"path": m.path,
"role": m.role.value,
"context_window": m.context_window,
"description": m.description,
"active": m.active,
"registered_at": m.registered_at,
"default_temperature": m.default_temperature,
"max_tokens": m.max_tokens,
}
for m in models
],
"total": len(models),
"weights_dir": settings.custom_weights_dir,
}
@api_router.post("")
async def register_model(request: RegisterModelRequest) -> dict[str, Any]:
"""Register a new custom model."""
try:
fmt = ModelFormat(request.format)
except ValueError:
raise HTTPException(
status_code=400,
detail=f"Invalid format: {request.format}. "
f"Choose from: {[f.value for f in ModelFormat]}",
)
try:
role = ModelRole(request.role)
except ValueError:
raise HTTPException(
status_code=400,
detail=f"Invalid role: {request.role}. "
f"Choose from: {[r.value for r in ModelRole]}",
)
# Validate path exists for non-Ollama formats
if fmt != ModelFormat.OLLAMA:
weight_path = Path(request.path)
if not weight_path.exists():
raise HTTPException(
status_code=400,
detail=f"Weight path does not exist: {request.path}",
)
model = CustomModel(
name=request.name,
format=fmt,
path=request.path,
role=role,
context_window=request.context_window,
description=request.description,
default_temperature=request.default_temperature,
max_tokens=request.max_tokens,
)
registered = model_registry.register(model)
return {
"message": f"Model {registered.name} registered",
"model": {
"name": registered.name,
"format": registered.format.value,
"role": registered.role.value,
"path": registered.path,
},
}
@api_router.get("/{model_name}")
async def get_model(model_name: str) -> dict[str, Any]:
"""Get details of a specific model."""
model = model_registry.get(model_name)
if not model:
raise HTTPException(status_code=404, detail=f"Model {model_name} not found")
return {
"name": model.name,
"format": model.format.value,
"path": model.path,
"role": model.role.value,
"context_window": model.context_window,
"description": model.description,
"active": model.active,
"registered_at": model.registered_at,
"default_temperature": model.default_temperature,
"max_tokens": model.max_tokens,
}
@api_router.delete("/{model_name}")
async def unregister_model(model_name: str) -> dict[str, str]:
"""Remove a model from the registry."""
if not model_registry.unregister(model_name):
raise HTTPException(status_code=404, detail=f"Model {model_name} not found")
return {"message": f"Model {model_name} unregistered"}
@api_router.patch("/{model_name}/active")
async def set_model_active(
model_name: str, request: SetActiveRequest
) -> dict[str, str]:
"""Enable or disable a model."""
if not model_registry.set_active(model_name, request.active):
raise HTTPException(status_code=404, detail=f"Model {model_name} not found")
state = "enabled" if request.active else "disabled"
return {"message": f"Model {model_name} {state}"}
# ── Agent assignment endpoints ────────────────────────────────────────────────
@api_router.get("/assignments/all")
async def list_assignments() -> dict[str, Any]:
"""List all agent-to-model assignments."""
assignments = model_registry.get_agent_assignments()
return {
"assignments": [
{"agent_id": aid, "model_name": mname}
for aid, mname in assignments.items()
],
"total": len(assignments),
}
@api_router.post("/assignments")
async def assign_model(request: AssignModelRequest) -> dict[str, str]:
"""Assign a model to a swarm agent."""
if not model_registry.assign_model(request.agent_id, request.model_name):
raise HTTPException(
status_code=404,
detail=f"Model {request.model_name} not found in registry",
)
return {
"message": f"Model {request.model_name} assigned to {request.agent_id}",
}
@api_router.delete("/assignments/{agent_id}")
async def unassign_model(agent_id: str) -> dict[str, str]:
"""Remove model assignment from an agent (reverts to default)."""
if not model_registry.unassign_model(agent_id):
raise HTTPException(
status_code=404,
detail=f"No model assignment for agent {agent_id}",
)
return {"message": f"Model assignment removed for {agent_id}"}
# ── Role-based lookups ────────────────────────────────────────────────────────
@api_router.get("/roles/reward")
async def get_reward_model() -> dict[str, Any]:
"""Get the active reward (PRM) model."""
model = model_registry.get_reward_model()
if not model:
return {"reward_model": None, "reward_enabled": settings.reward_model_enabled}
return {
"reward_model": {
"name": model.name,
"format": model.format.value,
"path": model.path,
},
"reward_enabled": settings.reward_model_enabled,
}
@api_router.get("/roles/teacher")
async def get_teacher_model() -> dict[str, Any]:
"""Get the active teacher model for distillation."""
model = model_registry.get_teacher_model()
if not model:
return {"teacher_model": None}
return {
"teacher_model": {
"name": model.name,
"format": model.format.value,
"path": model.path,
},
}
# ── Dashboard page ────────────────────────────────────────────────────────────
@router.get("", response_class=HTMLResponse)
async def models_page(request: Request):
"""Custom models management dashboard page."""
models = model_registry.list_models()
assignments = model_registry.get_agent_assignments()
reward = model_registry.get_reward_model()
return templates.TemplateResponse(
request,
"models.html",
{
"page_title": "Custom Models",
"models": models,
"assignments": assignments,
"reward_model": reward,
"weights_dir": settings.custom_weights_dir,
"reward_enabled": settings.reward_model_enabled,
},
)

View File

@@ -5,17 +5,21 @@ API endpoints and HTMX views for the self-coding system:
- Stats dashboard
- Manual task execution
- Real-time status updates
- Self-modification loop (/self-modify/*)
"""
from __future__ import annotations
import asyncio
import logging
from typing import Optional
from fastapi import APIRouter, Form, Request
from fastapi import APIRouter, Form, HTTPException, Request
from fastapi.responses import HTMLResponse, JSONResponse
from pydantic import BaseModel
from config import settings
from self_coding import (
CodebaseIndexer,
ModificationJournal,
@@ -205,7 +209,7 @@ async def api_execute(request: ExecuteRequest):
This is the API endpoint for manual task execution.
In production, this should require authentication and confirmation.
"""
from tools.self_edit import SelfEditTool
from creative.tools.self_edit import SelfEditTool
tool = SelfEditTool()
result = await tool.execute(request.task_description)
@@ -328,7 +332,7 @@ async def execute_task(
):
"""HTMX endpoint to execute a task."""
from dashboard.app import templates
from tools.self_edit import SelfEditTool
from creative.tools.self_edit import SelfEditTool
tool = SelfEditTool()
result = await tool.execute(task_description)
@@ -366,3 +370,59 @@ async def journal_entry_detail(request: Request, attempt_id: int):
"entry": entry,
},
)
# ── Self-Modification Routes (/self-modify/*) ───────────────────────────
self_modify_router = APIRouter(prefix="/self-modify", tags=["self-modify"])
@self_modify_router.post("/run")
async def run_self_modify(
instruction: str = Form(...),
target_files: str = Form(""),
dry_run: bool = Form(False),
speak_result: bool = Form(False),
):
"""Execute a self-modification loop."""
if not settings.self_modify_enabled:
raise HTTPException(403, "Self-modification is disabled")
from self_coding.self_modify.loop import SelfModifyLoop, ModifyRequest
files = [f.strip() for f in target_files.split(",") if f.strip()]
request = ModifyRequest(
instruction=instruction,
target_files=files,
dry_run=dry_run,
)
loop = SelfModifyLoop()
result = await asyncio.to_thread(loop.run, request)
if speak_result and result.success:
try:
from timmy_serve.voice_tts import voice_tts
if voice_tts.available:
voice_tts.speak(
f"Code modification complete. "
f"{len(result.files_changed)} files changed. Tests passing."
)
except Exception:
pass
return {
"success": result.success,
"files_changed": result.files_changed,
"test_passed": result.test_passed,
"commit_sha": result.commit_sha,
"branch_name": result.branch_name,
"error": result.error,
"attempts": result.attempts,
}
@self_modify_router.get("/status")
async def self_modify_status():
"""Return whether self-modification is enabled."""
return {"enabled": settings.self_modify_enabled}

View File

@@ -1,71 +0,0 @@
"""Self-modification routes — /self-modify endpoints.
Exposes the edit-test-commit loop as a REST API. Gated by
``SELF_MODIFY_ENABLED`` (default False).
"""
import asyncio
import logging
from fastapi import APIRouter, Form, HTTPException
from config import settings
logger = logging.getLogger(__name__)
router = APIRouter(prefix="/self-modify", tags=["self-modify"])
@router.post("/run")
async def run_self_modify(
instruction: str = Form(...),
target_files: str = Form(""),
dry_run: bool = Form(False),
speak_result: bool = Form(False),
):
"""Execute a self-modification loop.
Returns the ModifyResult as JSON.
"""
if not settings.self_modify_enabled:
raise HTTPException(403, "Self-modification is disabled")
from self_modify.loop import SelfModifyLoop, ModifyRequest
files = [f.strip() for f in target_files.split(",") if f.strip()]
request = ModifyRequest(
instruction=instruction,
target_files=files,
dry_run=dry_run,
)
loop = SelfModifyLoop()
result = await asyncio.to_thread(loop.run, request)
if speak_result and result.success:
try:
from timmy_serve.voice_tts import voice_tts
if voice_tts.available:
voice_tts.speak(
f"Code modification complete. "
f"{len(result.files_changed)} files changed. Tests passing."
)
except Exception:
pass
return {
"success": result.success,
"files_changed": result.files_changed,
"test_passed": result.test_passed,
"commit_sha": result.commit_sha,
"branch_name": result.branch_name,
"error": result.error,
"attempts": result.attempts,
}
@router.get("/status")
async def self_modify_status():
"""Return whether self-modification is enabled."""
return {"enabled": settings.self_modify_enabled}

View File

@@ -1,22 +1,28 @@
"""Swarm dashboard routes — /swarm/* endpoints.
"""Swarm dashboard routes — /swarm/*, /internal/*, and /swarm/live endpoints.
Provides REST endpoints for managing the swarm: listing agents,
spawning sub-agents, posting tasks, and viewing auction results.
spawning sub-agents, posting tasks, viewing auction results, serving the
Docker container agent HTTP API, and streaming the WebSocket live feed.
"""
import asyncio
import logging
from datetime import datetime, timezone
from pathlib import Path
from typing import Optional
from fastapi import APIRouter, Form, HTTPException, Request
from fastapi import APIRouter, Form, HTTPException, Request, WebSocket, WebSocketDisconnect
from fastapi.responses import HTMLResponse
from fastapi.templating import Jinja2Templates
from pydantic import BaseModel
from swarm import learner as swarm_learner
from swarm import registry
from swarm.coordinator import coordinator
from swarm.tasks import TaskStatus, update_task
from infrastructure.ws_manager.handler import ws_manager
logger = logging.getLogger(__name__)
router = APIRouter(prefix="/swarm", tags=["swarm"])
templates = Jinja2Templates(directory=str(Path(__file__).parent.parent / "templates"))
@@ -325,3 +331,92 @@ async def message_agent(agent_id: str, request: Request, message: str = Form(...
)
# ── Internal HTTP API (Docker container agents) ─────────────────────────
internal_router = APIRouter(prefix="/internal", tags=["internal"])
class BidRequest(BaseModel):
task_id: str
agent_id: str
bid_sats: int
capabilities: Optional[str] = ""
class BidResponse(BaseModel):
accepted: bool
task_id: str
agent_id: str
message: str
class TaskSummary(BaseModel):
task_id: str
description: str
status: str
@internal_router.get("/tasks", response_model=list[TaskSummary])
def list_biddable_tasks():
"""Return all tasks currently open for bidding."""
tasks = coordinator.list_tasks(status=TaskStatus.BIDDING)
return [
TaskSummary(
task_id=t.id,
description=t.description,
status=t.status.value,
)
for t in tasks
]
@internal_router.post("/bids", response_model=BidResponse)
def submit_bid(bid: BidRequest):
"""Accept a bid from a container agent."""
if bid.bid_sats <= 0:
raise HTTPException(status_code=422, detail="bid_sats must be > 0")
accepted = coordinator.auctions.submit_bid(
task_id=bid.task_id,
agent_id=bid.agent_id,
bid_sats=bid.bid_sats,
)
if accepted:
from swarm import stats as swarm_stats
swarm_stats.record_bid(bid.task_id, bid.agent_id, bid.bid_sats, won=False)
logger.info(
"Docker agent %s bid %d sats on task %s",
bid.agent_id, bid.bid_sats, bid.task_id,
)
return BidResponse(
accepted=True,
task_id=bid.task_id,
agent_id=bid.agent_id,
message="Bid accepted.",
)
return BidResponse(
accepted=False,
task_id=bid.task_id,
agent_id=bid.agent_id,
message="No open auction for this task — it may have already closed.",
)
# ── WebSocket live feed ──────────────────────────────────────────────────
@router.websocket("/live")
async def swarm_live(websocket: WebSocket):
"""WebSocket endpoint for live swarm event streaming."""
await ws_manager.connect(websocket)
try:
while True:
data = await websocket.receive_text()
logger.debug("WS received: %s", data[:100])
except WebSocketDisconnect:
ws_manager.disconnect(websocket)
except Exception as exc:
logger.error("WebSocket error: %s", exc)
ws_manager.disconnect(websocket)
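A container agent's side of this bidding protocol can be sketched with the HTTP transport injected as callables, so the loop runs and can be tested without a live server (`bid_on_open_tasks` and the stub payloads are assumptions for illustration, not part of the swarm package):

```python
from typing import Callable

def bid_on_open_tasks(
    fetch_tasks: Callable[[], list[dict]],
    post_bid: Callable[[dict], dict],
    agent_id: str,
    bid_sats: int = 50,
) -> list[str]:
    """Bid on every open task; return the ids whose bids were accepted."""
    accepted = []
    for task in fetch_tasks():  # stands in for GET /internal/tasks
        resp = post_bid({       # stands in for POST /internal/bids
            "task_id": task["task_id"],
            "agent_id": agent_id,
            "bid_sats": bid_sats,
        })
        if resp.get("accepted"):
            accepted.append(task["task_id"])
    return accepted

# Dry run with a stubbed transport — no server needed.
open_tasks = [{"task_id": "t1", "description": "demo", "status": "bidding"}]
print(bid_on_open_tasks(
    fetch_tasks=lambda: open_tasks,
    post_bid=lambda bid: {"accepted": bid["bid_sats"] > 0},
    agent_id="docker-agent-1",
))  # ['t1']
```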

View File

@@ -1,115 +0,0 @@
"""Internal swarm HTTP API — for Docker container agents.
Container agents can't use the in-memory SwarmComms channel, so they poll
these lightweight endpoints to participate in the auction system.
Routes
------
GET /internal/tasks
Returns all tasks currently in BIDDING status — the set an agent
can submit bids for.
POST /internal/bids
Accepts a bid from a container agent and feeds it into the in-memory
AuctionManager. The coordinator then closes auctions and assigns
winners exactly as it does for in-process agents.
These endpoints are intentionally unauthenticated because they are only
reachable inside the Docker swarm-net bridge network. Do not expose them
through a reverse proxy to the public internet.
"""
import logging
from typing import Optional
from fastapi import APIRouter, HTTPException
from pydantic import BaseModel
from swarm.coordinator import coordinator
from swarm.tasks import TaskStatus
logger = logging.getLogger(__name__)
router = APIRouter(prefix="/internal", tags=["internal"])
# ── Request / response models ─────────────────────────────────────────────────
class BidRequest(BaseModel):
task_id: str
agent_id: str
bid_sats: int
capabilities: Optional[str] = ""
class BidResponse(BaseModel):
accepted: bool
task_id: str
agent_id: str
message: str
class TaskSummary(BaseModel):
task_id: str
description: str
status: str
# ── Routes ────────────────────────────────────────────────────────────────────
@router.get("/tasks", response_model=list[TaskSummary])
def list_biddable_tasks():
"""Return all tasks currently open for bidding.
Container agents should poll this endpoint and submit bids for any
tasks they are capable of handling.
"""
tasks = coordinator.list_tasks(status=TaskStatus.BIDDING)
return [
TaskSummary(
task_id=t.id,
description=t.description,
status=t.status.value,
)
for t in tasks
]
@router.post("/bids", response_model=BidResponse)
def submit_bid(bid: BidRequest):
"""Accept a bid from a container agent.
The bid is injected directly into the in-memory AuctionManager.
If no auction is open for the task (e.g. it already closed), the
bid is rejected gracefully — the agent should just move on.
"""
if bid.bid_sats <= 0:
raise HTTPException(status_code=422, detail="bid_sats must be > 0")
accepted = coordinator.auctions.submit_bid(
task_id=bid.task_id,
agent_id=bid.agent_id,
bid_sats=bid.bid_sats,
)
if accepted:
# Persist bid in stats table for marketplace analytics
from swarm import stats as swarm_stats
swarm_stats.record_bid(bid.task_id, bid.agent_id, bid.bid_sats, won=False)
logger.info(
"Docker agent %s bid %d sats on task %s",
bid.agent_id, bid.bid_sats, bid.task_id,
)
return BidResponse(
accepted=True,
task_id=bid.task_id,
agent_id=bid.agent_id,
message="Bid accepted.",
)
return BidResponse(
accepted=False,
task_id=bid.task_id,
agent_id=bid.agent_id,
message="No open auction for this task — it may have already closed.",
)

View File

@@ -1,33 +0,0 @@
"""Swarm WebSocket route — /swarm/live endpoint.
Provides a real-time WebSocket feed of swarm events for the live
dashboard view. Clients connect and receive JSON events as they
happen: agent joins, task posts, bids, assignments, completions.
"""
import logging
from fastapi import APIRouter, WebSocket, WebSocketDisconnect
from ws_manager.handler import ws_manager
logger = logging.getLogger(__name__)
router = APIRouter(tags=["swarm-ws"])
@router.websocket("/swarm/live")
async def swarm_live(websocket: WebSocket):
"""WebSocket endpoint for live swarm event streaming."""
await ws_manager.connect(websocket)
try:
while True:
# Keep the connection alive; client can also send commands
data = await websocket.receive_text()
# Echo back as acknowledgment (future: handle client commands)
logger.debug("WS received: %s", data[:100])
except WebSocketDisconnect:
ws_manager.disconnect(websocket)
except Exception as exc:
logger.error("WebSocket error: %s", exc)
ws_manager.disconnect(websocket)

View File

@@ -24,7 +24,7 @@ from fastapi import APIRouter, Form, HTTPException, Request
from fastapi.responses import HTMLResponse, JSONResponse
from fastapi.templating import Jinja2Templates
from task_queue.models import (
from swarm.task_queue.models import (
QueueTask,
TaskPriority,
TaskStatus,
@@ -49,7 +49,7 @@ def _broadcast_task_event(event_type: str, task: QueueTask):
"""Best-effort broadcast a task event to connected WebSocket clients."""
try:
import asyncio
from ws_manager.handler import ws_manager
from infrastructure.ws_manager.handler import ws_manager
payload = {
"type": "task_event",
@@ -461,7 +461,7 @@ def _task_to_dict(task: QueueTask) -> dict:
def _notify_task_created(task: QueueTask):
try:
from notifications.push import notifier
from infrastructure.notifications.push import notifier
notifier.notify(
title="New Task",
message=f"{task.created_by} created: {task.title}",

View File

@@ -17,7 +17,7 @@ async def setup_telegram(payload: TokenPayload):
Send a POST with JSON body: {"token": "<your-bot-token>"}
Get the token from @BotFather on Telegram.
"""
from telegram_bot.bot import telegram_bot
from integrations.telegram_bot.bot import telegram_bot
token = payload.token.strip()
if not token:
@@ -43,7 +43,7 @@ async def setup_telegram(payload: TokenPayload):
@router.get("/status")
async def telegram_status():
"""Return the current state of the Telegram bot."""
from telegram_bot.bot import telegram_bot
from integrations.telegram_bot.bot import telegram_bot
return {
"running": telegram_bot.is_running,

View File

@@ -6,8 +6,8 @@ from fastapi import APIRouter, Form, HTTPException, Request
from fastapi.responses import HTMLResponse, JSONResponse
from fastapi.templating import Jinja2Templates
from upgrades.models import list_upgrades, get_upgrade, UpgradeStatus, get_pending_count
from upgrades.queue import UpgradeQueue
from self_coding.upgrades.models import list_upgrades, get_upgrade, UpgradeStatus, get_pending_count
from self_coding.upgrades.queue import UpgradeQueue
router = APIRouter(prefix="/self-modify", tags=["upgrades"])
templates = Jinja2Templates(directory=str(Path(__file__).parent.parent / "templates"))

View File

@@ -1,12 +1,17 @@
"""Voice routes — /voice/* endpoints.
"""Voice routes — /voice/* and /voice/enhanced/* endpoints.
Provides NLU intent detection and TTS control endpoints for the
voice interface.
Provides NLU intent detection, TTS control, and the full voice-to-action
pipeline (detect intent → execute → optionally speak).
"""
import logging
from fastapi import APIRouter, Form
from voice.nlu import detect_intent, extract_command
from integrations.voice.nlu import detect_intent, extract_command
from timmy.agent import create_timmy
logger = logging.getLogger(__name__)
router = APIRouter(prefix="/voice", tags=["voice"])
@@ -49,3 +54,103 @@ async def tts_speak(text: str = Form(...)):
return {"spoken": True, "text": text}
except Exception as exc:
return {"spoken": False, "reason": str(exc)}
# ── Enhanced voice pipeline ──────────────────────────────────────────────
@router.post("/enhanced/process")
async def process_voice_input(
text: str = Form(...),
speak_response: bool = Form(False),
):
"""Process a voice input: detect intent -> execute -> optionally speak.
This is the main entry point for voice-driven interaction with Timmy.
"""
intent = detect_intent(text)
response_text = None
error = None
try:
if intent.name == "status":
response_text = "Timmy is operational and running locally. All systems sovereign."
elif intent.name == "help":
response_text = (
"Available commands: chat with me, check status, "
"manage the swarm, create tasks, or adjust voice settings. "
"Everything runs locally — no cloud, no permission needed."
)
elif intent.name == "swarm":
from swarm.coordinator import coordinator
status = coordinator.status()
response_text = (
f"Swarm status: {status['agents']} agents registered, "
f"{status['agents_idle']} idle, {status['agents_busy']} busy. "
f"{status['tasks_total']} total tasks, "
f"{status['tasks_completed']} completed."
)
elif intent.name == "voice":
response_text = "Voice settings acknowledged. TTS is available for spoken responses."
elif intent.name == "code":
from config import settings as app_settings
if not app_settings.self_modify_enabled:
response_text = (
"Self-modification is disabled. "
"Set SELF_MODIFY_ENABLED=true to enable."
)
else:
import asyncio
from self_coding.self_modify.loop import SelfModifyLoop, ModifyRequest
target_files = []
if "target_file" in intent.entities:
target_files = [intent.entities["target_file"]]
loop = SelfModifyLoop()
request = ModifyRequest(
instruction=text,
target_files=target_files,
)
result = await asyncio.to_thread(loop.run, request)
if result.success:
sha_short = result.commit_sha[:8] if result.commit_sha else "none"
response_text = (
f"Code modification complete. "
f"Changed {len(result.files_changed)} file(s). "
f"Tests passed. Committed as {sha_short} "
f"on branch {result.branch_name}."
)
else:
response_text = f"Code modification failed: {result.error}"
else:
# Default: chat with Timmy
agent = create_timmy()
run = agent.run(text, stream=False)
response_text = run.content if hasattr(run, "content") else str(run)
except Exception as exc:
error = f"Processing failed: {exc}"
logger.error("Voice processing error: %s", exc)
# Optionally speak the response
if speak_response and response_text:
try:
from timmy_serve.voice_tts import voice_tts
if voice_tts.available:
voice_tts.speak(response_text)
except Exception:
pass
return {
"intent": intent.name,
"confidence": intent.confidence,
"response": response_text,
"error": error,
"spoken": speak_response and response_text is not None,
}
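The if/elif chain above is effectively a dispatch table keyed by intent name, with the chat agent as the fallback — a minimal sketch (handler bodies here are placeholder strings, not the real implementations, which call into the coordinator, settings, and the self-modify loop):

```python
def route_intent(intent_name: str) -> str:
    handlers = {
        "status": lambda: "status report",
        "help": lambda: "command list",
        "swarm": lambda: "swarm summary",
        "voice": lambda: "voice settings",
        "code": lambda: "self-modify",
    }
    # Unknown intents fall through to the chat agent, matching the else branch.
    return handlers.get(intent_name, lambda: "chat")()

print(route_intent("swarm"))    # swarm summary
print(route_intent("weather"))  # chat
```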

View File

@@ -1,116 +0,0 @@
"""Enhanced voice routes — /voice/enhanced/* endpoints.
Combines NLU intent detection with Timmy agent execution to provide
a complete voice-to-action pipeline. Detects the intent, routes to
the appropriate handler, and optionally speaks the response.
"""
import logging
from typing import Optional
from fastapi import APIRouter, Form
from voice.nlu import detect_intent
from timmy.agent import create_timmy
logger = logging.getLogger(__name__)
router = APIRouter(prefix="/voice/enhanced", tags=["voice-enhanced"])
@router.post("/process")
async def process_voice_input(
text: str = Form(...),
speak_response: bool = Form(False),
):
"""Process a voice input: detect intent → execute → optionally speak.
This is the main entry point for voice-driven interaction with Timmy.
"""
intent = detect_intent(text)
response_text = None
error = None
try:
if intent.name == "status":
response_text = "Timmy is operational and running locally. All systems sovereign."
elif intent.name == "help":
response_text = (
"Available commands: chat with me, check status, "
"manage the swarm, create tasks, or adjust voice settings. "
"Everything runs locally — no cloud, no permission needed."
)
elif intent.name == "swarm":
from swarm.coordinator import coordinator
status = coordinator.status()
response_text = (
f"Swarm status: {status['agents']} agents registered, "
f"{status['agents_idle']} idle, {status['agents_busy']} busy. "
f"{status['tasks_total']} total tasks, "
f"{status['tasks_completed']} completed."
)
elif intent.name == "voice":
response_text = "Voice settings acknowledged. TTS is available for spoken responses."
elif intent.name == "code":
from config import settings as app_settings
if not app_settings.self_modify_enabled:
response_text = (
"Self-modification is disabled. "
"Set SELF_MODIFY_ENABLED=true to enable."
)
else:
import asyncio
from self_modify.loop import SelfModifyLoop, ModifyRequest
target_files = []
if "target_file" in intent.entities:
target_files = [intent.entities["target_file"]]
loop = SelfModifyLoop()
request = ModifyRequest(
instruction=text,
target_files=target_files,
)
result = await asyncio.to_thread(loop.run, request)
if result.success:
sha_short = result.commit_sha[:8] if result.commit_sha else "none"
response_text = (
f"Code modification complete. "
f"Changed {len(result.files_changed)} file(s). "
f"Tests passed. Committed as {sha_short} "
f"on branch {result.branch_name}."
)
else:
response_text = f"Code modification failed: {result.error}"
else:
# Default: chat with Timmy
agent = create_timmy()
run = agent.run(text, stream=False)
response_text = run.content if hasattr(run, "content") else str(run)
except Exception as exc:
error = f"Processing failed: {exc}"
logger.error("Voice processing error: %s", exc)
# Optionally speak the response
if speak_response and response_text:
try:
from timmy_serve.voice_tts import voice_tts
if voice_tts.available:
voice_tts.speak(response_text)
except Exception:
pass
return {
"intent": intent.name,
"confidence": intent.confidence,
"response": response_text,
"error": error,
"spoken": speak_response and response_text is not None,
}

View File

@@ -8,7 +8,7 @@ from fastapi import APIRouter, Form, HTTPException, Request
from fastapi.responses import HTMLResponse, JSONResponse
from fastapi.templating import Jinja2Templates
from work_orders.models import (
from swarm.work_orders.models import (
WorkOrder,
WorkOrderCategory,
WorkOrderPriority,
@@ -20,7 +20,7 @@ from work_orders.models import (
list_work_orders,
update_work_order_status,
)
from work_orders.risk import compute_risk_score, should_auto_execute
from swarm.work_orders.risk import compute_risk_score, should_auto_execute
logger = logging.getLogger(__name__)
@@ -68,7 +68,7 @@ async def submit_work_order(
# Notify
try:
from notifications.push import notifier
from infrastructure.notifications.push import notifier
notifier.notify(
title="New Work Order",
message=f"{wo.submitter} submitted: {wo.title}",
@@ -116,7 +116,7 @@ async def submit_work_order_json(request: Request):
)
try:
-from notifications.push import notifier
+from infrastructure.notifications.push import notifier
notifier.notify(
title="New Work Order",
message=f"{wo.submitter} submitted: {wo.title}",
@@ -315,7 +315,7 @@ async def execute_order(wo_id: str):
update_work_order_status(wo_id, WorkOrderStatus.IN_PROGRESS)
try:
-from work_orders.executor import work_order_executor
+from swarm.work_orders.executor import work_order_executor
success, result = work_order_executor.execute(wo)
if success:
update_work_order_status(wo_id, WorkOrderStatus.COMPLETED, result=result)
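The hunks above relocate `work_orders` under the `swarm` package and `notifications` under `infrastructure`. If any out-of-tree script still imports the old paths, a temporary alias shim can keep it working during the transition. This is a sketch, not part of this commit; the stand-in module names (`swarm_demo.work_orders`, `work_orders_demo`) are invented so the example runs anywhere:

```python
import importlib
import sys
import types


def alias_module(old_name: str, new_name: str) -> None:
    """Let a legacy `import old_name` resolve to the relocated new_name."""
    # import_module returns the entry already registered in sys.modules.
    sys.modules[old_name] = importlib.import_module(new_name)


# Stand-in for the relocated module so the sketch is self-contained:
moved = types.ModuleType("swarm_demo.work_orders")
moved.MARKER = "moved"
sys.modules["swarm_demo.work_orders"] = moved

alias_module("work_orders_demo", "swarm_demo.work_orders")

import work_orders_demo  # old-style import now resolves to the moved module
```

In the real tree this would live in a small `work_orders/__init__.py` stub that warns and re-exports, then gets deleted once all call sites are migrated.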

View File

@@ -39,12 +39,14 @@
<a href="/lightning/ledger" class="mc-test-link">LEDGER</a>
<a href="/memory" class="mc-test-link">MEMORY</a>
<a href="/router/status" class="mc-test-link">ROUTER</a>
<a href="/grok/status" class="mc-test-link" style="color:#00ff88;">GROK</a>
<a href="/self-modify/queue" class="mc-test-link">UPGRADES</a>
<a href="/self-coding" class="mc-test-link">SELF-CODING</a>
<a href="/hands" class="mc-test-link">HANDS</a>
<a href="/work-orders/queue" class="mc-test-link">WORK ORDERS</a>
<a href="/creative/ui" class="mc-test-link">CREATIVE</a>
<a href="/mobile" class="mc-test-link" title="Mobile-optimized view">MOBILE</a>
<a href="/mobile/local" class="mc-test-link" title="Local AI on iPhone">LOCAL AI</a>
<button id="enable-notifications" class="mc-test-link" style="background:none;cursor:pointer;" title="Enable notifications">&#x1F514;</button>
<span class="mc-time" id="clock"></span>
</div>
@@ -78,6 +80,7 @@
<a href="/creative/ui" class="mc-mobile-link">CREATIVE</a>
<a href="/voice/button" class="mc-mobile-link">VOICE</a>
<a href="/mobile" class="mc-mobile-link">MOBILE</a>
<a href="/mobile/local" class="mc-mobile-link">LOCAL AI</a>
<div class="mc-mobile-menu-footer">
<button id="enable-notifications-mobile" class="mc-mobile-link" style="background:none;border:none;cursor:pointer;width:100%;text-align:left;font:inherit;color:inherit;padding:inherit;">&#x1F514; NOTIFICATIONS</button>
</div>

View File

@@ -59,10 +59,61 @@
</div>
</div>
<!-- Grok Mode Toggle -->
<div class="card" style="margin-top: 24px;">
<div class="card-header">
<h2 class="card-title">Grok Mode</h2>
<div>
<span class="badge" id="grok-badge" style="background: #666;">STANDBY</span>
</div>
</div>
<div id="grok-toggle-card"
hx-get="/grok/status"
hx-trigger="load"
hx-target="#grok-toggle-card"
hx-swap="innerHTML">
<div style="border: 2px solid #666; border-radius: 12px; padding: 16px;
background: var(--bg-secondary);">
<div style="display: flex; justify-content: space-between; align-items: center;">
<div>
<div style="font-weight: 700; font-size: 1.1rem; color: #666;">
GROK MODE: LOADING...
</div>
<div style="font-size: 0.8rem; color: var(--text-muted); margin-top: 4px;">
xAI frontier reasoning augmentation
</div>
</div>
<button hx-post="/grok/toggle"
hx-target="#grok-toggle-card"
hx-swap="outerHTML"
style="background: #666; color: #000; border: none;
border-radius: 8px; padding: 8px 20px; cursor: pointer;
font-weight: 700; font-family: inherit;">
ACTIVATE
</button>
</div>
</div>
</div>
<div class="grid grid-3" style="margin-top: 12px;">
<div class="stat">
<div class="stat-value" id="grok-requests">0</div>
<div class="stat-label">Grok Queries</div>
</div>
<div class="stat">
<div class="stat-value" id="grok-tokens">0</div>
<div class="stat-label">Tokens Used</div>
</div>
<div class="stat">
<div class="stat-value" id="grok-cost">0</div>
<div class="stat-label">Est. Cost (sats)</div>
</div>
</div>
</div>
<!-- Heartbeat Monitor -->
<div class="card" style="margin-top: 24px;">
<div class="card-header">
<h2 class="card-title">💓 Heartbeat Monitor</h2>
<h2 class="card-title">Heartbeat Monitor</h2>
<div>
<span class="badge" id="heartbeat-status">Checking...</span>
</div>
@@ -318,11 +369,40 @@ async function loadChatHistory() {
}
}
// Load Grok stats
async function loadGrokStats() {
try {
const response = await fetch('/grok/status');
const data = await response.json();
if (data.stats) {
document.getElementById('grok-requests').textContent = data.stats.total_requests || 0;
document.getElementById('grok-tokens').textContent =
(data.stats.total_prompt_tokens || 0) + (data.stats.total_completion_tokens || 0);
document.getElementById('grok-cost').textContent = data.stats.estimated_cost_sats || 0;
}
const badge = document.getElementById('grok-badge');
if (data.active) {
badge.textContent = 'ACTIVE';
badge.style.background = '#00ff88';
badge.style.color = '#000';
} else {
badge.textContent = 'STANDBY';
badge.style.background = '#666';
badge.style.color = '#fff';
}
} catch (error) {
// Grok endpoint may not respond — silent fallback
}
}
// Initial load
loadSovereignty();
loadHealth();
loadSwarmStats();
loadLightningStats();
loadGrokStats();
loadChatHistory();
// Periodic updates
@@ -330,5 +410,6 @@ setInterval(loadSovereignty, 30000); // Every 30s
setInterval(loadHealth, 10000); // Every 10s
setInterval(loadSwarmStats, 5000); // Every 5s
setInterval(updateHeartbeat, 5000); // Heartbeat every 5s
setInterval(loadGrokStats, 10000); // Grok stats every 10s
</script>
{% endblock %}
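The `loadGrokStats()` function in the template above expects a specific JSON shape from `/grok/status` (`active`, plus `stats.total_requests`, `stats.total_prompt_tokens`, `stats.total_completion_tokens`, `stats.estimated_cost_sats`). A server-side sketch of building that payload follows; the field names come from the JavaScript, but the helper itself and the flat sats-per-1k-token rate are assumptions, not the project's actual pricing model:

```python
def grok_status_payload(active: bool, requests: int,
                        prompt_tokens: int, completion_tokens: int,
                        sats_per_1k_tokens: float = 10.0) -> dict:
    """Build the JSON shape the dashboard's loadGrokStats() reads."""
    total_tokens = prompt_tokens + completion_tokens
    return {
        "active": active,
        "stats": {
            "total_requests": requests,
            "total_prompt_tokens": prompt_tokens,
            "total_completion_tokens": completion_tokens,
            # Hypothetical flat rate; a real backend would price per model.
            "estimated_cost_sats": round(total_tokens / 1000 * sats_per_1k_tokens),
        },
    }


payload = grok_status_payload(active=True, requests=3,
                              prompt_tokens=1200, completion_tokens=800)
```

Keeping the cost estimate server-side also lets the `GROK_MAX_SATS_PER_QUERY` ceiling be enforced in one place rather than in the dashboard.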

View File

@@ -0,0 +1,546 @@
{% extends "base.html" %}
{% block title %}{{ page_title }}{% endblock %}
{% block extra_styles %}
<style>
.local-wrap {
display: flex;
flex-direction: column;
gap: 12px;
padding-bottom: 20px;
max-width: 600px;
margin: 0 auto;
}
/* ── Model status panel ────────────────────────────────────── */
.model-status {
padding: 14px;
display: flex;
flex-direction: column;
gap: 10px;
}
.model-status-row {
display: flex;
justify-content: space-between;
align-items: center;
font-size: 11px;
letter-spacing: 0.08em;
}
.model-status-label { color: var(--text-dim); }
.model-status-value { color: var(--text-bright); font-weight: 600; }
.model-status-value.ready { color: #4ade80; }
.model-status-value.loading { color: #facc15; }
.model-status-value.error { color: #f87171; }
.model-status-value.offline { color: var(--text-dim); }
/* ── Progress bar ──────────────────────────────────────────── */
.progress-wrap {
display: none;
flex-direction: column;
gap: 6px;
padding: 0 14px 14px;
}
.progress-wrap.active { display: flex; }
.progress-bar-outer {
height: 6px;
background: rgba(8, 4, 18, 0.75);
border-radius: 3px;
overflow: hidden;
}
.progress-bar-inner {
height: 100%;
width: 0%;
background: linear-gradient(90deg, var(--border-glow), #a78bfa);
border-radius: 3px;
transition: width 0.3s;
}
.progress-text {
font-size: 10px;
color: var(--text-dim);
letter-spacing: 0.06em;
min-height: 14px;
}
/* ── Model selector ────────────────────────────────────────── */
.model-select-wrap {
padding: 0 14px 14px;
}
.model-select {
width: 100%;
background: rgba(8, 4, 18, 0.75);
border: 1px solid var(--border);
border-radius: var(--radius-md);
color: var(--text-bright);
font-family: var(--font);
font-size: 13px;
padding: 10px 12px;
min-height: 44px;
appearance: none;
-webkit-appearance: none;
background-image: url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' width='12' height='12' fill='%237c7c8a' viewBox='0 0 16 16'%3E%3Cpath d='M8 11L3 6h10z'/%3E%3C/svg%3E");
background-repeat: no-repeat;
background-position: right 12px center;
touch-action: manipulation;
}
.model-select:focus {
outline: none;
border-color: var(--border-glow);
}
/* ── Action buttons ────────────────────────────────────────── */
.model-actions {
display: flex;
gap: 8px;
padding: 0 14px 14px;
}
.model-btn {
flex: 1;
display: flex;
align-items: center;
justify-content: center;
gap: 6px;
min-height: 44px;
border-radius: var(--radius-md);
font-family: var(--font);
font-size: 12px;
font-weight: 700;
letter-spacing: 0.08em;
border: 1px solid var(--border);
background: rgba(24, 10, 45, 0.6);
color: var(--text-bright);
cursor: pointer;
transition: transform 0.1s, border-color 0.2s;
touch-action: manipulation;
-webkit-tap-highlight-color: transparent;
}
.model-btn:active { transform: scale(0.96); }
.model-btn.primary {
border-color: var(--border-glow);
background: rgba(124, 58, 237, 0.2);
}
.model-btn:disabled {
opacity: 0.4;
cursor: not-allowed;
}
/* ── Chat area ─────────────────────────────────────────────── */
.local-chat-wrap {
flex: 1;
display: flex;
flex-direction: column;
min-height: 0;
}
.local-chat-log {
flex: 1;
overflow-y: auto;
-webkit-overflow-scrolling: touch;
padding: 14px;
max-height: 400px;
min-height: 200px;
}
.local-chat-input {
display: flex;
gap: 8px;
padding: 10px 14px;
padding-bottom: max(10px, env(safe-area-inset-bottom));
background: rgba(24, 10, 45, 0.9);
border-top: 1px solid var(--border);
}
.local-chat-input input {
flex: 1;
background: rgba(8, 4, 18, 0.75);
border: 1px solid var(--border);
border-radius: var(--radius-md);
color: var(--text-bright);
font-family: var(--font);
font-size: 16px;
padding: 10px 12px;
min-height: 44px;
}
.local-chat-input input:focus {
outline: none;
border-color: var(--border-glow);
box-shadow: 0 0 0 1px var(--border-glow), 0 0 8px rgba(124, 58, 237, 0.2);
}
.local-chat-input input::placeholder { color: var(--text-dim); }
.local-chat-input button {
background: var(--border-glow);
border: none;
border-radius: var(--radius-md);
color: var(--text-bright);
font-family: var(--font);
font-size: 12px;
font-weight: 700;
padding: 0 16px;
min-height: 44px;
min-width: 64px;
letter-spacing: 0.1em;
transition: background 0.15s, transform 0.1s;
touch-action: manipulation;
}
.local-chat-input button:active { transform: scale(0.96); }
.local-chat-input button:disabled { opacity: 0.4; }
/* ── Chat messages ─────────────────────────────────────────── */
.local-msg { margin-bottom: 12px; }
.local-msg .meta {
font-size: 10px;
letter-spacing: 0.1em;
margin-bottom: 3px;
}
.local-msg.user .meta { color: var(--orange); }
.local-msg.timmy .meta { color: var(--purple); }
.local-msg.system .meta { color: var(--text-dim); }
.local-msg .bubble {
background: rgba(24, 10, 45, 0.8);
border: 1px solid var(--border);
border-radius: var(--radius-md);
padding: 10px 12px;
font-size: 13px;
line-height: 1.6;
color: var(--text);
word-break: break-word;
}
.local-msg.timmy .bubble { border-left: 3px solid var(--purple); }
.local-msg.user .bubble { border-color: var(--border-glow); }
.local-msg.system .bubble {
border-color: transparent;
background: rgba(8, 4, 18, 0.5);
font-size: 11px;
color: var(--text-dim);
}
/* ── Backend badge ─────────────────────────────────────────── */
.backend-badge {
display: inline-block;
font-size: 9px;
letter-spacing: 0.1em;
padding: 2px 6px;
border-radius: 3px;
vertical-align: middle;
margin-left: 6px;
}
.backend-badge.local {
background: rgba(74, 222, 128, 0.15);
color: #4ade80;
border: 1px solid rgba(74, 222, 128, 0.3);
}
.backend-badge.server {
background: rgba(250, 204, 21, 0.15);
color: #facc15;
border: 1px solid rgba(250, 204, 21, 0.3);
}
/* ── Stats panel ───────────────────────────────────────────── */
.model-stats {
padding: 0 14px 14px;
font-size: 10px;
color: var(--text-dim);
letter-spacing: 0.06em;
display: none;
}
.model-stats.visible { display: block; }
</style>
{% endblock %}
{% block content %}
<div class="local-wrap">
<!-- Model Status Panel -->
<div class="card mc-panel">
<div class="card-header mc-panel-header">// LOCAL AI MODEL</div>
<div class="model-status">
<div class="model-status-row">
<span class="model-status-label">STATUS</span>
<span class="model-status-value offline" id="model-state">NOT LOADED</span>
</div>
<div class="model-status-row">
<span class="model-status-label">BACKEND</span>
<span class="model-status-value" id="model-backend">DETECTING...</span>
</div>
<div class="model-status-row">
<span class="model-status-label">INFERENCE</span>
<span class="model-status-value" id="inference-mode">--</span>
</div>
</div>
<!-- Model selector -->
<div class="model-select-wrap">
<select class="model-select" id="model-select" aria-label="Select model"></select>
</div>
<!-- Progress bar -->
<div class="progress-wrap" id="progress-wrap">
<div class="progress-bar-outer">
<div class="progress-bar-inner" id="progress-bar"></div>
</div>
<div class="progress-text" id="progress-text"></div>
</div>
<!-- Actions -->
<div class="model-actions">
<button class="model-btn primary" id="btn-load" onclick="loadModel()">LOAD MODEL</button>
<button class="model-btn" id="btn-unload" onclick="unloadModel()" disabled>UNLOAD</button>
</div>
<!-- Stats -->
<div class="model-stats" id="model-stats"></div>
</div>
<!-- Chat -->
<div class="card mc-panel local-chat-wrap">
<div class="card-header mc-panel-header">
// TIMMY <span class="backend-badge local" id="chat-backend-badge" style="display:none">LOCAL</span>
</div>
<div class="local-chat-log" id="local-chat">
<div class="local-msg system">
<div class="meta">SYSTEM</div>
<div class="bubble">
Load a model above to chat with Timmy locally on your device.
No server connection required.
{% if browser_model_fallback %}
Server fallback is enabled — if the local model fails, Timmy
will try the server instead.
{% endif %}
</div>
</div>
</div>
<form onsubmit="sendLocalMessage(event)" class="local-chat-input">
<input type="text"
id="local-message"
placeholder="Message Timmy..."
required
autocomplete="off"
autocapitalize="none"
autocorrect="off"
spellcheck="false"
enterkeyhint="send" />
<button type="submit" id="btn-send" disabled>SEND</button>
</form>
</div>
</div>
<script src="/static/local_llm.js"></script>
<script>
// ── State ──────────────────────────────────────────────────────────────────
let llm = null;
const serverFallback = {{ browser_model_fallback | tojson }};
const defaultModelId = {{ browser_model_id | tojson }};
// ── DOM refs ───────────────────────────────────────────────────────────────
const elState = document.getElementById('model-state');
const elBackend = document.getElementById('model-backend');
const elInference = document.getElementById('inference-mode');
const elSelect = document.getElementById('model-select');
const elProgress = document.getElementById('progress-wrap');
const elBar = document.getElementById('progress-bar');
const elProgressTx = document.getElementById('progress-text');
const elBtnLoad = document.getElementById('btn-load');
const elBtnUnload = document.getElementById('btn-unload');
const elBtnSend = document.getElementById('btn-send');
const elChat = document.getElementById('local-chat');
const elInput = document.getElementById('local-message');
const elBadge = document.getElementById('chat-backend-badge');
const elStats = document.getElementById('model-stats');
// ── Populate model selector ────────────────────────────────────────────────
(function populateModels() {
const catalogue = window.LOCAL_MODEL_CATALOGUE || [];
catalogue.forEach(function(m) {
const opt = document.createElement('option');
opt.value = m.id;
opt.textContent = m.label + ' (' + m.sizeHint + ')';
if (m.id === defaultModelId) opt.selected = true;
elSelect.appendChild(opt);
});
})();
// ── Detect capabilities ────────────────────────────────────────────────────
(function detectCaps() {
const supported = LocalLLM.isSupported();
const hasGPU = typeof navigator !== 'undefined' && 'gpu' in navigator;
elBackend.textContent = supported ? (hasGPU ? 'WebGPU' : 'WASM') : 'UNSUPPORTED';
if (!supported) {
elBackend.classList.add('error');
elBtnLoad.disabled = true;
addSystemMessage('Your browser does not support WebGPU or WebAssembly. Update to iOS 26+ / Safari 26+ for local AI.');
}
})();
// ── Load model ─────────────────────────────────────────────────────────────
async function loadModel() {
if (llm && llm.ready) {
await unloadModel();
}
const modelId = elSelect.value;
elBtnLoad.disabled = true;
elBtnUnload.disabled = true;
elBtnSend.disabled = true;
elProgress.classList.add('active');
setState('loading', 'DOWNLOADING...');
llm = new LocalLLM({
modelId: modelId,
onProgress: function(report) {
if (report.progress !== undefined) {
const pct = Math.round(report.progress * 100);
elBar.style.width = pct + '%';
elProgressTx.textContent = report.text || (pct + '%');
} else if (report.text) {
elProgressTx.textContent = report.text;
}
},
onReady: function() {
setState('ready', 'READY');
elProgress.classList.remove('active');
elBtnLoad.disabled = false;
elBtnUnload.disabled = false;
elBtnSend.disabled = false;
elBadge.style.display = '';
elBadge.className = 'backend-badge local';
elBadge.textContent = 'LOCAL';
elInference.textContent = 'ON-DEVICE';
elInput.focus();
addSystemMessage('Model loaded. Timmy is running locally on your device — fully offline, fully sovereign.');
updateStats();
},
onError: function(err) {
setState('error', 'FAILED');
elProgress.classList.remove('active');
elBtnLoad.disabled = false;
addSystemMessage('Failed to load model: ' + err.message);
if (serverFallback) {
addSystemMessage('Server fallback enabled. Chat will use the server instead.');
elBtnSend.disabled = false;
elBadge.style.display = '';
elBadge.className = 'backend-badge server';
elBadge.textContent = 'SERVER';
elInference.textContent = 'SERVER';
}
},
});
try {
await llm.init();
} catch (e) {
// Error handled by onError callback
}
}
// ── Unload model ───────────────────────────────────────────────────────────
async function unloadModel() {
if (llm) {
await llm.unload();
llm = null;
}
setState('offline', 'NOT LOADED');
elBtnUnload.disabled = true;
elBtnSend.disabled = true;
elBadge.style.display = 'none';
elInference.textContent = '--';
elStats.classList.remove('visible');
}
// ── Send message ───────────────────────────────────────────────────────────
async function sendLocalMessage(event) {
event.preventDefault();
const message = elInput.value.trim();
if (!message) return;
addMessage('user', 'YOU', message);
elInput.value = '';
elBtnSend.disabled = true;
// Try local model first
if (llm && llm.ready) {
try {
const replyBubble = addMessage('timmy', 'TIMMY (LOCAL)', '');
let fullText = '';
await llm.chat(message, {
onToken: function(delta, accumulated) {
fullText = accumulated;
replyBubble.textContent = fullText;
elChat.scrollTop = elChat.scrollHeight;
}
});
if (!fullText) {
// No tokens streamed; fall back to a single non-streaming call.
replyBubble.textContent = await llm.chat(message);
}
elBtnSend.disabled = false;
updateStats();
return;
} catch (err) {
addSystemMessage('Local inference failed: ' + err.message);
if (!serverFallback) {
elBtnSend.disabled = false;
return;
}
}
}
// Server fallback
if (serverFallback) {
try {
const response = await fetch('/agents/timmy/chat', {
method: 'POST',
headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
body: 'message=' + encodeURIComponent(message)
});
const html = await response.text();
const parser = new DOMParser();
const doc = parser.parseFromString(html, 'text/html');
const timmyResponse = doc.querySelector('.chat-message.timmy, .msg-body');
const text = timmyResponse ? timmyResponse.textContent.trim() : 'Response received.';
addMessage('timmy', 'TIMMY (SERVER)', text);
} catch (e) {
addMessage('timmy', 'TIMMY', 'Sorry, both local and server inference failed. Check your connection.');
}
} else {
addMessage('system', 'SYSTEM', 'Load a model to start chatting.');
}
elBtnSend.disabled = false;
}
// ── Helpers ────────────────────────────────────────────────────────────────
function setState(cls, text) {
elState.className = 'model-status-value ' + cls;
elState.textContent = text;
}
function addMessage(type, label, text) {
const div = document.createElement('div');
div.className = 'local-msg ' + type;
const meta = document.createElement('div');
meta.className = 'meta';
meta.textContent = label;
const bubble = document.createElement('div');
bubble.className = 'bubble';
bubble.textContent = text;
div.appendChild(meta);
div.appendChild(bubble);
elChat.appendChild(div);
elChat.scrollTop = elChat.scrollHeight;
return bubble;
}
function addSystemMessage(text) {
addMessage('system', 'SYSTEM', text);
}
async function updateStats() {
if (!llm) return;
try {
const stats = await llm.getStats();
if (stats) {
elStats.textContent = stats;
elStats.classList.add('visible');
}
} catch (e) {
// Stats are optional
}
}
</script>
{% endblock %}
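The new template above is rendered with three Jinja variables: `page_title`, `browser_model_id`, and `browser_model_fallback` (the latter two are emitted via `| tojson` into the script block). A sketch of the route-side context builder follows; the key names are taken from the template, while the helper, the env-style settings dict, and the default model id are assumptions for illustration:

```python
def local_ai_context(settings: dict) -> dict:
    """Template context for the local-AI page (keys match the Jinja vars)."""
    return {
        "page_title": "LOCAL AI",
        # Hypothetical default; the real catalogue lives in local_llm.js.
        "browser_model_id": settings.get(
            "BROWSER_MODEL_ID", "Llama-3.2-1B-Instruct-q4f16_1-MLC"
        ),
        # Env values arrive as strings; normalize to a real bool so
        # `| tojson` emits JavaScript true/false, not a quoted string.
        "browser_model_fallback": str(
            settings.get("BROWSER_MODEL_FALLBACK", "true")
        ).lower() == "true",
    }


ctx = local_ai_context({"BROWSER_MODEL_FALLBACK": "false"})
```

The bool normalization matters: if `browser_model_fallback` were passed as the string `"false"`, `{{ browser_model_fallback | tojson }}` would render `"false"`, which is truthy in JavaScript and would silently enable the server fallback.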

Some files were not shown because too many files have changed in this diff.