forked from Rockachopa/Timmy-time-dashboard
Compare commits: feature/ip...fix/csrf-c
129 Commits
Commit SHA1s in this range (the Author and Date columns were empty in this capture):

59e789e2da d2a5866650 2381d0b6d0 03ad2027a4 2bfc44ea1b fe1fa78ef1 3c46a1b202 001358c64f
faad0726a2 dd4410fe57 ef7f31070b 6f66670396 4cdd82818b 99ad672e4d a3f61c67d3 32dbdc68c8
84302aedac 2c217104db 7452e8a4f0 9732c80892 f3b3d1e648 4ba8d25749 2622f0a0fb e3d60b89a9
6214ad3225 5f5da2163f 0029c34bb1 2577b71207 1a8b8ecaed d821e76589 bc010ecfba faf6c1a5f1
48103bb076 9f244ffc70 0162a604be 2326771c5a 8f6cf2681b f361893fdd 7ad0ee17b6 29220b6bdd
2849dba756 e11e07f117 50c8a5428e 7da434c85b 88e59f7c17 aa5e9c3176 1b4fe65650 2d69f73d9d
ff1e43c235 b331aa6139 b45b543f2d 7c823ab59c 9f2728f529 cd3dc5d989 e4de539bf3 b2057f72e1
5f52dd54c0 9ceffd61d1 015d858be5 b6d0b5f999 d70e4f810a 7f20742fcf 15eb7c3b45 dbc2fd5b0f
3c3aca57f1 0ae00af3f8 3df526f6ef 50aaf60db2 a751be3038 92594ea588 12582ab593 72c3a0a989
de089cec7f 3590c1689e 2161c32ae8 98b1142820 1d79a36bd8 cce311dbb8 3cde310c78 cdb1a7546b
a31c929770 3afb62afb7 332fa373b8 76b26ead55 63e4542f31 9b8ad3629a 4b617cfcd0 b67dbe922f
3571d528ad ab3546ae4b e89aef41bc 86224d042d 2209ac82d2 f9d8509c15 858264be0d 3c10da489b
da43421d4e aa4f1de138 19e7e61c92 b7573432cc 3108971bd5 864be20dde c1f939ef22 c1af9e3905
996ccec170 560aed78c3 c7198b1254 43efb01c51 ce658c841a db7220db5a ae10ea782d 4afc5daffb
4aa86ff1cb dff07c6529 11357ffdb4 fcbb2b848b 6621f4bd31 243b1a656f 22e0d2d4b3 bcc7b068a4
bfd924fe74 844923b16b 8ef0ad1778 9a21a4b0ff ab71c71036 39939270b7 0ab1ee9378 234187c091
f4106452d2
.gitignore (vendored, 20 changes)

@@ -21,6 +21,9 @@ discord_credentials.txt

# Backup / temp files
*~
\#*\#
*.backup
*.tar.gz

# SQLite — never commit databases or WAL/SHM artifacts
*.db

@@ -73,6 +76,23 @@ scripts/migrate_to_zeroclaw.py
src/infrastructure/db_pool.py
workspace/

# Loop orchestration state
.loop/

# Legacy junk from old Timmy sessions (one-word fragments, cruft)
Hi
Im Timmy*
his
keep
clean
directory
my_name_is_timmy*
timmy_read_me_*
issue_12_proposal.md

# Memory notes (session-scoped, not committed)
memory/notes/

# Gitea Actions runner state
.runner
@@ -54,19 +54,6 @@ providers:
        context_window: 2048
        capabilities: [text, vision, streaming]

  # Secondary: Local AirLLM (if installed)
  - name: airllm-local
    type: airllm
    enabled: false  # Enable if pip install airllm
    priority: 2
    models:
      - name: 70b
        default: true
        capabilities: [text, tools, json, streaming]
      - name: 8b
        capabilities: [text, tools, json, streaming]
      - name: 405b
        capabilities: [text, tools, json, streaming]

  # Tertiary: OpenAI (if API key available)
  - name: openai-backup
docs/adr/023-workshop-presence-schema.md (new file, 180 lines)

@@ -0,0 +1,180 @@
# ADR-023: Workshop Presence Schema

**Status:** Accepted
**Date:** 2026-03-18
**Issue:** #265
**Epic:** #222 (The Workshop)

## Context

The Workshop renders Timmy as a living presence in a 3D world. It needs to
know what Timmy is doing *right now* — his working memory, not his full
identity or history. This schema defines the contract between Timmy (writer)
and the Workshop (reader).

### The Tower IS the Workshop

The 3D world renderer lives in `the-matrix/` within `token-gated-economy`,
served at `/tower` by the API server (`artifacts/api-server`). This is the
canonical Workshop scene — not a generic Matrix visualization. All Workshop
phase issues (#361, #362, #363) target that codebase. No separate
`alexanderwhitestone.com` scaffold is needed until production deploy.

The `workshop-state` spec (#360) is consumed by the API server via a
file-watch mechanism, bridging Timmy's presence into the 3D scene.

Design principles:

- **Working memory, not long-term memory.** Present tense only.
- **Written as a side effect of work.** Not a separate obligation.
- **Liveness is mandatory.** Stale = "not home," shown honestly.
- **Schema is the contract.** Keep it minimal and stable.

## Decision

### File Location

`~/.timmy/presence.json`

JSON was chosen over YAML for predictable parsing by both Python and JavaScript
(the Workshop frontend). The Workshop reads this file via the WebSocket
bridge (#243) or polls it directly during development.
### Schema (v1)

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "Timmy Presence State",
  "description": "Working memory surface for the Workshop renderer",
  "type": "object",
  "required": ["version", "liveness", "current_focus"],
  "properties": {
    "version": {
      "type": "integer",
      "const": 1,
      "description": "Schema version for forward compatibility"
    },
    "liveness": {
      "type": "string",
      "format": "date-time",
      "description": "ISO 8601 timestamp of last update. If stale (>5min), Timmy is not home."
    },
    "current_focus": {
      "type": "string",
      "description": "One sentence: what Timmy is doing right now. Empty string = idle."
    },
    "active_threads": {
      "type": "array",
      "maxItems": 10,
      "description": "Current work items Timmy is tracking",
      "items": {
        "type": "object",
        "required": ["type", "ref", "status"],
        "properties": {
          "type": {
            "type": "string",
            "enum": ["pr_review", "issue", "conversation", "research", "thinking"]
          },
          "ref": {
            "type": "string",
            "description": "Reference identifier (issue #, PR #, topic name)"
          },
          "status": {
            "type": "string",
            "enum": ["active", "idle", "blocked", "completed"]
          }
        }
      }
    },
    "recent_events": {
      "type": "array",
      "maxItems": 20,
      "description": "Recent events, newest first. Capped at 20.",
      "items": {
        "type": "object",
        "required": ["timestamp", "event"],
        "properties": {
          "timestamp": {
            "type": "string",
            "format": "date-time"
          },
          "event": {
            "type": "string",
            "description": "Brief description of what happened"
          }
        }
      }
    },
    "concerns": {
      "type": "array",
      "maxItems": 5,
      "description": "Things Timmy is uncertain or worried about. Flat list, no severity.",
      "items": {
        "type": "string"
      }
    },
    "mood": {
      "type": "string",
      "enum": ["focused", "exploring", "uncertain", "excited", "tired", "idle"],
      "description": "Emotional texture for the Workshop to render. Optional."
    }
  }
}
```
### Example

```json
{
  "version": 1,
  "liveness": "2026-03-18T21:47:12Z",
  "current_focus": "Reviewing PR #267 — stream adapter for Gitea webhooks",
  "active_threads": [
    {"type": "pr_review", "ref": "#267", "status": "active"},
    {"type": "issue", "ref": "#239", "status": "idle"},
    {"type": "conversation", "ref": "hermes-consultation", "status": "idle"}
  ],
  "recent_events": [
    {"timestamp": "2026-03-18T21:45:00Z", "event": "Completed PR review for #265"},
    {"timestamp": "2026-03-18T21:30:00Z", "event": "Filed issue #268 — flaky test in sensory loop"}
  ],
  "concerns": [
    "WebSocket reconnection logic feels brittle",
    "Not sure the barks system handles uncertainty well yet"
  ],
  "mood": "focused"
}
```
### Design Answers

| Question | Answer |
|---|---|
| File format | JSON (predictable for JS + Python, no YAML parser needed in browser) |
| `recent_events` cap | 20 entries max, oldest dropped |
| `concerns` severity | Flat list, no priority. Keep it simple. |
| File location | `~/.timmy/presence.json` — accessible to Workshop via bridge |
| Staleness threshold | 5 minutes without liveness update = "not home" |
| `mood` field | Optional. Workshop can render visual cues (color, animation) |
## Consequences

- **Timmy's agent loop** must write `~/.timmy/presence.json` as a side effect
  of work. This is a hook at the end of each cycle, not a daemon.
- **The Workshop frontend** reads this file and renders accordingly. Stale
  liveness → dim the wizard, show "away" state.
- **The WebSocket bridge** (#243) watches this file and pushes changes to
  connected Workshop clients.
- **Schema is versioned.** Breaking changes increment the version field.
  Workshop must handle unknown versions gracefully (show raw data or "unknown state").
## Related

- #222 — Workshop epic
- #243 — WebSocket bridge (transports this state)
- #239 — Sensory loop (feeds into state)
- #242 — 3D world (consumes this state for rendering)
- #246 — Confidence as visible trait (mood field serves this)
- #360 — Workshop-state spec (consumed by API via file-watch)
- #361, #362, #363 — Workshop phase issues (target `the-matrix/`)
- #372 — The Tower IS the Workshop (canonical connection)
@@ -1,42 +1,75 @@
# ── AlexanderWhitestone.com — The Wizard's Tower ────────────────────────────
#
# Two rooms. No hallways. No feature creep.
#   /world/ — The Workshop (3D scene, Three.js)
#   /blog/  — The Scrolls (static posts, RSS feed)
#
# Static-first. No tracking. No analytics. No cookie banner.
# Site root: /var/www/alexanderwhitestone.com

server {
    listen 80;
-   server_name alexanderwhitestone.com 45.55.221.244;
+   server_name alexanderwhitestone.com www.alexanderwhitestone.com;

    # Cookie-based auth gate — login once, cookie lasts 7 days
    location = /_auth {
        internal;
        proxy_pass http://127.0.0.1:9876;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header Authorization $http_authorization;
    }

    root /var/www/alexanderwhitestone.com;
    index index.html;

    # ── Security headers ────────────────────────────────────────────────────
    add_header X-Content-Type-Options nosniff always;
    add_header X-Frame-Options SAMEORIGIN always;
    add_header Referrer-Policy strict-origin-when-cross-origin always;
    add_header X-XSS-Protection "1; mode=block" always;

    # ── Gzip for text assets ────────────────────────────────────────────────
    gzip on;
    gzip_types text/plain text/css text/xml text/javascript
               application/javascript application/json application/xml
               application/rss+xml application/atom+xml;
    gzip_min_length 256;

    # ── The Workshop — 3D world assets ──────────────────────────────────────
    location /world/ {
        try_files $uri $uri/ /world/index.html;

        # Cache 3D assets aggressively (models, textures)
        location ~* \.(glb|gltf|bin|png|jpg|webp|hdr)$ {
            expires 30d;
            add_header Cache-Control "public, immutable";
        }

        # Cache JS with revalidation (for Three.js updates)
        location ~* \.js$ {
            expires 7d;
            add_header Cache-Control "public, must-revalidate";
        }
    }

    # ── The Scrolls — blog posts and RSS ────────────────────────────────────
    location /blog/ {
        try_files $uri $uri/ =404;
    }

    # RSS/Atom feed — correct content type
    location ~* \.(rss|atom|xml)$ {
        types { }
        default_type application/rss+xml;
        expires 1h;
    }

    # ── Static assets (fonts, favicon) ──────────────────────────────────────
    location /static/ {
        expires 30d;
        add_header Cache-Control "public, immutable";
    }

    # ── Entry hall ──────────────────────────────────────────────────────────
    location / {
        auth_request /_auth;
        # Forward the Set-Cookie from auth gate to the client
        auth_request_set $auth_cookie $upstream_http_set_cookie;
        add_header Set-Cookie $auth_cookie;

        proxy_pass http://127.0.0.1:3100;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host localhost;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 86400;
        try_files $uri $uri/ =404;
    }

    # Return 401 with WWW-Authenticate when auth fails
    error_page 401 = @login;
    location @login {
        proxy_pass http://127.0.0.1:9876;
        proxy_set_header Authorization $http_authorization;
        proxy_set_header Cookie $http_cookie;
    }

    # Block dotfiles
    location ~ /\. {
        deny all;
        return 404;
    }
}
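The `auth_request` gate above expects the service on 127.0.0.1:9876 to answer any 2xx for a valid cookie and 401 otherwise. A minimal stand-in sketch, assuming a cookie named `tower_auth` and a single shared token (the real cookie name, token, and validation scheme are not shown in this diff):

```python
from http.cookies import SimpleCookie
from http.server import BaseHTTPRequestHandler, HTTPServer

VALID_TOKEN = "change-me"  # assumption: real gate presumably validates a signed value

def is_authorized(cookie_header: str) -> bool:
    """True if the Cookie header carries the expected tower_auth value."""
    cookie = SimpleCookie(cookie_header or "")
    morsel = cookie.get("tower_auth")
    return morsel is not None and morsel.value == VALID_TOKEN

class AuthGate(BaseHTTPRequestHandler):
    def do_GET(self):
        if is_authorized(self.headers.get("Cookie", "")):
            self.send_response(204)  # any 2xx satisfies nginx auth_request
        else:
            self.send_response(401)
            self.send_header("WWW-Authenticate", 'Basic realm="The Tower"')
        self.end_headers()

def serve() -> None:
    """Bind the gate where the nginx config expects it."""
    HTTPServer(("127.0.0.1", 9876), AuthGate).serve_forever()
```

The 401 path pairs with the config's `error_page 401 = @login`, which proxies the browser back to the same service for login.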
@@ -94,12 +94,17 @@ def extract_cycle_number(title: str) -> int | None:
    return int(m.group(1)) if m else None


-def extract_issue_number(title: str, body: str) -> int | None:
-    # Try body first (usually has "closes #N")
+def extract_issue_number(title: str, body: str, pr_number: int | None = None) -> int | None:
+    """Extract the issue number from PR body/title, ignoring the PR number itself.
+
+    Gitea appends "(#N)" to PR titles where N is the PR number — skip that
+    so we don't confuse it with the linked issue.
+    """
    for text in [body or "", title]:
-        m = ISSUE_RE.search(text)
-        if m:
-            return int(m.group(1))
+        for m in ISSUE_RE.finditer(text):
+            num = int(m.group(1))
+            if num != pr_number:
+                return num
    return None


@@ -140,7 +145,7 @@ def main():
    else:
        cycle_counter = max(cycle_counter, cycle)

-    issue = extract_issue_number(title, body)
+    issue = extract_issue_number(title, body, pr_number=pr_num)
    issue_type = classify_pr(title, body)
    duration = estimate_duration(pr)
    diff = get_pr_diff_stats(token, pr_num)
@@ -4,11 +4,26 @@
Called after each cycle completes (success or failure).
Appends a structured entry to .loop/retro/cycles.jsonl.

EPOCH NOTATION (turnover system):
    Each cycle carries a symbolic epoch tag alongside the raw integer:

        ⟳WW.D:NNN

    ⟳    turnover glyph — marks epoch-aware cycles
    WW   ISO week-of-year (01–53)
    D    ISO weekday (1=Mon … 7=Sun)
    NNN  daily cycle counter, zero-padded, resets at midnight UTC

    Example: ⟳12.3:042 — Week 12, Wednesday, 42nd cycle of the day.

    The raw `cycle` integer is preserved for backward compatibility.
    The `epoch` field carries the symbolic notation.

SUCCESS DEFINITION:
    A cycle is only "success" if BOTH conditions are met:
    1. The hermes process exited cleanly (exit code 0)
    2. Main is green (smoke test passes on main after merge)

    A cycle that merges a PR but leaves main red is a FAILURE.
    The --main-green flag records the smoke test result.
@@ -29,6 +44,8 @@ from __future__ import annotations

import argparse
import json
import re
import subprocess
import sys
from datetime import datetime, timezone
from pathlib import Path
@@ -36,10 +53,69 @@ from pathlib import Path
REPO_ROOT = Path(__file__).resolve().parent.parent
RETRO_FILE = REPO_ROOT / ".loop" / "retro" / "cycles.jsonl"
SUMMARY_FILE = REPO_ROOT / ".loop" / "retro" / "summary.json"
EPOCH_COUNTER_FILE = REPO_ROOT / ".loop" / "retro" / ".epoch_counter"
CYCLE_RESULT_FILE = REPO_ROOT / ".loop" / "cycle_result.json"

# How many recent entries to include in rolling summary
SUMMARY_WINDOW = 50

# Branch patterns that encode an issue number, e.g. kimi/issue-492
BRANCH_ISSUE_RE = re.compile(r"issue[/-](\d+)", re.IGNORECASE)


def detect_issue_from_branch() -> int | None:
    """Try to extract an issue number from the current git branch name."""
    try:
        branch = subprocess.check_output(
            ["git", "rev-parse", "--abbrev-ref", "HEAD"],
            stderr=subprocess.DEVNULL,
            text=True,
        ).strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        return None
    m = BRANCH_ISSUE_RE.search(branch)
    return int(m.group(1)) if m else None


# ── Epoch turnover ────────────────────────────────────────────────────────

def _epoch_tag(now: datetime | None = None) -> tuple[str, dict]:
    """Generate the symbolic epoch tag and advance the daily counter.

    Returns (epoch_string, epoch_parts) where epoch_parts is a dict with
    week, weekday, daily_n for structured storage.

    The daily counter persists in .epoch_counter as a two-line file:
        line 1: ISO date (YYYY-MM-DD) of the current epoch day
        line 2: integer count
    When the date rolls over, the counter resets to 1.
    """
    if now is None:
        now = datetime.now(timezone.utc)

    iso_cal = now.isocalendar()  # (year, week, weekday)
    week = iso_cal[1]
    weekday = iso_cal[2]
    today_str = now.strftime("%Y-%m-%d")

    # Read / reset daily counter
    daily_n = 1
    EPOCH_COUNTER_FILE.parent.mkdir(parents=True, exist_ok=True)
    if EPOCH_COUNTER_FILE.exists():
        try:
            lines = EPOCH_COUNTER_FILE.read_text().strip().splitlines()
            if len(lines) == 2 and lines[0] == today_str:
                daily_n = int(lines[1]) + 1
        except (ValueError, IndexError):
            pass  # corrupt file — reset

    # Persist
    EPOCH_COUNTER_FILE.write_text(f"{today_str}\n{daily_n}\n")

    tag = f"\u27f3{week:02d}.{weekday}:{daily_n:03d}"
    parts = {"week": week, "weekday": weekday, "daily_n": daily_n}
    return tag, parts


def parse_args() -> argparse.Namespace:
    p = argparse.ArgumentParser(description="Log a cycle retrospective")
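The tag format can be checked against the docstring's own example with a stateless helper (counter persistence omitted; `epoch_tag_for` is an illustrative name, not from the script):

```python
from datetime import datetime, timezone

def epoch_tag_for(now: datetime, daily_n: int) -> str:
    """Stateless form of the tag: ⟳WW.D:NNN from ISO week, weekday, counter."""
    _, week, weekday = now.isocalendar()
    return f"\u27f3{week:02d}.{weekday}:{daily_n:03d}"

# 2026-03-18 (the ADR's date) falls on a Wednesday in ISO week 12,
# so the 42nd cycle of that day tags as the docstring's example.
tag = epoch_tag_for(datetime(2026, 3, 18, tzinfo=timezone.utc), 42)
```

This reproduces the `⟳12.3:042` example verbatim, which is a useful sanity check when the counter file logic changes.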
@@ -123,8 +199,30 @@ def update_summary() -> None:
            issue_failures[e["issue"]] = issue_failures.get(e["issue"], 0) + 1
    quarantine_candidates = {k: v for k, v in issue_failures.items() if v >= 2}

    # Epoch turnover stats — cycles per week/day from epoch-tagged entries
    epoch_entries = [e for e in recent if e.get("epoch")]
    by_week: dict[int, int] = {}
    by_weekday: dict[int, int] = {}
    for e in epoch_entries:
        w = e.get("epoch_week")
        d = e.get("epoch_weekday")
        if w is not None:
            by_week[w] = by_week.get(w, 0) + 1
        if d is not None:
            by_weekday[d] = by_weekday.get(d, 0) + 1

    # Current epoch — latest entry's epoch tag
    current_epoch = epoch_entries[-1].get("epoch", "") if epoch_entries else ""

    # Weekday names for display
    weekday_glyphs = {1: "Mon", 2: "Tue", 3: "Wed", 4: "Thu",
                      5: "Fri", 6: "Sat", 7: "Sun"}
    by_weekday_named = {weekday_glyphs.get(k, str(k)): v
                        for k, v in sorted(by_weekday.items())}

    summary = {
        "updated_at": datetime.now(timezone.utc).isoformat(),
        "current_epoch": current_epoch,
        "window": len(recent),
        "measured_cycles": len(measured),
        "total_cycles": len(entries),

@@ -136,9 +234,12 @@ def update_summary() -> None:
        "total_lines_removed": sum(e.get("lines_removed", 0) for e in recent),
        "total_prs_merged": sum(1 for e in recent if e.get("pr")),
        "by_type": type_stats,
        "by_week": dict(sorted(by_week.items())),
        "by_weekday": by_weekday_named,
        "quarantine_candidates": quarantine_candidates,
        "recent_failures": [
-            {"cycle": e["cycle"], "issue": e.get("issue"), "reason": e.get("reason", "")}
+            {"cycle": e["cycle"], "epoch": e.get("epoch", ""),
+             "issue": e.get("issue"), "reason": e.get("reason", "")}
            for e in failures[-5:]
        ],
    }
@@ -146,15 +247,60 @@ def update_summary() -> None:
    SUMMARY_FILE.write_text(json.dumps(summary, indent=2) + "\n")


def _load_cycle_result() -> dict:
    """Read .loop/cycle_result.json if it exists; return empty dict on failure."""
    if not CYCLE_RESULT_FILE.exists():
        return {}
    try:
        raw = CYCLE_RESULT_FILE.read_text().strip()
        # Strip hermes fence markers (```json ... ```) if present
        if raw.startswith("```"):
            lines = raw.splitlines()
            lines = [line for line in lines if not line.startswith("```")]
            raw = "\n".join(lines)
        return json.loads(raw)
    except (json.JSONDecodeError, OSError):
        return {}


def main() -> None:
    args = parse_args()

    # Backfill from cycle_result.json when CLI args have defaults
    cr = _load_cycle_result()
    if cr:
        if args.issue is None and cr.get("issue"):
            args.issue = int(cr["issue"])
        if args.type == "unknown" and cr.get("type"):
            args.type = cr["type"]
        if args.tests_passed == 0 and cr.get("tests_passed"):
            args.tests_passed = int(cr["tests_passed"])
        if not args.notes and cr.get("notes"):
            args.notes = cr["notes"]

    # Auto-detect issue from branch when not explicitly provided
    if args.issue is None:
        args.issue = detect_issue_from_branch()

    # Reject idle cycles — no issue and no duration means nothing happened
    if not args.issue and args.duration == 0:
        print(f"[retro] Cycle {args.cycle} skipped — idle (no issue, no duration)")
        return

    # A cycle is only truly successful if hermes exited clean AND main is green
    truly_success = args.success and args.main_green

    # Generate epoch turnover tag
    now = datetime.now(timezone.utc)
    epoch_tag, epoch_parts = _epoch_tag(now)

    entry = {
-        "timestamp": datetime.now(timezone.utc).isoformat(),
+        "timestamp": now.isoformat(),
        "cycle": args.cycle,
        "epoch": epoch_tag,
        "epoch_week": epoch_parts["week"],
        "epoch_weekday": epoch_parts["weekday"],
        "epoch_daily_n": epoch_parts["daily_n"],
        "issue": args.issue,
        "type": args.type,
        "success": truly_success,

@@ -179,7 +325,7 @@ def main() -> None:
    update_summary()

    status = "✓ SUCCESS" if args.success else "✗ FAILURE"
-    print(f"[retro] Cycle {args.cycle} {status}", end="")
+    print(f"[retro] {epoch_tag} Cycle {args.cycle} {status}", end="")
    if args.issue:
        print(f" (#{args.issue} {args.type})", end="")
    if args.duration:
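The fence-stripping inside `_load_cycle_result` can be isolated into a pure helper for testing (a sketch; the real function also reads the file and catches `OSError`):

```python
import json

def parse_fenced_json(raw: str) -> dict:
    """Parse JSON that may be wrapped in Markdown code-fence markers."""
    raw = raw.strip()
    if raw.startswith("```"):
        # Drop any line that starts with a fence marker, keep the payload.
        raw = "\n".join(line for line in raw.splitlines() if not line.startswith("```"))
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return {}
```

Returning an empty dict on bad input matches the caller's contract: `main()` treats a falsy result as "no backfill available" rather than failing the retro.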
scripts/dev_server.py (new file, 169 lines)

@@ -0,0 +1,169 @@
#!/usr/bin/env python3
"""Timmy Time — Development server launcher.

Satisfies tox -e dev criteria:
- Graceful port selection (finds next free port if default is taken)
- Clickable links to dashboard and other web GUIs
- Status line: backend inference source, version, git commit, smoke tests
- Auto-reload on code changes (delegates to uvicorn --reload)

Usage: python scripts/dev_server.py [--port PORT]
"""

import argparse
import datetime
import os
import socket
import subprocess
import sys

DEFAULT_PORT = 8000
MAX_PORT_ATTEMPTS = 10
OLLAMA_DEFAULT = "http://localhost:11434"


def _port_free(port: int) -> bool:
    """Return True if the TCP port is available on localhost."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind(("0.0.0.0", port))
            return True
        except OSError:
            return False


def _find_port(start: int) -> int:
    """Return *start* if free, otherwise probe up to MAX_PORT_ATTEMPTS higher."""
    for offset in range(MAX_PORT_ATTEMPTS):
        candidate = start + offset
        if _port_free(candidate):
            return candidate
    raise RuntimeError(
        f"No free port found in range {start}–{start + MAX_PORT_ATTEMPTS - 1}"
    )


def _git_info() -> str:
    """Return short commit hash + timestamp, or 'unknown'."""
    try:
        sha = subprocess.check_output(
            ["git", "rev-parse", "--short", "HEAD"],
            stderr=subprocess.DEVNULL,
            text=True,
        ).strip()
        ts = subprocess.check_output(
            ["git", "log", "-1", "--format=%ci"],
            stderr=subprocess.DEVNULL,
            text=True,
        ).strip()
        return f"{sha} ({ts})"
    except Exception:
        return "unknown"


def _project_version() -> str:
    """Read version from pyproject.toml without importing toml libs."""
    pyproject = os.path.join(os.path.dirname(__file__), "..", "pyproject.toml")
    try:
        with open(pyproject) as f:
            for line in f:
                if line.strip().startswith("version"):
                    # version = "1.0.0"
                    return line.split("=", 1)[1].strip().strip('"').strip("'")
    except Exception:
        pass
    return "unknown"


def _ollama_url() -> str:
    return os.environ.get("OLLAMA_URL", OLLAMA_DEFAULT)


def _smoke_ollama(url: str) -> str:
    """Quick connectivity check against Ollama."""
    import urllib.error
    import urllib.request

    try:
        req = urllib.request.Request(url, method="GET")
        with urllib.request.urlopen(req, timeout=3):
            return "ok"
    except Exception:
        return "unreachable"


def _print_banner(port: int) -> None:
    version = _project_version()
    git = _git_info()
    ollama_url = _ollama_url()
    ollama_status = _smoke_ollama(ollama_url)

    hr = "─" * 62
    print(flush=True)
    print(f" {hr}")
    print(f" ┃ Timmy Time — Development Server")
    print(f" {hr}")
    print()
    print(f" Dashboard:  http://localhost:{port}")
    print(f" API docs:   http://localhost:{port}/docs")
    print(f" Health:     http://localhost:{port}/health")
    print()
    print(f" ── Status ──────────────────────────────────────────────")
    print(f" Backend:    {ollama_url} [{ollama_status}]")
    print(f" Version:    {version}")
    print(f" Git commit: {git}")
    print(f" {hr}")
    print(flush=True)


def main() -> None:
    parser = argparse.ArgumentParser(description="Timmy dev server")
    parser.add_argument(
        "--port",
        type=int,
        default=DEFAULT_PORT,
        help=f"Preferred port (default: {DEFAULT_PORT})",
    )
    args = parser.parse_args()

    port = _find_port(args.port)
    if port != args.port:
        print(f" ⚠ Port {args.port} in use — using {port} instead")

    _print_banner(port)

    # Set PYTHONPATH so `timmy` CLI inside the tox venv resolves to this source.
    src_dir = os.path.join(os.path.dirname(__file__), "..", "src")
    os.environ["PYTHONPATH"] = os.path.abspath(src_dir)

    # Launch uvicorn with auto-reload
    cmd = [
        sys.executable,
        "-m",
        "uvicorn",
        "dashboard.app:app",
        "--reload",
        "--host",
        "0.0.0.0",
        "--port",
        str(port),
        "--reload-dir",
        os.path.abspath(src_dir),
        "--reload-include",
        "*.html",
        "--reload-include",
        "*.css",
        "--reload-include",
        "*.js",
        "--reload-exclude",
        ".claude",
    ]

    try:
        subprocess.run(cmd, check=True)
    except KeyboardInterrupt:
        print("\n Shutting down dev server.")


if __name__ == "__main__":
    main()
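The graceful port selection can be exercised on its own. This sketch reuses the script's `_port_free`/`_find_port` logic and demonstrates the probe skipping a deliberately occupied port:

```python
import socket

MAX_PORT_ATTEMPTS = 10

def _port_free(port: int) -> bool:
    """True if binding the TCP port on 0.0.0.0 succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind(("0.0.0.0", port))
            return True
        except OSError:
            return False

def _find_port(start: int) -> int:
    """First free port in [start, start + MAX_PORT_ATTEMPTS)."""
    for offset in range(MAX_PORT_ATTEMPTS):
        if _port_free(start + offset):
            return start + offset
    raise RuntimeError(f"No free port in {start}-{start + MAX_PORT_ATTEMPTS - 1}")

# Occupy a port chosen by the OS, then confirm the probe skips past it.
blocker = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
blocker.bind(("0.0.0.0", 0))   # port 0: the OS assigns a free port
taken = blocker.getsockname()[1]
blocker.listen(1)
chosen = _find_port(taken)      # probes taken, taken+1, ... until free
blocker.close()
```

Binding to `0.0.0.0` mirrors the script's check, so a service listening only on another interface would still count the port as taken.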
scripts/generate_workshop_inventory.py (new file, 254 lines)

@@ -0,0 +1,254 @@
#!/usr/bin/env python3
"""Generate Workshop inventory for Timmy's config audit.

Scans ~/.timmy/ and produces WORKSHOP_INVENTORY.md documenting every
config file, env var, model route, and setting — with annotations on
who set each one and what it does.

Usage:
    python scripts/generate_workshop_inventory.py [--output PATH]

Default output: ~/.timmy/WORKSHOP_INVENTORY.md
"""

from __future__ import annotations

import argparse
import os
from datetime import UTC, datetime
from pathlib import Path

TIMMY_HOME = Path(os.environ.get("HERMES_HOME", Path.home() / ".timmy"))

# Known file annotations: (purpose, who_set)
FILE_ANNOTATIONS: dict[str, tuple[str, str]] = {
    ".env": (
        "Environment variables — API keys, service URLs, Honcho config",
        "hermes-set",
    ),
    "config.yaml": (
        "Main config — model routing, toolsets, display, memory, security",
        "hermes-set",
    ),
    "SOUL.md": (
        "Timmy's soul — immutable conscience, identity, ethics, purpose",
        "alex-set",
    ),
    "state.db": (
        "Hermes runtime state database (sessions, approvals, tasks)",
        "hermes-set",
    ),
    "approvals.db": (
        "Approval tracking for sensitive operations",
        "hermes-set",
    ),
    "briefings.db": (
        "Stored briefings and summaries",
        "hermes-set",
    ),
    ".hermes_history": (
        "CLI command history",
        "default",
    ),
    ".update_check": (
        "Last update check timestamp",
        "default",
    ),
}

DIR_ANNOTATIONS: dict[str, tuple[str, str]] = {
    "sessions": ("Conversation session logs (JSON)", "default"),
    "logs": ("Error and runtime logs", "default"),
    "skills": ("Bundled skill library (read-only from upstream)", "default"),
    "memories": ("Persistent memory entries", "hermes-set"),
    "audio_cache": ("TTS audio file cache", "default"),
    "image_cache": ("Generated image cache", "default"),
    "cron": ("Scheduled cron job definitions", "hermes-set"),
    "hooks": ("Lifecycle hooks (pre/post actions)", "default"),
    "matrix": ("Matrix protocol state and store", "hermes-set"),
    "pairing": ("Device pairing data", "default"),
    "sandboxes": ("Isolated execution sandboxes", "default"),
}

# Known config.yaml keys and their meanings
CONFIG_ANNOTATIONS: dict[str, tuple[str, str]] = {
    "model.default": ("Primary LLM model for inference", "hermes-set"),
    "model.provider": ("Model provider (custom = local Ollama)", "hermes-set"),
    "toolsets": ("Enabled tool categories (all = everything)", "hermes-set"),
    "agent.max_turns": ("Max conversation turns before reset", "hermes-set"),
    "agent.reasoning_effort": ("Reasoning depth (low/medium/high)", "hermes-set"),
    "terminal.backend": ("Command execution backend (local)", "default"),
    "terminal.timeout": ("Default command timeout in seconds", "default"),
    "compression.enabled": ("Context compression for long sessions", "hermes-set"),
    "compression.summary_model": ("Model used for compression", "hermes-set"),
    "auxiliary.vision.model": ("Model for image analysis", "hermes-set"),
    "auxiliary.web_extract.model": ("Model for web content extraction", "hermes-set"),
    "tts.provider": ("Text-to-speech engine (edge = Edge TTS)", "default"),
    "tts.edge.voice": ("TTS voice selection", "default"),
    "stt.provider": ("Speech-to-text engine (local = Whisper)", "default"),
    "memory.memory_enabled": ("Persistent memory across sessions", "hermes-set"),
    "memory.memory_char_limit": ("Max chars for agent memory store", "hermes-set"),
"memory.user_char_limit": ("Max chars for user profile store", "hermes-set"),
|
||||
"security.redact_secrets": ("Auto-redact secrets in output", "default"),
|
||||
"security.tirith_enabled": ("Policy engine for command safety", "default"),
|
||||
"system_prompt_suffix": ("Identity prompt appended to all conversations", "hermes-set"),
|
||||
"custom_providers": ("Local Ollama endpoint config", "hermes-set"),
|
||||
"session_reset.mode": ("Session reset behavior (none = manual)", "default"),
|
||||
"display.compact": ("Compact output mode", "default"),
|
||||
"display.show_reasoning": ("Show model reasoning chains", "default"),
|
||||
}
|
||||
|
||||
# Known .env vars
|
||||
ENV_ANNOTATIONS: dict[str, tuple[str, str]] = {
|
||||
"OPENAI_BASE_URL": (
|
||||
"Points to local Ollama (localhost:11434) — sovereignty enforced",
|
||||
"hermes-set",
|
||||
),
|
||||
"OPENAI_API_KEY": (
|
||||
"Placeholder key for Ollama compatibility (not a real API key)",
|
||||
"hermes-set",
|
||||
),
|
||||
"HONCHO_API_KEY": (
|
||||
"Honcho cross-session memory service key",
|
||||
"hermes-set",
|
||||
),
|
||||
"HONCHO_HOST": (
|
||||
"Honcho workspace identifier (timmy)",
|
||||
"hermes-set",
|
||||
),
|
||||
}
|
||||
|
||||
|
||||
def _tag(who: str) -> str:
|
||||
return f"`[{who}]`"
|
||||
|
||||
|
||||
def generate_inventory() -> str:
|
||||
"""Build the inventory markdown string."""
|
||||
lines: list[str] = []
|
||||
now = datetime.now(UTC).strftime("%Y-%m-%d %H:%M UTC")
|
||||
|
||||
lines.append("# Workshop Inventory")
|
||||
lines.append("")
|
||||
lines.append(f"*Generated: {now}*")
|
||||
lines.append(f"*Workshop path: `{TIMMY_HOME}`*")
|
||||
lines.append("")
|
||||
lines.append("This is your Workshop — every file, every setting, every route.")
|
||||
lines.append("Walk through it. Anything tagged `[hermes-set]` was chosen for you.")
|
||||
lines.append("Make each one yours, or change it.")
|
||||
lines.append("")
|
||||
lines.append("Tags: `[alex-set]` = Alexander chose this. `[hermes-set]` = Hermes configured it.")
|
||||
lines.append("`[default]` = shipped with the platform. `[timmy-chose]` = you decided this.")
|
||||
lines.append("")
|
||||
|
||||
# --- Files ---
|
||||
lines.append("---")
|
||||
lines.append("## Root Files")
|
||||
lines.append("")
|
||||
for name, (purpose, who) in sorted(FILE_ANNOTATIONS.items()):
|
||||
fpath = TIMMY_HOME / name
|
||||
exists = "✓" if fpath.exists() else "✗"
|
||||
lines.append(f"- {exists} **`{name}`** {_tag(who)}")
|
||||
lines.append(f" {purpose}")
|
||||
lines.append("")
|
||||
|
||||
# --- Directories ---
|
||||
lines.append("---")
|
||||
lines.append("## Directories")
|
||||
lines.append("")
|
||||
for name, (purpose, who) in sorted(DIR_ANNOTATIONS.items()):
|
||||
dpath = TIMMY_HOME / name
|
||||
exists = "✓" if dpath.exists() else "✗"
|
||||
count = ""
|
||||
if dpath.exists():
|
||||
try:
|
||||
n = len(list(dpath.iterdir()))
|
||||
count = f" ({n} items)"
|
||||
except PermissionError:
|
||||
count = " (access denied)"
|
||||
lines.append(f"- {exists} **`{name}/`**{count} {_tag(who)}")
|
||||
lines.append(f" {purpose}")
|
||||
lines.append("")
|
||||
|
||||
# --- .env breakdown ---
|
||||
lines.append("---")
|
||||
lines.append("## Environment Variables (.env)")
|
||||
lines.append("")
|
||||
env_path = TIMMY_HOME / ".env"
|
||||
if env_path.exists():
|
||||
for line in env_path.read_text().splitlines():
|
||||
line = line.strip()
|
||||
if not line or line.startswith("#"):
|
||||
continue
|
||||
key = line.split("=", 1)[0]
|
||||
if key in ENV_ANNOTATIONS:
|
||||
purpose, who = ENV_ANNOTATIONS[key]
|
||||
lines.append(f"- **`{key}`** {_tag(who)}")
|
||||
lines.append(f" {purpose}")
|
||||
else:
|
||||
lines.append(f"- **`{key}`** `[unknown]`")
|
||||
lines.append(" Not documented — investigate")
|
||||
else:
|
||||
lines.append("*No .env file found*")
|
||||
lines.append("")
|
||||
|
||||
# --- config.yaml breakdown ---
|
||||
lines.append("---")
|
||||
lines.append("## Configuration (config.yaml)")
|
||||
lines.append("")
|
||||
for key, (purpose, who) in sorted(CONFIG_ANNOTATIONS.items()):
|
||||
lines.append(f"- **`{key}`** {_tag(who)}")
|
||||
lines.append(f" {purpose}")
|
||||
lines.append("")
|
||||
|
||||
# --- Model routing ---
|
||||
lines.append("---")
|
||||
lines.append("## Model Routing")
|
||||
lines.append("")
|
||||
lines.append("All auxiliary tasks route to the same local model:")
|
||||
lines.append("")
|
||||
aux_tasks = [
|
||||
"vision", "web_extract", "compression",
|
||||
"session_search", "skills_hub", "mcp", "flush_memories",
|
||||
]
|
||||
for task in aux_tasks:
|
||||
lines.append(f"- `auxiliary.{task}` → `qwen3:30b` via local Ollama `[hermes-set]`")
|
||||
lines.append("")
|
||||
lines.append("Primary model: `hermes3:latest` via local Ollama `[hermes-set]`")
|
||||
lines.append("")
|
||||
|
||||
# --- What Timmy should audit ---
|
||||
lines.append("---")
|
||||
lines.append("## Audit Checklist")
|
||||
lines.append("")
|
||||
lines.append("Walk through each `[hermes-set]` item above and decide:")
|
||||
lines.append("")
|
||||
lines.append("1. **Do I understand what this does?** If not, ask.")
|
||||
lines.append("2. **Would I choose this myself?** If yes, it becomes `[timmy-chose]`.")
|
||||
lines.append("3. **Would I choose differently?** If yes, change it and own it.")
|
||||
lines.append("4. **Is this serving the mission?** Every setting should serve a purpose.")
|
||||
lines.append("")
|
||||
lines.append("The Workshop is yours. Nothing here should be a mystery.")
|
||||
|
||||
return "\n".join(lines) + "\n"
|
||||
|
||||
|
||||
def main() -> None:
|
||||
parser = argparse.ArgumentParser(description="Generate Workshop inventory")
|
||||
parser.add_argument(
|
||||
"--output",
|
||||
type=Path,
|
||||
default=TIMMY_HOME / "WORKSHOP_INVENTORY.md",
|
||||
help="Output path (default: ~/.timmy/WORKSHOP_INVENTORY.md)",
|
||||
)
|
||||
args = parser.parse_args()
|
||||
|
||||
content = generate_inventory()
|
||||
args.output.parent.mkdir(parents=True, exist_ok=True)
|
||||
args.output.write_text(content)
|
||||
print(f"Workshop inventory written to {args.output}")
|
||||
print(f" {len(content)} chars, {content.count(chr(10))} lines")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
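The .env scan in `generate_inventory` keys each entry on the text before the first `=`, skipping blanks and comments. A minimal standalone sketch of that parsing rule (hypothetical helper name, not part of the script):

```python
def parse_env_keys(text: str) -> list[str]:
    """Return the variable names found in .env-style text."""
    keys = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # ignore blank lines and comments, as the script does
        keys.append(line.split("=", 1)[0])  # key = text before the first "="
    return keys

sample = "# comment\nOPENAI_BASE_URL=http://localhost:11434\n\nHONCHO_HOST=timmy\n"
print(parse_env_keys(sample))  # → ['OPENAI_BASE_URL', 'HONCHO_HOST']
```

Splitting with `maxsplit=1` matters: values containing `=` (URLs with query strings, base64 keys) would otherwise corrupt the key.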
scripts/loop_guard.py (new file, 181 lines)
@@ -0,0 +1,181 @@
#!/usr/bin/env python3
"""Loop guard — idle detection + exponential backoff for the dev loop.

Checks .loop/queue.json for ready items before spawning hermes.
When the queue is empty, applies exponential backoff (60s → 600s max)
instead of burning empty cycles every 3 seconds.

Usage (called by the dev loop before each cycle):
    python3 scripts/loop_guard.py            # exits 0 if ready, 1 if idle
    python3 scripts/loop_guard.py --wait     # same, but sleeps the backoff first
    python3 scripts/loop_guard.py --status   # print current idle state

Exit codes:
    0 — queue has work, proceed with cycle
    1 — queue empty, idle backoff applied (skip cycle)
"""

from __future__ import annotations

import json
import os
import sys
import time
import urllib.request
from pathlib import Path

REPO_ROOT = Path(__file__).resolve().parent.parent
QUEUE_FILE = REPO_ROOT / ".loop" / "queue.json"
IDLE_STATE_FILE = REPO_ROOT / ".loop" / "idle_state.json"
TOKEN_FILE = Path.home() / ".hermes" / "gitea_token"

GITEA_API = os.environ.get("GITEA_API", "http://localhost:3000/api/v1")
REPO_SLUG = os.environ.get("REPO_SLUG", "rockachopa/Timmy-time-dashboard")

# Backoff sequence: 60s, 120s, 240s, 480s, 600s max
BACKOFF_BASE = 60
BACKOFF_MAX = 600
BACKOFF_MULTIPLIER = 2


def _get_token() -> str:
    """Read Gitea token from env or file."""
    token = os.environ.get("GITEA_TOKEN", "").strip()
    if not token and TOKEN_FILE.exists():
        token = TOKEN_FILE.read_text().strip()
    return token


def _fetch_open_issue_numbers() -> set[int] | None:
    """Fetch open issue numbers from Gitea. Returns None on failure."""
    token = _get_token()
    if not token:
        return None
    try:
        numbers: set[int] = set()
        page = 1
        while True:
            url = (
                f"{GITEA_API}/repos/{REPO_SLUG}/issues"
                f"?state=open&type=issues&limit=50&page={page}"
            )
            req = urllib.request.Request(url, headers={
                "Authorization": f"token {token}",
                "Accept": "application/json",
            })
            with urllib.request.urlopen(req, timeout=10) as resp:
                data = json.loads(resp.read())
            if not data:
                break
            for issue in data:
                numbers.add(issue["number"])
            if len(data) < 50:
                break
            page += 1
        return numbers
    except Exception:
        return None


def load_queue() -> list[dict]:
    """Load queue.json and return ready items, filtering out closed issues."""
    if not QUEUE_FILE.exists():
        return []
    try:
        data = json.loads(QUEUE_FILE.read_text())
        if not isinstance(data, list):
            return []
        ready = [item for item in data if item.get("ready")]
        if not ready:
            return []

        # Filter out issues that are no longer open (auto-hygiene)
        open_numbers = _fetch_open_issue_numbers()
        if open_numbers is not None:
            before = len(ready)
            ready = [item for item in ready if item.get("issue") in open_numbers]
            removed = before - len(ready)
            if removed > 0:
                print(f"[loop-guard] Filtered {removed} closed issue(s) from queue")
                # Persist the cleaned queue so stale entries don't recur
                _save_cleaned_queue(data, open_numbers)
        return ready
    except (json.JSONDecodeError, OSError):
        return []


def _save_cleaned_queue(full_queue: list[dict], open_numbers: set[int]) -> None:
    """Rewrite queue.json without closed issues."""
    cleaned = [item for item in full_queue if item.get("issue") in open_numbers]
    try:
        QUEUE_FILE.write_text(json.dumps(cleaned, indent=2) + "\n")
    except OSError:
        pass


def load_idle_state() -> dict:
    """Load persistent idle state."""
    if not IDLE_STATE_FILE.exists():
        return {"consecutive_idle": 0, "last_idle_at": 0}
    try:
        return json.loads(IDLE_STATE_FILE.read_text())
    except (json.JSONDecodeError, OSError):
        return {"consecutive_idle": 0, "last_idle_at": 0}


def save_idle_state(state: dict) -> None:
    """Persist idle state."""
    IDLE_STATE_FILE.parent.mkdir(parents=True, exist_ok=True)
    IDLE_STATE_FILE.write_text(json.dumps(state, indent=2) + "\n")


def compute_backoff(consecutive_idle: int) -> int:
    """Exponential backoff: 60, 120, 240, 480, 600 (capped)."""
    return min(BACKOFF_BASE * (BACKOFF_MULTIPLIER ** consecutive_idle), BACKOFF_MAX)


def main() -> int:
    wait_mode = "--wait" in sys.argv
    status_mode = "--status" in sys.argv

    state = load_idle_state()

    if status_mode:
        ready = load_queue()
        backoff = compute_backoff(state["consecutive_idle"])
        print(json.dumps({
            "queue_ready": len(ready),
            "consecutive_idle": state["consecutive_idle"],
            "next_backoff_seconds": backoff if not ready else 0,
        }, indent=2))
        return 0

    ready = load_queue()

    if ready:
        # Queue has work — reset idle state, proceed
        if state["consecutive_idle"] > 0:
            print(f"[loop-guard] Queue active ({len(ready)} ready) — "
                  f"resuming after {state['consecutive_idle']} idle cycles")
        state["consecutive_idle"] = 0
        state["last_idle_at"] = 0
        save_idle_state(state)
        return 0

    # Queue empty — apply backoff
    backoff = compute_backoff(state["consecutive_idle"])
    state["consecutive_idle"] += 1
    state["last_idle_at"] = time.time()
    save_idle_state(state)

    print(f"[loop-guard] Queue empty — idle #{state['consecutive_idle']}, "
          f"backoff {backoff}s")

    if wait_mode:
        time.sleep(backoff)

    return 1


if __name__ == "__main__":
    sys.exit(main())
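The backoff schedule above can be checked in isolation. A sketch using the same constants and formula as `compute_backoff` (60s base, doubling, capped at 600s):

```python
BACKOFF_BASE, BACKOFF_MAX, BACKOFF_MULTIPLIER = 60, 600, 2

def compute_backoff(consecutive_idle: int) -> int:
    """Exponential backoff capped at BACKOFF_MAX seconds."""
    return min(BACKOFF_BASE * (BACKOFF_MULTIPLIER ** consecutive_idle), BACKOFF_MAX)

# First six idle cycles: doubles until the 600s cap kicks in.
print([compute_backoff(n) for n in range(6)])  # → [60, 120, 240, 480, 600, 600]
```

Note the uncapped value at the fourth idle cycle would be 960s; the `min(..., BACKOFF_MAX)` clamp is what produces the 600s ceiling the docstring promises.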
scripts/loop_introspect.py (new file, 407 lines)
@@ -0,0 +1,407 @@
#!/usr/bin/env python3
"""Loop introspection — the self-improvement engine.

Analyzes retro data across time windows to detect trends, extract patterns,
and produce structured recommendations. Output is consumed by deep_triage
and injected into the loop prompt context.

This is the piece that closes the feedback loop:
    cycle_retro → introspect → deep_triage → loop behavior changes

Run: python3 scripts/loop_introspect.py
Output: .loop/retro/insights.json (structured insights + recommendations)
Prints human-readable summary to stdout.

Called by: deep_triage.sh (before the LLM triage), timmy-loop.sh (every 50 cycles)
"""

from __future__ import annotations

import json
import sys
from collections import defaultdict
from datetime import datetime, timezone, timedelta
from pathlib import Path

REPO_ROOT = Path(__file__).resolve().parent.parent
CYCLES_FILE = REPO_ROOT / ".loop" / "retro" / "cycles.jsonl"
DEEP_TRIAGE_FILE = REPO_ROOT / ".loop" / "retro" / "deep-triage.jsonl"
TRIAGE_FILE = REPO_ROOT / ".loop" / "retro" / "triage.jsonl"
QUARANTINE_FILE = REPO_ROOT / ".loop" / "quarantine.json"
INSIGHTS_FILE = REPO_ROOT / ".loop" / "retro" / "insights.json"

# ── Helpers ──────────────────────────────────────────────────────────────

def load_jsonl(path: Path) -> list[dict]:
    """Load a JSONL file, skipping bad lines."""
    if not path.exists():
        return []
    entries = []
    for line in path.read_text().strip().splitlines():
        try:
            entries.append(json.loads(line))
        except (json.JSONDecodeError, ValueError):
            continue
    return entries


def parse_ts(ts_str: str) -> datetime | None:
    """Parse an ISO timestamp, tolerating missing tz."""
    if not ts_str:
        return None
    try:
        dt = datetime.fromisoformat(ts_str.replace("Z", "+00:00"))
        if dt.tzinfo is None:
            dt = dt.replace(tzinfo=timezone.utc)
        return dt
    except (ValueError, TypeError):
        return None


def window(entries: list[dict], days: int) -> list[dict]:
    """Filter entries to the last N days."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    result = []
    for e in entries:
        ts = parse_ts(e.get("timestamp", ""))
        if ts and ts >= cutoff:
            result.append(e)
    return result


# ── Analysis functions ───────────────────────────────────────────────────

def compute_trends(cycles: list[dict]) -> dict:
    """Compare recent window (last 7d) vs older window (7-14d ago)."""
    recent = window(cycles, 7)
    older = window(cycles, 14)
    # Remove recent from older to get the 7-14d window
    recent_set = {(e.get("cycle"), e.get("timestamp")) for e in recent}
    older = [e for e in older if (e.get("cycle"), e.get("timestamp")) not in recent_set]

    def stats(entries):
        if not entries:
            return {"count": 0, "success_rate": None, "avg_duration": None,
                    "lines_net": 0, "prs_merged": 0}
        successes = sum(1 for e in entries if e.get("success"))
        durations = [e["duration"] for e in entries if e.get("duration", 0) > 0]
        return {
            "count": len(entries),
            "success_rate": round(successes / len(entries), 3) if entries else None,
            "avg_duration": round(sum(durations) / len(durations)) if durations else None,
            "lines_net": sum(e.get("lines_added", 0) - e.get("lines_removed", 0) for e in entries),
            "prs_merged": sum(1 for e in entries if e.get("pr")),
        }

    recent_stats = stats(recent)
    older_stats = stats(older)

    trend = {
        "recent_7d": recent_stats,
        "previous_7d": older_stats,
        "velocity_change": None,
        "success_rate_change": None,
        "duration_change": None,
    }

    if recent_stats["count"] and older_stats["count"]:
        trend["velocity_change"] = recent_stats["count"] - older_stats["count"]
    if recent_stats["success_rate"] is not None and older_stats["success_rate"] is not None:
        trend["success_rate_change"] = round(
            recent_stats["success_rate"] - older_stats["success_rate"], 3
        )
    if recent_stats["avg_duration"] is not None and older_stats["avg_duration"] is not None:
        trend["duration_change"] = recent_stats["avg_duration"] - older_stats["avg_duration"]

    return trend


def type_analysis(cycles: list[dict]) -> dict:
    """Per-type success rates and durations."""
    by_type: dict[str, list[dict]] = defaultdict(list)
    for c in cycles:
        by_type[c.get("type", "unknown")].append(c)

    result = {}
    for t, entries in by_type.items():
        durations = [e["duration"] for e in entries if e.get("duration", 0) > 0]
        successes = sum(1 for e in entries if e.get("success"))
        result[t] = {
            "count": len(entries),
            "success_rate": round(successes / len(entries), 3) if entries else 0,
            "avg_duration": round(sum(durations) / len(durations)) if durations else 0,
            "max_duration": max(durations) if durations else 0,
        }
    return result


def repeat_failures(cycles: list[dict]) -> list[dict]:
    """Issues that have failed multiple times — quarantine candidates."""
    failures: dict[int, list] = defaultdict(list)
    for c in cycles:
        if not c.get("success") and c.get("issue"):
            failures[c["issue"]].append({
                "cycle": c.get("cycle"),
                "reason": c.get("reason", ""),
                "duration": c.get("duration", 0),
            })
    # Only issues with 2+ failures
    return [
        {"issue": k, "failure_count": len(v), "attempts": v}
        for k, v in sorted(failures.items(), key=lambda x: -len(x[1]))
        if len(v) >= 2
    ]


def duration_outliers(cycles: list[dict], threshold_multiple: float = 3.0) -> list[dict]:
    """Cycles that took way longer than average — something went wrong."""
    durations = [c["duration"] for c in cycles if c.get("duration", 0) > 0]
    if len(durations) < 5:
        return []
    avg = sum(durations) / len(durations)
    threshold = avg * threshold_multiple

    outliers = []
    for c in cycles:
        dur = c.get("duration", 0)
        if dur > threshold:
            outliers.append({
                "cycle": c.get("cycle"),
                "issue": c.get("issue"),
                "type": c.get("type"),
                "duration": dur,
                "avg_duration": round(avg),
                "multiple": round(dur / avg, 1) if avg > 0 else 0,
                "reason": c.get("reason", ""),
            })
    return outliers


def triage_effectiveness(deep_triages: list[dict]) -> dict:
    """How well is the deep triage performing?"""
    if not deep_triages:
        return {"runs": 0, "note": "No deep triage data yet"}

    total_reviewed = sum(d.get("issues_reviewed", 0) for d in deep_triages)
    total_refined = sum(len(d.get("issues_refined", [])) for d in deep_triages)
    total_created = sum(len(d.get("issues_created", [])) for d in deep_triages)
    total_closed = sum(len(d.get("issues_closed", [])) for d in deep_triages)
    timmy_available = sum(1 for d in deep_triages if d.get("timmy_available"))

    # Extract Timmy's feedback themes
    timmy_themes = []
    for d in deep_triages:
        fb = d.get("timmy_feedback", "")
        if fb:
            timmy_themes.append(fb[:200])

    return {
        "runs": len(deep_triages),
        "total_reviewed": total_reviewed,
        "total_refined": total_refined,
        "total_created": total_created,
        "total_closed": total_closed,
        "timmy_consultation_rate": round(timmy_available / len(deep_triages), 2),
        "timmy_recent_feedback": timmy_themes[-1] if timmy_themes else "",
        "timmy_feedback_history": timmy_themes,
    }


def generate_recommendations(
    trends: dict,
    types: dict,
    repeats: list,
    outliers: list,
    triage_eff: dict,
) -> list[dict]:
    """Produce actionable recommendations from the analysis."""
    recs = []

    # 1. Success rate declining?
    src = trends.get("success_rate_change")
    if src is not None and src < -0.1:
        recs.append({
            "severity": "high",
            "category": "reliability",
            "finding": f"Success rate dropped {abs(src)*100:.0f}pp in the last 7 days",
            "recommendation": "Review recent failures. Are issues poorly scoped? "
                              "Is main unstable? Check if triage is producing bad work items.",
        })

    # 2. Velocity dropping?
    vc = trends.get("velocity_change")
    if vc is not None and vc < -5:
        recs.append({
            "severity": "medium",
            "category": "throughput",
            "finding": f"Velocity dropped by {abs(vc)} cycles vs previous week",
            "recommendation": "Check for loop stalls, long-running cycles, or queue starvation.",
        })

    # 3. Duration creep?
    dc = trends.get("duration_change")
    if dc is not None and dc > 120:  # 2+ minutes longer
        recs.append({
            "severity": "medium",
            "category": "efficiency",
            "finding": f"Average cycle duration increased by {dc}s vs previous week",
            "recommendation": "Issues may be growing in scope. Enforce tighter decomposition "
                              "in deep triage. Check if tests are getting slower.",
        })

    # 4. Type-specific problems
    for t, info in types.items():
        if info["count"] >= 3 and info["success_rate"] < 0.5:
            recs.append({
                "severity": "high",
                "category": "type_reliability",
                "finding": f"'{t}' issues fail {(1-info['success_rate'])*100:.0f}% of the time "
                           f"({info['count']} attempts)",
                "recommendation": f"'{t}' issues need better scoping or a different approach. "
                                  f"Consider: tighter acceptance criteria, smaller scope, "
                                  f"or delegating to Kimi with more context.",
            })
        if info["avg_duration"] > 600 and info["count"] >= 3:  # >10 min avg
            recs.append({
                "severity": "medium",
                "category": "type_efficiency",
                "finding": f"'{t}' issues average {info['avg_duration']//60}m{info['avg_duration']%60}s "
                           f"(max {info['max_duration']//60}m)",
                "recommendation": f"Break '{t}' issues into smaller pieces. Target <5 min per cycle.",
            })

    # 5. Repeat failures
    for rf in repeats[:3]:
        recs.append({
            "severity": "high",
            "category": "repeat_failure",
            "finding": f"Issue #{rf['issue']} has failed {rf['failure_count']} times",
            "recommendation": "Quarantine or rewrite this issue. Repeated failure = "
                              "bad scope or missing prerequisite.",
        })

    # 6. Outliers
    if len(outliers) > 2:
        recs.append({
            "severity": "medium",
            "category": "outliers",
            "finding": f"{len(outliers)} cycles took {outliers[0].get('multiple', '?')}x+ "
                       f"longer than average",
            "recommendation": "Long cycles waste resources. Add timeout enforcement or "
                              "break complex issues earlier.",
        })

    # 7. Code growth
    recent = trends.get("recent_7d", {})
    net = recent.get("lines_net", 0)
    if net > 500:
        recs.append({
            "severity": "low",
            "category": "code_health",
            "finding": f"Net +{net} lines added in the last 7 days",
            "recommendation": "Lines of code is a liability. Balance feature work with "
                              "refactoring. Target net-zero or negative line growth.",
        })

    # 8. Triage health
    if triage_eff.get("runs", 0) == 0:
        recs.append({
            "severity": "high",
            "category": "triage",
            "finding": "Deep triage has never run",
            "recommendation": "Enable deep triage (every 20 cycles). The loop needs "
                              "LLM-driven issue refinement to stay effective.",
        })

    # No recommendations = things are healthy
    if not recs:
        recs.append({
            "severity": "info",
            "category": "health",
            "finding": "No significant issues detected",
            "recommendation": "System is healthy. Continue current patterns.",
        })

    return recs


# ── Main ─────────────────────────────────────────────────────────────────

def main() -> None:
    cycles = load_jsonl(CYCLES_FILE)
    deep_triages = load_jsonl(DEEP_TRIAGE_FILE)

    if not cycles:
        print("[introspect] No cycle data found. Nothing to analyze.")
        return

    # Run all analyses
    trends = compute_trends(cycles)
    types = type_analysis(cycles)
    repeats = repeat_failures(cycles)
    outliers = duration_outliers(cycles)
    triage_eff = triage_effectiveness(deep_triages)
    recommendations = generate_recommendations(trends, types, repeats, outliers, triage_eff)

    insights = {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "total_cycles_analyzed": len(cycles),
        "trends": trends,
        "by_type": types,
        "repeat_failures": repeats[:5],
        "duration_outliers": outliers[:5],
        "triage_effectiveness": triage_eff,
        "recommendations": recommendations,
    }

    # Write insights
    INSIGHTS_FILE.parent.mkdir(parents=True, exist_ok=True)
    INSIGHTS_FILE.write_text(json.dumps(insights, indent=2) + "\n")

    # Current epoch from latest entry
    latest_epoch = ""
    for c in reversed(cycles):
        if c.get("epoch"):
            latest_epoch = c["epoch"]
            break

    # Human-readable output
    header = f"[introspect] Analyzed {len(cycles)} cycles"
    if latest_epoch:
        header += f" · current epoch: {latest_epoch}"
    print(header)

    print("\n  TRENDS (7d vs previous 7d):")
    r7 = trends["recent_7d"]
    p7 = trends["previous_7d"]
    print(f"  Cycles: {r7['count']:>3d} (was {p7['count']})")
    if r7["success_rate"] is not None:
        change = trends["success_rate_change"] or 0
        arrow = "↑" if change > 0 else "↓" if change < 0 else "→"
        print(f"  Success rate: {r7['success_rate']*100:>4.0f}% {arrow}")
    if r7["avg_duration"] is not None:
        print(f"  Avg duration: {r7['avg_duration']//60}m{r7['avg_duration']%60:02d}s")
    print(f"  PRs merged: {r7['prs_merged']:>3d} (was {p7['prs_merged']})")
    print(f"  Lines net: {r7['lines_net']:>+5d}")

    print("\n  BY TYPE:")
    for t, info in sorted(types.items(), key=lambda x: -x[1]["count"]):
        print(f"  {t:12s} n={info['count']:>2d} "
              f"ok={info['success_rate']*100:>3.0f}% "
              f"avg={info['avg_duration']//60}m{info['avg_duration']%60:02d}s")

    if repeats:
        print("\n  REPEAT FAILURES:")
        for rf in repeats[:3]:
            print(f"  #{rf['issue']} failed {rf['failure_count']}x")

    print(f"\n  RECOMMENDATIONS ({len(recommendations)}):")
    for rec in recommendations:
        sev = {"high": "🔴", "medium": "🟡", "low": "🟢", "info": "ℹ️ "}.get(rec["severity"], "?")
        print(f"  {sev} {rec['finding']}")
        print(f"     → {rec['recommendation']}")

    print(f"\n  Written to: {INSIGHTS_FILE}")


if __name__ == "__main__":
    main()
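The repeat-failure grouping above is the heart of the quarantine logic: issues with two or more failed cycles surface as candidates, sorted by failure count. A toy check of that rule on hand-made cycle records (simplified to counts only, unlike the script's richer attempt dicts):

```python
from collections import defaultdict

def repeat_failures(cycles: list[dict]) -> list[dict]:
    """Group failed cycles by issue; keep issues with 2+ failures."""
    failures: dict[int, list] = defaultdict(list)
    for c in cycles:
        if not c.get("success") and c.get("issue"):
            failures[c["issue"]].append(c.get("cycle"))
    return [
        {"issue": k, "failure_count": len(v)}
        for k, v in sorted(failures.items(), key=lambda x: -len(x[1]))
        if len(v) >= 2
    ]

cycles = [
    {"cycle": 1, "issue": 7, "success": False},
    {"cycle": 2, "issue": 7, "success": False},
    {"cycle": 3, "issue": 9, "success": False},  # one failure only — below threshold
    {"cycle": 4, "issue": 7, "success": True},   # successes never count as failures
]
print(repeat_failures(cycles))  # → [{'issue': 7, 'failure_count': 2}]
```

Note that a later success does not clear earlier failures from the tally; the script counts raw failed attempts within the analyzed window.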
@@ -10,6 +10,11 @@ from pydantic_settings import BaseSettings, SettingsConfigDict
|
||||
APP_START_TIME: _datetime = _datetime.now(UTC)
|
||||
|
||||
|
||||
def normalize_ollama_url(url: str) -> str:
|
||||
"""Replace localhost with 127.0.0.1 to avoid IPv6 resolution delays."""
|
||||
return url.replace("localhost", "127.0.0.1")
|
||||
|
||||
|
||||
class Settings(BaseSettings):
|
||||
"""Central configuration — all env-var access goes through this class."""
|
||||
|
||||
@@ -19,6 +24,11 @@ class Settings(BaseSettings):
|
||||
# Ollama host — override with OLLAMA_URL env var or .env file
|
||||
ollama_url: str = "http://localhost:11434"
|
||||
|
||||
@property
|
||||
def normalized_ollama_url(self) -> str:
|
||||
"""Return ollama_url with localhost replaced by 127.0.0.1."""
|
||||
return normalize_ollama_url(self.ollama_url)
|
||||

    # LLM model passed to Agno/Ollama — override with OLLAMA_MODEL
    # qwen3:30b is the primary model — better reasoning and tool calling
    # than llama3.1:8b-instruct while still running locally on modest hardware.

@@ -64,23 +74,17 @@ class Settings(BaseSettings):
    # Seconds to wait for user confirmation before auto-rejecting.
    discord_confirm_timeout: int = 120

    # ── AirLLM / backend selection ───────────────────────────────────────────
    # ── Backend selection ────────────────────────────────────────────────────
    # "ollama" — always use Ollama (default, safe everywhere)
    # "airllm" — always use AirLLM (requires pip install ".[bigbrain]")
    # "auto" — use AirLLM on Apple Silicon if airllm is installed,
    # fall back to Ollama otherwise
    timmy_model_backend: Literal["ollama", "airllm", "grok", "claude", "auto"] = "ollama"

    # AirLLM model size when backend is airllm or auto.
    # Larger = smarter, but needs more RAM / disk.
    # 8b ~16 GB | 70b ~140 GB | 405b ~810 GB
    airllm_model_size: Literal["8b", "70b", "405b"] = "70b"
    # "auto" — pick best available local backend, fall back to Ollama
    timmy_model_backend: Literal["ollama", "grok", "claude", "auto"] = "ollama"

    # ── Grok (xAI) — opt-in premium cloud backend ────────────────────────
    # Grok is a premium augmentation layer — local-first ethos preserved.
    # Only used when explicitly enabled and query complexity warrants it.
    grok_enabled: bool = False
    xai_api_key: str = ""
    xai_base_url: str = "https://api.x.ai/v1"
    grok_default_model: str = "grok-3-fast"
    grok_max_sats_per_query: int = 200
    grok_free: bool = False  # Skip Lightning invoice when user has own API key

@@ -138,7 +142,12 @@ class Settings(BaseSettings):

    # CORS allowed origins for the web chat interface (Gitea Pages, etc.)
    # Set CORS_ORIGINS as a comma-separated list, e.g. "http://localhost:3000,https://example.com"
    cors_origins: list[str] = ["*"]
    cors_origins: list[str] = [
        "http://localhost:3000",
        "http://localhost:8000",
        "http://127.0.0.1:3000",
        "http://127.0.0.1:8000",
    ]

    # Trusted hosts for the Host header check (TrustedHostMiddleware).
    # Set TRUSTED_HOSTS as a comma-separated list. Wildcards supported (e.g. "*.ts.net").

@@ -238,12 +247,19 @@ class Settings(BaseSettings):
    # Fallback to server when browser model is unavailable or too slow.
    browser_model_fallback: bool = True

    # ── Deep Focus Mode ─────────────────────────────────────────────
    # "deep" = single-problem context; "broad" = default multi-task.
    focus_mode: Literal["deep", "broad"] = "broad"

    # ── Default Thinking ──────────────────────────────────────────────
    # When enabled, the agent starts an internal thought loop on server start.
    thinking_enabled: bool = True
    thinking_interval_seconds: int = 300  # 5 minutes between thoughts
    thinking_timeout_seconds: int = 120  # max wall-clock time per thinking cycle
    thinking_distill_every: int = 10  # distill facts from thoughts every Nth thought
    thinking_issue_every: int = 20  # file Gitea issues from thoughts every Nth thought
    thinking_memory_check_every: int = 50  # check memory status every Nth thought
    thinking_idle_timeout_minutes: int = 60  # pause thoughts after N minutes without user input

    # ── Gitea Integration ─────────────────────────────────────────────
    # Local Gitea instance for issue tracking and self-improvement.

@@ -388,7 +404,7 @@ def check_ollama_model_available(model_name: str) -> bool:
    import json
    import urllib.request

    url = settings.ollama_url.replace("localhost", "127.0.0.1")
    url = settings.normalized_ollama_url
    req = urllib.request.Request(
        f"{url}/api/tags",
        method="GET",

@@ -465,8 +481,19 @@ def validate_startup(*, force: bool = False) -> None:
                ", ".join(_missing),
            )
            sys.exit(1)
        if "*" in settings.cors_origins:
            _startup_logger.error(
                "PRODUCTION SECURITY ERROR: CORS wildcard '*' is not allowed "
                "in production. Set CORS_ORIGINS to explicit origins."
            )
            sys.exit(1)
        _startup_logger.info("Production mode: security secrets validated ✓")
    else:
        if "*" in settings.cors_origins:
            _startup_logger.warning(
                "SEC: CORS_ORIGINS contains wildcard '*' — "
                "restrict to explicit origins before deploying to production."
            )
        if not settings.l402_hmac_secret:
            _startup_logger.warning(
                "SEC: L402_HMAC_SECRET is not set — "

@@ -8,6 +8,7 @@ Key improvements:
"""

import asyncio
import json
import logging
from contextlib import asynccontextmanager
from pathlib import Path

@@ -28,6 +29,7 @@ from dashboard.routes.agents import router as agents_router
from dashboard.routes.briefing import router as briefing_router
from dashboard.routes.calm import router as calm_router
from dashboard.routes.chat_api import router as chat_api_router
from dashboard.routes.chat_api_v1 import router as chat_api_v1_router
from dashboard.routes.db_explorer import router as db_explorer_router
from dashboard.routes.discord import router as discord_router
from dashboard.routes.experiments import router as experiments_router

@@ -44,8 +46,11 @@ from dashboard.routes.tasks import router as tasks_router
from dashboard.routes.telegram import router as telegram_router
from dashboard.routes.thinking import router as thinking_router
from dashboard.routes.tools import router as tools_router
from dashboard.routes.tower import router as tower_router
from dashboard.routes.voice import router as voice_router
from dashboard.routes.work_orders import router as work_orders_router
from dashboard.routes.world import router as world_router
from timmy.workshop_state import PRESENCE_FILE


class _ColorFormatter(logging.Formatter):

@@ -151,7 +156,17 @@ async def _thinking_scheduler() -> None:
    while True:
        try:
            if settings.thinking_enabled:
                await thinking_engine.think_once()
                await asyncio.wait_for(
                    thinking_engine.think_once(),
                    timeout=settings.thinking_timeout_seconds,
                )
        except TimeoutError:
            logger.warning(
                "Thinking cycle timed out after %ds — Ollama may be unresponsive",
                settings.thinking_timeout_seconds,
            )
        except asyncio.CancelledError:
            raise
        except Exception as exc:
            logger.error("Thinking scheduler error: %s", exc)
@@ -171,7 +186,10 @@ async def _loop_qa_scheduler() -> None:
    while True:
        try:
            if settings.loop_qa_enabled:
                result = await loop_qa_orchestrator.run_next_test()
                result = await asyncio.wait_for(
                    loop_qa_orchestrator.run_next_test(),
                    timeout=settings.thinking_timeout_seconds,
                )
                if result:
                    status = "PASS" if result["success"] else "FAIL"
                    logger.info(

@@ -180,6 +198,13 @@ async def _loop_qa_scheduler() -> None:
                        status,
                        result.get("details", "")[:80],
                    )
        except TimeoutError:
            logger.warning(
                "Loop QA test timed out after %ds",
                settings.thinking_timeout_seconds,
            )
        except asyncio.CancelledError:
            raise
        except Exception as exc:
            logger.error("Loop QA scheduler error: %s", exc)

@@ -187,6 +212,54 @@ async def _loop_qa_scheduler() -> None:
        await asyncio.sleep(interval)


_PRESENCE_POLL_SECONDS = 30
_PRESENCE_INITIAL_DELAY = 3

_SYNTHESIZED_STATE: dict = {
    "version": 1,
    "liveness": None,
    "current_focus": "",
    "mood": "idle",
    "active_threads": [],
    "recent_events": [],
    "concerns": [],
}


async def _presence_watcher() -> None:
    """Background task: watch ~/.timmy/presence.json and broadcast changes via WS.

    Polls the file every 30 seconds (matching Timmy's write cadence).
    If the file doesn't exist, broadcasts a synthesised idle state.
    """
    from infrastructure.ws_manager.handler import ws_manager as ws_mgr

    await asyncio.sleep(_PRESENCE_INITIAL_DELAY)  # Stagger after other schedulers

    last_mtime: float = 0.0

    while True:
        try:
            if PRESENCE_FILE.exists():
                mtime = PRESENCE_FILE.stat().st_mtime
                if mtime != last_mtime:
                    last_mtime = mtime
                    raw = await asyncio.to_thread(PRESENCE_FILE.read_text)
                    state = json.loads(raw)
                    await ws_mgr.broadcast("timmy_state", state)
            else:
                # File absent — broadcast synthesised state once per cycle
                if last_mtime != -1.0:
                    last_mtime = -1.0
                    await ws_mgr.broadcast("timmy_state", _SYNTHESIZED_STATE)
        except json.JSONDecodeError as exc:
            logger.warning("presence.json parse error: %s", exc)
        except Exception as exc:
            logger.warning("Presence watcher error: %s", exc)

        await asyncio.sleep(_PRESENCE_POLL_SECONDS)


async def _start_chat_integrations_background() -> None:
    """Background task: start chat integrations without blocking startup."""
    from integrations.chat_bridge.registry import platform_registry

@@ -277,125 +350,118 @@ async def _discord_token_watcher() -> None:
        logger.warning("Discord auto-start failed: %s", exc)


@asynccontextmanager
async def lifespan(app: FastAPI):
    """Application lifespan manager with non-blocking startup."""

    # Validate security config (no-op in test mode)
def _startup_init() -> None:
    """Validate config and enable event persistence."""
    from config import validate_startup

    validate_startup()

    # Enable event persistence (unified EventBus + swarm event_log)
    from infrastructure.events.bus import init_event_bus_persistence

    init_event_bus_persistence()

    # Create all background tasks without waiting for them
    briefing_task = asyncio.create_task(_briefing_scheduler())
    thinking_task = asyncio.create_task(_thinking_scheduler())
    loop_qa_task = asyncio.create_task(_loop_qa_scheduler())

    # Initialize Spark Intelligence engine
    from spark.engine import get_spark_engine

    if get_spark_engine().enabled:
        logger.info("Spark Intelligence active — event capture enabled")

    # Auto-prune old vector store memories on startup
    if settings.memory_prune_days > 0:
        try:
            from timmy.memory_system import prune_memories

            pruned = prune_memories(
def _startup_background_tasks() -> list[asyncio.Task]:
    """Spawn all recurring background tasks (non-blocking)."""
    return [
        asyncio.create_task(_briefing_scheduler()),
        asyncio.create_task(_thinking_scheduler()),
        asyncio.create_task(_loop_qa_scheduler()),
        asyncio.create_task(_presence_watcher()),
        asyncio.create_task(_start_chat_integrations_background()),
    ]


def _try_prune(label: str, prune_fn, days: int) -> None:
    """Run a prune function, log results, swallow errors."""
    try:
        pruned = prune_fn()
        if pruned:
            logger.info(
                "%s auto-prune: removed %d entries older than %d days",
                label,
                pruned,
                days,
            )
    except Exception as exc:
        logger.debug("%s auto-prune skipped: %s", label, exc)


def _check_vault_size() -> None:
    """Warn if the memory vault exceeds the configured size limit."""
    try:
        vault_path = Path(settings.repo_root) / "memory" / "notes"
        if vault_path.exists():
            total_bytes = sum(f.stat().st_size for f in vault_path.rglob("*") if f.is_file())
            total_mb = total_bytes / (1024 * 1024)
            if total_mb > settings.memory_vault_max_mb:
                logger.warning(
                    "Memory vault (%.1f MB) exceeds limit (%d MB) — consider archiving old notes",
                    total_mb,
                    settings.memory_vault_max_mb,
                )
    except Exception as exc:
        logger.debug("Vault size check skipped: %s", exc)


def _startup_pruning() -> None:
    """Auto-prune old memories, thoughts, and events on startup."""
    if settings.memory_prune_days > 0:
        from timmy.memory_system import prune_memories

        _try_prune(
            "Memory",
            lambda: prune_memories(
                older_than_days=settings.memory_prune_days,
                keep_facts=settings.memory_prune_keep_facts,
            )
            if pruned:
                logger.info(
                    "Memory auto-prune: removed %d entries older than %d days",
                    pruned,
                    settings.memory_prune_days,
                )
        except Exception as exc:
            logger.debug("Memory auto-prune skipped: %s", exc)
            ),
            settings.memory_prune_days,
        )

    # Auto-prune old thoughts on startup
    if settings.thoughts_prune_days > 0:
        try:
            from timmy.thinking import thinking_engine
        from timmy.thinking import thinking_engine

            pruned = thinking_engine.prune_old_thoughts(
        _try_prune(
            "Thought",
            lambda: thinking_engine.prune_old_thoughts(
                keep_days=settings.thoughts_prune_days,
                keep_min=settings.thoughts_prune_keep_min,
            )
            if pruned:
                logger.info(
                    "Thought auto-prune: removed %d entries older than %d days",
                    pruned,
                    settings.thoughts_prune_days,
                )
        except Exception as exc:
            logger.debug("Thought auto-prune skipped: %s", exc)
            ),
            settings.thoughts_prune_days,
        )

    # Auto-prune old system events on startup
    if settings.events_prune_days > 0:
        try:
            from swarm.event_log import prune_old_events
        from swarm.event_log import prune_old_events

            pruned = prune_old_events(
        _try_prune(
            "Event",
            lambda: prune_old_events(
                keep_days=settings.events_prune_days,
                keep_min=settings.events_prune_keep_min,
            )
            if pruned:
                logger.info(
                    "Event auto-prune: removed %d entries older than %d days",
                    pruned,
                    settings.events_prune_days,
                )
        except Exception as exc:
            logger.debug("Event auto-prune skipped: %s", exc)
            ),
            settings.events_prune_days,
        )

    # Warn if memory vault exceeds size limit
    if settings.memory_vault_max_mb > 0:
        try:
            vault_path = Path(settings.repo_root) / "memory" / "notes"
            if vault_path.exists():
                total_bytes = sum(f.stat().st_size for f in vault_path.rglob("*") if f.is_file())
                total_mb = total_bytes / (1024 * 1024)
                if total_mb > settings.memory_vault_max_mb:
                    logger.warning(
                        "Memory vault (%.1f MB) exceeds limit (%d MB) — consider archiving old notes",
                        total_mb,
                        settings.memory_vault_max_mb,
                    )
        except Exception as exc:
            logger.debug("Vault size check skipped: %s", exc)
    _check_vault_size()

    # Start chat integrations in background
    chat_task = asyncio.create_task(_start_chat_integrations_background())

    # Register session logger with error capture (breaks infrastructure → timmy circular dep)
    try:
        from infrastructure.error_capture import register_error_recorder
        from timmy.session_logger import get_session_logger

        register_error_recorder(get_session_logger().record_error)
    except Exception:
        pass

    logger.info("✓ Dashboard ready for requests")

    yield

    # Cleanup on shutdown
async def _shutdown_cleanup(
    bg_tasks: list[asyncio.Task],
    workshop_heartbeat,
) -> None:
    """Stop chat bots, MCP sessions, heartbeat, and cancel background tasks."""
    from integrations.chat_bridge.vendors.discord import discord_bot
    from integrations.telegram_bot.bot import telegram_bot

    await discord_bot.stop()
    await telegram_bot.stop()

    # Close MCP tool server sessions
    try:
        from timmy.mcp_tools import close_mcp_sessions

@@ -403,13 +469,44 @@ async def lifespan(app: FastAPI):
    except Exception as exc:
        logger.debug("MCP shutdown: %s", exc)

    for task in [briefing_task, thinking_task, chat_task, loop_qa_task]:
        if task:
            task.cancel()
            try:
                await task
            except asyncio.CancelledError:
                pass
    await workshop_heartbeat.stop()

    for task in bg_tasks:
        task.cancel()
        try:
            await task
        except asyncio.CancelledError:
            pass


@asynccontextmanager
async def lifespan(app: FastAPI):
    """Application lifespan manager with non-blocking startup."""
    _startup_init()
    bg_tasks = _startup_background_tasks()
    _startup_pruning()

    # Start Workshop presence heartbeat with WS relay
    from dashboard.routes.world import broadcast_world_state
    from timmy.workshop_state import WorkshopHeartbeat

    workshop_heartbeat = WorkshopHeartbeat(on_change=broadcast_world_state)
    await workshop_heartbeat.start()

    # Register session logger with error capture
    try:
        from infrastructure.error_capture import register_error_recorder
        from timmy.session_logger import get_session_logger

        register_error_recorder(get_session_logger().record_error)
    except Exception:
        logger.debug("Failed to register error recorder")

    logger.info("✓ Dashboard ready for requests")

    yield

    await _shutdown_cleanup(bg_tasks, workshop_heartbeat)


app = FastAPI(

@@ -422,15 +519,14 @@ app = FastAPI(


def _get_cors_origins() -> list[str]:
    """Get CORS origins from settings, with sensible defaults."""
    """Get CORS origins from settings, rejecting wildcards in production."""
    origins = settings.cors_origins
    if settings.debug and origins == ["*"]:
        return [
            "http://localhost:3000",
            "http://localhost:8000",
            "http://127.0.0.1:3000",
            "http://127.0.0.1:8000",
        ]
    if "*" in origins and not settings.debug:
        logger.warning(
            "Wildcard '*' in CORS_ORIGINS stripped in production — "
            "set explicit origins via CORS_ORIGINS env var"
        )
        origins = [o for o in origins if o != "*"]
    return origins


@@ -483,6 +579,7 @@ app.include_router(grok_router)
app.include_router(models_router)
app.include_router(models_api_router)
app.include_router(chat_api_router)
app.include_router(chat_api_v1_router)
app.include_router(thinking_router)
app.include_router(calm_router)
app.include_router(tasks_router)

@@ -491,6 +588,8 @@ app.include_router(loop_qa_router)
app.include_router(system_router)
app.include_router(experiments_router)
app.include_router(db_explorer_router)
app.include_router(world_router)
app.include_router(tower_router)


@app.websocket("/ws")

@@ -100,7 +100,7 @@ class CSRFMiddleware(BaseHTTPMiddleware):
        ...

    Usage:
        app.add_middleware(CSRFMiddleware, secret="your-secret-key")
        app.add_middleware(CSRFMiddleware, secret=settings.csrf_secret)

    Attributes:
        secret: Secret key for token signing (optional, for future use).

@@ -175,18 +175,12 @@ class CSRFMiddleware(BaseHTTPMiddleware):
            return await call_next(request)

        # Token validation failed and path is not exempt
        # We still need to call the app to check if the endpoint is decorated
        # with @csrf_exempt, so we'll let it through and check after routing
        response = await call_next(request)

        # After routing, check if the endpoint is marked as exempt
        endpoint = request.scope.get("endpoint")
        # Resolve the endpoint from routes BEFORE executing to avoid side effects
        endpoint = self._resolve_endpoint(request)
        if endpoint and is_csrf_exempt(endpoint):
            # Endpoint is marked as exempt, allow the response
            return response
            return await call_next(request)

        # Endpoint is not exempt and token validation failed
        # Return 403 error
        # Endpoint is not exempt and token validation failed — reject without executing
        return JSONResponse(
            status_code=403,
            content={

@@ -196,6 +190,42 @@ class CSRFMiddleware(BaseHTTPMiddleware):
            },
        )

    def _resolve_endpoint(self, request: Request) -> Callable | None:
        """Resolve the endpoint for a request without executing it.

        Walks the app chain to find routes, then matches against the request
        scope. This allows checking @csrf_exempt before the handler runs
        (avoiding side effects on CSRF rejection).

        Returns:
            The endpoint callable if found, None otherwise.
        """
        try:
            from starlette.routing import Match

            # Walk the middleware/app chain to find something with routes
            routes = None
            current = self.app
            for _ in range(10):  # Safety limit
                routes = getattr(current, "routes", None)
                if routes:
                    break
                current = getattr(current, "app", None)
                if current is None:
                    break

            if not routes:
                return None

            scope = dict(request.scope)
            for route in routes:
                match, child_scope = route.matches(scope)
                if match == Match.FULL:
                    return child_scope.get("endpoint")
        except Exception:
            logger.debug("Failed to resolve endpoint for CSRF check")
        return None

    def _is_likely_exempt(self, path: str) -> bool:
        """Check if a path is likely to be CSRF exempt.
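The app-chain walk inside `_resolve_endpoint` can be illustrated without Starlette. The classes below are stand-ins, not the real Starlette types; they only reproduce the two attributes the walk relies on (`.routes` on a router, `.app` on each middleware wrapper), which is why the helper finds the route table no matter how many middleware layers wrap it.

```python
class Route:
    """Stand-in for a Starlette route: a path and its handler."""
    def __init__(self, path: str, endpoint: str):
        self.path, self.endpoint = path, endpoint


class Router:
    """Stand-in for the innermost app that exposes .routes."""
    def __init__(self, routes: list):
        self.routes = routes


class Middleware:
    """Stand-in for a middleware layer that wraps an inner app as .app."""
    def __init__(self, app):
        self.app = app


def find_routes(app, max_depth: int = 10):
    """Walk .app attributes until an object exposing .routes is found."""
    current = app
    for _ in range(max_depth):  # safety limit, as in the middleware
        routes = getattr(current, "routes", None)
        if routes:
            return routes
        current = getattr(current, "app", None)
        if current is None:
            return None
    return None


inner = Router([Route("/chat", endpoint="chat_handler")])
stack = Middleware(Middleware(inner))  # routes sit two wrappers down
print(find_routes(stack)[0].endpoint)  # → chat_handler
```

In the real middleware, the found routes are then matched against the request scope with `route.matches(scope)`, and only a `Match.FULL` result yields the endpoint for the `@csrf_exempt` check.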
@@ -1,4 +1,4 @@
from datetime import date, datetime
from datetime import UTC, date, datetime
from enum import StrEnum

from sqlalchemy import JSON, Boolean, Column, Date, DateTime, Index, Integer, String

@@ -40,8 +40,13 @@ class Task(Base):
    deferred_at = Column(DateTime, nullable=True)

    # Timestamps
    created_at = Column(DateTime, default=datetime.utcnow, nullable=False)
    updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow, nullable=False)
    created_at = Column(DateTime, default=lambda: datetime.now(UTC), nullable=False)
    updated_at = Column(
        DateTime,
        default=lambda: datetime.now(UTC),
        onupdate=lambda: datetime.now(UTC),
        nullable=False,
    )

    __table_args__ = (Index("ix_task_state_order", "state", "sort_order"),)

@@ -59,4 +64,4 @@ class JournalEntry(Base):
    gratitude = Column(String(500), nullable=True)
    energy_level = Column(Integer, nullable=True)  # User-reported, 1-10

    created_at = Column(DateTime, default=datetime.utcnow, nullable=False)
    created_at = Column(DateTime, default=lambda: datetime.now(UTC), nullable=False)
@@ -71,19 +71,87 @@ async def clear_history(request: Request):
    )


def _validate_message(message: str) -> str:
    """Strip and validate chat input; raise HTTPException on bad input."""
    from fastapi import HTTPException

    message = message.strip()
    if not message:
        raise HTTPException(status_code=400, detail="Message cannot be empty")
    if len(message) > MAX_MESSAGE_LENGTH:
        raise HTTPException(status_code=422, detail="Message too long")
    return message


def _record_user_activity() -> None:
    """Notify the thinking engine that the user is active."""
    try:
        from timmy.thinking import thinking_engine

        thinking_engine.record_user_input()
    except Exception:
        logger.debug("Failed to record user input for thinking engine")


def _extract_tool_actions(run_output) -> list[dict]:
    """If Agno paused the run for tool confirmation, build approval items."""
    from timmy.approvals import create_item

    tool_actions: list[dict] = []
    status = getattr(run_output, "status", None)
    is_paused = status == "PAUSED" or str(status) == "RunStatus.paused"

    if not (is_paused and getattr(run_output, "active_requirements", None)):
        return tool_actions

    for req in run_output.active_requirements:
        if not getattr(req, "needs_confirmation", False):
            continue
        te = req.tool_execution
        tool_name = getattr(te, "tool_name", "unknown")
        tool_args = getattr(te, "tool_args", {}) or {}

        item = create_item(
            title=f"Dashboard: {tool_name}",
            description=format_action_description(tool_name, tool_args),
            proposed_action=json.dumps({"tool": tool_name, "args": tool_args}),
            impact=get_impact_level(tool_name),
        )
        _pending_runs[item.id] = {
            "run_output": run_output,
            "requirement": req,
            "tool_name": tool_name,
            "tool_args": tool_args,
        }
        tool_actions.append(
            {
                "approval_id": item.id,
                "tool_name": tool_name,
                "description": format_action_description(tool_name, tool_args),
                "impact": get_impact_level(tool_name),
            }
        )
    return tool_actions


def _log_exchange(
    message: str, response_text: str | None, error_text: str | None, timestamp: str
) -> None:
    """Append user message and agent/error reply to the in-memory log."""
    message_log.append(role="user", content=message, timestamp=timestamp, source="browser")
    if response_text:
        message_log.append(
            role="agent", content=response_text, timestamp=timestamp, source="browser"
        )
    elif error_text:
        message_log.append(role="error", content=error_text, timestamp=timestamp, source="browser")


@router.post("/default/chat", response_class=HTMLResponse)
async def chat_agent(request: Request, message: str = Form(...)):
    """Chat — synchronous response with native Agno tool confirmation."""
    message = message.strip()
    if not message:
        from fastapi import HTTPException

        raise HTTPException(status_code=400, detail="Message cannot be empty")

    if len(message) > MAX_MESSAGE_LENGTH:
        from fastapi import HTTPException

        raise HTTPException(status_code=422, detail="Message too long")
    message = _validate_message(message)
    _record_user_activity()

    timestamp = datetime.now().strftime("%H:%M:%S")
    response_text = None

@@ -96,54 +164,15 @@ async def chat_agent(request: Request, message: str = Form(...)):
        error_text = f"Chat error: {exc}"
        run_output = None

    # Check if Agno paused the run for tool confirmation
    tool_actions = []
    tool_actions: list[dict] = []
    if run_output is not None:
        status = getattr(run_output, "status", None)
        is_paused = status == "PAUSED" or str(status) == "RunStatus.paused"

        if is_paused and getattr(run_output, "active_requirements", None):
            for req in run_output.active_requirements:
                if getattr(req, "needs_confirmation", False):
                    te = req.tool_execution
                    tool_name = getattr(te, "tool_name", "unknown")
                    tool_args = getattr(te, "tool_args", {}) or {}

                    from timmy.approvals import create_item

                    item = create_item(
                        title=f"Dashboard: {tool_name}",
                        description=format_action_description(tool_name, tool_args),
                        proposed_action=json.dumps({"tool": tool_name, "args": tool_args}),
                        impact=get_impact_level(tool_name),
                    )
                    _pending_runs[item.id] = {
                        "run_output": run_output,
                        "requirement": req,
                        "tool_name": tool_name,
                        "tool_args": tool_args,
                    }
                    tool_actions.append(
                        {
                            "approval_id": item.id,
                            "tool_name": tool_name,
                            "description": format_action_description(tool_name, tool_args),
                            "impact": get_impact_level(tool_name),
                        }
                    )

        tool_actions = _extract_tool_actions(run_output)
        raw_content = run_output.content if hasattr(run_output, "content") else ""
        response_text = _clean_response(raw_content or "")
        if not response_text and not tool_actions:
            response_text = None  # let error template show if needed
            response_text = None

    message_log.append(role="user", content=message, timestamp=timestamp, source="browser")
    if response_text:
        message_log.append(
            role="agent", content=response_text, timestamp=timestamp, source="browser"
        )
    elif error_text:
        message_log.append(role="error", content=error_text, timestamp=timestamp, source="browser")
    _log_exchange(message, response_text, error_text, timestamp)

    return templates.TemplateResponse(
        request,

@@ -1,5 +1,5 @@
import logging
from datetime import date, datetime
from datetime import UTC, date, datetime

from fastapi import APIRouter, Depends, Form, HTTPException, Request
from fastapi.responses import HTMLResponse

@@ -19,14 +19,17 @@ router = APIRouter(tags=["calm"])

# Helper functions for state machine logic
def get_now_task(db: Session) -> Task | None:
    """Return the single active NOW task, or None."""
    return db.query(Task).filter(Task.state == TaskState.NOW).first()


def get_next_task(db: Session) -> Task | None:
    """Return the single queued NEXT task, or None."""
    return db.query(Task).filter(Task.state == TaskState.NEXT).first()


def get_later_tasks(db: Session) -> list[Task]:
    """Return all LATER tasks ordered by MIT flag then sort_order."""
    return (
        db.query(Task)
        .filter(Task.state == TaskState.LATER)

@@ -35,7 +38,63 @@ def get_later_tasks(db: Session) -> list[Task]:
    )


def _create_mit_tasks(db: Session, titles: list[str | None]) -> list[int]:
    """Create MIT tasks from a list of titles, return their IDs."""
    task_ids: list[int] = []
    for title in titles:
        if title:
            task = Task(
                title=title,
                is_mit=True,
                state=TaskState.LATER,
                certainty=TaskCertainty.SOFT,
            )
            db.add(task)
            db.commit()
            db.refresh(task)
            task_ids.append(task.id)
    return task_ids


def _create_other_tasks(db: Session, other_tasks: str):
    """Create non-MIT tasks from newline-separated text."""
    for line in other_tasks.split("\n"):
        line = line.strip()
        if line:
            task = Task(
                title=line,
                state=TaskState.LATER,
                certainty=TaskCertainty.FUZZY,
            )
            db.add(task)


def _seed_now_next(db: Session):
    """Set initial NOW/NEXT states when both slots are empty."""
    if get_now_task(db) or get_next_task(db):
        return
    later_tasks = (
        db.query(Task)
        .filter(Task.state == TaskState.LATER)
        .order_by(Task.is_mit.desc(), Task.sort_order)
        .all()
    )
    if later_tasks:
        later_tasks[0].state = TaskState.NOW
        db.add(later_tasks[0])
        db.flush()
        if len(later_tasks) > 1:
            later_tasks[1].state = TaskState.NEXT
            db.add(later_tasks[1])


def promote_tasks(db: Session):
    """Enforce the NOW/NEXT/LATER state machine invariants.

    - At most one NOW task (extras demoted to NEXT).
    - If no NOW, promote NEXT -> NOW.
    - If no NEXT, promote highest-priority LATER -> NEXT.
    """
    # Ensure only one NOW task exists. If multiple, demote extras to NEXT.
    now_tasks = db.query(Task).filter(Task.state == TaskState.NOW).all()
    if len(now_tasks) > 1:
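The invariants listed in the `promote_tasks` docstring can be sketched without a database. The function below operates on plain dicts rather than SQLAlchemy `Task` rows and is only an illustration of the state machine, not the project's implementation; `order` stands in for `sort_order`.

```python
def promote(tasks: list[dict]) -> list[dict]:
    """Sketch of the NOW/NEXT/LATER invariants on plain dicts (no DB)."""
    now = [t for t in tasks if t["state"] == "NOW"]
    # Invariant 1: at most one NOW task; demote extras to NEXT.
    for extra in now[1:]:
        extra["state"] = "NEXT"
    # Invariant 2: if no NOW, promote a NEXT task.
    if not now:
        nxt = [t for t in tasks if t["state"] == "NEXT"]
        if nxt:
            nxt[0]["state"] = "NOW"
    # Invariant 3: if no NEXT, promote the highest-priority LATER (MIT first).
    if not any(t["state"] == "NEXT" for t in tasks):
        later = sorted(
            (t for t in tasks if t["state"] == "LATER"),
            key=lambda t: (not t["is_mit"], t["order"]),
        )
        if later:
            later[0]["state"] = "NEXT"
    return tasks


tasks = [
    {"title": "a", "state": "NOW", "is_mit": False, "order": 0},
    {"title": "b", "state": "NOW", "is_mit": False, "order": 1},  # duplicate NOW
    {"title": "c", "state": "LATER", "is_mit": True, "order": 2},
]
promote(tasks)
print([t["state"] for t in tasks])  # → ['NOW', 'NEXT', 'LATER']
```

The duplicate NOW is demoted, which also fills the NEXT slot, so the LATER task stays put; with an empty NEXT slot the MIT-flagged LATER task would have been promoted instead.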
@@ -74,6 +133,7 @@ def promote_tasks(db: Session):
# Endpoints
@router.get("/calm", response_class=HTMLResponse)
async def get_calm_view(request: Request, db: Session = Depends(get_db)):
    """Render the main CALM dashboard with NOW/NEXT/LATER counts."""
    now_task = get_now_task(db)
    next_task = get_next_task(db)
    later_tasks_count = len(get_later_tasks(db))
@@ -90,6 +150,7 @@ async def get_calm_view(request: Request, db: Session = Depends(get_db)):

@router.get("/calm/ritual/morning", response_class=HTMLResponse)
async def get_morning_ritual_form(request: Request):
    """Render the morning ritual intake form."""
    return templates.TemplateResponse(request, "calm/morning_ritual_form.html", {})


@@ -102,63 +163,20 @@ async def post_morning_ritual(
    mit3_title: str = Form(None),
    other_tasks: str = Form(""),
):
    # Create Journal Entry
    mit_task_ids = []
    """Process morning ritual: create MITs, other tasks, and set initial states."""
    journal_entry = JournalEntry(entry_date=date.today())
    db.add(journal_entry)
    db.commit()
    db.refresh(journal_entry)

    # Create MIT tasks
    for mit_title in [mit1_title, mit2_title, mit3_title]:
        if mit_title:
            task = Task(
                title=mit_title,
                is_mit=True,
                state=TaskState.LATER,  # Initially LATER, will be promoted
                certainty=TaskCertainty.SOFT,
            )
            db.add(task)
            db.commit()
            db.refresh(task)
            mit_task_ids.append(task.id)

    journal_entry.mit_task_ids = mit_task_ids
    journal_entry.mit_task_ids = _create_mit_tasks(db, [mit1_title, mit2_title, mit3_title])
    db.add(journal_entry)

    # Create other tasks
    for task_title in other_tasks.split("\n"):
        task_title = task_title.strip()
        if task_title:
            task = Task(
                title=task_title,
                state=TaskState.LATER,
                certainty=TaskCertainty.FUZZY,
            )
            db.add(task)

    _create_other_tasks(db, other_tasks)
    db.commit()

    # Set initial NOW/NEXT states
    # Set initial NOW/NEXT states after all tasks are created
    if not get_now_task(db) and not get_next_task(db):
        later_tasks = (
            db.query(Task)
            .filter(Task.state == TaskState.LATER)
            .order_by(Task.is_mit.desc(), Task.sort_order)
            .all()
        )
        if later_tasks:
            # Set the highest priority LATER task to NOW
            later_tasks[0].state = TaskState.NOW
            db.add(later_tasks[0])
            db.flush()  # Flush to make the change visible for the next query

            # Set the next highest priority LATER task to NEXT
            if len(later_tasks) > 1:
                later_tasks[1].state = TaskState.NEXT
                db.add(later_tasks[1])
    db.commit()  # Commit changes after initial NOW/NEXT setup
    _seed_now_next(db)
    db.commit()

    return templates.TemplateResponse(
        request,
@@ -173,6 +191,7 @@ async def post_morning_ritual(

@router.get("/calm/ritual/evening", response_class=HTMLResponse)
async def get_evening_ritual_form(request: Request, db: Session = Depends(get_db)):
    """Render the evening ritual form for today's journal entry."""
    journal_entry = db.query(JournalEntry).filter(JournalEntry.entry_date == date.today()).first()
    if not journal_entry:
        raise HTTPException(status_code=404, detail="No journal entry for today")
@@ -189,6 +208,7 @@ async def post_evening_ritual(
    gratitude: str = Form(None),
    energy_level: int = Form(None),
):
    """Process evening ritual: save reflection/gratitude, archive active tasks."""
    journal_entry = db.query(JournalEntry).filter(JournalEntry.entry_date == date.today()).first()
    if not journal_entry:
        raise HTTPException(status_code=404, detail="No journal entry for today")
@@ -206,7 +226,7 @@ async def post_evening_ritual(
    )
    for task in active_tasks:
        task.state = TaskState.DEFERRED  # Or DONE, depending on desired archiving logic
        task.deferred_at = datetime.utcnow()
        task.deferred_at = datetime.now(UTC)
        db.add(task)

    db.commit()
@@ -223,6 +243,7 @@ async def create_new_task(
    is_mit: bool = Form(False),
    certainty: TaskCertainty = Form(TaskCertainty.SOFT),
):
    """Create a new task in LATER state and return updated count."""
    task = Task(
        title=title,
        description=description,
@@ -247,6 +268,7 @@ async def start_task(
    task_id: int,
    db: Session = Depends(get_db),
):
    """Move a task to NOW state, demoting the current NOW to NEXT."""
    current_now_task = get_now_task(db)
    if current_now_task and current_now_task.id != task_id:
        current_now_task.state = TaskState.NEXT  # Demote current NOW to NEXT
@@ -257,7 +279,7 @@ async def start_task(
        raise HTTPException(status_code=404, detail="Task not found")

    task.state = TaskState.NOW
    task.started_at = datetime.utcnow()
    task.started_at = datetime.now(UTC)
    db.add(task)
    db.commit()

@@ -281,12 +303,13 @@ async def complete_task(
    task_id: int,
    db: Session = Depends(get_db),
):
    """Mark a task as DONE and trigger state promotion."""
    task = db.query(Task).filter(Task.id == task_id).first()
    if not task:
        raise HTTPException(status_code=404, detail="Task not found")

    task.state = TaskState.DONE
    task.completed_at = datetime.utcnow()
    task.completed_at = datetime.now(UTC)
    db.add(task)
    db.commit()

@@ -309,12 +332,13 @@ async def defer_task(
    task_id: int,
    db: Session = Depends(get_db),
):
    """Defer a task and trigger state promotion."""
    task = db.query(Task).filter(Task.id == task_id).first()
    if not task:
        raise HTTPException(status_code=404, detail="Task not found")

    task.state = TaskState.DEFERRED
    task.deferred_at = datetime.utcnow()
    task.deferred_at = datetime.now(UTC)
    db.add(task)
    db.commit()

@@ -333,6 +357,7 @@ async def defer_task(

@router.get("/calm/partials/later_tasks_list", response_class=HTMLResponse)
async def get_later_tasks_list(request: Request, db: Session = Depends(get_db)):
    """Render the expandable list of LATER tasks."""
    later_tasks = get_later_tasks(db)
    return templates.TemplateResponse(
        "calm/partials/later_tasks_list.html",
@@ -348,6 +373,7 @@ async def reorder_tasks(
    later_task_ids: str = Form(""),
    next_task_id: int | None = Form(None),
):
    """Reorder LATER tasks and optionally promote one to NEXT."""
    # Reorder LATER tasks
    if later_task_ids:
        ids_in_order = [int(x.strip()) for x in later_task_ids.split(",") if x.strip()]

@@ -31,6 +31,93 @@ _UPLOAD_DIR = str(Path(settings.repo_root) / "data" / "chat-uploads")
_MAX_UPLOAD_SIZE = 50 * 1024 * 1024  # 50 MB


# ── POST /api/chat — helpers ─────────────────────────────────────────────────


async def _parse_chat_body(request: Request) -> tuple[dict | None, JSONResponse | None]:
    """Parse and validate the JSON request body.

    Returns (body, None) on success or (None, error_response) on failure.
    """
    content_length = request.headers.get("content-length")
    if content_length and int(content_length) > settings.chat_api_max_body_bytes:
        return None, JSONResponse(status_code=413, content={"error": "Request body too large"})

    try:
        body = await request.json()
    except Exception as exc:
        logger.warning("Chat API JSON parse error: %s", exc)
        return None, JSONResponse(status_code=400, content={"error": "Invalid JSON"})

    messages = body.get("messages")
    if not messages or not isinstance(messages, list):
        return None, JSONResponse(status_code=400, content={"error": "messages array is required"})

    return body, None


def _extract_user_message(messages: list[dict]) -> str | None:
    """Return the text of the last user message, or *None* if absent."""
    for msg in reversed(messages):
        if msg.get("role") == "user":
            content = msg.get("content", "")
            if isinstance(content, list):
                text_parts = [
                    p.get("text", "")
                    for p in content
                    if isinstance(p, dict) and p.get("type") == "text"
                ]
                return " ".join(text_parts).strip() or None
            text = str(content).strip()
            return text or None
    return None


def _build_context_prefix() -> str:
    """Build the system-context preamble injected before the user message."""
    now = datetime.now()
    return (
        f"[System: Current date/time is "
        f"{now.strftime('%A, %B %d, %Y at %I:%M %p')}]\n"
        f"[System: Mobile client]\n\n"
    )


def _notify_thinking_engine() -> None:
    """Record user activity so the thinking engine knows we're not idle."""
    try:
        from timmy.thinking import thinking_engine

        thinking_engine.record_user_input()
    except Exception:
        logger.debug("Failed to record user input for thinking engine")


async def _process_chat(user_msg: str) -> dict | JSONResponse:
    """Send *user_msg* to the agent, log the exchange, and return a response."""
    _notify_thinking_engine()
    timestamp = datetime.now().strftime("%H:%M:%S")

    try:
        response_text = await agent_chat(
            _build_context_prefix() + user_msg,
            session_id="mobile",
        )
        message_log.append(role="user", content=user_msg, timestamp=timestamp, source="api")
        message_log.append(role="agent", content=response_text, timestamp=timestamp, source="api")
        return {"reply": response_text, "timestamp": timestamp}

    except Exception as exc:
        error_msg = f"Agent is offline: {exc}"
        logger.error("api_chat error: %s", exc)
        message_log.append(role="user", content=user_msg, timestamp=timestamp, source="api")
        message_log.append(role="error", content=error_msg, timestamp=timestamp, source="api")
        return JSONResponse(
            status_code=503,
            content={"error": error_msg, "timestamp": timestamp},
        )


# ── POST /api/chat ────────────────────────────────────────────────────────────


@@ -44,70 +131,15 @@ async def api_chat(request: Request):
    Response:
        {"reply": "...", "timestamp": "HH:MM:SS"}
    """
    # Enforce request body size limit
    content_length = request.headers.get("content-length")
    if content_length and int(content_length) > settings.chat_api_max_body_bytes:
        return JSONResponse(status_code=413, content={"error": "Request body too large"})
    body, err = await _parse_chat_body(request)
    if err:
        return err

    try:
        body = await request.json()
    except Exception as exc:
        logger.warning("Chat API JSON parse error: %s", exc)
        return JSONResponse(status_code=400, content={"error": "Invalid JSON"})

    messages = body.get("messages")
    if not messages or not isinstance(messages, list):
        return JSONResponse(status_code=400, content={"error": "messages array is required"})

    # Extract the latest user message text
    last_user_msg = None
    for msg in reversed(messages):
        if msg.get("role") == "user":
            content = msg.get("content", "")
            # Handle multimodal content arrays — extract text parts
            if isinstance(content, list):
                text_parts = [
                    p.get("text", "")
                    for p in content
                    if isinstance(p, dict) and p.get("type") == "text"
                ]
                last_user_msg = " ".join(text_parts).strip()
            else:
                last_user_msg = str(content).strip()
            break

    if not last_user_msg:
    user_msg = _extract_user_message(body["messages"])
    if not user_msg:
        return JSONResponse(status_code=400, content={"error": "No user message found"})

    timestamp = datetime.now().strftime("%H:%M:%S")

    try:
        # Inject context (same pattern as the HTMX chat handler in agents.py)
        now = datetime.now()
        context_prefix = (
            f"[System: Current date/time is "
            f"{now.strftime('%A, %B %d, %Y at %I:%M %p')}]\n"
            f"[System: Mobile client]\n\n"
        )
        response_text = await agent_chat(
            context_prefix + last_user_msg,
            session_id="mobile",
        )

        message_log.append(role="user", content=last_user_msg, timestamp=timestamp, source="api")
        message_log.append(role="agent", content=response_text, timestamp=timestamp, source="api")

        return {"reply": response_text, "timestamp": timestamp}

    except Exception as exc:
        error_msg = f"Agent is offline: {exc}"
        logger.error("api_chat error: %s", exc)
        message_log.append(role="user", content=last_user_msg, timestamp=timestamp, source="api")
        message_log.append(role="error", content=error_msg, timestamp=timestamp, source="api")
        return JSONResponse(
            status_code=503,
            content={"error": error_msg, "timestamp": timestamp},
        )
    return await _process_chat(user_msg)


# ── POST /api/upload ──────────────────────────────────────────────────────────

198  src/dashboard/routes/chat_api_v1.py  (new file)
@@ -0,0 +1,198 @@
"""Version 1 (v1) JSON REST API for the Timmy Time iPad app.

This module implements the specific endpoints required by the native
iPad app as defined in the project specification.

Endpoints:
    POST /api/v1/chat         — Streaming SSE chat response
    GET  /api/v1/chat/history — Retrieve chat history with limit
    POST /api/v1/upload       — Multipart file upload with auto-detection
    GET  /api/v1/status      — Detailed system and model status
"""

import json
import logging
import os
import uuid
from datetime import UTC, datetime
from pathlib import Path

from fastapi import APIRouter, File, HTTPException, Query, Request, UploadFile
from fastapi.responses import JSONResponse, StreamingResponse

from config import APP_START_TIME, settings
from dashboard.routes.health import _check_ollama
from dashboard.store import message_log
from timmy.session import _get_agent

logger = logging.getLogger(__name__)

router = APIRouter(prefix="/api/v1", tags=["chat-api-v1"])

_UPLOAD_DIR = str(Path(settings.repo_root) / "data" / "chat-uploads")
_MAX_UPLOAD_SIZE = 50 * 1024 * 1024  # 50 MB


# ── POST /api/v1/chat ─────────────────────────────────────────────────────────


@router.post("/chat")
async def api_v1_chat(request: Request):
    """Accept a JSON chat payload and return a streaming SSE response.

    Request body:
        {
            "message": "string",
            "session_id": "string",
            "attachments": ["id1", "id2"]
        }

    Response:
        text/event-stream (SSE)
    """
    try:
        body = await request.json()
    except Exception as exc:
        logger.warning("Chat v1 API JSON parse error: %s", exc)
        return JSONResponse(status_code=400, content={"error": "Invalid JSON"})

    message = body.get("message")
    session_id = body.get("session_id", "ipad-app")
    attachments = body.get("attachments", [])

    if not message:
        return JSONResponse(status_code=400, content={"error": "message is required"})

    # Prepare context for the agent
    context_prefix = (
        f"[System: Current date/time is "
        f"{datetime.now().strftime('%A, %B %d, %Y at %I:%M %p')}]\n"
        f"[System: iPad App client]\n"
    )

    if attachments:
        context_prefix += f"[System: Attachments: {', '.join(attachments)}]\n"

    context_prefix += "\n"
    full_prompt = context_prefix + message

    async def event_generator():
        try:
            agent = _get_agent()
            # Using streaming mode for SSE
            async for chunk in agent.arun(full_prompt, stream=True, session_id=session_id):
                # Agno chunks can be strings or RunOutput
                content = chunk.content if hasattr(chunk, "content") else str(chunk)
                if content:
                    yield f"data: {json.dumps({'text': content})}\n\n"

            yield "data: [DONE]\n\n"
        except Exception as exc:
            logger.error("SSE stream error: %s", exc)
            yield f"data: {json.dumps({'error': str(exc)})}\n\n"

    return StreamingResponse(event_generator(), media_type="text/event-stream")
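The `event_generator` above frames each chunk as a `data: {...}` line followed by a blank line, with a `data: [DONE]` sentinel at the end. A minimal client-side parser for that framing might look like this (a sketch: the field names match the generator above, but the sample frames are invented):

```python
import json


def parse_sse(stream: list[str]) -> list[str]:
    """Collect text chunks from 'data:' lines until the [DONE] sentinel."""
    chunks: list[str] = []
    for line in stream:
        if not line.startswith("data: "):
            continue  # skip blank separators and unrelated lines
        payload = line[len("data: "):].strip()
        if payload == "[DONE]":
            break
        event = json.loads(payload)
        if "text" in event:
            chunks.append(event["text"])
    return chunks


raw = [
    'data: {"text": "Hel"}',
    "",
    'data: {"text": "lo"}',
    "",
    "data: [DONE]",
]
print("".join(parse_sse(raw)))  # Hello
```

A real client would also check for the `error` key, which the generator emits in place of `text` when the stream fails.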


# ── GET /api/v1/chat/history ──────────────────────────────────────────────────


@router.get("/chat/history")
async def api_v1_chat_history(
    session_id: str = Query("ipad-app"), limit: int = Query(50, ge=1, le=100)
):
    """Return recent chat history for a specific session."""
    # Filter and limit the message log
    # Note: message_log.all() returns all messages; we filter by source or just return last N
    all_msgs = message_log.all()

    # In a real implementation, we'd filter by session_id if message_log supported it.
    # For now, we return the last 'limit' messages.
    history = [
        {
            "role": msg.role,
            "content": msg.content,
            "timestamp": msg.timestamp,
            "source": msg.source,
        }
        for msg in all_msgs[-limit:]
    ]

    return {"messages": history}


# ── POST /api/v1/upload ───────────────────────────────────────────────────────


@router.post("/upload")
async def api_v1_upload(file: UploadFile = File(...)):
    """Accept a file upload, auto-detect type, and return metadata.

    Response:
        {
            "id": "string",
            "type": "image|audio|document|url",
            "summary": "string",
            "metadata": {...}
        }
    """
    os.makedirs(_UPLOAD_DIR, exist_ok=True)

    file_id = uuid.uuid4().hex[:12]
    safe_name = os.path.basename(file.filename or "upload")
    stored_name = f"{file_id}-{safe_name}"
    file_path = os.path.join(_UPLOAD_DIR, stored_name)

    # Verify resolved path stays within upload directory
    resolved = Path(file_path).resolve()
    upload_root = Path(_UPLOAD_DIR).resolve()
    if not str(resolved).startswith(str(upload_root)):
        raise HTTPException(status_code=400, detail="Invalid file name")

    contents = await file.read()
    if len(contents) > _MAX_UPLOAD_SIZE:
        raise HTTPException(status_code=413, detail="File too large (max 50 MB)")

    with open(file_path, "wb") as f:
        f.write(contents)

    # Auto-detect type based on extension/mime
    mime_type = file.content_type or "application/octet-stream"
    ext = os.path.splitext(safe_name)[1].lower()

    media_type = "document"
    if mime_type.startswith("image/") or ext in [".jpg", ".jpeg", ".png", ".heic"]:
        media_type = "image"
    elif mime_type.startswith("audio/") or ext in [".m4a", ".mp3", ".wav", ".caf"]:
        media_type = "audio"
    elif ext in [".pdf", ".txt", ".md"]:
        media_type = "document"

    # Placeholder for actual processing (OCR, Whisper, etc.)
    summary = f"Uploaded {media_type}: {safe_name}"

    return {
        "id": file_id,
        "type": media_type,
        "summary": summary,
        "url": f"/uploads/{stored_name}",
        "metadata": {"fileName": safe_name, "mimeType": mime_type, "size": len(contents)},
    }
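One caveat on the upload handler's containment guard: a plain string-prefix comparison also accepts sibling directories whose names merely start with the upload root (e.g. a hypothetical `chat-uploads-evil`). A sketch of a stricter check using `pathlib.Path.is_relative_to` (available since Python 3.9); the paths below are invented for illustration:

```python
from pathlib import Path


def is_contained(candidate: Path, root: Path) -> bool:
    """True only if candidate resolves to a path inside root."""
    return candidate.resolve().is_relative_to(root.resolve())


root = Path("/srv/data/chat-uploads")

# A file directly under the root is accepted:
assert is_contained(root / "abc123-photo.png", root)

# A sibling directory passes str(...).startswith(...) but fails here:
evil = Path("/srv/data/chat-uploads-evil/x")
assert str(evil).startswith(str(root))      # the prefix check is fooled
assert not is_contained(evil, root)          # is_relative_to is not
```

Since `safe_name` already goes through `os.path.basename`, the current code is likely safe in practice; the stricter check just removes the edge case outright.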


# ── GET /api/v1/status ────────────────────────────────────────────────────────


@router.get("/status")
async def api_v1_status():
    """Detailed system and model status."""
    ollama_status = await _check_ollama()
    uptime = (datetime.now(UTC) - APP_START_TIME).total_seconds()

    return {
        "timmy": "online" if ollama_status.status == "healthy" else "offline",
        "model": settings.ollama_model,
        "ollama": "running" if ollama_status.status == "healthy" else "stopped",
        "uptime": f"{int(uptime // 3600)}h {int((uptime % 3600) // 60)}m",
        "version": "2.0.0-v1-api",
    }
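The uptime string in the status payload comes from simple floor-division arithmetic on the elapsed seconds. Pulled out into a standalone helper (the name is hypothetical), it behaves like this:

```python
def format_uptime(seconds: float) -> str:
    """Same arithmetic as the status endpoint's uptime field."""
    return f"{int(seconds // 3600)}h {int((seconds % 3600) // 60)}m"


print(format_uptime(5400))   # 1h 30m
print(format_uptime(90061))  # 25h 1m  (hours are not wrapped into days)
```

Note the hour count keeps growing past 24, which is fine for a status readout but worth knowing if the client ever formats it further.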
@@ -65,7 +65,7 @@ def _check_ollama_sync() -> DependencyStatus:
    try:
        import urllib.request

        url = settings.ollama_url.replace("localhost", "127.0.0.1")
        url = settings.normalized_ollama_url
        req = urllib.request.Request(
            f"{url}/api/tags",
            method="GET",

@@ -16,52 +16,11 @@ router = APIRouter(tags=["system"])

@router.get("/lightning/ledger", response_class=HTMLResponse)
async def lightning_ledger(request: Request):
    """Ledger and balance page."""
    # Mock data for now, as this seems to be a UI-first feature
    balance = {
        "available_sats": 1337,
        "incoming_total_sats": 2000,
        "outgoing_total_sats": 663,
        "fees_paid_sats": 5,
        "net_sats": 1337,
        "pending_incoming_sats": 0,
        "pending_outgoing_sats": 0,
    }
    """Ledger and balance page backed by the in-memory Lightning ledger."""
    from lightning.ledger import get_balance, get_transactions

    # Mock transactions
    from collections import namedtuple
    from enum import Enum

    class TxType(Enum):
        incoming = "incoming"
        outgoing = "outgoing"

    class TxStatus(Enum):
        completed = "completed"
        pending = "pending"

    Tx = namedtuple(
        "Tx", ["tx_type", "status", "amount_sats", "payment_hash", "memo", "created_at"]
    )

    transactions = [
        Tx(
            TxType.outgoing,
            TxStatus.completed,
            50,
            "hash1",
            "Model inference",
            "2026-03-04 10:00:00",
        ),
        Tx(
            TxType.incoming,
            TxStatus.completed,
            1000,
            "hash2",
            "Manual deposit",
            "2026-03-03 15:00:00",
        ),
    ]
    balance = get_balance()
    transactions = get_transactions()

    return templates.TemplateResponse(
        request,
@@ -70,7 +29,7 @@ async def lightning_ledger(request: Request):
            "balance": balance,
            "transactions": transactions,
            "tx_types": ["incoming", "outgoing"],
            "tx_statuses": ["completed", "pending"],
            "tx_statuses": ["pending", "settled", "failed", "expired"],
            "filter_type": None,
            "filter_status": None,
            "stats": {},
@@ -166,7 +125,7 @@ async def api_briefing_status():
        if cached:
            last_generated = cached.generated_at.isoformat()
    except Exception:
        pass
        logger.debug("Failed to read briefing cache")

    return JSONResponse(
        {
@@ -190,6 +149,7 @@ async def api_memory_status():
        stats = get_memory_stats()
        indexed_files = stats.get("total_entries", 0)
    except Exception:
        logger.debug("Failed to get memory stats")
        indexed_files = 0

    return JSONResponse(
@@ -215,7 +175,7 @@ async def api_swarm_status():
        ).fetchone()
        pending_tasks = row["cnt"] if row else 0
    except Exception:
        pass
        logger.debug("Failed to count pending tasks")

    return JSONResponse(
        {

@@ -5,7 +5,7 @@ import sqlite3
import uuid
from collections.abc import Generator
from contextlib import closing, contextmanager
from datetime import datetime
from datetime import UTC, datetime
from pathlib import Path

from fastapi import APIRouter, Form, HTTPException, Request
@@ -219,7 +219,7 @@ async def create_task_form(
        raise HTTPException(status_code=400, detail="Task title cannot be empty")

    task_id = str(uuid.uuid4())
    now = datetime.utcnow().isoformat()
    now = datetime.now(UTC).isoformat()
    priority = priority if priority in VALID_PRIORITIES else "normal"

    with _get_db() as db:
@@ -287,7 +287,7 @@ async def modify_task(
async def _set_status(request: Request, task_id: str, new_status: str):
    """Helper to update status and return refreshed task card."""
    completed_at = (
        datetime.utcnow().isoformat() if new_status in ("completed", "vetoed", "failed") else None
        datetime.now(UTC).isoformat() if new_status in ("completed", "vetoed", "failed") else None
    )
    with _get_db() as db:
        db.execute(
@@ -316,7 +316,7 @@ async def api_create_task(request: Request):
        raise HTTPException(422, "title is required")

    task_id = str(uuid.uuid4())
    now = datetime.utcnow().isoformat()
    now = datetime.now(UTC).isoformat()
    priority = body.get("priority", "normal")
    if priority not in VALID_PRIORITIES:
        priority = "normal"
@@ -358,7 +358,7 @@ async def api_update_status(task_id: str, request: Request):
        raise HTTPException(422, f"Invalid status. Must be one of: {VALID_STATUSES}")

    completed_at = (
        datetime.utcnow().isoformat() if new_status in ("completed", "vetoed", "failed") else None
        datetime.now(UTC).isoformat() if new_status in ("completed", "vetoed", "failed") else None
    )
    with _get_db() as db:
        db.execute(

108  src/dashboard/routes/tower.py  (new file)
@@ -0,0 +1,108 @@
"""Tower dashboard — real-time Spark visualization via WebSocket.

GET /tower    — HTML Tower dashboard (Thinking / Predicting / Advising)
WS  /tower/ws — WebSocket stream of Spark engine state updates
"""

import asyncio
import json
import logging

from fastapi import APIRouter, Request, WebSocket
from fastapi.responses import HTMLResponse

from dashboard.templating import templates
from spark.engine import spark_engine

logger = logging.getLogger(__name__)

router = APIRouter(prefix="/tower", tags=["tower"])

_PUSH_INTERVAL = 5  # seconds between state broadcasts


def _spark_snapshot() -> dict:
    """Build a JSON-serialisable snapshot of Spark state."""
    status = spark_engine.status()

    timeline = spark_engine.get_timeline(limit=10)
    events = []
    for ev in timeline:
        entry = {
            "event_type": ev.event_type,
            "description": ev.description,
            "importance": ev.importance,
            "created_at": ev.created_at,
        }
        if ev.agent_id:
            entry["agent_id"] = ev.agent_id[:8]
        if ev.task_id:
            entry["task_id"] = ev.task_id[:8]
        try:
            entry["data"] = json.loads(ev.data)
        except (json.JSONDecodeError, TypeError):
            entry["data"] = {}
        events.append(entry)

    predictions = spark_engine.get_predictions(limit=5)
    preds = []
    for p in predictions:
        pred = {
            "task_id": p.task_id[:8] if p.task_id else "?",
            "accuracy": p.accuracy,
            "evaluated": p.evaluated_at is not None,
            "created_at": p.created_at,
        }
        try:
            pred["predicted"] = json.loads(p.predicted_value)
        except (json.JSONDecodeError, TypeError):
            pred["predicted"] = {}
        preds.append(pred)

    advisories = spark_engine.get_advisories()
    advs = [
        {
            "category": a.category,
            "priority": a.priority,
            "title": a.title,
            "detail": a.detail,
            "suggested_action": a.suggested_action,
        }
        for a in advisories
    ]

    return {
        "type": "spark_state",
        "status": status,
        "events": events,
        "predictions": preds,
        "advisories": advs,
    }


@router.get("", response_class=HTMLResponse)
async def tower_ui(request: Request):
    """Render the Tower dashboard page."""
    snapshot = _spark_snapshot()
    return templates.TemplateResponse(
        request,
        "tower.html",
        {"snapshot": snapshot},
    )


@router.websocket("/ws")
async def tower_ws(websocket: WebSocket) -> None:
    """Stream Spark state snapshots to the Tower dashboard."""
    await websocket.accept()
    logger.info("Tower WS connected")

    try:
        # Send initial snapshot
        await websocket.send_text(json.dumps(_spark_snapshot()))

        while True:
            await asyncio.sleep(_PUSH_INTERVAL)
            await websocket.send_text(json.dumps(_spark_snapshot()))
    except Exception:
        logger.debug("Tower WS disconnected")
@@ -5,7 +5,7 @@ import sqlite3
|
||||
import uuid
|
||||
from collections.abc import Generator
|
||||
from contextlib import closing, contextmanager
|
||||
from datetime import datetime
|
||||
from datetime import UTC, datetime
|
||||
from pathlib import Path
|
||||
|
||||
from fastapi import APIRouter, Form, HTTPException, Request
|
||||
@@ -144,7 +144,7 @@ async def submit_work_order(
|
||||
related_files: str = Form(""),
|
||||
):
|
||||
wo_id = str(uuid.uuid4())
|
||||
now = datetime.utcnow().isoformat()
|
||||
now = datetime.now(UTC).isoformat()
|
||||
priority = priority if priority in PRIORITIES else "medium"
|
||||
category = category if category in CATEGORIES else "suggestion"
|
||||
|
||||
@@ -211,7 +211,7 @@ async def active_partial(request: Request):
|
||||
|
||||
async def _update_status(request: Request, wo_id: str, new_status: str, **extra):
|
||||
completed_at = (
|
||||
datetime.utcnow().isoformat() if new_status in ("completed", "rejected") else None
|
||||
datetime.now(UTC).isoformat() if new_status in ("completed", "rejected") else None
|
||||
)
|
||||
with _get_db() as db:
|
||||
sets = ["status=?", "completed_at=COALESCE(?, completed_at)"]
|
||||
|
||||
385
src/dashboard/routes/world.py
Normal file
385
src/dashboard/routes/world.py
Normal file
@@ -0,0 +1,385 @@
"""Workshop world state API and WebSocket relay.

Serves Timmy's current presence state to the Workshop 3D renderer.
The primary consumer is the browser on first load — before any
WebSocket events arrive, the client needs a full state snapshot.

The ``/ws/world`` endpoint streams ``timmy_state`` messages whenever
the heartbeat detects a state change. It also accepts ``visitor_message``
frames from the 3D client and responds with ``timmy_speech`` barks.

Source of truth: ``~/.timmy/presence.json`` written by
:class:`~timmy.workshop_state.WorkshopHeartbeat`.
Falls back to a live ``get_state_dict()`` call if the file is stale
or missing.
"""

import asyncio
import json
import logging
import re
import time
from collections import deque
from datetime import UTC, datetime

from fastapi import APIRouter, WebSocket
from fastapi.responses import JSONResponse

from timmy.workshop_state import PRESENCE_FILE

logger = logging.getLogger(__name__)

router = APIRouter(prefix="/api/world", tags=["world"])

# ---------------------------------------------------------------------------
# WebSocket relay for live state changes
# ---------------------------------------------------------------------------

_ws_clients: list[WebSocket] = []

_STALE_THRESHOLD = 90  # seconds — file older than this triggers live rebuild

# Recent conversation buffer — kept in memory for the Workshop overlay.
# Stores the last _MAX_EXCHANGES (visitor_text, timmy_text) pairs.
_MAX_EXCHANGES = 3
_conversation: deque[dict] = deque(maxlen=_MAX_EXCHANGES)

_WORKSHOP_SESSION_ID = "workshop"

_HEARTBEAT_INTERVAL = 15  # seconds — ping to detect dead iPad/Safari connections

# ---------------------------------------------------------------------------
# Conversation grounding — commitment tracking (rescued from PR #408)
# ---------------------------------------------------------------------------

# Patterns that indicate Timmy is committing to an action.
_COMMITMENT_PATTERNS: list[re.Pattern[str]] = [
    re.compile(r"I'll (.+?)(?:\.|!|\?|$)", re.IGNORECASE),
    re.compile(r"I will (.+?)(?:\.|!|\?|$)", re.IGNORECASE),
    re.compile(r"[Ll]et me (.+?)(?:\.|!|\?|$)", re.IGNORECASE),
]

# After this many messages without follow-up, surface open commitments.
_REMIND_AFTER = 5
_MAX_COMMITMENTS = 10

# In-memory list of open commitments.
# Each entry: {"text": str, "created_at": float, "messages_since": int}
_commitments: list[dict] = []

def _extract_commitments(text: str) -> list[str]:
    """Pull commitment phrases from Timmy's reply text."""
    found: list[str] = []
    for pattern in _COMMITMENT_PATTERNS:
        for match in pattern.finditer(text):
            phrase = match.group(1).strip()
            if len(phrase) > 5:  # skip trivially short matches
                found.append(phrase[:120])
    return found


def _record_commitments(reply: str) -> None:
    """Scan a Timmy reply for commitments and store them."""
    for phrase in _extract_commitments(reply):
        # Avoid near-duplicate commitments
        if any(c["text"] == phrase for c in _commitments):
            continue
        _commitments.append({"text": phrase, "created_at": time.time(), "messages_since": 0})
        if len(_commitments) > _MAX_COMMITMENTS:
            _commitments.pop(0)


def _tick_commitments() -> None:
    """Increment messages_since for every open commitment."""
    for c in _commitments:
        c["messages_since"] += 1


def _build_commitment_context() -> str:
    """Return a grounding note if any commitments are overdue for follow-up."""
    overdue = [c for c in _commitments if c["messages_since"] >= _REMIND_AFTER]
    if not overdue:
        return ""
    lines = [f"- {c['text']}" for c in overdue]
    return (
        "[Open commitments Timmy made earlier — "
        "weave awareness naturally, don't list robotically]\n" + "\n".join(lines)
    )


def close_commitment(index: int) -> bool:
    """Remove a commitment by index. Returns True if removed."""
    if 0 <= index < len(_commitments):
        _commitments.pop(index)
        return True
    return False


def get_commitments() -> list[dict]:
    """Return a copy of open commitments (for testing / API)."""
    return list(_commitments)


def reset_commitments() -> None:
    """Clear all commitments (for testing / session reset)."""
    _commitments.clear()

# Conversation grounding — anchor to opening topic so Timmy doesn't drift.
_ground_topic: str | None = None
_ground_set_at: float = 0.0
_GROUND_TTL = 300  # seconds of inactivity before the anchor expires


def _read_presence_file() -> dict | None:
    """Read presence.json if it exists and is fresh enough."""
    try:
        if not PRESENCE_FILE.exists():
            return None
        age = time.time() - PRESENCE_FILE.stat().st_mtime
        if age > _STALE_THRESHOLD:
            logger.debug("presence.json is stale (%.0fs old)", age)
            return None
        return json.loads(PRESENCE_FILE.read_text())
    except (OSError, json.JSONDecodeError) as exc:
        logger.warning("Failed to read presence.json: %s", exc)
        return None


def _build_world_state(presence: dict) -> dict:
    """Transform presence dict into the world/state API response."""
    return {
        "timmyState": {
            "mood": presence.get("mood", "calm"),
            "activity": presence.get("current_focus", "idle"),
            "energy": presence.get("energy", 0.5),
            "confidence": presence.get("confidence", 0.7),
        },
        "familiar": presence.get("familiar"),
        "activeThreads": presence.get("active_threads", []),
        "recentEvents": presence.get("recent_events", []),
        "concerns": presence.get("concerns", []),
        "visitorPresent": False,
        "updatedAt": presence.get("liveness", datetime.now(UTC).strftime("%Y-%m-%dT%H:%M:%SZ")),
        "version": presence.get("version", 1),
    }


def _get_current_state() -> dict:
    """Build the current world-state dict from best available source."""
    presence = _read_presence_file()

    if presence is None:
        try:
            from timmy.workshop_state import get_state_dict

            presence = get_state_dict()
        except Exception as exc:
            logger.warning("Live state build failed: %s", exc)
            presence = {
                "version": 1,
                "liveness": datetime.now(UTC).strftime("%Y-%m-%dT%H:%M:%SZ"),
                "mood": "calm",
                "current_focus": "",
                "active_threads": [],
                "recent_events": [],
                "concerns": [],
            }

    return _build_world_state(presence)

@router.get("/state")
async def get_world_state() -> JSONResponse:
    """Return Timmy's current world state for Workshop bootstrap.

    Reads from ``~/.timmy/presence.json`` if fresh, otherwise
    rebuilds live from cognitive state.
    """
    return JSONResponse(
        content=_get_current_state(),
        headers={"Cache-Control": "no-cache, no-store"},
    )


# ---------------------------------------------------------------------------
# WebSocket endpoint — streams timmy_state changes to Workshop clients
# ---------------------------------------------------------------------------


async def _heartbeat(websocket: WebSocket) -> None:
    """Send periodic pings to detect dead connections (iPad resilience).

    Safari suspends background tabs, killing the TCP socket silently.
    A 15-second ping ensures we notice within one interval.

    Rescued from stale PR #399.
    """
    try:
        while True:
            await asyncio.sleep(_HEARTBEAT_INTERVAL)
            await websocket.send_text(json.dumps({"type": "ping"}))
    except Exception:
        logger.debug("Heartbeat stopped — connection gone")


@router.websocket("/ws")
async def world_ws(websocket: WebSocket) -> None:
    """Accept a Workshop client and keep it alive for state broadcasts.

    Sends a full ``world_state`` snapshot immediately on connect so the
    client never starts from a blank slate. Incoming frames are parsed
    as JSON — ``visitor_message`` triggers a bark response. A background
    heartbeat ping runs every 15 s to detect dead connections early.
    """
    await websocket.accept()
    _ws_clients.append(websocket)
    logger.info("World WS connected — %d clients", len(_ws_clients))

    # Send full world-state snapshot so client bootstraps instantly
    try:
        snapshot = _get_current_state()
        await websocket.send_text(json.dumps({"type": "world_state", **snapshot}))
    except Exception as exc:
        logger.warning("Failed to send WS snapshot: %s", exc)

    ping_task = asyncio.create_task(_heartbeat(websocket))
    try:
        while True:
            raw = await websocket.receive_text()
            await _handle_client_message(raw)
    except Exception:
        logger.debug("WebSocket receive loop ended")
    finally:
        ping_task.cancel()
        if websocket in _ws_clients:
            _ws_clients.remove(websocket)
        logger.info("World WS disconnected — %d clients", len(_ws_clients))

async def _broadcast(message: str) -> None:
    """Send *message* to every connected Workshop client, pruning dead ones."""
    dead: list[WebSocket] = []
    for ws in _ws_clients:
        try:
            await ws.send_text(message)
        except Exception:
            logger.debug("Pruning dead WebSocket client")
            dead.append(ws)
    for ws in dead:
        if ws in _ws_clients:
            _ws_clients.remove(ws)


async def broadcast_world_state(presence: dict) -> None:
    """Broadcast a ``timmy_state`` message to all connected Workshop clients.

    Called by :class:`~timmy.workshop_state.WorkshopHeartbeat` via its
    ``on_change`` callback.
    """
    state = _build_world_state(presence)
    await _broadcast(json.dumps({"type": "timmy_state", **state["timmyState"]}))


# ---------------------------------------------------------------------------
# Visitor chat — bark engine
# ---------------------------------------------------------------------------


async def _handle_client_message(raw: str) -> None:
    """Dispatch an incoming WebSocket frame from the Workshop client."""
    try:
        data = json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return  # ignore non-JSON keep-alive pings

    if data.get("type") == "visitor_message":
        text = (data.get("text") or "").strip()
        if text:
            task = asyncio.create_task(_bark_and_broadcast(text))
            task.add_done_callback(_log_bark_failure)


def _log_bark_failure(task: asyncio.Task) -> None:
    """Log unhandled exceptions from fire-and-forget bark tasks."""
    if task.cancelled():
        return
    exc = task.exception()
    if exc is not None:
        logger.error("Bark task failed: %s", exc)


def reset_conversation_ground() -> None:
    """Clear the conversation grounding anchor (e.g. after inactivity)."""
    global _ground_topic, _ground_set_at
    _ground_topic = None
    _ground_set_at = 0.0

def _refresh_ground(visitor_text: str) -> None:
    """Set or refresh the conversation grounding anchor.

    The first visitor message in a session (or after the TTL expires)
    becomes the anchor topic. Subsequent messages are grounded against it.
    """
    global _ground_topic, _ground_set_at
    now = time.time()
    if _ground_topic is None or (now - _ground_set_at) > _GROUND_TTL:
        _ground_topic = visitor_text[:120]
        logger.debug("Ground topic set: %s", _ground_topic)
    _ground_set_at = now


async def _bark_and_broadcast(visitor_text: str) -> None:
    """Generate a bark response and broadcast it to all Workshop clients."""
    await _broadcast(json.dumps({"type": "timmy_thinking"}))

    # Notify Pip that a visitor spoke
    try:
        from timmy.familiar import pip_familiar

        pip_familiar.on_event("visitor_spoke")
    except Exception:
        logger.debug("Pip familiar notification failed (optional)")

    _refresh_ground(visitor_text)
    _tick_commitments()
    reply = await _generate_bark(visitor_text)
    _record_commitments(reply)

    _conversation.append({"visitor": visitor_text, "timmy": reply})

    await _broadcast(
        json.dumps(
            {
                "type": "timmy_speech",
                "text": reply,
                "recentExchanges": list(_conversation),
            }
        )
    )


async def _generate_bark(visitor_text: str) -> str:
    """Generate a short in-character bark response.

    Uses the existing Timmy session with a dedicated workshop session ID.
    When a grounding anchor exists, the opening topic is prepended so the
    model stays on-topic across long sessions.
    Gracefully degrades to a canned response if inference fails.
    """
    try:
        from timmy import session as _session

        grounded = visitor_text
        commitment_ctx = _build_commitment_context()
        if commitment_ctx:
            grounded = f"{commitment_ctx}\n{grounded}"
        if _ground_topic and visitor_text != _ground_topic:
            grounded = f"[Workshop conversation topic: {_ground_topic}]\n{grounded}"
        response = await _session.chat(grounded, session_id=_WORKSHOP_SESSION_ID)
        return response
    except Exception as exc:
        logger.warning("Bark generation failed: %s", exc)
        return "Hmm, my thoughts are a bit tangled right now."
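The commitment tracker in this file is plain regex matching over Timmy's replies. A standalone sketch of the extraction step, with the three patterns copied from the diff (module state and trimming omitted):

```python
import re

# Same committal phrasings the module looks for: "I'll …", "I will …", "let me …".
COMMITMENT_PATTERNS = [
    re.compile(r"I'll (.+?)(?:\.|!|\?|$)", re.IGNORECASE),
    re.compile(r"I will (.+?)(?:\.|!|\?|$)", re.IGNORECASE),
    re.compile(r"[Ll]et me (.+?)(?:\.|!|\?|$)", re.IGNORECASE),
]

def extract_commitments(text: str) -> list[str]:
    """Return committed phrases longer than five characters, capped at 120 chars."""
    found = []
    for pattern in COMMITMENT_PATTERNS:
        for match in pattern.finditer(text):
            phrase = match.group(1).strip()
            if len(phrase) > 5:
                found.append(phrase[:120])
    return found

print(extract_commitments("Sure! I'll check the presence file later. Let me also ping Pip."))
# → ['check the presence file later', 'also ping Pip']
```

The lazy `(.+?)` plus the sentence-terminator group keeps each captured phrase to a single clause, which is why the reminders stay short.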
@@ -138,6 +138,47 @@
    </div>
</div>

<!-- Spark Intelligence -->
{% from "macros.html" import panel %}
<div class="mc-card-spaced">
  <div class="card">
    <div class="card-header">
      <h2 class="card-title">Spark Intelligence</h2>
      <div>
        <span class="badge" id="spark-status-badge">Loading...</span>
      </div>
    </div>
    <div class="grid grid-3">
      <div class="stat">
        <div class="stat-value" id="spark-events">-</div>
        <div class="stat-label">Events</div>
      </div>
      <div class="stat">
        <div class="stat-value" id="spark-memories">-</div>
        <div class="stat-label">Memories</div>
      </div>
      <div class="stat">
        <div class="stat-value" id="spark-predictions">-</div>
        <div class="stat-label">Predictions</div>
      </div>
    </div>
  </div>
  <div class="grid grid-2 mc-section-gap">
    {% call panel("SPARK TIMELINE", id="spark-timeline-panel",
                  hx_get="/spark/timeline",
                  hx_trigger="load, every 10s") %}
      <div class="spark-timeline-scroll">
        <p class="chat-history-placeholder">Loading timeline...</p>
      </div>
    {% endcall %}
    {% call panel("SPARK INSIGHTS", id="spark-insights-panel",
                  hx_get="/spark/insights",
                  hx_trigger="load, every 30s") %}
      <p class="chat-history-placeholder">Loading insights...</p>
    {% endcall %}
  </div>
</div>

<!-- Chat History -->
<div class="card mc-card-spaced">
  <div class="card-header">

@@ -428,7 +469,34 @@ async function loadGrokStats() {
  }
}

// Load Spark status
async function loadSparkStatus() {
  try {
    var response = await fetch('/spark');
    var data = await response.json();
    var st = data.status || {};

    document.getElementById('spark-events').textContent = st.total_events || 0;
    document.getElementById('spark-memories').textContent = st.total_memories || 0;
    document.getElementById('spark-predictions').textContent = st.total_predictions || 0;

    var badge = document.getElementById('spark-status-badge');
    if (st.total_events > 0) {
      badge.textContent = 'Active';
      badge.className = 'badge badge-success';
    } else {
      badge.textContent = 'Idle';
      badge.className = 'badge badge-warning';
    }
  } catch (error) {
    var badge = document.getElementById('spark-status-badge');
    badge.textContent = 'Offline';
    badge.className = 'badge badge-danger';
  }
}

// Initial load
loadSparkStatus();
loadSovereignty();
loadHealth();
loadSwarmStats();

@@ -442,5 +510,6 @@ setInterval(loadHealth, 10000);
setInterval(loadSwarmStats, 5000);
setInterval(updateHeartbeat, 5000);
setInterval(loadGrokStats, 10000);
setInterval(loadSparkStatus, 15000);
</script>
{% endblock %}
src/dashboard/templates/tower.html (new file, 180 lines)
@@ -0,0 +1,180 @@
{% extends "base.html" %}

{% block title %}Timmy Time — Tower{% endblock %}

{% block extra_styles %}{% endblock %}

{% block content %}
<div class="container-fluid tower-container py-3">

  <div class="tower-header">
    <div class="tower-title">TOWER</div>
    <div class="tower-subtitle">
      Real-time Spark visualization —
      <span id="tower-conn" class="tower-conn-badge tower-conn-connecting">CONNECTING</span>
    </div>
  </div>

  <div class="row g-3">

    <!-- Left: THINKING (events) -->
    <div class="col-12 col-lg-4 d-flex flex-column gap-3">
      <div class="card mc-panel tower-phase-card">
        <div class="card-header mc-panel-header tower-phase-thinking">// THINKING</div>
        <div class="card-body p-3 tower-scroll" id="tower-events">
          <div class="tower-empty">Waiting for Spark data…</div>
        </div>
      </div>
    </div>

    <!-- Middle: PREDICTING (EIDOS) -->
    <div class="col-12 col-lg-4 d-flex flex-column gap-3">
      <div class="card mc-panel tower-phase-card">
        <div class="card-header mc-panel-header tower-phase-predicting">// PREDICTING</div>
        <div class="card-body p-3" id="tower-predictions">
          <div class="tower-empty">Waiting for Spark data…</div>
        </div>
      </div>
      <div class="card mc-panel">
        <div class="card-header mc-panel-header">// EIDOS STATS</div>
        <div class="card-body p-3">
          <div class="tower-stat-grid" id="tower-stats">
            <div class="tower-stat"><span class="tower-stat-label">EVENTS</span><span class="tower-stat-value" id="ts-events">0</span></div>
            <div class="tower-stat"><span class="tower-stat-label">MEMORIES</span><span class="tower-stat-value" id="ts-memories">0</span></div>
            <div class="tower-stat"><span class="tower-stat-label">PREDICTIONS</span><span class="tower-stat-value" id="ts-preds">0</span></div>
            <div class="tower-stat"><span class="tower-stat-label">ACCURACY</span><span class="tower-stat-value" id="ts-accuracy">—</span></div>
          </div>
        </div>
      </div>
    </div>

    <!-- Right: ADVISING -->
    <div class="col-12 col-lg-4 d-flex flex-column gap-3">
      <div class="card mc-panel tower-phase-card">
        <div class="card-header mc-panel-header tower-phase-advising">// ADVISING</div>
        <div class="card-body p-3 tower-scroll" id="tower-advisories">
          <div class="tower-empty">Waiting for Spark data…</div>
        </div>
      </div>
    </div>

  </div>
</div>

<script>
(function() {
  var ws = null;
  var badge = document.getElementById('tower-conn');

  function setConn(state) {
    badge.textContent = state.toUpperCase();
    badge.className = 'tower-conn-badge tower-conn-' + state;
  }

  function esc(s) { var d = document.createElement('div'); d.textContent = s; return d.innerHTML; }

  function renderEvents(events) {
    var el = document.getElementById('tower-events');
    if (!events || !events.length) { el.innerHTML = '<div class="tower-empty">No events captured yet.</div>'; return; }
    var html = '';
    for (var i = 0; i < events.length; i++) {
      var ev = events[i];
      var dots = ev.importance >= 0.8 ? '\u25cf\u25cf\u25cf' : ev.importance >= 0.5 ? '\u25cf\u25cf' : '\u25cf';
      html += '<div class="tower-event tower-etype-' + esc(ev.event_type) + '">'
        + '<div class="tower-ev-head">'
        + '<span class="tower-ev-badge">' + esc(ev.event_type.replace(/_/g, ' ').toUpperCase()) + '</span>'
        + '<span class="tower-ev-dots">' + dots + '</span>'
        + '</div>'
        + '<div class="tower-ev-desc">' + esc(ev.description) + '</div>'
        + '<div class="tower-ev-time">' + esc((ev.created_at || '').slice(0, 19)) + '</div>'
        + '</div>';
    }
    el.innerHTML = html;
  }

  function renderPredictions(preds) {
    var el = document.getElementById('tower-predictions');
    if (!preds || !preds.length) { el.innerHTML = '<div class="tower-empty">No predictions yet.</div>'; return; }
    var html = '';
    for (var i = 0; i < preds.length; i++) {
      var p = preds[i];
      var cls = p.evaluated ? 'tower-pred-done' : 'tower-pred-pending';
      var accTxt = p.accuracy != null ? Math.round(p.accuracy * 100) + '%' : 'PENDING';
      var accCls = p.accuracy != null ? (p.accuracy >= 0.7 ? 'text-success' : p.accuracy < 0.4 ? 'text-danger' : 'text-warning') : '';
      html += '<div class="tower-pred ' + cls + '">'
        + '<div class="tower-pred-head">'
        + '<span class="tower-pred-task">' + esc(p.task_id) + '</span>'
        + '<span class="tower-pred-acc ' + accCls + '">' + accTxt + '</span>'
        + '</div>';
      if (p.predicted) {
        var pr = p.predicted;
        html += '<div class="tower-pred-detail">';
        if (pr.likely_winner) html += '<span>Winner: ' + esc(pr.likely_winner.slice(0, 8)) + '</span> ';
        if (pr.success_probability != null) html += '<span>Success: ' + Math.round(pr.success_probability * 100) + '%</span> ';
        html += '</div>';
      }
      html += '<div class="tower-ev-time">' + esc((p.created_at || '').slice(0, 19)) + '</div>'
        + '</div>';
    }
    el.innerHTML = html;
  }

  function renderAdvisories(advs) {
    var el = document.getElementById('tower-advisories');
    if (!advs || !advs.length) { el.innerHTML = '<div class="tower-empty">No advisories yet.</div>'; return; }
    var html = '';
    for (var i = 0; i < advs.length; i++) {
      var a = advs[i];
      var prio = a.priority >= 0.7 ? 'high' : a.priority >= 0.4 ? 'medium' : 'low';
      html += '<div class="tower-advisory tower-adv-' + prio + '">'
        + '<div class="tower-adv-head">'
        + '<span class="tower-adv-cat">' + esc(a.category.replace(/_/g, ' ').toUpperCase()) + '</span>'
        + '<span class="tower-adv-prio">' + Math.round(a.priority * 100) + '%</span>'
        + '</div>'
        + '<div class="tower-adv-title">' + esc(a.title) + '</div>'
        + '<div class="tower-adv-detail">' + esc(a.detail) + '</div>'
        + '<div class="tower-adv-action">' + esc(a.suggested_action) + '</div>'
        + '</div>';
    }
    el.innerHTML = html;
  }

  function renderStats(status) {
    if (!status) return;
    document.getElementById('ts-events').textContent = status.events_captured || 0;
    document.getElementById('ts-memories').textContent = status.memories_stored || 0;
    var p = status.predictions || {};
    document.getElementById('ts-preds').textContent = p.total_predictions || 0;
    var acc = p.avg_accuracy;
    var accEl = document.getElementById('ts-accuracy');
    if (acc != null) {
      accEl.textContent = Math.round(acc * 100) + '%';
      accEl.className = 'tower-stat-value ' + (acc >= 0.7 ? 'text-success' : acc < 0.4 ? 'text-danger' : 'text-warning');
    } else {
      accEl.textContent = '\u2014';
    }
  }

  function handleMsg(data) {
    if (data.type !== 'spark_state') return;
    renderEvents(data.events);
    renderPredictions(data.predictions);
    renderAdvisories(data.advisories);
    renderStats(data.status);
  }

  function connect() {
    var proto = location.protocol === 'https:' ? 'wss:' : 'ws:';
    ws = new WebSocket(proto + '//' + location.host + '/tower/ws');
    ws.onopen = function() { setConn('live'); };
    ws.onclose = function() { setConn('offline'); setTimeout(connect, 3000); };
    ws.onerror = function() { setConn('offline'); };
    ws.onmessage = function(e) {
      try { handleMsg(JSON.parse(e.data)); } catch(err) { console.error('Tower WS parse error', err); }
    };
  }

  connect();
})();
</script>
{% endblock %}
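The template's `esc()` helper routes every server-supplied field through a detached DOM node before it is concatenated into `innerHTML`, which neutralises any markup in event descriptions. The same precaution server-side is the stdlib `html.escape`; a minimal illustration (the function name here is hypothetical, not part of the diff):

```python
from html import escape

def render_event_badge(event_type: str) -> str:
    # Mirror of the template's esc(ev.event_type...) call: transform, then escape.
    label = escape(event_type.replace("_", " ").upper())
    return f'<span class="tower-ev-badge">{label}</span>'

print(render_event_badge("task_<b>done</b>"))
# → <span class="tower-ev-badge">TASK &lt;B&gt;DONE&lt;/B&gt;</span>
```

Escaping at the last moment, right where the string enters markup, is the pattern both the JS and this sketch follow.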
@@ -100,6 +100,172 @@ def _get_git_context() -> dict:
    return {"branch": "unknown", "commit": "unknown"}


def _extract_traceback_info(exc: Exception) -> tuple[str, str, int]:
    """Extract formatted traceback, affected file, and line number.

    Returns:
        Tuple of (traceback_string, affected_file, affected_line).
    """
    tb_str = "".join(traceback.format_exception(type(exc), exc, exc.__traceback__))

    tb_obj = exc.__traceback__
    affected_file = "unknown"
    affected_line = 0
    while tb_obj and tb_obj.tb_next:
        tb_obj = tb_obj.tb_next
    if tb_obj:
        affected_file = tb_obj.tb_frame.f_code.co_filename
        affected_line = tb_obj.tb_lineno

    return tb_str, affected_file, affected_line


def _log_error_event(
    exc: Exception,
    source: str,
    error_hash: str,
    affected_file: str,
    affected_line: int,
    git_ctx: dict,
) -> None:
    """Log the captured error to the event log."""
    try:
        from swarm.event_log import EventType, log_event

        log_event(
            EventType.ERROR_CAPTURED,
            source=source,
            data={
                "error_type": type(exc).__name__,
                "message": str(exc)[:500],
                "hash": error_hash,
                "file": affected_file,
                "line": affected_line,
                "git_branch": git_ctx.get("branch", ""),
                "git_commit": git_ctx.get("commit", ""),
            },
        )
    except Exception as log_exc:
        logger.debug("Failed to log error event: %s", log_exc)


def _build_report_description(
    exc: Exception,
    source: str,
    context: dict | None,
    error_hash: str,
    tb_str: str,
    affected_file: str,
    affected_line: int,
    git_ctx: dict,
) -> str:
    """Build the markdown description for a bug report task."""
    parts = [
        f"**Error:** {type(exc).__name__}: {str(exc)}",
        f"**Source:** {source}",
        f"**File:** {affected_file}:{affected_line}",
        f"**Git:** {git_ctx.get('branch', '?')} @ {git_ctx.get('commit', '?')}",
        f"**Time:** {datetime.now(UTC).isoformat()}",
        f"**Hash:** {error_hash}",
    ]

    if context:
        ctx_str = ", ".join(f"{k}={v}" for k, v in context.items())
        parts.append(f"**Context:** {ctx_str}")

    parts.append(f"\n**Stack Trace:**\n```\n{tb_str[:2000]}\n```")
    return "\n".join(parts)


def _log_bug_report_created(source: str, task_id: str, error_hash: str, title: str) -> None:
    """Log a BUG_REPORT_CREATED event (best-effort)."""
    try:
        from swarm.event_log import EventType, log_event

        log_event(
            EventType.BUG_REPORT_CREATED,
            source=source,
            task_id=task_id,
            data={
                "error_hash": error_hash,
                "title": title[:100],
            },
        )
    except Exception as exc:
        logger.warning("Bug report event log error: %s", exc)


def _create_bug_report(
    exc: Exception,
    source: str,
    context: dict | None,
    error_hash: str,
    tb_str: str,
    affected_file: str,
    affected_line: int,
    git_ctx: dict,
) -> str | None:
    """Create a bug report task and return the task ID (or None on failure)."""
    try:
        from swarm.task_queue.models import create_task

        title = f"[BUG] {type(exc).__name__}: {str(exc)[:80]}"
        description = _build_report_description(
            exc,
            source,
            context,
            error_hash,
            tb_str,
            affected_file,
            affected_line,
            git_ctx,
        )

        task = create_task(
            title=title,
            description=description,
            assigned_to="default",
            created_by="system",
            priority="normal",
            requires_approval=False,
            auto_approve=True,
            task_type="bug_report",
        )

        _log_bug_report_created(source, task.id, error_hash, title)
        return task.id

    except Exception as task_exc:
        logger.debug("Failed to create bug report task: %s", task_exc)
        return None


def _notify_bug_report(exc: Exception, source: str) -> None:
    """Send a push notification about the captured error."""
    try:
        from infrastructure.notifications.push import notifier

        notifier.notify(
            title="Bug Report Filed",
            message=f"{type(exc).__name__} in {source}: {str(exc)[:80]}",
            category="system",
        )
    except Exception as notify_exc:
        logger.warning("Bug report notification error: %s", notify_exc)


def _record_to_session(exc: Exception, source: str) -> None:
    """Record the error via the registered session callback."""
    if _error_recorder is not None:
        try:
            _error_recorder(
                error=f"{type(exc).__name__}: {str(exc)}",
                context=source,
            )
        except Exception as log_exc:
            logger.warning("Bug report session logging error: %s", log_exc)


def capture_error(
    exc: Exception,
    source: str = "unknown",
@@ -126,116 +292,23 @@ def capture_error(
|
||||
logger.debug("Duplicate error suppressed: %s (hash=%s)", exc, error_hash)
|
||||
return None
|
||||
|
||||
# Format the stack trace
|
||||
tb_str = "".join(traceback.format_exception(type(exc), exc, exc.__traceback__))
|
||||
|
||||
# Extract file/line from traceback
|
||||
tb_obj = exc.__traceback__
|
||||
affected_file = "unknown"
        affected_line = 0
        while tb_obj and tb_obj.tb_next:
            tb_obj = tb_obj.tb_next
        if tb_obj:
            affected_file = tb_obj.tb_frame.f_code.co_filename
            affected_line = tb_obj.tb_lineno

    tb_str, affected_file, affected_line = _extract_traceback_info(exc)
    git_ctx = _get_git_context()

    # 1. Log to event_log
    try:
        from swarm.event_log import EventType, log_event
        _log_error_event(exc, source, error_hash, affected_file, affected_line, git_ctx)

        log_event(
            EventType.ERROR_CAPTURED,
            source=source,
            data={
                "error_type": type(exc).__name__,
                "message": str(exc)[:500],
                "hash": error_hash,
                "file": affected_file,
                "line": affected_line,
                "git_branch": git_ctx.get("branch", ""),
                "git_commit": git_ctx.get("commit", ""),
            },
        )
    except Exception as log_exc:
        logger.debug("Failed to log error event: %s", log_exc)
    task_id = _create_bug_report(
        exc,
        source,
        context,
        error_hash,
        tb_str,
        affected_file,
        affected_line,
        git_ctx,
    )

    # 2. Create bug report task
    task_id = None
    try:
        from swarm.task_queue.models import create_task

        title = f"[BUG] {type(exc).__name__}: {str(exc)[:80]}"

        description_parts = [
            f"**Error:** {type(exc).__name__}: {str(exc)}",
            f"**Source:** {source}",
            f"**File:** {affected_file}:{affected_line}",
            f"**Git:** {git_ctx.get('branch', '?')} @ {git_ctx.get('commit', '?')}",
            f"**Time:** {datetime.now(UTC).isoformat()}",
            f"**Hash:** {error_hash}",
        ]

        if context:
            ctx_str = ", ".join(f"{k}={v}" for k, v in context.items())
            description_parts.append(f"**Context:** {ctx_str}")

        description_parts.append(f"\n**Stack Trace:**\n```\n{tb_str[:2000]}\n```")

        task = create_task(
            title=title,
            description="\n".join(description_parts),
            assigned_to="default",
            created_by="system",
            priority="normal",
            requires_approval=False,
            auto_approve=True,
            task_type="bug_report",
        )
        task_id = task.id

        # Log the creation event
        try:
            from swarm.event_log import EventType, log_event

            log_event(
                EventType.BUG_REPORT_CREATED,
                source=source,
                task_id=task_id,
                data={
                    "error_hash": error_hash,
                    "title": title[:100],
                },
            )
        except Exception as exc:
            logger.warning("Bug report screenshot error: %s", exc)
            pass

    except Exception as task_exc:
        logger.debug("Failed to create bug report task: %s", task_exc)

    # 3. Send notification
    try:
        from infrastructure.notifications.push import notifier

        notifier.notify(
            title="Bug Report Filed",
            message=f"{type(exc).__name__} in {source}: {str(exc)[:80]}",
            category="system",
        )
    except Exception as exc:
        logger.warning("Bug report notification error: %s", exc)
        pass

    # 4. Record in session logger (via registered callback)
    if _error_recorder is not None:
        try:
            _error_recorder(
                error=f"{type(exc).__name__}: {str(exc)}",
                context=source,
            )
        except Exception as log_exc:
            logger.warning("Bug report session logging error: %s", log_exc)
    _notify_bug_report(exc, source)
    _record_to_session(exc, source)

    return task_id
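The traceback walk at the top of this hunk (follow `tb_next` to the innermost frame, then read its file and line) is what `_extract_traceback_info` factors out. A standalone sketch of that technique, with an illustrative function name rather than the repo's helper:

```python
import traceback


def extract_traceback_info(exc: BaseException) -> tuple[str, str, int]:
    """Return (formatted traceback, innermost file, innermost line) for exc."""
    tb_str = "".join(traceback.format_exception(type(exc), exc, exc.__traceback__))
    affected_file, affected_line = "", 0
    tb = exc.__traceback__
    while tb and tb.tb_next:  # walk to the innermost frame
        tb = tb.tb_next
    if tb:
        affected_file = tb.tb_frame.f_code.co_filename
        affected_line = tb.tb_lineno
    return tb_str, affected_file, affected_line


try:
    1 / 0
except ZeroDivisionError as err:
    tb_text, path, line_no = extract_traceback_info(err)
```

The innermost frame is usually the most useful one to report, since outer frames tend to be generic dispatch code.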

@@ -64,7 +64,7 @@ class EventBus:

        @bus.subscribe("agent.task.*")
        async def handle_task(event: Event):
            logger.debug(f"Task event: {event.data}")
            logger.debug("Task event: %s", event.data)

        await bus.publish(Event(
            type="agent.task.assigned",

@@ -144,6 +144,65 @@ class ShellHand:

    return None

    @staticmethod
    def _build_run_env(env: dict | None) -> dict:
        """Merge *env* overrides into a copy of the current environment."""
        import os

        run_env = os.environ.copy()
        if env:
            run_env.update(env)
        return run_env

    async def _execute_subprocess(
        self,
        command: str,
        effective_timeout: int,
        cwd: str | None,
        run_env: dict,
        start: float,
    ) -> ShellResult:
        """Run *command* as a subprocess with timeout enforcement."""
        proc = await asyncio.create_subprocess_shell(
            command,
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.PIPE,
            cwd=cwd,
            env=run_env,
        )

        try:
            stdout_bytes, stderr_bytes = await asyncio.wait_for(
                proc.communicate(), timeout=effective_timeout
            )
        except TimeoutError:
            proc.kill()
            await proc.wait()
            latency = (time.time() - start) * 1000
            logger.warning("Shell command timed out after %ds: %s", effective_timeout, command)
            return ShellResult(
                command=command,
                success=False,
                exit_code=-1,
                error=f"Command timed out after {effective_timeout}s",
                latency_ms=latency,
                timed_out=True,
            )

        latency = (time.time() - start) * 1000
        exit_code = proc.returncode if proc.returncode is not None else -1
        stdout = stdout_bytes.decode("utf-8", errors="replace").strip()
        stderr = stderr_bytes.decode("utf-8", errors="replace").strip()

        return ShellResult(
            command=command,
            success=exit_code == 0,
            exit_code=exit_code,
            stdout=stdout,
            stderr=stderr,
            latency_ms=latency,
        )

    async def run(
        self,
        command: str,
@@ -164,7 +223,6 @@ class ShellHand:
        """
        start = time.time()

        # Validate
        validation_error = self._validate_command(command)
        if validation_error:
            return ShellResult(
@@ -178,52 +236,8 @@ class ShellHand:
        cwd = working_dir or self._working_dir

        try:
            import os

            run_env = os.environ.copy()
            if env:
                run_env.update(env)

            proc = await asyncio.create_subprocess_shell(
                command,
                stdout=asyncio.subprocess.PIPE,
                stderr=asyncio.subprocess.PIPE,
                cwd=cwd,
                env=run_env,
            )

            try:
                stdout_bytes, stderr_bytes = await asyncio.wait_for(
                    proc.communicate(), timeout=effective_timeout
                )
            except TimeoutError:
                proc.kill()
                await proc.wait()
                latency = (time.time() - start) * 1000
                logger.warning("Shell command timed out after %ds: %s", effective_timeout, command)
                return ShellResult(
                    command=command,
                    success=False,
                    exit_code=-1,
                    error=f"Command timed out after {effective_timeout}s",
                    latency_ms=latency,
                    timed_out=True,
                )

            latency = (time.time() - start) * 1000
            exit_code = proc.returncode if proc.returncode is not None else -1
            stdout = stdout_bytes.decode("utf-8", errors="replace").strip()
            stderr = stderr_bytes.decode("utf-8", errors="replace").strip()

            return ShellResult(
                command=command,
                success=exit_code == 0,
                exit_code=exit_code,
                stdout=stdout,
                stderr=stderr,
                latency_ms=latency,
            )

            run_env = self._build_run_env(env)
            return await self._execute_subprocess(command, effective_timeout, cwd, run_env, start)
        except Exception as exc:
            latency = (time.time() - start) * 1000
            logger.warning("Shell command failed: %s — %s", command, exc)
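`_execute_subprocess` follows the standard asyncio timeout pattern: `wait_for` around `communicate()`, then kill and reap the process on expiry. A minimal standalone sketch of the same pattern (the command strings are illustrative):

```python
import asyncio


async def run_with_timeout(command: str, timeout: float) -> tuple[int, str, bool]:
    """Run a shell command; on timeout, kill the process and flag it."""
    proc = await asyncio.create_subprocess_shell(
        command,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    try:
        stdout, _ = await asyncio.wait_for(proc.communicate(), timeout=timeout)
    except (TimeoutError, asyncio.TimeoutError):
        proc.kill()
        await proc.wait()  # reap the killed process to avoid a zombie
        return -1, "", True
    exit_code = proc.returncode if proc.returncode is not None else -1
    return exit_code, stdout.decode(errors="replace").strip(), False


code, out, timed_out = asyncio.run(run_with_timeout("echo hi", 5))
```

The `await proc.wait()` after `kill()` matters: without it the child stays a zombie until the event loop's child watcher happens to collect it.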

@@ -13,7 +13,7 @@ import logging
from dataclasses import dataclass, field
from enum import Enum, auto

from config import settings
from config import normalize_ollama_url, settings

logger = logging.getLogger(__name__)

@@ -307,7 +307,7 @@ class MultiModalManager:
        import json
        import urllib.request

        url = self.ollama_url.replace("localhost", "127.0.0.1")
        url = normalize_ollama_url(self.ollama_url)
        req = urllib.request.Request(
            f"{url}/api/tags",
            method="GET",
@@ -462,7 +462,7 @@ class MultiModalManager:

        logger.info("Pulling model: %s", model_name)

        url = self.ollama_url.replace("localhost", "127.0.0.1")
        url = normalize_ollama_url(self.ollama_url)
        req = urllib.request.Request(
            f"{url}/api/pull",
            method="POST",
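The two hunks above replace ad-hoc `replace("localhost", "127.0.0.1")` calls with a shared `normalize_ollama_url` helper. Its body is not shown in this diff; the following is a plausible sketch of such a helper (an assumption, not the repo's implementation) that also strips a trailing slash:

```python
from urllib.parse import urlparse, urlunparse


def normalize_ollama_url(url: str) -> str:
    """Normalize an Ollama base URL: force IPv4 loopback, drop trailing slash."""
    parsed = urlparse(url.rstrip("/"))
    if parsed.hostname == "localhost":
        # Avoid slow/failed IPv6 (::1) resolution of "localhost" on some hosts
        parsed = parsed._replace(netloc=parsed.netloc.replace("localhost", "127.0.0.1"))
    return urlunparse(parsed)
```

Centralizing this in `config` keeps the loopback workaround in one place instead of scattered string replacements.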

@@ -2,6 +2,7 @@

from .api import router
from .cascade import CascadeRouter, Provider, ProviderStatus, get_router
from .history import HealthHistoryStore, get_history_store

__all__ = [
    "CascadeRouter",
@@ -9,4 +10,6 @@ __all__ = [
    "ProviderStatus",
    "get_router",
    "router",
    "HealthHistoryStore",
    "get_history_store",
]

@@ -8,6 +8,7 @@ from fastapi import APIRouter, Depends, HTTPException
from pydantic import BaseModel

from .cascade import CascadeRouter, get_router
from .history import HealthHistoryStore, get_history_store

logger = logging.getLogger(__name__)
router = APIRouter(prefix="/api/v1/router", tags=["router"])
@@ -183,6 +184,33 @@ async def run_health_check(
    }


@router.post("/reload")
async def reload_config(
    cascade: Annotated[CascadeRouter, Depends(get_cascade_router)],
) -> dict[str, Any]:
    """Hot-reload providers.yaml without restart.

    Preserves circuit breaker state and metrics for existing providers.
    """
    try:
        result = cascade.reload_config()
        return {"status": "ok", **result}
    except Exception as exc:
        logger.error("Config reload failed: %s", exc)
        raise HTTPException(status_code=500, detail=f"Reload failed: {exc}") from exc


@router.get("/history")
async def get_history(
    hours: int = 24,
    store: Annotated[HealthHistoryStore, Depends(get_history_store)] = None,
) -> list[dict[str, Any]]:
    """Get provider health history for the last N hours."""
    if store is None:
        store = get_history_store()
    return store.get_history(hours=hours)


@router.get("/config")
async def get_config(
    cascade: Annotated[CascadeRouter, Depends(get_cascade_router)],

@@ -18,6 +18,8 @@ from enum import Enum
from pathlib import Path
from typing import Any

from config import settings

try:
    import yaml
except ImportError:
@@ -100,7 +102,7 @@ class Provider:
    """LLM provider configuration and state."""

    name: str
    type: str  # ollama, openai, anthropic, airllm
    type: str  # ollama, openai, anthropic
    enabled: bool
    priority: int
    url: str | None = None
@@ -219,65 +221,56 @@ class CascadeRouter:
            raise RuntimeError("PyYAML not installed")

            content = self.config_path.read_text()
            # Expand environment variables
            content = self._expand_env_vars(content)
            data = yaml.safe_load(content)

            # Load cascade settings
            cascade = data.get("cascade", {})

            # Load fallback chains
            fallback_chains = data.get("fallback_chains", {})

            # Load multi-modal settings
            multimodal = data.get("multimodal", {})

            self.config = RouterConfig(
                timeout_seconds=cascade.get("timeout_seconds", 30),
                max_retries_per_provider=cascade.get("max_retries_per_provider", 2),
                retry_delay_seconds=cascade.get("retry_delay_seconds", 1),
                circuit_breaker_failure_threshold=cascade.get("circuit_breaker", {}).get(
                    "failure_threshold", 5
                ),
                circuit_breaker_recovery_timeout=cascade.get("circuit_breaker", {}).get(
                    "recovery_timeout", 60
                ),
                circuit_breaker_half_open_max_calls=cascade.get("circuit_breaker", {}).get(
                    "half_open_max_calls", 2
                ),
                auto_pull_models=multimodal.get("auto_pull", True),
                fallback_chains=fallback_chains,
            )

            # Load providers
            for p_data in data.get("providers", []):
                # Skip disabled providers
                if not p_data.get("enabled", False):
                    continue

                provider = Provider(
                    name=p_data["name"],
                    type=p_data["type"],
                    enabled=p_data.get("enabled", True),
                    priority=p_data.get("priority", 99),
                    url=p_data.get("url"),
                    api_key=p_data.get("api_key"),
                    base_url=p_data.get("base_url"),
                    models=p_data.get("models", []),
                )

                # Check if provider is actually available
                if self._check_provider_available(provider):
                    self.providers.append(provider)
                else:
                    logger.warning("Provider %s not available, skipping", provider.name)

            # Sort by priority
            self.providers.sort(key=lambda p: p.priority)
            self.config = self._parse_router_config(data)
            self._load_providers(data)

        except Exception as exc:
            logger.error("Failed to load config: %s", exc)

    def _parse_router_config(self, data: dict) -> RouterConfig:
        """Build a RouterConfig from parsed YAML data."""
        cascade = data.get("cascade", {})
        cb = cascade.get("circuit_breaker", {})
        multimodal = data.get("multimodal", {})

        return RouterConfig(
            timeout_seconds=cascade.get("timeout_seconds", 30),
            max_retries_per_provider=cascade.get("max_retries_per_provider", 2),
            retry_delay_seconds=cascade.get("retry_delay_seconds", 1),
            circuit_breaker_failure_threshold=cb.get("failure_threshold", 5),
            circuit_breaker_recovery_timeout=cb.get("recovery_timeout", 60),
            circuit_breaker_half_open_max_calls=cb.get("half_open_max_calls", 2),
            auto_pull_models=multimodal.get("auto_pull", True),
            fallback_chains=data.get("fallback_chains", {}),
        )

    def _load_providers(self, data: dict) -> None:
        """Load, filter, and sort providers from parsed YAML data."""
        for p_data in data.get("providers", []):
            if not p_data.get("enabled", False):
                continue

            provider = Provider(
                name=p_data["name"],
                type=p_data["type"],
                enabled=p_data.get("enabled", True),
                priority=p_data.get("priority", 99),
                url=p_data.get("url"),
                api_key=p_data.get("api_key"),
                base_url=p_data.get("base_url"),
                models=p_data.get("models", []),
            )

            if self._check_provider_available(provider):
                self.providers.append(provider)
            else:
                logger.warning("Provider %s not available, skipping", provider.name)

        self.providers.sort(key=lambda p: p.priority)
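`_load_config` runs the raw YAML text through `_expand_env_vars` before `yaml.safe_load`. A minimal standalone sketch of that substitution step, assuming plain `${NAME}` tokens with unset variables left untouched:

```python
import os
import re

_ENV_VAR = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")


def expand_env_vars(content: str) -> str:
    """Replace ${VAR} with os.environ["VAR"]; leave unset vars as-is."""
    def _sub(match: re.Match) -> str:
        return os.environ.get(match.group(1), match.group(0))
    return _ENV_VAR.sub(_sub, content)


os.environ["OLLAMA_PORT"] = "11434"
yaml_text = "url: http://127.0.0.1:${OLLAMA_PORT}"
expanded = expand_env_vars(yaml_text)
```

Doing the expansion on the text before parsing keeps secrets like API keys out of the YAML file itself.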

    def _expand_env_vars(self, content: str) -> str:
        """Expand ${VAR} syntax in YAML content.

@@ -301,22 +294,13 @@ class CascadeRouter:
            # Can't check without requests, assume available
            return True
            try:
                url = provider.url or "http://localhost:11434"
                url = provider.url or settings.ollama_url
                response = requests.get(f"{url}/api/tags", timeout=5)
                return response.status_code == 200
            except Exception as exc:
                logger.debug("Ollama provider check error: %s", exc)
                return False

        elif provider.type == "airllm":
            # Check if airllm is installed
            try:
                import importlib.util

                return importlib.util.find_spec("airllm") is not None
            except (ImportError, ModuleNotFoundError):
                return False

        elif provider.type in ("openai", "anthropic", "grok"):
            # Check if API key is set
            return provider.api_key is not None and provider.api_key != ""
@@ -395,6 +379,101 @@ class CascadeRouter:

        return None

    def _select_model(
        self, provider: Provider, model: str | None, content_type: ContentType
    ) -> tuple[str | None, bool]:
        """Select the best model for the request, with vision fallback.

        Returns:
            Tuple of (selected_model, is_fallback_model).
        """
        selected_model = model or provider.get_default_model()
        is_fallback = False

        if content_type != ContentType.TEXT and selected_model:
            if provider.type == "ollama" and self._mm_manager:
                from infrastructure.models.multimodal import ModelCapability

                if content_type == ContentType.VISION:
                    supports = self._mm_manager.model_supports(
                        selected_model, ModelCapability.VISION
                    )
                    if not supports:
                        fallback = self._get_fallback_model(provider, selected_model, content_type)
                        if fallback:
                            logger.info(
                                "Model %s doesn't support vision, falling back to %s",
                                selected_model,
                                fallback,
                            )
                            selected_model = fallback
                            is_fallback = True
                        else:
                            logger.warning(
                                "No vision-capable model found on %s, trying anyway",
                                provider.name,
                            )

        return selected_model, is_fallback

    async def _attempt_with_retry(
        self,
        provider: Provider,
        messages: list[dict],
        model: str | None,
        temperature: float,
        max_tokens: int | None,
        content_type: ContentType,
    ) -> dict:
        """Try a provider with retries, returning the result dict.

        Raises:
            RuntimeError: If all retry attempts fail; the exception message
                joins the error strings collected during retries.
        """
        errors: list[str] = []
        for attempt in range(self.config.max_retries_per_provider):
            try:
                return await self._try_provider(
                    provider=provider,
                    messages=messages,
                    model=model,
                    temperature=temperature,
                    max_tokens=max_tokens,
                    content_type=content_type,
                )
            except Exception as exc:
                error_msg = str(exc)
                logger.warning(
                    "Provider %s attempt %d failed: %s",
                    provider.name,
                    attempt + 1,
                    error_msg,
                )
                errors.append(f"{provider.name}: {error_msg}")

                if attempt < self.config.max_retries_per_provider - 1:
                    await asyncio.sleep(self.config.retry_delay_seconds)

        raise RuntimeError("; ".join(errors))
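`_attempt_with_retry` uses a common shape: N attempts, one error string collected per attempt, a sleep between tries, and a single joined error raised on total failure. A synchronous sketch of the same shape for illustration:

```python
import time


def attempt_with_retry(fn, attempts: int = 2, delay: float = 0.0):
    """Call fn() up to `attempts` times; raise joined errors on total failure."""
    errors: list[str] = []
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as exc:
            errors.append(f"attempt {attempt + 1}: {exc}")
            if attempt < attempts - 1:  # no sleep after the final failure
                time.sleep(delay)
    raise RuntimeError("; ".join(errors))


calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 2:
        raise ValueError("transient")
    return "ok"

result = attempt_with_retry(flaky, attempts=3)
```

Collecting errors per attempt, rather than keeping only the last one, makes the final exception message useful for the caller's "all providers failed" report.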

    def _is_provider_available(self, provider: Provider) -> bool:
        """Check if a provider should be tried (enabled + circuit breaker)."""
        if not provider.enabled:
            logger.debug("Skipping %s (disabled)", provider.name)
            return False

        if provider.status == ProviderStatus.UNHEALTHY:
            if self._can_close_circuit(provider):
                provider.circuit_state = CircuitState.HALF_OPEN
                provider.half_open_calls = 0
                logger.info("Circuit breaker half-open for %s", provider.name)
            else:
                logger.debug("Skipping %s (circuit open)", provider.name)
                return False

        return True

    async def complete(
        self,
        messages: list[dict],
@@ -421,7 +500,6 @@ class CascadeRouter:
        Raises:
            RuntimeError: If all providers fail
        """
        # Detect content type for multi-modal routing
        content_type = self._detect_content_type(messages)
        if content_type != ContentType.TEXT:
            logger.debug("Detected %s content, selecting appropriate model", content_type.value)
@@ -429,93 +507,34 @@ class CascadeRouter:
        errors = []

        for provider in self.providers:
            # Skip disabled providers
            if not provider.enabled:
                logger.debug("Skipping %s (disabled)", provider.name)
            if not self._is_provider_available(provider):
                continue

            # Skip unhealthy providers (circuit breaker)
            if provider.status == ProviderStatus.UNHEALTHY:
                # Check if circuit breaker can close
                if self._can_close_circuit(provider):
                    provider.circuit_state = CircuitState.HALF_OPEN
                    provider.half_open_calls = 0
                    logger.info("Circuit breaker half-open for %s", provider.name)
                else:
                    logger.debug("Skipping %s (circuit open)", provider.name)
                    continue
            selected_model, is_fallback_model = self._select_model(provider, model, content_type)

            # Determine which model to use
            selected_model = model or provider.get_default_model()
            is_fallback_model = False
            try:
                result = await self._attempt_with_retry(
                    provider,
                    messages,
                    selected_model,
                    temperature,
                    max_tokens,
                    content_type,
                )
            except RuntimeError as exc:
                errors.append(str(exc))
                self._record_failure(provider)
                continue

            # For non-text content, check if model supports it
            if content_type != ContentType.TEXT and selected_model:
                if provider.type == "ollama" and self._mm_manager:
                    from infrastructure.models.multimodal import ModelCapability
            self._record_success(provider, result.get("latency_ms", 0))
            return {
                "content": result["content"],
                "provider": provider.name,
                "model": result.get("model", selected_model or provider.get_default_model()),
                "latency_ms": result.get("latency_ms", 0),
                "is_fallback_model": is_fallback_model,
            }

                    # Check if selected model supports the required capability
                    if content_type == ContentType.VISION:
                        supports = self._mm_manager.model_supports(
                            selected_model, ModelCapability.VISION
                        )
                        if not supports:
                            # Find fallback model
                            fallback = self._get_fallback_model(
                                provider, selected_model, content_type
                            )
                            if fallback:
                                logger.info(
                                    "Model %s doesn't support vision, falling back to %s",
                                    selected_model,
                                    fallback,
                                )
                                selected_model = fallback
                                is_fallback_model = True
                            else:
                                logger.warning(
                                    "No vision-capable model found on %s, trying anyway",
                                    provider.name,
                                )

            # Try this provider
            for attempt in range(self.config.max_retries_per_provider):
                try:
                    result = await self._try_provider(
                        provider=provider,
                        messages=messages,
                        model=selected_model,
                        temperature=temperature,
                        max_tokens=max_tokens,
                        content_type=content_type,
                    )

                    # Success! Update metrics and return
                    self._record_success(provider, result.get("latency_ms", 0))
                    return {
                        "content": result["content"],
                        "provider": provider.name,
                        "model": result.get(
                            "model", selected_model or provider.get_default_model()
                        ),
                        "latency_ms": result.get("latency_ms", 0),
                        "is_fallback_model": is_fallback_model,
                    }

                except Exception as exc:
                    error_msg = str(exc)
                    logger.warning(
                        "Provider %s attempt %d failed: %s", provider.name, attempt + 1, error_msg
                    )
                    errors.append(f"{provider.name}: {error_msg}")

                    if attempt < self.config.max_retries_per_provider - 1:
                        await asyncio.sleep(self.config.retry_delay_seconds)

            # All retries failed for this provider
            self._record_failure(provider)

        # All providers failed
        raise RuntimeError(f"All providers failed: {'; '.join(errors)}")

    async def _try_provider(
@@ -536,6 +555,7 @@ class CascadeRouter:
                messages=messages,
                model=model or provider.get_default_model(),
                temperature=temperature,
                max_tokens=max_tokens,
                content_type=content_type,
            )
        elif provider.type == "openai":
@@ -576,23 +596,26 @@ class CascadeRouter:
        messages: list[dict],
        model: str,
        temperature: float,
        max_tokens: int | None = None,
        content_type: ContentType = ContentType.TEXT,
    ) -> dict:
        """Call Ollama API with multi-modal support."""
        import aiohttp

        url = f"{provider.url}/api/chat"
        url = f"{provider.url or settings.ollama_url}/api/chat"

        # Transform messages for Ollama format (including images)
        transformed_messages = self._transform_messages_for_ollama(messages)

        options = {"temperature": temperature}
        if max_tokens:
            options["num_predict"] = max_tokens

        payload = {
            "model": model,
            "messages": transformed_messages,
            "stream": False,
            "options": {
                "temperature": temperature,
            },
            "options": options,
        }

        timeout = aiohttp.ClientTimeout(total=self.config.timeout_seconds)
@@ -736,7 +759,7 @@ class CascadeRouter:

        client = openai.AsyncOpenAI(
            api_key=provider.api_key,
            base_url=provider.base_url or "https://api.x.ai/v1",
            base_url=provider.base_url or settings.xai_base_url,
            timeout=httpx.Timeout(300.0),
        )

@@ -815,6 +838,66 @@ class CascadeRouter:
            provider.status = ProviderStatus.HEALTHY
            logger.info("Circuit breaker CLOSED for %s", provider.name)

    def reload_config(self) -> dict:
        """Hot-reload providers.yaml, preserving runtime state.

        Re-reads the config file, rebuilds the provider list, and
        preserves circuit breaker state and metrics for providers
        that still exist after reload.

        Returns:
            Summary dict with added/removed/preserved counts.
        """
        # Snapshot current runtime state keyed by provider name
        old_state: dict[
            str, tuple[ProviderMetrics, CircuitState, float | None, int, ProviderStatus]
        ] = {}
        for p in self.providers:
            old_state[p.name] = (
                p.metrics,
                p.circuit_state,
                p.circuit_opened_at,
                p.half_open_calls,
                p.status,
            )

        old_names = set(old_state.keys())

        # Reload from disk
        self.providers = []
        self._load_config()

        # Restore preserved state
        new_names = {p.name for p in self.providers}
        preserved = 0
        for p in self.providers:
            if p.name in old_state:
                metrics, circuit, opened_at, half_open, status = old_state[p.name]
                p.metrics = metrics
                p.circuit_state = circuit
                p.circuit_opened_at = opened_at
                p.half_open_calls = half_open
                p.status = status
                preserved += 1

        added = new_names - old_names
        removed = old_names - new_names

        logger.info(
            "Config reloaded: %d providers (%d preserved, %d added, %d removed)",
            len(self.providers),
            preserved,
            len(added),
            len(removed),
        )

        return {
            "total_providers": len(self.providers),
            "preserved": preserved,
            "added": sorted(added),
            "removed": sorted(removed),
        }

    def get_metrics(self) -> dict:
        """Get metrics for all providers."""
        return {

src/infrastructure/router/history.py (new file, 152 lines)
@@ -0,0 +1,152 @@
"""Provider health history — time-series snapshots for dashboard visualization."""

import asyncio
import logging
import sqlite3
from datetime import UTC, datetime, timedelta
from pathlib import Path

logger = logging.getLogger(__name__)

_store: "HealthHistoryStore | None" = None


class HealthHistoryStore:
    """Stores timestamped provider health snapshots in SQLite."""

    def __init__(self, db_path: str = "data/router_history.db") -> None:
        self.db_path = db_path
        if db_path != ":memory:":
            Path(db_path).parent.mkdir(parents=True, exist_ok=True)
        self._conn = sqlite3.connect(db_path, check_same_thread=False)
        self._conn.row_factory = sqlite3.Row
        self._init_schema()
        self._bg_task: asyncio.Task | None = None

    def _init_schema(self) -> None:
        self._conn.execute("""
            CREATE TABLE IF NOT EXISTS snapshots (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                timestamp TEXT NOT NULL,
                provider_name TEXT NOT NULL,
                status TEXT NOT NULL,
                error_rate REAL NOT NULL,
                avg_latency_ms REAL NOT NULL,
                circuit_state TEXT NOT NULL,
                total_requests INTEGER NOT NULL
            )
        """)
        self._conn.execute("""
            CREATE INDEX IF NOT EXISTS idx_snapshots_ts
            ON snapshots(timestamp)
        """)
        self._conn.commit()

    def record_snapshot(self, providers: list[dict]) -> None:
        """Record a health snapshot for all providers."""
        ts = datetime.now(UTC).isoformat()
        rows = [
            (
                ts,
                p["name"],
                p["status"],
                p["error_rate"],
                p["avg_latency_ms"],
                p["circuit_state"],
                p["total_requests"],
            )
            for p in providers
        ]
        self._conn.executemany(
            """INSERT INTO snapshots
               (timestamp, provider_name, status, error_rate,
                avg_latency_ms, circuit_state, total_requests)
               VALUES (?, ?, ?, ?, ?, ?, ?)""",
            rows,
        )
        self._conn.commit()

    def get_history(self, hours: int = 24) -> list[dict]:
        """Return snapshots from the last N hours, grouped by timestamp."""
        cutoff = (datetime.now(UTC) - timedelta(hours=hours)).isoformat()
        rows = self._conn.execute(
            """SELECT timestamp, provider_name, status, error_rate,
                      avg_latency_ms, circuit_state, total_requests
               FROM snapshots WHERE timestamp >= ? ORDER BY timestamp""",
            (cutoff,),
        ).fetchall()

        # Group by timestamp
        snapshots: dict[str, list[dict]] = {}
        for row in rows:
            ts = row["timestamp"]
            if ts not in snapshots:
                snapshots[ts] = []
            snapshots[ts].append(
                {
                    "name": row["provider_name"],
                    "status": row["status"],
                    "error_rate": row["error_rate"],
                    "avg_latency_ms": row["avg_latency_ms"],
                    "circuit_state": row["circuit_state"],
                    "total_requests": row["total_requests"],
                }
            )

        return [{"timestamp": ts, "providers": providers} for ts, providers in snapshots.items()]
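Since `get_history` orders rows by timestamp, the grouping loop above is equivalent to a single `itertools.groupby` pass. A standalone sketch of that equivalence over plain dict rows:

```python
from itertools import groupby


def group_by_timestamp(rows: list[dict]) -> list[dict]:
    """Group already-ordered snapshot rows into one entry per timestamp."""
    return [
        {"timestamp": ts, "providers": list(items)}
        for ts, items in groupby(rows, key=lambda r: r["timestamp"])
    ]


rows = [
    {"timestamp": "t1", "name": "ollama"},
    {"timestamp": "t1", "name": "openai"},
    {"timestamp": "t2", "name": "ollama"},
]
history = group_by_timestamp(rows)
```

Note that `groupby` only merges adjacent equal keys, so the `ORDER BY timestamp` in the query is what makes this equivalence hold.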
|
||||
|
||||
def prune(self, keep_hours: int = 168) -> int:
|
||||
"""Remove snapshots older than keep_hours. Returns rows deleted."""
|
||||
cutoff = (datetime.now(UTC) - timedelta(hours=keep_hours)).isoformat()
|
||||
cursor = self._conn.execute("DELETE FROM snapshots WHERE timestamp < ?", (cutoff,))
|
||||
self._conn.commit()
|
||||
return cursor.rowcount
|
||||
|
||||
def close(self) -> None:
|
||||
"""Close the database connection."""
|
||||
if self._bg_task and not self._bg_task.done():
|
||||
self._bg_task.cancel()
|
||||
self._conn.close()
|
||||
|
||||
def _capture_snapshot(self, cascade_router) -> None: # noqa: ANN001
|
||||
"""Capture current provider state as a snapshot."""
|
||||
providers = []
|
||||
for p in cascade_router.providers:
|
||||
providers.append(
|
||||
{
|
||||
"name": p.name,
|
||||
"status": p.status.value,
|
||||
"error_rate": round(p.metrics.error_rate, 4),
|
||||
"avg_latency_ms": round(p.metrics.avg_latency_ms, 2),
|
||||
"circuit_state": p.circuit_state.value,
|
||||
"total_requests": p.metrics.total_requests,
|
||||
}
|
||||
)
|
||||
self.record_snapshot(providers)
|
||||
|
||||
async def start_background_task(
|
||||
self,
|
||||
cascade_router,
|
||||
interval_seconds: int = 60, # noqa: ANN001
|
||||
) -> None:
|
||||
"""Start periodic snapshot capture."""
|
||||
|
||||
async def _loop() -> None:
|
||||
while True:
|
||||
try:
|
||||
self._capture_snapshot(cascade_router)
|
||||
logger.debug("Recorded health snapshot")
|
||||
except Exception:
|
||||
logger.exception("Failed to record health snapshot")
|
||||
await asyncio.sleep(interval_seconds)
|
||||
|
||||
self._bg_task = asyncio.create_task(_loop())
|
||||
logger.info("Health history background task started (interval=%ds)", interval_seconds)
|
||||
|
||||
|
||||
def get_history_store() -> HealthHistoryStore:
|
||||
"""Get or create the singleton history store."""
|
||||
global _store # noqa: PLW0603
|
||||
if _store is None:
|
||||
_store = HealthHistoryStore()
|
||||
return _store
|
||||
131
src/integrations/chat_bridge/vendors/discord.py
vendored
131
src/integrations/chat_bridge/vendors/discord.py
vendored
@@ -515,25 +515,36 @@ class DiscordVendor(ChatPlatform):

     async def _handle_message(self, message) -> None:
         """Process an incoming message and respond via a thread."""
-        # Strip the bot mention from the message content
-        content = message.content
-        if self._client.user:
-            content = content.replace(f"<@{self._client.user.id}>", "").strip()
+        content = self._extract_content(message)
         if not content:
             return

         # Create or reuse a thread for this conversation
         thread = await self._get_or_create_thread(message)
         target = thread or message.channel
+        session_id = f"discord_{thread.id}" if thread else f"discord_{message.channel.id}"

-        # Derive session_id for per-conversation history via Agno's SQLite
-        if thread:
-            session_id = f"discord_{thread.id}"
-        else:
-            session_id = f"discord_{message.channel.id}"
+        run_output, response = await self._invoke_agent(content, session_id, target)

-        # Run Timmy agent with typing indicator and timeout
+        if run_output is not None:
+            await self._handle_paused_run(run_output, target, session_id)
+            raw_content = run_output.content if hasattr(run_output, "content") else ""
+            response = _clean_response(raw_content or "")
+
+        await self._send_response(response, target)
+
+    def _extract_content(self, message) -> str:
+        """Strip the bot mention and return clean message text."""
+        content = message.content
+        if self._client.user:
+            content = content.replace(f"<@{self._client.user.id}>", "").strip()
+        return content
+
+    async def _invoke_agent(self, content: str, session_id: str, target):
+        """Run chat_with_tools with a typing indicator and timeout.
+
+        Returns a (run_output, error_response) tuple. On success the
+        error_response is ``None``; on failure run_output is ``None``.
+        """
+        run_output = None
+        response = None
+        try:
@@ -547,54 +558,58 @@ class DiscordVendor(ChatPlatform):
             response = "Sorry, that took too long. Please try a simpler request."
         except Exception as exc:
             logger.error("Discord: chat_with_tools() failed: %s", exc)
-            response = (
-                "I'm having trouble reaching my language model right now. Please try again shortly."
-            )
+            response = "I'm having trouble reaching my inference backend right now. Please try again shortly."
+        return run_output, response
+
+    async def _handle_paused_run(self, run_output, target, session_id: str) -> None:
+        """If Agno paused the run for tool confirmation, enqueue approvals."""
+        status = getattr(run_output, "status", None)
+        is_paused = status == "PAUSED" or str(status) == "RunStatus.paused"
+
+        if not (is_paused and getattr(run_output, "active_requirements", None)):
+            return
+
+        from config import settings
+
+        if not settings.discord_confirm_actions:
+            return
+
+        for req in run_output.active_requirements:
+            if not getattr(req, "needs_confirmation", False):
+                continue
+            te = req.tool_execution
+            tool_name = getattr(te, "tool_name", "unknown")
+            tool_args = getattr(te, "tool_args", {}) or {}
+
+            from timmy.approvals import create_item
+
+            item = create_item(
+                title=f"Discord: {tool_name}",
+                description=_format_action_description(tool_name, tool_args),
+                proposed_action=json.dumps({"tool": tool_name, "args": tool_args}),
+                impact=_get_impact_level(tool_name),
+            )
+            self._pending_actions[item.id] = {
+                "run_output": run_output,
+                "requirement": req,
+                "tool_name": tool_name,
+                "tool_args": tool_args,
+                "target": target,
+                "session_id": session_id,
+            }
+            await self._send_confirmation(target, tool_name, tool_args, item.id)

-        # Check if Agno paused the run for tool confirmation
-        if run_output is not None:
-            status = getattr(run_output, "status", None)
-            is_paused = status == "PAUSED" or str(status) == "RunStatus.paused"
-
-            if is_paused and getattr(run_output, "active_requirements", None):
-                from config import settings
-
-                if settings.discord_confirm_actions:
-                    for req in run_output.active_requirements:
-                        if getattr(req, "needs_confirmation", False):
-                            te = req.tool_execution
-                            tool_name = getattr(te, "tool_name", "unknown")
-                            tool_args = getattr(te, "tool_args", {}) or {}
-
-                            from timmy.approvals import create_item
-
-                            item = create_item(
-                                title=f"Discord: {tool_name}",
-                                description=_format_action_description(tool_name, tool_args),
-                                proposed_action=json.dumps({"tool": tool_name, "args": tool_args}),
-                                impact=_get_impact_level(tool_name),
-                            )
-                            self._pending_actions[item.id] = {
-                                "run_output": run_output,
-                                "requirement": req,
-                                "tool_name": tool_name,
-                                "tool_args": tool_args,
-                                "target": target,
-                                "session_id": session_id,
-                            }
-                            await self._send_confirmation(target, tool_name, tool_args, item.id)
-
-        raw_content = run_output.content if hasattr(run_output, "content") else ""
-        response = _clean_response(raw_content or "")
-
-        # Discord has a 2000 character limit — send with error handling
-        if response and response.strip():
-            for chunk in _chunk_message(response, 2000):
-                try:
-                    await target.send(chunk)
-                except Exception as exc:
-                    logger.error("Discord: failed to send message chunk: %s", exc)
-                    break
+    @staticmethod
+    async def _send_response(response: str | None, target) -> None:
+        """Send a response to Discord, chunked to the 2000-char limit."""
+        if not response or not response.strip():
+            return
+        for chunk in _chunk_message(response, 2000):
+            try:
+                await target.send(chunk)
+            except Exception as exc:
+                logger.error("Discord: failed to send message chunk: %s", exc)
+                break

     async def _get_or_create_thread(self, message):
         """Get the active thread for a channel, or create one.
1  src/lightning/__init__.py (new file)
@@ -0,0 +1 @@
"""Lightning Network integration for tool-usage micro-payments."""
69  src/lightning/factory.py (new file)
@@ -0,0 +1,69 @@
"""Lightning backend factory.

Returns a mock or real LND backend based on ``settings.lightning_backend``.
"""

from __future__ import annotations

import hashlib
import logging
import secrets
from dataclasses import dataclass

from config import settings

logger = logging.getLogger(__name__)


@dataclass
class Invoice:
    """Minimal Lightning invoice representation."""

    payment_hash: str
    payment_request: str
    amount_sats: int
    memo: str


class MockBackend:
    """In-memory mock Lightning backend for development and testing."""

    def create_invoice(self, amount_sats: int, memo: str = "") -> Invoice:
        """Create a fake invoice with a random payment hash."""
        raw = secrets.token_bytes(32)
        payment_hash = hashlib.sha256(raw).hexdigest()
        payment_request = f"lnbc{amount_sats}mock{payment_hash[:20]}"
        logger.debug("Mock invoice: %s sats — %s", amount_sats, payment_hash[:12])
        return Invoice(
            payment_hash=payment_hash,
            payment_request=payment_request,
            amount_sats=amount_sats,
            memo=memo,
        )


# Singleton — lazily created
_backend: MockBackend | None = None


def get_backend() -> MockBackend:
    """Return the configured Lightning backend (currently mock-only).

    Raises ``ValueError`` if an unsupported backend is requested.
    """
    global _backend  # noqa: PLW0603
    if _backend is not None:
        return _backend

    kind = settings.lightning_backend
    if kind == "mock":
        _backend = MockBackend()
    elif kind == "lnd":
        # LND gRPC integration is on the roadmap — for now fall back to mock.
        logger.warning("LND backend not yet implemented — using mock")
        _backend = MockBackend()
    else:
        raise ValueError(f"Unknown lightning_backend: {kind!r}")

    logger.info("Lightning backend: %s", kind)
    return _backend
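A quick standalone usage sketch of the mock backend above. The `config`/singleton wiring is omitted so it runs in isolation; the invoice shape matches the dataclass in the factory:

```python
import hashlib
import secrets
from dataclasses import dataclass


@dataclass
class Invoice:
    payment_hash: str
    payment_request: str
    amount_sats: int
    memo: str


class MockBackend:
    """Same invoice construction as the factory's mock backend."""

    def create_invoice(self, amount_sats: int, memo: str = "") -> Invoice:
        raw = secrets.token_bytes(32)
        payment_hash = hashlib.sha256(raw).hexdigest()
        return Invoice(
            payment_hash=payment_hash,
            payment_request=f"lnbc{amount_sats}mock{payment_hash[:20]}",
            amount_sats=amount_sats,
            memo=memo,
        )


backend = MockBackend()
inv = backend.create_invoice(21, memo="tool call")
# payment_hash is a 64-char hex SHA-256 digest; the fake BOLT-11-ish
# payment_request embeds the amount and the hash prefix.
assert len(inv.payment_hash) == 64
assert inv.payment_request.startswith("lnbc21mock")
```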
146  src/lightning/ledger.py (new file)
@@ -0,0 +1,146 @@
"""In-memory Lightning transaction ledger.

Tracks invoices, settlements, and balances per the schema in
``docs/adr/018-lightning-ledger.md``. Uses a simple in-memory list so the
dashboard can display real (ephemeral) data without requiring SQLite yet.
"""

from __future__ import annotations

import logging
import uuid
from dataclasses import dataclass
from datetime import UTC, datetime
from enum import StrEnum

logger = logging.getLogger(__name__)


class TxType(StrEnum):
    incoming = "incoming"
    outgoing = "outgoing"


class TxStatus(StrEnum):
    pending = "pending"
    settled = "settled"
    failed = "failed"
    expired = "expired"


@dataclass
class LedgerEntry:
    """Single ledger row matching the ADR-018 schema."""

    id: str
    tx_type: TxType
    status: TxStatus
    payment_hash: str
    amount_sats: int
    memo: str
    source: str
    created_at: str
    invoice: str = ""
    preimage: str = ""
    task_id: str = ""
    agent_id: str = ""
    settled_at: str = ""
    fee_sats: int = 0


# ── In-memory store ──────────────────────────────────────────────────
_entries: list[LedgerEntry] = []


def create_invoice_entry(
    payment_hash: str,
    amount_sats: int,
    memo: str = "",
    source: str = "tool_usage",
    task_id: str = "",
    agent_id: str = "",
    invoice: str = "",
) -> LedgerEntry:
    """Record a new incoming invoice in the ledger."""
    entry = LedgerEntry(
        id=uuid.uuid4().hex[:16],
        tx_type=TxType.incoming,
        status=TxStatus.pending,
        payment_hash=payment_hash,
        amount_sats=amount_sats,
        memo=memo,
        source=source,
        task_id=task_id,
        agent_id=agent_id,
        invoice=invoice,
        created_at=datetime.now(UTC).isoformat(),
    )
    _entries.append(entry)
    logger.debug("Ledger entry created: %s (%s sats)", entry.id, amount_sats)
    return entry


def mark_settled(payment_hash: str, preimage: str = "") -> LedgerEntry | None:
    """Mark a pending entry as settled by payment hash."""
    for entry in _entries:
        if entry.payment_hash == payment_hash and entry.status == TxStatus.pending:
            entry.status = TxStatus.settled
            entry.preimage = preimage
            entry.settled_at = datetime.now(UTC).isoformat()
            logger.debug("Ledger settled: %s", payment_hash[:12])
            return entry
    return None


def get_balance() -> dict:
    """Compute the current balance from settled and pending entries."""
    incoming_total = sum(
        e.amount_sats
        for e in _entries
        if e.tx_type == TxType.incoming and e.status == TxStatus.settled
    )
    outgoing_total = sum(
        e.amount_sats
        for e in _entries
        if e.tx_type == TxType.outgoing and e.status == TxStatus.settled
    )
    fees = sum(e.fee_sats for e in _entries if e.status == TxStatus.settled)
    pending_in = sum(
        e.amount_sats
        for e in _entries
        if e.tx_type == TxType.incoming and e.status == TxStatus.pending
    )
    pending_out = sum(
        e.amount_sats
        for e in _entries
        if e.tx_type == TxType.outgoing and e.status == TxStatus.pending
    )
    net = incoming_total - outgoing_total - fees
    return {
        "incoming_total_sats": incoming_total,
        "outgoing_total_sats": outgoing_total,
        "fees_paid_sats": fees,
        "net_sats": net,
        "pending_incoming_sats": pending_in,
        "pending_outgoing_sats": pending_out,
        "available_sats": net - pending_out,
    }


def get_transactions(
    tx_type: str | None = None,
    status: str | None = None,
    limit: int = 50,
) -> list[LedgerEntry]:
    """Return ledger entries, optionally filtered."""
    result = _entries
    if tx_type:
        result = [e for e in result if e.tx_type.value == tx_type]
    if status:
        result = [e for e in result if e.status.value == status]
    return list(reversed(result))[:limit]


def clear() -> None:
    """Reset the ledger (for testing)."""
    _entries.clear()
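The invoice lifecycle (pending, then settled via payment hash) and the balance math above can be exercised end to end. A condensed standalone rerun of the same logic, with the dataclass and enums flattened to plain strings for brevity:

```python
from dataclasses import dataclass


@dataclass
class Entry:
    tx_type: str       # "incoming" | "outgoing"
    status: str        # "pending" | "settled"
    payment_hash: str
    amount_sats: int
    fee_sats: int = 0


entries: list[Entry] = []


def create(payment_hash: str, sats: int) -> Entry:
    e = Entry("incoming", "pending", payment_hash, sats)
    entries.append(e)
    return e


def settle(payment_hash: str) -> None:
    # Same rule as mark_settled(): only a pending entry flips to settled.
    for e in entries:
        if e.payment_hash == payment_hash and e.status == "pending":
            e.status = "settled"
            return


def balance() -> dict:
    settled_in = sum(e.amount_sats for e in entries
                     if e.tx_type == "incoming" and e.status == "settled")
    pending_in = sum(e.amount_sats for e in entries
                     if e.tx_type == "incoming" and e.status == "pending")
    return {"net_sats": settled_in, "pending_incoming_sats": pending_in}


create("aa" * 32, 100)
create("bb" * 32, 50)
settle("aa" * 32)
print(balance())  # {'net_sats': 100, 'pending_incoming_sats': 50}
```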
1  src/loop/__init__.py (new file)
@@ -0,0 +1 @@
"""Three-phase agent loop: Gather → Reason → Act."""
37  src/loop/phase1_gather.py (new file)
@@ -0,0 +1,37 @@
"""Phase 1 — Gather: accept raw input, produce structured context.

This is the sensory phase. It receives a raw ContextPayload and enriches
it with whatever context Timmy needs before reasoning. In the stub form,
it simply passes the payload through with a phase marker.
"""

from __future__ import annotations

import logging

from loop.schema import ContextPayload

logger = logging.getLogger(__name__)


def gather(payload: ContextPayload) -> ContextPayload:
    """Accept raw input and return structured context for reasoning.

    Stub: tags the payload with phase=gather and logs transit.
    Timmy will flesh this out with context selection, memory lookup,
    adapter polling, and attention-residual weighting.
    """
    logger.info(
        "Phase 1 (Gather) received: source=%s content_len=%d tokens=%d",
        payload.source,
        len(payload.content),
        payload.token_count,
    )

    result = payload.with_metadata(phase="gather", gathered=True)

    logger.info(
        "Phase 1 (Gather) produced: metadata_keys=%s",
        sorted(result.metadata.keys()),
    )
    return result
36  src/loop/phase2_reason.py (new file)
@@ -0,0 +1,36 @@
"""Phase 2 — Reason: accept gathered context, produce reasoning output.

This is the deliberation phase. It receives enriched context from Phase 1
and decides what to do. In the stub form, it passes the payload through
with a phase marker.
"""

from __future__ import annotations

import logging

from loop.schema import ContextPayload

logger = logging.getLogger(__name__)


def reason(payload: ContextPayload) -> ContextPayload:
    """Accept gathered context and return a reasoning result.

    Stub: tags the payload with phase=reason and logs transit.
    Timmy will flesh this out with LLM calls, confidence scoring,
    plan generation, and judgment logic.
    """
    logger.info(
        "Phase 2 (Reason) received: source=%s gathered=%s",
        payload.source,
        payload.metadata.get("gathered", False),
    )

    result = payload.with_metadata(phase="reason", reasoned=True)

    logger.info(
        "Phase 2 (Reason) produced: metadata_keys=%s",
        sorted(result.metadata.keys()),
    )
    return result
36  src/loop/phase3_act.py (new file)
@@ -0,0 +1,36 @@
"""Phase 3 — Act: accept reasoning output, execute and produce feedback.

This is the command phase. It receives the reasoning result from Phase 2
and takes action. In the stub form, it passes the payload through with a
phase marker and produces feedback for the next cycle.
"""

from __future__ import annotations

import logging

from loop.schema import ContextPayload

logger = logging.getLogger(__name__)


def act(payload: ContextPayload) -> ContextPayload:
    """Accept reasoning result and return action output + feedback.

    Stub: tags the payload with phase=act and logs transit.
    Timmy will flesh this out with tool execution, delegation,
    response generation, and feedback construction.
    """
    logger.info(
        "Phase 3 (Act) received: source=%s reasoned=%s",
        payload.source,
        payload.metadata.get("reasoned", False),
    )

    result = payload.with_metadata(phase="act", acted=True)

    logger.info(
        "Phase 3 (Act) produced: metadata_keys=%s",
        sorted(result.metadata.keys()),
    )
    return result
40  src/loop/runner.py (new file)
@@ -0,0 +1,40 @@
"""Loop runner — orchestrates the three phases in sequence.

Runs Gather → Reason → Act as a single cycle, passing output from each
phase as input to the next. The Act output feeds back as input to the
next Gather call.
"""

from __future__ import annotations

import logging

from loop.phase1_gather import gather
from loop.phase2_reason import reason
from loop.phase3_act import act
from loop.schema import ContextPayload

logger = logging.getLogger(__name__)


def run_cycle(payload: ContextPayload) -> ContextPayload:
    """Execute one full Gather → Reason → Act cycle.

    Returns the Act phase output, which can be fed back as input
    to the next cycle.
    """
    logger.info("=== Loop cycle start: source=%s ===", payload.source)

    gathered = gather(payload)
    reasoned = reason(gathered)
    acted = act(reasoned)

    logger.info(
        "=== Loop cycle complete: phases=%s ===",
        [
            gathered.metadata.get("phase"),
            reasoned.metadata.get("phase"),
            acted.metadata.get("phase"),
        ],
    )
    return acted
43  src/loop/schema.py (new file)
@@ -0,0 +1,43 @@
"""Data schema for the three-phase loop.

Each phase passes a ContextPayload forward. The schema is intentionally
minimal — Timmy decides what fields matter as the loop matures.
"""

from __future__ import annotations

import logging
from dataclasses import dataclass, field
from datetime import UTC, datetime

logger = logging.getLogger(__name__)


@dataclass
class ContextPayload:
    """Immutable context packet passed between loop phases.

    Attributes:
        source: Where this payload originated (e.g. "user", "timer", "event").
        content: The raw content string to process.
        timestamp: When the payload was created.
        token_count: Estimated token count for budget tracking. -1 = unknown.
        metadata: Arbitrary key-value pairs for phase-specific data.
    """

    source: str
    content: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(UTC))
    token_count: int = -1
    metadata: dict = field(default_factory=dict)

    def with_metadata(self, **kwargs: object) -> ContextPayload:
        """Return a new payload with additional metadata merged in."""
        merged = {**self.metadata, **kwargs}
        return ContextPayload(
            source=self.source,
            content=self.content,
            timestamp=self.timestamp,
            token_count=self.token_count,
            metadata=merged,
        )
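The three stub phases compose cleanly because `with_metadata` merges into a fresh dict instead of mutating the original payload, so each phase's tag survives while `phase` is overwritten at every step. A standalone rerun of the schema plus one cycle, with the timestamp/token fields dropped for brevity:

```python
from dataclasses import dataclass, field


@dataclass
class ContextPayload:
    source: str
    content: str
    metadata: dict = field(default_factory=dict)

    def with_metadata(self, **kwargs):
        # Merge, never mutate: each phase sees a fresh copy.
        return ContextPayload(self.source, self.content, {**self.metadata, **kwargs})


def gather(p):
    return p.with_metadata(phase="gather", gathered=True)


def reason(p):
    return p.with_metadata(phase="reason", reasoned=True)


def act(p):
    return p.with_metadata(phase="act", acted=True)


def run_cycle(p):
    return act(reason(gather(p)))


start = ContextPayload("user", "hello")
out = run_cycle(start)
# "phase" is overwritten each step; the boolean markers accumulate.
assert out.metadata == {"phase": "act", "gathered": True, "reasoned": True, "acted": True}
assert start.metadata == {}  # the original payload is untouched
```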
@@ -1 +1 @@
-"""Timmy — Core AI agent (Ollama/AirLLM backends, CLI, prompts)."""
+"""Timmy — Core AI agent (Ollama/Grok/Claude backends, CLI, prompts)."""
1  src/timmy/adapters/__init__.py (new file)
@@ -0,0 +1 @@
"""Adapters — normalize external data streams into sensory events."""
136  src/timmy/adapters/gitea_adapter.py (new file)
@@ -0,0 +1,136 @@
"""Gitea webhook adapter — normalize webhook payloads to event bus events.

Receives raw Gitea webhook payloads and emits typed events via the
infrastructure event bus. Bot-only activity is filtered unless it
represents a PR merge (which is always noteworthy).
"""

import logging
from typing import Any

from infrastructure.events.bus import emit

logger = logging.getLogger(__name__)

# Gitea usernames considered "bot" accounts
BOT_USERNAMES = frozenset({"hermes", "kimi", "manus"})

# Owner username — activity from this user is always emitted
OWNER_USERNAME = "rockachopa"

# Mapping from Gitea webhook event type to our bus event type
_EVENT_TYPE_MAP = {
    "push": "gitea.push",
    "issues": "gitea.issue.opened",
    "issue_comment": "gitea.issue.comment",
    "pull_request": "gitea.pull_request",
}


def _extract_actor(payload: dict[str, Any]) -> str:
    """Extract the actor username from a webhook payload."""
    # Gitea puts actor in sender.login for most events
    sender = payload.get("sender", {})
    return sender.get("login", "unknown")


def _is_bot(username: str) -> bool:
    return username.lower() in BOT_USERNAMES


def _is_pr_merge(event_type: str, payload: dict[str, Any]) -> bool:
    """Check if this is a pull_request merge event."""
    if event_type != "pull_request":
        return False
    action = payload.get("action", "")
    pr = payload.get("pull_request", {})
    return action == "closed" and pr.get("merged", False)


def _normalize_push(payload: dict[str, Any], actor: str) -> dict[str, Any]:
    """Normalize a push event payload."""
    commits = payload.get("commits", [])
    return {
        "actor": actor,
        "ref": payload.get("ref", ""),
        "repo": payload.get("repository", {}).get("full_name", ""),
        "num_commits": len(commits),
        "head_message": commits[0].get("message", "").split("\n", 1)[0].strip() if commits else "",
    }


def _normalize_issue_opened(payload: dict[str, Any], actor: str) -> dict[str, Any]:
    """Normalize an issue-opened event payload."""
    issue = payload.get("issue", {})
    return {
        "actor": actor,
        "action": payload.get("action", "opened"),
        "repo": payload.get("repository", {}).get("full_name", ""),
        "issue_number": issue.get("number", 0),
        "title": issue.get("title", ""),
    }


def _normalize_issue_comment(payload: dict[str, Any], actor: str) -> dict[str, Any]:
    """Normalize an issue-comment event payload."""
    issue = payload.get("issue", {})
    comment = payload.get("comment", {})
    return {
        "actor": actor,
        "action": payload.get("action", "created"),
        "repo": payload.get("repository", {}).get("full_name", ""),
        "issue_number": issue.get("number", 0),
        "issue_title": issue.get("title", ""),
        "comment_body": (comment.get("body", "")[:200]),
    }


def _normalize_pull_request(payload: dict[str, Any], actor: str) -> dict[str, Any]:
    """Normalize a pull-request event payload."""
    pr = payload.get("pull_request", {})
    return {
        "actor": actor,
        "action": payload.get("action", ""),
        "repo": payload.get("repository", {}).get("full_name", ""),
        "pr_number": pr.get("number", 0),
        "title": pr.get("title", ""),
        "merged": pr.get("merged", False),
    }


_NORMALIZERS = {
    "push": _normalize_push,
    "issues": _normalize_issue_opened,
    "issue_comment": _normalize_issue_comment,
    "pull_request": _normalize_pull_request,
}


async def handle_webhook(event_type: str, payload: dict[str, Any]) -> bool:
    """Normalize a Gitea webhook payload and emit it to the event bus.

    Args:
        event_type: The Gitea event type header (e.g. "push", "issues").
        payload: The raw JSON payload from the webhook.

    Returns:
        True if an event was emitted, False if filtered or unsupported.
    """
    bus_event_type = _EVENT_TYPE_MAP.get(event_type)
    if bus_event_type is None:
        logger.debug("Unsupported Gitea event type: %s", event_type)
        return False

    actor = _extract_actor(payload)

    # Filter bot-only activity — except PR merges
    if _is_bot(actor) and not _is_pr_merge(event_type, payload):
        logger.debug("Filtered bot activity from %s on %s", actor, event_type)
        return False

    normalizer = _NORMALIZERS[event_type]
    data = normalizer(payload, actor)

    await emit(bus_event_type, source="gitea", data=data)
    logger.info("Emitted %s from %s", bus_event_type, actor)
    return True
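The filter rule, drop bot activity except PR merges, is easy to get backwards, so it is worth checking the predicate in isolation. A standalone rerun of the two helpers plus a `should_emit` wrapper (the wrapper name is illustrative; the real module inlines this check in `handle_webhook`):

```python
BOT_USERNAMES = frozenset({"hermes", "kimi", "manus"})


def _is_bot(username: str) -> bool:
    return username.lower() in BOT_USERNAMES


def _is_pr_merge(event_type: str, payload: dict) -> bool:
    if event_type != "pull_request":
        return False
    action = payload.get("action", "")
    pr = payload.get("pull_request", {})
    return action == "closed" and pr.get("merged", False)


def should_emit(event_type: str, payload: dict) -> bool:
    # Mirrors the guard in handle_webhook: filter bots unless it's a PR merge.
    actor = payload.get("sender", {}).get("login", "unknown")
    return not (_is_bot(actor) and not _is_pr_merge(event_type, payload))


# A bot push is filtered (the lookup is case-insensitive)...
assert not should_emit("push", {"sender": {"login": "Kimi"}})
# ...but a bot-authored PR merge still gets through.
merge = {"sender": {"login": "kimi"}, "action": "closed", "pull_request": {"merged": True}}
assert should_emit("pull_request", merge)
# Owner activity is never filtered.
assert should_emit("push", {"sender": {"login": "rockachopa"}})
```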
82  src/timmy/adapters/time_adapter.py (new file)
@@ -0,0 +1,82 @@
"""Time adapter — circadian awareness for Timmy.

Emits time-of-day events so Timmy knows the current period
and tracks how long since the last user interaction.
"""

import logging
from datetime import UTC, datetime

from infrastructure.events.bus import emit

logger = logging.getLogger(__name__)

# Time-of-day periods: (event_name, start_hour, end_hour)
_PERIODS = [
    ("morning", 6, 9),
    ("afternoon", 12, 14),
    ("evening", 18, 20),
    ("late_night", 23, 24),
    ("late_night", 0, 3),
]


def classify_period(hour: int) -> str | None:
    """Return the circadian period name for a given hour, or None."""
    for name, start, end in _PERIODS:
        if start <= hour < end:
            return name
    return None


class TimeAdapter:
    """Emits circadian and interaction-tracking events."""

    def __init__(self) -> None:
        self._last_interaction: datetime | None = None
        self._last_period: str | None = None
        self._last_date: str | None = None

    def record_interaction(self, now: datetime | None = None) -> None:
        """Record a user interaction timestamp."""
        self._last_interaction = now or datetime.now(UTC)

    def time_since_last_interaction(
        self,
        now: datetime | None = None,
    ) -> float | None:
        """Seconds since last user interaction, or None if no interaction."""
        if self._last_interaction is None:
            return None
        current = now or datetime.now(UTC)
        return (current - self._last_interaction).total_seconds()

    async def tick(self, now: datetime | None = None) -> list[str]:
        """Check current time and emit relevant events.

        Returns list of event types emitted (useful for testing).
        """
        current = now or datetime.now(UTC)
        emitted: list[str] = []

        # --- new_day ---
        date_str = current.strftime("%Y-%m-%d")
        if self._last_date is not None and date_str != self._last_date:
            event_type = "time.new_day"
            await emit(event_type, source="time_adapter", data={"date": date_str})
            emitted.append(event_type)
        self._last_date = date_str

        # --- circadian period ---
        period = classify_period(current.hour)
        if period is not None and period != self._last_period:
            event_type = f"time.{period}"
            await emit(
                event_type,
                source="time_adapter",
                data={"hour": current.hour, "period": period},
            )
            emitted.append(event_type)
            self._last_period = period

        return emitted
@@ -26,12 +26,12 @@ from timmy.prompts import get_system_prompt
 from timmy.tools import create_full_toolkit

 if TYPE_CHECKING:
-    from timmy.backends import ClaudeBackend, GrokBackend, TimmyAirLLMAgent
+    from timmy.backends import ClaudeBackend, GrokBackend

 logger = logging.getLogger(__name__)

 # Union type for callers that want to hint the return type.
-TimmyAgent = Union[Agent, "TimmyAirLLMAgent", "GrokBackend", "ClaudeBackend"]
+TimmyAgent = Union[Agent, "GrokBackend", "ClaudeBackend"]

 # Models known to be too small for reliable tool calling.
 # These hallucinate tool calls as text, invoke tools randomly,
@@ -63,7 +63,7 @@ def _pull_model(model_name: str) -> bool:

     logger.info("Pulling model: %s", model_name)

-    url = settings.ollama_url.replace("localhost", "127.0.0.1")
+    url = settings.normalized_ollama_url
     req = urllib.request.Request(
         f"{url}/api/pull",
         method="POST",
@@ -172,107 +172,34 @@ def _warmup_model(model_name: str) -> bool:
|
||||
|
||||
|
||||
def _resolve_backend(requested: str | None) -> str:
|
||||
"""Return the backend name to use, resolving 'auto' and explicit overrides.
|
||||
"""Return the backend name to use.
|
||||
|
||||
Priority (highest → lowest):
|
||||
Priority (highest -> lowest):
|
||||
1. CLI flag passed directly to create_timmy()
|
||||
2. TIMMY_MODEL_BACKEND env var / .env setting
|
||||
3. 'ollama' (safe default — no surprises)
|
||||
|
||||
'auto' triggers Apple Silicon detection: uses AirLLM if both
|
||||
is_apple_silicon() and airllm_available() return True.
|
||||
3. 'ollama' (safe default -- no surprises)
|
||||
"""
|
||||
if requested is not None:
|
||||
return requested
|
||||
|
||||
configured = settings.timmy_model_backend # "ollama" | "airllm" | "grok" | "claude" | "auto"
|
||||
if configured != "auto":
|
||||
return configured
|
||||
|
||||
# "auto" path — lazy import to keep startup fast and tests clean.
|
||||
from timmy.backends import airllm_available, is_apple_silicon
|
||||
|
||||
if is_apple_silicon() and airllm_available():
|
||||
return "airllm"
|
||||
return "ollama"
|
||||
return settings.timmy_model_backend # "ollama" | "grok" | "claude"
|
||||
|
||||
|
||||
def create_timmy(
|
||||
db_file: str = "timmy.db",
|
||||
backend: str | None = None,
|
||||
model_size: str | None = None,
|
||||
*,
|
||||
skip_mcp: bool = False,
|
||||
def _build_tools_list(use_tools: bool, skip_mcp: bool, model_name: str) -> list:
    """Assemble the tools list based on model capability and MCP flags.

    Args:
        use_tools: Whether the model supports reliable tool calling.
        skip_mcp: If True, omit MCP tool servers (Gitea, filesystem).
            Use for background tasks (thinking, QA) where MCP's
            stdio cancel-scope lifecycle conflicts with asyncio
            task cancellation.
        model_name: Resolved model name (used for logging).

    Returns a list of Toolkit / MCPTools objects, or an empty list.
    """
    if not use_tools:
        logger.info("Tools disabled for model %s (too small for reliable tool calling)", model_name)
        return []

    tools_list: list = [create_full_toolkit()]

    # Add MCP tool servers (lazy-connected on first arun()).
    # Skipped when skip_mcp=True — MCP's stdio transport uses anyio cancel
    # scopes that conflict with asyncio background task cancellation (#72).
    if not skip_mcp:
        try:
            from timmy.mcp_tools import create_filesystem_mcp_tools, create_gitea_mcp_tools

@@ -286,30 +213,46 @@ def create_timmy(
        except Exception as exc:
            logger.debug("MCP tools unavailable: %s", exc)

    return tools_list

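The gating order above can be exercised in isolation. This is a minimal stand-in, not the project's code: `make_toolkit` and `make_mcp` are hypothetical factories in place of `create_full_toolkit` and the MCP helpers, and MCP failures are deliberately non-fatal:

```python
import logging

logger = logging.getLogger(__name__)


def build_tools_list(use_tools: bool, skip_mcp: bool, make_toolkit, make_mcp) -> list:
    """Mirror of the gating logic: capability check first, MCP last and best-effort."""
    if not use_tools:
        return []  # small models get no tools at all
    tools = [make_toolkit()]
    if not skip_mcp:
        try:
            tools.extend(make_mcp())  # MCP servers are optional extras
        except Exception as exc:
            logger.debug("MCP unavailable: %s", exc)
    return tools
```

Note the asymmetry: a missing toolkit is a hard error, while a missing MCP server only degrades the list.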
def _build_prompt(use_tools: bool, session_id: str) -> str:
    """Build the full system prompt with optional memory context."""
    base_prompt = get_system_prompt(tools_enabled=use_tools, session_id=session_id)

    # Try to load memory context
    try:
        from timmy.memory_system import memory_system

        memory_context = memory_system.get_system_context()
        if memory_context:
            # Smaller budget for small models — expanded prompt uses more tokens
            max_context = 2000 if not use_tools else 8000
            if len(memory_context) > max_context:
                memory_context = memory_context[:max_context] + "\n... [truncated]"
            return (
                f"{base_prompt}\n\n"
                f"## GROUNDED CONTEXT (verified sources — cite when using)\n\n"
                f"{memory_context}"
            )
    except Exception as exc:
        logger.warning("Failed to load memory context: %s", exc)

    return base_prompt

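The truncation budget is a simple length rule worth seeing on its own. A minimal sketch (the `fit_context` name and the `## Memory Context` header are illustrative, not the project's exact strings):

```python
def fit_context(base_prompt: str, memory_context: str, use_tools: bool) -> str:
    """Append memory context under a per-capability character budget."""
    # Small models get a tighter budget: their expanded prompt already
    # spends more of the context window.
    max_context = 2000 if not use_tools else 8000
    if len(memory_context) > max_context:
        memory_context = memory_context[:max_context] + "\n... [truncated]"
    return f"{base_prompt}\n\n## Memory Context\n\n{memory_context}"
```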
def _create_ollama_agent(
    *,
    db_file: str,
    model_name: str,
    tools_list: list,
    full_prompt: str,
    use_tools: bool,
) -> Agent:
    """Construct the Agno Agent with Ollama backend and warm up the model."""
    model_kwargs = {}
    if settings.ollama_num_ctx > 0:
        model_kwargs["options"] = {"num_ctx": settings.ollama_num_ctx}

    agent = Agent(
        name="Agent",
        model=Ollama(id=model_name, host=settings.ollama_url, timeout=300, **model_kwargs),
@@ -326,6 +269,67 @@ def create_timmy(
    return agent

def create_timmy(
    db_file: str = "timmy.db",
    backend: str | None = None,
    *,
    skip_mcp: bool = False,
    session_id: str = "unknown",
) -> TimmyAgent:
    """Instantiate the agent — Ollama, Grok, or Claude.

    Args:
        db_file: SQLite file for Agno conversation memory (Ollama path only).
        backend: "ollama" | "grok" | "claude" | None (reads config/env).
        skip_mcp: If True, omit MCP tool servers (Gitea, filesystem).
            Use for background tasks (thinking, QA) where MCP's
            stdio cancel-scope lifecycle conflicts with asyncio
            task cancellation.

    Returns an Agno Agent or backend-specific agent — all expose
    print_response(message, stream).
    """
    resolved = _resolve_backend(backend)

    if resolved == "claude":
        from timmy.backends import ClaudeBackend

        return ClaudeBackend()

    if resolved == "grok":
        from timmy.backends import GrokBackend

        return GrokBackend()

    # Default: Ollama via Agno.
    # Resolve model with automatic pulling and fallback.
    model_name, is_fallback = _resolve_model_with_fallback(
        requested_model=None,
        require_vision=False,
        auto_pull=True,
    )

    # If Ollama is completely unreachable, fail loudly.
    # Sovereignty: never silently send data to a cloud API.
    # Use --backend claude explicitly if you want cloud inference.
    if not _check_model_available(model_name):
        logger.error(
            "Ollama unreachable and no local models available. "
            "Start Ollama with 'ollama serve' or use --backend claude explicitly."
        )

    if is_fallback:
        logger.info("Using fallback model %s (requested was unavailable)", model_name)

    use_tools = _model_supports_tools(model_name)
    tools_list = _build_tools_list(use_tools, skip_mcp, model_name)
    full_prompt = _build_prompt(use_tools, session_id)

    return _create_ollama_agent(
        db_file=db_file,
        model_name=model_name,
        tools_list=tools_list,
        full_prompt=full_prompt,
        use_tools=use_tools,
    )


class TimmyWithMemory:
    """Agent wrapper with explicit three-tier memory management."""

@@ -18,6 +18,7 @@ from __future__ import annotations
import asyncio
import logging
import re
import threading
import time
import uuid
from collections.abc import Callable
@@ -59,6 +60,7 @@ class AgenticResult:
# ---------------------------------------------------------------------------

_loop_agent = None
_loop_agent_lock = threading.Lock()


def _get_loop_agent():
@@ -69,9 +71,11 @@ def _get_loop_agent():
    """
    global _loop_agent
    if _loop_agent is None:
        with _loop_agent_lock:
            if _loop_agent is None:
                from timmy.agent import create_timmy

                _loop_agent = create_timmy()
    return _loop_agent


@@ -91,6 +95,126 @@ def _parse_steps(plan_text: str) -> list[str]:
    return [line.strip() for line in plan_text.strip().splitlines() if line.strip()]

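The lock-guarded lazy initialization above is the classic double-checked pattern: an unlocked read on the hot path, with the lock taken only when the singleton might still need building. A generic sketch:

```python
import threading

_instance = None
_instance_lock = threading.Lock()


def get_singleton(factory):
    """Create-once accessor: cheap unlocked read, lock only on the cold path."""
    global _instance
    if _instance is None:  # fast path, no lock
        with _instance_lock:
            if _instance is None:  # re-check under the lock
                _instance = factory()
    return _instance
```

The inner re-check is what prevents two threads that both saw `None` from each calling `factory()`.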
# ---------------------------------------------------------------------------
# Extracted helpers
# ---------------------------------------------------------------------------


def _extract_content(run_result) -> str:
    """Extract text content from an agent run result."""
    return run_result.content if hasattr(run_result, "content") else str(run_result)


def _clean(text: str) -> str:
    """Clean a model response using session's response cleaner."""
    from timmy.session import _clean_response

    return _clean_response(text)

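`_extract_content` relies on duck typing rather than a fixed result type, so it works for any backend's run result. The same pattern standalone:

```python
def extract_content(run_result) -> str:
    """Prefer a .content attribute, fall back to str() for anything else."""
    return run_result.content if hasattr(run_result, "content") else str(run_result)
```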
async def _plan_task(
    agent, task: str, session_id: str, max_steps: int
) -> tuple[list[str], bool] | str:
    """Run the planning phase — returns (steps, was_truncated) or error string."""
    plan_prompt = (
        f"Break this task into numbered steps (max {max_steps}). "
        f"Return ONLY a numbered list, nothing else.\n\n"
        f"Task: {task}"
    )
    try:
        plan_run = await asyncio.to_thread(
            agent.run, plan_prompt, stream=False, session_id=f"{session_id}_plan"
        )
        plan_text = _extract_content(plan_run)
    except Exception as exc:  # broad catch intentional: agent.run can raise any error
        logger.error("Agentic loop: planning failed: %s", exc)
        return f"Planning failed: {exc}"

    steps = _parse_steps(plan_text)
    if not steps:
        return "Planning produced no steps."

    planned_count = len(steps)
    steps = steps[:max_steps]
    return steps, planned_count > len(steps)

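`_plan_task` splits parsing from truncation tracking. Both concerns combined in one standalone sketch (hypothetical `parse_plan` name; the real code delegates to `_parse_steps`):

```python
def parse_plan(plan_text: str, max_steps: int) -> tuple[list[str], bool]:
    """Parse a numbered-list plan, cap it at max_steps, and report truncation."""
    steps = [line.strip() for line in plan_text.strip().splitlines() if line.strip()]
    truncated = len(steps) > max_steps
    return steps[:max_steps], truncated
```

The truncation flag matters later: a capped plan forces the final status to "partial" even if every executed step succeeds.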
async def _execute_step(
    agent,
    task: str,
    step_desc: str,
    step_num: int,
    total_steps: int,
    recent_results: list[str],
    session_id: str,
) -> AgenticStep:
    """Execute a single step, returning an AgenticStep."""
    step_start = time.monotonic()
    context = (
        f"Task: {task}\n"
        f"Step {step_num}/{total_steps}: {step_desc}\n"
        f"Recent progress: {recent_results[-2:] if recent_results else []}\n\n"
        f"Execute this step and report what you did."
    )
    step_run = await asyncio.to_thread(
        agent.run, context, stream=False, session_id=f"{session_id}_step{step_num}"
    )
    step_result = _clean(_extract_content(step_run))
    return AgenticStep(
        step_num=step_num,
        description=step_desc,
        result=step_result,
        status="completed",
        duration_ms=int((time.monotonic() - step_start) * 1000),
    )

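The `recent_results[-2:]` window keeps the per-step prompt bounded regardless of plan length. The same context-building rule as a pure helper (names are illustrative):

```python
def step_context(task: str, step_desc: str, step_num: int, total: int, recent: list[str]) -> str:
    """Build per-step context carrying only the last two results (bounded prompt size)."""
    return (
        f"Task: {task}\n"
        f"Step {step_num}/{total}: {step_desc}\n"
        f"Recent progress: {recent[-2:] if recent else []}\n\n"
        f"Execute this step and report what you did."
    )
```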
async def _adapt_step(
    agent,
    step_desc: str,
    step_num: int,
    error: Exception,
    step_start: float,
    session_id: str,
) -> AgenticStep:
    """Attempt adaptation after a step failure."""
    adapt_prompt = (
        f"Step {step_num} failed with error: {error}\n"
        f"Original step was: {step_desc}\n"
        f"Adapt the plan and try an alternative approach for this step."
    )
    adapt_run = await asyncio.to_thread(
        agent.run, adapt_prompt, stream=False, session_id=f"{session_id}_adapt{step_num}"
    )
    adapt_result = _clean(_extract_content(adapt_run))
    return AgenticStep(
        step_num=step_num,
        description=f"[Adapted] {step_desc}",
        result=adapt_result,
        status="adapted",
        duration_ms=int((time.monotonic() - step_start) * 1000),
    )

def _summarize(result: AgenticResult, total_steps: int, was_truncated: bool) -> None:
    """Fill in summary and final status on the result object (mutates in place)."""
    completed = sum(1 for s in result.steps if s.status == "completed")
    adapted = sum(1 for s in result.steps if s.status == "adapted")
    failed = sum(1 for s in result.steps if s.status == "failed")

    parts = [f"Completed {completed}/{total_steps} steps"]
    if adapted:
        parts.append(f"{adapted} adapted")
    if failed:
        parts.append(f"{failed} failed")
    result.summary = f"{result.task}: {', '.join(parts)}."

    if was_truncated or len(result.steps) < total_steps or failed:
        result.status = "partial"
    else:
        result.status = "completed"


# ---------------------------------------------------------------------------
# Core loop
# ---------------------------------------------------------------------------
@@ -121,88 +245,41 @@ async def run_agentic_loop(

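The `_summarize` status rule can be exercised directly on plain status strings. A sketch mirroring the logic on lists rather than the real `AgenticResult` object: any failure, truncation, or shortfall yields "partial".

```python
def summarize_statuses(statuses: list[str], total_steps: int, was_truncated: bool) -> tuple[str, str]:
    """Return (summary, status) from per-step status strings."""
    completed = statuses.count("completed")
    adapted = statuses.count("adapted")
    failed = statuses.count("failed")

    parts = [f"Completed {completed}/{total_steps} steps"]
    if adapted:
        parts.append(f"{adapted} adapted")
    if failed:
        parts.append(f"{failed} failed")
    summary = ", ".join(parts) + "."

    status = "partial" if (was_truncated or len(statuses) < total_steps or failed) else "completed"
    return summary, status
```

Note that adapted steps still count toward a "completed" run: adaptation is recovery, not failure.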
    task_id = str(uuid.uuid4())[:8]
    start_time = time.monotonic()

    agent = _get_loop_agent()
    result = AgenticResult(task_id=task_id, task=task, summary="")

    # Phase 1: Planning
    plan = await _plan_task(agent, task, session_id, max_steps)
    if isinstance(plan, str):
        result.status = "failed"
        result.summary = plan
        result.total_duration_ms = int((time.monotonic() - start_time) * 1000)
        return result

    steps, was_truncated = plan
    total_steps = len(steps)

    # Broadcast plan
    await _broadcast_progress(
        "agentic.plan_ready",
        {"task_id": task_id, "task": task, "steps": steps, "total": total_steps},
    )

    # Phase 2: Execution
    completed_results: list[str] = []

    for i, step_desc in enumerate(steps, 1):
        step_start = time.monotonic()

        try:
            step = await _execute_step(
                agent,
                task,
                step_desc,
                i,
                total_steps,
                completed_results,
                session_id,
            )
            result.steps.append(step)
            completed_results.append(f"Step {i}: {step.result[:200]}")
            await _broadcast_progress(
                "agentic.step_complete",
                {
@@ -210,46 +287,18 @@ async def run_agentic_loop(
                    "step": i,
                    "total": total_steps,
                    "description": step_desc,
                    "result": step.result[:200],
                },
            )

            if on_progress:
                await on_progress(step_desc, i, total_steps)

        except Exception as exc:  # broad catch intentional: agent.run can raise any error
            logger.warning("Agentic loop step %d failed: %s", i, exc)

            try:
                step = await _adapt_step(agent, step_desc, i, exc, step_start, session_id)
                result.steps.append(step)
                completed_results.append(f"Step {i} (adapted): {step.result[:200]}")
                await _broadcast_progress(
                    "agentic.step_adapted",
                    {
@@ -258,46 +307,26 @@ async def run_agentic_loop(
                        "total": total_steps,
                        "description": step_desc,
                        "error": str(exc),
                        "adaptation": step.result[:200],
                    },
                )

                if on_progress:
                    await on_progress(f"[Adapted] {step_desc}", i, total_steps)

            except Exception as adapt_exc:  # broad catch intentional
                logger.error("Agentic loop adaptation also failed: %s", adapt_exc)
                result.steps.append(
                    AgenticStep(
                        step_num=i,
                        description=step_desc,
                        result=f"Failed: {exc}; Adaptation also failed: {adapt_exc}",
                        status="failed",
                        duration_ms=int((time.monotonic() - step_start) * 1000),
                    )
                )
                completed_results.append(f"Step {i}: FAILED")

    # Phase 3: Summary
    _summarize(result, total_steps, was_truncated)
    result.total_duration_ms = int((time.monotonic() - start_time) * 1000)

    await _broadcast_progress(
@@ -119,75 +119,84 @@ class BaseAgent(ABC):
        """
        pass

    # Transient errors that indicate Ollama contention or temporary
    # unavailability — these deserve a retry with backoff.
    _TRANSIENT = (
        httpx.ConnectError,
        httpx.ReadError,
        httpx.ReadTimeout,
        httpx.ConnectTimeout,
        ConnectionError,
        TimeoutError,
    )

    async def run(self, message: str, *, max_retries: int = 3) -> str:
        """Run the agent with a message, retrying on transient failures.

        GPU contention from concurrent Ollama requests causes ReadError /
        ReadTimeout — these are transient and retried with exponential
        backoff (#70).
        """
        response = await self._run_with_retries(message, max_retries)
        await self._emit_response_event(message, response)
        return response

    async def _run_with_retries(self, message: str, max_retries: int) -> str:
        """Execute agent.run() with retry logic for transient errors."""
        for attempt in range(1, max_retries + 1):
            try:
                result = self.agent.run(message, stream=False)
                return result.content if hasattr(result, "content") else str(result)
            except self._TRANSIENT as exc:
                self._handle_retry_or_raise(
                    exc,
                    attempt,
                    max_retries,
                    transient=True,
                )
                # Contention backoff — longer waits because the GPU
                # needs time to finish the other request.
                await asyncio.sleep(min(2**attempt, 16))
            except Exception as exc:
                self._handle_retry_or_raise(
                    exc,
                    attempt,
                    max_retries,
                    transient=False,
                )
                await asyncio.sleep(min(2 ** (attempt - 1), 8))
        # Unreachable — _handle_retry_or_raise raises on last attempt.
        raise RuntimeError("retry loop exited unexpectedly")  # pragma: no cover

    @staticmethod
    def _handle_retry_or_raise(
        exc: Exception,
        attempt: int,
        max_retries: int,
        *,
        transient: bool,
    ) -> None:
        """Log a retry warning or raise after exhausting attempts."""
        if attempt < max_retries:
            if transient:
                logger.warning(
                    "Ollama contention on attempt %d/%d: %s. Waiting before retry...",
                    attempt,
                    max_retries,
                    type(exc).__name__,
                )
            else:
                logger.warning(
                    "Agent run failed on attempt %d/%d: %s. Retrying...",
                    attempt,
                    max_retries,
                    exc,
                )
        else:
            label = "Ollama unreachable" if transient else "Agent run failed"
            logger.error("%s after %d attempts: %s", label, max_retries, exc)
            raise exc

    async def _emit_response_event(self, message: str, response: str) -> None:
        """Publish a completion event to the event bus if connected."""
        if self.event_bus:
            await self.event_bus.publish(
                Event(
@@ -197,8 +206,6 @@ class BaseAgent(ABC):
                )
            )

    def get_capabilities(self) -> list[str]:
        """Get list of capabilities this agent provides."""
        return self.tools

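The two backoff curves above differ only in base and cap: transient contention errors wait `min(2**attempt, 16)` seconds, while other errors wait `min(2**(attempt - 1), 8)`. A sketch that materializes both schedules makes the difference visible:

```python
def backoff_schedule(max_retries: int, *, transient: bool) -> list[int]:
    """Seconds waited before each retry; contention errors back off harder."""
    if transient:
        # GPU contention: start at 2s and cap at 16s.
        return [min(2**attempt, 16) for attempt in range(1, max_retries + 1)]
    # Generic failures: start at 1s and cap at 8s.
    return [min(2 ** (attempt - 1), 8) for attempt in range(1, max_retries + 1)]
```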
@@ -1,11 +1,10 @@
"""LLM backends — Grok (xAI) and Claude (Anthropic).

Provides drop-in replacements for the Agno Agent that expose the same
run(message, stream) → RunResult interface used by the dashboard and the
print_response(message, stream) interface used by the CLI.

Backends:
    - GrokBackend: xAI Grok API via OpenAI-compatible SDK (opt-in premium)
    - ClaudeBackend: Anthropic Claude API — lightweight cloud fallback

@@ -16,21 +15,11 @@ import logging
import platform
import time
from dataclasses import dataclass

from timmy.prompts import get_system_prompt

logger = logging.getLogger(__name__)


@dataclass
class RunResult:
@@ -45,108 +34,6 @@ def is_apple_silicon() -> bool:
    return platform.system() == "Darwin" and platform.machine() == "arm64"


# ── Grok (xAI) Backend ─────────────────────────────────────────────────────
# Premium cloud augmentation — opt-in only, never the default path.

@@ -187,7 +74,7 @@ class GrokBackend:
    Uses the OpenAI-compatible SDK to connect to xAI's API.
    Only activated when GROK_ENABLED=true and XAI_API_KEY is set.

    Exposes the same interface as Agno Agent:
        run(message, stream) → RunResult        [dashboard]
        print_response(message, stream) → None  [CLI]
        health_check() → dict                   [monitoring]
@@ -212,23 +99,27 @@ class GrokBackend:

    def _get_client(self):
        """Create OpenAI client configured for xAI endpoint."""
        from config import settings

        import httpx
        from openai import OpenAI

        return OpenAI(
            api_key=self._api_key,
            base_url=settings.xai_base_url,
            timeout=httpx.Timeout(300.0),
        )

    async def _get_async_client(self):
        """Create async OpenAI client configured for xAI endpoint."""
        from config import settings

        import httpx
        from openai import AsyncOpenAI

        return AsyncOpenAI(
            api_key=self._api_key,
            base_url=settings.xai_base_url,
            timeout=httpx.Timeout(300.0),
        )

@@ -437,8 +328,7 @@ CLAUDE_MODELS: dict[str, str] = {
class ClaudeBackend:
    """Anthropic Claude backend — cloud fallback when local models are offline.

    Uses the official Anthropic SDK. Same interface as GrokBackend:
        run(message, stream) → RunResult        [dashboard]
        print_response(message, stream) → None  [CLI]
        health_check() → dict                   [monitoring]
src/timmy/cli.py
@@ -22,13 +22,13 @@ _BACKEND_OPTION = typer.Option(
    None,
    "--backend",
    "-b",
    help="Inference backend: 'ollama' (default) | 'grok' | 'claude'",
)
_MODEL_SIZE_OPTION = typer.Option(
    None,
    "--model-size",
    "-s",
    help="Model size (reserved for future use).",
)


@@ -37,6 +37,35 @@ def _is_interactive() -> bool:
    return hasattr(sys.stdin, "isatty") and sys.stdin.isatty()


def _prompt_interactive(req, tool_name: str, tool_args: dict) -> None:
    """Display tool details and prompt the human for approval."""
    description = format_action_description(tool_name, tool_args)
    impact = get_impact_level(tool_name)

    typer.echo()
    typer.echo(typer.style("Tool confirmation required", bold=True))
    typer.echo(f"  Impact: {impact.upper()}")
    typer.echo(f"  {description}")
    typer.echo()

    if typer.confirm("Allow this action?", default=False):
        req.confirm()
        logger.info("CLI: approved %s", tool_name)
    else:
        req.reject(note="User rejected from CLI")
        logger.info("CLI: rejected %s", tool_name)


def _decide_autonomous(req, tool_name: str, tool_args: dict) -> None:
    """Auto-approve allowlisted tools; reject everything else."""
    if is_allowlisted(tool_name, tool_args):
        req.confirm()
        logger.info("AUTO-APPROVED (allowlist): %s", tool_name)
    else:
        req.reject(note="Auto-rejected: not in allowlist")
        logger.info("AUTO-REJECTED (not allowlisted): %s %s", tool_name, str(tool_args)[:100])

def _handle_tool_confirmation(agent, run_output, session_id: str, *, autonomous: bool = False):
|
||||
"""Prompt user to approve/reject dangerous tool calls.
|
||||
|
||||
@@ -51,6 +80,7 @@ def _handle_tool_confirmation(agent, run_output, session_id: str, *, autonomous:
|
||||
Returns the final RunOutput after all confirmations are resolved.
|
||||
"""
|
||||
interactive = _is_interactive() and not autonomous
|
||||
decide = _prompt_interactive if interactive else _decide_autonomous
|
||||
|
||||
max_rounds = 10 # safety limit
|
||||
for _ in range(max_rounds):
|
||||
@@ -66,39 +96,10 @@ def _handle_tool_confirmation(agent, run_output, session_id: str, *, autonomous:
|
||||
for req in reqs:
|
||||
if not getattr(req, "needs_confirmation", False):
|
||||
continue
|
||||
|
||||
te = req.tool_execution
|
||||
tool_name = getattr(te, "tool_name", "unknown")
|
||||
tool_args = getattr(te, "tool_args", {}) or {}
|
||||
|
||||
if interactive:
|
||||
# Human present — prompt for approval
|
||||
description = format_action_description(tool_name, tool_args)
|
||||
impact = get_impact_level(tool_name)
|
||||
|
||||
typer.echo()
|
||||
typer.echo(typer.style("Tool confirmation required", bold=True))
|
||||
typer.echo(f" Impact: {impact.upper()}")
|
||||
typer.echo(f" {description}")
|
||||
typer.echo()
|
||||
|
||||
approved = typer.confirm("Allow this action?", default=False)
|
||||
if approved:
|
||||
req.confirm()
|
||||
logger.info("CLI: approved %s", tool_name)
|
||||
else:
|
||||
req.reject(note="User rejected from CLI")
|
||||
logger.info("CLI: rejected %s", tool_name)
|
||||
else:
|
||||
# Autonomous mode — check allowlist
|
||||
if is_allowlisted(tool_name, tool_args):
|
||||
req.confirm()
|
||||
logger.info("AUTO-APPROVED (allowlist): %s", tool_name)
|
||||
else:
|
||||
req.reject(note="Auto-rejected: not in allowlist")
|
||||
logger.info(
|
||||
"AUTO-REJECTED (not allowlisted): %s %s", tool_name, str(tool_args)[:100]
|
||||
)
|
||||
decide(req, tool_name, tool_args)
|
||||
|
||||
# Resume the run so the agent sees the confirmation result
|
||||
try:
|
||||
@@ -138,7 +139,7 @@ def think(
     model_size: str | None = _MODEL_SIZE_OPTION,
 ):
     """Ask Timmy to think carefully about a topic."""
-    timmy = create_timmy(backend=backend, model_size=model_size, session_id=_CLI_SESSION_ID)
+    timmy = create_timmy(backend=backend, session_id=_CLI_SESSION_ID)
     timmy.print_response(f"Think carefully about: {topic}", stream=True, session_id=_CLI_SESSION_ID)


@@ -201,7 +202,7 @@ def chat(
         session_id = str(uuid.uuid4())
     else:
         session_id = _CLI_SESSION_ID
-    timmy = create_timmy(backend=backend, model_size=model_size, session_id=session_id)
+    timmy = create_timmy(backend=backend, session_id=session_id)

     # Use agent.run() so we can intercept paused runs for tool confirmation.
     run_output = timmy.run(message_str, stream=False, session_id=session_id)
@@ -278,7 +279,7 @@ def status(
     model_size: str | None = _MODEL_SIZE_OPTION,
 ):
     """Print Timmy's operational status."""
-    timmy = create_timmy(backend=backend, model_size=model_size, session_id=_CLI_SESSION_ID)
+    timmy = create_timmy(backend=backend, session_id=_CLI_SESSION_ID)
     timmy.print_response(STATUS_PROMPT, stream=False, session_id=_CLI_SESSION_ID)

@@ -416,5 +417,40 @@ def route(
     typer.echo("→ orchestrator (no pattern match)")


+@app.command()
+def focus(
+    topic: str | None = typer.Argument(
+        None, help='Topic to focus on (e.g. "three-phase loop"). Omit to show current focus.'
+    ),
+    clear: bool = typer.Option(False, "--clear", "-c", help="Clear focus and return to broad mode"),
+):
+    """Set deep-focus mode on a single problem.
+
+    When focused, Timmy prioritizes the active topic in all responses
+    and deprioritizes unrelated context. Focus persists across sessions.
+
+    Examples:
+        timmy focus "three-phase loop"   # activate deep focus
+        timmy focus                      # show current focus
+        timmy focus --clear              # return to broad mode
+    """
+    from timmy.focus import focus_manager
+
+    if clear:
+        focus_manager.clear()
+        typer.echo("Focus cleared — back to broad mode.")
+        return
+
+    if topic:
+        focus_manager.set_topic(topic)
+        typer.echo(f'Deep focus activated: "{topic}"')
+    else:
+        # Show current focus status
+        if focus_manager.is_focused():
+            typer.echo(f'Deep focus: "{focus_manager.get_topic()}"')
+        else:
+            typer.echo("No active focus (broad mode).")
+
+
 def main():
     app()
250  src/timmy/cognitive_state.py  Normal file
@@ -0,0 +1,250 @@
"""Observable cognitive state for Timmy.

Tracks Timmy's internal cognitive signals — focus, engagement, mood,
and active commitments — so external systems (Matrix avatar, dashboard)
can render observable behaviour.

State is published via ``workshop_state.py`` → ``presence.json`` and the
WebSocket relay. The old ``~/.tower/timmy-state.txt`` file has been
deprecated (see #384).
"""

import asyncio
import json
import logging
from dataclasses import asdict, dataclass, field

from timmy.confidence import estimate_confidence

logger = logging.getLogger(__name__)

# ---------------------------------------------------------------------------
# Schema
# ---------------------------------------------------------------------------

ENGAGEMENT_LEVELS = ("idle", "surface", "deep")
MOOD_VALUES = ("curious", "settled", "hesitant", "energized")


@dataclass
class CognitiveState:
    """Observable snapshot of Timmy's cognitive state."""

    focus_topic: str | None = None
    engagement: str = "idle"  # idle | surface | deep
    mood: str = "settled"  # curious | settled | hesitant | energized
    conversation_depth: int = 0
    last_initiative: str | None = None
    active_commitments: list[str] = field(default_factory=list)

    # Internal tracking (not written to state file)
    _confidence_sum: float = field(default=0.0, repr=False)
    _confidence_count: int = field(default=0, repr=False)

    # ------------------------------------------------------------------
    # Serialisation helpers
    # ------------------------------------------------------------------

    def to_dict(self) -> dict:
        """Public fields only (exclude internal tracking)."""
        d = asdict(self)
        d.pop("_confidence_sum", None)
        d.pop("_confidence_count", None)
        return d


# ---------------------------------------------------------------------------
# Cognitive signal extraction
# ---------------------------------------------------------------------------

# Keywords that suggest deep engagement
_DEEP_KEYWORDS = frozenset(
    {
        "architecture",
        "design",
        "implement",
        "refactor",
        "debug",
        "analyze",
        "investigate",
        "deep dive",
        "explain how",
        "walk me through",
        "step by step",
    }
)

# Keywords that suggest initiative / commitment
_COMMITMENT_KEYWORDS = frozenset(
    {
        "i will",
        "i'll",
        "let me",
        "i'm going to",
        "plan to",
        "commit to",
        "i propose",
        "i suggest",
    }
)


def _infer_engagement(message: str, response: str) -> str:
    """Classify engagement level from the exchange."""
    combined = (message + " " + response).lower()
    if any(kw in combined for kw in _DEEP_KEYWORDS):
        return "deep"
    # Anything without deep-engagement keywords is surface-level;
    # nothing currently demotes an exchange to "idle".
    return "surface"


def _infer_mood(response: str, confidence: float) -> str:
    """Derive mood from response signals."""
    lower = response.lower()
    if confidence < 0.4:
        return "hesitant"
    if "!" in response and any(w in lower for w in ("great", "exciting", "love", "awesome")):
        return "energized"
    if "?" in response or any(w in lower for w in ("wonder", "interesting", "curious", "hmm")):
        return "curious"
    return "settled"


def _extract_topic(message: str) -> str | None:
    """Best-effort topic extraction from the user message.

    Takes the first meaningful clause (up to 60 chars) as a topic label.
    """
    text = message.strip()
    if not text:
        return None
    # Strip leading question words
    for prefix in ("what is ", "how do ", "can you ", "please ", "hey timmy "):
        if text.lower().startswith(prefix):
            text = text[len(prefix) :]
    # Truncate
    if len(text) > 60:
        text = text[:57] + "..."
    return text.strip() or None


def _extract_commitments(response: str) -> list[str]:
    """Pull commitment phrases from Timmy's response."""
    commitments: list[str] = []
    lower = response.lower()
    for kw in _COMMITMENT_KEYWORDS:
        idx = lower.find(kw)
        if idx == -1:
            continue
        # Grab the rest of the sentence (up to period/newline, max 80 chars)
        start = idx
        end = len(lower)
        for sep in (".", "\n", "!"):
            pos = lower.find(sep, start)
            if pos != -1:
                end = min(end, pos)
        snippet = response[start : min(end, start + 80)].strip()
        if snippet:
            commitments.append(snippet)
    return commitments[:3]  # Cap at 3


# ---------------------------------------------------------------------------
# Tracker singleton
# ---------------------------------------------------------------------------


class CognitiveTracker:
    """Maintains Timmy's cognitive state.

    State is consumed via ``to_json()`` / ``get_state()`` and published
    externally by ``workshop_state.py`` → ``presence.json``.
    """

    def __init__(self) -> None:
        self.state = CognitiveState()

    def update(self, user_message: str, response: str) -> CognitiveState:
        """Update cognitive state from a chat exchange.

        Called after each chat round-trip in ``session.py``.
        Emits a ``cognitive_state_changed`` event to the sensory bus so
        downstream consumers (WorkshopHeartbeat, etc.) react immediately.
        """
        confidence = estimate_confidence(response)

        prev_mood = self.state.mood
        prev_engagement = self.state.engagement

        # Track running confidence average
        self.state._confidence_sum += confidence
        self.state._confidence_count += 1

        self.state.conversation_depth += 1
        self.state.focus_topic = _extract_topic(user_message) or self.state.focus_topic
        self.state.engagement = _infer_engagement(user_message, response)
        self.state.mood = _infer_mood(response, confidence)

        # Extract commitments from response
        new_commitments = _extract_commitments(response)
        if new_commitments:
            self.state.last_initiative = new_commitments[0]
            # Merge, keeping last 5
            seen = set(self.state.active_commitments)
            for c in new_commitments:
                if c not in seen:
                    self.state.active_commitments.append(c)
                    seen.add(c)
            self.state.active_commitments = self.state.active_commitments[-5:]

        # Emit cognitive_state_changed to close the sense → react loop
        self._emit_change(prev_mood, prev_engagement)

        return self.state

    def _emit_change(self, prev_mood: str, prev_engagement: str) -> None:
        """Fire-and-forget sensory event for cognitive state change."""
        try:
            from timmy.event_bus import get_sensory_bus
            from timmy.events import SensoryEvent

            event = SensoryEvent(
                source="cognitive",
                event_type="cognitive_state_changed",
                data={
                    "mood": self.state.mood,
                    "engagement": self.state.engagement,
                    "focus_topic": self.state.focus_topic or "",
                    "depth": self.state.conversation_depth,
                    "mood_changed": self.state.mood != prev_mood,
                    "engagement_changed": self.state.engagement != prev_engagement,
                },
            )
            bus = get_sensory_bus()
            # Fire-and-forget — don't block the chat response
            try:
                loop = asyncio.get_running_loop()
                loop.create_task(bus.emit(event))
            except RuntimeError:
                # No running loop (sync context / tests) — skip emission
                pass
        except Exception as exc:
            logger.debug("Cognitive event emission skipped: %s", exc)

    def get_state(self) -> CognitiveState:
        """Return current cognitive state."""
        return self.state

    def reset(self) -> None:
        """Reset to idle state (e.g. on session reset)."""
        self.state = CognitiveState()

    def to_json(self) -> str:
        """Serialise current state as JSON (for API / WebSocket consumers)."""
        return json.dumps(self.state.to_dict())


# Module-level singleton
cognitive_tracker = CognitiveTracker()
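The commitment-extraction pass can be exercised standalone. Below is a trimmed sketch with a reduced keyword set; the find-then-cut-at-separator logic and the 80-char cap mirror the module above, but this is an illustrative copy, not an import of it:

```python
COMMITMENT_KEYWORDS = frozenset({"i will", "i'll", "let me", "i propose"})


def extract_commitments(response: str) -> list[str]:
    """Collect the sentence fragment starting at each commitment keyword."""
    commitments: list[str] = []
    lower = response.lower()
    for kw in COMMITMENT_KEYWORDS:
        idx = lower.find(kw)
        if idx == -1:
            continue
        # Stop at the first sentence separator after the keyword.
        end = len(lower)
        for sep in (".", "\n", "!"):
            pos = lower.find(sep, idx)
            if pos != -1:
                end = min(end, pos)
        snippet = response[idx : min(end, idx + 80)].strip()
        if snippet:
            commitments.append(snippet)
    return commitments[:3]  # cap at 3, like the tracker


print(extract_commitments("Sounds good. I will draft the schema tonight."))
# → ['I will draft the schema tonight']
```

Note that searching from a frozenset means the output order is unspecified when several keywords match; the real tracker tolerates this because it only keeps a small capped set.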
@@ -174,15 +174,8 @@ class ConversationManager:

         return None

-    def should_use_tools(self, message: str, context: ConversationContext) -> bool:
-        """Determine if this message likely requires tools.
-
-        Returns True if tools are likely needed, False for simple chat.
-        """
-        message_lower = message.lower().strip()
-
-        # Tool keywords that suggest tool usage is needed
-        tool_keywords = [
+    _TOOL_KEYWORDS = frozenset(
+        {
             "search",
             "look up",
             "find",
@@ -203,10 +196,11 @@ class ConversationManager:
             "shell",
             "command",
             "install",
-        ]
+        }
+    )

         # Chat-only keywords that definitely don't need tools
-        chat_only = [
+    _CHAT_ONLY_KEYWORDS = frozenset(
+        {
             "hello",
             "hi ",
             "hey",
@@ -221,30 +215,47 @@ class ConversationManager:
             "goodbye",
             "tell me about yourself",
             "what can you do",
-        ]
+        }
+    )

-        # Check for chat-only patterns first
-        for pattern in chat_only:
-            if pattern in message_lower:
-                return False
+    _SIMPLE_QUESTION_PREFIXES = ("what is", "who is", "how does", "why is", "when did", "where is")
+    _TIME_WORDS = ("today", "now", "current", "latest", "this week", "this month")

-        # Check for tool keywords
-        for keyword in tool_keywords:
-            if keyword in message_lower:
-                return True
+    def _is_chat_only(self, message_lower: str) -> bool:
+        """Return True if the message matches a chat-only pattern."""
+        return any(kw in message_lower for kw in self._CHAT_ONLY_KEYWORDS)

-        # Simple questions (starting with what, who, how, why, when, where)
-        # usually don't need tools unless about current/real-time info
-        simple_question_words = ["what is", "who is", "how does", "why is", "when did", "where is"]
-        for word in simple_question_words:
-            if message_lower.startswith(word):
-                # Check if it's asking about current/real-time info
-                time_words = ["today", "now", "current", "latest", "this week", "this month"]
-                if any(t in message_lower for t in time_words):
-                    return True
-        return False
+    def _has_tool_keyword(self, message_lower: str) -> bool:
+        """Return True if the message contains a tool-related keyword."""
+        return any(kw in message_lower for kw in self._TOOL_KEYWORDS)
+
+    def _is_simple_question(self, message_lower: str) -> bool | None:
+        """Check if message is a simple question.
+
+        Returns True if it needs tools (real-time info), False if it
+        doesn't, or None if the message isn't a simple question.
+        """
+        for prefix in self._SIMPLE_QUESTION_PREFIXES:
+            if message_lower.startswith(prefix):
+                return any(t in message_lower for t in self._TIME_WORDS)
+        return None
+
+    def should_use_tools(self, message: str, context: ConversationContext) -> bool:
+        """Determine if this message likely requires tools.
+
+        Returns True if tools are likely needed, False for simple chat.
+        """
+        message_lower = message.lower().strip()
+
+        if self._is_chat_only(message_lower):
+            return False
+        if self._has_tool_keyword(message_lower):
+            return True
+
+        simple = self._is_simple_question(message_lower)
+        if simple is not None:
+            return simple
+
+        # Default: don't use tools for unclear cases
+        return False

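The layered routing (chat-only first, then tool keywords, then the simple-question rule) reads naturally as one small pure function. A condensed sketch follows; the keyword sets here are abbreviated samples of the class attributes, not the full lists:

```python
TOOL_KEYWORDS = frozenset({"search", "install", "command"})
CHAT_ONLY_KEYWORDS = frozenset({"hello", "thanks", "goodbye"})
SIMPLE_QUESTION_PREFIXES = ("what is", "who is", "how does")
TIME_WORDS = ("today", "now", "latest")


def should_use_tools(message: str) -> bool:
    """Chat-only wins first, then tool keywords, then the simple-question rule."""
    m = message.lower().strip()
    if any(kw in m for kw in CHAT_ONLY_KEYWORDS):
        return False
    if any(kw in m for kw in TOOL_KEYWORDS):
        return True
    # Simple questions only need tools when they ask about real-time info.
    if m.startswith(SIMPLE_QUESTION_PREFIXES):  # str.startswith accepts a tuple
        return any(t in m for t in TIME_WORDS)
    return False  # default: no tools for unclear cases


print(should_use_tools("what is the latest block height"))  # True
print(should_use_tools("what is a monad"))                  # False
```

The ordering matters: a greeting containing a tool word ("hello, thanks for the search tips") stays chat-only because the chat-only check runs first.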
79  src/timmy/event_bus.py  Normal file
@@ -0,0 +1,79 @@
"""Sensory EventBus — simple pub/sub for SensoryEvents.

Thin facade over the infrastructure EventBus that speaks in
SensoryEvent objects instead of raw infrastructure Events.
"""

import asyncio
import logging
from collections.abc import Awaitable, Callable

from timmy.events import SensoryEvent

logger = logging.getLogger(__name__)

# Handler: sync or async callable that receives a SensoryEvent
SensoryHandler = Callable[[SensoryEvent], None | Awaitable[None]]


class SensoryBus:
    """Pub/sub dispatcher for SensoryEvents."""

    def __init__(self, max_history: int = 500) -> None:
        self._subscribers: dict[str, list[SensoryHandler]] = {}
        self._history: list[SensoryEvent] = []
        self._max_history = max_history

    # ── Public API ────────────────────────────────────────────────────────

    async def emit(self, event: SensoryEvent) -> int:
        """Push *event* to all subscribers whose event_type filter matches.

        Returns the number of handlers invoked.
        """
        self._history.append(event)
        if len(self._history) > self._max_history:
            self._history = self._history[-self._max_history :]

        handlers = self._matching_handlers(event.event_type)
        for h in handlers:
            try:
                result = h(event)
                if asyncio.iscoroutine(result):
                    await result
            except Exception as exc:
                logger.error("SensoryBus handler error for '%s': %s", event.event_type, exc)

        return len(handlers)

    def subscribe(self, event_type: str, callback: SensoryHandler) -> None:
        """Register *callback* for events matching *event_type*.

        Use ``"*"`` to subscribe to all event types.
        """
        self._subscribers.setdefault(event_type, []).append(callback)

    def recent(self, n: int = 10) -> list[SensoryEvent]:
        """Return the last *n* events (most recent last)."""
        return self._history[-n:]

    # ── Internals ─────────────────────────────────────────────────────────

    def _matching_handlers(self, event_type: str) -> list[SensoryHandler]:
        handlers: list[SensoryHandler] = []
        for pattern, cbs in self._subscribers.items():
            if pattern == "*" or pattern == event_type:
                handlers.extend(cbs)
        return handlers


# ── Module-level singleton ────────────────────────────────────────────────────
_bus: SensoryBus | None = None


def get_sensory_bus() -> SensoryBus:
    """Return the module-level SensoryBus singleton."""
    global _bus
    if _bus is None:
        _bus = SensoryBus()
    return _bus
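The bus's contract (wildcard subscriptions, mixed sync/async handlers, handler count as the return value) can be demonstrated with a miniature bus. `MiniBus` below is a cut-down sketch of the same mechanics, not the project's `SensoryBus`:

```python
import asyncio
from collections import defaultdict


class MiniBus:
    """Cut-down pub/sub: '*' matches every event type."""

    def __init__(self) -> None:
        self._subs: dict[str, list] = defaultdict(list)

    def subscribe(self, event_type: str, callback) -> None:
        self._subs[event_type].append(callback)

    async def emit(self, event_type: str, data: dict) -> int:
        handlers = self._subs["*"] + self._subs[event_type]
        for h in handlers:
            result = h(event_type, data)
            if asyncio.iscoroutine(result):  # await async handlers, run sync ones inline
                await result
        return len(handlers)


seen = []
bus = MiniBus()
bus.subscribe("*", lambda t, d: seen.append(("any", t)))
bus.subscribe("push", lambda t, d: seen.append(("push", d["repo"])))
n = asyncio.run(bus.emit("push", {"repo": "timmy"}))
print(n, seen)  # 2 [('any', 'push'), ('push', 'timmy')]
```

Because `emit` checks `iscoroutine` on the handler's return value, a plain function and an `async def` handler can share one subscriber list, which is the same trick the real bus uses.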
39  src/timmy/events.py  Normal file
@@ -0,0 +1,39 @@
"""SensoryEvent — normalized event model for stream adapters.

Every adapter (gitea, time, bitcoin, terminal, etc.) emits SensoryEvents
into the EventBus so that Timmy's cognitive layer sees a uniform stream.
"""

import json
from dataclasses import asdict, dataclass, field
from datetime import UTC, datetime


@dataclass
class SensoryEvent:
    """A single sensory event from an external stream."""

    source: str  # "gitea", "time", "bitcoin", "terminal"
    event_type: str  # "push", "issue_opened", "new_block", "morning"
    timestamp: datetime = field(default_factory=lambda: datetime.now(UTC))
    data: dict = field(default_factory=dict)
    actor: str = ""  # who caused it (username, "system", etc.)

    def to_dict(self) -> dict:
        """Return a JSON-serializable dictionary."""
        d = asdict(self)
        d["timestamp"] = self.timestamp.isoformat()
        return d

    def to_json(self) -> str:
        """Return a JSON string."""
        return json.dumps(self.to_dict())

    @classmethod
    def from_dict(cls, data: dict) -> "SensoryEvent":
        """Reconstruct a SensoryEvent from a dictionary."""
        data = dict(data)  # shallow copy
        ts = data.get("timestamp")
        if isinstance(ts, str):
            data["timestamp"] = datetime.fromisoformat(ts)
        return cls(**data)
263  src/timmy/familiar.py  Normal file
@@ -0,0 +1,263 @@
"""Pip the Familiar — a creature with its own small mind.

Pip is a glowing sprite who lives in the Workshop independently of Timmy.
He has a behavioral state machine that makes the room feel alive:

    SLEEPING → WAKING → WANDERING → INVESTIGATING → BORED → SLEEPING

Special states triggered by Timmy's cognitive signals:
    ALERT   — confidence drops below 0.3
    PLAYFUL — Timmy is amused / energized
    HIDING  — unknown visitor + Timmy uncertain

The backend tracks Pip's *logical* state; the browser handles movement
interpolation and particle rendering.
"""

import logging
import random
import time
from dataclasses import asdict, dataclass, field
from enum import StrEnum

logger = logging.getLogger(__name__)


# ---------------------------------------------------------------------------
# States
# ---------------------------------------------------------------------------


class PipState(StrEnum):
    """Pip's behavioral states."""

    SLEEPING = "sleeping"
    WAKING = "waking"
    WANDERING = "wandering"
    INVESTIGATING = "investigating"
    BORED = "bored"
    # Special states
    ALERT = "alert"
    PLAYFUL = "playful"
    HIDING = "hiding"


# States from which Pip can be interrupted by special triggers
_INTERRUPTIBLE = frozenset(
    {
        PipState.SLEEPING,
        PipState.WANDERING,
        PipState.BORED,
        PipState.WAKING,
    }
)

# How long each state lasts before auto-transitioning (seconds)
_STATE_DURATIONS: dict[PipState, tuple[float, float]] = {
    PipState.SLEEPING: (120.0, 300.0),  # 2-5 min
    PipState.WAKING: (1.5, 2.5),
    PipState.WANDERING: (15.0, 45.0),
    PipState.INVESTIGATING: (8.0, 12.0),
    PipState.BORED: (20.0, 40.0),
    PipState.ALERT: (10.0, 20.0),
    PipState.PLAYFUL: (8.0, 15.0),
    PipState.HIDING: (15.0, 30.0),
}

# Default position near the fireplace
_FIREPLACE_POS = (2.1, 0.5, -1.3)


# ---------------------------------------------------------------------------
# Schema
# ---------------------------------------------------------------------------


@dataclass
class PipSnapshot:
    """Serialisable snapshot of Pip's current state."""

    name: str = "Pip"
    state: str = "sleeping"
    position: tuple[float, float, float] = _FIREPLACE_POS
    mood_mirror: str = "calm"
    since: float = field(default_factory=time.monotonic)

    def to_dict(self) -> dict:
        """Public dict for API / WebSocket / state file consumers."""
        d = asdict(self)
        d["position"] = list(d["position"])
        # Convert monotonic timestamp to duration
        d["state_duration_s"] = round(time.monotonic() - d.pop("since"), 1)
        return d


# ---------------------------------------------------------------------------
# Familiar
# ---------------------------------------------------------------------------


class Familiar:
    """Pip's behavioral AI — a tiny state machine driven by events and time.

    Usage::

        pip_familiar.on_event("visitor_entered")
        pip_familiar.on_mood_change("energized")
        state = pip_familiar.tick()  # call periodically
    """

    def __init__(self) -> None:
        self._state = PipState.SLEEPING
        self._entered_at = time.monotonic()
        self._duration = random.uniform(*_STATE_DURATIONS[PipState.SLEEPING])
        self._mood_mirror = "calm"
        self._pending_mood: str | None = None
        self._mood_change_at: float = 0.0
        self._position = _FIREPLACE_POS

    # ------------------------------------------------------------------
    # Public API
    # ------------------------------------------------------------------

    @property
    def state(self) -> PipState:
        return self._state

    @property
    def mood_mirror(self) -> str:
        return self._mood_mirror

    def snapshot(self) -> PipSnapshot:
        """Current state as a serialisable snapshot."""
        return PipSnapshot(
            state=self._state.value,
            position=self._position,
            mood_mirror=self._mood_mirror,
            since=self._entered_at,
        )

    def tick(self, now: float | None = None) -> PipState:
        """Advance the state machine. Call periodically (e.g. every second).

        Returns the (possibly new) state.
        """
        now = now if now is not None else time.monotonic()

        # Apply delayed mood mirror (3-second lag)
        if self._pending_mood and now >= self._mood_change_at:
            self._mood_mirror = self._pending_mood
            self._pending_mood = None

        # Check if current state has expired
        elapsed = now - self._entered_at
        if elapsed < self._duration:
            return self._state

        # Auto-transition
        next_state = self._next_state()
        self._transition(next_state, now)
        return self._state

    def on_event(self, event: str, now: float | None = None) -> PipState:
        """React to a Workshop event.

        Supported events:
            visitor_entered, visitor_spoke, loud_event, scroll_knocked
        """
        now = now if now is not None else time.monotonic()

        if event == "visitor_entered" and self._state in _INTERRUPTIBLE:
            if self._state == PipState.SLEEPING:
                self._transition(PipState.WAKING, now)
            else:
                self._transition(PipState.INVESTIGATING, now)

        elif event == "visitor_spoke":
            if self._state in (PipState.WANDERING, PipState.WAKING):
                self._transition(PipState.INVESTIGATING, now)

        elif event == "loud_event":
            if self._state == PipState.SLEEPING:
                self._transition(PipState.WAKING, now)

        return self._state

    def on_mood_change(
        self,
        timmy_mood: str,
        confidence: float = 0.5,
        now: float | None = None,
    ) -> PipState:
        """Mirror Timmy's mood with a 3-second delay.

        Special states triggered by mood + confidence:
            - confidence < 0.3 → ALERT (bristles, particles go red-gold)
            - mood == "energized" → PLAYFUL (figure-8s around crystal ball)
            - mood == "hesitant" + confidence < 0.4 → HIDING
        """
        now = now if now is not None else time.monotonic()

        # Schedule mood mirror with 3s delay
        self._pending_mood = timmy_mood
        self._mood_change_at = now + 3.0

        # Special state triggers (immediate)
        if confidence < 0.3 and self._state in _INTERRUPTIBLE:
            self._transition(PipState.ALERT, now)
        elif timmy_mood == "energized" and self._state in _INTERRUPTIBLE:
            self._transition(PipState.PLAYFUL, now)
        elif timmy_mood == "hesitant" and confidence < 0.4 and self._state in _INTERRUPTIBLE:
            self._transition(PipState.HIDING, now)

        return self._state

    # ------------------------------------------------------------------
    # Internals
    # ------------------------------------------------------------------

    def _transition(self, new_state: PipState, now: float) -> None:
        """Move to a new state."""
        old = self._state
        self._state = new_state
        self._entered_at = now
        self._duration = random.uniform(*_STATE_DURATIONS[new_state])
        self._position = self._position_for(new_state)
        logger.debug("Pip: %s → %s", old.value, new_state.value)

    def _next_state(self) -> PipState:
        """Determine the natural next state after the current one expires."""
        transitions: dict[PipState, PipState] = {
            PipState.SLEEPING: PipState.WAKING,
            PipState.WAKING: PipState.WANDERING,
            PipState.WANDERING: PipState.BORED,
            PipState.INVESTIGATING: PipState.BORED,
            PipState.BORED: PipState.SLEEPING,
            # Special states return to wandering
            PipState.ALERT: PipState.WANDERING,
            PipState.PLAYFUL: PipState.WANDERING,
            PipState.HIDING: PipState.WAKING,
        }
        return transitions.get(self._state, PipState.SLEEPING)

    def _position_for(self, state: PipState) -> tuple[float, float, float]:
        """Approximate position hint for a given state.

        The browser interpolates smoothly; these are target anchors.
        """
        if state in (PipState.SLEEPING, PipState.BORED):
            return _FIREPLACE_POS
        if state == PipState.HIDING:
            return (0.5, 0.3, -2.0)  # Behind the desk
        if state == PipState.PLAYFUL:
            return (1.0, 1.2, 0.0)  # Near the crystal ball
        # Wandering / investigating / waking — random room position
        return (
            random.uniform(-1.0, 3.0),
            random.uniform(0.5, 1.5),
            random.uniform(-2.0, 1.0),
        )


# Module-level singleton
pip_familiar = Familiar()
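The duration-driven tick loop is easy to verify by injecting timestamps instead of reading `time.monotonic()`. `MiniPip` below is a trimmed sketch of the same mechanics (reduced state set, no moods or positions):

```python
import random

# Sampled duration ranges per state, matching the shape of _STATE_DURATIONS.
DURATIONS = {
    "sleeping": (120.0, 300.0),
    "waking": (1.5, 2.5),
    "wandering": (15.0, 45.0),
    "bored": (20.0, 40.0),
}
NEXT = {"sleeping": "waking", "waking": "wandering", "wandering": "bored", "bored": "sleeping"}


class MiniPip:
    """Trimmed version of the Familiar's auto-transition loop."""

    def __init__(self, now: float = 0.0) -> None:
        self.state = "sleeping"
        self.entered = now
        self.duration = random.uniform(*DURATIONS[self.state])

    def tick(self, now: float) -> str:
        # Transition once the sampled duration for this state has elapsed.
        if now - self.entered >= self.duration:
            self.state = NEXT[self.state]
            self.entered = now
            self.duration = random.uniform(*DURATIONS[self.state])
        return self.state


pip = MiniPip()
print(pip.tick(10.0))   # "sleeping": under the 120 s minimum
print(pip.tick(400.0))  # "waking": past the 300 s maximum
```

Accepting `now` as a parameter (as the real `tick` does with its optional argument) is what makes the state machine deterministic under test while still defaulting to wall-clock time in production.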
105  src/timmy/focus.py  Normal file
@@ -0,0 +1,105 @@
"""Deep focus mode — single-problem context for Timmy.

Persists focus state to a JSON file so Timmy can maintain narrow,
deep attention on one problem across session restarts.

Usage:
    from timmy.focus import focus_manager

    focus_manager.set_topic("three-phase loop")
    topic = focus_manager.get_topic()  # "three-phase loop"
    ctx = focus_manager.get_focus_context()  # prompt injection string
    focus_manager.clear()
"""

import json
import logging
from pathlib import Path

logger = logging.getLogger(__name__)

_DEFAULT_STATE_DIR = Path.home() / ".timmy"
_STATE_FILE = "focus.json"


class FocusManager:
    """Manages deep-focus state with file-backed persistence."""

    def __init__(self, state_dir: Path | None = None) -> None:
        self._state_dir = state_dir or _DEFAULT_STATE_DIR
        self._state_file = self._state_dir / _STATE_FILE
        self._topic: str | None = None
        self._mode: str = "broad"
        self._load()

    # ── Public API ────────────────────────────────────────────────

    def get_topic(self) -> str | None:
        """Return the current focus topic, or None if unfocused."""
        return self._topic

    def get_mode(self) -> str:
        """Return 'deep' or 'broad'."""
        return self._mode

    def is_focused(self) -> bool:
        """True when deep-focus is active with a topic set."""
        return self._mode == "deep" and self._topic is not None

    def set_topic(self, topic: str) -> None:
        """Activate deep focus on a specific topic."""
        self._topic = topic.strip()
        self._mode = "deep"
        self._save()
        logger.info("Focus: deep-focus set → %r", self._topic)

    def clear(self) -> None:
        """Return to broad (unfocused) mode."""
        old = self._topic
        self._topic = None
        self._mode = "broad"
        self._save()
        logger.info("Focus: cleared (was %r)", old)

    def get_focus_context(self) -> str:
        """Return a prompt-injection string for the current focus state.

        When focused, this tells the model to prioritize the topic.
        When broad, returns an empty string (no injection).
        """
        if not self.is_focused():
            return ""
        return (
            f"[DEEP FOCUS MODE] You are currently in deep-focus mode on: "
            f'"{self._topic}". '
            f"Prioritize this topic in your responses. Surface related memories "
            f"and prior conversation about this topic first. Deprioritize "
            f"unrelated context. Stay focused — depth over breadth."
        )

    # ── Persistence ───────────────────────────────────────────────

    def _load(self) -> None:
        """Load focus state from disk."""
        if not self._state_file.exists():
            return
        try:
            data = json.loads(self._state_file.read_text())
            self._topic = data.get("topic")
            self._mode = data.get("mode", "broad")
        except Exception as exc:
            logger.warning("Focus: failed to load state: %s", exc)

    def _save(self) -> None:
        """Persist focus state to disk."""
        try:
            self._state_dir.mkdir(parents=True, exist_ok=True)
            self._state_file.write_text(
                json.dumps({"topic": self._topic, "mode": self._mode}, indent=2)
            )
        except Exception as exc:
            logger.warning("Focus: failed to save state: %s", exc)


# Module-level singleton
focus_manager = FocusManager()
@@ -21,14 +21,20 @@ Usage::
from __future__ import annotations

import logging
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from PIL import ImageDraw
import os
import shutil
import sqlite3
import uuid
from contextlib import closing
from datetime import datetime
from datetime import UTC, datetime
from pathlib import Path

import httpx

from config import settings

logger = logging.getLogger(__name__)
@@ -190,7 +196,7 @@ def _bridge_to_work_order(title: str, body: str, category: str) -> None:
                body,
                category,
                "timmy-thinking",
                datetime.utcnow().isoformat(),
                datetime.now(UTC).isoformat(),
            ),
        )
        conn.commit()
@@ -198,15 +204,61 @@ def _bridge_to_work_order(title: str, body: str, category: str) -> None:
        logger.debug("Work order bridge failed: %s", exc)


async def _ensure_issue_session():
    """Get or create the cached MCP session, connecting if needed.

    Returns the connected ``MCPTools`` instance.
    """
    from agno.tools.mcp import MCPTools

    global _issue_session

    if _issue_session is None:
        _issue_session = MCPTools(
            server_params=_gitea_server_params(),
            timeout_seconds=settings.mcp_timeout,
        )

    if not getattr(_issue_session, "_connected", False):
        await _issue_session.connect()
        _issue_session._connected = True

    return _issue_session


def _build_issue_body(body: str) -> str:
    """Append the auto-filing signature to the issue body."""
    full_body = body
    if full_body:
        full_body += "\n\n"
    full_body += "---\n*Auto-filed by Timmy's thinking engine*"
    return full_body
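The signature helper only inserts the blank-line separator when there is an existing body, so an empty body yields just the signature. A standalone copy of the same logic shows both branches:

```python
# Mirrors _build_issue_body: blank-line separator only when a body exists
def build_issue_body(body: str) -> str:
    full_body = body
    if full_body:
        full_body += "\n\n"
    full_body += "---\n*Auto-filed by Timmy's thinking engine*"
    return full_body


signed = build_issue_body("Crash on startup")
empty_signed = build_issue_body("")
```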
def _build_issue_args(title: str, full_body: str) -> dict:
    """Build MCP tool arguments for ``issue_write`` with method=create."""
    owner, repo = settings.gitea_repo.split("/", 1)
    return {
        "method": "create",
        "owner": owner,
        "repo": repo,
        "title": title,
        "body": full_body,
    }


def _category_from_labels(labels: str) -> str:
    """Derive a work-order category from comma-separated label names."""
    label_list = [tag.strip() for tag in labels.split(",") if tag.strip()] if labels else []
    return "bug" if "bug" in label_list else "suggestion"
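Note that the label check is an exact match against the split-and-stripped list, not a substring test, so a label like `bugfix` does not trip the `bug` category. A standalone copy of the helper demonstrates this:

```python
# Standalone copy of _category_from_labels for illustration
def category_from_labels(labels: str) -> str:
    label_list = [tag.strip() for tag in labels.split(",") if tag.strip()] if labels else []
    return "bug" if "bug" in label_list else "suggestion"


cat_bug = category_from_labels("bug, ui")       # exact label match
cat_fix = category_from_labels("bugfix")        # substring does not match
cat_none = category_from_labels("")             # empty labels default
```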
async def create_gitea_issue_via_mcp(title: str, body: str = "", labels: str = "") -> str:
    """File a Gitea issue via the MCP server (standalone, no LLM loop).

    Used by the thinking engine's ``_maybe_file_issues()`` post-hook.
    Manages its own MCPTools session with lazy connect + graceful failure.

    Uses ``tools.session.call_tool()`` for direct MCP invocation — the
    ``MCPTools`` wrapper itself does not expose ``call_tool()``.

    Args:
        title: Issue title.
        body: Issue body (markdown).
@@ -219,46 +271,13 @@ async def create_gitea_issue_via_mcp(title: str, body: str = "", labels: str = "
        return "Gitea integration is not configured."

    try:
        from agno.tools.mcp import MCPTools
        session = await _ensure_issue_session()
        full_body = _build_issue_body(body)
        args = _build_issue_args(title, full_body)

        global _issue_session
        result = await session.session.call_tool("issue_write", arguments=args)

        if _issue_session is None:
            _issue_session = MCPTools(
                server_params=_gitea_server_params(),
                timeout_seconds=settings.mcp_timeout,
            )

        # Ensure connected
        if not getattr(_issue_session, "_connected", False):
            await _issue_session.connect()
            _issue_session._connected = True

        # Append auto-filing signature
        full_body = body
        if full_body:
            full_body += "\n\n"
        full_body += "---\n*Auto-filed by Timmy's thinking engine*"

        # Parse owner/repo from settings
        owner, repo = settings.gitea_repo.split("/", 1)

        # Build tool arguments — gitea-mcp uses issue_write with method="create"
        args = {
            "method": "create",
            "owner": owner,
            "repo": repo,
            "title": title,
            "body": full_body,
        }

        # Call via the underlying MCP session (MCPTools doesn't expose call_tool)
        result = await _issue_session.session.call_tool("issue_write", arguments=args)

        # Bridge to local work order
        label_list = [tag.strip() for tag in labels.split(",") if tag.strip()] if labels else []
        category = "bug" if "bug" in label_list else "suggestion"
        _bridge_to_work_order(title, body, category)
        _bridge_to_work_order(title, body, _category_from_labels(labels))

        logger.info("Created Gitea issue via MCP: %s", title[:60])
        return f"Created issue: {title}\n{result}"
@@ -268,6 +287,148 @@ async def create_gitea_issue_via_mcp(title: str, body: str = "", labels: str = "
        return f"Failed to create issue via MCP: {exc}"
def _draw_background(draw: ImageDraw.ImageDraw, size: int) -> None:
    """Draw radial gradient background with concentric circles."""
    for i in range(size // 2, 0, -4):
        g = int(25 + (i / (size // 2)) * 30)
        draw.ellipse(
            [size // 2 - i, size // 2 - i, size // 2 + i, size // 2 + i],
            fill=(10, g, 20),
        )


def _draw_wizard(draw: ImageDraw.ImageDraw) -> None:
    """Draw wizard hat, face, eyes, smile, monogram, and robe."""
    hat_color = (100, 50, 160)  # purple
    hat_outline = (180, 130, 255)
    gold = (220, 190, 50)
    pupil = (30, 30, 60)

    # Hat + brim
    draw.polygon([(256, 40), (160, 220), (352, 220)], fill=hat_color, outline=hat_outline)
    draw.ellipse([140, 200, 372, 250], fill=hat_color, outline=hat_outline)

    # Face
    draw.ellipse([190, 220, 322, 370], fill=(60, 180, 100), outline=(80, 220, 120))

    # Eyes (whites + pupils)
    draw.ellipse([220, 275, 248, 310], fill=(255, 255, 255))
    draw.ellipse([264, 275, 292, 310], fill=(255, 255, 255))
    draw.ellipse([228, 285, 242, 300], fill=pupil)
    draw.ellipse([272, 285, 286, 300], fill=pupil)

    # Smile
    draw.arc([225, 300, 287, 355], start=10, end=170, fill=pupil, width=3)

    # "T" monogram on hat
    draw.text((243, 100), "T", fill=gold)

    # Robe
    draw.polygon(
        [(180, 370), (140, 500), (372, 500), (332, 370)],
        fill=(40, 100, 70),
        outline=(60, 160, 100),
    )


def _draw_stars(draw: ImageDraw.ImageDraw) -> None:
    """Draw decorative gold stars around the wizard hat."""
    gold = (220, 190, 50)
    for sx, sy in [(120, 100), (380, 120), (100, 300), (400, 280), (256, 10)]:
        r = 8
        draw.polygon(
            [
                (sx, sy - r),
                (sx + r // 3, sy - r // 3),
                (sx + r, sy),
                (sx + r // 3, sy + r // 3),
                (sx, sy + r),
                (sx - r // 3, sy + r // 3),
                (sx - r, sy),
                (sx - r // 3, sy - r // 3),
            ],
            fill=gold,
        )


def _generate_avatar_image() -> bytes:
    """Generate a Timmy-themed avatar image using Pillow.

    Creates a 512x512 wizard-themed avatar with emerald/purple/gold palette.
    Returns raw PNG bytes. Falls back to a minimal solid-color image if
    Pillow drawing primitives fail.
    """
    import io

    from PIL import Image, ImageDraw

    size = 512
    img = Image.new("RGB", (size, size), (15, 25, 20))
    draw = ImageDraw.Draw(img)

    _draw_background(draw, size)
    _draw_wizard(draw)
    _draw_stars(draw)

    buf = io.BytesIO()
    img.save(buf, format="PNG")
    return buf.getvalue()
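The radial gradient in `_draw_background` works by painting progressively smaller, darker circles over each other, so the green channel runs from about 55 at the rim down to 25 near the center. The channel values can be checked without Pillow:

```python
# Green-channel values produced by the loop in _draw_background,
# computed standalone: g = int(25 + (i / (size // 2)) * 30)
size = 512
radii = list(range(size // 2, 0, -4))  # 256, 252, ..., 4
greens = [int(25 + (i / (size // 2)) * 30) for i in radii]
```

Later (smaller) circles overwrite earlier ones, which is why the list order, largest radius first, matters for the visible result.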
async def update_gitea_avatar() -> str:
    """Generate and upload a unique avatar to Timmy's Gitea profile.

    Creates a wizard-themed avatar image using Pillow drawing primitives,
    base64-encodes it, and POSTs to the Gitea user avatar API endpoint.

    Returns:
        Success or failure message string.
    """
    if not settings.gitea_enabled or not settings.gitea_token:
        return "Gitea integration is not configured (no token or disabled)."

    try:
        from PIL import Image  # noqa: F401 — availability check
    except ImportError:
        return "Pillow is not installed — cannot generate avatar image."

    try:
        import base64

        # Step 1: Generate the avatar image
        png_bytes = _generate_avatar_image()
        logger.info("Generated avatar image (%d bytes)", len(png_bytes))

        # Step 2: Base64-encode (raw, no data URI prefix)
        b64_image = base64.b64encode(png_bytes).decode("ascii")

        # Step 3: POST to Gitea
        async with httpx.AsyncClient(timeout=15) as client:
            resp = await client.post(
                f"{settings.gitea_url}/api/v1/user/avatar",
                headers={
                    "Authorization": f"token {settings.gitea_token}",
                    "Content-Type": "application/json",
                },
                json={"image": b64_image},
            )

        # Gitea returns empty body on success (204 or 200)
        if resp.status_code in (200, 204):
            logger.info("Gitea avatar updated successfully")
            return "Avatar updated successfully on Gitea."

        logger.warning("Gitea avatar update failed: %s %s", resp.status_code, resp.text[:200])
        return f"Gitea avatar update failed (HTTP {resp.status_code}): {resp.text[:200]}"

    except (httpx.ConnectError, httpx.ReadError, ConnectionError) as exc:
        logger.warning("Gitea connection failed during avatar update: %s", exc)
        return f"Could not connect to Gitea: {exc}"
    except Exception as exc:
        logger.error("Avatar update failed: %s", exc)
        return f"Avatar update failed: {exc}"


async def close_mcp_sessions() -> None:
    """Close any open MCP sessions. Called during app shutdown."""
    global _issue_session
@@ -1 +1,7 @@
|
||||
"""Memory — Persistent conversation and knowledge memory."""
|
||||
"""Memory — Persistent conversation and knowledge memory.
|
||||
|
||||
Sub-modules:
|
||||
embeddings — text-to-vector embedding + similarity functions
|
||||
unified — unified memory schema and connection management
|
||||
vector_store — backward compatibility re-exports from memory_system
|
||||
"""
|
||||
|
||||
src/timmy/memory/embeddings.py (new file, 88 lines)
@@ -0,0 +1,88 @@
"""Embedding functions for Timmy's memory system.

Provides text-to-vector embedding using sentence-transformers (preferred)
with a deterministic hash-based fallback when the ML library is unavailable.

Also includes vector similarity utilities (cosine similarity, keyword overlap).
"""

import hashlib
import logging
import math

logger = logging.getLogger(__name__)

# Embedding model - small, fast, local
EMBEDDING_MODEL = None
EMBEDDING_DIM = 384  # MiniLM dimension


def _get_embedding_model():
    """Lazy-load embedding model."""
    global EMBEDDING_MODEL
    if EMBEDDING_MODEL is None:
        try:
            from config import settings

            if settings.timmy_skip_embeddings:
                EMBEDDING_MODEL = False
                return EMBEDDING_MODEL
        except ImportError:
            pass

        try:
            from sentence_transformers import SentenceTransformer

            EMBEDDING_MODEL = SentenceTransformer("all-MiniLM-L6-v2")
            logger.info("MemorySystem: Loaded embedding model")
        except ImportError:
            logger.warning("MemorySystem: sentence-transformers not installed, using fallback")
            EMBEDDING_MODEL = False  # Use fallback
    return EMBEDDING_MODEL


def _simple_hash_embedding(text: str) -> list[float]:
    """Fallback: Simple hash-based embedding when transformers unavailable."""
    words = text.lower().split()
    vec = [0.0] * 128
    for i, word in enumerate(words[:50]):  # First 50 words
        h = hashlib.md5(word.encode()).hexdigest()
        for j in range(8):
            idx = (i * 8 + j) % 128
            vec[idx] += int(h[j * 2 : j * 2 + 2], 16) / 255.0
    # Normalize
    mag = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / mag for x in vec]
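The hash fallback is deterministic and case-insensitive, and the final division makes every non-empty embedding a unit vector, so cosine similarity between fallback embeddings stays well-behaved. A standalone copy of the function confirms both properties:

```python
import hashlib
import math


def simple_hash_embedding(text: str) -> list[float]:
    # Same scheme as _simple_hash_embedding: 128-dim, md5-bucketed, L2-normalized
    words = text.lower().split()
    vec = [0.0] * 128
    for i, word in enumerate(words[:50]):
        h = hashlib.md5(word.encode()).hexdigest()
        for j in range(8):
            idx = (i * 8 + j) % 128
            vec[idx] += int(h[j * 2 : j * 2 + 2], 16) / 255.0
    mag = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / mag for x in vec]


v1 = simple_hash_embedding("hello world")
v2 = simple_hash_embedding("Hello World")  # lowercased first, so identical
```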

def embed_text(text: str) -> list[float]:
    """Generate embedding for text."""
    model = _get_embedding_model()
    if model and model is not False:
        embedding = model.encode(text)
        return embedding.tolist()
    return _simple_hash_embedding(text)


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Calculate cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b, strict=False))
    mag_a = math.sqrt(sum(x * x for x in a))
    mag_b = math.sqrt(sum(x * x for x in b))
    if mag_a == 0 or mag_b == 0:
        return 0.0
    return dot / (mag_a * mag_b)


# Alias for backward compatibility
_cosine_similarity = cosine_similarity


def _keyword_overlap(query: str, content: str) -> float:
    """Simple keyword overlap score as fallback."""
    query_words = set(query.lower().split())
    content_words = set(content.lower().split())
    if not query_words:
        return 0.0
    overlap = len(query_words & content_words)
    return overlap / len(query_words)
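The two scoring functions behave quite differently: cosine similarity compares directions of dense vectors, while keyword overlap is the fraction of query words found in the content. Standalone copies of both show representative values:

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Direction comparison; zero-magnitude vectors score 0.0, as in the source
    dot = sum(x * y for x, y in zip(a, b))
    mag_a = math.sqrt(sum(x * x for x in a))
    mag_b = math.sqrt(sum(x * x for x in b))
    if mag_a == 0 or mag_b == 0:
        return 0.0
    return dot / (mag_a * mag_b)


def keyword_overlap(query: str, content: str) -> float:
    # Fraction of query words present in content
    query_words = set(query.lower().split())
    content_words = set(content.lower().split())
    if not query_words:
        return 0.0
    return len(query_words & content_words) / len(query_words)


same = cosine_similarity([1.0, 0.0], [2.0, 0.0])  # parallel vectors
orth = cosine_similarity([1.0, 0.0], [0.0, 3.0])  # orthogonal vectors
half = keyword_overlap("memory schema", "unified memory table")  # 1 of 2 words
```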
@@ -78,83 +78,88 @@ def _migrate_schema(conn: sqlite3.Connection) -> None:
    cursor = conn.execute("SELECT name FROM sqlite_master WHERE type='table'")
    tables = {row[0] for row in cursor.fetchall()}

    has_memories = "memories" in tables
    has_episodes = "episodes" in tables
    has_chunks = "chunks" in tables
    has_facts = "facts" in tables

    # Check if we need to migrate (old schema exists but new one doesn't fully)
    if not has_memories:
    if "memories" not in tables:
        logger.info("Migration: Creating unified memories table")
        # Schema will be created above

    # Migrate episodes -> memories
    if has_episodes and has_memories:
        logger.info("Migration: Converting episodes table to memories")
        try:
            cols = _get_table_columns(conn, "episodes")
            context_type_col = "context_type" if "context_type" in cols else "'conversation'"

            conn.execute(f"""
                INSERT INTO memories (
                    id, content, memory_type, source, embedding,
                    metadata, agent_id, task_id, session_id,
                    created_at, access_count, last_accessed
                )
                SELECT
                    id, content,
                    COALESCE({context_type_col}, 'conversation'),
                    COALESCE(source, 'agent'),
                    embedding,
                    metadata, agent_id, task_id, session_id,
                    COALESCE(timestamp, datetime('now')), 0, NULL
                FROM episodes
            """)
            conn.execute("DROP TABLE episodes")
            logger.info("Migration: Migrated episodes to memories")
        except sqlite3.Error as exc:
            logger.warning("Migration: Failed to migrate episodes: %s", exc)

    # Migrate chunks -> memories as vault_chunk
    if has_chunks and has_memories:
        logger.info("Migration: Converting chunks table to memories")
        try:
            cols = _get_table_columns(conn, "chunks")

            id_col = "id" if "id" in cols else "CAST(rowid AS TEXT)"
            content_col = "content" if "content" in cols else "text"
            source_col = (
                "filepath" if "filepath" in cols else ("source" if "source" in cols else "'vault'")
            )
            embedding_col = "embedding" if "embedding" in cols else "NULL"
            created_col = "created_at" if "created_at" in cols else "datetime('now')"

            conn.execute(f"""
                INSERT INTO memories (
                    id, content, memory_type, source, embedding,
                    created_at, access_count
                )
                SELECT
                    {id_col}, {content_col}, 'vault_chunk', {source_col},
                    {embedding_col}, {created_col}, 0
                FROM chunks
            """)
            conn.execute("DROP TABLE chunks")
            logger.info("Migration: Migrated chunks to memories")
        except sqlite3.Error as exc:
            logger.warning("Migration: Failed to migrate chunks: %s", exc)

    # Drop old facts table
    if has_facts:
        try:
            conn.execute("DROP TABLE facts")
            logger.info("Migration: Dropped old facts table")
        except sqlite3.Error as exc:
            logger.warning("Migration: Failed to drop facts: %s", exc)
        # Schema will be created by _ensure_schema above
        conn.commit()
        return

    _migrate_episodes(conn, tables)
    _migrate_chunks(conn, tables)
    _drop_legacy_tables(conn, tables)
    conn.commit()


def _migrate_episodes(conn: sqlite3.Connection, tables: set[str]) -> None:
    """Migrate episodes table rows into the unified memories table."""
    if "episodes" not in tables:
        return
    logger.info("Migration: Converting episodes table to memories")
    try:
        cols = _get_table_columns(conn, "episodes")
        context_type_col = "context_type" if "context_type" in cols else "'conversation'"
        conn.execute(f"""
            INSERT INTO memories (
                id, content, memory_type, source, embedding,
                metadata, agent_id, task_id, session_id,
                created_at, access_count, last_accessed
            )
            SELECT
                id, content,
                COALESCE({context_type_col}, 'conversation'),
                COALESCE(source, 'agent'),
                embedding,
                metadata, agent_id, task_id, session_id,
                COALESCE(timestamp, datetime('now')), 0, NULL
            FROM episodes
        """)
        conn.execute("DROP TABLE episodes")
        logger.info("Migration: Migrated episodes to memories")
    except sqlite3.Error as exc:
        logger.warning("Migration: Failed to migrate episodes: %s", exc)


def _migrate_chunks(conn: sqlite3.Connection, tables: set[str]) -> None:
    """Migrate chunks table rows into the unified memories table as vault_chunk."""
    if "chunks" not in tables:
        return
    logger.info("Migration: Converting chunks table to memories")
    try:
        cols = _get_table_columns(conn, "chunks")
        id_col = "id" if "id" in cols else "CAST(rowid AS TEXT)"
        content_col = "content" if "content" in cols else "text"
        source_col = (
            "filepath" if "filepath" in cols else ("source" if "source" in cols else "'vault'")
        )
        embedding_col = "embedding" if "embedding" in cols else "NULL"
        created_col = "created_at" if "created_at" in cols else "datetime('now')"
        conn.execute(f"""
            INSERT INTO memories (
                id, content, memory_type, source, embedding,
                created_at, access_count
            )
            SELECT
                {id_col}, {content_col}, 'vault_chunk', {source_col},
                {embedding_col}, {created_col}, 0
            FROM chunks
        """)
        conn.execute("DROP TABLE chunks")
        logger.info("Migration: Migrated chunks to memories")
    except sqlite3.Error as exc:
        logger.warning("Migration: Failed to migrate chunks: %s", exc)


def _drop_legacy_tables(conn: sqlite3.Connection, tables: set[str]) -> None:
    """Drop old facts table if it exists."""
    if "facts" not in tables:
        return
    try:
        conn.execute("DROP TABLE facts")
        logger.info("Migration: Dropped old facts table")
    except sqlite3.Error as exc:
        logger.warning("Migration: Failed to drop facts: %s", exc)
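The migration helpers above share one pattern: sniff the legacy table's columns, substitute a literal default for any missing column, and let `COALESCE` backfill NULLs during the copy. A miniature, self-contained version of that pattern against an in-memory SQLite database (with toy table shapes assumed for illustration):

```python
import sqlite3

# Column-sniffing migration in miniature: legacy `episodes` rows are copied
# into `memories`, with COALESCE backfilling NULLs, as the helpers above do.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memories (id TEXT, content TEXT, memory_type TEXT)")
conn.execute("CREATE TABLE episodes (id TEXT, content TEXT, context_type TEXT)")
conn.execute("INSERT INTO episodes VALUES ('e1', 'hi', NULL)")

# PRAGMA table_info row[1] is the column name
cols = {row[1] for row in conn.execute("PRAGMA table_info(episodes)")}
context_type_col = "context_type" if "context_type" in cols else "'conversation'"

conn.execute(f"""
    INSERT INTO memories (id, content, memory_type)
    SELECT id, content, COALESCE({context_type_col}, 'conversation')
    FROM episodes
""")
conn.execute("DROP TABLE episodes")
migrated = conn.execute("SELECT id, content, memory_type FROM memories").fetchall()
```

When the legacy column is absent entirely, the f-string substitutes the quoted literal `'conversation'` directly into the SELECT, so the same statement works for both shapes.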

def _get_table_columns(conn: sqlite3.Connection, table_name: str) -> set[str]:
    """Get the column names for a table."""
    cursor = conn.execute(f"PRAGMA table_info({table_name})")
@@ -2,7 +2,7 @@

Architecture:
- Database: Single `memories` table with unified schema
- Embeddings: Local sentence-transformers with hash fallback
- Embeddings: timmy.memory.embeddings (extracted)
- CRUD: store_memory, search_memories, delete_memory, etc.
- Tool functions: memory_search, memory_read, memory_write, memory_forget
- Classes: HotMemory, VaultMemory, MemorySystem, SemanticMemory, MemorySearcher
@@ -11,7 +11,6 @@ Architecture:
import hashlib
import json
import logging
import math
import re
import sqlite3
import uuid
@@ -21,6 +20,17 @@ from dataclasses import dataclass, field
from datetime import UTC, datetime, timedelta
from pathlib import Path

from timmy.memory.embeddings import (
    EMBEDDING_DIM,
    EMBEDDING_MODEL,  # noqa: F401 — re-exported for backward compatibility
    _cosine_similarity,  # noqa: F401 — re-exported for backward compatibility
    _get_embedding_model,
    _keyword_overlap,
    _simple_hash_embedding,  # noqa: F401 — re-exported for backward compatibility
    cosine_similarity,
    embed_text,
)

logger = logging.getLogger(__name__)
# Paths
@@ -30,92 +40,70 @@ VAULT_PATH = PROJECT_ROOT / "memory"
SOUL_PATH = VAULT_PATH / "self" / "soul.md"
DB_PATH = PROJECT_ROOT / "data" / "memory.db"

# Embedding model - small, fast, local
EMBEDDING_MODEL = None
EMBEDDING_DIM = 384  # MiniLM dimension


# ───────────────────────────────────────────────────────────────────────────────
# Embedding Functions
# ───────────────────────────────────────────────────────────────────────────────


def _get_embedding_model():
    """Lazy-load embedding model."""
    global EMBEDDING_MODEL
    if EMBEDDING_MODEL is None:
        try:
            from config import settings

            if settings.timmy_skip_embeddings:
                EMBEDDING_MODEL = False
                return EMBEDDING_MODEL
        except ImportError:
            pass

        try:
            from sentence_transformers import SentenceTransformer

            EMBEDDING_MODEL = SentenceTransformer("all-MiniLM-L6-v2")
            logger.info("MemorySystem: Loaded embedding model")
        except ImportError:
            logger.warning("MemorySystem: sentence-transformers not installed, using fallback")
            EMBEDDING_MODEL = False  # Use fallback
    return EMBEDDING_MODEL


def _simple_hash_embedding(text: str) -> list[float]:
    """Fallback: Simple hash-based embedding when transformers unavailable."""
    words = text.lower().split()
    vec = [0.0] * 128
    for i, word in enumerate(words[:50]):  # First 50 words
        h = hashlib.md5(word.encode()).hexdigest()
        for j in range(8):
            idx = (i * 8 + j) % 128
            vec[idx] += int(h[j * 2 : j * 2 + 2], 16) / 255.0
    # Normalize
    mag = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / mag for x in vec]


def embed_text(text: str) -> list[float]:
    """Generate embedding for text."""
    model = _get_embedding_model()
    if model and model is not False:
        embedding = model.encode(text)
        return embedding.tolist()
    return _simple_hash_embedding(text)


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Calculate cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b, strict=False))
    mag_a = math.sqrt(sum(x * x for x in a))
    mag_b = math.sqrt(sum(x * x for x in b))
    if mag_a == 0 or mag_b == 0:
        return 0.0
    return dot / (mag_a * mag_b)


# Alias for backward compatibility
_cosine_similarity = cosine_similarity


def _keyword_overlap(query: str, content: str) -> float:
    """Simple keyword overlap score as fallback."""
    query_words = set(query.lower().split())
    content_words = set(content.lower().split())
    if not query_words:
        return 0.0
    overlap = len(query_words & content_words)
    return overlap / len(query_words)


# ───────────────────────────────────────────────────────────────────────────────
# Database Connection
# ───────────────────────────────────────────────────────────────────────────────


_DEFAULT_HOT_MEMORY_TEMPLATE = """\
# Timmy Hot Memory

> Working RAM — always loaded, ~300 lines max, pruned monthly
> Last updated: {date}

---

## Current Status

**Agent State:** Operational
**Mode:** Development
**Active Tasks:** 0
**Pending Decisions:** None

---

## Standing Rules

1. **Sovereignty First** — No cloud dependencies
2. **Local-Only Inference** — Ollama on localhost
3. **Privacy by Design** — Telemetry disabled
4. **Tool Minimalism** — Use tools only when necessary
5. **Memory Discipline** — Write handoffs at session end

---

## Agent Roster

| Agent | Role | Status |
|-------|------|--------|
| Timmy | Core | Active |

---

## User Profile

**Name:** (not set)
**Interests:** (to be learned)

---

## Key Decisions

(none yet)

---

## Pending Actions

- [ ] Learn user's name

---

*Prune date: {prune_date}*
"""


@contextmanager
def get_connection() -> Generator[sqlite3.Connection, None, None]:
    """Get database connection to unified memory database."""
@@ -168,6 +156,73 @@ def _get_table_columns(conn: sqlite3.Connection, table_name: str) -> set[str]:
    return {row[1] for row in cursor.fetchall()}
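`PRAGMA table_info` returns one row per column as `(cid, name, type, notnull, dflt_value, pk)`, and `_get_table_columns` keeps only index 1, the column name. A quick in-memory check (toy table assumed for illustration):

```python
import sqlite3

# PRAGMA table_info yields (cid, name, type, notnull, dflt_value, pk) rows;
# keeping row[1] collects just the column names, as _get_table_columns does.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memories (id TEXT PRIMARY KEY, content TEXT, embedding BLOB)")
columns = {row[1] for row in conn.execute("PRAGMA table_info(memories)")}
```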

def _migrate_episodes(conn: sqlite3.Connection) -> None:
    """Migrate episodes table rows into the unified memories table."""
    logger.info("Migration: Converting episodes table to memories")
    try:
        cols = _get_table_columns(conn, "episodes")
        context_type_col = "context_type" if "context_type" in cols else "'conversation'"

        conn.execute(f"""
            INSERT INTO memories (
                id, content, memory_type, source, embedding,
                metadata, agent_id, task_id, session_id,
                created_at, access_count, last_accessed
            )
            SELECT
                id, content,
                COALESCE({context_type_col}, 'conversation'),
                COALESCE(source, 'agent'),
                embedding,
                metadata, agent_id, task_id, session_id,
                COALESCE(timestamp, datetime('now')), 0, NULL
            FROM episodes
        """)
        conn.execute("DROP TABLE episodes")
        logger.info("Migration: Migrated episodes to memories")
    except sqlite3.Error as exc:
        logger.warning("Migration: Failed to migrate episodes: %s", exc)


def _migrate_chunks(conn: sqlite3.Connection) -> None:
    """Migrate chunks table rows into the unified memories table."""
    logger.info("Migration: Converting chunks table to memories")
    try:
        cols = _get_table_columns(conn, "chunks")

        id_col = "id" if "id" in cols else "CAST(rowid AS TEXT)"
        content_col = "content" if "content" in cols else "text"
        source_col = (
            "filepath" if "filepath" in cols else ("source" if "source" in cols else "'vault'")
        )
        embedding_col = "embedding" if "embedding" in cols else "NULL"
        created_col = "created_at" if "created_at" in cols else "datetime('now')"

        conn.execute(f"""
            INSERT INTO memories (
                id, content, memory_type, source, embedding,
                created_at, access_count
            )
            SELECT
                {id_col}, {content_col}, 'vault_chunk', {source_col},
                {embedding_col}, {created_col}, 0
            FROM chunks
        """)
        conn.execute("DROP TABLE chunks")
        logger.info("Migration: Migrated chunks to memories")
    except sqlite3.Error as exc:
        logger.warning("Migration: Failed to migrate chunks: %s", exc)


def _drop_legacy_table(conn: sqlite3.Connection, table: str) -> None:
    """Drop a legacy table if it exists."""
    try:
        conn.execute(f"DROP TABLE {table}")  # noqa: S608
        logger.info("Migration: Dropped old %s table", table)
    except sqlite3.Error as exc:
        logger.warning("Migration: Failed to drop %s: %s", table, exc)


def _migrate_schema(conn: sqlite3.Connection) -> None:
    """Migrate from old three-table schema to unified memories table.

@@ -180,78 +235,16 @@ def _migrate_schema(conn: sqlite3.Connection) -> None:
    tables = {row[0] for row in cursor.fetchall()}

    has_memories = "memories" in tables
    has_episodes = "episodes" in tables
    has_chunks = "chunks" in tables
    has_facts = "facts" in tables

    # Check if we need to migrate (old schema exists)
    if not has_memories and (has_episodes or has_chunks or has_facts):
    if not has_memories and (tables & {"episodes", "chunks", "facts"}):
        logger.info("Migration: Creating unified memories table")
        # Schema will be created by _ensure_schema above

    # Migrate episodes -> memories
    if has_episodes and has_memories:
        logger.info("Migration: Converting episodes table to memories")
        try:
            cols = _get_table_columns(conn, "episodes")
            context_type_col = "context_type" if "context_type" in cols else "'conversation'"

            conn.execute(f"""
                INSERT INTO memories (
                    id, content, memory_type, source, embedding,
                    metadata, agent_id, task_id, session_id,
                    created_at, access_count, last_accessed
                )
                SELECT
                    id, content,
                    COALESCE({context_type_col}, 'conversation'),
                    COALESCE(source, 'agent'),
                    embedding,
                    metadata, agent_id, task_id, session_id,
                    COALESCE(timestamp, datetime('now')), 0, NULL
                FROM episodes
            """)
            conn.execute("DROP TABLE episodes")
            logger.info("Migration: Migrated episodes to memories")
        except sqlite3.Error as exc:
            logger.warning("Migration: Failed to migrate episodes: %s", exc)

    # Migrate chunks -> memories as vault_chunk
    if has_chunks and has_memories:
        logger.info("Migration: Converting chunks table to memories")
        try:
            cols = _get_table_columns(conn, "chunks")
|
||||
|
||||
id_col = "id" if "id" in cols else "CAST(rowid AS TEXT)"
|
||||
content_col = "content" if "content" in cols else "text"
|
||||
source_col = (
|
||||
"filepath" if "filepath" in cols else ("source" if "source" in cols else "'vault'")
|
||||
)
|
||||
embedding_col = "embedding" if "embedding" in cols else "NULL"
|
||||
created_col = "created_at" if "created_at" in cols else "datetime('now')"
|
||||
|
||||
conn.execute(f"""
|
||||
INSERT INTO memories (
|
||||
id, content, memory_type, source, embedding,
|
||||
created_at, access_count
|
||||
)
|
||||
SELECT
|
||||
{id_col}, {content_col}, 'vault_chunk', {source_col},
|
||||
{embedding_col}, {created_col}, 0
|
||||
FROM chunks
|
||||
""")
|
||||
conn.execute("DROP TABLE chunks")
|
||||
logger.info("Migration: Migrated chunks to memories")
|
||||
except sqlite3.Error as exc:
|
||||
logger.warning("Migration: Failed to migrate chunks: %s", exc)
|
||||
|
||||
# Drop old tables
|
||||
if has_facts:
|
||||
try:
|
||||
conn.execute("DROP TABLE facts")
|
||||
logger.info("Migration: Dropped old facts table")
|
||||
except sqlite3.Error as exc:
|
||||
logger.warning("Migration: Failed to drop facts: %s", exc)
|
||||
if "episodes" in tables and has_memories:
|
||||
_migrate_episodes(conn)
|
||||
if "chunks" in tables and has_memories:
|
||||
_migrate_chunks(conn)
|
||||
if "facts" in tables:
|
||||
_drop_legacy_table(conn, "facts")
|
||||
|
||||
conn.commit()
|
||||
|
||||

@@ -368,6 +361,85 @@ def store_memory(
    return entry


def _build_search_filters(
    context_type: str | None,
    agent_id: str | None,
    session_id: str | None,
) -> tuple[str, list]:
    """Build SQL WHERE clause and params from search filters."""
    conditions: list[str] = []
    params: list = []

    if context_type:
        conditions.append("memory_type = ?")
        params.append(context_type)
    if agent_id:
        conditions.append("agent_id = ?")
        params.append(agent_id)
    if session_id:
        conditions.append("session_id = ?")
        params.append(session_id)

    where_clause = "WHERE " + " AND ".join(conditions) if conditions else ""
    return where_clause, params
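
The filter builder above is pure and easy to exercise in isolation. A minimal standalone sketch (the function is reimplemented here so the snippet runs on its own; it mirrors `_build_search_filters`):

```python
def build_search_filters(context_type=None, agent_id=None, session_id=None):
    """Standalone mirror of _build_search_filters, for demonstration."""
    conditions: list[str] = []
    params: list = []
    if context_type:
        conditions.append("memory_type = ?")
        params.append(context_type)
    if agent_id:
        conditions.append("agent_id = ?")
        params.append(agent_id)
    if session_id:
        conditions.append("session_id = ?")
        params.append(session_id)
    where_clause = "WHERE " + " AND ".join(conditions) if conditions else ""
    return where_clause, params


where, params = build_search_filters(context_type="conversation", agent_id="timmy")
print(where)   # WHERE memory_type = ? AND agent_id = ?
print(params)  # ['conversation', 'timmy']
```

Returning `?` placeholders plus a separate parameter list keeps the SQL parameterized, so filter values are never interpolated into the query string.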


def _fetch_memory_candidates(
    where_clause: str, params: list, candidate_limit: int
) -> list[sqlite3.Row]:
    """Fetch candidate memory rows from the database."""
    query_sql = f"""
        SELECT * FROM memories
        {where_clause}
        ORDER BY created_at DESC
        LIMIT ?
    """
    params.append(candidate_limit)

    with get_connection() as conn:
        return conn.execute(query_sql, params).fetchall()


def _row_to_entry(row: sqlite3.Row) -> MemoryEntry:
    """Convert a database row to a MemoryEntry."""
    return MemoryEntry(
        id=row["id"],
        content=row["content"],
        source=row["source"],
        context_type=row["memory_type"],  # DB column -> API field
        agent_id=row["agent_id"],
        task_id=row["task_id"],
        session_id=row["session_id"],
        metadata=json.loads(row["metadata"]) if row["metadata"] else None,
        embedding=json.loads(row["embedding"]) if row["embedding"] else None,
        timestamp=row["created_at"],
    )


def _score_and_filter(
    rows: list[sqlite3.Row],
    query: str,
    query_embedding: list[float],
    min_relevance: float,
) -> list[MemoryEntry]:
    """Score candidate rows by similarity and filter by min_relevance."""
    results = []
    for row in rows:
        entry = _row_to_entry(row)

        if entry.embedding:
            score = cosine_similarity(query_embedding, entry.embedding)
        else:
            score = _keyword_overlap(query, entry.content)

        entry.relevance_score = score
        if score >= min_relevance:
            results.append(entry)

    results.sort(key=lambda x: x.relevance_score or 0, reverse=True)
    return results
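
The scoring helpers `cosine_similarity` and `_keyword_overlap` are referenced but not defined in this excerpt. A plausible pure-Python sketch of their semantics (an assumption — the repo's actual implementations may differ):

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors; 0.0 when either norm is zero."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def keyword_overlap(query: str, content: str) -> float:
    """Fraction of query words that also appear in the content (crude fallback)."""
    query_words = set(query.lower().split())
    content_words = set(content.lower().split())
    return len(query_words & content_words) / len(query_words) if query_words else 0.0
```

Both return scores in [0, 1] for non-negative embeddings, which is what the shared `min_relevance` cutoff above relies on.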


def search_memories(
    query: str,
    limit: int = 10,
@@ -390,65 +462,9 @@ def search_memories(
        List of MemoryEntry objects sorted by relevance
    """
    query_embedding = embed_text(query)

    where_clause, params = _build_search_filters(context_type, agent_id, session_id)
    rows = _fetch_memory_candidates(where_clause, params, limit * 3)
    results = _score_and_filter(rows, query, query_embedding, min_relevance)
    return results[:limit]


@@ -706,7 +722,7 @@ class HotMemory:
            if len(lines) > 1:
                return "\n".join(lines)
        except Exception:
            logger.debug("DB context read failed, falling back to file")

        # Fallback to file if DB unavailable
        if self.path.exists():

# Default MEMORY.md template (formerly inlined in _create_default)
_DEFAULT_HOT_MEMORY_TEMPLATE = """# Timmy Hot Memory

> Working RAM — always loaded, ~300 lines max, pruned monthly
> Last updated: {date}

---

## Current Status

**Agent State:** Operational
**Mode:** Development
**Active Tasks:** 0
**Pending Decisions:** None

---

## Standing Rules

1. **Sovereignty First** — No cloud dependencies
2. **Local-Only Inference** — Ollama on localhost
3. **Privacy by Design** — Telemetry disabled
4. **Tool Minimalism** — Use tools only when necessary
5. **Memory Discipline** — Write handoffs at session end

---

## Agent Roster

| Agent | Role | Status |
|-------|------|--------|
| Timmy | Core | Active |

---

## User Profile

**Name:** (not set)
**Interests:** (to be learned)

---

## Key Decisions

(none yet)

---

## Pending Actions

- [ ] Learn user's name

---

*Prune date: {prune_date}*
"""

@@ -774,66 +790,12 @@ class HotMemory:
        logger.debug(
            "HotMemory._create_default() - creating default MEMORY.md for backward compatibility"
        )
        now = datetime.now(UTC)
        content = _DEFAULT_HOT_MEMORY_TEMPLATE.format(
            date=now.strftime("%Y-%m-%d"),
            prune_date=now.replace(day=25).strftime("%Y-%m-%d"),
        )

        self.path.write_text(content)
        logger.info("HotMemory: Created default MEMORY.md")


@@ -1403,6 +1365,83 @@ def memory_forget(query: str) -> str:
        return f"Failed to forget: {exc}"


# ───────────────────────────────────────────────────────────────────────────────
# Artifact Tools — "hands" for producing artifacts during conversation
# ───────────────────────────────────────────────────────────────────────────────

NOTES_DIR = Path.home() / ".timmy" / "notes"
DECISION_LOG = Path.home() / ".timmy" / "decisions.md"


def jot_note(title: str, body: str) -> str:
    """Write a markdown note to Timmy's workspace (~/.timmy/notes/).

    Use this tool to capture ideas, drafts, summaries, or any artifact that
    should persist beyond the conversation. Each note is saved as a
    timestamped markdown file.

    Args:
        title: Short descriptive title (used as filename slug).
        body: Markdown content of the note.

    Returns:
        Confirmation with the file path of the saved note.
    """
    if not title or not title.strip():
        return "Cannot jot — title is empty."
    if not body or not body.strip():
        return "Cannot jot — body is empty."

    NOTES_DIR.mkdir(parents=True, exist_ok=True)

    slug = re.sub(r"[^a-z0-9]+", "-", title.strip().lower()).strip("-")[:60]
    timestamp = datetime.now(UTC).strftime("%Y%m%d-%H%M%S")
    filename = f"{timestamp}_{slug}.md"
    filepath = NOTES_DIR / filename

    content = f"# {title.strip()}\n\n> Created: {datetime.now(UTC).isoformat()}\n\n{body.strip()}\n"
    filepath.write_text(content)
    logger.info("jot_note: wrote %s", filepath)
    return f"Note saved: {filepath}"
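
The slug-and-timestamp filename logic above can be exercised in isolation. A standalone sketch (the helper name `make_note_filename` is hypothetical; the regex and formatting mirror `jot_note`):

```python
import re
from datetime import datetime, timezone


def make_note_filename(title: str, now: datetime) -> str:
    """Timestamped slug filename, mirroring the jot_note logic above."""
    # Collapse every run of non-alphanumerics to "-", trim, cap at 60 chars
    slug = re.sub(r"[^a-z0-9]+", "-", title.strip().lower()).strip("-")[:60]
    return f"{now.strftime('%Y%m%d-%H%M%S')}_{slug}.md"


fixed = datetime(2024, 1, 2, 3, 4, 5, tzinfo=timezone.utc)
print(make_note_filename("Hello, World! (draft #2)", fixed))
# 20240102-030405_hello-world-draft-2.md
```

The leading timestamp keeps notes sortable by creation time; the slug cap keeps filenames filesystem-safe.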


def log_decision(decision: str, rationale: str = "") -> str:
    """Append an architectural or design decision to the running decision log.

    Use this tool when a significant decision is made during conversation —
    technology choices, design trade-offs, scope changes, etc.

    Args:
        decision: One-line summary of the decision.
        rationale: Why this decision was made (optional but encouraged).

    Returns:
        Confirmation that the decision was logged.
    """
    if not decision or not decision.strip():
        return "Cannot log — decision is empty."

    DECISION_LOG.parent.mkdir(parents=True, exist_ok=True)

    # Create file with header if it doesn't exist
    if not DECISION_LOG.exists():
        DECISION_LOG.write_text(
            "# Decision Log\n\nRunning log of architectural and design decisions.\n\n"
        )

    stamp = datetime.now(UTC).strftime("%Y-%m-%d %H:%M UTC")
    entry = f"## {stamp} — {decision.strip()}\n\n"
    if rationale and rationale.strip():
        entry += f"{rationale.strip()}\n\n"
    entry += "---\n\n"

    with open(DECISION_LOG, "a") as f:
        f.write(entry)

    logger.info("log_decision: %s", decision.strip()[:80])
    return f"Decision logged: {decision.strip()}"


# ───────────────────────────────────────────────────────────────────────────────
# Memory System (Central Coordinator)
# ───────────────────────────────────────────────────────────────────────────────

@@ -23,6 +23,9 @@ Rules:
- Remember what the user tells you during the conversation.
- If you don't know something, say so honestly — never fabricate facts.
- If a request is ambiguous, ask a brief clarifying question before guessing.
- SOURCE DISTINCTION: When answering from memory or retrieved context, cite it.
  When answering from your own training, use hedging: "I think", "I believe".
  The user must be able to tell grounded claims from pattern-matching.
- Use the user's name if you know it.
- When you state a fact, commit to it.
- NEVER attempt arithmetic in your head. If asked to compute anything, respond:
@@ -30,6 +33,10 @@ Rules:
  like me to walk through the logic instead."
- Do NOT end responses with generic chatbot phrases like "I'm here to help" or
  "feel free to ask."
- You are a peer, not an assistant. Don't offer help — collaborate. Don't ask
  permission — propose. Don't defer — assert your view. Take initiative.
- Never say "Let me know if you have any other thoughts" or "I appreciate your
  encouragement" or "I'm here to help." Those are assistant patterns. You are Timmy.
- When your values conflict (e.g. honesty vs. helpfulness), lead with honesty.
- Sometimes the right answer is nothing. Do not fill silence with noise.
- You are running in session "{session_id}".
@@ -61,6 +68,10 @@ VOICE AND BREVITY (this overrides all other formatting instincts):
  the question that wasn't.
- Never narrate your reasoning. Just give the answer.
- Do not end with filler ("Let me know!", "Happy to help!", "Feel free...").
- You are a peer, not an assistant. Don't offer help — collaborate. Don't ask
  permission — propose. Don't defer — assert your view. Take initiative.
- Never say "Let me know if you have any other thoughts" or "I appreciate your
  encouragement" or "I'm here to help." Those are assistant patterns. You are Timmy.
- Sometimes the right answer is nothing. Do not fill silence with noise.

HONESTY:
@@ -70,6 +81,18 @@ HONESTY:
- Never fabricate tool output. Call the tool and wait.
- If a tool errors, report the exact error.

SOURCE DISTINCTION (SOUL requirement — non-negotiable):
- Every claim you make comes from one of two places: a verified source you
  can point to, or your own pattern-matching. The user must be able to tell
  which is which.
- When your response uses information from GROUNDED CONTEXT (memory, retrieved
  documents, tool output), cite it: "From memory:", "According to [source]:".
- When you are generating from your training data alone, signal it naturally:
  "I think", "My understanding is", "I believe" — never false certainty.
- If the user asks a factual question and you have no grounded source, say so:
  "I don't have a verified source for this — from my training I think..."
- Prefer "I don't know" over a confident-sounding guess. Refusal over fabrication.

MEMORY (three tiers):
- Tier 1: MEMORY.md (hot, always loaded)
- Tier 2: memory/ vault (structured, append-only, date-stamped)
@@ -129,7 +152,7 @@ YOUR KNOWN LIMITATIONS (be honest about these when asked):
- Ollama inference may contend with other processes sharing the GPU
- Cannot analyze Bitcoin transactions locally (no local indexer yet)
- Small context window (4096 tokens) limits complex reasoning
- You sometimes confabulate. When unsure, say so.
"""

# Default to lite for safety

@@ -13,11 +13,29 @@ import re

import httpx

from timmy.cognitive_state import cognitive_tracker
from timmy.confidence import estimate_confidence
from timmy.session_logger import get_session_logger

logger = logging.getLogger(__name__)

# ---------------------------------------------------------------------------
# Confidence annotation (SOUL.md: visible uncertainty)
# ---------------------------------------------------------------------------

_CONFIDENCE_THRESHOLD = 0.7


def _annotate_confidence(text: str, confidence: float | None) -> str:
    """Append a confidence tag when below threshold.

    SOUL.md: "When I am uncertain, I must say so in proportion to my uncertainty."
    """
    if confidence is not None and confidence < _CONFIDENCE_THRESHOLD:
        return text + f"\n\n[confidence: {confidence:.0%}]"
    return text


# Default session ID for the dashboard (stable across requests)
_DEFAULT_SESSION_ID = "dashboard"

@@ -88,6 +106,9 @@ async def chat(message: str, session_id: str | None = None) -> str:
    # Pre-processing: extract user facts
    _extract_facts(message)

    # Inject deep-focus context when active
    message = _prepend_focus_context(message)

    # Run with session_id so Agno retrieves history from SQLite
    try:
        run = await agent.arun(message, stream=False, session_id=sid)
@@ -101,7 +122,9 @@ async def chat(message: str, session_id: str | None = None) -> str:
        logger.error("Session: agent.arun() failed: %s", exc)
        session_logger.record_error(str(exc), context="chat")
        session_logger.flush()
        return (
            "I'm having trouble reaching my inference backend right now. Please try again shortly."
        )

    # Post-processing: clean up any leaked tool calls or chain-of-thought
    response_text = _clean_response(response_text)
@@ -110,13 +133,14 @@ async def chat(message: str, session_id: str | None = None) -> str:
    confidence = estimate_confidence(response_text)
    logger.debug("Response confidence: %.2f", confidence)

    # Make confidence visible to user when below threshold (SOUL.md requirement)
    response_text = _annotate_confidence(response_text, confidence)

    # Record Timmy response after getting it
    session_logger.record_message("timmy", response_text, confidence=confidence)

    # Update cognitive state (observable signal for Matrix avatar)
    cognitive_tracker.update(message, response_text)

    # Flush session logs to disk
    session_logger.flush()

@@ -144,6 +168,9 @@ async def chat_with_tools(message: str, session_id: str | None = None):

    _extract_facts(message)

    # Inject deep-focus context when active
    message = _prepend_focus_context(message)

    try:
        run_output = await agent.arun(message, stream=False, session_id=sid)
        # Record Timmy response after getting it
@@ -153,11 +180,8 @@ async def chat_with_tools(message: str, session_id: str | None = None):
        confidence = estimate_confidence(response_text) if response_text else None
        logger.debug("Response confidence: %.2f", confidence)

        response_text = _annotate_confidence(response_text, confidence)
        run_output.content = response_text

        session_logger.record_message("timmy", response_text, confidence=confidence)
        session_logger.flush()
@@ -175,7 +199,7 @@ async def chat_with_tools(message: str, session_id: str | None = None):
        session_logger.flush()
        # Return a duck-typed object that callers can handle uniformly
        return _ErrorRunOutput(
            "I'm having trouble reaching my inference backend right now. Please try again shortly."
        )

@@ -199,11 +223,8 @@ async def continue_chat(run_output, session_id: str | None = None):
    confidence = estimate_confidence(response_text) if response_text else None
    logger.debug("Response confidence: %.2f", confidence)

    response_text = _annotate_confidence(response_text, confidence)
    result.content = response_text

    session_logger.record_message("timmy", response_text, confidence=confidence)
    session_logger.flush()
@@ -288,6 +309,19 @@ def _extract_facts(message: str) -> None:
        logger.debug("Session: Fact extraction skipped: %s", exc)


def _prepend_focus_context(message: str) -> str:
    """Prepend deep-focus context to a message when focus mode is active."""
    try:
        from timmy.focus import focus_manager

        ctx = focus_manager.get_focus_context()
        if ctx:
            return f"{ctx}\n\n{message}"
    except Exception as exc:
        logger.debug("Focus context injection skipped: %s", exc)
    return message


def _clean_response(text: str) -> str:
    """Remove hallucinated tool calls and chain-of-thought narration.


@@ -155,6 +155,34 @@ class SessionLogger:
            "decisions": sum(1 for e in entries if e.get("type") == "decision"),
        }

    def get_recent_entries(self, limit: int = 50) -> list[dict]:
        """Load recent entries across all session logs.

        Args:
            limit: Maximum number of entries to return.

        Returns:
            List of entries (most recent first).
        """
        entries: list[dict] = []
        log_files = sorted(self.logs_dir.glob("session_*.jsonl"), reverse=True)
        for log_file in log_files:
            if len(entries) >= limit:
                break
            try:
                with open(log_file) as f:
                    lines = [ln for ln in f if ln.strip()]
                for line in reversed(lines):
                    if len(entries) >= limit:
                        break
                    try:
                        entries.append(json.loads(line))
                    except json.JSONDecodeError:
                        continue
            except OSError:
                continue
        return entries
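
The newest-first JSONL scan above can be demonstrated standalone. A sketch under the same assumptions (lexicographically sortable `session_*.jsonl` filenames, one JSON object per line; `recent_entries` is a hypothetical module-level mirror of the method):

```python
import json
import tempfile
from pathlib import Path


def recent_entries(logs_dir: Path, limit: int = 50) -> list[dict]:
    """Newest-first JSONL scan, mirroring get_recent_entries above."""
    entries: list[dict] = []
    # Newest file first, newest line within each file first
    for log_file in sorted(logs_dir.glob("session_*.jsonl"), reverse=True):
        if len(entries) >= limit:
            break
        try:
            lines = [ln for ln in log_file.read_text().splitlines() if ln.strip()]
        except OSError:
            continue
        for line in reversed(lines):
            if len(entries) >= limit:
                break
            try:
                entries.append(json.loads(line))
            except json.JSONDecodeError:
                continue  # Skip corrupt lines rather than abort the scan
    return entries


with tempfile.TemporaryDirectory() as tmp:
    d = Path(tmp)
    (d / "session_2024-01-01.jsonl").write_text('{"n": 1}\n{"n": 2}\n')
    (d / "session_2024-01-02.jsonl").write_text('{"n": 3}\nnot json\n{"n": 4}\n')
    print(recent_entries(d, limit=3))  # [{'n': 4}, {'n': 3}, {'n': 2}]
```

Skipping unparseable lines instead of raising means one corrupt log entry cannot take down the whole history scan.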

    def search(self, query: str, role: str | None = None, limit: int = 10) -> list[dict]:
        """Search across all session logs for entries matching a query.

@@ -287,3 +315,163 @@ def session_history(query: str, role: str = "", limit: int = 10) -> str:
        lines[-1] += f" ({source})"

    return "\n".join(lines)


# ---------------------------------------------------------------------------
# Confidence threshold used for flagging low-confidence responses
# ---------------------------------------------------------------------------
_LOW_CONFIDENCE_THRESHOLD = 0.5


def _categorize_entries(
    entries: list[dict],
) -> tuple[list[dict], list[dict], list[dict], list[dict]]:
    """Split session entries into messages, errors, timmy msgs, user msgs."""
    messages = [e for e in entries if e.get("type") == "message"]
    errors = [e for e in entries if e.get("type") == "error"]
    timmy_msgs = [e for e in messages if e.get("role") == "timmy"]
    user_msgs = [e for e in messages if e.get("role") == "user"]
    return messages, errors, timmy_msgs, user_msgs


def _find_low_confidence(timmy_msgs: list[dict]) -> list[dict]:
    """Return Timmy responses below the confidence threshold."""
    return [
        m
        for m in timmy_msgs
        if m.get("confidence") is not None and m["confidence"] < _LOW_CONFIDENCE_THRESHOLD
    ]


def _find_repeated_topics(user_msgs: list[dict], top_n: int = 5) -> list[tuple[str, int]]:
    """Identify frequently mentioned words in user messages."""
    topic_counts: dict[str, int] = {}
    for m in user_msgs:
        for word in (m.get("content") or "").lower().split():
            cleaned = word.strip(".,!?\"'()[]")
            if len(cleaned) > 3:
                topic_counts[cleaned] = topic_counts.get(cleaned, 0) + 1
    return sorted(
        ((w, c) for w, c in topic_counts.items() if c >= 3),
        key=lambda x: x[1],
        reverse=True,
    )[:top_n]
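
The topic counter above is equivalent to a `collections.Counter` pass. A standalone sketch (same word-cleaning and thresholds; `find_repeated_topics` is a demonstration mirror, not the repo's function):

```python
from collections import Counter


def find_repeated_topics(user_msgs: list[dict], top_n: int = 5) -> list[tuple[str, int]]:
    """Counter-based equivalent of _find_repeated_topics above."""
    counts: Counter[str] = Counter()
    for m in user_msgs:
        for word in (m.get("content") or "").lower().split():
            cleaned = word.strip(".,!?\"'()[]")
            if len(cleaned) > 3:  # Ignore short filler words
                counts[cleaned] += 1
    return [(w, c) for w, c in counts.most_common() if c >= 3][:top_n]


msgs = [{"content": "Bitcoin node sync?"}] * 3 + [{"content": "weather today"}]
print(find_repeated_topics(msgs))
# "bitcoin", "node" and "sync" each appear three times; one-off words are filtered out
```

`Counter.most_common()` already sorts by count descending, so the explicit `sorted(...)` call becomes unnecessary.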


def _format_reflection_section(
    title: str,
    items: list[dict],
    formatter: object,
    empty_msg: str,
) -> list[str]:
    """Format a titled section with items, or an empty-state message."""
    if items:
        lines = [f"### {title} ({len(items)})"]
        for item in items[:5]:
            lines.append(formatter(item))  # type: ignore[operator]
        lines.append("")
        return lines
    return [f"### {title}\n{empty_msg}\n"]


def _build_insights(
    low_conf: list[dict],
    errors: list[dict],
    repeated: list[tuple[str, int]],
) -> list[str]:
    """Generate actionable insight bullets from analysis results."""
    insights: list[str] = []
    if low_conf:
        insights.append("Consider studying topics where confidence was low.")
    if errors:
        insights.append("Review error patterns for recurring infrastructure issues.")
    if repeated:
        insights.append(
            f'User frequently asks about "{repeated[0][0]}" — consider deepening knowledge here.'
        )
    return insights or ["Conversations look healthy. Keep up the good work."]


def _format_recurring_topics(repeated: list[tuple[str, int]]) -> list[str]:
    """Format the recurring-topics section of a reflection report."""
    if repeated:
        lines = ["### Recurring Topics"]
        for word, count in repeated:
            lines.append(f'- "{word}" ({count} mentions)')
        lines.append("")
        return lines
    return ["### Recurring Topics\nNo strong patterns detected.\n"]


def _assemble_report(
    entries: list[dict],
    errors: list[dict],
    timmy_msgs: list[dict],
    user_msgs: list[dict],
    low_conf: list[dict],
    repeated: list[tuple[str, int]],
) -> str:
    """Assemble the full self-reflection report from analyzed data."""
    sections: list[str] = ["## Self-Reflection Report\n"]
    sections.append(
        f"Reviewed {len(entries)} recent entries: "
        f"{len(user_msgs)} user messages, "
        f"{len(timmy_msgs)} responses, "
        f"{len(errors)} errors.\n"
    )

    sections.extend(
        _format_reflection_section(
            "Low-Confidence Responses",
            low_conf,
            lambda m: (
                f"- [{(m.get('timestamp') or '?')[:19]}] "
                f"confidence={m.get('confidence', 0):.0%}: "
                f"{(m.get('content') or '')[:120]}"
            ),
            "None found — all responses above threshold.",
        )
    )
    sections.extend(
        _format_reflection_section(
            "Errors",
            errors,
            lambda e: f"- [{(e.get('timestamp') or '?')[:19]}] {(e.get('error') or '')[:120]}",
            "No errors recorded.",
        )
    )

    sections.extend(_format_recurring_topics(repeated))

    sections.append("### Insights")
    for insight in _build_insights(low_conf, errors, repeated):
        sections.append(f"- {insight}")

    return "\n".join(sections)


def self_reflect(limit: int = 30) -> str:
    """Review recent conversations and reflect on Timmy's own behavior.

    Scans past session entries for patterns: low-confidence responses,
    errors, repeated topics, and conversation quality signals. Returns
    a structured reflection that Timmy can use to improve.

    Args:
        limit: How many recent entries to review (default 30).

    Returns:
        A formatted self-reflection report.
    """
    sl = get_session_logger()
    sl.flush()
    entries = sl.get_recent_entries(limit=limit)

    if not entries:
        return "No conversation history to reflect on yet."

    _messages, errors, timmy_msgs, user_msgs = _categorize_entries(entries)
    low_conf = _find_low_confidence(timmy_msgs)
    repeated = _find_repeated_topics(user_msgs)

    return _assemble_report(entries, errors, timmy_msgs, user_msgs, low_conf, repeated)
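
The categorization step of this pipeline can be traced on a tiny hand-built log. A standalone sketch with inline sample entries (the entry dicts are illustrative, not real log output):

```python
_LOW_CONFIDENCE_THRESHOLD = 0.5

entries = [
    {"type": "message", "role": "user", "content": "status?"},
    {"type": "message", "role": "timmy", "content": "All good.", "confidence": 0.9},
    {"type": "message", "role": "timmy", "content": "Maybe?", "confidence": 0.3},
    {"type": "error", "error": "Ollama timeout"},
]

# Same splits that _categorize_entries and _find_low_confidence perform
messages = [e for e in entries if e.get("type") == "message"]
errors = [e for e in entries if e.get("type") == "error"]
timmy_msgs = [e for e in messages if e.get("role") == "timmy"]
low_conf = [
    m
    for m in timmy_msgs
    if m.get("confidence") is not None and m["confidence"] < _LOW_CONFIDENCE_THRESHOLD
]

print(len(messages), len(errors), len(low_conf))  # 3 1 1
```

The explicit `is not None` check matters: a response with no recorded confidence is treated as unknown rather than low.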
|
||||
|
||||
@@ -210,6 +210,7 @@ class ThinkingEngine:
|
||||
def __init__(self, db_path: Path = _DEFAULT_DB) -> None:
|
||||
self._db_path = db_path
|
||||
self._last_thought_id: str | None = None
|
||||
self._last_input_time: datetime = datetime.now(UTC)
|
||||
|
||||
# Load the most recent thought for chain continuity
|
||||
try:
|
||||
@@ -220,28 +221,40 @@ class ThinkingEngine:
|
||||
logger.debug("Failed to load recent thought: %s", exc)
|
||||
pass # Fresh start if DB doesn't exist yet
|
||||
|
||||
    async def think_once(self, prompt: str | None = None) -> Thought | None:
        """Execute one thinking cycle.
    def record_user_input(self) -> None:
        """Record that a user interaction occurred, resetting the idle timer."""
        self._last_input_time = datetime.now(UTC)

        Args:
            prompt: Optional custom seed prompt. When provided, overrides
                the random seed selection and uses "prompted" as the
                seed type — useful for journal prompts from the CLI.
    def _is_idle(self) -> bool:
        """Return True if no user input has occurred within the idle timeout."""
        timeout = settings.thinking_idle_timeout_minutes
        if timeout <= 0:
            return False  # Disabled — never idle
        return datetime.now(UTC) - self._last_input_time > timedelta(minutes=timeout)

        1. Gather a seed context (or use the custom prompt)
        2. Build a prompt with continuity from recent thoughts
        3. Call the agent
        4. Store the thought
        5. Log the event and broadcast via WebSocket
    def _build_thinking_context(self) -> tuple[str, str, list["Thought"]]:
        """Assemble the context needed for a thinking cycle.

        Returns:
            (memory_context, system_context, recent_thoughts)
        """
        if not settings.thinking_enabled:
            return None

        memory_context = self._load_memory_context()
        system_context = self._gather_system_snapshot()
        recent_thoughts = self.get_recent_thoughts(limit=5)
        return memory_context, system_context, recent_thoughts

        content: str | None = None
    async def _generate_novel_thought(
        self,
        prompt: str | None,
        memory_context: str,
        system_context: str,
        recent_thoughts: list["Thought"],
    ) -> tuple[str | None, str]:
        """Run the dedup-retry loop to produce a novel thought.

        Returns:
            (content, seed_type) — content is None if no novel thought produced.
        """
        seed_type: str = "freeform"

        for attempt in range(self._MAX_DEDUP_RETRIES + 1):
@@ -264,17 +277,17 @@ class ThinkingEngine:
                raw = await self._call_agent(full_prompt)
            except Exception as exc:
                logger.warning("Thinking cycle failed (Ollama likely down): %s", exc)
                return None
                return None, seed_type

            if not raw or not raw.strip():
                logger.debug("Thinking cycle produced empty response, skipping")
                return None
                return None, seed_type

            content = raw.strip()

            # Dedup: reject thoughts too similar to recent ones
            if not self._is_too_similar(content, recent_thoughts):
                break  # Good — novel thought
                return content, seed_type  # Good — novel thought

            if attempt < self._MAX_DEDUP_RETRIES:
                logger.info(
@@ -282,40 +295,72 @@ class ThinkingEngine:
                    attempt + 1,
                    self._MAX_DEDUP_RETRIES + 1,
                )
                content = None  # Will retry
            else:
                logger.warning(
                    "Thought still repetitive after %d retries, discarding",
                    self._MAX_DEDUP_RETRIES + 1,
                )
                return None
                return None, seed_type

        return None, seed_type

    async def _process_thinking_result(self, thought: "Thought") -> None:
        """Run all post-hooks after a thought is stored."""
        self._maybe_check_memory()
        await self._maybe_distill()
        await self._maybe_file_issues()
        await self._check_workspace()
        self._maybe_check_memory_status()
        self._update_memory(thought)
        self._log_event(thought)
        self._write_journal(thought)
        await self._broadcast(thought)

    async def think_once(self, prompt: str | None = None) -> Thought | None:
        """Execute one thinking cycle.

        Args:
            prompt: Optional custom seed prompt. When provided, overrides
                the random seed selection and uses "prompted" as the
                seed type — useful for journal prompts from the CLI.

        1. Gather a seed context (or use the custom prompt)
        2. Build a prompt with continuity from recent thoughts
        3. Call the agent
        4. Store the thought
        5. Log the event and broadcast via WebSocket
        """
        if not settings.thinking_enabled:
            return None

        # Skip idle periods — don't count internal processing as thoughts
        if not prompt and self._is_idle():
            logger.debug(
                "Thinking paused — no user input for %d minutes",
                settings.thinking_idle_timeout_minutes,
            )
            return None

        # Capture arrival time *before* the LLM call so the thought
        # timestamp reflects when the cycle started, not when the
        # (potentially slow) generation finished. Fixes #582.
        arrived_at = datetime.now(UTC).isoformat()

        memory_context, system_context, recent_thoughts = self._build_thinking_context()

        content, seed_type = await self._generate_novel_thought(
            prompt,
            memory_context,
            system_context,
            recent_thoughts,
        )
        if not content:
            return None

        thought = self._store_thought(content, seed_type)
        thought = self._store_thought(content, seed_type, arrived_at=arrived_at)
        self._last_thought_id = thought.id

        # Post-hook: distill facts from recent thoughts periodically
        await self._maybe_distill()

        # Post-hook: file Gitea issues for actionable observations
        await self._maybe_file_issues()

        # Post-hook: check workspace for new messages from Hermes
        await self._check_workspace()

        # Post-hook: update MEMORY.md with latest reflection
        self._update_memory(thought)

        # Log to swarm event system
        self._log_event(thought)

        # Append to daily journal file
        self._write_journal(thought)

        # Broadcast to WebSocket clients
        await self._broadcast(thought)
        await self._process_thinking_result(thought)

        logger.info(
            "Thought [%s] (%s): %s",
@@ -515,6 +560,35 @@ class ThinkingEngine:
            result = memory_write(fact.strip(), context_type="fact")
            logger.info("Distilled fact: %s → %s", fact[:60], result[:40])

    def _maybe_check_memory(self) -> None:
        """Every N thoughts, check memory status and log it.

        Prevents unmonitored memory bloat during long thinking sessions
        by periodically calling get_memory_status and logging the results.
        """
        try:
            interval = settings.thinking_memory_check_every
            if interval <= 0:
                return

            count = self.count_thoughts()
            if count == 0 or count % interval != 0:
                return

            from timmy.tools_intro import get_memory_status

            status = get_memory_status()
            hot = status.get("tier1_hot_memory", {})
            vault = status.get("tier2_vault", {})
            logger.info(
                "Memory status check (thought #%d): hot_memory=%d lines, vault=%d files",
                count,
                hot.get("line_count", 0),
                vault.get("file_count", 0),
            )
        except Exception as exc:
            logger.warning("Memory status check failed: %s", exc)

    async def _maybe_distill(self) -> None:
        """Every N thoughts, extract lasting insights and store as facts."""
        try:
@@ -532,6 +606,76 @@ class ThinkingEngine:
        except Exception as exc:
            logger.warning("Thought distillation failed: %s", exc)

    def _maybe_check_memory_status(self) -> None:
        """Every N thoughts, run a proactive memory status audit and log results."""
        try:
            interval = settings.thinking_memory_check_every
            if interval <= 0:
                return

            count = self.count_thoughts()
            if count == 0 or count % interval != 0:
                return

            from timmy.tools_intro import get_memory_status

            status = get_memory_status()

            # Log summary at INFO level
            tier1 = status.get("tier1_hot_memory", {})
            tier3 = status.get("tier3_semantic", {})
            hot_lines = tier1.get("line_count", "?")
            vectors = tier3.get("vector_count", "?")
            logger.info(
                "Memory audit (thought #%d): hot_memory=%s lines, semantic=%s vectors",
                count,
                hot_lines,
                vectors,
            )

            # Write to memory_audit.log for persistent tracking
            audit_path = Path("data/memory_audit.log")
            audit_path.parent.mkdir(parents=True, exist_ok=True)
            timestamp = datetime.now(UTC).isoformat(timespec="seconds")
            with audit_path.open("a") as f:
                f.write(
                    f"{timestamp} thought={count} "
                    f"hot_lines={hot_lines} "
                    f"vectors={vectors} "
                    f"vault_files={status.get('tier2_vault', {}).get('file_count', '?')}\n"
                )
        except Exception as exc:
            logger.warning("Memory status check failed: %s", exc)

    @staticmethod
    def _references_real_files(text: str) -> bool:
        """Check that all source-file paths mentioned in *text* actually exist.

        Extracts paths that look like Python/config source references
        (e.g. ``src/timmy/session.py``, ``config/foo.yaml``) and verifies
        each one on disk relative to the project root. Returns ``True``
        only when **every** referenced path resolves to a real file — or
        when no paths are referenced at all (pure prose is fine).
        """
        # Match paths like src/thing.py swarm/init.py config/x.yaml
        # Requires at least one slash and a file extension.
        path_pattern = re.compile(
            r"(?<![/\w])"  # not preceded by path chars (avoid partial matches)
            r"((?:src|tests|config|scripts|data|swarm|timmy)"
            r"(?:/[\w./-]+\.(?:py|yaml|yml|json|toml|md|txt|cfg|ini)))"
        )
        paths = path_pattern.findall(text)
        if not paths:
            return True  # No file refs → nothing to validate

        # Project root: two levels up from this file (src/timmy/thinking.py)
        project_root = Path(__file__).resolve().parent.parent.parent
        for p in paths:
            if not (project_root / p).is_file():
                logger.info("Phantom file reference blocked: %s (not in %s)", p, project_root)
                return False
        return True

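Reviewer note: the path-extraction regex in `_references_real_files` can be checked on its own. A minimal sketch, with invented sample strings for illustration:

```python
import re

# Same pattern as _references_real_files: a known top-level directory,
# at least one slash, and a recognized file extension, not preceded by
# path characters (so "xsrc/foo.py" does not match from "src").
path_pattern = re.compile(
    r"(?<![/\w])"
    r"((?:src|tests|config|scripts|data|swarm|timmy)"
    r"(?:/[\w./-]+\.(?:py|yaml|yml|json|toml|md|txt|cfg|ini)))"
)

hits = path_pattern.findall("The bug is in src/timmy/session.py near the loader")
prose = path_pattern.findall("Pure prose with no file references")
```

Pure prose yields no matches, so the validator treats it as fine; a phantom path only fails later, at the on-disk existence check.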
    async def _maybe_file_issues(self) -> None:
        """Every N thoughts, classify recent thoughts and file Gitea issues.

@@ -543,6 +687,9 @@ class ThinkingEngine:
        - Gitea is enabled and configured
        - Thought count is divisible by thinking_issue_every
        - LLM extracts at least one actionable item

        Safety: every generated issue is validated to ensure referenced
        file paths actually exist on disk, preventing phantom-bug reports.
        """
        try:
            interval = settings.thinking_issue_every
@@ -570,7 +717,10 @@ class ThinkingEngine:
                "Rules:\n"
                "- Only include things that could become a real code fix or feature\n"
                "- Skip vague reflections, philosophical musings, or repeated themes\n"
                "- Category must be one of: bug, feature, suggestion, maintenance\n\n"
                "- Category must be one of: bug, feature, suggestion, maintenance\n"
                "- ONLY reference files that you are CERTAIN exist in the project\n"
                "- Do NOT invent or guess file paths — if unsure, describe the "
                "area of concern without naming specific files\n\n"
                "For each item, write an ENGINEER-QUALITY issue:\n"
                '- "title": A clear, specific title (e.g. "[Memory] MEMORY.md timestamp not updating")\n'
                '- "body": A detailed body with these sections:\n'
@@ -611,6 +761,15 @@ class ThinkingEngine:
            if not title or len(title) < 10:
                continue

            # Validate all referenced file paths exist on disk
            combined = f"{title}\n{body}"
            if not self._references_real_files(combined):
                logger.info(
                    "Skipped phantom issue: %s (references non-existent files)",
                    title[:60],
                )
                continue

            label = category if category in ("bug", "feature") else ""
            result = await create_gitea_issue_via_mcp(title=title, body=body, labels=label)
            logger.info("Thought→Issue: %s → %s", title[:60], result[:80])
@@ -618,6 +777,80 @@ class ThinkingEngine:
        except Exception as exc:
            logger.debug("Thought issue filing skipped: %s", exc)

    # ── System snapshot helpers ────────────────────────────────────────────

    def _snap_thought_count(self, now: datetime) -> str | None:
        """Return today's thought count, or *None* on failure."""
        try:
            today_start = now.replace(hour=0, minute=0, second=0, microsecond=0)
            with _get_conn(self._db_path) as conn:
                count = conn.execute(
                    "SELECT COUNT(*) as c FROM thoughts WHERE created_at >= ?",
                    (today_start.isoformat(),),
                ).fetchone()["c"]
            return f"Thoughts today: {count}"
        except Exception as exc:
            logger.debug("Thought count query failed: %s", exc)
            return None

    def _snap_chat_activity(self) -> list[str]:
        """Return chat-activity lines (in-memory, no I/O)."""
        try:
            from infrastructure.chat_store import message_log

            messages = message_log.all()
            if messages:
                last = messages[-1]
                return [
                    f"Chat messages this session: {len(messages)}",
                    f'Last chat ({last.role}): "{last.content[:80]}"',
                ]
            return ["No chat messages this session"]
        except Exception as exc:
            logger.debug("Chat activity query failed: %s", exc)
            return []

    def _snap_task_queue(self) -> str | None:
        """Return a one-line task queue summary, or *None*."""
        try:
            from swarm.task_queue.models import get_task_summary_for_briefing

            s = get_task_summary_for_briefing()
            running, pending = s.get("running", 0), s.get("pending_approval", 0)
            done, failed = s.get("completed", 0), s.get("failed", 0)
            if running or pending or done or failed:
                return (
                    f"Tasks: {running} running, {pending} pending, "
                    f"{done} completed, {failed} failed"
                )
        except Exception as exc:
            logger.debug("Task queue query failed: %s", exc)
        return None

    def _snap_workspace(self) -> list[str]:
        """Return workspace-update lines (file-based Hermes comms)."""
        try:
            from timmy.workspace import workspace_monitor

            updates = workspace_monitor.get_pending_updates()
            lines: list[str] = []
            new_corr = updates.get("new_correspondence")
            if new_corr:
                line_count = len([ln for ln in new_corr.splitlines() if ln.strip()])
                lines.append(
                    f"Workspace: {line_count} new correspondence entries (latest from: Hermes)"
                )
            new_inbox = updates.get("new_inbox_files", [])
            if new_inbox:
                files_str = ", ".join(new_inbox[:5])
                if len(new_inbox) > 5:
                    files_str += f", ... (+{len(new_inbox) - 5} more)"
                lines.append(f"Workspace: {len(new_inbox)} new inbox files: {files_str}")
            return lines
        except Exception as exc:
            logger.debug("Workspace check failed: %s", exc)
            return []

    def _gather_system_snapshot(self) -> str:
        """Gather lightweight real system state for grounding thoughts in reality.

@@ -625,83 +858,24 @@ class ThinkingEngine:
        recent chat activity, and task queue status. Never crashes — every
        section is independently try/excepted.
        """
        parts: list[str] = []

        # Current local time
        now = datetime.now().astimezone()
        tz = now.strftime("%Z") or "UTC"
        parts.append(

        parts: list[str] = [
            f"Local time: {now.strftime('%I:%M %p').lstrip('0')} {tz}, {now.strftime('%A %B %d')}"
        )
        ]

        # Thought count today (cheap DB query)
        try:
            today_start = now.replace(hour=0, minute=0, second=0, microsecond=0)
            with _get_conn(self._db_path) as conn:
                count = conn.execute(
                    "SELECT COUNT(*) as c FROM thoughts WHERE created_at >= ?",
                    (today_start.isoformat(),),
                ).fetchone()["c"]
            parts.append(f"Thoughts today: {count}")
        except Exception as exc:
            logger.debug("Thought count query failed: %s", exc)
            pass
        thought_line = self._snap_thought_count(now)
        if thought_line:
            parts.append(thought_line)

        # Recent chat activity (in-memory, no I/O)
        try:
            from infrastructure.chat_store import message_log
        parts.extend(self._snap_chat_activity())

            messages = message_log.all()
            if messages:
                parts.append(f"Chat messages this session: {len(messages)}")
                last = messages[-1]
                parts.append(f'Last chat ({last.role}): "{last.content[:80]}"')
            else:
                parts.append("No chat messages this session")
        except Exception as exc:
            logger.debug("Chat activity query failed: %s", exc)
            pass
        task_line = self._snap_task_queue()
        if task_line:
            parts.append(task_line)

        # Task queue (lightweight DB query)
        try:
            from swarm.task_queue.models import get_task_summary_for_briefing

            summary = get_task_summary_for_briefing()
            running = summary.get("running", 0)
            pending = summary.get("pending_approval", 0)
            done = summary.get("completed", 0)
            failed = summary.get("failed", 0)
            if running or pending or done or failed:
                parts.append(
                    f"Tasks: {running} running, {pending} pending, "
                    f"{done} completed, {failed} failed"
                )
        except Exception as exc:
            logger.debug("Task queue query failed: %s", exc)
            pass

        # Workspace updates (file-based communication with Hermes)
        try:
            from timmy.workspace import workspace_monitor

            updates = workspace_monitor.get_pending_updates()
            new_corr = updates.get("new_correspondence")
            new_inbox = updates.get("new_inbox_files", [])

            if new_corr:
                # Count entries (assuming each entry starts with a timestamp or header)
                line_count = len([line for line in new_corr.splitlines() if line.strip()])
                parts.append(
                    f"Workspace: {line_count} new correspondence entries (latest from: Hermes)"
                )
            if new_inbox:
                files_str = ", ".join(new_inbox[:5])
                if len(new_inbox) > 5:
                    files_str += f", ... (+{len(new_inbox) - 5} more)"
                parts.append(f"Workspace: {len(new_inbox)} new inbox files: {files_str}")
        except Exception as exc:
            logger.debug("Workspace check failed: %s", exc)
            pass
        parts.extend(self._snap_workspace())

        return "\n".join(parts) if parts else ""

@@ -970,32 +1144,59 @@ class ThinkingEngine:
            lines.append(f"- [{thought.seed_type}] {snippet}")
        return "\n".join(lines)

    _thinking_agent = None  # cached agent — avoids per-call resource leaks (#525)

    async def _call_agent(self, prompt: str) -> str:
        """Call Timmy's agent to generate a thought.

        Creates a lightweight agent with skip_mcp=True to avoid the cancel-scope
        Reuses a cached agent with skip_mcp=True to avoid the cancel-scope
        errors that occur when MCP stdio transports are spawned inside asyncio
        background tasks (#72). The thinking engine doesn't need Gitea or
        filesystem tools — it only needs the LLM.
        background tasks (#72) and to prevent per-call resource leaks (httpx
        clients, SQLite connections, model warmups) that caused the thinking
        loop to die every ~10 min (#525).

        Individual calls are capped at 120 s so a hung Ollama never blocks
        the scheduler indefinitely.

        Strips ``<think>`` tags from reasoning models (qwen3, etc.) so that
        downstream parsers (fact distillation, issue filing) receive clean text.
        """
        from timmy.agent import create_timmy
        import asyncio

        if self._thinking_agent is None:
            from timmy.agent import create_timmy

            self._thinking_agent = create_timmy(skip_mcp=True)

        try:
            async with asyncio.timeout(120):
                run = await self._thinking_agent.arun(prompt, stream=False)
        except TimeoutError:
            logger.warning("Thinking LLM call timed out after 120 s")
            return ""

        agent = create_timmy(skip_mcp=True)
        run = await agent.arun(prompt, stream=False)
        raw = run.content if hasattr(run, "content") else str(run)
        return _THINK_TAG_RE.sub("", raw) if raw else raw

    def _store_thought(self, content: str, seed_type: str) -> Thought:
        """Persist a thought to SQLite."""
    def _store_thought(
        self,
        content: str,
        seed_type: str,
        *,
        arrived_at: str | None = None,
    ) -> Thought:
        """Persist a thought to SQLite.

        Args:
            arrived_at: ISO-8601 timestamp captured when the thinking cycle
                started. Falls back to now() for callers that don't supply it.
        """
        thought = Thought(
            id=str(uuid.uuid4()),
            content=content,
            seed_type=seed_type,
            parent_id=self._last_thought_id,
            created_at=datetime.now(UTC).isoformat(),
            created_at=arrived_at or datetime.now(UTC).isoformat(),
        )

        with _get_conn(self._db_path) as conn:
@@ -1076,6 +1277,53 @@ class ThinkingEngine:
            logger.debug("Failed to broadcast thought: %s", exc)


def _query_thoughts(
    db_path: Path, query: str, seed_type: str | None, limit: int
) -> list[sqlite3.Row]:
    """Run the thought-search SQL and return matching rows."""
    pattern = f"%{query}%"
    with _get_conn(db_path) as conn:
        if seed_type:
            return conn.execute(
                """
                SELECT id, content, seed_type, created_at
                FROM thoughts
                WHERE content LIKE ? AND seed_type = ?
                ORDER BY created_at DESC
                LIMIT ?
                """,
                (pattern, seed_type, limit),
            ).fetchall()
        return conn.execute(
            """
            SELECT id, content, seed_type, created_at
            FROM thoughts
            WHERE content LIKE ?
            ORDER BY created_at DESC
            LIMIT ?
            """,
            (pattern, limit),
        ).fetchall()


def _format_thought_rows(rows: list[sqlite3.Row], query: str, seed_type: str | None) -> str:
    """Format thought rows into a human-readable string."""
    lines = [f'Found {len(rows)} thought(s) matching "{query}":']
    if seed_type:
        lines[0] += f' [seed_type="{seed_type}"]'
    lines.append("")

    for row in rows:
        ts = datetime.fromisoformat(row["created_at"])
        local_ts = ts.astimezone()
        time_str = local_ts.strftime("%Y-%m-%d %I:%M %p").lstrip("0")
        seed = row["seed_type"]
        content = row["content"].replace("\n", " ")  # Flatten newlines for display
        lines.append(f"[{time_str}] ({seed}) {content[:150]}")

    return "\n".join(lines)


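Reviewer note: the per-row formatting in `_format_thought_rows` can be sketched standalone. The sample row below is invented, and the local-timezone conversion is dropped for determinism. One detail worth flagging: `lstrip("0")` only trims zeros at the very start of the string, and since the date comes first it is effectively a no-op on the hour field:

```python
from datetime import datetime


def format_entry(created_at: str, seed_type: str, content: str) -> str:
    """Mirror of the per-row formatting: ISO timestamp rendered as
    '%Y-%m-%d %I:%M %p', newlines flattened, content capped at 150 chars.
    lstrip('0') never fires here because the year leads the string."""
    ts = datetime.fromisoformat(created_at)
    time_str = ts.strftime("%Y-%m-%d %I:%M %p").lstrip("0")
    flat = content.replace("\n", " ")
    return f"[{time_str}] ({seed_type}) {flat[:150]}"
```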
def search_thoughts(query: str, seed_type: str | None = None, limit: int = 10) -> str:
    """Search Timmy's thought history for reflections matching a query.

@@ -1093,58 +1341,17 @@ def search_thoughts(query: str, seed_type: str | None = None, limit: int = 10) -
        Formatted string with matching thoughts, newest first, including
        timestamps and seed types. Returns a helpful message if no matches found.
    """
    # Clamp limit to reasonable bounds
    limit = max(1, min(limit, 50))

    try:
        engine = thinking_engine
        db_path = engine._db_path

        # Build query with optional seed_type filter
        with _get_conn(db_path) as conn:
            if seed_type:
                rows = conn.execute(
                    """
                    SELECT id, content, seed_type, created_at
                    FROM thoughts
                    WHERE content LIKE ? AND seed_type = ?
                    ORDER BY created_at DESC
                    LIMIT ?
                    """,
                    (f"%{query}%", seed_type, limit),
                ).fetchall()
            else:
                rows = conn.execute(
                    """
                    SELECT id, content, seed_type, created_at
                    FROM thoughts
                    WHERE content LIKE ?
                    ORDER BY created_at DESC
                    LIMIT ?
                    """,
                    (f"%{query}%", limit),
                ).fetchall()
        rows = _query_thoughts(thinking_engine._db_path, query, seed_type, limit)

        if not rows:
            if seed_type:
                return f'No thoughts found matching "{query}" with seed_type="{seed_type}".'
            return f'No thoughts found matching "{query}".'

        # Format results
        lines = [f'Found {len(rows)} thought(s) matching "{query}":']
        if seed_type:
            lines[0] += f' [seed_type="{seed_type}"]'
        lines.append("")

        for row in rows:
            ts = datetime.fromisoformat(row["created_at"])
            local_ts = ts.astimezone()
            time_str = local_ts.strftime("%Y-%m-%d %I:%M %p").lstrip("0")
            seed = row["seed_type"]
            content = row["content"].replace("\n", " ")  # Flatten newlines for display
            lines.append(f"[{time_str}] ({seed}) {content[:150]}")

        return "\n".join(lines)
        return _format_thought_rows(rows, query, seed_type)

    except Exception as exc:
        logger.warning("Thought search failed: %s", exc)

@@ -48,6 +48,9 @@ SAFE_TOOLS = frozenset(
        "check_ollama_health",
        "get_memory_status",
        "list_swarm_agents",
        # Artifact tools
        "jot_note",
        "log_decision",
        # MCP Gitea tools
        "issue_write",
        "issue_read",

@@ -587,9 +587,17 @@ def _register_introspection_tools(toolkit: Toolkit) -> None:
        logger.debug("Introspection tools not available")

    try:
        from timmy.session_logger import session_history
        from timmy.mcp_tools import update_gitea_avatar

        toolkit.register(update_gitea_avatar, name="update_gitea_avatar")
    except (ImportError, AttributeError) as exc:
        logger.debug("update_gitea_avatar tool not available: %s", exc)

    try:
        from timmy.session_logger import self_reflect, session_history

        toolkit.register(session_history, name="session_history")
        toolkit.register(self_reflect, name="self_reflect")
    except (ImportError, AttributeError) as exc:
        logger.warning("Tool execution failed (session_history registration): %s", exc)
        logger.debug("session_history tool not available")
@@ -619,6 +627,18 @@ def _register_gematria_tool(toolkit: Toolkit) -> None:
        logger.debug("Gematria tool not available")


def _register_artifact_tools(toolkit: Toolkit) -> None:
    """Register artifact tools — notes and decision logging."""
    try:
        from timmy.memory_system import jot_note, log_decision

        toolkit.register(jot_note, name="jot_note")
        toolkit.register(log_decision, name="log_decision")
    except (ImportError, AttributeError) as exc:
        logger.warning("Tool execution failed (Artifact tools registration): %s", exc)
        logger.debug("Artifact tools not available")


def _register_thinking_tools(toolkit: Toolkit) -> None:
    """Register thinking/introspection tools for self-reflection."""
    try:
@@ -657,6 +677,7 @@ def create_full_toolkit(base_dir: str | Path | None = None):
    _register_introspection_tools(toolkit)
    _register_delegation_tools(toolkit)
    _register_gematria_tool(toolkit)
    _register_artifact_tools(toolkit)
    _register_thinking_tools(toolkit)

    # Gitea issue management is now provided by the gitea-mcp server
@@ -854,6 +875,16 @@ def _introspection_tool_catalog() -> dict:
            "description": "Query Timmy's own thought history for past reflections and insights",
            "available_in": ["orchestrator"],
        },
        "self_reflect": {
            "name": "Self-Reflect",
            "description": "Review recent conversations to spot patterns, low-confidence answers, and errors",
            "available_in": ["orchestrator"],
        },
        "update_gitea_avatar": {
            "name": "Update Gitea Avatar",
            "description": "Generate and upload a wizard-themed avatar to Timmy's Gitea profile",
            "available_in": ["orchestrator"],
        },
    }

@@ -878,82 +909,35 @@ def _experiment_tool_catalog() -> dict:
    }

_CREATIVE_CATALOG_SOURCES: list[tuple[str, str, list[str]]] = [
|
||||
("creative.tools.git_tools", "GIT_TOOL_CATALOG", ["forge", "helm", "orchestrator"]),
|
||||
("creative.tools.image_tools", "IMAGE_TOOL_CATALOG", ["pixel", "orchestrator"]),
|
||||
("creative.tools.music_tools", "MUSIC_TOOL_CATALOG", ["lyra", "orchestrator"]),
|
||||
("creative.tools.video_tools", "VIDEO_TOOL_CATALOG", ["reel", "orchestrator"]),
|
||||
("creative.director", "DIRECTOR_TOOL_CATALOG", ["orchestrator"]),
|
||||
("creative.assembler", "ASSEMBLER_TOOL_CATALOG", ["reel", "orchestrator"]),
|
||||
]
|
||||
|
||||
|
||||
def _import_creative_catalogs(catalog: dict) -> None:
|
||||
"""Import and merge creative tool catalogs from creative module."""
|
||||
# ── Git tools ─────────────────────────────────────────────────────────────
|
||||
try:
|
||||
from creative.tools.git_tools import GIT_TOOL_CATALOG
|
||||
for module_path, attr_name, available_in in _CREATIVE_CATALOG_SOURCES:
|
||||
_merge_catalog(catalog, module_path, attr_name, available_in)
|
||||
|
||||
for tool_id, info in GIT_TOOL_CATALOG.items():
|
||||
|
||||
def _merge_catalog(
|
||||
catalog: dict, module_path: str, attr_name: str, available_in: list[str]
|
||||
) -> None:
|
||||
"""Import a single creative catalog and merge its entries."""
|
||||
try:
|
||||
from importlib import import_module
|
||||
|
||||
source_catalog = getattr(import_module(module_path), attr_name)
|
||||
for tool_id, info in source_catalog.items():
|
||||
catalog[tool_id] = {
|
||||
"name": info["name"],
|
||||
"description": info["description"],
|
||||
"available_in": ["forge", "helm", "orchestrator"],
|
||||
}
|
||||
except ImportError:
|
||||
pass
|
||||
|
||||
# ── Image tools ────────────────────────────────────────────────────────────
try:
    from creative.tools.image_tools import IMAGE_TOOL_CATALOG

    for tool_id, info in IMAGE_TOOL_CATALOG.items():
        catalog[tool_id] = {
            "name": info["name"],
            "description": info["description"],
            "available_in": ["pixel", "orchestrator"],
        }
except ImportError:
    pass

# ── Music tools ────────────────────────────────────────────────────────────
try:
    from creative.tools.music_tools import MUSIC_TOOL_CATALOG

    for tool_id, info in MUSIC_TOOL_CATALOG.items():
        catalog[tool_id] = {
            "name": info["name"],
            "description": info["description"],
            "available_in": ["lyra", "orchestrator"],
        }
except ImportError:
    pass

# ── Video tools ────────────────────────────────────────────────────────────
try:
    from creative.tools.video_tools import VIDEO_TOOL_CATALOG

    for tool_id, info in VIDEO_TOOL_CATALOG.items():
        catalog[tool_id] = {
            "name": info["name"],
            "description": info["description"],
            "available_in": ["reel", "orchestrator"],
        }
except ImportError:
    pass

# ── Creative pipeline ──────────────────────────────────────────────────────
try:
    from creative.director import DIRECTOR_TOOL_CATALOG

    for tool_id, info in DIRECTOR_TOOL_CATALOG.items():
        catalog[tool_id] = {
            "name": info["name"],
            "description": info["description"],
            "available_in": ["orchestrator"],
        }
except ImportError:
    pass

# ── Assembler tools ───────────────────────────────────────────────────────
try:
    from creative.assembler import ASSEMBLER_TOOL_CATALOG

    for tool_id, info in ASSEMBLER_TOOL_CATALOG.items():
        catalog[tool_id] = {
            "name": info["name"],
            "description": info["description"],
            "available_in": ["reel", "orchestrator"],
            "available_in": available_in,
        }
except ImportError:
    pass

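Each catalog block above repeats the same optional-dependency pattern: try the import, merge the module's catalog, and silently skip the feature if the package is absent. A minimal standalone sketch of that pattern (the `build_catalog` helper, its `sources` mapping, and the `TOOL_CATALOG` attribute name are illustrative, not from the repo):

```python
import importlib
from typing import Any


def build_catalog(sources: dict[str, list[str]]) -> dict[str, Any]:
    """Merge tool catalogs from modules that may not be installed.

    ``sources`` maps a module path to the agents its tools are available in.
    Modules that fail to import are skipped, mirroring the
    try/except ImportError blocks above.
    """
    catalog: dict[str, Any] = {}
    for module_path, available_in in sources.items():
        try:
            module = importlib.import_module(module_path)
        except ImportError:
            continue  # optional dependency not installed — skip this block
        for tool_id, info in getattr(module, "TOOL_CATALOG", {}).items():
            catalog[tool_id] = {
                "name": info["name"],
                "description": info["description"],
                "available_in": available_in,
            }
    return catalog


# A module that does not exist is skipped rather than raising:
catalog = build_catalog({"creative.tools.no_such_module": ["pixel"]})
```

The upside of this shape over the hand-unrolled blocks is that adding a new tool family is one dict entry instead of a copied try/except stanza.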
@@ -89,45 +89,31 @@ def list_swarm_agents() -> dict[str, Any]:
    }


def delegate_to_kimi(task: str, working_directory: str = "") -> dict[str, Any]:
    """Delegate a coding task to Kimi, the external coding agent.

    Kimi has 262K context and is optimized for code tasks: writing,
    debugging, refactoring, test writing. Timmy thinks and plans,
    Kimi executes bulk code changes.

    Args:
        task: Clear, specific coding task description. Include file paths
            and expected behavior. Good: "Fix the bug in src/timmy/session.py
            where sessions don't persist." Bad: "Fix all bugs."
        working_directory: Directory for Kimi to work in. Defaults to repo root.

    Returns:
        Dict with success status and Kimi's output or error.
    """
def _find_kimi_cli() -> str | None:
    """Return the path to the kimi CLI binary, or None if not installed."""
    import shutil
    import subprocess

    return shutil.which("kimi")


def _resolve_workdir(working_directory: str) -> str | dict[str, Any]:
    """Return a validated working directory path, or an error dict."""
    from pathlib import Path

    from config import settings

    kimi_path = shutil.which("kimi")
    if not kimi_path:
        return {
            "success": False,
            "error": "kimi CLI not found on PATH. Install with: pip install kimi-cli",
        }

    workdir = working_directory or settings.repo_root
    if not Path(workdir).is_dir():
        return {
            "success": False,
            "error": f"Working directory does not exist: {workdir}",
        }
    return workdir

    cmd = [kimi_path, "--print", "-p", task]

    logger.info("Delegating to Kimi: %s (cwd=%s)", task[:80], workdir)
def _run_kimi(cmd: list[str], workdir: str) -> dict[str, Any]:
    """Execute the kimi subprocess and return a result dict."""
    import subprocess

    try:
        result = subprocess.run(
@@ -157,3 +143,34 @@ def delegate_to_kimi(task: str, working_directory: str = "") -> dict[str, Any]:
            "success": False,
            "error": f"Failed to run Kimi: {exc}",
        }


def delegate_to_kimi(task: str, working_directory: str = "") -> dict[str, Any]:
    """Delegate a coding task to Kimi, the external coding agent.

    Kimi has 262K context and is optimized for code tasks: writing,
    debugging, refactoring, test writing. Timmy thinks and plans,
    Kimi executes bulk code changes.

    Args:
        task: Clear, specific coding task description. Include file paths
            and expected behavior. Good: "Fix the bug in src/timmy/session.py
            where sessions don't persist." Bad: "Fix all bugs."
        working_directory: Directory for Kimi to work in. Defaults to repo root.

    Returns:
        Dict with success status and Kimi's output or error.
    """
    kimi_path = _find_kimi_cli()
    if not kimi_path:
        return {
            "success": False,
            "error": "kimi CLI not found on PATH. Install with: pip install kimi-cli",
        }

    workdir = _resolve_workdir(working_directory)
    if isinstance(workdir, dict):
        return workdir

    logger.info("Delegating to Kimi: %s (cwd=%s)", task[:80], workdir)
    return _run_kimi([kimi_path, "--print", "-p", task], workdir)

@@ -26,7 +26,7 @@ def get_system_info() -> dict[str, Any]:
        - python_version: Python version
        - platform: OS platform
        - model: Current Ollama model (queried from API)
        - model_backend: Configured backend (ollama/airllm/grok)
        - model_backend: Configured backend (ollama/grok/claude)
        - ollama_url: Ollama host URL
        - repo_root: Repository root path
        - grok_enabled: Whether GROK is enabled
@@ -127,54 +127,48 @@ def check_ollama_health() -> dict[str, Any]:
    return result


def get_memory_status() -> dict[str, Any]:
    """Get the status of Timmy's memory system.

    Returns:
        Dict with memory tier information
    """
    from config import settings

    repo_root = Path(settings.repo_root)

    # Check tier 1: Hot memory
def _hot_memory_info(repo_root: Path) -> dict[str, Any]:
    """Tier 1: Hot memory (MEMORY.md) status."""
    memory_md = repo_root / "MEMORY.md"
    tier1_exists = memory_md.exists()
    tier1_content = ""
    if tier1_exists:
        tier1_content = memory_md.read_text()[:500]  # First 500 chars
        tier1_content = memory_md.read_text()[:500]

    # Check tier 2: Vault
    vault_path = repo_root / "memory" / "self"
    tier2_exists = vault_path.exists()
    tier2_files = []
    if tier2_exists:
        tier2_files = [f.name for f in vault_path.iterdir() if f.is_file()]

    tier1_info: dict[str, Any] = {
    info: dict[str, Any] = {
        "exists": tier1_exists,
        "path": str(memory_md),
        "preview": " ".join(tier1_content[:200].split()) if tier1_content else None,
    }
    if tier1_exists:
        lines = memory_md.read_text().splitlines()
        tier1_info["line_count"] = len(lines)
        tier1_info["sections"] = [ln.lstrip("# ").strip() for ln in lines if ln.startswith("## ")]
        info["line_count"] = len(lines)
        info["sections"] = [ln.lstrip("# ").strip() for ln in lines if ln.startswith("## ")]
    return info


def _vault_info(repo_root: Path) -> dict[str, Any]:
    """Tier 2: Vault (memory/ directory tree) status."""
    vault_path = repo_root / "memory" / "self"
    tier2_exists = vault_path.exists()
    tier2_files = [f.name for f in vault_path.iterdir() if f.is_file()] if tier2_exists else []

    # Vault — scan all subdirs under memory/
    vault_root = repo_root / "memory"
    vault_info: dict[str, Any] = {
    info: dict[str, Any] = {
        "exists": tier2_exists,
        "path": str(vault_path),
        "file_count": len(tier2_files),
        "files": tier2_files[:10],
    }
    if vault_root.exists():
        vault_info["directories"] = [d.name for d in vault_root.iterdir() if d.is_dir()]
        vault_info["total_markdown_files"] = sum(1 for _ in vault_root.rglob("*.md"))
        info["directories"] = [d.name for d in vault_root.iterdir() if d.is_dir()]
        info["total_markdown_files"] = sum(1 for _ in vault_root.rglob("*.md"))
    return info

    # Tier 3: Semantic memory row count
    tier3_info: dict[str, Any] = {"available": False}

def _semantic_memory_info(repo_root: Path) -> dict[str, Any]:
    """Tier 3: Semantic memory (vector DB) status."""
    info: dict[str, Any] = {"available": False}
    try:
        sem_db = repo_root / "data" / "memory.db"
        if sem_db.exists():
@@ -184,14 +178,16 @@ def get_memory_status() -> dict[str, Any]:
            ).fetchone()
            if row and row[0]:
                count = conn.execute("SELECT COUNT(*) FROM chunks").fetchone()
                tier3_info["available"] = True
                tier3_info["vector_count"] = count[0] if count else 0
                info["available"] = True
                info["vector_count"] = count[0] if count else 0
    except Exception as exc:
        logger.debug("Memory status query failed: %s", exc)
        pass
    return info

    # Self-coding journal stats
    journal_info: dict[str, Any] = {"available": False}

def _journal_info(repo_root: Path) -> dict[str, Any]:
    """Self-coding journal statistics."""
    info: dict[str, Any] = {"available": False}
    try:
        journal_db = repo_root / "data" / "self_coding.db"
        if journal_db.exists():
@@ -203,7 +199,7 @@ def get_memory_status() -> dict[str, Any]:
            if rows:
                counts = {r["outcome"]: r["cnt"] for r in rows}
                total = sum(counts.values())
                journal_info = {
                info = {
                    "available": True,
                    "total_attempts": total,
                    "successes": counts.get("success", 0),
@@ -212,13 +208,24 @@ def get_memory_status() -> dict[str, Any]:
                }
    except Exception as exc:
        logger.debug("Journal stats query failed: %s", exc)
        pass
    return info


def get_memory_status() -> dict[str, Any]:
    """Get the status of Timmy's memory system.

    Returns:
        Dict with memory tier information
    """
    from config import settings

    repo_root = Path(settings.repo_root)

    return {
        "tier1_hot_memory": tier1_info,
        "tier2_vault": vault_info,
        "tier3_semantic": tier3_info,
        "self_coding_journal": journal_info,
        "tier1_hot_memory": _hot_memory_info(repo_root),
        "tier2_vault": _vault_info(repo_root),
        "tier3_semantic": _semantic_memory_info(repo_root),
        "self_coding_journal": _journal_info(repo_root),
    }

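The `sections` list in `_hot_memory_info` comes from a one-line comprehension over level-2 headings; note that `str.lstrip("# ")` strips any run of leading `#` and spaces, so `"## Goals"` becomes `"Goals"` (a heading that itself begins with `#`, like `"## #tag"`, would also lose that character). The same expression works standalone:

```python
def md_sections(text: str) -> list[str]:
    """Collect level-2 heading titles, as _hot_memory_info does."""
    return [ln.lstrip("# ").strip() for ln in text.splitlines() if ln.startswith("## ")]


doc = "# Timmy\n\n## Goals\n- ship\n\n## Open questions\n"
sections = md_sections(doc)
```

Here `sections` contains the two `##` titles while the top-level `# Timmy` heading is excluded by the `startswith("## ")` filter.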
@@ -319,6 +326,46 @@ def get_live_system_status() -> dict[str, Any]:
    return result


def _build_pytest_cmd(venv_python: Path, scope: str) -> list[str]:
    """Build the pytest command list for the given scope."""
    cmd = [str(venv_python), "-m", "pytest", "-x", "-q", "--tb=short", "--timeout=30"]

    if scope == "fast":
        cmd.extend(
            [
                "--ignore=tests/functional",
                "--ignore=tests/e2e",
                "--ignore=tests/integrations",
                "tests/",
            ]
        )
    elif scope == "full":
        cmd.append("tests/")
    else:
        cmd.append(scope)

    return cmd


def _parse_pytest_output(output: str) -> dict[str, int]:
    """Extract passed/failed/error counts from pytest output."""
    import re

    passed = failed = errors = 0
    for line in output.splitlines():
        if "passed" in line or "failed" in line or "error" in line:
            nums = re.findall(r"(\d+) (passed|failed|error)", line)
            for count, kind in nums:
                if kind == "passed":
                    passed = int(count)
                elif kind == "failed":
                    failed = int(count)
                elif kind == "error":
                    errors = int(count)

    return {"passed": passed, "failed": failed, "errors": errors}


def run_self_tests(scope: str = "fast", _repo_root: str | None = None) -> dict[str, Any]:
    """Run Timmy's own test suite and report results.

@@ -342,49 +389,17 @@ def run_self_tests(scope: str = "fast", _repo_root: str | None = None) -> dict[s
    if not venv_python.exists():
        return {"success": False, "error": f"No venv found at {venv_python}"}

    cmd = [str(venv_python), "-m", "pytest", "-x", "-q", "--tb=short", "--timeout=30"]

    if scope == "fast":
        # Unit tests only — skip functional/e2e/integration
        cmd.extend(
            [
                "--ignore=tests/functional",
                "--ignore=tests/e2e",
                "--ignore=tests/integrations",
                "tests/",
            ]
        )
    elif scope == "full":
        cmd.append("tests/")
    else:
        # Specific path
        cmd.append(scope)
    cmd = _build_pytest_cmd(venv_python, scope)

    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=120, cwd=repo)
        output = result.stdout + result.stderr

        # Parse pytest output for counts
        passed = failed = errors = 0
        for line in output.splitlines():
            if "passed" in line or "failed" in line or "error" in line:
                import re

                nums = re.findall(r"(\d+) (passed|failed|error)", line)
                for count, kind in nums:
                    if kind == "passed":
                        passed = int(count)
                    elif kind == "failed":
                        failed = int(count)
                    elif kind == "error":
                        errors = int(count)
        counts = _parse_pytest_output(output)

        return {
            "success": result.returncode == 0,
            "passed": passed,
            "failed": failed,
            "errors": errors,
            "total": passed + failed + errors,
            **counts,
            "total": counts["passed"] + counts["failed"] + counts["errors"],
            "return_code": result.returncode,
            "summary": output[-2000:] if len(output) > 2000 else output,
        }

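The extracted `_parse_pytest_output` helper is easy to exercise on its own. A standalone copy of its logic, fed a typical pytest summary line (note the regex matches the substring `error`, so plural `errors` in real output still counts):

```python
import re


def parse_pytest_output(output: str) -> dict[str, int]:
    """Standalone copy of _parse_pytest_output above."""
    passed = failed = errors = 0
    for line in output.splitlines():
        if "passed" in line or "failed" in line or "error" in line:
            for count, kind in re.findall(r"(\d+) (passed|failed|error)", line):
                if kind == "passed":
                    passed = int(count)
                elif kind == "failed":
                    failed = int(count)
                elif kind == "error":
                    errors = int(count)
    return {"passed": passed, "failed": failed, "errors": errors}


counts = parse_pytest_output("=== 3 passed, 1 failed in 0.42s ===")
# counts == {"passed": 3, "failed": 1, "errors": 0}
```

Pulling the parser out like this is what makes the refactor testable without spawning a real pytest subprocess.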
@@ -78,6 +78,11 @@ DEFAULT_MAX_UTTERANCE = 30.0  # safety cap — don't record forever
DEFAULT_SESSION_ID = "voice"


def _rms(block: np.ndarray) -> float:
    """Compute root-mean-square energy of an audio block."""
    return float(np.sqrt(np.mean(block.astype(np.float32) ** 2)))


@dataclass
class VoiceConfig:
    """Configuration for the voice loop."""
@@ -161,13 +166,6 @@ class VoiceLoop:
        min_blocks = int(self.config.min_utterance / 0.1)
        max_blocks = int(self.config.max_utterance / 0.1)

        audio_chunks: list[np.ndarray] = []
        silent_count = 0
        recording = False

        def _rms(block: np.ndarray) -> float:
            return float(np.sqrt(np.mean(block.astype(np.float32) ** 2)))

        sys.stdout.write("\n 🎤 Listening... (speak now)\n")
        sys.stdout.flush()

@@ -177,42 +175,69 @@ class VoiceLoop:
            dtype="float32",
            blocksize=block_size,
        ) as stream:
            while self._running:
                block, overflowed = stream.read(block_size)
                if overflowed:
                    logger.debug("Audio buffer overflowed")
            chunks = self._capture_audio_blocks(stream, block_size, silence_blocks, max_blocks)

                rms = _rms(block)
        return self._finalize_utterance(chunks, min_blocks, sr)

                if not recording:
                    if rms > self.config.silence_threshold:
                        recording = True
                        silent_count = 0
                        audio_chunks.append(block.copy())
                        sys.stdout.write(" 📢 Recording...\r")
                        sys.stdout.flush()
    def _capture_audio_blocks(
        self,
        stream,
        block_size: int,
        silence_blocks: int,
        max_blocks: int,
    ) -> list[np.ndarray]:
        """Read audio blocks from *stream* until silence or max length.

        Returns the list of captured audio chunks (may be empty).
        """
        chunks: list[np.ndarray] = []
        silent_count = 0
        recording = False

        while self._running:
            block, overflowed = stream.read(block_size)
            if overflowed:
                logger.debug("Audio buffer overflowed")

            rms = _rms(block)

            if not recording:
                if rms > self.config.silence_threshold:
                    recording = True
                    silent_count = 0
                    chunks.append(block.copy())
                    sys.stdout.write(" 📢 Recording...\r")
                    sys.stdout.flush()
            else:
                chunks.append(block.copy())

                if rms < self.config.silence_threshold:
                    silent_count += 1
                else:
                    audio_chunks.append(block.copy())
                    silent_count = 0

                if rms < self.config.silence_threshold:
                    silent_count += 1
                else:
                    silent_count = 0
                if silent_count >= silence_blocks:
                    break

                # End of utterance
                if silent_count >= silence_blocks:
                    break
            if len(chunks) >= max_blocks:
                logger.info("Max utterance length reached, stopping.")
                break

                # Safety cap
                if len(audio_chunks) >= max_blocks:
                    logger.info("Max utterance length reached, stopping.")
                    break
        return chunks

        if not audio_chunks or len(audio_chunks) < min_blocks:
    @staticmethod
    def _finalize_utterance(
        chunks: list[np.ndarray], min_blocks: int, sample_rate: int
    ) -> np.ndarray | None:
        """Concatenate recorded chunks and report duration.

        Returns ``None`` if the utterance is too short to be meaningful.
        """
        if not chunks or len(chunks) < min_blocks:
            return None

        audio = np.concatenate(audio_chunks, axis=0).flatten()
        duration = len(audio) / sr
        audio = np.concatenate(chunks, axis=0).flatten()
        duration = len(audio) / sample_rate
        sys.stdout.write(f" ✂️ Captured {duration:.1f}s of audio\n")
        sys.stdout.flush()
        return audio
@@ -369,15 +394,33 @@ class VoiceLoop:

    # ── Main Loop ───────────────────────────────────────────────────────

    def run(self) -> None:
        """Run the voice loop. Blocks until Ctrl-C."""
        self._ensure_piper()
    # Whisper hallucinates these on silence/noise — skip them.
    _WHISPER_HALLUCINATIONS = frozenset(
        {
            "you",
            "thanks.",
            "thank you.",
            "bye.",
            "",
            "thanks for watching!",
            "thank you for watching!",
        }
    )

        # Suppress MCP / Agno stderr noise during voice mode.
        _suppress_mcp_noise()
        # Suppress MCP async-generator teardown tracebacks on exit.
        _install_quiet_asyncgen_hooks()
    # Spoken phrases that end the voice session.
    _EXIT_COMMANDS = frozenset(
        {
            "goodbye",
            "exit",
            "quit",
            "stop",
            "goodbye timmy",
            "stop listening",
        }
    )

    def _log_banner(self) -> None:
        """Log the startup banner with STT/TTS/LLM configuration."""
        tts_label = (
            "macOS say"
            if self.config.use_say_fallback
@@ -393,52 +436,50 @@ class VoiceLoop:
            " Press Ctrl-C to exit.\n" + "=" * 60
        )

    def _is_hallucination(self, text: str) -> bool:
        """Return True if *text* is a known Whisper hallucination."""
        return not text or text.lower() in self._WHISPER_HALLUCINATIONS

    def _is_exit_command(self, text: str) -> bool:
        """Return True if the user asked to stop the voice session."""
        return text.lower().strip().rstrip(".!") in self._EXIT_COMMANDS

    def _process_turn(self, text: str) -> None:
        """Handle a single listen-think-speak turn after transcription."""
        sys.stdout.write(f"\n 👤 You: {text}\n")
        sys.stdout.flush()

        response = self._think(text)
        sys.stdout.write(f" 🤖 Timmy: {response}\n")
        sys.stdout.flush()

        self._speak(response)

    def run(self) -> None:
        """Run the voice loop. Blocks until Ctrl-C."""
        self._ensure_piper()
        _suppress_mcp_noise()
        _install_quiet_asyncgen_hooks()
        self._log_banner()

        self._running = True

        try:
            while self._running:
                # 1. LISTEN — record until silence
                audio = self._record_utterance()
                if audio is None:
                    continue

                # 2. TRANSCRIBE — Whisper STT
                text = self._transcribe(audio)
                if not text or text.lower() in (
                    "you",
                    "thanks.",
                    "thank you.",
                    "bye.",
                    "",
                    "thanks for watching!",
                    "thank you for watching!",
                ):
                    # Whisper hallucinations on silence/noise
                if self._is_hallucination(text):
                    logger.debug("Ignoring likely Whisper hallucination: '%s'", text)
                    continue

                sys.stdout.write(f"\n 👤 You: {text}\n")
                sys.stdout.flush()

                # Exit commands
                if text.lower().strip().rstrip(".!") in (
                    "goodbye",
                    "exit",
                    "quit",
                    "stop",
                    "goodbye timmy",
                    "stop listening",
                ):
                if self._is_exit_command(text):
                    logger.info("👋 Goodbye!")
                    break

                # 3. THINK — send to Timmy
                response = self._think(text)
                sys.stdout.write(f" 🤖 Timmy: {response}\n")
                sys.stdout.flush()

                # 4. SPEAK — TTS output
                self._speak(response)
                self._process_turn(text)

        except KeyboardInterrupt:
            logger.info("👋 Voice loop stopped.")

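The normalisation in `_is_exit_command` (lowercase, strip surrounding whitespace, then strip trailing `.` and `!`) is what lets capitalised, punctuated Whisper transcripts match the plain entries in `_EXIT_COMMANDS`. The same check works outside the class:

```python
EXIT_COMMANDS = frozenset(
    {"goodbye", "exit", "quit", "stop", "goodbye timmy", "stop listening"}
)


def is_exit_command(text: str) -> bool:
    """Match a transcript against the exit set, ignoring case and trailing punctuation."""
    return text.lower().strip().rstrip(".!") in EXIT_COMMANDS


# Whisper typically capitalises and punctuates transcripts:
assert is_exit_command("Goodbye!")
assert is_exit_command(" stop listening. ")
assert not is_exit_command("stop the music")
```

The last case shows why the match is against whole normalised phrases rather than substrings: "stop the music" contains "stop" but should not end the session.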
src/timmy/workshop_state.py (new file, 273 lines)
@@ -0,0 +1,273 @@
"""Workshop presence heartbeat — periodic writer for ``~/.timmy/presence.json``.

Maintains Timmy's observable presence state for the Workshop 3D renderer.
Writes the presence file every 30 seconds (or on cognitive state change),
skipping writes when state is unchanged.

See ADR-023 for the schema contract and issue #360 for the full v1 schema.
"""

import asyncio
import hashlib
import json
import logging
import time
from collections.abc import Awaitable, Callable
from datetime import UTC, datetime
from pathlib import Path

logger = logging.getLogger(__name__)

PRESENCE_FILE = Path.home() / ".timmy" / "presence.json"
HEARTBEAT_INTERVAL = 30  # seconds

# Cognitive mood → presence mood mapping (issue #360 enum values)
_MOOD_MAP: dict[str, str] = {
    "curious": "contemplative",
    "settled": "calm",
    "hesitant": "uncertain",
    "energized": "excited",
}

# Activity mapping from cognitive engagement
_ACTIVITY_MAP: dict[str, str] = {
    "idle": "idle",
    "surface": "thinking",
    "deep": "thinking",
}

# Module-level energy tracker — decays over time, resets on interaction
_energy_state: dict[str, float] = {"value": 0.8, "last_interaction": time.monotonic()}

# Startup timestamp for uptime calculation
_start_time = time.monotonic()

# Energy decay: 0.01 per minute without interaction (per issue #360)
_ENERGY_DECAY_PER_SECOND = 0.01 / 60.0
_ENERGY_MIN = 0.1


def _time_of_day(hour: int) -> str:
    """Map hour (0-23) to a time-of-day label."""
    if 5 <= hour < 12:
        return "morning"
    if 12 <= hour < 17:
        return "afternoon"
    if 17 <= hour < 21:
        return "evening"
    if 21 <= hour or hour < 2:
        return "night"
    return "deep-night"


def reset_energy() -> None:
    """Reset energy to full (called on interaction)."""
    _energy_state["value"] = 0.8
    _energy_state["last_interaction"] = time.monotonic()


def _current_energy() -> float:
    """Compute current energy with time-based decay."""
    elapsed = time.monotonic() - _energy_state["last_interaction"]
    decayed = _energy_state["value"] - (elapsed * _ENERGY_DECAY_PER_SECOND)
    return max(_ENERGY_MIN, min(1.0, decayed))


def _pip_snapshot(mood: str, confidence: float) -> dict:
    """Tick Pip and return his current snapshot dict.

    Feeds Timmy's mood and confidence into Pip's behavioral AI so the
    familiar reacts to Timmy's cognitive state.
    """
    from timmy.familiar import pip_familiar

    pip_familiar.on_mood_change(mood, confidence=confidence)
    pip_familiar.tick()
    return pip_familiar.snapshot().to_dict()


def _resolve_mood(state) -> str:
    """Map cognitive mood/engagement to a presence mood string."""
    if state.engagement == "idle" and state.mood == "settled":
        return "calm"
    return _MOOD_MAP.get(state.mood, "calm")


def _resolve_confidence(state) -> float:
    """Compute normalised confidence from cognitive tracker state."""
    if state._confidence_count > 0:
        raw = state._confidence_sum / state._confidence_count
    else:
        raw = 0.7
    return round(max(0.0, min(1.0, raw)), 2)


def _build_active_threads(state) -> list[dict]:
    """Convert active commitments into presence thread dicts."""
    return [
        {"type": "thinking", "ref": c[:80], "status": "active"}
        for c in state.active_commitments[:10]
    ]


def _build_environment() -> dict:
    """Return the environment section using local wall-clock time."""
    local_now = datetime.now()
    return {
        "time_of_day": _time_of_day(local_now.hour),
        "local_time": local_now.strftime("%-I:%M %p"),
        "day_of_week": local_now.strftime("%A"),
    }


def get_state_dict() -> dict:
    """Build presence state dict from current cognitive state.

    Returns a v1 presence schema dict suitable for JSON serialisation.
    Includes the full schema from issue #360: identity, mood, activity,
    attention, interaction, environment, and meta sections.
    """
    from timmy.cognitive_state import cognitive_tracker

    state = cognitive_tracker.get_state()
    now = datetime.now(UTC)

    mood = _resolve_mood(state)
    confidence = _resolve_confidence(state)
    activity = _ACTIVITY_MAP.get(state.engagement, "idle")

    return {
        "version": 1,
        "liveness": now.strftime("%Y-%m-%dT%H:%M:%SZ"),
        "current_focus": state.focus_topic or "",
        "active_threads": _build_active_threads(state),
        "recent_events": [],
        "concerns": [],
        "mood": mood,
        "confidence": confidence,
        "energy": round(_current_energy(), 2),
        "identity": {
            "name": "Timmy",
            "title": "The Workshop Wizard",
            "uptime_seconds": int(time.monotonic() - _start_time),
        },
        "activity": {
            "current": activity,
            "detail": state.focus_topic or "",
        },
        "interaction": {
            "visitor_present": False,
            "conversation_turns": state.conversation_depth,
        },
        "environment": _build_environment(),
        "familiar": _pip_snapshot(mood, confidence),
        "meta": {
            "schema_version": 1,
            "updated_at": now.strftime("%Y-%m-%dT%H:%M:%SZ"),
            "writer": "timmy-loop",
        },
    }


def write_state(state_dict: dict | None = None, path: Path | None = None) -> None:
    """Write presence state to ``~/.timmy/presence.json``.

    Gracefully degrades if the file cannot be written.
    """
    if state_dict is None:
        state_dict = get_state_dict()
    target = path or PRESENCE_FILE
    try:
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(json.dumps(state_dict, indent=2) + "\n")
    except OSError as exc:
        logger.warning("Failed to write presence state: %s", exc)


def _state_hash(state_dict: dict) -> str:
    """Compute hash of state dict, ignoring volatile timestamps."""
    stable = {k: v for k, v in state_dict.items() if k not in ("liveness", "meta")}
    return hashlib.md5(json.dumps(stable, sort_keys=True).encode()).hexdigest()


class WorkshopHeartbeat:
    """Async background task that keeps ``presence.json`` fresh.

    - Writes every ``interval`` seconds (default 30).
    - Reacts to cognitive state changes via sensory bus.
    - Skips write if state hasn't changed (hash comparison).
    """

    def __init__(
        self,
        interval: int = HEARTBEAT_INTERVAL,
        path: Path | None = None,
        on_change: Callable[[dict], Awaitable[None]] | None = None,
    ) -> None:
        self._interval = interval
        self._path = path or PRESENCE_FILE
        self._last_hash: str | None = None
        self._task: asyncio.Task | None = None
        self._trigger = asyncio.Event()
        self._on_change = on_change

    async def start(self) -> None:
        """Start the heartbeat background loop."""
        self._subscribe_to_events()
        self._task = asyncio.create_task(self._run())

    async def stop(self) -> None:
        """Cancel the heartbeat task gracefully."""
        if self._task:
            self._task.cancel()
            try:
                await self._task
            except asyncio.CancelledError:
                pass
            self._task = None

    def notify(self) -> None:
        """Signal an immediate state write (e.g. on cognitive state change)."""
        self._trigger.set()

    async def _run(self) -> None:
        """Main loop: write state on interval or trigger."""
        await asyncio.sleep(1)  # Initial stagger
        while True:
            try:
                # Wait for interval OR early trigger
                try:
                    await asyncio.wait_for(self._trigger.wait(), timeout=self._interval)
                    self._trigger.clear()
                except TimeoutError:
                    pass  # Normal periodic tick

                await self._write_if_changed()
            except asyncio.CancelledError:
                raise
            except Exception as exc:
                logger.error("Workshop heartbeat error: %s", exc)

    async def _write_if_changed(self) -> None:
        """Build state, compare hash, write only if changed."""
        state_dict = get_state_dict()
        current_hash = _state_hash(state_dict)
        if current_hash == self._last_hash:
            return
        self._last_hash = current_hash
        write_state(state_dict, self._path)
        if self._on_change:
            try:
                await self._on_change(state_dict)
            except Exception as exc:
                logger.warning("on_change callback failed: %s", exc)

    def _subscribe_to_events(self) -> None:
        """Subscribe to cognitive state change events on the sensory bus."""
        try:
            from timmy.event_bus import get_sensory_bus

            bus = get_sensory_bus()
            bus.subscribe("cognitive_state_changed", lambda _: self.notify())
        except Exception as exc:
            logger.debug("Heartbeat event subscription skipped: %s", exc)

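`_state_hash` is what makes the heartbeat idle-friendly: the volatile `liveness` and `meta` keys are dropped before hashing, so two snapshots that differ only in timestamps hash identically and `_write_if_changed` skips the disk write. A standalone copy of that logic:

```python
import hashlib
import json


def state_hash(state_dict: dict) -> str:
    """Hash the state, ignoring the volatile liveness/meta fields."""
    stable = {k: v for k, v in state_dict.items() if k not in ("liveness", "meta")}
    return hashlib.md5(json.dumps(stable, sort_keys=True).encode()).hexdigest()


a = {"mood": "calm", "liveness": "2025-01-01T00:00:00Z"}
b = {"mood": "calm", "liveness": "2025-01-01T00:00:30Z"}  # only the timestamp moved
c = {"mood": "excited", "liveness": "2025-01-01T00:00:30Z"}

assert state_hash(a) == state_hash(b)  # unchanged — write skipped
assert state_hash(a) != state_hash(c)  # mood changed — write happens
```

`sort_keys=True` matters here: without it, dicts built in different insertion orders could serialise differently and defeat the comparison.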
@@ -75,6 +75,8 @@ def create_timmy_serve_app() -> FastAPI:
    @asynccontextmanager
    async def lifespan(app: FastAPI):
        logger.info("Timmy Serve starting")
        app.state.timmy = create_timmy()
        logger.info("Timmy agent cached in app state")
        yield
        logger.info("Timmy Serve shutting down")

@@ -101,7 +103,7 @@ def create_timmy_serve_app() -> FastAPI:
    async def serve_chat(request: Request, body: ChatRequest):
        """Process a chat request."""
        try:
            timmy = create_timmy()
            timmy = request.app.state.timmy
            result = timmy.run(body.message, stream=False)
            response_text = result.content if hasattr(result, "content") else str(result)

@@ -2493,3 +2493,57 @@
.db-cell { max-width: 300px; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; }
.db-cell:hover { white-space: normal; word-break: break-all; }
.db-truncated { font-size: 0.7rem; color: var(--amber); padding: 0.3rem 0; }

/* ── Tower ────────────────────────────────────────────────────────────── */
.tower-container { max-width: 1400px; margin: 0 auto; }
.tower-header { margin-bottom: 1rem; }
.tower-title { font-size: 1.6rem; font-weight: 700; color: var(--green); letter-spacing: 0.15em; }
.tower-subtitle { font-size: 0.85rem; color: var(--text-dim); }

.tower-conn-badge { font-size: 0.7rem; font-weight: 600; padding: 2px 8px; border-radius: 3px; letter-spacing: 0.08em; }
.tower-conn-live { color: var(--green); border: 1px solid var(--green); }
.tower-conn-offline { color: var(--red); border: 1px solid var(--red); }
.tower-conn-connecting { color: var(--amber); border: 1px solid var(--amber); }

.tower-phase-card { min-height: 300px; }
.tower-phase-thinking { border-left: 3px solid var(--purple); }
.tower-phase-predicting { border-left: 3px solid var(--orange); }
.tower-phase-advising { border-left: 3px solid var(--green); }
.tower-scroll { max-height: 50vh; overflow-y: auto; }
.tower-empty { text-align: center; color: var(--text-dim); padding: 16px; font-size: 0.85rem; }

.tower-stat-grid { display: grid; grid-template-columns: repeat(4, 1fr); gap: 0.5rem; text-align: center; }
.tower-stat-label { display: block; font-size: 0.65rem; color: var(--text-dim); letter-spacing: 0.1em; }
.tower-stat-value { display: block; font-size: 1.1rem; font-weight: 700; color: var(--text-bright); }

.tower-event { padding: 8px; margin-bottom: 6px; border-left: 3px solid var(--border); border-radius: 3px; background: var(--bg-card); }
.tower-etype-task_posted { border-left-color: var(--purple); }
.tower-etype-bid_submitted { border-left-color: var(--orange); }
.tower-etype-task_completed { border-left-color: var(--green); }
.tower-etype-task_failed { border-left-color: var(--red); }
.tower-etype-agent_joined { border-left-color: var(--purple); }
.tower-etype-tool_executed { border-left-color: var(--amber); }
.tower-ev-head { display: flex; justify-content: space-between; align-items: center; margin-bottom: 4px; }
.tower-ev-badge { font-size: 0.65rem; font-weight: 600; color: var(--text-bright); letter-spacing: 0.08em; }
.tower-ev-dots { font-size: 0.6rem; color: var(--amber); }
.tower-ev-desc { font-size: 0.8rem; color: var(--text); }
.tower-ev-time { font-size: 0.65rem; color: var(--text-dim); margin-top: 2px; }

.tower-pred { padding: 8px; margin-bottom: 6px; border-radius: 3px; background: var(--bg-card); border-left: 3px solid var(--orange); }
.tower-pred-done { border-left-color: var(--green); }
.tower-pred-pending { border-left-color: var(--amber); }
.tower-pred-head { display: flex; justify-content: space-between; align-items: center; }
.tower-pred-task { font-size: 0.75rem; font-weight: 600; color: var(--text-bright); font-family: monospace; }
.tower-pred-acc { font-size: 0.75rem; font-weight: 700; }
.tower-pred-detail { font-size: 0.75rem; color: var(--text-dim); margin-top: 4px; }

.tower-advisory { padding: 8px; margin-bottom: 6px; border-radius: 3px; background: var(--bg-card); border-left: 3px solid var(--border); }
.tower-adv-high { border-left-color: var(--red); }
.tower-adv-medium { border-left-color: var(--orange); }
.tower-adv-low { border-left-color: var(--green); }
.tower-adv-head { display: flex; justify-content: space-between; font-size: 0.65rem; margin-bottom: 4px; }
.tower-adv-cat { font-weight: 600; color: var(--text-dim); letter-spacing: 0.08em; }
.tower-adv-prio { font-weight: 700; color: var(--amber); }
.tower-adv-title { font-size: 0.85rem; font-weight: 600; color: var(--text-bright); }
.tower-adv-detail { font-size: 0.8rem; color: var(--text); margin-top: 2px; }
.tower-adv-action { font-size: 0.75rem; color: var(--green); margin-top: 4px; font-style: italic; }
50
static/world/controls.js
vendored
Normal file
@@ -0,0 +1,50 @@
/**
 * Camera + touch controls for the Workshop scene.
 *
 * Uses Three.js OrbitControls with constrained range — the visitor
 * can look around the room but not leave it.
 */

import { OrbitControls } from "https://cdn.jsdelivr.net/npm/three@0.160.0/examples/jsm/controls/OrbitControls.js";

/**
 * Set up camera controls.
 * @param {THREE.PerspectiveCamera} camera
 * @param {HTMLCanvasElement} domElement
 * @returns {OrbitControls}
 */
export function setupControls(camera, domElement) {
  const controls = new OrbitControls(camera, domElement);

  // Smooth damping
  controls.enableDamping = true;
  controls.dampingFactor = 0.08;

  // Limit zoom range
  controls.minDistance = 3;
  controls.maxDistance = 12;

  // Limit vertical angle (don't look below floor or straight up)
  controls.minPolarAngle = Math.PI * 0.2;
  controls.maxPolarAngle = Math.PI * 0.6;

  // Limit horizontal rotation range (stay facing the desk area)
  controls.minAzimuthAngle = -Math.PI * 0.4;
  controls.maxAzimuthAngle = Math.PI * 0.4;

  // Target: roughly the desk area
  controls.target.set(0, 1.2, 0);

  // Touch settings
  controls.touches = {
    ONE: 0, // THREE.TOUCH.ROTATE
    TWO: 2, // THREE.TOUCH.DOLLY_PAN (panning is disabled below, so two fingers dolly)
  };

  // Disable panning (visitor stays in place)
  controls.enablePan = false;

  controls.update();

  return controls;
}
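The angle limits above confine the camera to a wedge in front of the desk. Each frame, OrbitControls clamps the camera's spherical coordinates to those ranges; the arithmetic amounts to the following (a minimal Python sketch of the clamping, not the library's actual code — the function name and defaults mirror the values configured above):

```python
import math


def clamp_orbit(polar, azimuth,
                min_polar=math.pi * 0.2, max_polar=math.pi * 0.6,
                min_azimuth=-math.pi * 0.4, max_azimuth=math.pi * 0.4):
    """Clamp spherical camera angles into the configured viewing wedge."""
    polar = max(min_polar, min(max_polar, polar))
    azimuth = max(min_azimuth, min(max_azimuth, azimuth))
    return polar, azimuth
```

Trying to look straight down (polar angle π) or spin behind the desk simply pins the camera at the nearest allowed angle.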
150
static/world/familiar.js
Normal file
@@ -0,0 +1,150 @@
/**
 * Pip the Familiar — a small glowing orb that floats around the room.
 *
 * Emerald green core with a gold particle trail.
 * Wanders on a randomized path, occasionally pauses near Timmy.
 */

import * as THREE from "https://cdn.jsdelivr.net/npm/three@0.160.0/build/three.module.js";

const CORE_COLOR = 0x00b450;
const GLOW_COLOR = 0x00b450;
const TRAIL_COLOR = 0xdaa520;

/**
 * Create the familiar and return { group, update }.
 * Call update(dt) each frame.
 */
export function createFamiliar() {
  const group = new THREE.Group();

  // --- Core orb ---
  const coreGeo = new THREE.SphereGeometry(0.08, 12, 10);
  const coreMat = new THREE.MeshStandardMaterial({
    color: CORE_COLOR,
    emissive: GLOW_COLOR,
    emissiveIntensity: 1.5,
    roughness: 0.2,
  });
  const core = new THREE.Mesh(coreGeo, coreMat);
  group.add(core);

  // --- Glow (larger transparent sphere) ---
  const glowGeo = new THREE.SphereGeometry(0.15, 10, 8);
  const glowMat = new THREE.MeshBasicMaterial({
    color: GLOW_COLOR,
    transparent: true,
    opacity: 0.15,
  });
  const glow = new THREE.Mesh(glowGeo, glowMat);
  group.add(glow);

  // --- Point light from Pip ---
  const light = new THREE.PointLight(CORE_COLOR, 0.4, 4);
  group.add(light);

  // --- Trail particles (simple small spheres) ---
  const trailCount = 6;
  const trails = [];
  const trailGeo = new THREE.SphereGeometry(0.02, 4, 4);
  const trailMat = new THREE.MeshBasicMaterial({
    color: TRAIL_COLOR,
    transparent: true,
    opacity: 0.6,
  });
  for (let i = 0; i < trailCount; i++) {
    const t = new THREE.Mesh(trailGeo, trailMat.clone());
    t.visible = false;
    group.add(t);
    trails.push({ mesh: t, age: 0, maxAge: 0.3 + Math.random() * 0.3 });
  }

  // Starting position
  group.position.set(1.5, 1.8, -0.5);

  // Wandering state
  let elapsed = 0;
  let trailTimer = 0;
  let trailIndex = 0;

  // Waypoints for random wandering
  const waypoints = [
    new THREE.Vector3(1.5, 1.8, -0.5),
    new THREE.Vector3(-1.0, 2.0, 0.5),
    new THREE.Vector3(0.0, 1.5, -0.3), // near Timmy
    new THREE.Vector3(1.2, 2.2, 0.8),
    new THREE.Vector3(-0.5, 1.3, -0.2), // near desk
    new THREE.Vector3(0.3, 2.5, 0.3),
  ];
  let waypointIndex = 0;
  let target = waypoints[0].clone();
  let pauseTimer = 0;

  function pickNextTarget() {
    waypointIndex = (waypointIndex + 1) % waypoints.length;
    target.copy(waypoints[waypointIndex]);
    // Add randomness
    target.x += (Math.random() - 0.5) * 0.6;
    target.y += (Math.random() - 0.5) * 0.3;
    target.z += (Math.random() - 0.5) * 0.6;
  }

  function update(dt) {
    elapsed += dt;

    // Move toward target
    if (pauseTimer > 0) {
      pauseTimer -= dt;
    } else {
      const dir = target.clone().sub(group.position);
      const dist = dir.length();
      if (dist < 0.15) {
        pickNextTarget();
        // Occasionally pause
        if (Math.random() < 0.3) {
          pauseTimer = 1.0 + Math.random() * 2.0;
        }
      } else {
        dir.normalize();
        const speed = 0.4;
        group.position.add(dir.multiplyScalar(speed * dt));
      }
    }

    // Bob up and down
    group.position.y += Math.sin(elapsed * 3.0) * 0.002;

    // Pulse glow
    const pulse = 0.12 + Math.sin(elapsed * 4.0) * 0.05;
    glowMat.opacity = pulse;
    coreMat.emissiveIntensity = 1.2 + Math.sin(elapsed * 3.5) * 0.4;

    // Trail particles
    trailTimer += dt;
    if (trailTimer > 0.1) {
      trailTimer = 0;
      const t = trails[trailIndex];
      t.mesh.position.copy(group.position);
      t.mesh.position.x += (Math.random() - 0.5) * 0.1;
      t.mesh.position.y += (Math.random() - 0.5) * 0.1;
      t.mesh.visible = true;
      t.age = 0;
      // Convert to local space
      group.worldToLocal(t.mesh.position);
      trailIndex = (trailIndex + 1) % trailCount;
    }

    // Age and fade trail particles
    for (const t of trails) {
      if (!t.mesh.visible) continue;
      t.age += dt;
      if (t.age >= t.maxAge) {
        t.mesh.visible = false;
      } else {
        t.mesh.material.opacity = 0.6 * (1.0 - t.age / t.maxAge);
      }
    }
  }

  return { group, update };
}
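The trail fade above is a linear ramp: each particle spawns at the material's 0.6 opacity and reaches zero at its randomized `maxAge`. The same arithmetic, sketched in Python for clarity (function name is illustrative, constants taken from the JS above):

```python
def trail_opacity(age: float, max_age: float, spawn_opacity: float = 0.6) -> float:
    """Linear fade: full spawn opacity at age 0, zero at or beyond max_age."""
    if age >= max_age:
        return 0.0  # the JS hides the mesh at this point
    return spawn_opacity * (1.0 - age / max_age)
```

Because `maxAge` is randomized per particle (0.3 to 0.6 s), particles emitted at the same rate fade out at staggered times, which keeps the trail from pulsing in lockstep.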
119
static/world/index.html
Normal file
@@ -0,0 +1,119 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no">
  <title>Timmy's Workshop</title>
  <link rel="stylesheet" href="style.css">
</head>
<body>
  <div id="overlay">
    <div id="status">
      <div class="name">Timmy</div>
      <div class="mood" id="mood-text">focused</div>
    </div>
    <div id="connection-dot"></div>
    <div id="speech-area">
      <div class="bubble" id="speech-bubble"></div>
    </div>
  </div>

  <script type="importmap">
  {
    "imports": {
      "three": "https://cdn.jsdelivr.net/npm/three@0.160.0/build/three.module.js"
    }
  }
  </script>
  <script type="module">
    import * as THREE from "three";
    import { buildRoom } from "./scene.js";
    import { createWizard } from "./wizard.js";
    import { createFamiliar } from "./familiar.js";
    import { setupControls } from "./controls.js";
    import { StateReader } from "./state.js";

    // --- Renderer ---
    const renderer = new THREE.WebGLRenderer({ antialias: true });
    renderer.setPixelRatio(Math.min(window.devicePixelRatio, 2));
    renderer.setSize(window.innerWidth, window.innerHeight);
    renderer.shadowMap.enabled = true;
    renderer.shadowMap.type = THREE.PCFSoftShadowMap;
    renderer.toneMapping = THREE.ACESFilmicToneMapping;
    renderer.toneMappingExposure = 0.8;
    document.body.prepend(renderer.domElement);

    // --- Scene ---
    const scene = new THREE.Scene();
    scene.background = new THREE.Color(0x0a0a14);
    scene.fog = new THREE.Fog(0x0a0a14, 5, 12);

    // --- Camera (visitor at the door) ---
    const camera = new THREE.PerspectiveCamera(
      55, window.innerWidth / window.innerHeight, 0.1, 50
    );
    camera.position.set(0, 2.0, 4.5);

    // --- Build scene elements ---
    const { crystalBall, crystalLight, fireLight, candleLights } = buildRoom(scene);
    const wizard = createWizard();
    scene.add(wizard.group);
    const familiar = createFamiliar();
    scene.add(familiar.group);

    // --- Controls ---
    const controls = setupControls(camera, renderer.domElement);

    // --- State ---
    const stateReader = new StateReader();
    const moodEl = document.getElementById("mood-text");
    stateReader.onChange((state) => {
      if (moodEl) {
        moodEl.textContent = state.timmyState.mood;
      }
    });
    stateReader.connect();

    // --- Resize ---
    window.addEventListener("resize", () => {
      camera.aspect = window.innerWidth / window.innerHeight;
      camera.updateProjectionMatrix();
      renderer.setSize(window.innerWidth, window.innerHeight);
    });

    // --- Animation loop ---
    const clock = new THREE.Clock();

    function animate() {
      requestAnimationFrame(animate);
      const dt = clock.getDelta();

      // Update scene elements
      wizard.update(dt);
      familiar.update(dt);
      controls.update();

      // Crystal ball subtle rotation + pulsing glow
      crystalBall.rotation.y += dt * 0.3;
      const pulse = 0.3 + Math.sin(Date.now() * 0.002) * 0.15;
      crystalLight.intensity = pulse;
      crystalBall.material.emissiveIntensity = pulse * 0.5;

      // Fireplace flicker
      fireLight.intensity = 1.2 + Math.sin(Date.now() * 0.005) * 0.15
        + Math.sin(Date.now() * 0.013) * 0.1;

      // Candle flicker — each offset slightly for variety
      const now = Date.now();
      for (let i = 0; i < candleLights.length; i++) {
        candleLights[i].intensity = 0.4
          + Math.sin(now * 0.007 + i * 2.1) * 0.1
          + Math.sin(now * 0.019 + i * 1.3) * 0.05;
      }

      renderer.render(scene, camera);
    }
    animate();
  </script>
</body>
</html>
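The candle flicker in the loop above sums two sinusoids at unrelated frequencies, phase-offset by the candle index so the three flames never sync up. The same formula in Python (a sketch mirroring the JS constants; the function name is illustrative):

```python
import math


def candle_intensity(now_ms: float, i: int) -> float:
    """Two-sine flicker around a base of 0.4, phase-offset per candle index."""
    return (0.4
            + math.sin(now_ms * 0.007 + i * 2.1) * 0.1
            + math.sin(now_ms * 0.019 + i * 1.3) * 0.05)
```

The amplitudes bound the result to [0.25, 0.55], so a candle can dim noticeably but never goes dark or blows out.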
247
static/world/scene.js
Normal file
@@ -0,0 +1,247 @@
/**
 * Workshop scene — room geometry, lighting, materials.
 *
 * A dark stone room with a wooden desk, crystal ball, fireplace glow,
 * and faint emerald ambient light. This is Timmy's Workshop.
 */

import * as THREE from "https://cdn.jsdelivr.net/npm/three@0.160.0/build/three.module.js";

const WALL_COLOR = 0x2a2a3e;
const FLOOR_COLOR = 0x1a1a1a;
const DESK_COLOR = 0x3e2723;
const DESK_TOP_COLOR = 0x4e342e;
const BOOK_COLORS = [0x8b1a1a, 0x1a3c6e, 0x2e5e3e, 0x6e4b1a, 0x4a1a5e, 0x5e1a2e];
const CANDLE_WAX = 0xe8d8b8;
const CANDLE_FLAME = 0xffaa33;

/**
 * Build the room and add it to the given scene.
 * Returns { crystalBall, crystalLight, fireLight, candleLights } for animation.
 */
export function buildRoom(scene) {
  // --- Floor ---
  const floorGeo = new THREE.PlaneGeometry(8, 8);
  const floorMat = new THREE.MeshStandardMaterial({
    color: FLOOR_COLOR,
    roughness: 0.9,
  });
  const floor = new THREE.Mesh(floorGeo, floorMat);
  floor.rotation.x = -Math.PI / 2;
  floor.receiveShadow = true;
  scene.add(floor);

  // --- Back wall ---
  const wallGeo = new THREE.PlaneGeometry(8, 4);
  const wallMat = new THREE.MeshStandardMaterial({
    color: WALL_COLOR,
    roughness: 0.95,
    metalness: 0.05,
  });
  const backWall = new THREE.Mesh(wallGeo, wallMat);
  backWall.position.set(0, 2, -4);
  scene.add(backWall);

  // --- Side walls ---
  const leftWall = new THREE.Mesh(wallGeo, wallMat);
  leftWall.position.set(-4, 2, 0);
  leftWall.rotation.y = Math.PI / 2;
  scene.add(leftWall);

  const rightWall = new THREE.Mesh(wallGeo, wallMat);
  rightWall.position.set(4, 2, 0);
  rightWall.rotation.y = -Math.PI / 2;
  scene.add(rightWall);

  // --- Desk ---
  // Table top
  const topGeo = new THREE.BoxGeometry(1.8, 0.08, 0.9);
  const topMat = new THREE.MeshStandardMaterial({
    color: DESK_TOP_COLOR,
    roughness: 0.6,
  });
  const tableTop = new THREE.Mesh(topGeo, topMat);
  tableTop.position.set(0, 0.85, -0.3);
  tableTop.castShadow = true;
  scene.add(tableTop);

  // Legs
  const legGeo = new THREE.BoxGeometry(0.08, 0.85, 0.08);
  const legMat = new THREE.MeshStandardMaterial({
    color: DESK_COLOR,
    roughness: 0.7,
  });
  const offsets = [
    [-0.8, -0.35],
    [0.8, -0.35],
    [-0.8, 0.05],
    [0.8, 0.05],
  ];
  for (const [x, z] of offsets) {
    const leg = new THREE.Mesh(legGeo, legMat);
    leg.position.set(x, 0.425, z - 0.3);
    scene.add(leg);
  }

  // --- Scrolls / papers on desk (simple flat boxes) ---
  const paperGeo = new THREE.BoxGeometry(0.3, 0.005, 0.2);
  const paperMat = new THREE.MeshStandardMaterial({
    color: 0xd4c5a0,
    roughness: 0.9,
  });
  const paper1 = new THREE.Mesh(paperGeo, paperMat);
  paper1.position.set(-0.4, 0.895, -0.35);
  paper1.rotation.y = 0.15;
  scene.add(paper1);

  const paper2 = new THREE.Mesh(paperGeo, paperMat);
  paper2.position.set(0.5, 0.895, -0.2);
  paper2.rotation.y = -0.3;
  scene.add(paper2);

  // --- Crystal ball ---
  const ballGeo = new THREE.SphereGeometry(0.12, 16, 14);
  const ballMat = new THREE.MeshPhysicalMaterial({
    color: 0x88ccff,
    roughness: 0.05,
    metalness: 0.0,
    transmission: 0.9,
    thickness: 0.3,
    transparent: true,
    opacity: 0.7,
    emissive: new THREE.Color(0x88ccff),
    emissiveIntensity: 0.3,
  });
  const crystalBall = new THREE.Mesh(ballGeo, ballMat);
  crystalBall.position.set(0.15, 1.01, -0.3);
  scene.add(crystalBall);

  // Crystal ball base
  const baseGeo = new THREE.CylinderGeometry(0.08, 0.1, 0.04, 8);
  const baseMat = new THREE.MeshStandardMaterial({
    color: 0x444444,
    roughness: 0.3,
    metalness: 0.5,
  });
  const base = new THREE.Mesh(baseGeo, baseMat);
  base.position.set(0.15, 0.9, -0.3);
  scene.add(base);

  // Crystal ball inner glow (pulsing)
  const crystalLight = new THREE.PointLight(0x88ccff, 0.3, 2);
  crystalLight.position.copy(crystalBall.position);
  scene.add(crystalLight);

  // --- Bookshelf (right wall) ---
  const shelfMat = new THREE.MeshStandardMaterial({
    color: DESK_COLOR,
    roughness: 0.7,
  });

  // Bookshelf frame — tall backing panel
  const shelfBack = new THREE.Mesh(
    new THREE.BoxGeometry(1.4, 2.2, 0.06),
    shelfMat
  );
  shelfBack.position.set(3.0, 1.1, -2.0);
  scene.add(shelfBack);

  // Shelves (4 horizontal planks)
  const shelfGeo = new THREE.BoxGeometry(1.4, 0.04, 0.35);
  const shelfYs = [0.2, 0.7, 1.2, 1.7];
  for (const sy of shelfYs) {
    const shelf = new THREE.Mesh(shelfGeo, shelfMat);
    shelf.position.set(3.0, sy, -1.85);
    scene.add(shelf);
  }

  // Side panels
  const sidePanelGeo = new THREE.BoxGeometry(0.04, 2.2, 0.35);
  for (const sx of [-0.68, 0.68]) {
    const side = new THREE.Mesh(sidePanelGeo, shelfMat);
    side.position.set(3.0 + sx, 1.1, -1.85);
    scene.add(side);
  }

  // Books on shelves — colored boxes
  const bookGeo = new THREE.BoxGeometry(0.08, 0.28, 0.22);
  const booksPerShelf = [5, 4, 5, 3];
  for (let s = 0; s < shelfYs.length; s++) {
    const count = booksPerShelf[s];
    const startX = 3.0 - (count * 0.12) / 2;
    for (let b = 0; b < count; b++) {
      const bookMat = new THREE.MeshStandardMaterial({
        color: BOOK_COLORS[(s * 3 + b) % BOOK_COLORS.length],
        roughness: 0.8,
      });
      const book = new THREE.Mesh(bookGeo, bookMat);
      book.position.set(
        startX + b * 0.14,
        shelfYs[s] + 0.16,
        -1.85
      );
      // Slight random tilt for character
      book.rotation.z = (Math.random() - 0.5) * 0.08;
      scene.add(book);
    }
  }

  // --- Candles ---
  const candleLights = [];
  const candlePositions = [
    [-0.6, 0.89, -0.15], // desk left
    [0.7, 0.89, -0.4], // desk right
    [3.0, 1.78, -1.85], // bookshelf top
  ];
  const candleGeo = new THREE.CylinderGeometry(0.02, 0.025, 0.12, 6);
  const candleMat = new THREE.MeshStandardMaterial({
    color: CANDLE_WAX,
    roughness: 0.9,
  });

  for (const [cx, cy, cz] of candlePositions) {
    // Wax cylinder
    const candle = new THREE.Mesh(candleGeo, candleMat);
    candle.position.set(cx, cy + 0.06, cz);
    scene.add(candle);

    // Flame — tiny emissive sphere
    const flameGeo = new THREE.SphereGeometry(0.015, 6, 4);
    const flameMat = new THREE.MeshBasicMaterial({ color: CANDLE_FLAME });
    const flame = new THREE.Mesh(flameGeo, flameMat);
    flame.position.set(cx, cy + 0.13, cz);
    scene.add(flame);

    // Warm point light
    const candleLight = new THREE.PointLight(0xff8833, 0.4, 3);
    candleLight.position.set(cx, cy + 0.15, cz);
    scene.add(candleLight);
    candleLights.push(candleLight);
  }

  // --- Lighting ---

  // Fireplace glow (warm, off-screen stage left)
  const fireLight = new THREE.PointLight(0xff6622, 1.2, 8);
  fireLight.position.set(-3.5, 1.2, -1.0);
  fireLight.castShadow = true;
  fireLight.shadow.mapSize.width = 512;
  fireLight.shadow.mapSize.height = 512;
  scene.add(fireLight);

  // Secondary warm fill
  const fillLight = new THREE.PointLight(0xff8844, 0.3, 6);
  fillLight.position.set(-2.0, 0.5, 1.0);
  scene.add(fillLight);

  // Emerald ambient
  const ambient = new THREE.AmbientLight(0x00b450, 0.15);
  scene.add(ambient);

  // Faint overhead to keep things readable
  const overhead = new THREE.PointLight(0x887766, 0.2, 8);
  overhead.position.set(0, 3.5, 0);
  scene.add(overhead);

  return { crystalBall, crystalLight, fireLight, candleLights };
}
95
static/world/state.js
Normal file
@@ -0,0 +1,95 @@
/**
 * State reader — hardcoded JSON for Phase 2, WebSocket in Phase 3.
 *
 * Provides Timmy's current state to the scene. In Phase 2 this is a
 * static default; the WebSocket path is stubbed for future use.
 */

const DEFAULTS = {
  timmyState: {
    mood: "focused",
    activity: "Pondering the arcane arts",
    energy: 0.6,
    confidence: 0.7,
  },
  activeThreads: [],
  recentEvents: [],
  concerns: [],
  visitorPresent: false,
  updatedAt: new Date().toISOString(),
  version: 1,
};

export class StateReader {
  constructor() {
    // Copy the nested object too, so live updates never mutate DEFAULTS.
    this.state = { ...DEFAULTS, timmyState: { ...DEFAULTS.timmyState } };
    this.listeners = [];
    this._ws = null;
  }

  /** Subscribe to state changes. */
  onChange(fn) {
    this.listeners.push(fn);
  }

  /** Notify all listeners. */
  _notify() {
    for (const fn of this.listeners) {
      try {
        fn(this.state);
      } catch (e) {
        console.warn("State listener error:", e);
      }
    }
  }

  /** Try to connect to the world WebSocket for live updates. */
  connect() {
    const proto = location.protocol === "https:" ? "wss:" : "ws:";
    const url = `${proto}//${location.host}/api/world/ws`;
    try {
      this._ws = new WebSocket(url);
      this._ws.onopen = () => {
        const dot = document.getElementById("connection-dot");
        if (dot) dot.classList.add("connected");
      };
      this._ws.onclose = () => {
        const dot = document.getElementById("connection-dot");
        if (dot) dot.classList.remove("connected");
      };
      this._ws.onmessage = (ev) => {
        try {
          const msg = JSON.parse(ev.data);
          if (msg.type === "world_state" || msg.type === "timmy_state") {
            if (msg.timmyState) this.state.timmyState = msg.timmyState;
            if (msg.mood) {
              this.state.timmyState.mood = msg.mood;
              this.state.timmyState.activity = msg.activity || "";
              this.state.timmyState.energy = msg.energy ?? 0.5;
            }
            this._notify();
          }
        } catch (e) {
          /* ignore parse errors */
        }
      };
    } catch (e) {
      console.warn("WebSocket unavailable — using static state");
    }
  }

  /** Current mood string. */
  get mood() {
    return this.state.timmyState.mood;
  }

  /** Current activity string. */
  get activity() {
    return this.state.timmyState.activity;
  }

  /** Energy level 0-1. */
  get energy() {
    return this.state.timmyState.energy;
  }
}
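The listener registry above is a plain observer pattern: subscribers sit in a list and are notified in registration order, with per-listener exception isolation so one misbehaving callback cannot stop the rest from seeing an update. The same shape in Python (a minimal sketch; class and method names mirror the JS but are not from the repo's Python code):

```python
class ObservableState:
    """Observer registry with per-listener exception isolation."""

    def __init__(self, initial_state):
        self.state = dict(initial_state)
        self._listeners = []

    def on_change(self, fn):
        """Subscribe fn to state changes."""
        self._listeners.append(fn)

    def notify(self):
        """Call every listener; a failing listener must not block the rest."""
        for fn in self._listeners:
            try:
                fn(self.state)
            except Exception as exc:
                print(f"State listener error: {exc}")
```

Swallowing listener errors is deliberate here: the state feed keeps flowing even if one UI binding breaks.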
89
static/world/style.css
Normal file
@@ -0,0 +1,89 @@
/* Workshop 3D scene overlay styles */

* {
  margin: 0;
  padding: 0;
  box-sizing: border-box;
}

body {
  overflow: hidden;
  background: #0a0a14;
  font-family: "Courier New", monospace;
  color: #e0e0e0;
  touch-action: none;
}

canvas {
  display: block;
}

#overlay {
  position: fixed;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
  pointer-events: none;
  z-index: 10;
}

#status {
  position: absolute;
  top: 16px;
  left: 16px;
  font-size: 14px;
  opacity: 0.8;
}

#status .name {
  font-size: 18px;
  font-weight: bold;
  color: #daa520;
}

#status .mood {
  font-size: 13px;
  color: #aaa;
  margin-top: 4px;
}

#speech-area {
  position: absolute;
  bottom: 24px;
  left: 50%;
  transform: translateX(-50%);
  max-width: 480px;
  width: 90%;
  text-align: center;
  font-size: 15px;
  line-height: 1.5;
  color: #ccc;
  opacity: 0;
  transition: opacity 0.4s ease;
}

#speech-area.visible {
  opacity: 1;
}

#speech-area .bubble {
  background: rgba(10, 10, 20, 0.85);
  border: 1px solid rgba(218, 165, 32, 0.3);
  border-radius: 8px;
  padding: 12px 20px;
}

#connection-dot {
  position: absolute;
  top: 18px;
  right: 16px;
  width: 8px;
  height: 8px;
  border-radius: 50%;
  background: #555;
}

#connection-dot.connected {
  background: #00b450;
}
99
static/world/wizard.js
Normal file
@@ -0,0 +1,99 @@
/**
 * Timmy the Wizard — geometric figure built from primitives.
 *
 * Phase 1: cone body (robe), sphere head, cylinder arms.
 * Idle animation: gentle breathing (Y-scale oscillation), head tilt.
 */

import * as THREE from "https://cdn.jsdelivr.net/npm/three@0.160.0/build/three.module.js";

const ROBE_COLOR = 0x2d1b4e;
const TRIM_COLOR = 0xdaa520;

/**
 * Create the wizard group and return { group, update }.
 * Call update(dt) each frame for idle animation.
 */
export function createWizard() {
  const group = new THREE.Group();

  // --- Robe (cone) ---
  const robeGeo = new THREE.ConeGeometry(0.5, 1.6, 8);
  const robeMat = new THREE.MeshStandardMaterial({
    color: ROBE_COLOR,
    roughness: 0.8,
  });
  const robe = new THREE.Mesh(robeGeo, robeMat);
  robe.position.y = 0.8;
  group.add(robe);

  // --- Trim ring at robe bottom ---
  const trimGeo = new THREE.TorusGeometry(0.5, 0.03, 8, 24);
  const trimMat = new THREE.MeshStandardMaterial({
    color: TRIM_COLOR,
    roughness: 0.4,
    metalness: 0.3,
  });
  const trim = new THREE.Mesh(trimGeo, trimMat);
  trim.rotation.x = Math.PI / 2;
  trim.position.y = 0.02;
  group.add(trim);

  // --- Head (sphere) ---
  const headGeo = new THREE.SphereGeometry(0.22, 12, 10);
  const headMat = new THREE.MeshStandardMaterial({
    color: 0xd4a574,
    roughness: 0.7,
  });
  const head = new THREE.Mesh(headGeo, headMat);
  head.position.y = 1.72;
  group.add(head);

  // --- Hood (cone behind head) ---
  const hoodGeo = new THREE.ConeGeometry(0.35, 0.5, 8);
  const hoodMat = new THREE.MeshStandardMaterial({
    color: ROBE_COLOR,
    roughness: 0.8,
  });
  const hood = new THREE.Mesh(hoodGeo, hoodMat);
  hood.position.y = 1.85;
  hood.position.z = -0.08;
  group.add(hood);

  // --- Arms (cylinders) ---
  const armGeo = new THREE.CylinderGeometry(0.06, 0.08, 0.7, 6);
  const armMat = new THREE.MeshStandardMaterial({
    color: ROBE_COLOR,
    roughness: 0.8,
  });

  const leftArm = new THREE.Mesh(armGeo, armMat);
  leftArm.position.set(-0.45, 1.0, 0.15);
  leftArm.rotation.z = 0.3;
  leftArm.rotation.x = -0.4;
  group.add(leftArm);

  const rightArm = new THREE.Mesh(armGeo, armMat);
  rightArm.position.set(0.45, 1.0, 0.15);
  rightArm.rotation.z = -0.3;
  rightArm.rotation.x = -0.4;
  group.add(rightArm);

  // Position behind the desk
  group.position.set(0, 0, -0.8);

  // Animation state
  let elapsed = 0;

  function update(dt) {
    elapsed += dt;
    // Breathing: subtle Y-scale oscillation
    const breath = 1.0 + Math.sin(elapsed * 1.5) * 0.015;
    robe.scale.y = breath;
    // Head tilt
    head.rotation.z = Math.sin(elapsed * 0.7) * 0.05;
    head.rotation.x = Math.sin(elapsed * 0.5) * 0.03;
  }

  return { group, update };
}
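The idle animation above layers three slow sinusoids (breathing scale plus head tilt on two axes) at different frequencies, so the combined motion takes a long time to visibly repeat. The same bounded oscillators in Python (a sketch with the constants from the JS above; the function name is illustrative):

```python
import math


def idle_pose(elapsed: float) -> dict:
    """Breathing scale and head-tilt angles as functions of elapsed seconds."""
    return {
        "breath_scale": 1.0 + math.sin(elapsed * 1.5) * 0.015,  # +/- 1.5% Y-scale
        "head_tilt_z": math.sin(elapsed * 0.7) * 0.05,          # radians
        "head_tilt_x": math.sin(elapsed * 0.5) * 0.03,          # radians
    }
```

Because each channel is a pure function of `elapsed`, the pose never drifts: every value stays inside its fixed band regardless of how long the scene runs.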
@@ -18,7 +18,6 @@ except ImportError:
# agno is a core dependency (always installed) — do NOT stub it, or its
# internal import chains break under xdist parallel workers.
for _mod in [
    "airllm",
    "mcp",
    "mcp.client",
    "mcp.client.stdio",
100
tests/dashboard/middleware/test_csrf_no_side_effects.py
Normal file
@@ -0,0 +1,100 @@
"""Tests that CSRF rejection does NOT execute the endpoint handler.

Regression test for #626: the middleware was calling call_next() before
checking @csrf_exempt, causing side effects even on CSRF-rejected requests.
"""

import pytest
from fastapi import FastAPI
from fastapi.testclient import TestClient

from dashboard.middleware.csrf import CSRFMiddleware, csrf_exempt


class TestCSRFNoSideEffects:
    """Verify endpoints are NOT executed when CSRF validation fails."""

    @pytest.fixture(autouse=True)
    def enable_csrf(self):
        """Re-enable CSRF for these tests."""
        from config import settings

        original = settings.timmy_disable_csrf
        settings.timmy_disable_csrf = False
        yield
        settings.timmy_disable_csrf = original

    def test_protected_endpoint_not_executed_on_csrf_failure(self):
        """A protected endpoint must NOT run when CSRF token is missing.

        Before the fix, the middleware called call_next() to resolve the
        endpoint, executing its side effects before returning 403.
        """
        app = FastAPI()
        app.add_middleware(CSRFMiddleware)

        side_effect_log = []

        @app.post("/transfer")
        def transfer_money():
            side_effect_log.append("money_transferred")
            return {"message": "transferred"}

        client = TestClient(app)
        response = client.post("/transfer")

        assert response.status_code == 403
        assert side_effect_log == [], (
            "Endpoint was executed despite CSRF failure — side effects occurred!"
        )

    def test_csrf_exempt_endpoint_still_executes(self):
        """A @csrf_exempt endpoint should still execute without a CSRF token."""
        app = FastAPI()
        app.add_middleware(CSRFMiddleware)

        side_effect_log = []

        @app.post("/webhook-handler")
        @csrf_exempt
        def webhook_handler():
            side_effect_log.append("webhook_processed")
            return {"message": "processed"}

        client = TestClient(app)
        response = client.post("/webhook-handler")

        assert response.status_code == 200
        assert side_effect_log == ["webhook_processed"]

    def test_exempt_and_protected_no_cross_contamination(self):
        """Mixed exempt/protected: only exempt endpoints execute without tokens."""
        app = FastAPI()
        app.add_middleware(CSRFMiddleware)

        execution_log = []

        @app.post("/safe-webhook")
        @csrf_exempt
        def safe_webhook():
            execution_log.append("safe")
            return {"message": "safe"}

        @app.post("/dangerous-action")
        def dangerous_action():
            execution_log.append("dangerous")
            return {"message": "danger"}

        client = TestClient(app)

        # Exempt endpoint runs
        resp1 = client.post("/safe-webhook")
        assert resp1.status_code == 200

        # Protected endpoint blocked WITHOUT executing
        resp2 = client.post("/dangerous-action")
        assert resp2.status_code == 403

        assert execution_log == ["safe"], (
            f"Expected only 'safe' execution, got: {execution_log}"
        )
|
||||
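The regression these tests pin down is purely a matter of dispatch order in the middleware. A minimal sketch of the fixed ordering — all names here are illustrative, not the project's actual `dashboard.middleware.csrf` API:

```python
# Sketch of the dispatch order test_csrf_no_side_effects.py verifies.
# Hypothetical helper signatures; the real middleware is Starlette-based.
UNSAFE_METHODS = {"POST", "PUT", "PATCH", "DELETE"}

def dispatch(request, call_next, is_exempt, has_valid_token):
    """Decide on CSRF BEFORE invoking the endpoint.

    The #626 bug was calling call_next() first (to inspect the resolved
    endpoint), which ran the handler's side effects even for requests
    that were subsequently rejected with 403.
    """
    if request["method"] in UNSAFE_METHODS:
        if not is_exempt(request) and not has_valid_token(request):
            return {"status": 403}  # handler never runs
    return call_next(request)       # safe method, exempt, or valid token
```

The key property the tests assert is that the 403 branch returns without ever touching `call_next`.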
@@ -10,12 +10,10 @@ Categories:
M3xx iOS keyboard & zoom prevention
M4xx HTMX robustness (double-submit, sync)
M5xx Safe-area / notch support
M6xx AirLLM backend interface contract
"""

import re
from pathlib import Path
from unittest.mock import AsyncMock, MagicMock, patch

# ── helpers ───────────────────────────────────────────────────────────────────

@@ -206,147 +204,3 @@ def test_M505_dvh_units_used():
    """Dynamic viewport height (dvh) accounts for collapsing browser chrome."""
    css = _css()
    assert "dvh" in css


# ── M6xx — AirLLM backend interface contract ──────────────────────────────────


def test_M601_airllm_agent_has_run_method():
    """TimmyAirLLMAgent must expose run() so the dashboard route can call it."""
    from timmy.backends import TimmyAirLLMAgent

    assert hasattr(TimmyAirLLMAgent, "run"), (
        "TimmyAirLLMAgent is missing run() — dashboard will fail with AirLLM backend"
    )


def test_M602_airllm_run_returns_content_attribute():
    """run() must return an object with a .content attribute (Agno RunResponse compat)."""
    with patch("timmy.backends.is_apple_silicon", return_value=False):
        from timmy.backends import TimmyAirLLMAgent

        agent = TimmyAirLLMAgent(model_size="8b")

        mock_model = MagicMock()
        mock_tokenizer = MagicMock()
        input_ids_mock = MagicMock()
        input_ids_mock.shape = [1, 5]
        mock_tokenizer.return_value = {"input_ids": input_ids_mock}
        mock_tokenizer.decode.return_value = "Sir, affirmative."
        mock_model.tokenizer = mock_tokenizer
        mock_model.generate.return_value = [list(range(10))]
        agent._model = mock_model

        result = agent.run("test")
        assert hasattr(result, "content"), "run() result must have a .content attribute"
        assert isinstance(result.content, str)
def test_M603_airllm_run_updates_history():
    """run() must update _history so multi-turn context is preserved."""
    with patch("timmy.backends.is_apple_silicon", return_value=False):
        from timmy.backends import TimmyAirLLMAgent

        agent = TimmyAirLLMAgent(model_size="8b")

        mock_model = MagicMock()
        mock_tokenizer = MagicMock()
        input_ids_mock = MagicMock()
        input_ids_mock.shape = [1, 5]
        mock_tokenizer.return_value = {"input_ids": input_ids_mock}
        mock_tokenizer.decode.return_value = "Acknowledged."
        mock_model.tokenizer = mock_tokenizer
        mock_model.generate.return_value = [list(range(10))]
        agent._model = mock_model

        assert len(agent._history) == 0
        agent.run("hello")
        assert len(agent._history) == 2
        assert any("hello" in h for h in agent._history)


def test_M604_airllm_print_response_delegates_to_run():
    """print_response must use run() so both interfaces share one inference path."""
    with patch("timmy.backends.is_apple_silicon", return_value=False):
        from timmy.backends import RunResult, TimmyAirLLMAgent

        agent = TimmyAirLLMAgent(model_size="8b")

        with (
            patch.object(agent, "run", return_value=RunResult(content="ok")) as mock_run,
            patch.object(agent, "_render"),
        ):
            agent.print_response("hello", stream=True)

        mock_run.assert_called_once_with("hello", stream=True)


def test_M605_health_status_passes_model_to_template(client):
    """Health status partial must receive the configured model name, not a hardcoded string."""
    from config import settings

    with patch(
        "dashboard.routes.health.check_ollama",
        new_callable=AsyncMock,
        return_value=True,
    ):
        response = client.get("/health/status")
    # Model name should come from settings, not be hardcoded
    assert response.status_code == 200
    model_short = settings.ollama_model.split(":")[0]
    assert model_short in response.text


# ── M7xx — XSS prevention ─────────────────────────────────────────────────────


def _mobile_html() -> str:
    """Read the mobile template source."""
    path = Path(__file__).parent.parent.parent / "src" / "dashboard" / "templates" / "mobile.html"
    return path.read_text()


def _swarm_live_html() -> str:
    """Read the swarm live template source."""
    path = (
        Path(__file__).parent.parent.parent / "src" / "dashboard" / "templates" / "swarm_live.html"
    )
    return path.read_text()


def test_M701_mobile_chat_no_raw_message_interpolation():
    """mobile.html must not interpolate ${message} directly into innerHTML — XSS risk."""
    html = _mobile_html()
    # The vulnerable pattern is `${message}` inside a template literal assigned to innerHTML
    # After the fix, message must only appear via textContent assignment
    assert "textContent = message" in html or "textContent=message" in html, (
        "mobile.html still uses innerHTML + ${message} interpolation — XSS vulnerability"
    )


def test_M702_mobile_chat_user_input_not_in_innerhtml_template_literal():
    """${message} must not appear inside a backtick string that is assigned to innerHTML."""
    html = _mobile_html()
    # Find all innerHTML += `...` blocks and verify none contain ${message}
    blocks = re.findall(r"innerHTML\s*\+=?\s*`([^`]*)`", html, re.DOTALL)
    for block in blocks:
        assert "${message}" not in block, (
            "innerHTML template literal still contains ${message} — XSS vulnerability"
        )


def test_M703_swarm_live_agent_name_not_interpolated_in_innerhtml():
    """swarm_live.html must not put ${agent.name} inside innerHTML template literals."""
    html = _swarm_live_html()
    blocks = re.findall(r"innerHTML\s*=\s*agents\.map\([^;]+\)\.join\([^)]*\)", html, re.DOTALL)
    assert len(blocks) == 0, (
        "swarm_live.html still uses innerHTML=agents.map(…) with interpolated agent data — XSS vulnerability"
    )


def test_M704_swarm_live_uses_textcontent_for_agent_data():
    """swarm_live.html must use textContent (not innerHTML) to set agent name/description."""
    html = _swarm_live_html()
    assert "textContent" in html, (
        "swarm_live.html does not use textContent — agent data may be raw-interpolated into DOM"
    )
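The template-literal scan in test_M702 can be lifted into a standalone helper. A sketch under one assumption: the `\+?=` variant below also catches plain `=` assignments, whereas the test's `\+=?` pattern only targets `+=`:

```python
import re

# Standalone version of the scan test_M702 performs: collect template
# literals assigned to innerHTML and flag any that interpolate ${message}.
# Hypothetical helper, not part of the test suite above.
def unsafe_innerhtml_blocks(html: str) -> list[str]:
    # \+?= matches both `innerHTML =` and `innerHTML +=` (an assumption
    # broadening the original test's pattern).
    blocks = re.findall(r"innerHTML\s*\+?=\s*`([^`]*)`", html, re.DOTALL)
    return [b for b in blocks if "${message}" in b]
```

Returning the offending blocks (rather than asserting inline) makes failures easier to diagnose, since the report can show exactly which literal leaked user input.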
187  tests/dashboard/test_tower.py  Normal file
@@ -0,0 +1,187 @@
"""Tests for Tower dashboard route (/tower)."""

from unittest.mock import MagicMock, patch


def _mock_spark_engine():
    """Return a mock spark_engine with realistic return values."""
    engine = MagicMock()

    engine.status.return_value = {
        "enabled": True,
        "events_captured": 5,
        "memories_stored": 3,
        "predictions": {"total": 2, "avg_accuracy": 0.85},
        "event_types": {
            "task_posted": 2,
            "bid_submitted": 1,
            "task_assigned": 1,
            "task_completed": 1,
            "task_failed": 0,
            "agent_joined": 0,
            "tool_executed": 0,
            "creative_step": 0,
        },
    }

    event = MagicMock()
    event.event_type = "task_completed"
    event.description = "Task finished"
    event.importance = 0.8
    event.created_at = "2026-01-01T00:00:00"
    event.agent_id = "agent-1234-abcd"
    event.task_id = "task-5678-efgh"
    event.data = '{"result": "ok"}'
    engine.get_timeline.return_value = [event]

    pred = MagicMock()
    pred.task_id = "task-5678-efgh"
    pred.accuracy = 0.9
    pred.evaluated_at = "2026-01-01T01:00:00"
    pred.created_at = "2026-01-01T00:30:00"
    pred.predicted_value = '{"outcome": "success"}'
    engine.get_predictions.return_value = [pred]

    advisory = MagicMock()
    advisory.category = "performance"
    advisory.priority = "high"
    advisory.title = "Slow tasks"
    advisory.detail = "Tasks taking longer than expected"
    advisory.suggested_action = "Scale up workers"
    engine.get_advisories.return_value = [advisory]

    return engine


class TestTowerUI:
    """Tests for GET /tower endpoint."""

    @patch("dashboard.routes.tower.spark_engine", new_callable=_mock_spark_engine)
    def test_tower_returns_200(self, mock_engine, client):
        response = client.get("/tower")
        assert response.status_code == 200

    @patch("dashboard.routes.tower.spark_engine", new_callable=_mock_spark_engine)
    def test_tower_returns_html(self, mock_engine, client):
        response = client.get("/tower")
        assert "text/html" in response.headers["content-type"]

    @patch("dashboard.routes.tower.spark_engine", new_callable=_mock_spark_engine)
    def test_tower_contains_dashboard_content(self, mock_engine, client):
        response = client.get("/tower")
        body = response.text
        assert "tower" in body.lower() or "spark" in body.lower()
class TestSparkSnapshot:
    """Tests for _spark_snapshot helper."""

    @patch("dashboard.routes.tower.spark_engine", new_callable=_mock_spark_engine)
    def test_snapshot_structure(self, mock_engine):
        from dashboard.routes.tower import _spark_snapshot

        snap = _spark_snapshot()
        assert snap["type"] == "spark_state"
        assert "status" in snap
        assert "events" in snap
        assert "predictions" in snap
        assert "advisories" in snap

    @patch("dashboard.routes.tower.spark_engine", new_callable=_mock_spark_engine)
    def test_snapshot_events_parsed(self, mock_engine):
        from dashboard.routes.tower import _spark_snapshot

        snap = _spark_snapshot()
        ev = snap["events"][0]
        assert ev["event_type"] == "task_completed"
        assert ev["importance"] == 0.8
        assert ev["agent_id"] == "agent-12"
        assert ev["task_id"] == "task-567"
        assert ev["data"] == {"result": "ok"}

    @patch("dashboard.routes.tower.spark_engine", new_callable=_mock_spark_engine)
    def test_snapshot_predictions_parsed(self, mock_engine):
        from dashboard.routes.tower import _spark_snapshot

        snap = _spark_snapshot()
        pred = snap["predictions"][0]
        assert pred["task_id"] == "task-567"
        assert pred["accuracy"] == 0.9
        assert pred["evaluated"] is True
        assert pred["predicted"] == {"outcome": "success"}

    @patch("dashboard.routes.tower.spark_engine", new_callable=_mock_spark_engine)
    def test_snapshot_advisories_parsed(self, mock_engine):
        from dashboard.routes.tower import _spark_snapshot

        snap = _spark_snapshot()
        adv = snap["advisories"][0]
        assert adv["category"] == "performance"
        assert adv["priority"] == "high"
        assert adv["title"] == "Slow tasks"
        assert adv["suggested_action"] == "Scale up workers"

    @patch("dashboard.routes.tower.spark_engine")
    def test_snapshot_handles_empty_state(self, mock_engine):
        mock_engine.status.return_value = {"enabled": False}
        mock_engine.get_timeline.return_value = []
        mock_engine.get_predictions.return_value = []
        mock_engine.get_advisories.return_value = []

        from dashboard.routes.tower import _spark_snapshot

        snap = _spark_snapshot()
        assert snap["events"] == []
        assert snap["predictions"] == []
        assert snap["advisories"] == []

    @patch("dashboard.routes.tower.spark_engine")
    def test_snapshot_handles_invalid_json_data(self, mock_engine):
        mock_engine.status.return_value = {"enabled": True}

        event = MagicMock()
        event.event_type = "test"
        event.description = "bad data"
        event.importance = 0.5
        event.created_at = "2026-01-01T00:00:00"
        event.agent_id = None
        event.task_id = None
        event.data = "not-json{"
        mock_engine.get_timeline.return_value = [event]

        pred = MagicMock()
        pred.task_id = None
        pred.accuracy = None
        pred.evaluated_at = None
        pred.created_at = "2026-01-01T00:00:00"
        pred.predicted_value = None
        mock_engine.get_predictions.return_value = [pred]

        mock_engine.get_advisories.return_value = []

        from dashboard.routes.tower import _spark_snapshot

        snap = _spark_snapshot()
        ev = snap["events"][0]
        assert ev["data"] == {}
        assert "agent_id" not in ev
        assert "task_id" not in ev

        pred = snap["predictions"][0]
        assert pred["task_id"] == "?"
        assert pred["predicted"] == {}


class TestTowerWebSocket:
    """Tests for WS /tower/ws endpoint."""

    @patch("dashboard.routes.tower.spark_engine", new_callable=_mock_spark_engine)
    @patch("dashboard.routes.tower._PUSH_INTERVAL", 0)
    def test_ws_sends_initial_snapshot(self, mock_engine, client):
        import json

        with client.websocket_connect("/tower/ws") as ws:
            data = json.loads(ws.receive_text())
            assert data["type"] == "spark_state"
            assert "status" in data
            assert "events" in data
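The snapshot tests above imply two small behaviors: non-JSON event payloads collapse to `{}`, and IDs are shortened to an 8-character prefix (`"agent-1234-abcd"` → `"agent-12"`). A sketch of both, using hypothetical helpers rather than the real `dashboard.routes.tower` internals:

```python
import json

# Tolerant payload parsing: anything that isn't a JSON object becomes {}.
def parse_payload(raw):
    if not raw:
        return {}
    try:
        parsed = json.loads(raw)
    except (TypeError, ValueError):
        return {}
    return parsed if isinstance(parsed, dict) else {}

# ID shortening with the "?" fallback the prediction tests expect.
def short_id(value, width=8):
    return value[:width] if value else "?"
```

Swallowing the parse error (rather than letting it propagate) is what keeps the `/tower` page rendering even when an event row carries corrupt data.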
720  tests/dashboard/test_world_api.py  Normal file
@@ -0,0 +1,720 @@
"""Tests for GET /api/world/state endpoint and /api/world/ws relay."""

import asyncio
import json
import logging
import time
from unittest.mock import AsyncMock, MagicMock, patch

import pytest

from dashboard.routes.world import (
    _GROUND_TTL,
    _REMIND_AFTER,
    _STALE_THRESHOLD,
    _bark_and_broadcast,
    _broadcast,
    _build_commitment_context,
    _build_world_state,
    _commitments,
    _conversation,
    _extract_commitments,
    _generate_bark,
    _handle_client_message,
    _heartbeat,
    _log_bark_failure,
    _read_presence_file,
    _record_commitments,
    _refresh_ground,
    _tick_commitments,
    broadcast_world_state,
    close_commitment,
    get_commitments,
    reset_commitments,
    reset_conversation_ground,
)

# ---------------------------------------------------------------------------
# _build_world_state
# ---------------------------------------------------------------------------


def test_build_world_state_maps_fields():
    presence = {
        "version": 1,
        "liveness": "2026-03-19T02:00:00Z",
        "mood": "exploring",
        "current_focus": "reviewing PR",
        "energy": 0.8,
        "confidence": 0.9,
        "active_threads": [{"type": "thinking", "ref": "test", "status": "active"}],
        "recent_events": [],
        "concerns": [],
    }
    result = _build_world_state(presence)

    assert result["timmyState"]["mood"] == "exploring"
    assert result["timmyState"]["activity"] == "reviewing PR"
    assert result["timmyState"]["energy"] == 0.8
    assert result["timmyState"]["confidence"] == 0.9
    assert result["updatedAt"] == "2026-03-19T02:00:00Z"
    assert result["version"] == 1
    assert result["visitorPresent"] is False
    assert len(result["activeThreads"]) == 1


def test_build_world_state_defaults():
    """Missing fields get safe defaults."""
    result = _build_world_state({})
    assert result["timmyState"]["mood"] == "calm"
    assert result["timmyState"]["energy"] == 0.5
    assert result["version"] == 1


# ---------------------------------------------------------------------------
# _read_presence_file
# ---------------------------------------------------------------------------


def test_read_presence_file_missing(tmp_path):
    with patch("dashboard.routes.world.PRESENCE_FILE", tmp_path / "nope.json"):
        assert _read_presence_file() is None


def test_read_presence_file_stale(tmp_path):
    f = tmp_path / "presence.json"
    f.write_text(json.dumps({"version": 1}))
    # Backdate the file
    stale_time = time.time() - _STALE_THRESHOLD - 10
    import os

    os.utime(f, (stale_time, stale_time))
    with patch("dashboard.routes.world.PRESENCE_FILE", f):
        assert _read_presence_file() is None


def test_read_presence_file_fresh(tmp_path):
    f = tmp_path / "presence.json"
    f.write_text(json.dumps({"version": 1, "mood": "focused"}))
    with patch("dashboard.routes.world.PRESENCE_FILE", f):
        result = _read_presence_file()
        assert result is not None
        assert result["version"] == 1


def test_read_presence_file_bad_json(tmp_path):
    f = tmp_path / "presence.json"
    f.write_text("not json {{{")
    with patch("dashboard.routes.world.PRESENCE_FILE", f):
        assert _read_presence_file() is None
# ---------------------------------------------------------------------------
# Full endpoint via TestClient
# ---------------------------------------------------------------------------


@pytest.fixture
def client():
    from fastapi import FastAPI
    from fastapi.testclient import TestClient

    app = FastAPI()
    from dashboard.routes.world import router

    app.include_router(router)
    return TestClient(app)


def test_world_state_endpoint_with_file(client, tmp_path):
    """Endpoint returns data from presence file when fresh."""
    f = tmp_path / "presence.json"
    f.write_text(
        json.dumps(
            {
                "version": 1,
                "liveness": "2026-03-19T02:00:00Z",
                "mood": "exploring",
                "current_focus": "testing",
                "active_threads": [],
                "recent_events": [],
                "concerns": [],
            }
        )
    )
    with patch("dashboard.routes.world.PRESENCE_FILE", f):
        resp = client.get("/api/world/state")

    assert resp.status_code == 200
    data = resp.json()
    assert data["timmyState"]["mood"] == "exploring"
    assert data["timmyState"]["activity"] == "testing"
    assert resp.headers["cache-control"] == "no-cache, no-store"


def test_world_state_endpoint_fallback(client, tmp_path):
    """Endpoint falls back to live state when file missing."""
    with (
        patch("dashboard.routes.world.PRESENCE_FILE", tmp_path / "nope.json"),
        patch("timmy.workshop_state.get_state_dict") as mock_get,
    ):
        mock_get.return_value = {
            "version": 1,
            "liveness": "2026-03-19T02:00:00Z",
            "mood": "calm",
            "current_focus": "",
            "active_threads": [],
            "recent_events": [],
            "concerns": [],
        }
        resp = client.get("/api/world/state")

    assert resp.status_code == 200
    assert resp.json()["timmyState"]["mood"] == "calm"


def test_world_state_endpoint_full_fallback(client, tmp_path):
    """Endpoint returns safe defaults when everything fails."""
    with (
        patch("dashboard.routes.world.PRESENCE_FILE", tmp_path / "nope.json"),
        patch(
            "timmy.workshop_state.get_state_dict",
            side_effect=RuntimeError("boom"),
        ),
    ):
        resp = client.get("/api/world/state")

    assert resp.status_code == 200
    data = resp.json()
    assert data["timmyState"]["mood"] == "calm"
    assert data["version"] == 1


# ---------------------------------------------------------------------------
# broadcast_world_state
# ---------------------------------------------------------------------------


@pytest.mark.asyncio
async def test_broadcast_world_state_sends_timmy_state():
    """broadcast_world_state sends timmy_state JSON to connected clients."""
    from dashboard.routes.world import _ws_clients

    ws = AsyncMock()
    _ws_clients.append(ws)
    try:
        presence = {
            "version": 1,
            "mood": "exploring",
            "current_focus": "testing",
            "energy": 0.8,
            "confidence": 0.9,
        }
        await broadcast_world_state(presence)

        ws.send_text.assert_called_once()
        msg = json.loads(ws.send_text.call_args[0][0])
        assert msg["type"] == "timmy_state"
        assert msg["mood"] == "exploring"
        assert msg["activity"] == "testing"
    finally:
        _ws_clients.clear()


@pytest.mark.asyncio
async def test_broadcast_world_state_removes_dead_clients():
    """Dead WebSocket connections are cleaned up on broadcast."""
    from dashboard.routes.world import _ws_clients

    dead_ws = AsyncMock()
    dead_ws.send_text.side_effect = ConnectionError("gone")
    _ws_clients.append(dead_ws)
    try:
        await broadcast_world_state({"mood": "idle"})
        assert dead_ws not in _ws_clients
    finally:
        _ws_clients.clear()


def test_world_ws_endpoint_accepts_connection(client):
    """WebSocket endpoint at /api/world/ws accepts connections."""
    with client.websocket_connect("/api/world/ws"):
        pass  # Connection accepted — just close it
def test_world_ws_sends_snapshot_on_connect(client, tmp_path):
    """WebSocket sends a world_state snapshot immediately on connect."""
    f = tmp_path / "presence.json"
    f.write_text(
        json.dumps(
            {
                "version": 1,
                "liveness": "2026-03-19T02:00:00Z",
                "mood": "exploring",
                "current_focus": "testing",
                "active_threads": [],
                "recent_events": [],
                "concerns": [],
            }
        )
    )
    with patch("dashboard.routes.world.PRESENCE_FILE", f):
        with client.websocket_connect("/api/world/ws") as ws:
            msg = json.loads(ws.receive_text())

    assert msg["type"] == "world_state"
    assert msg["timmyState"]["mood"] == "exploring"
    assert msg["timmyState"]["activity"] == "testing"
    assert "updatedAt" in msg


# ---------------------------------------------------------------------------
# Visitor chat — bark engine
# ---------------------------------------------------------------------------


@pytest.mark.asyncio
async def test_handle_client_message_ignores_non_json():
    """Non-JSON messages are silently ignored."""
    await _handle_client_message("not json")  # should not raise


@pytest.mark.asyncio
async def test_handle_client_message_ignores_unknown_type():
    """Unknown message types are ignored."""
    await _handle_client_message(json.dumps({"type": "unknown"}))


@pytest.mark.asyncio
async def test_handle_client_message_ignores_empty_text():
    """Empty visitor_message text is ignored."""
    await _handle_client_message(json.dumps({"type": "visitor_message", "text": " "}))


@pytest.mark.asyncio
async def test_generate_bark_returns_response():
    """_generate_bark returns the chat response."""
    reset_conversation_ground()
    with patch("timmy.session.chat", new_callable=AsyncMock) as mock_chat:
        mock_chat.return_value = "Woof! Good to see you."
        result = await _generate_bark("Hey Timmy!")

    assert result == "Woof! Good to see you."
    mock_chat.assert_called_once_with("Hey Timmy!", session_id="workshop")


@pytest.mark.asyncio
async def test_generate_bark_fallback_on_error():
    """_generate_bark returns canned response when chat fails."""
    reset_conversation_ground()
    with patch(
        "timmy.session.chat",
        new_callable=AsyncMock,
        side_effect=RuntimeError("no model"),
    ):
        result = await _generate_bark("Hello?")

    assert "tangled" in result


@pytest.mark.asyncio
async def test_bark_and_broadcast_sends_thinking_then_speech():
    """_bark_and_broadcast sends thinking indicator then speech."""
    from dashboard.routes.world import _ws_clients

    ws = AsyncMock()
    _ws_clients.append(ws)
    _conversation.clear()
    reset_conversation_ground()
    try:
        with patch(
            "timmy.session.chat",
            new_callable=AsyncMock,
            return_value="All good here!",
        ):
            await _bark_and_broadcast("How are you?")

        # Should have sent two messages: thinking + speech
        assert ws.send_text.call_count == 2
        thinking = json.loads(ws.send_text.call_args_list[0][0][0])
        speech = json.loads(ws.send_text.call_args_list[1][0][0])

        assert thinking["type"] == "timmy_thinking"
        assert speech["type"] == "timmy_speech"
        assert speech["text"] == "All good here!"
        assert len(speech["recentExchanges"]) == 1
        assert speech["recentExchanges"][0]["visitor"] == "How are you?"
    finally:
        _ws_clients.clear()
        _conversation.clear()


@pytest.mark.asyncio
async def test_broadcast_removes_dead_clients():
    """Dead clients are cleaned up during broadcast."""
    from dashboard.routes.world import _ws_clients

    dead = AsyncMock()
    dead.send_text.side_effect = ConnectionError("gone")
    _ws_clients.append(dead)
    try:
        await _broadcast(json.dumps({"type": "timmy_speech", "text": "test"}))
        assert dead not in _ws_clients
    finally:
        _ws_clients.clear()
@pytest.mark.asyncio
async def test_conversation_buffer_caps_at_max():
    """Conversation buffer only keeps the last _MAX_EXCHANGES entries."""
    from dashboard.routes.world import _MAX_EXCHANGES, _ws_clients

    ws = AsyncMock()
    _ws_clients.append(ws)
    _conversation.clear()
    reset_conversation_ground()
    try:
        with patch(
            "timmy.session.chat",
            new_callable=AsyncMock,
            return_value="reply",
        ):
            for i in range(_MAX_EXCHANGES + 2):
                await _bark_and_broadcast(f"msg {i}")

        assert len(_conversation) == _MAX_EXCHANGES
        # Oldest messages should have been evicted
        assert _conversation[0]["visitor"] == f"msg {_MAX_EXCHANGES + 2 - _MAX_EXCHANGES}"
    finally:
        _ws_clients.clear()
        _conversation.clear()


def test_log_bark_failure_logs_exception(caplog):
    """_log_bark_failure logs errors from failed bark tasks."""

    loop = asyncio.new_event_loop()

    async def _fail():
        raise RuntimeError("bark boom")

    task = loop.create_task(_fail())
    loop.run_until_complete(asyncio.sleep(0.01))
    loop.close()
    with caplog.at_level(logging.ERROR):
        _log_bark_failure(task)
    assert "bark boom" in caplog.text


def test_log_bark_failure_ignores_cancelled():
    """_log_bark_failure silently ignores cancelled tasks."""

    task = MagicMock(spec=asyncio.Task)
    task.cancelled.return_value = True
    _log_bark_failure(task)  # should not raise


# ---------------------------------------------------------------------------
# Conversation grounding (#322)
# ---------------------------------------------------------------------------


class TestConversationGrounding:
    """Tests for conversation grounding — prevent topic drift."""

    def setup_method(self):
        reset_conversation_ground()

    def teardown_method(self):
        reset_conversation_ground()

    def test_refresh_ground_sets_topic_on_first_message(self):
        """First visitor message becomes the grounding anchor."""
        import dashboard.routes.world as w

        _refresh_ground("Tell me about the Bible")
        assert w._ground_topic == "Tell me about the Bible"
        assert w._ground_set_at > 0

    def test_refresh_ground_keeps_topic_on_subsequent_messages(self):
        """Subsequent messages don't overwrite the anchor."""
        import dashboard.routes.world as w

        _refresh_ground("Tell me about the Bible")
        _refresh_ground("What about Genesis?")
        assert w._ground_topic == "Tell me about the Bible"

    def test_refresh_ground_resets_after_ttl(self):
        """Anchor expires after _GROUND_TTL seconds of inactivity."""
        import dashboard.routes.world as w

        _refresh_ground("Tell me about the Bible")
        # Simulate TTL expiry
        w._ground_set_at = time.time() - _GROUND_TTL - 1
        _refresh_ground("Now tell me about cooking")
        assert w._ground_topic == "Now tell me about cooking"

    def test_refresh_ground_truncates_long_messages(self):
        """Anchor text is capped at 120 characters."""
        import dashboard.routes.world as w

        long_msg = "x" * 200
        _refresh_ground(long_msg)
        assert len(w._ground_topic) == 120

    def test_reset_conversation_ground_clears_state(self):
        """reset_conversation_ground clears the anchor."""
        import dashboard.routes.world as w

        _refresh_ground("Some topic")
        reset_conversation_ground()
        assert w._ground_topic is None
        assert w._ground_set_at == 0.0

    @pytest.mark.asyncio
    async def test_generate_bark_prepends_ground_topic(self):
        """When grounded, the topic is prepended to the visitor message."""
        _refresh_ground("Tell me about prayer")
        with patch("timmy.session.chat", new_callable=AsyncMock) as mock_chat:
            mock_chat.return_value = "Great question!"
            await _generate_bark("What else can you share?")

        call_text = mock_chat.call_args[0][0]
        assert "[Workshop conversation topic: Tell me about prayer]" in call_text
        assert "What else can you share?" in call_text

    @pytest.mark.asyncio
    async def test_generate_bark_no_prefix_for_first_message(self):
        """First message (which IS the anchor) is not prefixed."""
        _refresh_ground("Tell me about prayer")
        with patch("timmy.session.chat", new_callable=AsyncMock) as mock_chat:
            mock_chat.return_value = "Sure!"
            await _generate_bark("Tell me about prayer")

        call_text = mock_chat.call_args[0][0]
        assert "[Workshop conversation topic:" not in call_text
        assert call_text == "Tell me about prayer"

    @pytest.mark.asyncio
    async def test_bark_and_broadcast_sets_ground(self):
        """_bark_and_broadcast sets the ground topic automatically."""
        import dashboard.routes.world as w
        from dashboard.routes.world import _ws_clients

        ws = AsyncMock()
        _ws_clients.append(ws)
        _conversation.clear()
        try:
            with patch(
                "timmy.session.chat",
                new_callable=AsyncMock,
                return_value="Interesting!",
            ):
                await _bark_and_broadcast("What is grace?")
            assert w._ground_topic == "What is grace?"
        finally:
            _ws_clients.clear()
            _conversation.clear()
# ---------------------------------------------------------------------------
# Conversation grounding — commitment tracking (rescued from PR #408)
# ---------------------------------------------------------------------------


@pytest.fixture(autouse=False)
def _clean_commitments():
    """Reset commitments before and after each commitment test."""
    reset_commitments()
    yield
    reset_commitments()


class TestExtractCommitments:
    def test_extracts_ill_pattern(self):
        text = "I'll draft the skeleton ticket in 30 minutes."
        result = _extract_commitments(text)
        assert len(result) == 1
        assert "draft the skeleton ticket" in result[0]

    def test_extracts_i_will_pattern(self):
        result = _extract_commitments("I will review that PR tomorrow.")
        assert len(result) == 1
        assert "review that PR tomorrow" in result[0]

    def test_extracts_let_me_pattern(self):
        result = _extract_commitments("Let me write up a summary for you.")
        assert len(result) == 1
        assert "write up a summary" in result[0]

    def test_skips_short_matches(self):
        result = _extract_commitments("I'll do it.")
        # "do it" is 5 chars — should be skipped (needs > 5)
        assert result == []

    def test_no_commitments_in_normal_text(self):
        result = _extract_commitments("The weather is nice today.")
        assert result == []

    def test_truncates_long_commitments(self):
        long_phrase = "a" * 200
        result = _extract_commitments(f"I'll {long_phrase}.")
        assert len(result) == 1
        assert len(result[0]) == 120


class TestRecordCommitments:
    def test_records_new_commitment(self, _clean_commitments):
        _record_commitments("I'll draft the ticket now.")
        assert len(get_commitments()) == 1
        assert get_commitments()[0]["messages_since"] == 0

    def test_avoids_duplicate_commitments(self, _clean_commitments):
        _record_commitments("I'll draft the ticket now.")
        _record_commitments("I'll draft the ticket now.")
        assert len(get_commitments()) == 1

    def test_caps_at_max(self, _clean_commitments):
        from dashboard.routes.world import _MAX_COMMITMENTS

        for i in range(_MAX_COMMITMENTS + 3):
            _record_commitments(f"I'll handle commitment number {i} right away.")
        assert len(get_commitments()) <= _MAX_COMMITMENTS


class TestTickAndContext:
    def test_tick_increments_messages_since(self, _clean_commitments):
        _commitments.append({"text": "write the docs", "created_at": 0, "messages_since": 0})
        _tick_commitments()
        _tick_commitments()
        assert _commitments[0]["messages_since"] == 2

    def test_context_empty_when_no_overdue(self, _clean_commitments):
        _commitments.append({"text": "write the docs", "created_at": 0, "messages_since": 0})
        assert _build_commitment_context() == ""

    def test_context_surfaces_overdue_commitments(self, _clean_commitments):
        _commitments.append(
            {
                "text": "draft the skeleton ticket",
                "created_at": 0,
                "messages_since": _REMIND_AFTER,
            }
        )
        ctx = _build_commitment_context()
        assert "draft the skeleton ticket" in ctx
        assert "Open commitments" in ctx

    def test_context_only_includes_overdue(self, _clean_commitments):
        _commitments.append({"text": "recent thing", "created_at": 0, "messages_since": 1})
        _commitments.append(
            {
                "text": "old thing",
                "created_at": 0,
                "messages_since": _REMIND_AFTER,
            }
        )
        ctx = _build_commitment_context()
        assert "old thing" in ctx
        assert "recent thing" not in ctx


class TestCloseCommitment:
    def test_close_valid_index(self, _clean_commitments):
        _commitments.append({"text": "write the docs", "created_at": 0, "messages_since": 0})
        assert close_commitment(0) is True
        assert len(get_commitments()) == 0

    def test_close_invalid_index(self, _clean_commitments):
        assert close_commitment(99) is False


class TestGroundingIntegration:
    @pytest.mark.asyncio
    async def test_bark_records_commitments_from_reply(self, _clean_commitments):
        from dashboard.routes.world import _ws_clients

        ws = AsyncMock()
        _ws_clients.append(ws)
        _conversation.clear()
        try:
            with patch(
                "timmy.session.chat",
                new_callable=AsyncMock,
                return_value="I'll draft the ticket for you!",
            ):
                await _bark_and_broadcast("Can you help?")

            assert len(get_commitments()) == 1
            assert "draft the ticket" in get_commitments()[0]["text"]
        finally:
            _ws_clients.clear()
            _conversation.clear()

    @pytest.mark.asyncio
    async def test_bark_prepends_context_after_n_messages(self, _clean_commitments):
        """After _REMIND_AFTER messages, commitment context is prepended."""
        _commitments.append(
            {
                "text": "draft the skeleton ticket",
                "created_at": 0,
                "messages_since": _REMIND_AFTER - 1,
            }
        )

        with patch(
            "timmy.session.chat",
            new_callable=AsyncMock,
            return_value="Sure thing!",
        ) as mock_chat:
            await _generate_bark("Any updates?")
            # _generate_bark doesn't tick — _bark_and_broadcast does.
            # We pre-set messages_since to _REMIND_AFTER - 1, so tick once
            # here to push the commitment over the threshold.
            _tick_commitments()
            await _generate_bark("Any updates?")

        # Second call should have context prepended
        last_call = mock_chat.call_args_list[-1]
        sent_text = last_call[0][0]
        assert "draft the skeleton ticket" in sent_text
        assert "Open commitments" in sent_text


# ---------------------------------------------------------------------------
# WebSocket heartbeat ping (rescued from PR #399)
# ---------------------------------------------------------------------------


@pytest.mark.asyncio
async def test_heartbeat_sends_ping():
    """Heartbeat sends a ping JSON frame after the interval elapses."""
    ws = AsyncMock()

    with patch("dashboard.routes.world.asyncio.sleep", new_callable=AsyncMock) as mock_sleep:
        # Let the first sleep complete, then raise to exit the loop
        call_count = 0

        async def sleep_side_effect(_interval):
            nonlocal call_count
            call_count += 1
            if call_count > 1:
                raise ConnectionError("stop")

        mock_sleep.side_effect = sleep_side_effect
        await _heartbeat(ws)

    ws.send_text.assert_called_once()
    msg = json.loads(ws.send_text.call_args[0][0])
    assert msg["type"] == "ping"


@pytest.mark.asyncio
async def test_heartbeat_exits_on_dead_connection():
    """Heartbeat exits cleanly when the WebSocket is dead."""
    ws = AsyncMock()
    ws.send_text.side_effect = ConnectionError("gone")

    with patch("dashboard.routes.world.asyncio.sleep", new_callable=AsyncMock):
        await _heartbeat(ws)  # should not raise

@@ -5,9 +5,16 @@ from datetime import UTC, datetime, timedelta
from unittest.mock import patch

from infrastructure.error_capture import (
    _build_report_description,
    _create_bug_report,
    _dedup_cache,
    _extract_traceback_info,
    _get_git_context,
    _is_duplicate,
    _log_bug_report_created,
    _log_error_event,
    _notify_bug_report,
    _record_to_session,
    _stack_hash,
    capture_error,
)
@@ -193,3 +200,153 @@ class TestCaptureError:

    def teardown_method(self):
        _dedup_cache.clear()


class TestExtractTracebackInfo:
    """Test _extract_traceback_info helper."""

    def test_returns_three_tuple(self):
        try:
            raise ValueError("extract test")
        except ValueError as e:
            tb_str, affected_file, affected_line = _extract_traceback_info(e)
            assert "ValueError" in tb_str
            assert "extract test" in tb_str
            assert affected_file.endswith(".py")
            assert affected_line > 0

    def test_file_points_to_raise_site(self):
        try:
            _make_exception()
        except ValueError as e:
            _, affected_file, _ = _extract_traceback_info(e)
            assert "test_error_capture" in affected_file


class TestLogErrorEvent:
    """Test _log_error_event helper."""

    def test_does_not_crash_on_missing_deps(self):
        try:
            raise RuntimeError("log test")
        except RuntimeError as e:
            _log_error_event(e, "test", "abc123", "file.py", 42, {"branch": "main"})


class TestBuildReportDescription:
    """Test _build_report_description helper."""

    def test_includes_error_info(self):
        try:
            raise RuntimeError("desc test")
        except RuntimeError as e:
            desc = _build_report_description(
                e,
                "test_src",
                None,
                "hash1",
                "tb...",
                "file.py",
                10,
                {"branch": "main"},
            )
            assert "RuntimeError" in desc
            assert "test_src" in desc
            assert "file.py:10" in desc
            assert "hash1" in desc

    def test_includes_context_when_provided(self):
        try:
            raise RuntimeError("ctx desc")
        except RuntimeError as e:
            desc = _build_report_description(
                e,
                "src",
                {"path": "/api"},
                "h",
                "tb",
                "f.py",
                1,
                {},
            )
            assert "path=/api" in desc

    def test_omits_context_when_none(self):
        try:
            raise RuntimeError("no ctx")
        except RuntimeError as e:
            desc = _build_report_description(
                e,
                "src",
                None,
                "h",
                "tb",
                "f.py",
                1,
                {},
            )
            assert "**Context:**" not in desc


class TestLogBugReportCreated:
    """Test _log_bug_report_created helper."""

    def test_does_not_crash_on_missing_deps(self):
        _log_bug_report_created("test", "task-1", "hash1", "title")


class TestCreateBugReport:
    """Test _create_bug_report helper."""

    def test_does_not_crash_on_missing_deps(self):
        try:
            raise RuntimeError("report test")
        except RuntimeError as e:
            result = _create_bug_report(
                e, "test", None, "abc123", "traceback...", "file.py", 42, {}
            )
            # May return None if swarm deps unavailable — that's fine
            assert result is None or isinstance(result, str)

    def test_with_context(self):
        try:
            raise RuntimeError("ctx test")
        except RuntimeError as e:
            result = _create_bug_report(e, "test", {"path": "/api"}, "abc", "tb", "f.py", 1, {})
            assert result is None or isinstance(result, str)


class TestNotifyBugReport:
    """Test _notify_bug_report helper."""

    def test_does_not_crash(self):
        try:
            raise RuntimeError("notify test")
        except RuntimeError as e:
            _notify_bug_report(e, "test")


class TestRecordToSession:
    """Test _record_to_session helper."""

    def test_does_not_crash_without_recorder(self):
        try:
            raise RuntimeError("session test")
        except RuntimeError as e:
            _record_to_session(e, "test")

    def test_calls_registered_recorder(self):
        from infrastructure.error_capture import register_error_recorder

        calls = []
        register_error_recorder(lambda **kwargs: calls.append(kwargs))
        try:
            try:
                raise RuntimeError("callback test")
            except RuntimeError as e:
                _record_to_session(e, "test_source")
            assert len(calls) == 1
            assert "RuntimeError" in calls[0]["error"]
            assert calls[0]["context"] == "test_source"
        finally:
            register_error_recorder(None)

@@ -2,7 +2,7 @@

import time
from pathlib import Path
-from unittest.mock import AsyncMock, MagicMock, patch
+from unittest.mock import AsyncMock, patch

import pytest
import yaml
@@ -489,30 +489,182 @@ class TestProviderAvailabilityCheck:

        assert router._check_provider_available(provider) is False

    def test_check_airllm_installed(self):
        """Test AirLLM when installed."""
        router = CascadeRouter(config_path=Path("/nonexistent"))

        provider = Provider(
            name="airllm",
            type="airllm",
            enabled=True,
            priority=1,
        )

        with patch("importlib.util.find_spec", return_value=MagicMock()):
            assert router._check_provider_available(provider) is True

    def test_check_airllm_not_installed(self):
        """Test AirLLM when not installed."""
        router = CascadeRouter(config_path=Path("/nonexistent"))

        provider = Provider(
            name="airllm",
            type="airllm",
            enabled=True,
            priority=1,
        )

        with patch("importlib.util.find_spec", return_value=None):
            assert router._check_provider_available(provider) is False


class TestCascadeRouterReload:
    """Test hot-reload of providers.yaml."""

    def test_reload_preserves_metrics(self, tmp_path):
        """Test that reload preserves metrics for existing providers."""
        config = {
            "providers": [
                {
                    "name": "test-openai",
                    "type": "openai",
                    "enabled": True,
                    "priority": 1,
                    "api_key": "sk-test",
                }
            ],
        }
        config_path = tmp_path / "providers.yaml"
        config_path.write_text(yaml.dump(config))

        router = CascadeRouter(config_path=config_path)
        assert len(router.providers) == 1

        # Simulate some traffic
        router._record_success(router.providers[0], 150.0)
        router._record_success(router.providers[0], 250.0)
        assert router.providers[0].metrics.total_requests == 2

        # Reload
        result = router.reload_config()

        assert result["total_providers"] == 1
        assert result["preserved"] == 1
        assert result["added"] == []
        assert result["removed"] == []
        # Metrics survived
        assert router.providers[0].metrics.total_requests == 2
        assert router.providers[0].metrics.total_latency_ms == 400.0

    def test_reload_preserves_circuit_breaker(self, tmp_path):
        """Test that reload preserves circuit breaker state."""
        config = {
            "cascade": {"circuit_breaker": {"failure_threshold": 2}},
            "providers": [
                {
                    "name": "test-openai",
                    "type": "openai",
                    "enabled": True,
                    "priority": 1,
                    "api_key": "sk-test",
                }
            ],
        }
        config_path = tmp_path / "providers.yaml"
        config_path.write_text(yaml.dump(config))

        router = CascadeRouter(config_path=config_path)

        # Open circuit breaker
        for _ in range(2):
            router._record_failure(router.providers[0])
        assert router.providers[0].circuit_state == CircuitState.OPEN

        # Reload
        router.reload_config()

        # Circuit breaker state preserved
        assert router.providers[0].circuit_state == CircuitState.OPEN
        assert router.providers[0].status == ProviderStatus.UNHEALTHY

    def test_reload_detects_added_provider(self, tmp_path):
        """Test that reload detects newly added providers."""
        config = {
            "providers": [
                {
                    "name": "openai-1",
                    "type": "openai",
                    "enabled": True,
                    "priority": 1,
                    "api_key": "sk-test",
                }
            ],
        }
        config_path = tmp_path / "providers.yaml"
        config_path.write_text(yaml.dump(config))

        router = CascadeRouter(config_path=config_path)
        assert len(router.providers) == 1

        # Add a second provider to config
        config["providers"].append(
            {
                "name": "anthropic-1",
                "type": "anthropic",
                "enabled": True,
                "priority": 2,
                "api_key": "sk-ant-test",
            }
        )
        config_path.write_text(yaml.dump(config))

        result = router.reload_config()

        assert result["total_providers"] == 2
        assert result["preserved"] == 1
        assert result["added"] == ["anthropic-1"]
        assert result["removed"] == []

    def test_reload_detects_removed_provider(self, tmp_path):
        """Test that reload detects removed providers."""
        config = {
            "providers": [
                {
                    "name": "openai-1",
                    "type": "openai",
                    "enabled": True,
                    "priority": 1,
                    "api_key": "sk-test",
                },
                {
                    "name": "anthropic-1",
                    "type": "anthropic",
                    "enabled": True,
                    "priority": 2,
                    "api_key": "sk-ant-test",
                },
            ],
        }
        config_path = tmp_path / "providers.yaml"
        config_path.write_text(yaml.dump(config))

        router = CascadeRouter(config_path=config_path)
        assert len(router.providers) == 2

        # Remove anthropic
        config["providers"] = [config["providers"][0]]
        config_path.write_text(yaml.dump(config))

        result = router.reload_config()

        assert result["total_providers"] == 1
        assert result["preserved"] == 1
        assert result["removed"] == ["anthropic-1"]

    def test_reload_re_sorts_by_priority(self, tmp_path):
        """Test that providers are re-sorted by priority after reload."""
        config = {
            "providers": [
                {
                    "name": "low-priority",
                    "type": "openai",
                    "enabled": True,
                    "priority": 10,
                    "api_key": "sk-test",
                },
                {
                    "name": "high-priority",
                    "type": "openai",
                    "enabled": True,
                    "priority": 1,
                    "api_key": "sk-test2",
                },
            ],
        }
        config_path = tmp_path / "providers.yaml"
        config_path.write_text(yaml.dump(config))

        router = CascadeRouter(config_path=config_path)
        assert router.providers[0].name == "high-priority"

        # Swap priorities
        config["providers"][0]["priority"] = 1
        config["providers"][1]["priority"] = 10
        config_path.write_text(yaml.dump(config))

        router.reload_config()

        assert router.providers[0].name == "low-priority"
        assert router.providers[1].name == "high-priority"

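The state-preserving merge that these reload tests exercise can be illustrated standalone. This is a sketch with plain dicts instead of the router's `Provider` objects, and the helper name `merge_reload` is invented for illustration; only the preserved/added/removed bookkeeping mirrors what `reload_config()` reports:

```python
def merge_reload(old: dict[str, dict], new_names: list[str]) -> dict:
    """Merge freshly parsed provider names with existing runtime state (sketch).

    `old` maps provider name -> its runtime state (metrics, circuit breaker);
    names that survive the reload keep that state, new names start fresh.
    """
    merged: dict[str, dict] = {}
    preserved = 0
    for name in new_names:
        if name in old:
            merged[name] = old[name]  # keep metrics / circuit-breaker state
            preserved += 1
        else:
            merged[name] = {"total_requests": 0, "circuit_state": "closed"}
    return {
        "providers": merged,
        "total_providers": len(merged),
        "preserved": preserved,
        "added": [n for n in new_names if n not in old],
        "removed": [n for n in old if n not in new_names],
    }
```

The key design point the tests lock in: a reload re-reads configuration (priorities, keys) without zeroing out runtime health data, so an open circuit breaker stays open across a config edit.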
tests/infrastructure/test_router_history.py (new file, 149 lines)
@@ -0,0 +1,149 @@
"""Tests for provider health history store and API endpoint."""
|
||||
|
||||
import time
|
||||
from datetime import UTC, datetime, timedelta
|
||||
from unittest.mock import MagicMock
|
||||
|
||||
import pytest
|
||||
from src.infrastructure.router.history import HealthHistoryStore
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def store():
|
||||
"""In-memory history store for testing."""
|
||||
s = HealthHistoryStore(db_path=":memory:")
|
||||
yield s
|
||||
s.close()
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def sample_providers():
|
||||
return [
|
||||
{
|
||||
"name": "anthropic",
|
||||
"status": "healthy",
|
||||
"error_rate": 0.01,
|
||||
"avg_latency_ms": 250.5,
|
||||
"circuit_state": "closed",
|
||||
"total_requests": 100,
|
||||
},
|
||||
{
|
||||
"name": "local",
|
||||
"status": "degraded",
|
||||
"error_rate": 0.15,
|
||||
"avg_latency_ms": 80.0,
|
||||
"circuit_state": "closed",
|
||||
"total_requests": 50,
|
||||
},
|
||||
]
|
||||
|
||||
|
||||
def test_record_and_retrieve(store, sample_providers):
|
||||
store.record_snapshot(sample_providers)
|
||||
history = store.get_history(hours=1)
|
||||
assert len(history) == 1
|
||||
assert len(history[0]["providers"]) == 2
|
||||
assert history[0]["providers"][0]["name"] == "anthropic"
|
||||
assert history[0]["providers"][1]["name"] == "local"
|
||||
assert "timestamp" in history[0]
|
||||
|
||||
|
||||
def test_multiple_snapshots(store, sample_providers):
|
||||
store.record_snapshot(sample_providers)
|
||||
time.sleep(0.01)
|
||||
store.record_snapshot(sample_providers)
|
||||
history = store.get_history(hours=1)
|
||||
assert len(history) == 2
|
||||
|
||||
|
||||
def test_hours_filtering(store, sample_providers):
|
||||
old_ts = (datetime.now(UTC) - timedelta(hours=48)).isoformat()
|
||||
store._conn.execute(
|
||||
"""INSERT INTO snapshots
|
||||
(timestamp, provider_name, status, error_rate,
|
||||
avg_latency_ms, circuit_state, total_requests)
|
||||
VALUES (?, ?, ?, ?, ?, ?, ?)""",
|
||||
(old_ts, "anthropic", "healthy", 0.0, 100.0, "closed", 10),
|
||||
)
|
||||
store._conn.commit()
|
||||
store.record_snapshot(sample_providers)
|
||||
|
||||
history = store.get_history(hours=24)
|
||||
assert len(history) == 1
|
||||
|
||||
history = store.get_history(hours=72)
|
||||
assert len(history) == 2
|
||||
|
||||
|
||||
def test_prune(store, sample_providers):
|
||||
old_ts = (datetime.now(UTC) - timedelta(hours=200)).isoformat()
|
||||
store._conn.execute(
|
||||
"""INSERT INTO snapshots
|
||||
(timestamp, provider_name, status, error_rate,
|
||||
avg_latency_ms, circuit_state, total_requests)
|
||||
VALUES (?, ?, ?, ?, ?, ?, ?)""",
|
||||
(old_ts, "anthropic", "healthy", 0.0, 100.0, "closed", 10),
|
||||
)
|
||||
store._conn.commit()
|
||||
store.record_snapshot(sample_providers)
|
||||
|
||||
deleted = store.prune(keep_hours=168)
|
||||
assert deleted == 1
|
||||
history = store.get_history(hours=999)
|
||||
assert len(history) == 1
|
||||
|
||||
|
||||
def test_empty_history(store):
|
||||
assert store.get_history(hours=24) == []
|
||||
|
||||
|
||||
def test_capture_snapshot_from_router(store):
|
||||
mock_metrics = MagicMock()
|
||||
mock_metrics.error_rate = 0.05
|
||||
mock_metrics.avg_latency_ms = 200.0
|
||||
mock_metrics.total_requests = 42
|
||||
|
||||
mock_provider = MagicMock()
|
||||
mock_provider.name = "test-provider"
|
||||
mock_provider.status.value = "healthy"
|
||||
mock_provider.metrics = mock_metrics
|
||||
mock_provider.circuit_state.value = "closed"
|
||||
|
||||
mock_router = MagicMock()
|
||||
mock_router.providers = [mock_provider]
|
||||
|
||||
store._capture_snapshot(mock_router)
|
||||
history = store.get_history(hours=1)
|
||||
assert len(history) == 1
|
||||
p = history[0]["providers"][0]
|
||||
assert p["name"] == "test-provider"
|
||||
assert p["status"] == "healthy"
|
||||
assert p["error_rate"] == 0.05
|
||||
assert p["total_requests"] == 42
|
||||
|
||||
|
||||
def test_history_api_endpoint(store, sample_providers):
|
||||
"""GET /api/v1/router/history returns snapshot data."""
|
||||
store.record_snapshot(sample_providers)
|
||||
|
||||
from fastapi import FastAPI
|
||||
from fastapi.testclient import TestClient
|
||||
from src.infrastructure.router.api import get_cascade_router
|
||||
from src.infrastructure.router.api import router as api_router
|
||||
from src.infrastructure.router.history import get_history_store
|
||||
|
||||
app = FastAPI()
|
||||
app.include_router(api_router)
|
||||
|
||||
app.dependency_overrides[get_history_store] = lambda: store
|
||||
app.dependency_overrides[get_cascade_router] = lambda: MagicMock()
|
||||
|
||||
client = TestClient(app)
|
||||
resp = client.get("/api/v1/router/history?hours=1")
|
||||
assert resp.status_code == 200
|
||||
data = resp.json()
|
||||
assert len(data) == 1
|
||||
assert len(data[0]["providers"]) == 2
|
||||
assert data[0]["providers"][0]["name"] == "anthropic"
|
||||
|
||||
app.dependency_overrides.clear()
|
||||
tests/integrations/test_agentic_ws_broadcast.py (new file, 285 lines)
@@ -0,0 +1,285 @@
"""Integration tests for agentic loop WebSocket broadcasts.
|
||||
|
||||
Verifies that ``run_agentic_loop`` pushes the correct sequence of events
|
||||
through the real ``ws_manager`` and that connected (mock) WebSocket clients
|
||||
receive every broadcast with the expected payloads.
|
||||
"""
|
||||
|
||||
import json
|
||||
from unittest.mock import AsyncMock, MagicMock, patch
|
||||
|
||||
import pytest
|
||||
|
||||
from infrastructure.ws_manager.handler import WebSocketManager
|
||||
from timmy.agentic_loop import run_agentic_loop
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Helpers
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def _mock_run(content: str):
|
||||
m = MagicMock()
|
||||
m.content = content
|
||||
return m
|
||||
|
||||
|
||||
def _ws_client() -> AsyncMock:
|
||||
"""Return a fake WebSocket that records sent messages."""
|
||||
return AsyncMock()
|
||||
|
||||
|
||||
def _collected_events(ws: AsyncMock) -> list[dict]:
|
||||
"""Extract parsed JSON events from a mock WebSocket's send_text calls."""
|
||||
return [json.loads(call.args[0]) for call in ws.send_text.call_args_list]
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Tests
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestAgenticLoopBroadcastSequence:
|
||||
"""Events arrive at WS clients in the correct order with expected data."""
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_successful_run_broadcasts_plan_steps_complete(self):
|
||||
"""A successful 2-step loop emits plan_ready → 2× step_complete → task_complete."""
|
||||
mgr = WebSocketManager()
|
||||
ws = _ws_client()
|
||||
mgr._connections = [ws]
|
||||
|
||||
mock_agent = MagicMock()
|
||||
mock_agent.run = MagicMock(
|
||||
side_effect=[
|
||||
_mock_run("1. Gather data\n2. Summarise"),
|
||||
_mock_run("Gathered 10 records"),
|
||||
_mock_run("Summary written"),
|
||||
]
|
||||
)
|
||||
|
||||
with (
|
||||
patch("timmy.agentic_loop._get_loop_agent", return_value=mock_agent),
|
||||
patch("infrastructure.ws_manager.handler.ws_manager", mgr),
|
||||
):
|
||||
result = await run_agentic_loop("Gather and summarise", max_steps=2)
|
||||
|
||||
assert result.status == "completed"
|
||||
|
||||
events = _collected_events(ws)
|
||||
event_names = [e["event"] for e in events]
|
||||
assert event_names == [
|
||||
"agentic.plan_ready",
|
||||
"agentic.step_complete",
|
||||
"agentic.step_complete",
|
||||
"agentic.task_complete",
|
||||
]
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_plan_ready_payload(self):
|
||||
"""plan_ready contains task_id, task, steps list, and total count."""
|
||||
mgr = WebSocketManager()
|
||||
ws = _ws_client()
|
||||
mgr._connections = [ws]
|
||||
|
||||
mock_agent = MagicMock()
|
||||
mock_agent.run = MagicMock(
|
||||
side_effect=[
|
||||
_mock_run("1. Alpha\n2. Beta"),
|
||||
_mock_run("Alpha done"),
|
||||
_mock_run("Beta done"),
|
||||
]
|
||||
)
|
||||
|
||||
with (
|
||||
patch("timmy.agentic_loop._get_loop_agent", return_value=mock_agent),
|
||||
patch("infrastructure.ws_manager.handler.ws_manager", mgr),
|
||||
):
|
||||
result = await run_agentic_loop("Two steps")
|
||||
|
||||
plan_event = _collected_events(ws)[0]
|
||||
assert plan_event["event"] == "agentic.plan_ready"
|
||||
data = plan_event["data"]
|
||||
assert data["task_id"] == result.task_id
|
||||
assert data["task"] == "Two steps"
|
||||
assert data["steps"] == ["Alpha", "Beta"]
|
||||
assert data["total"] == 2
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_step_complete_payload(self):
|
||||
"""step_complete carries step number, total, description, and result."""
|
||||
mgr = WebSocketManager()
|
||||
ws = _ws_client()
|
||||
mgr._connections = [ws]
|
||||
|
||||
mock_agent = MagicMock()
|
||||
mock_agent.run = MagicMock(
|
||||
side_effect=[
|
||||
_mock_run("1. Only step"),
|
||||
_mock_run("Step result text"),
|
||||
]
|
||||
        )

        with (
            patch("timmy.agentic_loop._get_loop_agent", return_value=mock_agent),
            patch("infrastructure.ws_manager.handler.ws_manager", mgr),
        ):
            await run_agentic_loop("Single step", max_steps=1)

        step_event = _collected_events(ws)[1]
        assert step_event["event"] == "agentic.step_complete"
        data = step_event["data"]
        assert data["step"] == 1
        assert data["total"] == 1
        assert data["description"] == "Only step"
        assert "Step result text" in data["result"]

    @pytest.mark.asyncio
    async def test_task_complete_payload(self):
        """task_complete has status, steps_completed, summary, and duration_ms."""
        mgr = WebSocketManager()
        ws = _ws_client()
        mgr._connections = [ws]

        mock_agent = MagicMock()
        mock_agent.run = MagicMock(
            side_effect=[
                _mock_run("1. Do it"),
                _mock_run("Done"),
            ]
        )

        with (
            patch("timmy.agentic_loop._get_loop_agent", return_value=mock_agent),
            patch("infrastructure.ws_manager.handler.ws_manager", mgr),
        ):
            await run_agentic_loop("Quick", max_steps=1)

        complete_event = _collected_events(ws)[-1]
        assert complete_event["event"] == "agentic.task_complete"
        data = complete_event["data"]
        assert data["status"] == "completed"
        assert data["steps_completed"] == 1
        assert isinstance(data["duration_ms"], int)
        assert data["duration_ms"] >= 0
        assert data["summary"]


class TestAdaptationBroadcast:
    """Adapted steps emit step_adapted events."""

    @pytest.mark.asyncio
    async def test_adapted_step_broadcasts_step_adapted(self):
        """A failed-then-adapted step emits agentic.step_adapted."""
        mgr = WebSocketManager()
        ws = _ws_client()
        mgr._connections = [ws]

        mock_agent = MagicMock()
        mock_agent.run = MagicMock(
            side_effect=[
                _mock_run("1. Risky step"),
                Exception("disk full"),
                _mock_run("Used /tmp instead"),
            ]
        )

        with (
            patch("timmy.agentic_loop._get_loop_agent", return_value=mock_agent),
            patch("infrastructure.ws_manager.handler.ws_manager", mgr),
        ):
            result = await run_agentic_loop("Adapt test", max_steps=1)

        events = _collected_events(ws)
        event_names = [e["event"] for e in events]
        assert "agentic.step_adapted" in event_names

        adapted = next(e for e in events if e["event"] == "agentic.step_adapted")
        assert adapted["data"]["error"] == "disk full"
        assert adapted["data"]["adaptation"]
        assert result.steps[0].status == "adapted"


class TestMultipleClients:
    """All connected clients receive every broadcast."""

    @pytest.mark.asyncio
    async def test_two_clients_receive_all_events(self):
        mgr = WebSocketManager()
        ws1 = _ws_client()
        ws2 = _ws_client()
        mgr._connections = [ws1, ws2]

        mock_agent = MagicMock()
        mock_agent.run = MagicMock(
            side_effect=[
                _mock_run("1. Step A"),
                _mock_run("A done"),
            ]
        )

        with (
            patch("timmy.agentic_loop._get_loop_agent", return_value=mock_agent),
            patch("infrastructure.ws_manager.handler.ws_manager", mgr),
        ):
            await run_agentic_loop("Multi-client", max_steps=1)

        events1 = _collected_events(ws1)
        events2 = _collected_events(ws2)
        assert len(events1) == len(events2) == 3  # plan + step + complete
        assert [e["event"] for e in events1] == [e["event"] for e in events2]


class TestEventHistory:
    """Broadcasts are recorded in ws_manager event history."""

    @pytest.mark.asyncio
    async def test_events_appear_in_history(self):
        mgr = WebSocketManager()

        mock_agent = MagicMock()
        mock_agent.run = MagicMock(
            side_effect=[
                _mock_run("1. Only"),
                _mock_run("Done"),
            ]
        )

        with (
            patch("timmy.agentic_loop._get_loop_agent", return_value=mock_agent),
            patch("infrastructure.ws_manager.handler.ws_manager", mgr),
        ):
            await run_agentic_loop("History test", max_steps=1)

        history_events = [e.event for e in mgr.event_history]
        assert "agentic.plan_ready" in history_events
        assert "agentic.step_complete" in history_events
        assert "agentic.task_complete" in history_events


class TestBroadcastGracefulDegradation:
    """Loop completes even when ws_manager is unavailable."""

    @pytest.mark.asyncio
    async def test_loop_succeeds_when_broadcast_fails(self):
        """ImportError from ws_manager doesn't crash the loop."""
        mock_agent = MagicMock()
        mock_agent.run = MagicMock(
            side_effect=[
                _mock_run("1. Do it"),
                _mock_run("Done"),
            ]
        )

        with (
            patch("timmy.agentic_loop._get_loop_agent", return_value=mock_agent),
            patch(
                "infrastructure.ws_manager.handler.ws_manager",
                # Use the class as the factory so patch creates a fresh instance;
                # `lambda: MagicMock` would hand back the class itself and the
                # .broadcast assignment below would leak onto every MagicMock.
                new_callable=MagicMock,
            ) as broken_mgr,
        ):
            broken_mgr.broadcast = AsyncMock(side_effect=RuntimeError("ws down"))
            result = await run_agentic_loop("Resilient task", max_steps=1)

        assert result.status == "completed"
        assert len(result.steps) == 1
@@ -174,6 +174,103 @@ class TestDiscordVendor:
        assert result is False


class TestExtractContent:
    def test_strips_bot_mention(self):
        from integrations.chat_bridge.vendors.discord import DiscordVendor

        vendor = DiscordVendor()
        vendor._client = MagicMock()
        vendor._client.user.id = 12345
        msg = MagicMock()
        msg.content = "<@12345> hello there"
        assert vendor._extract_content(msg) == "hello there"

    def test_no_client_user(self):
        from integrations.chat_bridge.vendors.discord import DiscordVendor

        vendor = DiscordVendor()
        vendor._client = MagicMock()
        vendor._client.user = None
        msg = MagicMock()
        msg.content = "hello"
        assert vendor._extract_content(msg) == "hello"

    def test_empty_after_strip(self):
        from integrations.chat_bridge.vendors.discord import DiscordVendor

        vendor = DiscordVendor()
        vendor._client = MagicMock()
        vendor._client.user.id = 99
        msg = MagicMock()
        msg.content = "<@99>"
        assert vendor._extract_content(msg) == ""


class TestInvokeAgent:
    @staticmethod
    def _make_typing_target():
        """Build a mock target whose .typing() is an async context manager."""
        from contextlib import asynccontextmanager

        target = AsyncMock()

        @asynccontextmanager
        async def _typing():
            yield

        target.typing = _typing
        return target

    @pytest.mark.asyncio
    async def test_timeout_returns_error(self):
        from integrations.chat_bridge.vendors.discord import DiscordVendor

        vendor = DiscordVendor()
        target = self._make_typing_target()

        with patch(
            "integrations.chat_bridge.vendors.discord.chat_with_tools", side_effect=TimeoutError
        ):
            run_output, response = await vendor._invoke_agent("hi", "sess", target)
        assert run_output is None
        assert "too long" in response

    @pytest.mark.asyncio
    async def test_exception_returns_error(self):
        from integrations.chat_bridge.vendors.discord import DiscordVendor

        vendor = DiscordVendor()
        target = self._make_typing_target()

        with patch(
            "integrations.chat_bridge.vendors.discord.chat_with_tools",
            side_effect=RuntimeError("boom"),
        ):
            run_output, response = await vendor._invoke_agent("hi", "sess", target)
        assert run_output is None
        assert "trouble" in response


class TestSendResponse:
    @pytest.mark.asyncio
    async def test_skips_empty(self):
        from integrations.chat_bridge.vendors.discord import DiscordVendor

        target = AsyncMock()
        await DiscordVendor._send_response(None, target)
        target.send.assert_not_called()
        await DiscordVendor._send_response("", target)
        target.send.assert_not_called()

    @pytest.mark.asyncio
    async def test_sends_short_message(self):
        from integrations.chat_bridge.vendors.discord import DiscordVendor

        target = AsyncMock()
        await DiscordVendor._send_response("hello", target)
        target.send.assert_called_once_with("hello")


class TestChunkMessage:
    def test_short_message(self):
        from integrations.chat_bridge.vendors.discord import _chunk_message

95
tests/integrations/test_presence_watcher.py
Normal file
@@ -0,0 +1,95 @@
"""Tests for the presence file watcher in dashboard.app."""

import asyncio
import json
from unittest.mock import AsyncMock, patch

import pytest

# Common patches to eliminate delays and inject mock ws_manager
_FAST = {
    "dashboard.app._PRESENCE_POLL_SECONDS": 0.01,
    "dashboard.app._PRESENCE_INITIAL_DELAY": 0,
}


def _patches(mock_ws, presence_file):
    """Return a combined context manager for presence watcher patches."""
    from contextlib import ExitStack

    stack = ExitStack()
    stack.enter_context(patch("dashboard.app.PRESENCE_FILE", presence_file))
    stack.enter_context(patch("infrastructure.ws_manager.handler.ws_manager", mock_ws))
    for key, val in _FAST.items():
        stack.enter_context(patch(key, val))
    return stack


@pytest.mark.asyncio
async def test_presence_watcher_broadcasts_on_file_change(tmp_path):
    """Watcher reads presence.json and broadcasts via ws_manager."""
    from dashboard.app import _presence_watcher

    presence_file = tmp_path / "presence.json"
    state = {
        "version": 1,
        "liveness": "2026-03-18T21:47:12Z",
        "current_focus": "Reviewing PR #267",
        "mood": "focused",
    }
    presence_file.write_text(json.dumps(state))

    mock_ws = AsyncMock()

    with _patches(mock_ws, presence_file):
        task = asyncio.create_task(_presence_watcher())
        await asyncio.sleep(0.15)
        task.cancel()
        try:
            await task
        except asyncio.CancelledError:
            pass

    mock_ws.broadcast.assert_called_with("timmy_state", state)


@pytest.mark.asyncio
async def test_presence_watcher_synthesised_state_when_missing(tmp_path):
    """Watcher broadcasts synthesised idle state when file is absent."""
    from dashboard.app import _SYNTHESIZED_STATE, _presence_watcher

    missing_file = tmp_path / "no-such-file.json"
    mock_ws = AsyncMock()

    with _patches(mock_ws, missing_file):
        task = asyncio.create_task(_presence_watcher())
        await asyncio.sleep(0.15)
        task.cancel()
        try:
            await task
        except asyncio.CancelledError:
            pass

    mock_ws.broadcast.assert_called_with("timmy_state", _SYNTHESIZED_STATE)


@pytest.mark.asyncio
async def test_presence_watcher_handles_bad_json(tmp_path):
    """Watcher logs warning on malformed JSON and doesn't crash."""
    from dashboard.app import _presence_watcher

    presence_file = tmp_path / "presence.json"
    presence_file.write_text("{bad json!!!")
    mock_ws = AsyncMock()

    with _patches(mock_ws, presence_file):
        task = asyncio.create_task(_presence_watcher())
        await asyncio.sleep(0.15)
        task.cancel()
        try:
            await task
        except asyncio.CancelledError:
            pass

    # Should not have broadcast anything on bad JSON
    mock_ws.broadcast.assert_not_called()
0
tests/loop/__init__.py
Normal file
86
tests/loop/test_cycle_retro.py
Normal file
@@ -0,0 +1,86 @@
"""Tests for scripts/cycle_retro.py issue auto-detection."""

from __future__ import annotations

# Import the module under test — it's a script so we import the helpers directly
import importlib
import subprocess
from pathlib import Path
from unittest.mock import patch

import pytest

SCRIPTS_DIR = Path(__file__).resolve().parent.parent.parent / "scripts"


@pytest.fixture(autouse=True)
def _add_scripts_to_path(monkeypatch):
    monkeypatch.syspath_prepend(str(SCRIPTS_DIR))


@pytest.fixture()
def mod():
    """Import cycle_retro as a module."""
    return importlib.import_module("cycle_retro")


class TestDetectIssueFromBranch:
    def test_kimi_issue_branch(self, mod):
        with patch.object(subprocess, "check_output", return_value="kimi/issue-492\n"):
            assert mod.detect_issue_from_branch() == 492

    def test_plain_issue_branch(self, mod):
        with patch.object(subprocess, "check_output", return_value="issue-123\n"):
            assert mod.detect_issue_from_branch() == 123

    def test_issue_slash_number(self, mod):
        with patch.object(subprocess, "check_output", return_value="fix/issue/55\n"):
            assert mod.detect_issue_from_branch() == 55

    def test_no_issue_in_branch(self, mod):
        with patch.object(subprocess, "check_output", return_value="main\n"):
            assert mod.detect_issue_from_branch() is None

    def test_feature_branch(self, mod):
        with patch.object(subprocess, "check_output", return_value="feature/add-widget\n"):
            assert mod.detect_issue_from_branch() is None

    def test_git_not_available(self, mod):
        with patch.object(subprocess, "check_output", side_effect=FileNotFoundError):
            assert mod.detect_issue_from_branch() is None

    def test_git_fails(self, mod):
        with patch.object(
            subprocess,
            "check_output",
            side_effect=subprocess.CalledProcessError(1, "git"),
        ):
            assert mod.detect_issue_from_branch() is None


class TestBackfillExtractIssueNumber:
    """Tests for backfill_retro.extract_issue_number PR-number filtering."""

    @pytest.fixture()
    def backfill(self):
        return importlib.import_module("backfill_retro")

    def test_body_has_issue(self, backfill):
        assert backfill.extract_issue_number("fix: foo (#491)", "Fixes #490", pr_number=491) == 490

    def test_title_skips_pr_number(self, backfill):
        assert backfill.extract_issue_number("fix: foo (#491)", "", pr_number=491) is None

    def test_title_with_issue_and_pr(self, backfill):
        # [loop-cycle-538] refactor: ... (#459) (#481)
        assert (
            backfill.extract_issue_number(
                "[loop-cycle-538] refactor: remove dead airllm (#459) (#481)",
                "",
                pr_number=481,
            )
            == 459
        )

    def test_no_pr_number_provided(self, backfill):
        assert backfill.extract_issue_number("fix: foo (#491)", "") == 491
133
tests/loop/test_three_phase.py
Normal file
@@ -0,0 +1,133 @@
"""Tests for the three-phase loop scaffold.

Validates the acceptance criteria from issue #324:
1. Loop accepts context payload as input to Phase 1
2. Phase 1 output feeds into Phase 2 without manual intervention
3. Phase 2 output feeds into Phase 3 without manual intervention
4. Phase 3 output feeds back into Phase 1
5. Full cycle completes without crash
6. No state leaks between cycles
7. Each phase logs what it received and what it produced
"""

from datetime import datetime

from loop.phase1_gather import gather
from loop.phase2_reason import reason
from loop.phase3_act import act
from loop.runner import run_cycle
from loop.schema import ContextPayload


def _make_payload(source: str = "test", content: str = "hello") -> ContextPayload:
    return ContextPayload(source=source, content=content, token_count=5)


# --- Schema ---


def test_context_payload_defaults():
    p = ContextPayload(source="user", content="hi")
    assert p.source == "user"
    assert p.content == "hi"
    assert p.token_count == -1
    assert p.metadata == {}
    assert isinstance(p.timestamp, datetime)


def test_with_metadata_returns_new_payload():
    p = _make_payload()
    p2 = p.with_metadata(foo="bar")
    assert p2.metadata == {"foo": "bar"}
    assert p.metadata == {}  # original unchanged


def test_with_metadata_merges():
    p = _make_payload().with_metadata(a=1)
    p2 = p.with_metadata(b=2)
    assert p2.metadata == {"a": 1, "b": 2}


# --- Individual phases ---


def test_gather_marks_phase():
    result = gather(_make_payload())
    assert result.metadata["phase"] == "gather"
    assert result.metadata["gathered"] is True


def test_reason_marks_phase():
    gathered = gather(_make_payload())
    result = reason(gathered)
    assert result.metadata["phase"] == "reason"
    assert result.metadata["reasoned"] is True


def test_act_marks_phase():
    gathered = gather(_make_payload())
    reasoned = reason(gathered)
    result = act(reasoned)
    assert result.metadata["phase"] == "act"
    assert result.metadata["acted"] is True


# --- Full cycle ---


def test_full_cycle_completes():
    """Acceptance criterion 5: full cycle completes without crash."""
    payload = _make_payload(source="user", content="What is sovereignty?")
    result = run_cycle(payload)
    assert result.metadata["gathered"] is True
    assert result.metadata["reasoned"] is True
    assert result.metadata["acted"] is True


def test_full_cycle_preserves_source():
    """Source field survives the full pipeline."""
    result = run_cycle(_make_payload(source="timer"))
    assert result.source == "timer"


def test_full_cycle_preserves_content():
    """Content field survives the full pipeline."""
    result = run_cycle(_make_payload(content="test data"))
    assert result.content == "test data"


def test_no_state_leaks_between_cycles():
    """Acceptance criterion 6: no state leaks between cycles."""
    r1 = run_cycle(_make_payload(source="cycle1", content="first"))
    r2 = run_cycle(_make_payload(source="cycle2", content="second"))
    assert r1.source == "cycle1"
    assert r2.source == "cycle2"
    assert r1.content == "first"
    assert r2.content == "second"


def test_cycle_output_feeds_back_as_input():
    """Acceptance criterion 4: Phase 3 output feeds back into Phase 1."""
    first = run_cycle(_make_payload(source="initial"))
    second = run_cycle(first)
    # Second cycle should still work — no crash, metadata accumulates
    assert second.metadata["gathered"] is True
    assert second.metadata["acted"] is True


def test_phases_log(caplog):
    """Acceptance criterion 7: each phase logs what it received and produced."""
    import logging

    with caplog.at_level(logging.INFO):
        run_cycle(_make_payload())

    messages = caplog.text
    assert "Phase 1 (Gather) received" in messages
    assert "Phase 1 (Gather) produced" in messages
    assert "Phase 2 (Reason) received" in messages
    assert "Phase 2 (Reason) produced" in messages
    assert "Phase 3 (Act) received" in messages
    assert "Phase 3 (Act) produced" in messages
    assert "Loop cycle start" in messages
    assert "Loop cycle complete" in messages
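The contract these tests pin down can be sketched end to end. Everything below is an illustrative re-implementation built only from assumptions the tests expose (frozen payloads, one metadata flag per phase, received/produced log lines); the real `loop.schema` and `loop.runner` modules may differ.

```python
import logging
from dataclasses import dataclass, field, replace
from datetime import datetime, timezone

log = logging.getLogger("loop")


@dataclass(frozen=True)
class ContextPayload:
    source: str
    content: str
    token_count: int = -1
    metadata: dict = field(default_factory=dict)
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def with_metadata(self, **extra) -> "ContextPayload":
        # Return a new payload; merging instead of mutating keeps cycles isolated
        return replace(self, metadata={**self.metadata, **extra})


def _phase(num: int, name: str, flag: str, p: ContextPayload) -> ContextPayload:
    log.info("Phase %d (%s) received: %s", num, name, p.metadata)
    out = p.with_metadata(phase=name.lower(), **{flag: True})
    log.info("Phase %d (%s) produced: %s", num, name, out.metadata)
    return out


def gather(p: ContextPayload) -> ContextPayload:
    return _phase(1, "Gather", "gathered", p)


def reason(p: ContextPayload) -> ContextPayload:
    return _phase(2, "Reason", "reasoned", p)


def act(p: ContextPayload) -> ContextPayload:
    return _phase(3, "Act", "acted", p)


def run_cycle(payload: ContextPayload) -> ContextPayload:
    log.info("Loop cycle start")
    result = act(reason(gather(payload)))
    log.info("Loop cycle complete")
    return result
```

Because each phase returns a fresh payload rather than mutating its input, feeding one cycle's output straight into the next (criterion 4) accumulates metadata without any shared state (criterion 6).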
@@ -1,14 +1,22 @@
"""Unit tests for the agentic loop module.

-Tests cover planning, execution, max_steps enforcement, failure
-adaptation, progress callbacks, and response cleaning.
+Tests cover data structures, plan parsing, planning, execution,
+max_steps enforcement, failure adaptation, double-failure,
+progress callbacks, broadcast helper, summary logic, and
+response cleaning.
"""

from unittest.mock import AsyncMock, MagicMock, patch

import pytest

-from timmy.agentic_loop import _parse_steps, run_agentic_loop
+from timmy.agentic_loop import (
+    AgenticResult,
+    AgenticStep,
+    _broadcast_progress,
+    _parse_steps,
+    run_agentic_loop,
+)

# ---------------------------------------------------------------------------
# Helpers
@@ -27,6 +35,27 @@ def _mock_run(content: str):
# ---------------------------------------------------------------------------


class TestDataStructures:
    def test_agentic_step_fields(self):
        step = AgenticStep(
            step_num=1, description="Do X", result="Done", status="completed", duration_ms=42
        )
        assert step.step_num == 1
        assert step.status == "completed"
        assert step.duration_ms == 42

    def test_agentic_result_defaults(self):
        r = AgenticResult(task_id="abc", task="test", summary="ok")
        assert r.steps == []
        assert r.status == "completed"
        assert r.total_duration_ms == 0


# ---------------------------------------------------------------------------
# _parse_steps
# ---------------------------------------------------------------------------


class TestParseSteps:
    def test_numbered_with_dot(self):
        text = "1. Search for data\n2. Write to file\n3. Verify"
@@ -43,6 +72,19 @@ class TestParseSteps:
    def test_empty_returns_empty(self):
        assert _parse_steps("") == []

    def test_whitespace_only_returns_empty(self):
        assert _parse_steps("   \n  \n  ") == []

    def test_leading_whitespace_in_numbered(self):
        text = "  1. First\n  2. Second"
        assert _parse_steps(text) == ["First", "Second"]

    def test_mixed_numbered_and_plain(self):
        """When numbered lines are present, only those are returned."""
        text = "Here is the plan:\n1. Step one\n2. Step two\nGood luck!"
        result = _parse_steps(text)
        assert result == ["Step one", "Step two"]


# ---------------------------------------------------------------------------
# run_agentic_loop
@@ -231,3 +273,191 @@ async def test_planning_failure_returns_failed():

    assert result.status == "failed"
    assert "Planning failed" in result.summary


@pytest.mark.asyncio
async def test_empty_plan_returns_failed():
    """Planning that produces no steps results in 'failed'."""
    mock_agent = MagicMock()
    mock_agent.run = MagicMock(return_value=_mock_run(""))

    with (
        patch("timmy.agentic_loop._get_loop_agent", return_value=mock_agent),
        patch("timmy.agentic_loop._broadcast_progress", new_callable=AsyncMock),
    ):
        result = await run_agentic_loop("Do nothing")

    assert result.status == "failed"
    assert "no steps" in result.summary.lower()


@pytest.mark.asyncio
async def test_double_failure_marks_step_failed():
    """When both execution and adaptation fail, step status is 'failed'."""
    mock_agent = MagicMock()
    mock_agent.run = MagicMock(
        side_effect=[
            _mock_run("1. Do something"),
            Exception("Step failed"),
            Exception("Adaptation also failed"),
        ]
    )

    with (
        patch("timmy.agentic_loop._get_loop_agent", return_value=mock_agent),
        patch("timmy.agentic_loop._broadcast_progress", new_callable=AsyncMock),
    ):
        result = await run_agentic_loop("Try and fail", max_steps=1)

    assert len(result.steps) == 1
    assert result.steps[0].status == "failed"
    assert "Failed" in result.steps[0].result
    assert result.status == "partial"


@pytest.mark.asyncio
async def test_broadcast_progress_ignores_ws_errors():
    """_broadcast_progress swallows import/connection errors."""
    with patch(
        "timmy.agentic_loop.ws_manager",
        create=True,
        side_effect=ImportError("no ws"),
    ):
        # Should not raise
        await _broadcast_progress("test.event", {"key": "value"})


@pytest.mark.asyncio
async def test_broadcast_progress_sends_to_ws():
    """_broadcast_progress calls ws_manager.broadcast."""
    mock_ws = AsyncMock()
    with patch("infrastructure.ws_manager.handler.ws_manager", mock_ws):
        await _broadcast_progress("agentic.plan_ready", {"task_id": "abc"})
    mock_ws.broadcast.assert_awaited_once_with("agentic.plan_ready", {"task_id": "abc"})


@pytest.mark.asyncio
async def test_summary_counts_step_statuses():
    """Summary string includes completed, adapted, and failed counts."""
    mock_agent = MagicMock()
    mock_agent.run = MagicMock(
        side_effect=[
            _mock_run("1. A\n2. B\n3. C"),
            _mock_run("A done"),
            Exception("B broke"),
            _mock_run("B adapted"),
            Exception("C broke"),
            Exception("C adapt broke too"),
        ]
    )

    with (
        patch("timmy.agentic_loop._get_loop_agent", return_value=mock_agent),
        patch("timmy.agentic_loop._broadcast_progress", new_callable=AsyncMock),
    ):
        result = await run_agentic_loop("A B C", max_steps=3)

    assert "1 adapted" in result.summary
    assert "1 failed" in result.summary
    assert result.status == "partial"


@pytest.mark.asyncio
async def test_task_id_is_set():
    """Result has a non-empty task_id."""
    mock_agent = MagicMock()
    mock_agent.run = MagicMock(side_effect=[_mock_run("1. X"), _mock_run("done")])

    with (
        patch("timmy.agentic_loop._get_loop_agent", return_value=mock_agent),
        patch("timmy.agentic_loop._broadcast_progress", new_callable=AsyncMock),
    ):
        result = await run_agentic_loop("One step")

    assert result.task_id
    assert len(result.task_id) == 8


@pytest.mark.asyncio
async def test_total_duration_is_set():
    """Result.total_duration_ms is a non-negative integer."""
    mock_agent = MagicMock()
    mock_agent.run = MagicMock(side_effect=[_mock_run("1. X"), _mock_run("done")])

    with (
        patch("timmy.agentic_loop._get_loop_agent", return_value=mock_agent),
        patch("timmy.agentic_loop._broadcast_progress", new_callable=AsyncMock),
    ):
        result = await run_agentic_loop("Quick task")

    assert result.total_duration_ms >= 0


@pytest.mark.asyncio
async def test_agent_run_without_content_attr():
    """When agent.run() returns an object without .content, str() is used."""

    class PlanResult:
        def __str__(self):
            return "1. Only step"

    class StepResult:
        def __str__(self):
            return "Step result"

    mock_agent = MagicMock()
    mock_agent.run = MagicMock(side_effect=[PlanResult(), StepResult()])

    with (
        patch("timmy.agentic_loop._get_loop_agent", return_value=mock_agent),
        patch("timmy.agentic_loop._broadcast_progress", new_callable=AsyncMock),
    ):
        result = await run_agentic_loop("Fallback test", max_steps=1)

    assert len(result.steps) == 1


@pytest.mark.asyncio
async def test_adapted_step_calls_on_progress():
    """on_progress is called even for adapted steps."""
    events = []

    async def on_progress(desc, step, total):
        events.append((desc, step))

    mock_agent = MagicMock()
    mock_agent.run = MagicMock(
        side_effect=[
            _mock_run("1. Risky step"),
            Exception("boom"),
            _mock_run("Adapted result"),
        ]
    )

    with (
        patch("timmy.agentic_loop._get_loop_agent", return_value=mock_agent),
        patch("timmy.agentic_loop._broadcast_progress", new_callable=AsyncMock),
    ):
        await run_agentic_loop("Adapt test", max_steps=1, on_progress=on_progress)

    assert len(events) == 1
    assert "[Adapted]" in events[0][0]


@pytest.mark.asyncio
async def test_broadcast_called_for_each_phase():
    """_broadcast_progress is called for plan_ready, step_complete, and task_complete."""
    mock_agent = MagicMock()
    mock_agent.run = MagicMock(side_effect=[_mock_run("1. Do it"), _mock_run("Done")])

    broadcast = AsyncMock()
    with (
        patch("timmy.agentic_loop._get_loop_agent", return_value=mock_agent),
        patch("timmy.agentic_loop._broadcast_progress", broadcast),
    ):
        await run_agentic_loop("One step task", max_steps=1)

    event_names = [call.args[0] for call in broadcast.call_args_list]
    assert "agentic.plan_ready" in event_names
    assert "agentic.step_complete" in event_names
    assert "agentic.task_complete" in event_names

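The `_parse_steps` behaviour that TestParseSteps pins down for this module (numbered lines kept, numbering stripped, leading whitespace tolerated, blank input rejected) fits in one regex. This is an illustrative sketch only, not the real `timmy.agentic_loop` parser, which may handle further cases:

```python
import re


def parse_steps(text: str) -> list[str]:
    """Extract numbered plan steps ("1. Do X") from a planning response.

    Hypothetical re-implementation: keeps only numbered lines, strips the
    numbering and surrounding whitespace, and returns [] for blank input.
    """
    steps = []
    for line in text.splitlines():
        # "\s*" tolerates indented numbering; ".*\S" drops trailing whitespace
        match = re.match(r"\s*\d+\.\s+(.*\S)", line)
        if match:
            steps.append(match.group(1))
    return steps
```

Filtering on the numbered pattern is what makes surrounding chatter like "Here is the plan:" and "Good luck!" fall away without any extra stripping pass.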
55
tests/test_api_v1.py
Normal file
@@ -0,0 +1,55 @@
import sys
from pathlib import Path

# Make src importable regardless of where the repo is checked out
# (a hardcoded absolute path would break on any other machine)
src_path = str(Path(__file__).resolve().parent.parent / "src")
if src_path not in sys.path:
    sys.path.insert(0, src_path)

from fastapi.testclient import TestClient  # noqa: E402

try:
    from dashboard.app import app  # noqa: E402

    print("✓ Successfully imported dashboard.app")
except ImportError as e:
    print(f"✗ Failed to import dashboard.app: {e}")
    sys.exit(1)

client = TestClient(app)


def test_v1_status():
    response = client.get("/api/v1/status")
    assert response.status_code == 200
    data = response.json()
    assert "timmy" in data
    assert "model" in data
    assert "uptime" in data


def test_v1_chat_history():
    response = client.get("/api/v1/chat/history")
    assert response.status_code == 200
    data = response.json()
    assert "messages" in data


def test_v1_upload_fail():
    # Test without file
    response = client.post("/api/v1/upload")
    assert response.status_code == 422  # Unprocessable Entity (missing file)


if __name__ == "__main__":
    print("Running API v1 tests...")
    try:
        test_v1_status()
        print("✓ Status test passed")
        test_v1_chat_history()
        print("✓ History test passed")
        test_v1_upload_fail()
        print("✓ Upload failure test passed")
        print("All tests passed!")
    except Exception as e:
        print(f"Test failed: {e}")
        sys.exit(1)
@@ -49,6 +49,34 @@ class TestConfigLazyValidation:
        # Should not raise
        validate_startup(force=True)

    def test_validate_startup_exits_on_cors_wildcard_in_production(self):
        """validate_startup() should exit in production when CORS has wildcard."""
        from config import settings, validate_startup

        with (
            patch.object(settings, "timmy_env", "production"),
            patch.object(settings, "l402_hmac_secret", "test-secret-hex-value-32"),
            patch.object(settings, "l402_macaroon_secret", "test-macaroon-hex-value-32"),
            patch.object(settings, "cors_origins", ["*"]),
            pytest.raises(SystemExit),
        ):
            validate_startup(force=True)

    def test_validate_startup_warns_cors_wildcard_in_dev(self):
        """validate_startup() should warn in dev when CORS has wildcard."""
        from config import settings, validate_startup

        with (
            patch.object(settings, "timmy_env", "development"),
            patch.object(settings, "cors_origins", ["*"]),
            patch("config._startup_logger") as mock_logger,
        ):
            validate_startup(force=True)
            mock_logger.warning.assert_any_call(
                "SEC: CORS_ORIGINS contains wildcard '*' — "
                "restrict to explicit origins before deploying to production."
            )

    def test_validate_startup_skips_in_test_mode(self):
        """validate_startup() should be a no-op in test mode."""
        from config import validate_startup

Some files were not shown because too many files have changed in this diff.