Compare commits


1 Commit

Author: kimi
SHA1: 3de7db770f
Date: 2026-03-19 19:41:28 -04:00

refactor: break up think_once() into _generate_thought() and _finalize_thought()

Extracts the dedup retry loop and post-hook/broadcast pipeline into
focused helpers, reducing think_once() from 118 lines to ~20.

Fixes #513

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
12 changed files with 162 additions and 75 deletions

View File

@@ -82,6 +82,7 @@ cp .env.example .env
| `OLLAMA_MODEL` | `qwen3:30b` | Primary model for reasoning and tool calling. Fallback: `llama3.1:8b-instruct` |
| `DEBUG` | `false` | Enable `/docs` and `/redoc` |
| `TIMMY_MODEL_BACKEND` | `ollama` | `ollama` \| `airllm` \| `auto` |
+| `AIRLLM_MODEL_SIZE` | `70b` | `8b` \| `70b` \| `405b` |
| `L402_HMAC_SECRET` | *(default — change in prod)* | HMAC signing key for macaroons |
| `L402_MACAROON_SECRET` | *(default — change in prod)* | Macaroon secret |
| `LIGHTNING_BACKEND` | `mock` | `mock` (production-ready) \| `lnd` (scaffolded, not yet functional) |
@@ -176,6 +177,7 @@ timmy chat "Explain self-custody" --backend airllm --model-size 70b
Or set once in `.env`:
```bash
TIMMY_MODEL_BACKEND=auto
+AIRLLM_MODEL_SIZE=70b
```
| Flag | Parameters | RAM needed |
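The `auto` backend mode documented above deserves a concrete reading. A minimal sketch of how resolution might work, assuming a hypothetical `resolve_backend()` helper and Ollama's default port (the real logic lives in `backends.py`):

```python
# Hypothetical sketch, not the actual backends.py. Assumes "auto" prefers
# Ollama whenever a local server answers on its default port.
import os
import urllib.request


def resolve_backend() -> str:
    """Pick a backend per TIMMY_MODEL_BACKEND, probing for 'auto'."""
    choice = os.getenv("TIMMY_MODEL_BACKEND", "ollama")
    if choice != "auto":
        return choice  # explicit "ollama" or "airllm"
    try:
        # Ollama's real tag-listing endpoint; cheap reachability probe.
        urllib.request.urlopen("http://localhost:11434/api/tags", timeout=2)
        return "ollama"
    except OSError:
        return "airllm"


print(resolve_backend())  # e.g. "ollama" when a local server is up
```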

View File

@@ -111,7 +111,7 @@ pytest: error: unrecognized arguments: -n --dist worksteal
### 4a. Missing Error-Path Testing
Many modules have happy-path tests but lack coverage for:
-- **Graceful degradation paths**: The architecture mandates graceful degradation when Ollama/Redis are unavailable, but most fallback paths are untested (e.g., `cascade.py` lines 563–655)
+- **Graceful degradation paths**: The architecture mandates graceful degradation when Ollama/Redis/AirLLM are unavailable, but most fallback paths are untested (e.g., `cascade.py` lines 563–655)
- **`brain/client.py`**: Only 14.8% covered — connection failures, retries, and error handling are untested
- **`infrastructure/error_capture.py`**: 0% — the error capture system itself has no tests
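To make the gap concrete, an error-path test for the degradation case might look like the sketch below; `route()` and `call_ollama` are assumed names, not the audited module's actual API:

```python
# Hypothetical error-path test. Assumes cascade exposes route() and a
# patchable call_ollama(); adjust names to the real module before use.
from unittest.mock import patch

import pytest


@pytest.mark.asyncio
async def test_route_degrades_when_ollama_down():
    from router import cascade  # assumed import path

    with patch.object(cascade, "call_ollama", side_effect=ConnectionError("refused")):
        result = await cascade.route("hello")  # should fall back, not raise
    assert result is not None
```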

View File

@@ -63,11 +63,11 @@ $ python -m pytest -q
## 2. Feature-by-Feature Audit
### 2.1 Timmy Agent
-**Claimed**: Agno-powered conversational agent backed by Ollama, SQLite memory
+**Claimed**: Agno-powered conversational agent backed by Ollama, AirLLM for 70B-405B models, SQLite memory
**Verdict: REAL & FUNCTIONAL**
- `src/timmy/agent.py` (79 lines): Creates a genuine `agno.Agent` with Ollama model, SQLite persistence, tools, and system prompt
-- Backend selection (`backends.py`) implements real Ollama switching with Apple Silicon detection
+- Backend selection (`backends.py`) implements real Ollama/AirLLM switching with Apple Silicon detection
- CLI (`cli.py`) provides working `timmy chat`, `timmy think`, `timmy status` commands
- Approval workflow (`approvals.py`) implements real human-in-the-loop with SQLite-backed state
- Briefing system (`briefing.py`) generates real scheduled briefings
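For reference, the Apple Silicon detection mentioned above can be as small as this sketch (the helper name is illustrative, not the repo's API):

```python
# Illustrative sketch of Apple Silicon detection as backends.py might do it.
import platform


def is_apple_silicon() -> bool:
    """True on macOS running an arm64 (M-series) CPU."""
    return platform.system() == "Darwin" and platform.machine() == "arm64"


# e.g. prefer Metal-friendly local defaults on M-series Macs
print(is_apple_silicon())
```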

View File

@@ -100,7 +100,7 @@ Bitcoin Lightning economics. No cloud AI.
make install && make dev → http://localhost:8000
## What's Here
-- Timmy Agent (Ollama)
+- Timmy Agent (Ollama/AirLLM)
- Mission Control Dashboard (FastAPI + HTMX)
- Swarm Coordinator (multi-agent auctions)
- Lightning Payments (L402 gating)

View File

@@ -6,7 +6,7 @@ This document outlines the security architecture, threat model, and recent audit
Timmy Time is built on the principle of **AI Sovereignty**. Security is not just about preventing unauthorized access, but about ensuring the user maintains full control over their data and AI models.
-1. **Local-First Execution:** All primary AI inference (Ollama) runs on localhost. No data is sent to third-party cloud providers unless explicitly configured (e.g., Grok).
+1. **Local-First Execution:** All primary AI inference (Ollama/AirLLM) runs on localhost. No data is sent to third-party cloud providers unless explicitly configured (e.g., Grok).
2. **Air-Gapped Ready:** The system is designed to run without an internet connection once dependencies and models are cached.
3. **Secret Management:** Secrets are never hard-coded. They are managed via Pydantic-settings from `.env` or environment variables.
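A minimal pydantic-settings sketch of principle 3, with field names mirroring the env vars in the config table; the class shown is illustrative, not the repo's actual `config.py`:

```python
# Minimal pydantic-settings sketch: secrets come from .env or the
# environment, never from source. Field names map to L402_HMAC_SECRET etc.
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env")

    l402_hmac_secret: str = "dev-only-change-in-prod"
    l402_macaroon_secret: str = "dev-only-change-in-prod"


settings = Settings()  # reads .env / environment; nothing hard-coded
```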

View File

@@ -59,7 +59,7 @@ already works.
| LLM routing | CascadeRouter with circuit breakers | Good |
| Memory tiers | Hot (MEMORY.md) → Vault (markdown) → Semantic (SQLite+vectors) | Good foundation |
| Module boundaries | 8 packages with clear responsibilities | Good |
-| Multi-backend LLM | Ollama/Grok/Claude with auto-detection | Good |
+| Multi-backend LLM | Ollama/AirLLM/Grok/Claude with auto-detection | Good |
| Security posture | CSRF, security headers, secret validation, telemetry off | Good |
### Architecture Diagram (Current State)
@@ -473,7 +473,7 @@ The proposal enforces a strict 2,000-line limit for `src/timmy/`:
| `workflow_engine.py` | ~200 | YAML loader, step executor, state machine |
| `tool_registry.py` | ~200 | Dynamic tool discovery, spawn, health check |
| `memory_system.py` | ~300 | Hot/Vault/Semantic memory interface (existing) |
-| `backends.py` | ~200 | Ollama/Claude/Grok adapters |
+| `backends.py` | ~200 | Ollama/AirLLM/Claude/Grok adapters |
| `config.py` | ~150 | Pydantic-settings (existing) |
| `lightning_wallet.py` | ~200 | L402 handling, invoice generation, balance |
| `utils/` | ~300 | Shared helpers, logging, serialization |

View File

@@ -4,6 +4,7 @@
Proposed
## Context
Currently, the Timmy agent (`src/timmy/agent.py`) uses `src/timmy/backends.py` which provides a simple abstraction over Ollama and AirLLM. However, this lacks:
- Automatic failover between multiple LLM providers
- Circuit breaker pattern for failing providers
- Cost and latency tracking per provider
@@ -18,13 +19,14 @@ Integrate the Cascade Router as the primary LLM routing layer for Timmy, replaci
### Current Flow
```
-User Request → Timmy Agent → backends.py → Ollama
+User Request → Timmy Agent → backends.py → Ollama/AirLLM
```
### Proposed Flow
```
User Request → Timmy Agent → Cascade Router → Provider 1 (Ollama)
                                                    ↓ (if fail)
                                              Provider 2 (Local AirLLM)
                                                    ↓ (if fail)
                                              Provider 3 (API - optional)
@@ -39,6 +41,7 @@ User Request → Timmy Agent → Cascade Router → Provider 1 (Ollama)
- Expose provider status in agent responses
2. **Cascade Router** (`src/router/cascade.py`)
- Already supports: Ollama, OpenAI, Anthropic, AirLLM
- Already has: Circuit breakers, metrics, failover logic
- Add: Integration with existing `src/timmy/prompts.py`
@@ -54,6 +57,7 @@ User Request → Timmy Agent → Cascade Router → Provider 1 (Ollama)
### Provider Priority Order
1. **Ollama (local)** - Priority 1, always try first
+2. **AirLLM (local)** - Priority 2, if Ollama unavailable
3. **API providers** - Priority 3+, only if configured
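A compact sketch of the priority-plus-breaker behavior described above; thresholds, names, and the synchronous shape are assumptions, not the actual `CascadeRouter` internals:

```python
# Illustrative failover loop with a simple circuit breaker. Names and
# thresholds are assumptions, not the actual CascadeRouter internals.
import time


class Breaker:
    def __init__(self, threshold: int = 3, cooldown: float = 60.0) -> None:
        self.failures = 0
        self.threshold = threshold
        self.cooldown = cooldown
        self.opened_at = 0.0

    def available(self) -> bool:
        # Closed breaker, or an open one whose cooldown has elapsed.
        if self.failures < self.threshold:
            return True
        return time.time() - self.opened_at > self.cooldown

    def record(self, ok: bool) -> None:
        if ok:
            self.failures = 0
            return
        self.failures += 1
        if self.failures >= self.threshold:
            self.opened_at = time.time()


def route(prompt: str, providers: list[tuple]) -> str:
    """Try providers in priority order, skipping tripped breakers."""
    # providers: [(ollama_call, Breaker()), (airllm_call, Breaker()), ...]
    for call, breaker in providers:
        if not breaker.available():
            continue
        try:
            reply = call(prompt)
        except Exception:
            breaker.record(ok=False)
            continue
        breaker.record(ok=True)
        return reply
    raise RuntimeError("all providers unavailable")
```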
### Data Flow

View File

@@ -1 +1 @@
"""Timmy — Core AI agent (Ollama backend, CLI, prompts)."""
"""Timmy — Core AI agent (Ollama/AirLLM backends, CLI, prompts)."""

View File

@@ -232,29 +232,58 @@ class ThinkingEngine:
            return False  # Disabled — never idle
        return datetime.now(UTC) - self._last_input_time > timedelta(minutes=timeout)

-    def _build_thinking_context(self) -> tuple[str, str, list["Thought"]]:
-        """Assemble the context needed for a thinking cycle.
+    async def think_once(self, prompt: str | None = None) -> Thought | None:
+        """Execute one thinking cycle.
+
+        Args:
+            prompt: Optional custom seed prompt. When provided, overrides
+                the random seed selection and uses "prompted" as the
+                seed type — useful for journal prompts from the CLI.
+
+        1. Gather a seed context (or use the custom prompt)
+        2. Build a prompt with continuity from recent thoughts
+        3. Call the agent
+        4. Store the thought
+        5. Log the event and broadcast via WebSocket
+        """
+        if not settings.thinking_enabled:
+            return None
+
+        # Skip idle periods — don't count internal processing as thoughts
+        if not prompt and self._is_idle():
+            logger.debug(
+                "Thinking paused — no user input for %d minutes",
+                settings.thinking_idle_timeout_minutes,
+            )
+            return None
+
+        content, seed_type = await self._generate_thought(prompt)
+        if not content:
+            return None
+
+        thought = self._store_thought(content, seed_type)
+        self._last_thought_id = thought.id
+        await self._finalize_thought(thought)
+        return thought
+
+    async def _generate_thought(self, prompt: str | None = None) -> tuple[str | None, str]:
+        """Generate novel thought content via the dedup retry loop.
+
+        Gathers context, builds the LLM prompt, calls the agent, and
+        retries with a fresh seed if the result is too similar to recent
+        thoughts.

        Returns:
-            (memory_context, system_context, recent_thoughts)
+            A (content, seed_type) tuple. *content* is ``None`` when the
+            cycle should be skipped (agent failure, empty response, or
+            all retries exhausted).
        """
        memory_context = self._load_memory_context()
        system_context = self._gather_system_snapshot()
        recent_thoughts = self.get_recent_thoughts(limit=5)
-        return memory_context, system_context, recent_thoughts
-
-    async def _generate_novel_thought(
-        self,
-        prompt: str | None,
-        memory_context: str,
-        system_context: str,
-        recent_thoughts: list["Thought"],
-    ) -> tuple[str | None, str]:
-        """Run the dedup-retry loop to produce a novel thought.
-
-        Returns:
-            (content, seed_type) — content is None if no novel thought produced.
-        """
        content: str | None = None
        seed_type: str = "freeform"
        for attempt in range(self._MAX_DEDUP_RETRIES + 1):
@@ -287,7 +316,7 @@ class ThinkingEngine:
            # Dedup: reject thoughts too similar to recent ones
            if not self._is_too_similar(content, recent_thoughts):
-                return content, seed_type  # Good — novel thought
+                break  # Good — novel thought

            if attempt < self._MAX_DEDUP_RETRIES:
                logger.info(
@@ -295,6 +324,7 @@ class ThinkingEngine:
                    attempt + 1,
                    self._MAX_DEDUP_RETRIES + 1,
                )
+                content = None  # Will retry
            else:
                logger.warning(
                    "Thought still repetitive after %d retries, discarding",
@@ -302,10 +332,10 @@ class ThinkingEngine:
                )
                return None, seed_type

-        return None, seed_type
+        return content, seed_type

-    async def _process_thinking_result(self, thought: "Thought") -> None:
-        """Run all post-hooks after a thought is stored."""
+    async def _finalize_thought(self, thought: Thought) -> None:
+        """Run post-hooks, log, journal, and broadcast a stored thought."""
        self._maybe_check_memory()
        await self._maybe_distill()
        await self._maybe_file_issues()
@@ -316,54 +346,12 @@ class ThinkingEngine:
        self._write_journal(thought)
        await self._broadcast(thought)

-    async def think_once(self, prompt: str | None = None) -> Thought | None:
-        """Execute one thinking cycle.
-
-        Args:
-            prompt: Optional custom seed prompt. When provided, overrides
-                the random seed selection and uses "prompted" as the
-                seed type — useful for journal prompts from the CLI.
-
-        1. Gather a seed context (or use the custom prompt)
-        2. Build a prompt with continuity from recent thoughts
-        3. Call the agent
-        4. Store the thought
-        5. Log the event and broadcast via WebSocket
-        """
-        if not settings.thinking_enabled:
-            return None
-
-        # Skip idle periods — don't count internal processing as thoughts
-        if not prompt and self._is_idle():
-            logger.debug(
-                "Thinking paused — no user input for %d minutes",
-                settings.thinking_idle_timeout_minutes,
-            )
-            return None
-
-        memory_context, system_context, recent_thoughts = self._build_thinking_context()
-        content, seed_type = await self._generate_novel_thought(
-            prompt,
-            memory_context,
-            system_context,
-            recent_thoughts,
-        )
-        if not content:
-            return None
-
-        thought = self._store_thought(content, seed_type)
-        self._last_thought_id = thought.id
-        await self._process_thinking_result(thought)
-
        logger.info(
            "Thought [%s] (%s): %s",
            thought.id[:8],
-            seed_type,
+            thought.seed_type,
            thought.content[:80],
        )
-        return thought

    def get_recent_thoughts(self, limit: int = 20) -> list[Thought]:
        """Retrieve the most recent thoughts."""

View File

@@ -10,7 +10,7 @@ Categories:
M3xx iOS keyboard & zoom prevention
M4xx HTMX robustness (double-submit, sync)
M5xx Safe-area / notch support
-M6xx Backend interface contract
+M6xx AirLLM backend interface contract
"""
import re
@@ -208,7 +208,7 @@ def test_M505_dvh_units_used():
assert "dvh" in css
-# ── M6xx — Backend interface contract ──────────────────────────────────
+# ── M6xx — AirLLM backend interface contract ──────────────────────────────────
def test_M601_airllm_agent_has_run_method():

View File

@@ -1,4 +1,4 @@
"""Tests for src/timmy/backends.py — backend helpers."""
"""Tests for src/timmy/backends.py — AirLLM wrapper and helpers."""
import sys
from unittest.mock import MagicMock, patch

View File

@@ -250,6 +250,99 @@ def test_continuity_includes_recent(tmp_path):
# ---------------------------------------------------------------------------

+# ---------------------------------------------------------------------------
+# _generate_thought helper
+# ---------------------------------------------------------------------------
+
+@pytest.mark.asyncio
+async def test_generate_thought_returns_content_and_seed_type(tmp_path):
+    """_generate_thought should return (content, seed_type) on success."""
+    from timmy.thinking import SEED_TYPES
+
+    engine = _make_engine(tmp_path)
+    with patch.object(engine, "_call_agent", return_value="A novel idea."):
+        content, seed_type = await engine._generate_thought()
+    assert content == "A novel idea."
+    assert seed_type in SEED_TYPES
+
+
+@pytest.mark.asyncio
+async def test_generate_thought_with_prompt(tmp_path):
+    """_generate_thought(prompt=...) should use 'prompted' seed type."""
+    engine = _make_engine(tmp_path)
+    with patch.object(engine, "_call_agent", return_value="A prompted idea."):
+        content, seed_type = await engine._generate_thought(prompt="Reflect on joy")
+    assert content == "A prompted idea."
+    assert seed_type == "prompted"
+
+
+@pytest.mark.asyncio
+async def test_generate_thought_returns_none_on_agent_failure(tmp_path):
+    """_generate_thought should return (None, ...) when the agent fails."""
+    engine = _make_engine(tmp_path)
+    with patch.object(engine, "_call_agent", side_effect=Exception("Ollama down")):
+        content, seed_type = await engine._generate_thought()
+    assert content is None
+
+
+@pytest.mark.asyncio
+async def test_generate_thought_returns_none_on_empty(tmp_path):
+    """_generate_thought should return (None, ...) when agent returns empty."""
+    engine = _make_engine(tmp_path)
+    with patch.object(engine, "_call_agent", return_value=" "):
+        content, seed_type = await engine._generate_thought()
+    assert content is None
+
+
+# ---------------------------------------------------------------------------
+# _finalize_thought helper
+# ---------------------------------------------------------------------------
+
+@pytest.mark.asyncio
+async def test_finalize_thought_calls_all_hooks(tmp_path):
+    """_finalize_thought should call all post-hooks, log, journal, and broadcast."""
+    engine = _make_engine(tmp_path)
+    thought = engine._store_thought("Test finalize.", "freeform")
+    with (
+        patch.object(engine, "_maybe_check_memory") as m_mem,
+        patch.object(engine, "_maybe_distill", new_callable=AsyncMock) as m_distill,
+        patch.object(engine, "_maybe_file_issues", new_callable=AsyncMock) as m_issues,
+        patch.object(engine, "_check_workspace", new_callable=AsyncMock) as m_ws,
+        patch.object(engine, "_maybe_check_memory_status") as m_status,
+        patch.object(engine, "_update_memory") as m_update,
+        patch.object(engine, "_log_event") as m_log,
+        patch.object(engine, "_write_journal") as m_journal,
+        patch.object(engine, "_broadcast", new_callable=AsyncMock) as m_broadcast,
+    ):
+        await engine._finalize_thought(thought)
+    m_mem.assert_called_once()
+    m_distill.assert_awaited_once()
+    m_issues.assert_awaited_once()
+    m_ws.assert_awaited_once()
+    m_status.assert_called_once()
+    m_update.assert_called_once_with(thought)
+    m_log.assert_called_once_with(thought)
+    m_journal.assert_called_once_with(thought)
+    m_broadcast.assert_awaited_once_with(thought)
+

# ---------------------------------------------------------------------------
# think_once (async)
# ---------------------------------------------------------------------------

@pytest.mark.asyncio
async def test_think_once_stores_thought(tmp_path):
    """think_once should store a thought in the DB."""