Compare commits


1 Commit

Author SHA1 Message Date
Timmy
c19be679f0 refactor: remove airllm references from docs, docstrings, and README (#486) 2026-03-19 19:47:33 -04:00
12 changed files with 84 additions and 252 deletions

View File

@@ -82,7 +82,6 @@ cp .env.example .env
| `OLLAMA_MODEL` | `qwen3:30b` | Primary model for reasoning and tool calling. Fallback: `llama3.1:8b-instruct` |
| `DEBUG` | `false` | Enable `/docs` and `/redoc` |
| `TIMMY_MODEL_BACKEND` | `ollama` | `ollama` \| `airllm` \| `auto` |
-| `AIRLLM_MODEL_SIZE` | `70b` | `8b` \| `70b` \| `405b` |
| `L402_HMAC_SECRET` | *(default — change in prod)* | HMAC signing key for macaroons |
| `L402_MACAROON_SECRET` | *(default — change in prod)* | Macaroon secret |
| `LIGHTNING_BACKEND` | `mock` | `mock` (production-ready) \| `lnd` (scaffolded, not yet functional) |
@@ -177,7 +176,6 @@ timmy chat "Explain self-custody" --backend airllm --model-size 70b
Or set once in `.env`:
```bash
TIMMY_MODEL_BACKEND=auto
-AIRLLM_MODEL_SIZE=70b
```
| Flag | Parameters | RAM needed |

View File

@@ -111,7 +111,7 @@ pytest: error: unrecognized arguments: -n --dist worksteal
### 4a. Missing Error-Path Testing
Many modules have happy-path tests but lack coverage for:
-- **Graceful degradation paths**: The architecture mandates graceful degradation when Ollama/Redis/AirLLM are unavailable, but most fallback paths are untested (e.g., `cascade.py` lines 563–655)
+- **Graceful degradation paths**: The architecture mandates graceful degradation when Ollama/Redis are unavailable, but most fallback paths are untested (e.g., `cascade.py` lines 563–655)
- **`brain/client.py`**: Only 14.8% covered — connection failures, retries, and error handling are untested
- **`infrastructure/error_capture.py`**: 0% — the error capture system itself has no tests

View File

@@ -63,11 +63,11 @@ $ python -m pytest -q
## 2. Feature-by-Feature Audit
### 2.1 Timmy Agent
-**Claimed**: Agno-powered conversational agent backed by Ollama, AirLLM for 70B-405B models, SQLite memory
+**Claimed**: Agno-powered conversational agent backed by Ollama, SQLite memory
**Verdict: REAL & FUNCTIONAL**
- `src/timmy/agent.py` (79 lines): Creates a genuine `agno.Agent` with Ollama model, SQLite persistence, tools, and system prompt
-- Backend selection (`backends.py`) implements real Ollama/AirLLM switching with Apple Silicon detection
+- Backend selection (`backends.py`) implements real Ollama switching with Apple Silicon detection
- CLI (`cli.py`) provides working `timmy chat`, `timmy think`, `timmy status` commands
- Approval workflow (`approvals.py`) implements real human-in-the-loop with SQLite-backed state
- Briefing system (`briefing.py`) generates real scheduled briefings

View File

@@ -100,7 +100,7 @@ Bitcoin Lightning economics. No cloud AI.
make install && make dev → http://localhost:8000
## What's Here
-- Timmy Agent (Ollama/AirLLM)
+- Timmy Agent (Ollama)
- Mission Control Dashboard (FastAPI + HTMX)
- Swarm Coordinator (multi-agent auctions)
- Lightning Payments (L402 gating)

View File

@@ -6,7 +6,7 @@ This document outlines the security architecture, threat model, and recent audit
Timmy Time is built on the principle of **AI Sovereignty**. Security is not just about preventing unauthorized access, but about ensuring the user maintains full control over their data and AI models.
-1. **Local-First Execution:** All primary AI inference (Ollama/AirLLM) runs on localhost. No data is sent to third-party cloud providers unless explicitly configured (e.g., Grok).
+1. **Local-First Execution:** All primary AI inference (Ollama) runs on localhost. No data is sent to third-party cloud providers unless explicitly configured (e.g., Grok).
2. **Air-Gapped Ready:** The system is designed to run without an internet connection once dependencies and models are cached.
3. **Secret Management:** Secrets are never hard-coded. They are managed via Pydantic-settings from `.env` or environment variables.

View File

@@ -59,7 +59,7 @@ already works.
| LLM routing | CascadeRouter with circuit breakers | Good |
| Memory tiers | Hot (MEMORY.md) → Vault (markdown) → Semantic (SQLite+vectors) | Good foundation |
| Module boundaries | 8 packages with clear responsibilities | Good |
-| Multi-backend LLM | Ollama/AirLLM/Grok/Claude with auto-detection | Good |
+| Multi-backend LLM | Ollama/Grok/Claude with auto-detection | Good |
| Security posture | CSRF, security headers, secret validation, telemetry off | Good |
### Architecture Diagram (Current State)
@@ -473,7 +473,7 @@ The proposal enforces a strict 2,000-line limit for `src/timmy/`:
| `workflow_engine.py` | ~200 | YAML loader, step executor, state machine |
| `tool_registry.py` | ~200 | Dynamic tool discovery, spawn, health check |
| `memory_system.py` | ~300 | Hot/Vault/Semantic memory interface (existing) |
-| `backends.py` | ~200 | Ollama/AirLLM/Claude/Grok adapters |
+| `backends.py` | ~200 | Ollama/Claude/Grok adapters |
| `config.py` | ~150 | Pydantic-settings (existing) |
| `lightning_wallet.py` | ~200 | L402 handling, invoice generation, balance |
| `utils/` | ~300 | Shared helpers, logging, serialization |

View File

@@ -4,7 +4,6 @@
Proposed
## Context
-Currently, the Timmy agent (`src/timmy/agent.py`) uses `src/timmy/backends.py` which provides a simple abstraction over Ollama and AirLLM. However, this lacks:
- Automatic failover between multiple LLM providers
- Circuit breaker pattern for failing providers
- Cost and latency tracking per provider
@@ -19,14 +18,13 @@ Integrate the Cascade Router as the primary LLM routing layer for Timmy, replaci
### Current Flow
```
-User Request → Timmy Agent → backends.py → Ollama/AirLLM
+User Request → Timmy Agent → backends.py → Ollama
```
### Proposed Flow
```
User Request → Timmy Agent → Cascade Router → Provider 1 (Ollama)
↓ (if fail)
-Provider 2 (Local AirLLM)
↓ (if fail)
Provider 3 (API - optional)
@@ -41,7 +39,6 @@ User Request → Timmy Agent → Cascade Router → Provider 1 (Ollama)
- Expose provider status in agent responses
2. **Cascade Router** (`src/router/cascade.py`)
-- Already supports: Ollama, OpenAI, Anthropic, AirLLM
- Already has: Circuit breakers, metrics, failover logic
- Add: Integration with existing `src/timmy/prompts.py`
@@ -57,7 +54,6 @@ User Request → Timmy Agent → Cascade Router → Provider 1 (Ollama)
### Provider Priority Order
1. **Ollama (local)** - Priority 1, always try first
-2. **AirLLM (local)** - Priority 2, if Ollama unavailable
3. **API providers** - Priority 3+, only if configured
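The priority order above amounts to a try-in-order failover loop. A minimal sketch (illustrative — the real CascadeRouter in `src/router/cascade.py` adds circuit breakers, metrics, and per-provider cost tracking):

```python
from collections.abc import Callable

# Illustrative failover loop over prioritized providers. The real
# CascadeRouter (src/router/cascade.py) adds circuit breakers and metrics.
def route(prompt: str,
          providers: list[tuple[str, Callable[[str], str]]]) -> tuple[str, str]:
    """Try providers in priority order; return (provider_name, response)."""
    errors: list[str] = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:
            # Record the failure and cascade to the next provider.
            errors.append(f"{name}: {exc}")
    raise RuntimeError("All providers failed: " + "; ".join(errors))
```

With Ollama first and an optional API provider last, a local Ollama outage cascades to the API tier only when one is configured, matching the priority table.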
### Data Flow

View File

@@ -1 +1 @@
-"""Timmy — Core AI agent (Ollama/AirLLM backends, CLI, prompts)."""
+"""Timmy — Core AI agent (Ollama backend, CLI, prompts)."""

View File

@@ -197,113 +197,6 @@ def _resolve_backend(requested: str | None) -> str:
return "ollama"
def _build_tools_list(use_tools: bool, skip_mcp: bool) -> list:
"""Build the Agno tools list (toolkit + optional MCP servers).
Args:
use_tools: Whether the model supports tool calling.
skip_mcp: If True, omit MCP tool servers.
Returns:
List of Toolkit / MCPTools, possibly empty.
"""
if not use_tools:
logger.info("Tools disabled (model too small for reliable tool calling)")
return []
toolkit = create_full_toolkit()
tools_list: list = [toolkit]
# Add MCP tool servers (lazy-connected on first arun()).
# Skipped when skip_mcp=True — MCP's stdio transport uses anyio cancel
# scopes that conflict with asyncio background task cancellation (#72).
if not skip_mcp:
try:
from timmy.mcp_tools import create_filesystem_mcp_tools, create_gitea_mcp_tools
gitea_mcp = create_gitea_mcp_tools()
if gitea_mcp:
tools_list.append(gitea_mcp)
fs_mcp = create_filesystem_mcp_tools()
if fs_mcp:
tools_list.append(fs_mcp)
except Exception as exc:
logger.debug("MCP tools unavailable: %s", exc)
return tools_list
def _build_prompt(use_tools: bool, session_id: str) -> str:
"""Build the full system prompt with optional memory context.
Args:
use_tools: Whether tools are enabled (affects prompt tier and context budget).
session_id: Session identifier for the prompt.
Returns:
Complete system prompt string.
"""
base_prompt = get_system_prompt(tools_enabled=use_tools, session_id=session_id)
try:
from timmy.memory_system import memory_system
memory_context = memory_system.get_system_context()
if memory_context:
# Truncate if too long — smaller budget for small models
# since the expanded prompt (roster, guardrails) uses more tokens
max_context = 2000 if not use_tools else 8000
if len(memory_context) > max_context:
memory_context = memory_context[:max_context] + "\n... [truncated]"
return (
f"{base_prompt}\n\n"
f"## GROUNDED CONTEXT (verified sources — cite when using)\n\n"
f"{memory_context}"
)
except Exception as exc:
logger.warning("Failed to load memory context: %s", exc)
return base_prompt
def _create_ollama_agent(
model_name: str,
db_file: str,
tools_list: list,
full_prompt: str,
use_tools: bool,
) -> Agent:
"""Construct the Agno Agent with an Ollama model.
Args:
model_name: Resolved Ollama model name.
db_file: SQLite file for conversation memory.
tools_list: Pre-built tools list (may be empty).
full_prompt: Complete system prompt.
use_tools: Whether tools are enabled.
Returns:
Configured Agno Agent.
"""
model_kwargs = {}
if settings.ollama_num_ctx > 0:
model_kwargs["options"] = {"num_ctx": settings.ollama_num_ctx}
return Agent(
name="Agent",
model=Ollama(id=model_name, host=settings.ollama_url, timeout=300, **model_kwargs),
db=SqliteDb(db_file=db_file),
description=full_prompt,
add_history_to_context=True,
num_history_runs=20,
markdown=False,
tools=tools_list if tools_list else None,
tool_call_limit=settings.max_agent_steps if use_tools else None,
telemetry=settings.telemetry_enabled,
)
def create_timmy(
db_file: str = "timmy.db",
backend: str | None = None,
@@ -345,12 +238,16 @@ def create_timmy(
-return TimmyAirLLMAgent(model_size=size)
# Default: Ollama via Agno.
# Resolve model with automatic pulling and fallback
model_name, is_fallback = _resolve_model_with_fallback(
requested_model=None,
require_vision=False,
auto_pull=True,
)
# If Ollama is completely unreachable, fail loudly.
# Sovereignty: never silently send data to a cloud API.
# Use --backend claude explicitly if you want cloud inference.
if not _check_model_available(model_name):
logger.error(
"Ollama unreachable and no local models available. "
@@ -361,9 +258,74 @@ def create_timmy(
logger.info("Using fallback model %s (requested was unavailable)", model_name)
use_tools = _model_supports_tools(model_name)
tools_list = _build_tools_list(use_tools, skip_mcp)
full_prompt = _build_prompt(use_tools, session_id)
agent = _create_ollama_agent(model_name, db_file, tools_list, full_prompt, use_tools)
# Conditionally include tools — small models get none
toolkit = create_full_toolkit() if use_tools else None
if not use_tools:
logger.info("Tools disabled for model %s (too small for reliable tool calling)", model_name)
# Build the tools list — Agno accepts a list of Toolkit / MCPTools
tools_list: list = []
if toolkit:
tools_list.append(toolkit)
# Add MCP tool servers (lazy-connected on first arun()).
# Skipped when skip_mcp=True — MCP's stdio transport uses anyio cancel
# scopes that conflict with asyncio background task cancellation (#72).
if use_tools and not skip_mcp:
try:
from timmy.mcp_tools import create_filesystem_mcp_tools, create_gitea_mcp_tools
gitea_mcp = create_gitea_mcp_tools()
if gitea_mcp:
tools_list.append(gitea_mcp)
fs_mcp = create_filesystem_mcp_tools()
if fs_mcp:
tools_list.append(fs_mcp)
except Exception as exc:
logger.debug("MCP tools unavailable: %s", exc)
# Select prompt tier based on tool capability
base_prompt = get_system_prompt(tools_enabled=use_tools, session_id=session_id)
# Try to load memory context
try:
from timmy.memory_system import memory_system
memory_context = memory_system.get_system_context()
if memory_context:
# Truncate if too long — smaller budget for small models
# since the expanded prompt (roster, guardrails) uses more tokens
max_context = 2000 if not use_tools else 8000
if len(memory_context) > max_context:
memory_context = memory_context[:max_context] + "\n... [truncated]"
full_prompt = (
f"{base_prompt}\n\n"
f"## GROUNDED CONTEXT (verified sources — cite when using)\n\n"
f"{memory_context}"
)
else:
full_prompt = base_prompt
except Exception as exc:
logger.warning("Failed to load memory context: %s", exc)
full_prompt = base_prompt
model_kwargs = {}
if settings.ollama_num_ctx > 0:
model_kwargs["options"] = {"num_ctx": settings.ollama_num_ctx}
agent = Agent(
name="Agent",
model=Ollama(id=model_name, host=settings.ollama_url, timeout=300, **model_kwargs),
db=SqliteDb(db_file=db_file),
description=full_prompt,
add_history_to_context=True,
num_history_runs=20,
markdown=False,
tools=tools_list if tools_list else None,
tool_call_limit=settings.max_agent_steps if use_tools else None,
telemetry=settings.telemetry_enabled,
)
_warmup_model(model_name)
return agent

View File

@@ -10,7 +10,7 @@ Categories:
M3xx iOS keyboard & zoom prevention
M4xx HTMX robustness (double-submit, sync)
M5xx Safe-area / notch support
-M6xx AirLLM backend interface contract
+M6xx Backend interface contract
"""
import re
@@ -208,7 +208,7 @@ def test_M505_dvh_units_used():
assert "dvh" in css
-# ── M6xx — AirLLM backend interface contract ──────────────────────────────────
+# ── M6xx — Backend interface contract ──────────────────────────────────
def test_M601_airllm_agent_has_run_method():

View File

@@ -454,127 +454,3 @@ def test_no_hardcoded_fallback_constants_in_agent():
assert not hasattr(agent_mod, "VISION_MODEL_FALLBACKS"), (
"Hardcoded VISION_MODEL_FALLBACKS still exists — use settings.vision_fallback_models"
)
# ── _build_tools_list helper ─────────────────────────────────────────────────
def test_build_tools_list_returns_empty_when_no_tools():
"""When use_tools=False, _build_tools_list returns an empty list."""
from timmy.agent import _build_tools_list
result = _build_tools_list(use_tools=False, skip_mcp=False)
assert result == []
def test_build_tools_list_includes_toolkit():
"""When use_tools=True, _build_tools_list includes the toolkit."""
mock_toolkit = MagicMock()
with patch("timmy.agent.create_full_toolkit", return_value=mock_toolkit):
from timmy.agent import _build_tools_list
result = _build_tools_list(use_tools=True, skip_mcp=True)
assert mock_toolkit in result
def test_build_tools_list_adds_mcp_when_not_skipped():
"""When skip_mcp=False, _build_tools_list attempts MCP tools."""
mock_toolkit = MagicMock()
mock_gitea = MagicMock()
with (
patch("timmy.agent.create_full_toolkit", return_value=mock_toolkit),
patch("timmy.mcp_tools.create_gitea_mcp_tools", return_value=mock_gitea),
patch("timmy.mcp_tools.create_filesystem_mcp_tools", return_value=None),
):
from timmy.agent import _build_tools_list
result = _build_tools_list(use_tools=True, skip_mcp=False)
assert mock_toolkit in result
assert mock_gitea in result
# ── _build_prompt helper ─────────────────────────────────────────────────────
def test_build_prompt_returns_base_when_no_memory():
"""_build_prompt returns base prompt when memory context is empty."""
with patch("timmy.memory_system.memory_system") as mock_mem:
mock_mem.get_system_context.return_value = ""
from timmy.agent import _build_prompt
result = _build_prompt(use_tools=True, session_id="test")
assert "Timmy" in result
def test_build_prompt_appends_memory_context():
"""_build_prompt appends memory context when available."""
with patch("timmy.memory_system.memory_system") as mock_mem:
mock_mem.get_system_context.return_value = "User likes pizza"
from timmy.agent import _build_prompt
result = _build_prompt(use_tools=True, session_id="test")
assert "User likes pizza" in result
assert "GROUNDED CONTEXT" in result
def test_build_prompt_truncates_long_memory_for_small_models():
"""_build_prompt truncates memory for small models (use_tools=False)."""
long_context = "x" * 5000
with patch("timmy.memory_system.memory_system") as mock_mem:
mock_mem.get_system_context.return_value = long_context
from timmy.agent import _build_prompt
result = _build_prompt(use_tools=False, session_id="test")
# Max context is 2000 for small models + truncation marker
assert "[truncated]" in result
# ── _create_ollama_agent helper ──────────────────────────────────────────────
def test_create_ollama_agent_passes_correct_kwargs():
"""_create_ollama_agent passes the expected kwargs to Agent()."""
with (
patch("timmy.agent.Agent") as MockAgent,
patch("timmy.agent.Ollama"),
patch("timmy.agent.SqliteDb"),
):
from timmy.agent import _create_ollama_agent
_create_ollama_agent(
model_name="test-model",
db_file="test.db",
tools_list=[MagicMock()],
full_prompt="Test prompt",
use_tools=True,
)
kwargs = MockAgent.call_args.kwargs
assert kwargs["description"] == "Test prompt"
assert kwargs["tools"] is not None
def test_create_ollama_agent_none_tools_when_empty():
"""_create_ollama_agent passes tools=None when tools_list is empty."""
with (
patch("timmy.agent.Agent") as MockAgent,
patch("timmy.agent.Ollama"),
patch("timmy.agent.SqliteDb"),
):
from timmy.agent import _create_ollama_agent
_create_ollama_agent(
model_name="test-model",
db_file="test.db",
tools_list=[],
full_prompt="Test prompt",
use_tools=False,
)
kwargs = MockAgent.call_args.kwargs
assert kwargs["tools"] is None

View File

@@ -1,4 +1,4 @@
-"""Tests for src/timmy/backends.py — AirLLM wrapper and helpers."""
+"""Tests for src/timmy/backends.py — backend helpers."""
import sys
from unittest.mock import MagicMock, patch