ruff (#169)
* polish: streamline nav, extract inline styles, improve tablet UX

  - Restructure desktop nav from 8+ flat links + overflow dropdown into 5 grouped dropdowns (Core, Agents, Intel, System, More) matching the mobile menu structure, to reduce decision fatigue
  - Extract all inline styles from mission_control.html and base.html notification elements into mission-control.css with semantic classes
  - Replace JS-built innerHTML with secure DOM construction in the notification loader and chat history
  - Add a CONNECTING state (amber) to the connection indicator instead of showing OFFLINE before the WebSocket connects
  - Add a tablet breakpoint (1024px) with larger touch targets for Apple Pencil / stylus use and safe-area padding for the iPad toolbar
  - Add active-link highlighting in desktop dropdown menus
  - Rename the "Mission Control" page title to "System Overview" to disambiguate it from the chat home page
  - Add a "Home — Timmy Time" page title to index.html

  https://claude.ai/code/session_015uPUoKyYa8M2UAcyk5Gt6h

* fix(security): move auth-gate credentials to environment variables

  Hardcoded username, password, and HMAC secret in auth-gate.py are replaced with os.environ lookups. Startup now refuses to run if any variable is unset. Added AUTH_GATE_SECRET/USER/PASS to .env.example.

  https://claude.ai/code/session_015uPUoKyYa8M2UAcyk5Gt6h

* refactor(tooling): migrate from black+isort+bandit to ruff

  Replace three separate linting/formatting tools with a single ruff invocation. Updates tox.ini (lint, format, pre-push, pre-commit envs), .pre-commit-config.yaml, and the CI workflow. Fixes all ruff errors, including unused imports, missing raise-from, and undefined names. The ruff config maps existing bandit skips to the equivalent S-rules.

  https://claude.ai/code/session_015uPUoKyYa8M2UAcyk5Gt6h

---------

Co-authored-by: Claude <noreply@anthropic.com>
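The security fix follows a fail-fast pattern: read credentials from the environment and refuse to start when any are missing. A minimal sketch of that pattern (the variable names match the commit; the `load_credentials` helper itself is illustrative, not from the repo):

```python
def load_credentials(env: dict[str, str]) -> tuple[str, str, str]:
    """Read auth-gate credentials, failing fast if any are unset."""
    secret = env.get("AUTH_GATE_SECRET", "")
    user = env.get("AUTH_GATE_USER", "")
    password = env.get("AUTH_GATE_PASS", "")
    if not all([secret, user, password]):
        # Refuse to start rather than run with empty credentials.
        raise SystemExit(
            "ERROR: AUTH_GATE_SECRET, AUTH_GATE_USER, and AUTH_GATE_PASS must be set"
        )
    return secret, user, password
```

Taking the environment as a plain dict (rather than reading `os.environ` directly) keeps the guard trivially testable.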
Commit 9d78eb31d1 (parent 708c8a2477), committed by GitHub.
.env.example

@@ -80,6 +80,13 @@
 # AUTORESEARCH_MAX_ITERATIONS=100
 # AUTORESEARCH_METRIC=val_bpb

+# ── Auth Gate (nginx auth_request) ─────────────────────────────────────────
+# Required when running auth-gate.py for nginx auth_request.
+# Generate secret with: python3 -c "import secrets; print(secrets.token_hex(32))"
+# AUTH_GATE_SECRET=<your-secret-here>
+# AUTH_GATE_USER=<your-username>
+# AUTH_GATE_PASS=<your-password>
+
 # ── Docker Production ────────────────────────────────────────────────────────
 # When deploying with docker-compose.prod.yml:
 # - Containers run as non-root user "timmy" (defined in Dockerfile)
.github/workflows/tests.yml (vendored, 2 changes)
@@ -19,7 +19,7 @@ jobs:
       - name: Install tox
         run: pip install tox

-      - name: Lint (black + isort + bandit via tox)
+      - name: Lint (ruff via tox)
        run: tox -e lint

   test:
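The CI step above runs `tox -e lint`, but the tox.ini changes themselves are not part of this diff. A plausible sketch of what the ruff-based envs might look like (env names come from the commit message; the exact deps and commands are assumptions, not from the repo):

```ini
[testenv:lint]
skip_install = true
deps = ruff>=0.8.0
commands =
    ruff check src tests
    ruff format --check src tests

[testenv:format]
skip_install = true
deps = ruff>=0.8.0
commands =
    ruff check --fix src tests
    ruff format src tests
```

`ruff check` and `ruff format` are the two real subcommands that together replace the separate black, isort, and bandit invocations.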
@@ -4,20 +4,13 @@
 # Run manually: pre-commit run --all-files

 repos:
-  # Code formatting
-  - repo: https://github.com/psf/black
-    rev: 26.3.0
+  # Linting + formatting (ruff replaces black + isort + bandit)
+  - repo: https://github.com/astral-sh/ruff-pre-commit
+    rev: v0.9.7
     hooks:
-      - id: black
-        language_version: python3.11
-        args: [--line-length=100]
-
-  # Import sorting
-  - repo: https://github.com/PyCQA/isort
-    rev: 5.13.2
-    hooks:
-      - id: isort
-        args: [--profile=black, --line-length=100]
+      - id: ruff
+        args: [--fix]
+      - id: ruff-format

   # Type checking (optional - can be slow)
   - repo: https://github.com/pre-commit/mirrors-mypy
@@ -42,14 +35,6 @@ repos:
       - id: debug-statements
       - id: mixed-line-ending

-  # Security checks (optional)
-  - repo: https://github.com/PyCQA/bandit
-    rev: 1.7.5
-    hooks:
-      - id: bandit
-        args: [-ll, --skip, B101,B601]
-        exclude: ^tests/
-        stages: [manual]

 # Format + unit tests via tox (30s wall-clock limit).
 # Runs tox pre-commit env; full suite runs in CI.
Makefile (17 changes)
@@ -1,4 +1,4 @@
-.PHONY: install install-bigbrain install-hooks dev nuke fresh test test-cov test-cov-html watch lint clean help \
+.PHONY: bootstrap install install-bigbrain install-hooks dev nuke fresh test test-cov test-cov-html watch lint clean help \
	up down logs \
	docker-build docker-up docker-down docker-agent docker-logs docker-shell \
	test-docker test-docker-cov test-docker-functional test-docker-build test-docker-down \
@@ -9,6 +9,16 @@ TOX := tox

 # ── Setup ─────────────────────────────────────────────────────────────────────

+bootstrap:
+	@echo " Bootstrapping Timmy Time development environment..."
+	@python3 -c "import sys; exit(0 if sys.version_info >= (3,11) else 1)" \
+		|| { echo "ERROR: Python 3.11+ required (found $$(python3 --version))"; exit 1; }
+	poetry install --with dev
+	@[ -f .env ] || { cp .env.example .env; echo " Created .env from .env.example — edit as needed"; }
+	git config core.hooksPath .githooks
+	@command -v pre-commit >/dev/null 2>&1 && pre-commit install || true
+	@echo " Ready. Run 'make dev' to start the dashboard."
+
 install:
	poetry install --with dev
	git config core.hooksPath .githooks
@@ -297,6 +307,7 @@ help:
	@echo ""
	@echo "  Local Development"
	@echo "  ─────────────────────────────────────────────────"
+	@echo "  make bootstrap          one-command setup for new developers"
	@echo "  make install            install deps via Poetry"
	@echo "  make install-bigbrain   install with AirLLM (big-model backend)"
	@echo "  make dev                clean up + start dashboard (auto-fixes errno 48)"

@@ -307,8 +318,8 @@ help:
	@echo "  make test-cov           tests + coverage report (terminal + XML)"
	@echo "  make test-cov-html      tests + HTML coverage report"
	@echo "  make watch              self-TDD watchdog (60s poll)"
-	@echo "  make lint               run ruff or flake8"
-	@echo "  make format             format code (black, isort)"
+	@echo "  make lint               check formatting + imports + security (ruff)"
+	@echo "  make format             auto-format code (ruff)"
	@echo "  make type-check         run type checking (mypy)"
	@echo "  make pre-commit-run     run all pre-commit checks"
	@echo "  make test-unit          run unit tests only"
auth-gate.py (10 changes)
@@ -1,10 +1,10 @@
 #!/usr/bin/env python3
 """Tiny auth gate for nginx auth_request. Sets a cookie after successful basic auth."""
-import hashlib, hmac, http.server, time, base64, os
+import hashlib, hmac, http.server, time, base64, os, sys

-SECRET = "sovereign-timmy-gate-2026"
-USER = "Rockachopa"
-PASS = "Iamrockachopathegend"
+SECRET = os.environ.get("AUTH_GATE_SECRET", "")
+USER = os.environ.get("AUTH_GATE_USER", "")
+PASS = os.environ.get("AUTH_GATE_PASS", "")
 COOKIE_NAME = "sovereign_gate"
 COOKIE_MAX_AGE = 86400 * 7  # 7 days

@@ -63,6 +63,8 @@ class Handler(http.server.BaseHTTPRequestHandler):
         self.end_headers()

 if __name__ == "__main__":
+    if not all([SECRET, USER, PASS]):
+        sys.exit("ERROR: AUTH_GATE_SECRET, AUTH_GATE_USER, and AUTH_GATE_PASS must be set")
     s = http.server.HTTPServer(("127.0.0.1", 9876), Handler)
     print("Auth gate listening on 127.0.0.1:9876")
     s.serve_forever()
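auth-gate.py imports `hmac` and `hashlib` and sets a cookie after successful basic auth. The diff doesn't show the cookie logic itself, but the usual shape of such a gate is an HMAC-signed cookie value; a minimal sketch of that pattern (the payload layout and helper names are assumptions, not the repo's exact scheme):

```python
import hashlib
import hmac


def sign_cookie(secret: str, user: str, now: float, max_age: int = 86400 * 7) -> str:
    """Return 'user:expiry:signature', an HMAC-SHA256-signed cookie value."""
    expiry = str(int(now) + max_age)
    payload = f"{user}:{expiry}"
    sig = hmac.new(secret.encode(), payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"


def verify_cookie(secret: str, value: str, now: float) -> bool:
    """Check signature and expiry; compare_digest avoids timing leaks."""
    try:
        user, expiry, sig = value.rsplit(":", 2)
    except ValueError:
        return False
    expected = hmac.new(secret.encode(), f"{user}:{expiry}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and int(expiry) > now
```

Because the signature depends on the secret, an unset `AUTH_GATE_SECRET` would make every cookie forgeable with an empty key, which is why the startup guard above matters.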
pyproject.toml

@@ -24,17 +24,17 @@ packages = [

 [tool.poetry.dependencies]
 python = ">=3.11,<4"
-agno = { version = ">=1.4.0", extras = ["sqlite"] }
-ollama = ">=0.3.0"
+agno = { version = ">=1.4.0,<2.0", extras = ["sqlite"] }
+ollama = ">=0.3.0,<1.0"
 openai = ">=1.0.0"
-fastapi = ">=0.115.0"
-uvicorn = { version = ">=0.32.0", extras = ["standard"] }
+fastapi = ">=0.115.0,<1.0"
+uvicorn = { version = ">=0.32.0,<1.0", extras = ["standard"] }
 jinja2 = ">=3.1.0"
 httpx = ">=0.27.0"
 python-multipart = ">=0.0.12"
 typer = ">=0.12.0"
 rich = ">=13.0.0"
-pydantic-settings = ">=2.0.0"
+pydantic-settings = ">=2.0.0,<3.0"
 # Optional extras
 redis = { version = ">=5.0.0", optional = true }
 celery = { version = ">=5.3.0", extras = ["redis"], optional = true }

@@ -72,8 +72,7 @@ pytest-timeout = ">=2.3.0"
 selenium = ">=4.20.0"
 pytest-randomly = "^4.0.1"
 pytest-xdist = "^3.8.0"
-black = ">=24.0.0"
-isort = ">=5.13.0"
+ruff = ">=0.8.0"

 [tool.poetry.scripts]
 timmy = "timmy.cli:main"

@@ -102,11 +101,31 @@ markers = [
     "skip_ci: Skip in CI environment (local development only)",
 ]

-[tool.isort]
-profile = "black"
-line_length = 100
-src_paths = ["src", "tests"]
-known_first_party = ["brain", "config", "dashboard", "infrastructure", "integrations", "spark", "swarm", "timmy", "timmy_serve"]
+[tool.ruff]
+line-length = 100
+target-version = "py311"
+src = ["src", "tests"]
+
+[tool.ruff.lint]
+select = ["E", "F", "I", "UP", "B", "S"]
+ignore = [
+    # Mapped from existing bandit skips: B101→S101, B104→S104, etc.
+    "S101", "S104", "S307", "S310", "S324", "S601", "S608",
+    # Project patterns: graceful degradation (try/except pass), FastAPI Depends()
+    "S110", "S112", "B008",
+    # Subprocess usage in scripts/infrastructure
+    "S603", "S607",
+    # Non-cryptographic random is fine for non-security contexts
+    "S311",
+    # Line length handled by formatter; long strings/URLs can't always be broken
+    "E501",
+]
+
+[tool.ruff.lint.isort]
+known-first-party = ["brain", "config", "dashboard", "infrastructure", "integrations", "spark", "swarm", "timmy", "timmy_serve"]
+
+[tool.ruff.lint.per-file-ignores]
+"tests/**" = ["S"]

 [tool.coverage.run]
 source = ["src"]
@@ -10,7 +10,7 @@ import logging
 import os
 import socket
 from datetime import datetime
-from typing import Any, Dict, List, Optional
+from typing import Any

 import httpx

@@ -26,7 +26,7 @@ class BrainClient:
     All writes go to leader, reads can come from local node.
     """

-    def __init__(self, rqlite_url: Optional[str] = None, node_id: Optional[str] = None):
+    def __init__(self, rqlite_url: str | None = None, node_id: str | None = None):
         from config import settings

         self.rqlite_url = rqlite_url or settings.rqlite_url or DEFAULT_RQLITE_URL

@@ -49,10 +49,10 @@ class BrainClient:
     async def remember(
         self,
         content: str,
-        tags: Optional[List[str]] = None,
-        source: Optional[str] = None,
-        metadata: Optional[Dict[str, Any]] = None,
-    ) -> Dict[str, Any]:
+        tags: list[str] | None = None,
+        source: str | None = None,
+        metadata: dict[str, Any] | None = None,
+    ) -> dict[str, Any]:
         """Store a memory with embedding.

         Args:

@@ -100,8 +100,8 @@ class BrainClient:
             raise

     async def recall(
-        self, query: str, limit: int = 5, sources: Optional[List[str]] = None
-    ) -> List[str]:
+        self, query: str, limit: int = 5, sources: list[str] | None = None
+    ) -> list[str]:
         """Semantic search for memories.

         Args:

@@ -154,8 +154,8 @@ class BrainClient:
         return []

     async def get_recent(
-        self, hours: int = 24, limit: int = 20, sources: Optional[List[str]] = None
-    ) -> List[Dict[str, Any]]:
+        self, hours: int = 24, limit: int = 20, sources: list[str] | None = None
+    ) -> list[dict[str, Any]]:
         """Get recent memories by time.

         Args:

@@ -239,8 +239,8 @@ class BrainClient:
         content: str,
         task_type: str = "general",
         priority: int = 0,
-        metadata: Optional[Dict[str, Any]] = None,
-    ) -> Dict[str, Any]:
+        metadata: dict[str, Any] | None = None,
+    ) -> dict[str, Any]:
         """Submit a task to the distributed queue.

         Args:

@@ -281,8 +281,8 @@ class BrainClient:
             raise

     async def claim_task(
-        self, capabilities: List[str], node_id: Optional[str] = None
-    ) -> Optional[Dict[str, Any]]:
+        self, capabilities: list[str], node_id: str | None = None
+    ) -> dict[str, Any] | None:
         """Atomically claim next available task.

         Uses UPDATE ... RETURNING pattern for atomic claim.

@@ -341,7 +341,7 @@ class BrainClient:
         return None

     async def complete_task(
-        self, task_id: int, success: bool, result: Optional[str] = None, error: Optional[str] = None
+        self, task_id: int, success: bool, result: str | None = None, error: str | None = None
     ) -> None:
         """Mark task as completed or failed.

@@ -370,7 +370,7 @@ class BrainClient:
         except Exception as e:
             logger.error(f"Failed to complete task {task_id}: {e}")

-    async def get_pending_tasks(self, limit: int = 100) -> List[Dict[str, Any]]:
+    async def get_pending_tasks(self, limit: int = 100) -> list[dict[str, Any]]:
         """Get list of pending tasks (for dashboard/monitoring).

         Args:
@@ -6,7 +6,6 @@ No OpenAI dependency. Runs 100% locally on CPU.
 from __future__ import annotations

 import logging
-from typing import List, Union

 logger = logging.getLogger(__name__)

@@ -48,7 +47,7 @@ class LocalEmbedder:
         )
         raise

-    def encode(self, text: Union[str, List[str]]):
+    def encode(self, text: str | list[str]):
         """Encode text to embedding vector(s).

         Args:
@@ -31,9 +31,9 @@ import json
 import logging
 import sqlite3
 import uuid
-from datetime import datetime, timezone
+from datetime import UTC, datetime
 from pathlib import Path
-from typing import Any, Dict, List, Optional
+from typing import Any

 logger = logging.getLogger(__name__)

@@ -64,9 +64,9 @@ class UnifiedMemory:

     def __init__(
         self,
-        db_path: Optional[Path] = None,
+        db_path: Path | None = None,
         source: str = "default",
-        use_rqlite: Optional[bool] = None,
+        use_rqlite: bool | None = None,
     ):
         self.db_path = db_path or _get_db_path()
         self.source = source

@@ -143,10 +143,10 @@ class UnifiedMemory:
     async def remember(
         self,
         content: str,
-        tags: Optional[List[str]] = None,
-        source: Optional[str] = None,
-        metadata: Optional[Dict[str, Any]] = None,
-    ) -> Dict[str, Any]:
+        tags: list[str] | None = None,
+        source: str | None = None,
+        metadata: dict[str, Any] | None = None,
+    ) -> dict[str, Any]:
         """Store a memory.

         Args:

@@ -167,10 +167,10 @@ class UnifiedMemory:
     def remember_sync(
         self,
         content: str,
-        tags: Optional[List[str]] = None,
-        source: Optional[str] = None,
-        metadata: Optional[Dict[str, Any]] = None,
-    ) -> Dict[str, Any]:
+        tags: list[str] | None = None,
+        source: str | None = None,
+        metadata: dict[str, Any] | None = None,
+    ) -> dict[str, Any]:
         """Store a memory (synchronous, local SQLite only).

         Args:

@@ -182,7 +182,7 @@ class UnifiedMemory:
         Returns:
             Dict with 'id' and 'status'.
         """
-        now = datetime.now(timezone.utc).isoformat()
+        now = datetime.now(UTC).isoformat()
         embedding_bytes = None

         embedder = self._get_embedder()

@@ -217,8 +217,8 @@ class UnifiedMemory:
         self,
         query: str,
         limit: int = 5,
-        sources: Optional[List[str]] = None,
-    ) -> List[Dict[str, Any]]:
+        sources: list[str] | None = None,
+    ) -> list[dict[str, Any]]:
         """Semantic search for memories.

         If embeddings are available, uses cosine similarity.

@@ -242,8 +242,8 @@ class UnifiedMemory:
         self,
         query: str,
         limit: int = 5,
-        sources: Optional[List[str]] = None,
-    ) -> List[Dict[str, Any]]:
+        sources: list[str] | None = None,
+    ) -> list[dict[str, Any]]:
         """Semantic search (synchronous, local SQLite).

         Uses numpy dot product for cosine similarity when embeddings

@@ -259,9 +259,9 @@ class UnifiedMemory:
         self,
         query: str,
         limit: int,
-        sources: Optional[List[str]],
+        sources: list[str] | None,
         embedder,
-    ) -> List[Dict[str, Any]]:
+    ) -> list[dict[str, Any]]:
         """Vector similarity search over local SQLite."""
         import numpy as np

@@ -320,8 +320,8 @@ class UnifiedMemory:
         self,
         query: str,
         limit: int,
-        sources: Optional[List[str]],
-    ) -> List[Dict[str, Any]]:
+        sources: list[str] | None,
+    ) -> list[dict[str, Any]]:
         """Keyword-based fallback search."""
         conn = self._get_conn()
         try:

@@ -363,7 +363,7 @@ class UnifiedMemory:
         content: str,
         confidence: float = 0.8,
         source: str = "extracted",
-    ) -> Dict[str, Any]:
+    ) -> dict[str, Any]:
         """Store a long-term fact.

         Args:

@@ -383,10 +383,10 @@ class UnifiedMemory:
         content: str,
         confidence: float = 0.8,
         source: str = "extracted",
-    ) -> Dict[str, Any]:
+    ) -> dict[str, Any]:
         """Store a long-term fact (synchronous)."""
         fact_id = str(uuid.uuid4())
-        now = datetime.now(timezone.utc).isoformat()
+        now = datetime.now(UTC).isoformat()

         conn = self._get_conn()
         try:

@@ -403,10 +403,10 @@ class UnifiedMemory:

     async def get_facts(
         self,
-        category: Optional[str] = None,
-        query: Optional[str] = None,
+        category: str | None = None,
+        query: str | None = None,
         limit: int = 10,
-    ) -> List[Dict[str, Any]]:
+    ) -> list[dict[str, Any]]:
         """Retrieve facts from long-term memory.

         Args:

@@ -421,10 +421,10 @@ class UnifiedMemory:

     def get_facts_sync(
         self,
-        category: Optional[str] = None,
-        query: Optional[str] = None,
+        category: str | None = None,
+        query: str | None = None,
         limit: int = 10,
-    ) -> List[Dict[str, Any]]:
+    ) -> list[dict[str, Any]]:
         """Retrieve facts (synchronous)."""
         conn = self._get_conn()
         try:

@@ -451,7 +451,7 @@ class UnifiedMemory:
         for row in rows:
             conn.execute(
                 "UPDATE facts SET access_count = access_count + 1, last_accessed = ? WHERE id = ?",
-                (datetime.now(timezone.utc).isoformat(), row["id"]),
+                (datetime.now(UTC).isoformat(), row["id"]),
             )
         conn.commit()

@@ -478,8 +478,8 @@ class UnifiedMemory:
         self,
         hours: int = 24,
         limit: int = 20,
-        sources: Optional[List[str]] = None,
-    ) -> List[Dict[str, Any]]:
+        sources: list[str] | None = None,
+    ) -> list[dict[str, Any]]:
         """Get recent memories by time."""
         if self._use_rqlite:
             client = self._get_rqlite_client()

@@ -491,8 +491,8 @@ class UnifiedMemory:
         self,
         hours: int = 24,
         limit: int = 20,
-        sources: Optional[List[str]] = None,
-    ) -> List[Dict[str, Any]]:
+        sources: list[str] | None = None,
+    ) -> list[dict[str, Any]]:
         """Get recent memories (synchronous)."""
         conn = self._get_conn()
         try:

@@ -577,7 +577,7 @@ class UnifiedMemory:
     # Stats
     # ──────────────────────────────────────────────────────────────────────

-    def get_stats(self) -> Dict[str, Any]:
+    def get_stats(self) -> dict[str, Any]:
         """Get memory statistics.

         Returns:

@@ -609,7 +609,7 @@ class UnifiedMemory:
 # Module-level convenience
 # ──────────────────────────────────────────────────────────────────────────

-_default_memory: Optional[UnifiedMemory] = None
+_default_memory: UnifiedMemory | None = None


 def get_memory(source: str = "agent") -> UnifiedMemory:
@@ -11,8 +11,8 @@ import logging
 import os
 import socket
 import subprocess
 from datetime import datetime
-from typing import Any, Callable, Dict, List, Optional
+from collections.abc import Callable
+from typing import Any

 from brain.client import BrainClient

@@ -26,15 +26,15 @@ class DistributedWorker:
     executes them immediately, stores results.
     """

-    def __init__(self, brain_client: Optional[BrainClient] = None):
+    def __init__(self, brain_client: BrainClient | None = None):
         self.brain = brain_client or BrainClient()
         self.node_id = f"{socket.gethostname()}-{os.getpid()}"
         self.capabilities = self._detect_capabilities()
         self.running = False
-        self._handlers: Dict[str, Callable] = {}
+        self._handlers: dict[str, Callable] = {}
         self._register_default_handlers()

-    def _detect_capabilities(self) -> List[str]:
+    def _detect_capabilities(self) -> list[str]:
         """Detect what this node can do."""
         caps = ["general", "shell", "file_ops", "git"]

@@ -260,13 +260,13 @@ class DistributedWorker:
             return result

         except Exception as e:
-            raise Exception(f"LLM failed: {e}")
+            raise Exception(f"LLM failed: {e}") from e

     # ──────────────────────────────────────────────────────────────────────────
     # Main Loop
     # ──────────────────────────────────────────────────────────────────────────

-    async def execute_task(self, task: Dict[str, Any]) -> Dict[str, Any]:
+    async def execute_task(self, task: dict[str, Any]) -> dict[str, Any]:
         """Execute a claimed task."""
         task_type = task.get("type", "general")
         content = task.get("content", "")
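The `raise ... from e` change is the "missing raise-from" fix the commit message mentions (ruff rule B904): re-raising inside an `except` block without `from` loses the causal link to the original error. A minimal illustration (the functions are hypothetical, not from the repo):

```python
def call_llm_raw() -> str:
    # Stand-in for the real LLM call that can fail.
    raise ConnectionError("ollama unreachable")


def call_llm() -> str:
    try:
        return call_llm_raw()
    except Exception as e:
        # "from e" records the original exception as __cause__, so
        # tracebacks show the root ConnectionError, not just the wrapper.
        raise RuntimeError(f"LLM failed: {e}") from e
```

Without `from e`, Python would still chain via `__context__`, but the explicit form documents intent and prints "The above exception was the direct cause of ..." in tracebacks.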
@@ -342,8 +342,7 @@ def get_effective_ollama_model() -> str:
     # Try primary
     if check_ollama_model_available(OLLAMA_MODEL_PRIMARY):
         _startup_logger.warning(
-            f"Requested model '{user_model}' not available. "
-            f"Using primary: {OLLAMA_MODEL_PRIMARY}"
+            f"Requested model '{user_model}' not available. Using primary: {OLLAMA_MODEL_PRIMARY}"
         )
         return OLLAMA_MODEL_PRIMARY

@@ -309,8 +309,8 @@ async def lifespan(app: FastAPI):
     yield

     # Cleanup on shutdown
-    from integrations.telegram_bot.bot import telegram_bot
     from integrations.chat_bridge.vendors.discord import discord_bot
+    from integrations.telegram_bot.bot import telegram_bot

     await discord_bot.stop()
     await telegram_bot.stop()
@@ -4,11 +4,10 @@ Provides CSRF token generation, validation, and middleware integration
 to protect state-changing endpoints from cross-site request attacks.
 """

 import hashlib
 import hmac
 import secrets
+from collections.abc import Callable
 from functools import wraps
-from typing import Callable, Optional

 from starlette.middleware.base import BaseHTTPMiddleware
 from starlette.requests import Request

@@ -112,7 +111,7 @@ class CSRFMiddleware(BaseHTTPMiddleware):
     def __init__(
         self,
         app,
-        secret: Optional[str] = None,
+        secret: str | None = None,
         cookie_name: str = "csrf_token",
         header_name: str = "X-CSRF-Token",
         form_field: str = "csrf_token",

@@ -240,7 +239,7 @@ class CSRFMiddleware(BaseHTTPMiddleware):

         return False

-    async def _validate_request(self, request: Request, csrf_cookie: Optional[str]) -> bool:
+    async def _validate_request(self, request: Request, csrf_cookie: str | None) -> bool:
         """Validate the CSRF token in the request.

         Checks for token in:
@@ -7,7 +7,6 @@ for monitoring and debugging purposes.
 import logging
 import time
 import uuid
-from typing import List, Optional

 from starlette.middleware.base import BaseHTTPMiddleware
 from starlette.requests import Request

@@ -38,7 +37,7 @@ class RequestLoggingMiddleware(BaseHTTPMiddleware):
         log_level: Logging level for successful requests.
     """

-    def __init__(self, app, skip_paths: Optional[List[str]] = None, log_level: int = logging.INFO):
+    def __init__(self, app, skip_paths: list[str] | None = None, log_level: int = logging.INFO):
         super().__init__(app)
         self.skip_paths = set(skip_paths or [])
         self.log_level = log_level
@@ -4,8 +4,6 @@ Adds common security headers to all HTTP responses to improve
 application security posture against various attacks.
 """

-from typing import Optional
-
 from starlette.middleware.base import BaseHTTPMiddleware
 from starlette.requests import Request
 from starlette.responses import Response

@@ -39,7 +37,7 @@ class SecurityHeadersMiddleware(BaseHTTPMiddleware):
         app,
         production: bool = False,
         csp_report_only: bool = False,
-        custom_csp: Optional[str] = None,
+        custom_csp: str | None = None,
     ):
         super().__init__(app)
         self.production = production
@@ -1,15 +1,13 @@
 from datetime import date, datetime
-from enum import Enum as PyEnum
+from enum import StrEnum

-from sqlalchemy import JSON, Boolean, Column, Date, DateTime
+from sqlalchemy import JSON, Boolean, Column, Date, DateTime, Index, Integer, String
 from sqlalchemy import Enum as SQLEnum
-from sqlalchemy import ForeignKey, Index, Integer, String
 from sqlalchemy.orm import relationship

 from .database import Base  # Assuming a shared Base in models/database.py


-class TaskState(str, PyEnum):
+class TaskState(StrEnum):
     LATER = "LATER"
     NEXT = "NEXT"
     NOW = "NOW"

@@ -17,7 +15,7 @@ class TaskState(str, PyEnum):
     DEFERRED = "DEFERRED"  # Task pushed to tomorrow


-class TaskCertainty(str, PyEnum):
+class TaskCertainty(StrEnum):
     FUZZY = "FUZZY"  # An intention without a time
     SOFT = "SOFT"  # A flexible task with a time
     HARD = "HARD"  # A fixed meeting/appointment
@@ -4,7 +4,7 @@ from pathlib import Path
 from sqlalchemy import create_engine
 from sqlalchemy.exc import OperationalError
 from sqlalchemy.ext.declarative import declarative_base
-from sqlalchemy.orm import Session, sessionmaker
+from sqlalchemy.orm import sessionmaker

 logger = logging.getLogger(__name__)
@@ -4,7 +4,7 @@ POST /briefing/approvals/{id}/reject — reject an item (HTMX)
 """

 import logging
-from datetime import datetime, timezone
+from datetime import UTC, datetime

 from fastapi import APIRouter, Request
 from fastapi.responses import HTMLResponse, JSONResponse

@@ -29,7 +29,7 @@ async def get_briefing(request: Request):
         briefing = briefing_engine.get_or_generate()
     except Exception:
         logger.exception("Briefing generation failed")
-        now = datetime.now(timezone.utc)
+        now = datetime.now(UTC)
         briefing = Briefing(
             generated_at=now,
             summary=(
@@ -1,13 +1,12 @@
 import logging
 from datetime import date, datetime
-from typing import List, Optional

 from fastapi import APIRouter, Depends, Form, HTTPException, Request
 from fastapi.responses import HTMLResponse
 from sqlalchemy.orm import Session

 from dashboard.models.calm import JournalEntry, Task, TaskCertainty, TaskState
-from dashboard.models.database import SessionLocal, create_tables, engine, get_db
+from dashboard.models.database import create_tables, get_db
 from dashboard.templating import templates

 # Ensure CALM tables exist (safe to call multiple times)

@@ -19,15 +18,15 @@ router = APIRouter(tags=["calm"])


 # Helper functions for state machine logic
-def get_now_task(db: Session) -> Optional[Task]:
+def get_now_task(db: Session) -> Task | None:
     return db.query(Task).filter(Task.state == TaskState.NOW).first()


-def get_next_task(db: Session) -> Optional[Task]:
+def get_next_task(db: Session) -> Task | None:
     return db.query(Task).filter(Task.state == TaskState.NEXT).first()


-def get_later_tasks(db: Session) -> List[Task]:
+def get_later_tasks(db: Session) -> list[Task]:
     return (
         db.query(Task)
         .filter(Task.state == TaskState.LATER)

@@ -220,7 +219,7 @@ async def create_new_task(
     request: Request,
     db: Session = Depends(get_db),
     title: str = Form(...),
-    description: Optional[str] = Form(None),
+    description: str | None = Form(None),
     is_mit: bool = Form(False),
     certainty: TaskCertainty = Form(TaskCertainty.SOFT),
 ):

@@ -347,7 +346,7 @@ async def reorder_tasks(
     db: Session = Depends(get_db),
     # Expecting a comma-separated string of task IDs in new order
     later_task_ids: str = Form(""),
-    next_task_id: Optional[int] = Form(None),
+    next_task_id: int | None = Form(None),
 ):
     # Reorder LATER tasks
     if later_task_ids:
@@ -7,8 +7,6 @@ Endpoints:
 GET /discord/oauth-url — get the bot's OAuth2 authorization URL
 """

-from typing import Optional
-
 from fastapi import APIRouter, File, Form, UploadFile
 from pydantic import BaseModel

@@ -59,8 +57,8 @@ async def discord_status():

 @router.post("/join")
 async def join_from_image(
-    image: Optional[UploadFile] = File(None),
-    invite_url: Optional[str] = Form(None),
+    image: UploadFile | None = File(None),
+    invite_url: str | None = Form(None),
 ):
     """Extract a Discord invite from a screenshot or text and validate it.

@@ -120,8 +118,7 @@ async def join_from_image(
         )
     else:
         result["message"] = (
-            "Invite found but bot is not connected. "
-            "Configure a bot token first via /discord/setup."
+            "Invite found but bot is not connected. Configure a bot token first via /discord/setup."
         )

     return result
@@ -11,7 +11,7 @@ GET /grok/stats — Usage statistics (JSON)
 import logging

 from fastapi import APIRouter, Form, Request
-from fastapi.responses import HTMLResponse, JSONResponse
+from fastapi.responses import HTMLResponse

 from config import settings
 from dashboard.templating import templates

@@ -225,7 +225,7 @@ def _render_toggle_card(active: bool) -> str:
                 style="background: {color}; color: #000; border: none;
                        border-radius: 8px; padding: 8px 20px; cursor: pointer;
                        font-weight: 700; font-family: inherit;">
-            {'DEACTIVATE' if active else 'ACTIVATE'}
+            {"DEACTIVATE" if active else "ACTIVATE"}
         </button>
     </div>
 </div>
@@ -7,7 +7,7 @@ for the Mission Control dashboard.
 import asyncio
 import logging
 import time
-from datetime import datetime, timezone
+from datetime import UTC, datetime
 from typing import Any
 
 from fastapi import APIRouter, Request
@@ -49,7 +49,7 @@ class HealthStatus(BaseModel):
 
 
 # Simple uptime tracking
-_START_TIME = datetime.now(timezone.utc)
+_START_TIME = datetime.now(UTC)
 
 # Ollama health cache (30-second TTL)
 _ollama_cache: DependencyStatus | None = None
@@ -189,7 +189,7 @@ async def health_check():
     Returns legacy format for backward compatibility with existing tests,
     plus extended information for the Mission Control dashboard.
     """
-    uptime = (datetime.now(timezone.utc) - _START_TIME).total_seconds()
+    uptime = (datetime.now(UTC) - _START_TIME).total_seconds()
 
     # Legacy format for test compatibility
     ollama_ok = await check_ollama()
@@ -205,7 +205,7 @@ async def health_check():
             "agent": {"status": agent_status},
         },
         # Extended fields for Mission Control
-        "timestamp": datetime.now(timezone.utc).isoformat(),
+        "timestamp": datetime.now(UTC).isoformat(),
         "version": "2.0.0",
        "uptime_seconds": uptime,
         "llm_backend": settings.timmy_model_backend,
@@ -232,7 +232,7 @@ async def health_status_panel(request: Request):
         <h1>System Health</h1>
         <p>Ollama: <span style="color: {status_color}; font-weight: bold;">{status_text}</span></p>
         <p>Model: {model}</p>
-        <p>Timestamp: {datetime.now(timezone.utc).isoformat()}</p>
+        <p>Timestamp: {datetime.now(UTC).isoformat()}</p>
     </body>
     </html>
     """
@@ -258,7 +258,7 @@ async def sovereignty_check():
     return SovereigntyReport(
         overall_score=overall,
         dependencies=dependencies,
-        timestamp=datetime.now(timezone.utc).isoformat(),
+        timestamp=datetime.now(UTC).isoformat(),
         recommendations=recommendations,
     )
 
@@ -272,5 +272,5 @@ async def component_status():
             "model_backend": settings.timmy_model_backend,
             "ollama_model": settings.ollama_model,
         },
-        "timestamp": datetime.now(timezone.utc).isoformat(),
+        "timestamp": datetime.now(UTC).isoformat(),
     }

@@ -5,7 +5,7 @@ This module is kept for UI compatibility.
 """
 
 from fastapi import APIRouter, Request
-from fastapi.responses import HTMLResponse, JSONResponse
+from fastapi.responses import HTMLResponse
 
 from brain.client import BrainClient
 from dashboard.templating import templates
@@ -19,7 +19,7 @@ AGENT_CATALOG = [
         "name": "Orchestrator",
         "role": "Local AI",
         "description": (
-            "Primary AI agent. Coordinates tasks, manages memory. " "Uses distributed brain."
+            "Primary AI agent. Coordinates tasks, manages memory. Uses distributed brain."
         ),
         "capabilities": "chat,reasoning,coordination,memory",
         "rate_sats": 0,

@@ -1,7 +1,5 @@
 """Memory (vector store) routes for browsing and searching memories."""
 
-from typing import Optional
-
 from fastapi import APIRouter, Form, HTTPException, Request
 from fastapi.responses import HTMLResponse, JSONResponse
 
@@ -9,10 +7,8 @@ from dashboard.templating import templates
 from timmy.memory.vector_store import (
     delete_memory,
     get_memory_stats,
-    recall_personal_facts,
     recall_personal_facts_with_ids,
     search_memories,
-    store_memory,
     store_personal_fact,
     update_personal_fact,
 )
@@ -23,9 +19,9 @@ router = APIRouter(prefix="/memory", tags=["memory"])
 @router.get("", response_class=HTMLResponse)
 async def memory_page(
     request: Request,
-    query: Optional[str] = None,
-    context_type: Optional[str] = None,
-    agent_id: Optional[str] = None,
+    query: str | None = None,
+    context_type: str | None = None,
+    agent_id: str | None = None,
 ):
     """Memory browser and search page."""
     results = []
@@ -59,7 +55,7 @@ async def memory_page(
 async def memory_search(
     request: Request,
     query: str = Form(...),
-    context_type: Optional[str] = Form(None),
+    context_type: str | None = Form(None),
 ):
     """Search memories (form submission)."""
     query = query.strip()
@@ -87,7 +83,7 @@ async def memory_search(
 async def add_fact(
     request: Request,
     fact: str = Form(...),
-    agent_id: Optional[str] = Form(None),
+    agent_id: str | None = Form(None),
 ):
     """Add a personal fact to memory."""
     fact = fact.strip()

@@ -6,7 +6,7 @@ to swarm agents. Inspired by OpenClaw-RL's multi-model orchestration.
 
 import logging
 from pathlib import Path
-from typing import Any, Optional
+from typing import Any
 
 from fastapi import APIRouter, HTTPException, Request
 from fastapi.responses import HTMLResponse
@@ -17,7 +17,6 @@ from dashboard.templating import templates
 from infrastructure.models.registry import (
     CustomModel,
     ModelFormat,
-    ModelRegistry,
     ModelRole,
     model_registry,
 )
@@ -61,7 +60,7 @@ class SetActiveRequest(BaseModel):
 
 
 @api_router.get("")
-async def list_models(role: Optional[str] = None) -> dict[str, Any]:
+async def list_models(role: str | None = None) -> dict[str, Any]:
     """List all registered custom models."""
     model_role = ModelRole(role) if role else None
     models = model_registry.list_models(role=model_role)
@@ -96,14 +95,14 @@ async def register_model(request: RegisterModelRequest) -> dict[str, Any]:
             status_code=400,
             detail=f"Invalid format: {request.format}. "
             f"Choose from: {[f.value for f in ModelFormat]}",
-        )
+        ) from None
     try:
         role = ModelRole(request.role)
     except ValueError:
         raise HTTPException(
             status_code=400,
-            detail=f"Invalid role: {request.role}. " f"Choose from: {[r.value for r in ModelRole]}",
-        )
+            detail=f"Invalid role: {request.role}. Choose from: {[r.value for r in ModelRole]}",
+        ) from None
 
     # Validate path exists for non-Ollama formats
     if fmt != ModelFormat.OLLAMA:

@@ -5,7 +5,6 @@ All business logic lives in the bridge — these routes stay thin.
 """
 
 import logging
-from typing import Optional
 
 from fastapi import APIRouter, Request
 from fastapi.responses import JSONResponse
@@ -40,7 +39,7 @@ async def paperclip_status():
 
 
 @router.get("/issues")
-async def list_issues(status: Optional[str] = None):
+async def list_issues(status: str | None = None):
     """List all issues in the company."""
     if not settings.paperclip_enabled:
         return _disabled_response()

@@ -1,8 +1,6 @@
 """Swarm-related dashboard routes (events, live feed)."""
 
 import json
 import logging
-from typing import Optional
-
 from fastapi import APIRouter, Request, WebSocket, WebSocketDisconnect
 from fastapi.responses import HTMLResponse
@@ -19,9 +17,9 @@ router = APIRouter(prefix="/swarm", tags=["swarm"])
 @router.get("/events", response_class=HTMLResponse)
 async def swarm_events(
     request: Request,
-    task_id: Optional[str] = None,
-    agent_id: Optional[str] = None,
-    event_type: Optional[str] = None,
+    task_id: str | None = None,
+    agent_id: str | None = None,
+    event_type: str | None = None,
 ):
     """Event log page."""
     events = spark_engine.get_timeline(limit=100)

@@ -5,7 +5,6 @@ import sqlite3
 import uuid
 from datetime import datetime
 from pathlib import Path
-from typing import Optional
 
 from fastapi import APIRouter, Form, HTTPException, Request
 from fastapi.responses import HTMLResponse, JSONResponse

@@ -7,7 +7,7 @@ from datetime import datetime
 from pathlib import Path
 
 from fastapi import APIRouter, Form, HTTPException, Request
-from fastapi.responses import HTMLResponse, JSONResponse
+from fastapi.responses import HTMLResponse
 
 from dashboard.templating import templates
 

@@ -1,4 +1,4 @@
-from dataclasses import dataclass, field
+from dataclasses import dataclass
 
 
 @dataclass

@@ -15,8 +15,7 @@ Usage:
 import hashlib
 import logging
 import traceback
-from datetime import datetime, timedelta, timezone
-from typing import Optional
+from datetime import UTC, datetime, timedelta
 
 logger = logging.getLogger(__name__)
 
@@ -45,7 +44,7 @@ def _is_duplicate(error_hash: str) -> bool:
     """Check if this error was seen recently (within dedup window)."""
     from config import settings
 
-    now = datetime.now(timezone.utc)
+    now = datetime.now(UTC)
     window = timedelta(seconds=settings.error_dedup_window_seconds)
 
     if error_hash in _dedup_cache:
@@ -95,8 +94,8 @@ def _get_git_context() -> dict:
 def capture_error(
     exc: Exception,
     source: str = "unknown",
-    context: Optional[dict] = None,
-) -> Optional[str]:
+    context: dict | None = None,
+) -> str | None:
     """Capture an error and optionally create a bug report.
 
     Args:
@@ -165,7 +164,7 @@ def capture_error(
         f"**Source:** {source}",
         f"**File:** {affected_file}:{affected_line}",
         f"**Git:** {git_ctx.get('branch', '?')} @ {git_ctx.get('commit', '?')}",
-        f"**Time:** {datetime.now(timezone.utc).isoformat()}",
+        f"**Time:** {datetime.now(UTC).isoformat()}",
         f"**Hash:** {error_hash}",
     ]

@@ -5,7 +5,6 @@ via WebSocket for real-time activity feed updates.
 """
 
 import asyncio
 import json
 import logging
-from typing import Optional
 
@@ -82,7 +81,7 @@ class EventBroadcaster:
         in the event loop if one is running.
         """
         try:
-            loop = asyncio.get_running_loop()
+            asyncio.get_running_loop()
             # Schedule in background, don't wait
             asyncio.create_task(self.broadcast(event))
         except RuntimeError:

@@ -9,10 +9,11 @@ import asyncio
 import json
 import logging
 import sqlite3
+from collections.abc import Callable, Coroutine
 from dataclasses import dataclass, field
-from datetime import datetime, timezone
+from datetime import UTC, datetime
 from pathlib import Path
-from typing import Any, Callable, Coroutine, Optional
+from typing import Any
 
 logger = logging.getLogger(__name__)
 
@@ -24,8 +25,8 @@ class Event:
     type: str  # e.g., "agent.task.assigned", "tool.execution.completed"
     source: str  # Agent or component that emitted the event
     data: dict = field(default_factory=dict)
-    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
-    id: str = field(default_factory=lambda: f"evt_{datetime.now(timezone.utc).timestamp()}")
+    timestamp: str = field(default_factory=lambda: datetime.now(UTC).isoformat())
+    id: str = field(default_factory=lambda: f"evt_{datetime.now(UTC).timestamp()}")
 
 
 # Type alias for event handlers
@@ -78,7 +79,7 @@ class EventBus:
         self._subscribers: dict[str, list[EventHandler]] = {}
         self._history: list[Event] = []
         self._max_history = 1000
-        self._persistence_db_path: Optional[Path] = None
+        self._persistence_db_path: Path | None = None
         logger.info("EventBus initialized")
 
     # ── Persistence ──────────────────────────────────────────────────────
@@ -107,7 +108,7 @@ class EventBus:
         finally:
             conn.close()
 
-    def _get_persistence_conn(self) -> Optional[sqlite3.Connection]:
+    def _get_persistence_conn(self) -> sqlite3.Connection | None:
         """Get a connection to the persistence database."""
         if self._persistence_db_path is None:
             return None
@@ -148,9 +149,9 @@ class EventBus:
 
     def replay(
         self,
-        event_type: Optional[str] = None,
-        source: Optional[str] = None,
-        task_id: Optional[str] = None,
+        event_type: str | None = None,
+        source: str | None = None,
+        task_id: str | None = None,
         limit: int = 100,
     ) -> list[Event]:
         """Replay persisted events from SQLite with optional filters.
@@ -322,7 +323,7 @@ def get_event_bus() -> EventBus:
     return _event_bus
 
 
-def init_event_bus_persistence(db_path: Optional[Path] = None) -> None:
+def init_event_bus_persistence(db_path: Path | None = None) -> None:
     """Enable persistence on the module-level EventBus singleton.
 
     Call this during app startup to enable durable event storage.

@@ -18,7 +18,6 @@ import asyncio
 import logging
 import time
 from dataclasses import dataclass, field
-from typing import Optional
 
 from config import settings
 
@@ -60,7 +59,7 @@ class GitHand:
     rather than raising.
     """
 
-    def __init__(self, repo_dir: Optional[str] = None, timeout: int = 60) -> None:
+    def __init__(self, repo_dir: str | None = None, timeout: int = 60) -> None:
         self._repo_dir = repo_dir or settings.repo_root or None
         self._timeout = timeout
         logger.info("GitHand initialised — repo=%s", self._repo_dir)
@@ -75,7 +74,7 @@ class GitHand:
     async def run(
         self,
         args: str,
-        timeout: Optional[int] = None,
+        timeout: int | None = None,
         allow_destructive: bool = False,
     ) -> GitResult:
         """Execute a git command.
@@ -119,7 +118,7 @@ class GitHand:
             stdout_bytes, stderr_bytes = await asyncio.wait_for(
                 proc.communicate(), timeout=effective_timeout
             )
-        except asyncio.TimeoutError:
+        except TimeoutError:
             proc.kill()
             await proc.wait()
             latency = (time.time() - start) * 1000

@@ -19,7 +19,6 @@ import logging
 import shlex
 import time
 from dataclasses import dataclass, field
-from typing import Optional
 
 from config import settings
 
@@ -95,9 +94,9 @@ class ShellHand:
 
     def __init__(
         self,
-        allowed_prefixes: Optional[tuple[str, ...]] = None,
+        allowed_prefixes: tuple[str, ...] | None = None,
         default_timeout: int = 60,
-        working_dir: Optional[str] = None,
+        working_dir: str | None = None,
     ) -> None:
         self._allowed_prefixes = allowed_prefixes or DEFAULT_ALLOWED_PREFIXES
         self._default_timeout = default_timeout
@@ -114,7 +113,7 @@ class ShellHand:
     def enabled(self) -> bool:
         return self._enabled
 
-    def _validate_command(self, command: str) -> Optional[str]:
+    def _validate_command(self, command: str) -> str | None:
         """Validate a command against the allow-list.
 
         Returns None if valid, or an error message if blocked.
@@ -148,9 +147,9 @@ class ShellHand:
     async def run(
         self,
         command: str,
-        timeout: Optional[int] = None,
-        working_dir: Optional[str] = None,
-        env: Optional[dict] = None,
+        timeout: int | None = None,
+        working_dir: str | None = None,
+        env: dict | None = None,
     ) -> ShellResult:
         """Execute a shell command in a sandboxed environment.
 
@@ -197,7 +196,7 @@ class ShellHand:
             stdout_bytes, stderr_bytes = await asyncio.wait_for(
                 proc.communicate(), timeout=effective_timeout
             )
-        except asyncio.TimeoutError:
+        except TimeoutError:
             proc.kill()
             await proc.wait()
             latency = (time.time() - start) * 1000

@@ -12,7 +12,6 @@ No cloud by default — tries local first, falls back through configured options
 import logging
 from dataclasses import dataclass, field
 from enum import Enum, auto
-from typing import Optional
 
 from config import settings
 
@@ -278,7 +277,7 @@ class ModelInfo:
     capabilities: set[ModelCapability] = field(default_factory=set)
     is_available: bool = False
     is_pulled: bool = False
-    size_mb: Optional[int] = None
+    size_mb: int | None = None
     description: str = ""
 
     def supports(self, capability: ModelCapability) -> bool:
@@ -296,7 +295,7 @@ class MultiModalManager:
     4. Routes requests to appropriate models based on content type
     """
 
-    def __init__(self, ollama_url: Optional[str] = None) -> None:
+    def __init__(self, ollama_url: str | None = None) -> None:
         self.ollama_url = ollama_url or settings.ollama_url
         self._available_models: dict[str, ModelInfo] = {}
         self._fallback_chains: dict[ModelCapability, list[str]] = dict(DEFAULT_FALLBACK_CHAINS)
@@ -366,8 +365,8 @@ class MultiModalManager:
         return [info for info in self._available_models.values() if capability in info.capabilities]
 
     def get_best_model_for(
-        self, capability: ModelCapability, preferred_model: Optional[str] = None
-    ) -> Optional[str]:
+        self, capability: ModelCapability, preferred_model: str | None = None
+    ) -> str | None:
         """Get the best available model for a specific capability.
 
         Args:
@@ -407,7 +406,7 @@ class MultiModalManager:
     def pull_model_with_fallback(
         self,
         primary_model: str,
-        capability: Optional[ModelCapability] = None,
+        capability: ModelCapability | None = None,
         auto_pull: bool = True,
     ) -> tuple[str, bool]:
         """Pull a model with automatic fallback if unavailable.
@@ -505,7 +504,7 @@ class MultiModalManager:
     def get_model_for_content(
         self,
         content_type: str,  # "text", "image", "audio", "multimodal"
-        preferred_model: Optional[str] = None,
+        preferred_model: str | None = None,
     ) -> tuple[str, bool]:
         """Get appropriate model based on content type.
 
@@ -543,7 +542,7 @@ class MultiModalManager:
 
 
 # Module-level singleton
-_multimodal_manager: Optional[MultiModalManager] = None
+_multimodal_manager: MultiModalManager | None = None
 
 
 def get_multimodal_manager() -> MultiModalManager:
@@ -555,15 +554,15 @@ def get_multimodal_manager():
 
 
 def get_model_for_capability(
-    capability: ModelCapability, preferred_model: Optional[str] = None
-) -> Optional[str]:
+    capability: ModelCapability, preferred_model: str | None = None
+) -> str | None:
     """Convenience function to get best model for a capability."""
     return get_multimodal_manager().get_best_model_for(capability, preferred_model)
 
 
 def pull_model_with_fallback(
     primary_model: str,
-    capability: Optional[ModelCapability] = None,
+    capability: ModelCapability | None = None,
     auto_pull: bool = True,
 ) -> tuple[str, bool]:
     """Convenience function to pull model with fallback."""

@@ -11,20 +11,17 @@ model roles (student, teacher, judge/PRM) run on dedicated resources.
 import logging
 import sqlite3
 import threading
-from dataclasses import dataclass, field
-from datetime import datetime, timezone
-from enum import Enum
+from dataclasses import dataclass
+from datetime import UTC, datetime
+from enum import StrEnum
 from pathlib import Path
-from typing import Optional
 
 from config import settings
 
 logger = logging.getLogger(__name__)
 
 DB_PATH = Path("data/swarm.db")
 
 
-class ModelFormat(str, Enum):
+class ModelFormat(StrEnum):
     """Supported model weight formats."""
 
     GGUF = "gguf"  # Ollama-compatible quantised weights
@@ -33,7 +30,7 @@ class ModelFormat(str, Enum):
     OLLAMA = "ollama"  # Already loaded in Ollama by name
 
 
-class ModelRole(str, Enum):
+class ModelRole(StrEnum):
     """Role a model can play in the system (OpenClaw-RL style)."""
 
     GENERAL = "general"  # Default agent inference
@@ -60,7 +57,7 @@ class CustomModel:
 
     def __post_init__(self):
         if not self.registered_at:
-            self.registered_at = datetime.now(timezone.utc).isoformat()
+            self.registered_at = datetime.now(UTC).isoformat()
 
 
 def _get_conn() -> sqlite3.Connection:
@@ -178,11 +175,11 @@ class ModelRegistry:
         logger.info("Unregistered model: %s", name)
         return True
 
-    def get(self, name: str) -> Optional[CustomModel]:
+    def get(self, name: str) -> CustomModel | None:
         """Look up a model by name."""
         return self._models.get(name)
 
-    def list_models(self, role: Optional[ModelRole] = None) -> list[CustomModel]:
+    def list_models(self, role: ModelRole | None = None) -> list[CustomModel]:
         """List all registered models, optionally filtered by role."""
         models = list(self._models.values())
         if role is not None:
@@ -212,7 +209,7 @@ class ModelRegistry:
         if model_name not in self._models:
             return False
         with self._lock:
-            now = datetime.now(timezone.utc).isoformat()
+            now = datetime.now(UTC).isoformat()
             conn = _get_conn()
             conn.execute(
                 """
@@ -243,7 +240,7 @@ class ModelRegistry:
             del self._agent_assignments[agent_id]
         return True
 
-    def get_agent_model(self, agent_id: str) -> Optional[CustomModel]:
+    def get_agent_model(self, agent_id: str) -> CustomModel | None:
         """Get the model assigned to an agent, or None for default."""
         model_name = self._agent_assignments.get(agent_id)
         if model_name:
@@ -256,13 +253,13 @@ class ModelRegistry:
 
     # ── Role-based lookups ─────────────────────────────────────────────────
 
-    def get_reward_model(self) -> Optional[CustomModel]:
+    def get_reward_model(self) -> CustomModel | None:
         """Get the active reward/PRM model, if any."""
         reward_models = self.list_models(role=ModelRole.REWARD)
         active = [m for m in reward_models if m.active]
         return active[0] if active else None
 
-    def get_teacher_model(self) -> Optional[CustomModel]:
+    def get_teacher_model(self) -> CustomModel | None:
         """Get the active teacher model for distillation."""
         teacher_models = self.list_models(role=ModelRole.TEACHER)
         active = [m for m in teacher_models if m.active]

@@ -12,9 +12,9 @@ import logging
 import platform
 import subprocess
 from collections import deque
+from collections.abc import Callable
 from dataclasses import dataclass, field
-from datetime import datetime, timezone
-from typing import Optional
+from datetime import UTC, datetime
 
 logger = logging.getLogger(__name__)
 
@@ -25,7 +25,7 @@ class Notification:
     title: str
     message: str
     category: str  # swarm | task | agent | system | payment
-    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
+    timestamp: str = field(default_factory=lambda: datetime.now(UTC).isoformat())
     read: bool = False
 
 
@@ -86,7 +86,7 @@ class PushNotifier:
         except Exception as exc:
             logger.debug("Native notification failed: %s", exc)
 
-    def recent(self, limit: int = 20, category: Optional[str] = None) -> list[Notification]:
+    def recent(self, limit: int = 20, category: str | None = None) -> list[Notification]:
         """Get recent notifications, optionally filtered by category."""
         notifs = list(self._notifications)
         if category:
@@ -114,7 +114,7 @@ class PushNotifier:
     def clear(self) -> None:
         self._notifications.clear()
 
-    def add_listener(self, callback: "Callable[[Notification], None]") -> None:
+    def add_listener(self, callback: Callable[[Notification], None]) -> None:
        """Register a callback for real-time notification delivery."""
         self._listeners.append(callback)
 
@@ -139,7 +139,7 @@ async def notify_briefing_ready(briefing) -> None:
         logger.info("Briefing ready but no pending approvals — skipping native notification")
         return
 
-    message = f"Your morning briefing is ready. " f"{n_approvals} item(s) await your approval."
+    message = f"Your morning briefing is ready. {n_approvals} item(s) await your approval."
     notifier.notify(
         title="Morning Briefing Ready",
         message=message,

@@ -13,7 +13,7 @@ as callable tool endpoints.
 import logging
 import time
 from dataclasses import dataclass, field
-from typing import Any, Optional
+from typing import Any
 
 from config import settings
 
@@ -50,7 +50,7 @@ class OpenFangClient:
     returns a ``HandResult(success=False)`` rather than raising.
     """
 
-    def __init__(self, base_url: Optional[str] = None, timeout: int = 60) -> None:
+    def __init__(self, base_url: str | None = None, timeout: int = 60) -> None:
         self._base_url = (base_url or settings.openfang_url).rstrip("/")
         self._timeout = timeout
         self._healthy = False
@@ -90,7 +90,7 @@ class OpenFangClient:
         self,
         hand: str,
         params: dict[str, Any],
-        timeout: Optional[int] = None,
+        timeout: int | None = None,
     ) -> HandResult:
         """Execute an OpenFang Hand and return the result.
 

@@ -60,7 +60,7 @@ async def complete(
         )
         return result
     except RuntimeError as exc:
-        raise HTTPException(status_code=503, detail=str(exc))
+        raise HTTPException(status_code=503, detail=str(exc)) from exc
 
 
 @router.get("/status")

@@ -13,10 +13,10 @@ import base64
 import logging
 import time
 from dataclasses import dataclass, field
-from datetime import datetime, timedelta, timezone
+from datetime import UTC, datetime
 from enum import Enum
 from pathlib import Path
-from typing import Any, Optional
+from typing import Any
 
 try:
     import yaml
@@ -65,8 +65,8 @@ class ProviderMetrics:
     successful_requests: int = 0
     failed_requests: int = 0
     total_latency_ms: float = 0.0
-    last_request_time: Optional[str] = None
-    last_error_time: Optional[str] = None
+    last_request_time: str | None = None
+    last_error_time: str | None = None
     consecutive_failures: int = 0
 
     @property
@@ -103,19 +103,19 @@ class Provider:
     type: str  # ollama, openai, anthropic, airllm
     enabled: bool
     priority: int
-    url: Optional[str] = None
-    api_key: Optional[str] = None
-    base_url: Optional[str] = None
+    url: str | None = None
+    api_key: str | None = None
+    base_url: str | None = None
     models: list[dict] = field(default_factory=list)
 
     # Runtime state
     status: ProviderStatus = ProviderStatus.HEALTHY
     metrics: ProviderMetrics = field(default_factory=ProviderMetrics)
     circuit_state: CircuitState = CircuitState.CLOSED
-    circuit_opened_at: Optional[float] = None
+    circuit_opened_at: float | None = None
     half_open_calls: int = 0
 
-    def get_default_model(self) -> Optional[str]:
+    def get_default_model(self) -> str | None:
         """Get the default model for this provider."""
         for model in self.models:
             if model.get("default"):
@@ -124,7 +124,7 @@ class Provider:
                 return self.models[0]["name"]
         return None
 
-    def get_model_with_capability(self, capability: str) -> Optional[str]:
+    def get_model_with_capability(self, capability: str) -> str | None:
         """Get a model that supports the given capability."""
         for model in self.models:
             capabilities = model.get("capabilities", [])
@@ -191,14 +191,14 @@ class CascadeRouter:
         metrics = router.get_metrics()
     """
 
-    def __init__(self, config_path: Optional[Path] = None) -> None:
+    def __init__(self, config_path: Path | None = None) -> None:
         self.config_path = config_path or Path("config/providers.yaml")
         self.providers: list[Provider] = []
         self.config: RouterConfig = RouterConfig()
         self._load_config()
 
         # Initialize multi-modal manager if available
-        self._mm_manager: Optional[Any] = None
+        self._mm_manager: Any | None = None
         try:
             from infrastructure.models.multimodal import get_multimodal_manager
 
@@ -310,10 +310,10 @@ class CascadeRouter:
         elif provider.type == "airllm":
             # Check if airllm is installed
             try:
-                import airllm
-
-                return True
-            except ImportError:
+                import importlib.util
+
+                return importlib.util.find_spec("airllm") is not None
+            except (ImportError, ModuleNotFoundError):
                 return False
 
         elif provider.type in ("openai", "anthropic", "grok"):
@@ -368,7 +368,7 @@ class CascadeRouter:
 
     def _get_fallback_model(
         self, provider: Provider, original_model: str, content_type: ContentType
-    ) -> Optional[str]:
+    ) -> str | None:
         """Get a fallback model for the given content type."""
         # Map content type to capability
         capability_map = {
@@ -397,9 +397,9 @@ class CascadeRouter:
     async def complete(
         self,
         messages: list[dict],
-        model: Optional[str] = None,
+        model: str | None = None,
         temperature: float = 0.7,
-        max_tokens: Optional[int] = None,
+        max_tokens: int | None = None,
     ) -> dict:
         """Complete a chat conversation with automatic failover.
 
@@ -523,7 +523,7 @@ class CascadeRouter:
         messages: list[dict],
         model: str,
         temperature: float,
-        max_tokens: Optional[int],
+        max_tokens: int | None,
         content_type: ContentType = ContentType.TEXT,
     ) -> dict:
         """Try a single provider request."""
@@ -649,7 +649,7 @@ class CascadeRouter:
         messages: list[dict],
         model: str,
         temperature: float,
-        max_tokens: Optional[int],
+        max_tokens: int | None,
     ) -> dict:
         """Call OpenAI API."""
         import openai
@@ -681,7 +681,7 @@ class CascadeRouter:
         messages: list[dict],
         model: str,
         temperature: float,
-        max_tokens: Optional[int],
+        max_tokens: int | None,
     ) -> dict:
         """Call Anthropic API."""
         import anthropic
@@ -727,7 +727,7 @@ class CascadeRouter:
         messages: list[dict],
         model: str,
         temperature: float,
-        max_tokens: Optional[int],
+        max_tokens: int | None,
     ) -> dict:
         """Call xAI Grok API via OpenAI-compatible SDK."""
         import httpx
@@ -759,7 +759,7 @@ class CascadeRouter:
         provider.metrics.total_requests += 1
         provider.metrics.successful_requests += 1
         provider.metrics.total_latency_ms += latency_ms
-        provider.metrics.last_request_time = datetime.now(timezone.utc).isoformat()
+        provider.metrics.last_request_time = datetime.now(UTC).isoformat()
         provider.metrics.consecutive_failures = 0
 
         # Close circuit breaker if half-open
@@ -778,7 +778,7 @@ class CascadeRouter:
         """Record a failed request."""
         provider.metrics.total_requests += 1
         provider.metrics.failed_requests += 1
-        provider.metrics.last_error_time = datetime.now(timezone.utc).isoformat()
+        provider.metrics.last_error_time = datetime.now(UTC).isoformat()
         provider.metrics.consecutive_failures += 1
 
         # Check if we should open circuit breaker
@@ -864,7 +864,7 @@ class CascadeRouter:
         self,
         prompt: str,
         image_path: str,
-        model: Optional[str] = None,
+        model: str | None = None,
         temperature: float = 0.7,
     ) -> dict:
         """Convenience method for vision requests.
@@ -893,7 +893,7 @@ class CascadeRouter:
 
 
 # Module-level singleton
-cascade_router: Optional[CascadeRouter] = None
+cascade_router: CascadeRouter | None = None
 
 
 def get_router() -> CascadeRouter:

@@ -6,13 +6,11 @@ to provide a live feed of agent activity, task auctions, and
 system events.
 """

 import asyncio
 import collections
 import json
 import logging
 from dataclasses import asdict, dataclass
-from datetime import datetime, timezone
-from typing import Any
+from datetime import UTC, datetime

 from fastapi import WebSocket

@@ -67,7 +65,7 @@ class WebSocketManager:
         ws_event = WSEvent(
             event=event,
             data=data or {},
-            timestamp=datetime.now(timezone.utc).isoformat(),
+            timestamp=datetime.now(UTC).isoformat(),
         )
         self._event_history.append(ws_event)
@@ -14,9 +14,9 @@ Architecture:

 from abc import ABC, abstractmethod
 from dataclasses import dataclass, field
-from datetime import datetime, timezone
+from datetime import UTC, datetime
 from enum import Enum, auto
-from typing import Any, Optional
+from typing import Any


 class PlatformState(Enum):
@@ -36,9 +36,9 @@ class ChatMessage:
     author: str
     channel_id: str
     platform: str
-    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
-    message_id: Optional[str] = None
-    thread_id: Optional[str] = None
+    timestamp: str = field(default_factory=lambda: datetime.now(UTC).isoformat())
+    message_id: str | None = None
+    thread_id: str | None = None
     attachments: list[str] = field(default_factory=list)
     metadata: dict[str, Any] = field(default_factory=dict)

@@ -51,7 +51,7 @@ class ChatThread:
     title: str
     channel_id: str
     platform: str
-    created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
+    created_at: str = field(default_factory=lambda: datetime.now(UTC).isoformat())
     archived: bool = False
     message_count: int = 0
     metadata: dict[str, Any] = field(default_factory=dict)
@@ -64,7 +64,7 @@ class InviteInfo:
     url: str
     code: str
     platform: str
-    guild_name: Optional[str] = None
+    guild_name: str | None = None
     source: str = "unknown"  # "qr", "vision", "text"


@@ -77,7 +77,7 @@ class PlatformStatus:
     token_set: bool
     guild_count: int = 0
     thread_count: int = 0
-    error: Optional[str] = None
+    error: str | None = None

     def to_dict(self) -> dict[str, Any]:
         return {
@@ -112,7 +112,7 @@ class ChatPlatform(ABC):
         """Current connection state."""

     @abstractmethod
-    async def start(self, token: Optional[str] = None) -> bool:
+    async def start(self, token: str | None = None) -> bool:
         """Start the platform connection. Returns True on success."""

     @abstractmethod
@@ -121,14 +121,14 @@ class ChatPlatform(ABC):

     @abstractmethod
     async def send_message(
-        self, channel_id: str, content: str, thread_id: Optional[str] = None
-    ) -> Optional[ChatMessage]:
+        self, channel_id: str, content: str, thread_id: str | None = None
+    ) -> ChatMessage | None:
         """Send a message. Optionally within a thread."""

     @abstractmethod
     async def create_thread(
-        self, channel_id: str, title: str, initial_message: Optional[str] = None
-    ) -> Optional[ChatThread]:
+        self, channel_id: str, title: str, initial_message: str | None = None
+    ) -> ChatThread | None:
         """Create a new thread in a channel."""

     @abstractmethod
@@ -144,5 +144,5 @@ class ChatPlatform(ABC):
         """Persist token for restarts."""

     @abstractmethod
-    def load_token(self) -> Optional[str]:
+    def load_token(self) -> str | None:
         """Load persisted token."""
@@ -23,7 +23,6 @@ Usage:
 import io
 import logging
 import re
-from typing import Optional

 from integrations.chat_bridge.base import InviteInfo

@@ -36,7 +35,7 @@ _DISCORD_PATTERNS = [
 ]


-def _extract_discord_code(text: str) -> Optional[str]:
+def _extract_discord_code(text: str) -> str | None:
     """Extract a Discord invite code from text."""
     for pattern in _DISCORD_PATTERNS:
         match = pattern.search(text)
@@ -52,7 +51,7 @@ class InviteParser:
     then regex on raw text. All local, no cloud.
     """

-    async def parse_image(self, image_data: bytes) -> Optional[InviteInfo]:
+    async def parse_image(self, image_data: bytes) -> InviteInfo | None:
         """Extract an invite from image bytes (screenshot or QR photo).

         Tries strategies in order:
@@ -70,7 +69,7 @@ class InviteParser:
         logger.info("No invite found in image via any strategy.")
         return None

-    def parse_text(self, text: str) -> Optional[InviteInfo]:
+    def parse_text(self, text: str) -> InviteInfo | None:
         """Extract an invite from plain text."""
         code = _extract_discord_code(text)
         if code:
@@ -82,7 +81,7 @@ class InviteParser:
         )
         return None

-    def _try_qr_decode(self, image_data: bytes) -> Optional[InviteInfo]:
+    def _try_qr_decode(self, image_data: bytes) -> InviteInfo | None:
         """Strategy 1: Decode QR codes from image using pyzbar."""
         try:
             from PIL import Image
@@ -111,7 +110,7 @@ class InviteParser:

         return None

-    async def _try_ollama_vision(self, image_data: bytes) -> Optional[InviteInfo]:
+    async def _try_ollama_vision(self, image_data: bytes) -> InviteInfo | None:
         """Strategy 2: Use Ollama vision model for local OCR."""
         try:
             import base64
@@ -13,7 +13,6 @@ Usage:
 """

 import logging
-from typing import Optional

 from integrations.chat_bridge.base import ChatPlatform, PlatformStatus

@@ -42,7 +41,7 @@ class PlatformRegistry:
             return True
         return False

-    def get(self, name: str) -> Optional[ChatPlatform]:
+    def get(self, name: str) -> ChatPlatform | None:
         """Get a platform by name."""
         return self._platforms.get(name)
src/integrations/chat_bridge/vendors/discord.py (vendored, 27 changes)
@@ -18,13 +18,12 @@ import asyncio
 import json
 import logging
 from pathlib import Path
-from typing import Any, Optional
+from typing import Any

 from integrations.chat_bridge.base import (
     ChatMessage,
     ChatPlatform,
     ChatThread,
     InviteInfo,
     PlatformState,
     PlatformStatus,
 )
@@ -108,9 +107,9 @@ class DiscordVendor(ChatPlatform):

     def __init__(self) -> None:
         self._client = None
-        self._token: Optional[str] = None
+        self._token: str | None = None
         self._state: PlatformState = PlatformState.DISCONNECTED
-        self._task: Optional[asyncio.Task] = None
+        self._task: asyncio.Task | None = None
         self._guild_count: int = 0
         self._active_threads: dict[str, str] = {}  # channel_id -> thread_id
         self._pending_actions: dict[str, dict] = {}  # approval_id -> action details
@@ -125,7 +124,7 @@ class DiscordVendor(ChatPlatform):
     def state(self) -> PlatformState:
         return self._state

-    async def start(self, token: Optional[str] = None) -> bool:
+    async def start(self, token: str | None = None) -> bool:
         """Start the Discord bot. Returns True on success."""
         if self._state == PlatformState.CONNECTED:
             return True
@@ -198,15 +197,13 @@ class DiscordVendor(ChatPlatform):
         self._task = None

     async def send_message(
-        self, channel_id: str, content: str, thread_id: Optional[str] = None
-    ) -> Optional[ChatMessage]:
+        self, channel_id: str, content: str, thread_id: str | None = None
+    ) -> ChatMessage | None:
         """Send a message to a Discord channel or thread."""
         if not self._client or self._state != PlatformState.CONNECTED:
             return None

         try:
-            import discord
-
             target_id = int(thread_id) if thread_id else int(channel_id)
             channel = self._client.get_channel(target_id)

@@ -228,8 +225,8 @@ class DiscordVendor(ChatPlatform):
         return None

     async def create_thread(
-        self, channel_id: str, title: str, initial_message: Optional[str] = None
-    ) -> Optional[ChatThread]:
+        self, channel_id: str, title: str, initial_message: str | None = None
+    ) -> ChatThread | None:
         """Create a new thread in a Discord channel."""
         if not self._client or self._state != PlatformState.CONNECTED:
             return None
@@ -272,8 +269,6 @@ class DiscordVendor(ChatPlatform):
             return False

         try:
-            import discord
-
             invite = await self._client.fetch_invite(invite_code)
             logger.info(
                 "Validated invite for server '%s' (code: %s)",
@@ -301,7 +296,7 @@ class DiscordVendor(ChatPlatform):
         except Exception as exc:
             logger.error("Failed to save Discord token: %s", exc)

-    def load_token(self) -> Optional[str]:
+    def load_token(self) -> str | None:
         """Load token from state file or config."""
         try:
             if _STATE_FILE.exists():
@@ -321,7 +316,7 @@ class DiscordVendor(ChatPlatform):

     # ── OAuth2 URL generation ──────────────────────────────────────────────

-    def get_oauth2_url(self) -> Optional[str]:
+    def get_oauth2_url(self) -> str | None:
         """Generate the OAuth2 URL for adding this bot to a server.

         Requires the bot to be connected to read its application ID.
@@ -514,7 +509,7 @@ class DiscordVendor(ChatPlatform):
                 asyncio.to_thread(chat_with_tools, content, session_id),
                 timeout=300,
             )
-        except asyncio.TimeoutError:
+        except TimeoutError:
             logger.error("Discord: chat_with_tools() timed out after 300s")
             response = "Sorry, that took too long. Please try a simpler request."
         except Exception as exc:
@@ -7,7 +7,7 @@ and approves/rejects work. All business logic lives here; routes stay thin.
 from __future__ import annotations

 import logging
-from typing import Any, Dict, List, Optional
+from typing import Any

 from config import settings
 from integrations.paperclip.client import PaperclipClient, paperclip
@@ -30,7 +30,7 @@ class PaperclipBridge:
     reviews results, and manages the company's goals.
     """

-    def __init__(self, client: Optional[PaperclipClient] = None):
+    def __init__(self, client: PaperclipClient | None = None):
         self.client = client or paperclip

     # ── status / health ──────────────────────────────────────────────────
@@ -75,10 +75,10 @@ class PaperclipBridge:
         self,
         title: str,
         description: str = "",
-        assignee_id: Optional[str] = None,
-        priority: Optional[str] = None,
+        assignee_id: str | None = None,
+        priority: str | None = None,
         wake: bool = True,
-    ) -> Optional[PaperclipIssue]:
+    ) -> PaperclipIssue | None:
         """Create an issue and optionally assign + wake an agent.

         This is the primary CEO action: decide what needs doing, create
@@ -110,7 +110,7 @@ class PaperclipBridge:
         self,
         issue_id: str,
         agent_id: str,
-        message: Optional[str] = None,
+        message: str | None = None,
     ) -> bool:
         """Assign an existing issue to an agent and wake them."""
         updated = await self.client.update_issue(
@@ -129,7 +129,7 @@ class PaperclipBridge:
     async def review_issue(
         self,
         issue_id: str,
-    ) -> Dict[str, Any]:
+    ) -> dict[str, Any]:
         """Gather all context for CEO review of an issue."""
         issue = await self.client.get_issue(issue_id)
         comments = await self.client.list_comments(issue_id)
@@ -139,7 +139,7 @@ class PaperclipBridge:
             "comments": [c.model_dump() for c in comments],
         }

-    async def close_issue(self, issue_id: str, comment: Optional[str] = None) -> bool:
+    async def close_issue(self, issue_id: str, comment: str | None = None) -> bool:
         """Close an issue as the CEO."""
         if comment:
             await self.client.add_comment(issue_id, f"[CEO] {comment}")
@@ -151,25 +151,25 @@ class PaperclipBridge:

     # ── CEO actions: team management ─────────────────────────────────────

-    async def get_team(self) -> List[PaperclipAgent]:
+    async def get_team(self) -> list[PaperclipAgent]:
         """Get the full agent roster."""
         return await self.client.list_agents()

-    async def get_org_chart(self) -> Optional[Dict[str, Any]]:
+    async def get_org_chart(self) -> dict[str, Any] | None:
         """Get the organizational hierarchy."""
         return await self.client.get_org()

     # ── CEO actions: goal management ─────────────────────────────────────

-    async def list_goals(self) -> List[PaperclipGoal]:
+    async def list_goals(self) -> list[PaperclipGoal]:
         return await self.client.list_goals()

-    async def set_goal(self, title: str, description: str = "") -> Optional[PaperclipGoal]:
+    async def set_goal(self, title: str, description: str = "") -> PaperclipGoal | None:
         return await self.client.create_goal(title, description)

     # ── CEO actions: approvals ───────────────────────────────────────────

-    async def pending_approvals(self) -> List[Dict[str, Any]]:
+    async def pending_approvals(self) -> list[dict[str, Any]]:
         return await self.client.list_approvals()

     async def approve(self, approval_id: str, comment: str = "") -> bool:
@@ -182,7 +182,7 @@ class PaperclipBridge:

     # ── CEO actions: monitoring ──────────────────────────────────────────

-    async def active_runs(self) -> List[Dict[str, Any]]:
+    async def active_runs(self) -> list[dict[str, Any]]:
         """Get currently running heartbeat executions."""
         return await self.client.list_heartbeat_runs()
@@ -11,15 +11,13 @@ re-uses the session cookie thereafter.

 from __future__ import annotations

 import base64
 import logging
-from typing import Any, Dict, List, Optional
+from typing import Any

 import httpx

 from config import settings
 from integrations.paperclip.models import (
     AddCommentRequest,
     CreateIssueRequest,
     PaperclipAgent,
     PaperclipComment,
@@ -40,20 +38,20 @@ class PaperclipClient:

     def __init__(
         self,
-        base_url: Optional[str] = None,
-        api_key: Optional[str] = None,
+        base_url: str | None = None,
+        api_key: str | None = None,
         timeout: int = 30,
     ):
         self._base_url = (base_url or settings.paperclip_url).rstrip("/")
         self._api_key = api_key or settings.paperclip_api_key
         self._timeout = timeout or settings.paperclip_timeout
-        self._client: Optional[httpx.AsyncClient] = None
+        self._client: httpx.AsyncClient | None = None

     # ── lifecycle ────────────────────────────────────────────────────────

     def _get_client(self) -> httpx.AsyncClient:
         if self._client is None or self._client.is_closed:
-            headers: Dict[str, str] = {"Accept": "application/json"}
+            headers: dict[str, str] = {"Accept": "application/json"}
             if self._api_key:
                 headers["Authorization"] = f"Bearer {self._api_key}"
             self._client = httpx.AsyncClient(
@@ -69,7 +67,7 @@ class PaperclipClient:

     # ── helpers ──────────────────────────────────────────────────────────

-    async def _get(self, path: str, params: Optional[Dict] = None) -> Optional[Any]:
+    async def _get(self, path: str, params: dict | None = None) -> Any | None:
         try:
             resp = await self._get_client().get(path, params=params)
             resp.raise_for_status()
@@ -78,7 +76,7 @@ class PaperclipClient:
             logger.warning("Paperclip GET %s failed: %s", path, exc)
             return None

-    async def _post(self, path: str, json: Optional[Dict] = None) -> Optional[Any]:
+    async def _post(self, path: str, json: dict | None = None) -> Any | None:
         try:
             resp = await self._get_client().post(path, json=json)
             resp.raise_for_status()
@@ -87,7 +85,7 @@ class PaperclipClient:
             logger.warning("Paperclip POST %s failed: %s", path, exc)
             return None

-    async def _patch(self, path: str, json: Optional[Dict] = None) -> Optional[Any]:
+    async def _patch(self, path: str, json: dict | None = None) -> Any | None:
         try:
             resp = await self._get_client().patch(path, json=json)
             resp.raise_for_status()
@@ -114,13 +112,13 @@ class PaperclipClient:

     # ── companies ────────────────────────────────────────────────────────

-    async def list_companies(self) -> List[Dict[str, Any]]:
+    async def list_companies(self) -> list[dict[str, Any]]:
         data = await self._get("/api/companies")
         return data if isinstance(data, list) else []

     # ── agents ───────────────────────────────────────────────────────────

-    async def list_agents(self, company_id: Optional[str] = None) -> List[PaperclipAgent]:
+    async def list_agents(self, company_id: str | None = None) -> list[PaperclipAgent]:
         cid = company_id or settings.paperclip_company_id
         if not cid:
             logger.warning("paperclip_company_id not set — cannot list agents")
@@ -130,25 +128,25 @@ class PaperclipClient:
             return []
         return [PaperclipAgent(**a) for a in data]

-    async def get_agent(self, agent_id: str) -> Optional[PaperclipAgent]:
+    async def get_agent(self, agent_id: str) -> PaperclipAgent | None:
         data = await self._get(f"/api/agents/{agent_id}")
         return PaperclipAgent(**data) if data else None

     async def wake_agent(
         self,
         agent_id: str,
-        issue_id: Optional[str] = None,
-        message: Optional[str] = None,
-    ) -> Optional[Dict[str, Any]]:
+        issue_id: str | None = None,
+        message: str | None = None,
+    ) -> dict[str, Any] | None:
         """Trigger a heartbeat wake for an agent."""
-        body: Dict[str, Any] = {}
+        body: dict[str, Any] = {}
         if issue_id:
             body["issueId"] = issue_id
         if message:
             body["message"] = message
         return await self._post(f"/api/agents/{agent_id}/wakeup", json=body)

-    async def get_org(self, company_id: Optional[str] = None) -> Optional[Dict[str, Any]]:
+    async def get_org(self, company_id: str | None = None) -> dict[str, Any] | None:
         cid = company_id or settings.paperclip_company_id
         if not cid:
             return None
@@ -158,13 +156,13 @@ class PaperclipClient:

     async def list_issues(
         self,
-        company_id: Optional[str] = None,
-        status: Optional[str] = None,
-    ) -> List[PaperclipIssue]:
+        company_id: str | None = None,
+        status: str | None = None,
+    ) -> list[PaperclipIssue]:
         cid = company_id or settings.paperclip_company_id
         if not cid:
             return []
-        params: Dict[str, str] = {}
+        params: dict[str, str] = {}
         if status:
             params["status"] = status
         data = await self._get(f"/api/companies/{cid}/issues", params=params)
@@ -172,15 +170,15 @@ class PaperclipClient:
             return []
         return [PaperclipIssue(**i) for i in data]

-    async def get_issue(self, issue_id: str) -> Optional[PaperclipIssue]:
+    async def get_issue(self, issue_id: str) -> PaperclipIssue | None:
         data = await self._get(f"/api/issues/{issue_id}")
         return PaperclipIssue(**data) if data else None

     async def create_issue(
         self,
         req: CreateIssueRequest,
-        company_id: Optional[str] = None,
-    ) -> Optional[PaperclipIssue]:
+        company_id: str | None = None,
+    ) -> PaperclipIssue | None:
         cid = company_id or settings.paperclip_company_id
         if not cid:
             logger.warning("paperclip_company_id not set — cannot create issue")
@@ -195,7 +193,7 @@ class PaperclipClient:
         self,
         issue_id: str,
         req: UpdateIssueRequest,
-    ) -> Optional[PaperclipIssue]:
+    ) -> PaperclipIssue | None:
         data = await self._patch(
             f"/api/issues/{issue_id}",
             json=req.model_dump(exclude_none=True),
@@ -207,7 +205,7 @@ class PaperclipClient:

     # ── issue comments ───────────────────────────────────────────────────

-    async def list_comments(self, issue_id: str) -> List[PaperclipComment]:
+    async def list_comments(self, issue_id: str) -> list[PaperclipComment]:
         data = await self._get(f"/api/issues/{issue_id}/comments")
         if not isinstance(data, list):
             return []
@@ -217,7 +215,7 @@ class PaperclipClient:
         self,
         issue_id: str,
         content: str,
-    ) -> Optional[PaperclipComment]:
+    ) -> PaperclipComment | None:
         data = await self._post(
             f"/api/issues/{issue_id}/comments",
             json={"content": content},
@@ -226,20 +224,20 @@ class PaperclipClient:

     # ── issue workflow ───────────────────────────────────────────────────

-    async def checkout_issue(self, issue_id: str) -> Optional[Dict[str, Any]]:
+    async def checkout_issue(self, issue_id: str) -> dict[str, Any] | None:
         """Assign an issue to Timmy (checkout)."""
-        body: Dict[str, Any] = {}
+        body: dict[str, Any] = {}
         if settings.paperclip_agent_id:
             body["agentId"] = settings.paperclip_agent_id
         return await self._post(f"/api/issues/{issue_id}/checkout", json=body)

-    async def release_issue(self, issue_id: str) -> Optional[Dict[str, Any]]:
+    async def release_issue(self, issue_id: str) -> dict[str, Any] | None:
         """Release a checked-out issue."""
         return await self._post(f"/api/issues/{issue_id}/release")

     # ── goals ────────────────────────────────────────────────────────────

-    async def list_goals(self, company_id: Optional[str] = None) -> List[PaperclipGoal]:
+    async def list_goals(self, company_id: str | None = None) -> list[PaperclipGoal]:
         cid = company_id or settings.paperclip_company_id
         if not cid:
             return []
@@ -252,8 +250,8 @@ class PaperclipClient:
         self,
         title: str,
         description: str = "",
-        company_id: Optional[str] = None,
-    ) -> Optional[PaperclipGoal]:
+        company_id: str | None = None,
+    ) -> PaperclipGoal | None:
         cid = company_id or settings.paperclip_company_id
         if not cid:
             return None
@@ -267,38 +265,38 @@ class PaperclipClient:

     async def list_heartbeat_runs(
         self,
-        company_id: Optional[str] = None,
-    ) -> List[Dict[str, Any]]:
+        company_id: str | None = None,
+    ) -> list[dict[str, Any]]:
         cid = company_id or settings.paperclip_company_id
         if not cid:
             return []
         data = await self._get(f"/api/companies/{cid}/heartbeat-runs")
         return data if isinstance(data, list) else []

-    async def get_run_events(self, run_id: str) -> List[Dict[str, Any]]:
+    async def get_run_events(self, run_id: str) -> list[dict[str, Any]]:
         data = await self._get(f"/api/heartbeat-runs/{run_id}/events")
         return data if isinstance(data, list) else []

-    async def cancel_run(self, run_id: str) -> Optional[Dict[str, Any]]:
+    async def cancel_run(self, run_id: str) -> dict[str, Any] | None:
         return await self._post(f"/api/heartbeat-runs/{run_id}/cancel")

     # ── approvals ────────────────────────────────────────────────────────

-    async def list_approvals(self, company_id: Optional[str] = None) -> List[Dict[str, Any]]:
+    async def list_approvals(self, company_id: str | None = None) -> list[dict[str, Any]]:
         cid = company_id or settings.paperclip_company_id
         if not cid:
             return []
         data = await self._get(f"/api/companies/{cid}/approvals")
         return data if isinstance(data, list) else []

-    async def approve(self, approval_id: str, comment: str = "") -> Optional[Dict[str, Any]]:
-        body: Dict[str, Any] = {}
+    async def approve(self, approval_id: str, comment: str = "") -> dict[str, Any] | None:
+        body: dict[str, Any] = {}
         if comment:
             body["comment"] = comment
         return await self._post(f"/api/approvals/{approval_id}/approve", json=body)

-    async def reject(self, approval_id: str, comment: str = "") -> Optional[Dict[str, Any]]:
-        body: Dict[str, Any] = {}
+    async def reject(self, approval_id: str, comment: str = "") -> dict[str, Any] | None:
+        body: dict[str, Any] = {}
         if comment:
             body["comment"] = comment
         return await self._post(f"/api/approvals/{approval_id}/reject", json=body)
@@ -2,9 +2,6 @@

 from __future__ import annotations

-from datetime import datetime
-from typing import Any, Dict, List, Optional
-
 from pydantic import BaseModel, Field

 # ── Inbound: Paperclip → Timmy ──────────────────────────────────────────────
@@ -17,12 +14,12 @@ class PaperclipIssue(BaseModel):
     title: str
     description: str = ""
     status: str = "open"
-    priority: Optional[str] = None
-    assignee_id: Optional[str] = None
-    project_id: Optional[str] = None
-    labels: List[str] = Field(default_factory=list)
-    created_at: Optional[str] = None
-    updated_at: Optional[str] = None
+    priority: str | None = None
+    assignee_id: str | None = None
+    project_id: str | None = None
+    labels: list[str] = Field(default_factory=list)
+    created_at: str | None = None
+    updated_at: str | None = None


 class PaperclipComment(BaseModel):
@@ -31,8 +28,8 @@ class PaperclipComment(BaseModel):
     id: str
     issue_id: str
     content: str
-    author: Optional[str] = None
-    created_at: Optional[str] = None
+    author: str | None = None
+    created_at: str | None = None


 class PaperclipAgent(BaseModel):
@@ -42,8 +39,8 @@ class PaperclipAgent(BaseModel):
     name: str
     role: str = ""
     status: str = "active"
-    adapter_type: Optional[str] = None
-    company_id: Optional[str] = None
+    adapter_type: str | None = None
+    company_id: str | None = None


 class PaperclipGoal(BaseModel):
@@ -53,7 +50,7 @@ class PaperclipGoal(BaseModel):
     title: str
     description: str = ""
     status: str = "active"
-    company_id: Optional[str] = None
+    company_id: str | None = None


 class HeartbeatRun(BaseModel):
@@ -62,9 +59,9 @@ class HeartbeatRun(BaseModel):
     id: str
     agent_id: str
     status: str
-    issue_id: Optional[str] = None
-    started_at: Optional[str] = None
-    finished_at: Optional[str] = None
+    issue_id: str | None = None
+    started_at: str | None = None
+    finished_at: str | None = None


 # ── Outbound: Timmy → Paperclip ─────────────────────────────────────────────
@@ -75,20 +72,20 @@ class CreateIssueRequest(BaseModel):

     title: str
     description: str = ""
-    priority: Optional[str] = None
-    assignee_id: Optional[str] = None
-    project_id: Optional[str] = None
-    labels: List[str] = Field(default_factory=list)
+    priority: str | None = None
+    assignee_id: str | None = None
+    project_id: str | None = None
+    labels: list[str] = Field(default_factory=list)


 class UpdateIssueRequest(BaseModel):
     """Request to update an existing issue."""

-    title: Optional[str] = None
-    description: Optional[str] = None
-    status: Optional[str] = None
-    priority: Optional[str] = None
-    assignee_id: Optional[str] = None
+    title: str | None = None
+    description: str | None = None
+    status: str | None = None
+    priority: str | None = None
+    assignee_id: str | None = None


 class AddCommentRequest(BaseModel):
@@ -100,8 +97,8 @@ class AddCommentRequest(BaseModel):
 class WakeAgentRequest(BaseModel):
     """Request to wake an agent via heartbeat."""

-    issue_id: Optional[str] = None
-    message: Optional[str] = None
+    issue_id: str | None = None
+    message: str | None = None


 # ── API route models ─────────────────────────────────────────────────────────
@@ -116,4 +113,4 @@ class PaperclipStatusResponse(BaseModel):
     company_id: str = ""
     agent_count: int = 0
     issue_count: int = 0
-    error: Optional[str] = None
+    error: str | None = None
@@ -17,7 +17,8 @@ from __future__ import annotations

 import asyncio
 import logging
-from typing import Any, Callable, Coroutine, Dict, List, Optional, Protocol, runtime_checkable
+from collections.abc import Callable, Coroutine
+from typing import Any, Protocol, runtime_checkable

 from config import settings
 from integrations.paperclip.bridge import PaperclipBridge
@@ -37,7 +38,7 @@ class Orchestrator(Protocol):
 def _wrap_orchestrator(orch: Orchestrator) -> Callable:
     """Adapt an orchestrator's execute_task to the process_fn signature."""

-    async def _process(task_id: str, description: str, context: Dict) -> str:
+    async def _process(task_id: str, description: str, context: dict) -> str:
         raw = await orch.execute_task(task_id, description, context)
         # execute_task may return str or dict — normalise to str
         if isinstance(raw, dict):
@@ -60,9 +61,9 @@ class TaskRunner:

     def __init__(
         self,
-        bridge: Optional[PaperclipBridge] = None,
-        orchestrator: Optional[Orchestrator] = None,
-        process_fn: Optional[Callable[[str, str, Dict], Coroutine[Any, Any, str]]] = None,
+        bridge: PaperclipBridge | None = None,
+        orchestrator: Orchestrator | None = None,
+        process_fn: Callable[[str, str, dict], Coroutine[Any, Any, str]] | None = None,
     ):
         self.bridge = bridge or default_bridge
         self.orchestrator = orchestrator
@@ -79,7 +80,7 @@ class TaskRunner:

     # ── single cycle ──────────────────────────────────────────────────

-    async def grab_next_task(self) -> Optional[PaperclipIssue]:
+    async def grab_next_task(self) -> PaperclipIssue | None:
         """Grab the first open issue assigned to Timmy."""
         agent_id = settings.paperclip_agent_id
         if not agent_id:
@@ -126,7 +127,7 @@ class TaskRunner:

     async def create_follow_up(
         self, original: PaperclipIssue, result: str
-    ) -> Optional[PaperclipIssue]:
+    ) -> PaperclipIssue | None:
         """Create a recursive follow-up task for Timmy.

         Timmy muses about task automation and writes a follow-up issue
@@ -149,7 +150,7 @@ class TaskRunner:
             wake=False,  # Don't wake immediately — let the next poll pick it up
         )

-    async def run_once(self) -> Optional[Dict[str, Any]]:
+    async def run_once(self) -> dict[str, Any] | None:
         """Execute one full cycle of the green-path workflow.

         Returns a summary dict on success, None if no work found.
@@ -90,12 +90,11 @@ class TelegramBot:
|
||||
from telegram.ext import (
|
||||
Application,
|
||||
CommandHandler,
|
||||
ContextTypes,
|
||||
MessageHandler,
|
||||
filters,
|
||||
)
|
||||
except ImportError:
|
||||
logger.error("python-telegram-bot is not installed. " 'Run: pip install ".[telegram]"')
|
||||
logger.error('python-telegram-bot is not installed. Run: pip install ".[telegram]"')
|
||||
return False
|
||||
|
||||
try:
|
||||
|
||||
@@ -18,7 +18,6 @@ Intents:
|
||||
import logging
|
||||
import re
|
||||
from dataclasses import dataclass
|
||||
from typing import Optional
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -142,7 +141,7 @@ def detect_intent(text: str) -> Intent:
|
||||
return intent
|
||||
|
||||
|
||||
def extract_command(text: str) -> Optional[str]:
|
||||
def extract_command(text: str) -> str | None:
|
||||
"""Extract a direct command from text, if present.
|
||||
|
||||
Commands are prefixed with '/' or 'timmy,' — e.g.:
|
||||
|
||||
@@ -13,9 +13,7 @@ Categories
|
||||
|
||||
import json
|
||||
import logging
|
||||
from dataclasses import dataclass, field
|
||||
from datetime import datetime, timezone
|
||||
from typing import Optional
|
||||
from dataclasses import dataclass
|
||||
|
||||
from spark import eidos as spark_eidos
|
||||
from spark import memory as spark_memory
|
||||
@@ -35,7 +33,7 @@ class Advisory:
|
||||
title: str # Short headline
|
||||
detail: str # Longer explanation
|
||||
suggested_action: str # What to do about it
|
||||
subject: Optional[str] = None # agent_id or None for system-level
|
||||
subject: str | None = None # agent_id or None for system-level
|
||||
evidence_count: int = 0 # Number of supporting events
|
||||
|
||||
|
||||
|
||||
@@ -17,9 +17,8 @@ import logging
|
||||
import sqlite3
|
||||
import uuid
|
||||
from dataclasses import dataclass
|
||||
from datetime import datetime, timezone
|
||||
from datetime import UTC, datetime
|
||||
from pathlib import Path
|
||||
from typing import Optional
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -34,10 +33,10 @@ class Prediction:
|
||||
task_id: str
|
||||
prediction_type: str # outcome, best_agent, bid_range
|
||||
predicted_value: str # JSON-encoded prediction
|
||||
actual_value: Optional[str] # JSON-encoded actual (filled on evaluation)
|
||||
accuracy: Optional[float] # 0.0–1.0 (filled on evaluation)
|
||||
actual_value: str | None # JSON-encoded actual (filled on evaluation)
|
||||
accuracy: float | None # 0.0–1.0 (filled on evaluation)
|
||||
created_at: str
|
||||
evaluated_at: Optional[str]
|
||||
evaluated_at: str | None
|
||||
|
||||
|
||||
def _get_conn() -> sqlite3.Connection:
|
||||
@@ -71,7 +70,7 @@ def predict_task_outcome(
|
||||
task_id: str,
|
||||
task_description: str,
|
||||
candidate_agents: list[str],
|
||||
agent_history: Optional[dict] = None,
|
||||
agent_history: dict | None = None,
|
||||
) -> dict:
|
||||
"""Predict the outcome of a task before it's assigned.
|
||||
|
||||
@@ -119,7 +118,7 @@ def predict_task_outcome(
|
||||
|
||||
# Store prediction
|
||||
pred_id = str(uuid.uuid4())
|
||||
now = datetime.now(timezone.utc).isoformat()
|
||||
now = datetime.now(UTC).isoformat()
|
||||
conn = _get_conn()
|
||||
conn.execute(
|
||||
"""
|
||||
@@ -141,10 +140,10 @@ def predict_task_outcome(
|
||||
|
||||
def evaluate_prediction(
|
||||
task_id: str,
|
||||
actual_winner: Optional[str],
|
||||
actual_winner: str | None,
|
||||
task_succeeded: bool,
|
||||
winning_bid: Optional[int] = None,
|
||||
) -> Optional[dict]:
|
||||
winning_bid: int | None = None,
|
||||
) -> dict | None:
|
||||
"""Evaluate a stored prediction against actual outcomes.
|
||||
|
||||
Returns the evaluation result or None if no prediction exists.
|
||||
@@ -172,7 +171,7 @@ def evaluate_prediction(
|
||||
|
||||
# Calculate accuracy
|
||||
accuracy = _compute_accuracy(predicted, actual)
|
||||
now = datetime.now(timezone.utc).isoformat()
|
||||
now = datetime.now(UTC).isoformat()
|
||||
|
||||
conn.execute(
|
||||
"""
|
||||
@@ -239,7 +238,7 @@ def _compute_accuracy(predicted: dict, actual: dict) -> float:
|
||||
|
||||
|
||||
def get_predictions(
|
||||
task_id: Optional[str] = None,
|
||||
task_id: str | None = None,
|
||||
evaluated_only: bool = False,
|
||||
limit: int = 50,
|
||||
) -> list[Prediction]:
|
||||
|
||||
@@ -23,7 +23,6 @@ Usage
|
||||
|
||||
import json
|
||||
import logging
|
||||
from typing import Optional
|
||||
|
||||
from spark import advisor as spark_advisor
|
||||
from spark import eidos as spark_eidos
|
||||
@@ -52,8 +51,8 @@ class SparkEngine:
|
||||
self,
|
||||
task_id: str,
|
||||
description: str,
|
||||
candidate_agents: Optional[list[str]] = None,
|
||||
) -> Optional[str]:
|
||||
candidate_agents: list[str] | None = None,
|
||||
) -> str | None:
|
||||
"""Capture a task-posted event and generate a prediction."""
|
||||
if not self._enabled:
|
||||
return None
|
||||
@@ -81,7 +80,7 @@ class SparkEngine:
|
||||
task_id: str,
|
||||
agent_id: str,
|
||||
bid_sats: int,
|
||||
) -> Optional[str]:
|
||||
) -> str | None:
|
||||
"""Capture a bid event."""
|
||||
if not self._enabled:
|
||||
return None
|
||||
@@ -101,7 +100,7 @@ class SparkEngine:
|
||||
self,
|
||||
task_id: str,
|
||||
agent_id: str,
|
||||
) -> Optional[str]:
|
||||
) -> str | None:
|
||||
"""Capture a task-assigned event."""
|
||||
if not self._enabled:
|
||||
return None
|
||||
@@ -121,8 +120,8 @@ class SparkEngine:
|
||||
task_id: str,
|
||||
agent_id: str,
|
||||
result: str,
|
||||
winning_bid: Optional[int] = None,
|
||||
) -> Optional[str]:
|
||||
winning_bid: int | None = None,
|
||||
) -> str | None:
|
||||
"""Capture a task-completed event and evaluate EIDOS prediction."""
|
||||
if not self._enabled:
|
||||
return None
|
||||
@@ -168,7 +167,7 @@ class SparkEngine:
|
||||
task_id: str,
|
||||
agent_id: str,
|
||||
reason: str,
|
||||
) -> Optional[str]:
|
||||
) -> str | None:
|
||||
"""Capture a task-failed event and evaluate EIDOS prediction."""
|
||||
if not self._enabled:
|
||||
return None
|
||||
@@ -194,7 +193,7 @@ class SparkEngine:
|
||||
logger.debug("Spark: captured failure %s by %s", task_id[:8], agent_id[:8])
|
||||
return event_id
|
||||
|
||||
def on_agent_joined(self, agent_id: str, name: str) -> Optional[str]:
|
||||
def on_agent_joined(self, agent_id: str, name: str) -> str | None:
|
||||
"""Capture an agent-joined event."""
|
||||
if not self._enabled:
|
||||
return None
|
||||
@@ -211,10 +210,10 @@ class SparkEngine:
|
||||
self,
|
||||
agent_id: str,
|
||||
tool_name: str,
|
||||
task_id: Optional[str] = None,
|
||||
task_id: str | None = None,
|
||||
success: bool = True,
|
||||
duration_ms: Optional[int] = None,
|
||||
) -> Optional[str]:
|
||||
duration_ms: int | None = None,
|
||||
) -> str | None:
|
||||
"""Capture an individual tool invocation.
|
||||
|
||||
Tracks which tools each agent uses, success rates, and latency
|
||||
@@ -243,9 +242,9 @@ class SparkEngine:
|
||||
project_id: str,
|
||||
step_name: str,
|
||||
agent_id: str,
|
||||
output_path: Optional[str] = None,
|
||||
output_path: str | None = None,
|
||||
success: bool = True,
|
||||
) -> Optional[str]:
|
||||
) -> str | None:
|
||||
"""Capture a creative pipeline step (storyboard, music, video, assembly).
|
||||
|
||||
Tracks pipeline progress and creative output quality metrics
|
||||
|
||||
@@ -13,9 +13,8 @@ spark_memories — consolidated insights extracted from event patterns
|
||||
import sqlite3
|
||||
import uuid
|
||||
from dataclasses import dataclass
|
||||
from datetime import datetime, timezone
|
||||
from datetime import UTC, datetime
|
||||
from pathlib import Path
|
||||
from typing import Optional
|
||||
|
||||
DB_PATH = Path("data/spark.db")
|
||||
|
||||
@@ -31,8 +30,8 @@ class SparkEvent:
|
||||
|
||||
id: str
|
||||
event_type: str # task_posted, bid, assignment, completion, failure
|
||||
agent_id: Optional[str]
|
||||
task_id: Optional[str]
|
||||
agent_id: str | None
|
||||
task_id: str | None
|
||||
description: str
|
||||
data: str # JSON payload
|
||||
importance: float # 0.0–1.0
|
||||
@@ -50,7 +49,7 @@ class SparkMemory:
|
||||
confidence: float # 0.0–1.0
|
||||
source_events: int # How many events contributed
|
||||
created_at: str
|
||||
expires_at: Optional[str]
|
||||
expires_at: str | None
|
||||
|
||||
|
||||
def _get_conn() -> sqlite3.Connection:
|
||||
@@ -129,16 +128,16 @@ def score_importance(event_type: str, data: dict) -> float:
|
||||
def record_event(
|
||||
event_type: str,
|
||||
description: str,
|
||||
agent_id: Optional[str] = None,
|
||||
task_id: Optional[str] = None,
|
||||
agent_id: str | None = None,
|
||||
task_id: str | None = None,
|
||||
data: str = "{}",
|
||||
importance: Optional[float] = None,
|
||||
importance: float | None = None,
|
||||
) -> str:
|
||||
"""Record a swarm event. Returns the event id."""
|
||||
import json
|
||||
|
||||
event_id = str(uuid.uuid4())
|
||||
now = datetime.now(timezone.utc).isoformat()
|
||||
now = datetime.now(UTC).isoformat()
|
||||
|
||||
if importance is None:
|
||||
try:
|
||||
@@ -162,9 +161,9 @@ def record_event(
|
||||
|
||||
|
||||
def get_events(
|
||||
event_type: Optional[str] = None,
|
||||
agent_id: Optional[str] = None,
|
||||
task_id: Optional[str] = None,
|
||||
event_type: str | None = None,
|
||||
agent_id: str | None = None,
|
||||
task_id: str | None = None,
|
||||
limit: int = 100,
|
||||
min_importance: float = 0.0,
|
||||
) -> list[SparkEvent]:
|
||||
@@ -203,7 +202,7 @@ def get_events(
|
||||
]
|
||||
|
||||
|
||||
def count_events(event_type: Optional[str] = None) -> int:
|
||||
def count_events(event_type: str | None = None) -> int:
|
||||
"""Count events, optionally filtered by type."""
|
||||
conn = _get_conn()
|
||||
if event_type:
|
||||
@@ -226,11 +225,11 @@ def store_memory(
|
||||
content: str,
|
||||
confidence: float = 0.5,
|
||||
source_events: int = 0,
|
||||
expires_at: Optional[str] = None,
|
||||
expires_at: str | None = None,
|
||||
) -> str:
|
||||
"""Store a consolidated memory. Returns the memory id."""
|
||||
mem_id = str(uuid.uuid4())
|
||||
now = datetime.now(timezone.utc).isoformat()
|
||||
now = datetime.now(UTC).isoformat()
|
||||
conn = _get_conn()
|
||||
conn.execute(
|
||||
"""
|
||||
@@ -246,8 +245,8 @@ def store_memory(
|
||||
|
||||
|
||||
def get_memories(
|
||||
memory_type: Optional[str] = None,
|
||||
subject: Optional[str] = None,
|
||||
memory_type: str | None = None,
|
||||
subject: str | None = None,
|
||||
min_confidence: float = 0.0,
|
||||
limit: int = 50,
|
||||
) -> list[SparkMemory]:
|
||||
@@ -283,7 +282,7 @@ def get_memories(
|
||||
]
|
||||
|
||||
|
||||
def count_memories(memory_type: Optional[str] = None) -> int:
|
||||
def count_memories(memory_type: str | None = None) -> int:
|
||||
"""Count memories, optionally filtered by type."""
|
||||
conn = _get_conn()
|
||||
if memory_type:
|
||||
|
||||
@@ -12,10 +12,9 @@ import logging
|
||||
import sqlite3
|
||||
import uuid
|
||||
from dataclasses import dataclass, field
|
||||
from datetime import datetime, timezone
|
||||
from datetime import UTC, datetime
|
||||
from enum import Enum
|
||||
from pathlib import Path
|
||||
from typing import Optional
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -131,7 +130,7 @@ def _publish_to_event_bus(entry: EventLogEntry) -> None:
|
||||
def log_event(
|
||||
event_type: EventType,
|
||||
source: str = "",
|
||||
data: Optional[dict] = None,
|
||||
data: dict | None = None,
|
||||
task_id: str = "",
|
||||
agent_id: str = "",
|
||||
) -> EventLogEntry:
|
||||
@@ -144,7 +143,7 @@ def log_event(
|
||||
id=str(uuid.uuid4()),
|
||||
event_type=event_type,
|
||||
source=source,
|
||||
timestamp=datetime.now(timezone.utc).isoformat(),
|
||||
timestamp=datetime.now(UTC).isoformat(),
|
||||
data=data or {},
|
||||
task_id=task_id,
|
||||
agent_id=agent_id,
|
||||
|
||||
@@ -14,7 +14,7 @@ Handoff Protocol maintains continuity across sessions.
|
||||
"""
|
||||
|
||||
import logging
|
||||
from typing import TYPE_CHECKING, Optional, Union
|
||||
from typing import TYPE_CHECKING, Union
|
||||
|
||||
from agno.agent import Agent
|
||||
from agno.db.sqlite import SqliteDb
|
||||
@@ -101,7 +101,7 @@ def _pull_model(model_name: str) -> bool:
|
||||
|
||||
|
||||
def _resolve_model_with_fallback(
|
||||
requested_model: Optional[str] = None,
|
||||
requested_model: str | None = None,
|
||||
require_vision: bool = False,
|
||||
auto_pull: bool = True,
|
||||
) -> tuple[str, bool]:
|
||||
@@ -180,7 +180,7 @@ def _resolve_backend(requested: str | None) -> str:
|
||||
return configured
|
||||
|
||||
# "auto" path — lazy import to keep startup fast and tests clean.
|
||||
from timmy.backends import airllm_available, claude_available, grok_available, is_apple_silicon
|
||||
from timmy.backends import airllm_available, is_apple_silicon
|
||||
|
||||
if is_apple_silicon() and airllm_available():
|
||||
return "airllm"
|
||||
|
||||
@@ -19,9 +19,9 @@ All methods return effects that can be logged, audited, and replayed.
|
||||
import uuid
|
||||
from abc import ABC, abstractmethod
|
||||
from dataclasses import dataclass, field
|
||||
from datetime import datetime, timezone
|
||||
from datetime import UTC, datetime
|
||||
from enum import Enum, auto
|
||||
from typing import Any, Optional
|
||||
from typing import Any
|
||||
|
||||
|
||||
class PerceptionType(Enum):
|
||||
@@ -74,7 +74,7 @@ class AgentIdentity:
|
||||
id: str
|
||||
name: str
|
||||
version: str
|
||||
created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
|
||||
created_at: str = field(default_factory=lambda: datetime.now(UTC).isoformat())
|
||||
|
||||
@classmethod
|
||||
def generate(cls, name: str, version: str = "1.0.0") -> "AgentIdentity":
|
||||
@@ -96,7 +96,7 @@ class Perception:
|
||||
|
||||
type: PerceptionType
|
||||
data: Any # Content depends on type
|
||||
timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
|
||||
timestamp: str = field(default_factory=lambda: datetime.now(UTC).isoformat())
|
||||
source: str = "unknown" # e.g., "camera_1", "microphone", "user_input"
|
||||
metadata: dict = field(default_factory=dict)
|
||||
|
||||
@@ -129,9 +129,9 @@ class Action:
|
||||
|
||||
type: ActionType
|
||||
payload: Any # Action-specific data
|
||||
timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
|
||||
timestamp: str = field(default_factory=lambda: datetime.now(UTC).isoformat())
|
||||
confidence: float = 1.0 # 0-1, agent's certainty
|
||||
deadline: Optional[str] = None # When action must complete
|
||||
deadline: str | None = None # When action must complete
|
||||
|
||||
@classmethod
|
||||
def respond(cls, text: str, confidence: float = 1.0) -> "Action":
|
||||
@@ -163,14 +163,14 @@ class Memory:
|
||||
content: Any
|
||||
created_at: str
|
||||
access_count: int = 0
|
||||
last_accessed: Optional[str] = None
|
||||
last_accessed: str | None = None
|
||||
importance: float = 0.5 # 0-1, for pruning decisions
|
||||
tags: list[str] = field(default_factory=list)
|
||||
|
||||
def touch(self) -> None:
|
||||
"""Mark memory as accessed."""
|
||||
self.access_count += 1
|
||||
self.last_accessed = datetime.now(timezone.utc).isoformat()
|
||||
self.last_accessed = datetime.now(UTC).isoformat()
|
||||
|
||||
|
||||
@dataclass
|
||||
@@ -180,7 +180,7 @@ class Communication:
|
||||
sender: str
|
||||
recipient: str
|
||||
content: Any
|
||||
timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
|
||||
timestamp: str = field(default_factory=lambda: datetime.now(UTC).isoformat())
|
||||
protocol: str = "direct" # e.g., "http", "websocket", "speech"
|
||||
encrypted: bool = False
|
||||
|
||||
@@ -321,10 +321,9 @@ class TimAgent(ABC):
|
||||
"state": self._state.copy(),
|
||||
}
|
||||
|
||||
def shutdown(self) -> None:
|
||||
def shutdown(self) -> None: # noqa: B027
|
||||
"""Graceful shutdown. Persist state, close connections."""
|
||||
# Override in subclass for cleanup
|
||||
pass
|
||||
|
||||
|
||||
class AgentEffect:
|
||||
@@ -338,7 +337,7 @@ class AgentEffect:
|
||||
- Training: Learn from agent experiences
|
||||
"""
|
||||
|
||||
def __init__(self, log_path: Optional[str] = None) -> None:
|
||||
def __init__(self, log_path: str | None = None) -> None:
|
||||
self._effects: list[dict] = []
|
||||
self._log_path = log_path
|
||||
|
||||
@@ -350,7 +349,7 @@ class AgentEffect:
|
||||
"perception_type": perception.type.name,
|
||||
"source": perception.source,
|
||||
"memory_id": memory_id,
|
||||
"timestamp": datetime.now(timezone.utc).isoformat(),
|
||||
"timestamp": datetime.now(UTC).isoformat(),
|
||||
}
|
||||
)
|
||||
|
||||
@@ -361,7 +360,7 @@ class AgentEffect:
|
||||
"type": "reason",
|
||||
"query": query,
|
||||
"action_type": action_type.name,
|
||||
"timestamp": datetime.now(timezone.utc).isoformat(),
|
||||
"timestamp": datetime.now(UTC).isoformat(),
|
||||
}
|
||||
)
|
||||
|
||||
@@ -373,7 +372,7 @@ class AgentEffect:
|
||||
"action_type": action.type.name,
|
||||
"confidence": action.confidence,
|
||||
"result_type": type(result).__name__,
|
||||
"timestamp": datetime.now(timezone.utc).isoformat(),
|
||||
"timestamp": datetime.now(UTC).isoformat(),
|
||||
}
|
||||
)
|
||||
|
||||
|
||||
@@ -17,7 +17,7 @@ Usage:
|
||||
result = agent.act(action)
|
||||
"""
|
||||
|
||||
from typing import Any, Optional
|
||||
from typing import Any
|
||||
|
||||
from timmy.agent import _resolve_model_with_fallback, create_timmy
|
||||
from timmy.agent_core.interface import (
|
||||
@@ -51,8 +51,8 @@ class OllamaAgent(TimAgent):
|
||||
def __init__(
|
||||
self,
|
||||
identity: AgentIdentity,
|
||||
model: Optional[str] = None,
|
||||
effect_log: Optional[str] = None,
|
||||
model: str | None = None,
|
||||
effect_log: str | None = None,
|
||||
require_vision: bool = False,
|
||||
) -> None:
|
||||
"""Initialize Ollama-based agent.
|
||||
@@ -268,7 +268,7 @@ Respond naturally and helpfully."""
|
||||
|
||||
return "\n".join(parts)
|
||||
|
||||
def get_effect_log(self) -> Optional[list[dict]]:
|
||||
def get_effect_log(self) -> list[dict] | None:
|
||||
"""Export effect log if logging is enabled."""
|
||||
if self._effect_log:
|
||||
return self._effect_log.export()
|
||||
|
||||
@@ -20,8 +20,8 @@ import logging
|
||||
import re
|
||||
import time
|
||||
import uuid
|
||||
from collections.abc import Callable
|
||||
from dataclasses import dataclass, field
|
||||
from typing import Callable, Optional
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -96,7 +96,7 @@ async def run_agentic_loop(
|
||||
*,
|
||||
session_id: str = "agentic",
|
||||
max_steps: int = 0,
|
||||
on_progress: Optional[Callable] = None,
|
||||
on_progress: Callable | None = None,
|
||||
) -> AgenticResult:
|
||||
"""Execute a multi-step task with planning, execution, and adaptation.
|
||||
|
||||
@@ -276,7 +276,7 @@ async def run_agentic_loop(
|
||||
summary_prompt = (
|
||||
f"Task: {task}\n"
|
||||
f"Results:\n" + "\n".join(completed_results) + "\n\n"
|
||||
f"Summarise what was accomplished in 2-3 sentences."
|
||||
"Summarise what was accomplished in 2-3 sentences."
|
||||
)
|
||||
try:
|
||||
summary_run = await asyncio.to_thread(
|
||||
|
||||
@@ -12,7 +12,7 @@ SubAgent is the concrete implementation used for all persona-based agents
|
||||
|
||||
import logging
|
||||
from abc import ABC, abstractmethod
|
||||
from typing import Any, Optional
|
||||
from typing import Any
|
||||
|
||||
from agno.agent import Agent
|
||||
from agno.models.ollama import Ollama
|
||||
@@ -48,7 +48,7 @@ class BaseAgent(ABC):
|
||||
self.agent = self._create_agent(system_prompt)
|
||||
|
||||
# Event bus for communication
|
||||
self.event_bus: Optional[EventBus] = None
|
||||
self.event_bus: EventBus | None = None
|
||||
|
||||
logger.info("%s agent initialized (id: %s)", name, agent_id)
|
||||
|
||||
|
||||
@@ -5,15 +5,12 @@ Uses the three-tier memory system and MCP tools.
|
||||
"""
|
||||
|
||||
import logging
|
||||
from datetime import datetime, timezone
|
||||
from datetime import UTC, datetime
|
||||
from pathlib import Path
|
||||
from typing import Any, Optional
|
||||
|
||||
from agno.agent import Agent
|
||||
from agno.models.ollama import Ollama
|
||||
from typing import Any
|
||||
|
||||
from config import settings
|
||||
from infrastructure.events.bus import EventBus, event_bus
|
||||
from infrastructure.events.bus import event_bus
|
||||
from timmy.agents.base import BaseAgent, SubAgent
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
@@ -44,7 +41,7 @@ def build_timmy_context_sync() -> dict[str, Any]:
|
||||
global _timmy_context
|
||||
|
||||
ctx: dict[str, Any] = {
|
||||
"timestamp": datetime.now(timezone.utc).isoformat(),
|
||||
"timestamp": datetime.now(UTC).isoformat(),
|
||||
"repo_root": settings.repo_root,
|
||||
"git_log": "",
|
||||
"agents": [],
|
||||
@@ -143,14 +140,14 @@ def format_timmy_prompt(base_prompt: str, context: dict[str, Any]) -> str:
|
||||
repo_root = context.get("repo_root", settings.repo_root)
|
||||
|
||||
context_block = f"""
|
||||
## Current System Context (as of {context.get('timestamp', datetime.now(timezone.utc).isoformat())})
|
||||
## Current System Context (as of {context.get("timestamp", datetime.now(UTC).isoformat())})
|
||||
|
||||
### Repository
|
||||
**Root:** `{repo_root}`
|
||||
|
||||
### Recent Commits (last 20):
|
||||
```
|
||||
{context.get('git_log', '(unavailable)')}
|
||||
{context.get("git_log", "(unavailable)")}
|
||||
```
|
||||
|
||||
### Active Sub-Agents:
|
||||
@@ -164,7 +161,7 @@ def format_timmy_prompt(base_prompt: str, context: dict[str, Any]) -> str:
|
||||
{hands_list}
|
||||
|
||||
### Hot Memory:
|
||||
{context.get('memory', '(unavailable)')[:1000]}
|
||||
{context.get("memory", "(unavailable)")[:1000]}
|
||||
"""
|
||||
|
||||
# Replace {REPO_ROOT} placeholder with actual path
|
||||
|
||||
@@ -13,10 +13,9 @@ Default is always True. The owner changes this intentionally.
|
||||
|
||||
import sqlite3
|
||||
import uuid
|
||||
from dataclasses import dataclass, field
|
||||
from datetime import datetime, timedelta, timezone
|
||||
from dataclasses import dataclass
|
||||
from datetime import UTC, datetime, timedelta
|
||||
from pathlib import Path
|
||||
from typing import Optional
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# GOLDEN TIMMY RULE
|
||||
@@ -94,7 +93,7 @@ def create_item(
|
||||
description=description,
|
||||
proposed_action=proposed_action,
|
||||
impact=impact,
|
||||
created_at=datetime.now(timezone.utc),
|
||||
created_at=datetime.now(UTC),
|
||||
status="pending",
|
||||
)
|
||||
conn = _get_conn(db_path)
|
||||
@@ -137,14 +136,14 @@ def list_all(db_path: Path = _DEFAULT_DB) -> list[ApprovalItem]:
|
||||
return [_row_to_item(r) for r in rows]
|
||||
|
||||
|
||||
def get_item(item_id: str, db_path: Path = _DEFAULT_DB) -> Optional[ApprovalItem]:
|
||||
def get_item(item_id: str, db_path: Path = _DEFAULT_DB) -> ApprovalItem | None:
|
||||
conn = _get_conn(db_path)
|
||||
row = conn.execute("SELECT * FROM approval_items WHERE id = ?", (item_id,)).fetchone()
|
||||
conn.close()
|
||||
return _row_to_item(row) if row else None
|
||||
|
||||
|
||||
def approve(item_id: str, db_path: Path = _DEFAULT_DB) -> Optional[ApprovalItem]:
|
||||
def approve(item_id: str, db_path: Path = _DEFAULT_DB) -> ApprovalItem | None:
|
||||
"""Mark an approval item as approved."""
|
||||
conn = _get_conn(db_path)
|
||||
conn.execute("UPDATE approval_items SET status = 'approved' WHERE id = ?", (item_id,))
|
||||
@@ -153,7 +152,7 @@ def approve(item_id: str, db_path: Path = _DEFAULT_DB) -> Optional[ApprovalItem]
|
||||
return get_item(item_id, db_path)
|
||||
|
||||
|
||||
def reject(item_id: str, db_path: Path = _DEFAULT_DB) -> Optional[ApprovalItem]:
|
||||
def reject(item_id: str, db_path: Path = _DEFAULT_DB) -> ApprovalItem | None:
|
||||
"""Mark an approval item as rejected."""
|
||||
conn = _get_conn(db_path)
|
||||
conn.execute("UPDATE approval_items SET status = 'rejected' WHERE id = ?", (item_id,))
|
||||
@@ -164,7 +163,7 @@ def reject(item_id: str, db_path: Path = _DEFAULT_DB) -> Optional[ApprovalItem]:
|
||||
|
||||
def expire_old(db_path: Path = _DEFAULT_DB) -> int:
|
||||
"""Auto-expire pending items older than EXPIRY_DAYS. Returns count removed."""
|
||||
cutoff = (datetime.now(timezone.utc) - timedelta(days=_EXPIRY_DAYS)).isoformat()
|
||||
cutoff = (datetime.now(UTC) - timedelta(days=_EXPIRY_DAYS)).isoformat()
|
||||
conn = _get_conn(db_path)
|
||||
cursor = conn.execute(
|
||||
"DELETE FROM approval_items WHERE status = 'pending' AND created_at < ?",
|
||||
|
||||
@@ -21,7 +21,7 @@ import re
|
||||
import subprocess
|
||||
import time
|
||||
from pathlib import Path
|
||||
from typing import Any, Callable, Optional
|
||||
from typing import Any
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -146,7 +146,7 @@ def run_experiment(
|
||||
}
|
||||
|
||||
|
||||
def _extract_metric(output: str, metric_name: str = "val_bpb") -> Optional[float]:
|
||||
def _extract_metric(output: str, metric_name: str = "val_bpb") -> float | None:
|
||||
"""Extract the last occurrence of a metric value from training output."""
|
||||
pattern = re.compile(rf"{re.escape(metric_name)}[:\s]+([0-9]+\.?[0-9]*)")
|
||||
matches = pattern.findall(output)
|
||||
@@ -179,9 +179,9 @@ def evaluate_result(
|
||||
pct = (delta / baseline) * 100 if baseline != 0 else 0.0
|
||||
|
||||
if delta < 0:
|
||||
return f"Improvement: {metric_name} {baseline:.4f} -> {current:.4f} " f"({pct:+.2f}%)"
|
||||
return f"Improvement: {metric_name} {baseline:.4f} -> {current:.4f} ({pct:+.2f}%)"
|
||||
elif delta > 0:
|
||||
return f"Regression: {metric_name} {baseline:.4f} -> {current:.4f} " f"({pct:+.2f}%)"
|
||||
return f"Regression: {metric_name} {baseline:.4f} -> {current:.4f} ({pct:+.2f}%)"
|
||||
else:
|
||||
return f"No change: {metric_name} = {current:.4f}"
|
||||
|
||||
|
||||
@@ -15,8 +15,8 @@ No cloud by default. No telemetry. Sats are sovereignty, boss.
|
||||
import logging
|
||||
import platform
|
||||
import time
|
||||
from dataclasses import dataclass, field
|
||||
from typing import Literal, Optional
|
||||
from dataclasses import dataclass
|
||||
from typing import Literal
|
||||
|
||||
from timmy.prompts import SYSTEM_PROMPT
|
||||
|
||||
@@ -69,7 +69,7 @@ class TimmyAirLLMAgent:
|
||||
model_id = _AIRLLM_MODELS.get(model_size)
|
||||
if model_id is None:
|
||||
raise ValueError(
|
||||
f"Unknown model size {model_size!r}. " f"Choose from: {list(_AIRLLM_MODELS)}"
|
||||
f"Unknown model size {model_size!r}. Choose from: {list(_AIRLLM_MODELS)}"
|
||||
)
|
||||
|
||||
if is_apple_silicon():
|
||||
@@ -167,7 +167,7 @@ class GrokUsageStats:
|
||||
total_completion_tokens: int = 0
|
||||
total_latency_ms: float = 0.0
|
||||
errors: int = 0
|
||||
last_request_at: Optional[float] = None
|
||||
last_request_at: float | None = None
|
||||
|
||||
@property
|
||||
def estimated_cost_sats(self) -> int:
|
||||
@@ -194,8 +194,8 @@ class GrokBackend:
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
api_key: Optional[str] = None,
|
||||
model: Optional[str] = None,
|
||||
api_key: str | None = None,
|
||||
model: str | None = None,
|
||||
) -> None:
|
||||
from config import settings
|
||||
|
||||
@@ -206,8 +206,7 @@ class GrokBackend:
|
||||
|
||||
if not self._api_key:
|
||||
logger.warning(
|
||||
"GrokBackend created without XAI_API_KEY — "
|
||||
"calls will fail until key is configured"
|
||||
"GrokBackend created without XAI_API_KEY — calls will fail until key is configured"
|
||||
)
|
||||
|
||||
def _get_client(self):
|
||||
@@ -398,7 +397,7 @@ class GrokBackend:
|
||||
|
||||
# ── Module-level Grok singleton ─────────────────────────────────────────────
|
||||
|
||||
_grok_backend: Optional[GrokBackend] = None
|
||||
_grok_backend: GrokBackend | None = None
|
||||
|
||||
|
||||
def get_grok_backend() -> GrokBackend:
|
||||
@@ -443,8 +442,8 @@ class ClaudeBackend:
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
api_key: Optional[str] = None,
|
||||
model: Optional[str] = None,
|
||||
api_key: str | None = None,
|
||||
model: str | None = None,
|
||||
) -> None:
|
||||
from config import settings
|
||||
|
||||
@@ -550,7 +549,7 @@ class ClaudeBackend:
|
||||
|
||||
# ── Module-level Claude singleton ──────────────────────────────────────────
|
||||
|
||||
_claude_backend: Optional[ClaudeBackend] = None
|
||||
_claude_backend: ClaudeBackend | None = None
|
||||
|
||||
|
||||
def get_claude_backend() -> ClaudeBackend:
|
||||
|
||||
@@ -11,9 +11,8 @@ regenerates the briefing every 6 hours.
|
||||
import logging
|
||||
import sqlite3
|
||||
from dataclasses import dataclass, field
|
||||
from datetime import datetime, timedelta, timezone
|
||||
from datetime import UTC, datetime, timedelta
|
||||
from pathlib import Path
|
||||
from typing import Optional
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -48,10 +47,8 @@ class Briefing:
|
||||
generated_at: datetime
|
||||
summary: str # 150-300 words
|
||||
approval_items: list[ApprovalItem] = field(default_factory=list)
|
||||
period_start: datetime = field(
|
||||
default_factory=lambda: datetime.now(timezone.utc) - timedelta(hours=6)
|
||||
)
|
||||
period_end: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
|
||||
period_start: datetime = field(default_factory=lambda: datetime.now(UTC) - timedelta(hours=6))
|
||||
period_end: datetime = field(default_factory=lambda: datetime.now(UTC))
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
@@ -94,7 +91,7 @@ def _save_briefing(briefing: Briefing, db_path: Path = _DEFAULT_DB) -> None:
     conn.close()


-def _load_latest(db_path: Path = _DEFAULT_DB) -> Optional[Briefing]:
+def _load_latest(db_path: Path = _DEFAULT_DB) -> Briefing | None:
     """Load the most-recently cached briefing, or None if there is none."""
     conn = _get_cache_conn(db_path)
     row = conn.execute("SELECT * FROM briefings ORDER BY generated_at DESC LIMIT 1").fetchone()
@@ -111,9 +108,9 @@ def _load_latest(db_path: Path = _DEFAULT_DB) -> Optional[Briefing]:

 def is_fresh(briefing: Briefing, max_age_minutes: int = _CACHE_MINUTES) -> bool:
     """Return True if the briefing was generated within max_age_minutes."""
-    now = datetime.now(timezone.utc)
+    now = datetime.now(UTC)
     age = (
-        now - briefing.generated_at.replace(tzinfo=timezone.utc)
+        now - briefing.generated_at.replace(tzinfo=UTC)
         if briefing.generated_at.tzinfo is None
         else now - briefing.generated_at
     )
@@ -224,7 +221,7 @@ class BriefingEngine:
     def __init__(self, db_path: Path = _DEFAULT_DB) -> None:
         self._db_path = db_path

-    def get_cached(self) -> Optional[Briefing]:
+    def get_cached(self) -> Briefing | None:
         """Return the cached briefing if it exists, without regenerating."""
         return _load_latest(self._db_path)

@@ -237,7 +234,7 @@ class BriefingEngine:

     def generate(self) -> Briefing:
         """Generate a fresh briefing. May take a few seconds (LLM call)."""
-        now = datetime.now(timezone.utc)
+        now = datetime.now(UTC)
         period_start = now - timedelta(hours=6)

         swarm_info = _gather_swarm_summary(period_start)

@@ -8,7 +8,6 @@ Provides automatic failover between LLM providers with:

 import logging
 from dataclasses import dataclass
-from typing import Optional

 from infrastructure.router.cascade import CascadeRouter
 from timmy.prompts import SYSTEM_PROMPT
@@ -36,7 +35,7 @@ class TimmyCascadeAdapter:
         print(f"Provider: {response.provider_used}")
     """

-    def __init__(self, router: Optional[CascadeRouter] = None) -> None:
+    def __init__(self, router: CascadeRouter | None = None) -> None:
         """Initialize adapter with Cascade Router.

         Args:
@@ -45,7 +44,7 @@ class TimmyCascadeAdapter:
         self.router = router or CascadeRouter()
         logger.info("TimmyCascadeAdapter initialized with %d providers", len(self.router.providers))

-    async def chat(self, message: str, context: Optional[str] = None) -> TimmyResponse:
+    async def chat(self, message: str, context: str | None = None) -> TimmyResponse:
         """Send message through cascade router with automatic failover.

         Args:
@@ -114,7 +113,7 @@ class TimmyCascadeAdapter:
             for p in self.router.providers
         ]

-    def get_preferred_provider(self) -> Optional[str]:
+    def get_preferred_provider(self) -> str | None:
         """Get name of highest-priority healthy provider.

         Returns:
@@ -127,7 +126,7 @@ class TimmyCascadeAdapter:


 # Global singleton for reuse
-_cascade_adapter: Optional[TimmyCascadeAdapter] = None
+_cascade_adapter: TimmyCascadeAdapter | None = None


 def get_cascade_adapter() -> TimmyCascadeAdapter:

@@ -1,5 +1,4 @@
 import subprocess
-from typing import Optional

 import typer

@@ -40,8 +39,8 @@ def tick():
 @app.command()
 def think(
     topic: str = typer.Argument(..., help="Topic to reason about"),
-    backend: Optional[str] = _BACKEND_OPTION,
-    model_size: Optional[str] = _MODEL_SIZE_OPTION,
+    backend: str | None = _BACKEND_OPTION,
+    model_size: str | None = _MODEL_SIZE_OPTION,
 ):
     """Ask Timmy to think carefully about a topic."""
     timmy = create_timmy(backend=backend, model_size=model_size)
@@ -51,8 +50,8 @@ def think(
 @app.command()
 def chat(
     message: str = typer.Argument(..., help="Message to send"),
-    backend: Optional[str] = _BACKEND_OPTION,
-    model_size: Optional[str] = _MODEL_SIZE_OPTION,
+    backend: str | None = _BACKEND_OPTION,
+    model_size: str | None = _MODEL_SIZE_OPTION,
 ):
     """Send a message to Timmy."""
     timmy = create_timmy(backend=backend, model_size=model_size)
@@ -61,8 +60,8 @@ def chat(

 @app.command()
 def status(
-    backend: Optional[str] = _BACKEND_OPTION,
-    model_size: Optional[str] = _MODEL_SIZE_OPTION,
+    backend: str | None = _BACKEND_OPTION,
+    model_size: str | None = _MODEL_SIZE_OPTION,
 ):
     """Print Timmy's operational status."""
     timmy = create_timmy(backend=backend, model_size=model_size)
@@ -71,8 +70,8 @@ def status(

 @app.command()
 def interview(
-    backend: Optional[str] = _BACKEND_OPTION,
-    model_size: Optional[str] = _MODEL_SIZE_OPTION,
+    backend: str | None = _BACKEND_OPTION,
+    model_size: str | None = _MODEL_SIZE_OPTION,
 ):
     """Initialize Timmy and run a structured interview.


@@ -9,7 +9,6 @@ Tracks conversation state, intent, and context to improve:
 import logging
 from dataclasses import dataclass, field
 from datetime import datetime
-from typing import Optional

 logger = logging.getLogger(__name__)

@@ -18,9 +17,9 @@ logger = logging.getLogger(__name__)
 class ConversationContext:
     """Tracks the current conversation state."""

-    user_name: Optional[str] = None
-    current_topic: Optional[str] = None
-    last_intent: Optional[str] = None
+    user_name: str | None = None
+    current_topic: str | None = None
+    last_intent: str | None = None
     turn_count: int = 0
     started_at: datetime = field(default_factory=datetime.now)

@@ -131,7 +130,7 @@ class ConversationManager:
             }
         )

-    def extract_user_name(self, message: str) -> Optional[str]:
+    def extract_user_name(self, message: str) -> str | None:
         """Try to extract user's name from message."""
         message_lower = message.lower()


@@ -6,8 +6,8 @@ a post-initialization health check.
 """

 import logging
+from collections.abc import Callable
 from dataclasses import dataclass
-from typing import Callable, Optional

 logger = logging.getLogger(__name__)

@@ -62,8 +62,8 @@ class InterviewEntry:

 def run_interview(
     chat_fn: Callable[[str], str],
-    questions: Optional[list[dict[str, str]]] = None,
-    on_answer: Optional[Callable[[InterviewEntry], None]] = None,
+    questions: list[dict[str, str]] | None = None,
+    on_answer: Callable[[InterviewEntry], None] | None = None,
 ) -> list[InterviewEntry]:
     """Run a structured interview using the provided chat function.


@@ -8,9 +8,8 @@ import json
 import sqlite3
 import uuid
 from dataclasses import dataclass, field
-from datetime import datetime, timezone
+from datetime import UTC, datetime
 from pathlib import Path
-from typing import Optional

 DB_PATH = Path(__file__).parent.parent.parent.parent / "data" / "swarm.db"

@@ -97,13 +96,13 @@ class MemoryEntry:
     content: str = ""  # The actual text content
     source: str = ""  # Where it came from (agent, user, system)
     context_type: str = "conversation"  # conversation, document, fact, etc.
-    agent_id: Optional[str] = None
-    task_id: Optional[str] = None
-    session_id: Optional[str] = None
-    metadata: Optional[dict] = None
-    embedding: Optional[list[float]] = None
-    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
-    relevance_score: Optional[float] = None  # Set during search
+    agent_id: str | None = None
+    task_id: str | None = None
+    session_id: str | None = None
+    metadata: dict | None = None
+    embedding: list[float] | None = None
+    timestamp: str = field(default_factory=lambda: datetime.now(UTC).isoformat())
+    relevance_score: float | None = None  # Set during search


 def _get_conn() -> sqlite3.Connection:
@@ -152,10 +151,10 @@ def store_memory(
     content: str,
     source: str,
     context_type: str = "conversation",
-    agent_id: Optional[str] = None,
-    task_id: Optional[str] = None,
-    session_id: Optional[str] = None,
-    metadata: Optional[dict] = None,
+    agent_id: str | None = None,
+    task_id: str | None = None,
+    session_id: str | None = None,
+    metadata: dict | None = None,
     compute_embedding: bool = True,
 ) -> MemoryEntry:
     """Store a memory entry with optional embedding.
@@ -218,9 +217,9 @@ def store_memory(
 def search_memories(
     query: str,
     limit: int = 10,
-    context_type: Optional[str] = None,
-    agent_id: Optional[str] = None,
-    session_id: Optional[str] = None,
+    context_type: str | None = None,
+    agent_id: str | None = None,
+    session_id: str | None = None,
     min_relevance: float = 0.0,
 ) -> list[MemoryEntry]:
     """Search for memories by semantic similarity.
@@ -305,7 +304,7 @@ def search_memories(

 def _cosine_similarity(a: list[float], b: list[float]) -> float:
     """Compute cosine similarity between two vectors."""
-    dot = sum(x * y for x, y in zip(a, b))
+    dot = sum(x * y for x, y in zip(a, b, strict=False))
     norm_a = sum(x * x for x in a) ** 0.5
     norm_b = sum(x * x for x in b) ** 0.5
     if norm_a == 0 or norm_b == 0:
@@ -353,7 +352,7 @@ def get_memory_context(query: str, max_tokens: int = 2000, **filters) -> str:
     return "Relevant context from memory:\n" + "\n\n".join(context_parts)


-def recall_personal_facts(agent_id: Optional[str] = None) -> list[str]:
+def recall_personal_facts(agent_id: str | None = None) -> list[str]:
     """Recall personal facts about the user or system.

     Args:
@@ -388,7 +387,7 @@ def recall_personal_facts(agent_id: Optional[str] = None) -> list[str]:
     return [r["content"] for r in rows]


-def recall_personal_facts_with_ids(agent_id: Optional[str] = None) -> list[dict]:
+def recall_personal_facts_with_ids(agent_id: str | None = None) -> list[dict]:
     """Recall personal facts with their IDs for edit/delete operations."""
     conn = _get_conn()
     if agent_id:
@@ -417,7 +416,7 @@ def update_personal_fact(memory_id: str, new_content: str) -> bool:
     return updated


-def store_personal_fact(fact: str, agent_id: Optional[str] = None) -> MemoryEntry:
+def store_personal_fact(fact: str, agent_id: str | None = None) -> MemoryEntry:
     """Store a personal fact about the user or system.

     Args:
@@ -496,7 +495,7 @@ def prune_memories(older_than_days: int = 90, keep_facts: bool = True) -> int:
     """
     from datetime import timedelta

-    cutoff = (datetime.now(timezone.utc) - timedelta(days=older_than_days)).isoformat()
+    cutoff = (datetime.now(UTC) - timedelta(days=older_than_days)).isoformat()

     conn = _get_conn()


@@ -12,9 +12,8 @@ Handoff Protocol:

 import logging
 import re
-from datetime import datetime, timezone
+from datetime import UTC, datetime
 from pathlib import Path
-from typing import Optional

 logger = logging.getLogger(__name__)

@@ -30,8 +29,8 @@ class HotMemory:

     def __init__(self) -> None:
         self.path = HOT_MEMORY_PATH
-        self._content: Optional[str] = None
-        self._last_modified: Optional[float] = None
+        self._content: str | None = None
+        self._last_modified: float | None = None

     def read(self, force_refresh: bool = False) -> str:
         """Read hot memory, with caching."""
@@ -130,8 +129,8 @@ class HotMemory:

 *Prune date: {prune_date}*
 """.format(
-            date=datetime.now(timezone.utc).strftime("%Y-%m-%d"),
-            prune_date=(datetime.now(timezone.utc).replace(day=25)).strftime("%Y-%m-%d"),
+            date=datetime.now(UTC).strftime("%Y-%m-%d"),
+            prune_date=(datetime.now(UTC).replace(day=25)).strftime("%Y-%m-%d"),
         )

         self.path.write_text(default_content)
@@ -154,14 +153,14 @@ class VaultMemory:
     def write_note(self, name: str, content: str, namespace: str = "notes") -> Path:
         """Write a note to the vault."""
         # Add timestamp to filename
-        timestamp = datetime.now(timezone.utc).strftime("%Y%m%d")
+        timestamp = datetime.now(UTC).strftime("%Y%m%d")
         filename = f"{timestamp}_{name}.md"
         filepath = self.path / namespace / filename

         # Add header
         full_content = f"""# {name.replace("_", " ").title()}

-> Created: {datetime.now(timezone.utc).isoformat()}
+> Created: {datetime.now(UTC).isoformat()}
 > Namespace: {namespace}

 ---
@@ -190,7 +189,7 @@ class VaultMemory:
             return []
         return sorted(dir_path.glob(pattern))

-    def get_latest(self, namespace: str = "notes", pattern: str = "*.md") -> Optional[Path]:
+    def get_latest(self, namespace: str = "notes", pattern: str = "*.md") -> Path | None:
         """Get most recent file in namespace."""
         files = self.list_files(namespace, pattern)
         return files[-1] if files else None
@@ -219,7 +218,7 @@ class VaultMemory:
         # Update last_updated
         content = re.sub(
             r"\*Last updated:.*\*",
-            f"*Last updated: {datetime.now(timezone.utc).strftime('%Y-%m-%d')}*",
+            f"*Last updated: {datetime.now(UTC).strftime('%Y-%m-%d')}*",
             content,
         )

@@ -255,7 +254,7 @@ class VaultMemory:
 ---

 *Last updated: {date}*
-""".format(date=datetime.now(timezone.utc).strftime("%Y-%m-%d"))
+""".format(date=datetime.now(UTC).strftime("%Y-%m-%d"))

         profile_path.write_text(default)

@@ -277,7 +276,7 @@ class HandoffProtocol:
         """Write handoff at session end."""
         content = f"""# Last Session Handoff

-**Session End:** {datetime.now(timezone.utc).isoformat()}
+**Session End:** {datetime.now(UTC).isoformat()}
 **Duration:** (calculated on read)

 ## Summary
@@ -316,7 +315,7 @@ The user was last working on: {session_summary[:200]}...
             len(open_items),
         )

-    def read_handoff(self) -> Optional[str]:
+    def read_handoff(self) -> str | None:
         """Read handoff if exists."""
         if not self.path.exists():
             return None
@@ -336,13 +335,13 @@ class MemorySystem:
         self.hot = HotMemory()
         self.vault = VaultMemory()
         self.handoff = HandoffProtocol()
-        self.session_start_time: Optional[datetime] = None
+        self.session_start_time: datetime | None = None
         self.session_decisions: list[str] = []
         self.session_open_items: list[str] = []

     def start_session(self) -> str:
         """Start a new session, loading context from memory."""
-        self.session_start_time = datetime.now(timezone.utc)
+        self.session_start_time = datetime.now(UTC)

         # Build context
         context_parts = []
@@ -379,7 +378,7 @@ class MemorySystem:
         # Update hot memory
         self.hot.update_section(
             "Current Session",
-            f"**Last Session:** {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M')}\n"
+            f"**Last Session:** {datetime.now(UTC).strftime('%Y-%m-%d %H:%M')}\n"
             + f"**Summary:** {summary[:100]}...",
         )


@@ -16,9 +16,8 @@ import json
 import logging
 import sqlite3
 from dataclasses import dataclass
-from datetime import datetime, timezone
+from datetime import UTC, datetime
 from pathlib import Path
-from typing import Optional

 logger = logging.getLogger(__name__)

@@ -84,7 +83,7 @@ def cosine_similarity(a: list[float], b: list[float]) -> float:
     """Calculate cosine similarity between two vectors."""
     import math

-    dot = sum(x * y for x, y in zip(a, b))
+    dot = sum(x * y for x, y in zip(a, b, strict=False))
     mag_a = math.sqrt(sum(x * x for x in a))
     mag_b = math.sqrt(sum(x * x for x in b))
     if mag_a == 0 or mag_b == 0:
@@ -154,7 +153,7 @@ class SemanticMemory:
         chunks = self._split_into_chunks(content)

         # Index each chunk
-        now = datetime.now(timezone.utc).isoformat()
+        now = datetime.now(UTC).isoformat()
         for i, chunk_text in enumerate(chunks):
             if len(chunk_text.strip()) < 20:  # Skip tiny chunks
                 continue

@@ -10,7 +10,6 @@ let Agno's session_id mechanism handle conversation continuity.

 import logging
 import re
-from typing import Optional

 logger = logging.getLogger(__name__)

@@ -60,7 +59,7 @@ def _get_agent():
     return _agent


-def chat(message: str, session_id: Optional[str] = None) -> str:
+def chat(message: str, session_id: str | None = None) -> str:
     """Send a message to Timmy and get a response.

     Uses a persistent agent and session_id so Agno's SQLite history
@@ -93,7 +92,7 @@ def chat(message: str, session_id: Optional[str] = None) -> str:
     return response_text


-def chat_with_tools(message: str, session_id: Optional[str] = None):
+def chat_with_tools(message: str, session_id: str | None = None):
     """Send a message and return the full Agno RunOutput.

     Callers should check ``run_output.status``:
@@ -117,7 +116,7 @@ def chat_with_tools(message: str, session_id: Optional[str] = None):
     )


-def continue_chat(run_output, session_id: Optional[str] = None):
+def continue_chat(run_output, session_id: str | None = None):
     """Resume a paused run after tool confirmation / rejection.

     Args:
@@ -150,7 +149,7 @@ class _ErrorRunOutput:
         return []


-def chat_raw(message: str, session_id: Optional[str] = None) -> tuple[str, str]:
+def chat_raw(message: str, session_id: str | None = None) -> tuple[str, str]:
     """Send a message and return both cleaned and raw responses.

     Backward-compatible wrapper around :func:`chat_with_tools`.
@@ -165,7 +164,7 @@ def chat_raw(message: str, session_id: Optional[str] = None) -> tuple[str, str]:
     return cleaned, raw_response


-def reset_session(session_id: Optional[str] = None) -> None:
+def reset_session(session_id: str | None = None) -> None:
     """Reset a session (clear conversation context).

     This clears the ConversationManager state. Agno's SQLite history

@@ -17,15 +17,13 @@ Usage::
     chain = thinking_engine.get_thought_chain(thought_id)
 """

 import json
 import logging
 import random
 import sqlite3
 import uuid
 from dataclasses import dataclass
-from datetime import datetime, timezone
+from datetime import UTC, datetime
 from pathlib import Path
-from typing import Optional

 from config import settings

@@ -127,7 +125,7 @@ class Thought:
     id: str
     content: str
     seed_type: str
-    parent_id: Optional[str]
+    parent_id: str | None
     created_at: str


@@ -165,7 +163,7 @@ class ThinkingEngine:

     def __init__(self, db_path: Path = _DEFAULT_DB) -> None:
         self._db_path = db_path
-        self._last_thought_id: Optional[str] = None
+        self._last_thought_id: str | None = None

         # Load the most recent thought for chain continuity
         try:
@@ -175,7 +173,7 @@ class ThinkingEngine:
         except Exception:
             pass  # Fresh start if DB doesn't exist yet

-    async def think_once(self) -> Optional[Thought]:
+    async def think_once(self) -> Thought | None:
         """Execute one thinking cycle.

         1. Gather a seed context
@@ -235,7 +233,7 @@ class ThinkingEngine:
         conn.close()
         return [_row_to_thought(r) for r in rows]

-    def get_thought(self, thought_id: str) -> Optional[Thought]:
+    def get_thought(self, thought_id: str) -> Thought | None:
         """Retrieve a single thought by ID."""
         conn = _get_conn(self._db_path)
         row = conn.execute("SELECT * FROM thoughts WHERE id = ?", (thought_id,)).fetchone()
@@ -248,7 +246,7 @@ class ThinkingEngine:
         Returns thoughts in chronological order (oldest first).
         """
         chain = []
-        current_id: Optional[str] = thought_id
+        current_id: str | None = thought_id
         conn = _get_conn(self._db_path)

         for _ in range(max_depth):
@@ -316,7 +314,7 @@ class ThinkingEngine:

         from timmy.briefing import _gather_swarm_summary, _gather_task_queue_summary

-        since = datetime.now(timezone.utc) - timedelta(hours=1)
+        since = datetime.now(UTC) - timedelta(hours=1)
         swarm = _gather_swarm_summary(since)
         tasks = _gather_task_queue_summary()
         reflection = random.choice(self._SWARM_REFLECTIONS)
@@ -356,7 +354,7 @@ class ThinkingEngine:

         from timmy.briefing import _gather_swarm_summary, _gather_task_queue_summary

-        since = datetime.now(timezone.utc) - timedelta(hours=2)
+        since = datetime.now(UTC) - timedelta(hours=2)
         swarm = _gather_swarm_summary(since)
         tasks = _gather_task_queue_summary()
         if swarm:
@@ -403,7 +401,7 @@ class ThinkingEngine:
             content=content,
             seed_type=seed_type,
             parent_id=self._last_thought_id,
-            created_at=datetime.now(timezone.utc).isoformat(),
+            created_at=datetime.now(UTC).isoformat(),
         )

         conn = _get_conn(self._db_path)

@@ -15,10 +15,12 @@ from __future__ import annotations

 import logging
 import math
+from collections.abc import Callable
 from dataclasses import dataclass, field
-from datetime import datetime, timezone
+from datetime import UTC, datetime
 from pathlib import Path
-from typing import Any, Callable

 from config import settings

 logger = logging.getLogger(__name__)

@@ -79,7 +81,7 @@ def _track_tool_usage(agent_id: str, tool_name: str, success: bool = True) -> None:
     _TOOL_USAGE[agent_id].append(
         {
             "tool": tool_name,
-            "timestamp": datetime.now(timezone.utc).isoformat(),
+            "timestamp": datetime.now(UTC).isoformat(),
             "success": success,
         }
     )
@@ -497,7 +499,7 @@ def create_full_toolkit(base_dir: str | Path | None = None):

         # Spawn as a background task on the running event loop
         try:
-            loop = asyncio.get_running_loop()
+            asyncio.get_running_loop()
             future = asyncio.ensure_future(_launch())
             task_id = id(future)
             logger.info("Agentic loop started (task=%s)", task[:80])
@@ -507,8 +509,8 @@ def create_full_toolkit(base_dir: str | Path | None = None):
             return f"Task completed: {result.summary}"

         return (
-            f"Background task started. I'll execute this step-by-step "
-            f"and stream progress updates. You can monitor via the dashboard."
+            "Background task started. I'll execute this step-by-step "
+            "and stream progress updates. You can monitor via the dashboard."
         )

     toolkit.register(plan_and_execute, name="plan_and_execute")

@@ -7,7 +7,7 @@ being told about it in the system prompt.
 import logging
 import platform
 import sys
-from datetime import datetime, timezone
+from datetime import UTC, datetime
 from pathlib import Path
 from typing import Any

@@ -281,7 +281,7 @@ def get_live_system_status() -> dict[str, Any]:
     try:
         from dashboard.routes.health import _START_TIME

-        uptime = (datetime.now(timezone.utc) - _START_TIME).total_seconds()
+        uptime = (datetime.now(UTC) - _START_TIME).total_seconds()
         result["uptime_seconds"] = int(uptime)
     except Exception:
         result["uptime_seconds"] = None
@@ -294,5 +294,5 @@ def get_live_system_status() -> dict[str, Any]:
     except Exception:
         result["discord"] = {"state": "unknown"}

-    result["timestamp"] = datetime.now(timezone.utc).isoformat()
+    result["timestamp"] = datetime.now(UTC).isoformat()
     return result

@@ -12,7 +12,6 @@ import logging
 import time
 from collections import defaultdict
 from contextlib import asynccontextmanager
-from typing import Dict, List

 from fastapi import FastAPI, HTTPException, Request
 from pydantic import BaseModel
@@ -46,7 +45,7 @@ class RateLimitMiddleware(BaseHTTPMiddleware):
         super().__init__(app)
         self.limit = limit
         self.window = window
-        self.requests: Dict[str, List[float]] = defaultdict(list)
+        self.requests: dict[str, list[float]] = defaultdict(list)

     async def dispatch(self, request: Request, call_next):
         # Only rate limit chat endpoint
@@ -110,7 +109,7 @@ def create_timmy_serve_app() -> FastAPI:

         except Exception as exc:
             logger.error("Chat processing error: %s", exc)
-            raise HTTPException(status_code=500, detail=f"Processing error: {exc}")
+            raise HTTPException(status_code=500, detail=f"Processing error: {exc}") from exc

     @app.get("/health")
     async def health():

@@ -22,11 +22,11 @@ def start(
     typer.echo(f"L402 payment proxy active — {price} sats per request")
     typer.echo("Press Ctrl-C to stop")

-    typer.echo(f"\nEndpoints:")
-    typer.echo(f"  POST /serve/chat — Chat with Timmy")
-    typer.echo(f"  GET  /serve/invoice — Request an invoice")
-    typer.echo(f"  GET  /serve/status — Service status")
-    typer.echo(f"  GET  /health — Health check")
+    typer.echo("\nEndpoints:")
+    typer.echo("  POST /serve/chat — Chat with Timmy")
+    typer.echo("  GET  /serve/invoice — Request an invoice")
+    typer.echo("  GET  /serve/status — Service status")
+    typer.echo("  GET  /health — Health check")

     if dry_run:
         typer.echo("\n(Dry run mode - not starting server)")

@@ -10,8 +10,7 @@ import logging
 import uuid
 from collections import deque
 from dataclasses import dataclass, field
-from datetime import datetime, timezone
-from typing import Optional
+from datetime import UTC, datetime

 logger = logging.getLogger(__name__)

@@ -23,7 +22,7 @@ class AgentMessage:
     to_agent: str = ""
     content: str = ""
     message_type: str = "text"  # text | command | response | error
-    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
+    timestamp: str = field(default_factory=lambda: datetime.now(UTC).isoformat())
     replied: bool = False


@@ -66,7 +65,7 @@ class InterAgentMessenger:
         queue = self._queues.get(agent_id, deque())
         return list(queue)[:limit]

-    def pop(self, agent_id: str) -> Optional[AgentMessage]:
+    def pop(self, agent_id: str) -> AgentMessage | None:
         """Pop the oldest message from an agent's queue."""
         queue = self._queues.get(agent_id, deque())
         if not queue:
@@ -93,7 +92,7 @@ class InterAgentMessenger:
         """Return recent message history across all agents."""
         return self._all_messages[-limit:]

-    def clear(self, agent_id: Optional[str] = None) -> None:
+    def clear(self, agent_id: str | None = None) -> None:
         """Clear message queue(s)."""
         if agent_id:
             self._queues.pop(agent_id, None)

@@ -7,7 +7,6 @@ no audio device is available (e.g., headless servers, CI).

 import logging
 import threading
-from typing import Optional

 logger = logging.getLogger(__name__)


@@ -1,6 +1,5 @@
 """Tests for brain.client — BrainClient memory + task operations."""

-import json
 from unittest.mock import AsyncMock, MagicMock, patch

 import pytest

@@ -209,9 +209,9 @@ class TestRecallSync:
         results = memory.recall_sync("underwater basket weaving")
         if results:
             # If semantic search returned something, score should be low
-            assert (
-                results[0]["score"] < 0.7
-            ), f"Expected low score for irrelevant query, got {results[0]['score']}"
+            assert results[0]["score"] < 0.7, (
+                f"Expected low score for irrelevant query, got {results[0]['score']}"
+            )

     def test_recall_respects_limit(self, memory):
         """Recall should respect the limit parameter."""
@@ -272,9 +272,9 @@ class TestFacts:
         # Second access — count should be higher
         facts = memory.get_facts_sync(category="test_cat")
         second_count = facts[0]["access_count"]
-        assert (
-            second_count > first_count
-        ), f"Access count should increment: {first_count} -> {second_count}"
+        assert second_count > first_count, (
+            f"Access count should increment: {first_count} -> {second_count}"
+        )

     def test_fact_confidence_ordering(self, memory):
         """Facts should be ordered by confidence (highest first)."""

@@ -3,7 +3,6 @@
 import os
 import sqlite3
 import sys
 from pathlib import Path
-from unittest.mock import MagicMock

 import pytest
@@ -92,16 +91,16 @@ def clean_database(tmp_path):
     for mod_name in _spark_db_modules:
         try:
             mod = __import__(mod_name, fromlist=["DB_PATH"])
-            originals[(mod_name, "DB_PATH")] = getattr(mod, "DB_PATH")
-            setattr(mod, "DB_PATH", tmp_spark_db)
+            originals[(mod_name, "DB_PATH")] = mod.DB_PATH
+            mod.DB_PATH = tmp_spark_db
         except Exception:
             pass

     for mod_name in _self_coding_db_modules:
         try:
             mod = __import__(mod_name, fromlist=["DEFAULT_DB_PATH"])
-            originals[(mod_name, "DEFAULT_DB_PATH")] = getattr(mod, "DEFAULT_DB_PATH")
-            setattr(mod, "DEFAULT_DB_PATH", tmp_self_coding_db)
+            originals[(mod_name, "DEFAULT_DB_PATH")] = mod.DEFAULT_DB_PATH
+            mod.DEFAULT_DB_PATH = tmp_self_coding_db
         except Exception:
             pass

@@ -112,8 +111,8 @@ def clean_database(tmp_path):
     ]:
         try:
             mod = __import__(mod_name, fromlist=["DB_PATH"])
-            originals[(mod_name, "DB_PATH")] = getattr(mod, "DB_PATH")
-            setattr(mod, "DB_PATH", tmp_db)
+            originals[(mod_name, "DB_PATH")] = mod.DB_PATH
+            mod.DB_PATH = tmp_db
         except Exception:
             pass


@@ -2,7 +2,6 @@

-import pytest
 from fastapi import FastAPI, Request
 from fastapi.responses import JSONResponse
 from fastapi.testclient import TestClient



@@ -29,7 +29,7 @@ class TestCSRFTraversal:
         def test_endpoint():
             return {"message": "success"}

-        client = TestClient(app)
+        TestClient(app)

         # We want to check if the middleware logic is flawed.
         # Since TestClient might normalize, we can test the _is_likely_exempt method directly.

@@ -1,11 +1,9 @@
|
||||
"""Tests for request logging middleware."""
|
||||
|
||||
import time
|
||||
from unittest.mock import Mock, patch
|
||||
|
||||
import pytest
|
||||
from fastapi import FastAPI
|
||||
from fastapi.responses import JSONResponse
|
||||
from fastapi.testclient import TestClient
|
||||
|
||||
|
||||
@@ -51,7 +49,7 @@ class TestRequestLoggingMiddleware:
         """Log should include response status code."""
         with caplog.at_level("INFO"):
             client = TestClient(app_with_logging)
-            response = client.get("/test")
+            client.get("/test")

         # Check log contains status code
         assert any("200" in record.message for record in caplog.records)
@@ -60,7 +58,7 @@ class TestRequestLoggingMiddleware:
         """Log should include request processing time."""
         with caplog.at_level("INFO"):
             client = TestClient(app_with_logging)
-            response = client.get("/slow")
+            client.get("/slow")

         # Check log contains duration (e.g., "0.1" or "100ms")
         assert any(
@@ -71,7 +69,7 @@ class TestRequestLoggingMiddleware:
         """Log should include client IP address."""
         with caplog.at_level("INFO"):
             client = TestClient(app_with_logging)
-            response = client.get("/test", headers={"X-Forwarded-For": "192.168.1.1"})
+            client.get("/test", headers={"X-Forwarded-For": "192.168.1.1"})

         # Check log contains IP
         assert any(
@@ -83,7 +81,7 @@ class TestRequestLoggingMiddleware:
         """Log should include User-Agent header."""
         with caplog.at_level("INFO"):
             client = TestClient(app_with_logging)
-            response = client.get("/test", headers={"User-Agent": "TestAgent/1.0"})
+            client.get("/test", headers={"User-Agent": "TestAgent/1.0"})

         # Check log contains user agent
         assert any("TestAgent" in record.message for record in caplog.records)
@@ -111,7 +109,7 @@ class TestRequestLoggingMiddleware:

         with caplog.at_level("INFO", logger="timmy.requests"):
             client = TestClient(app)
-            response = client.get("/health")
+            client.get("/health")

         # Should not log health check (only check our logger's records)
         timmy_records = [r for r in caplog.records if r.name == "timmy.requests"]
@@ -121,8 +119,8 @@ class TestRequestLoggingMiddleware:
         """Each request should have a unique correlation ID."""
         with caplog.at_level("INFO"):
             client = TestClient(app_with_logging)
-            response = client.get("/test")
+            client.get("/test")

         # Check for correlation ID format (UUID or similar)
-        log_messages = [record.message for record in caplog.records]
+        [record.message for record in caplog.records]
         assert any(len(record.message) > 20 for record in caplog.records)  # Rough check for ID
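The `response = client.get(...)` to `client.get(...)` rewrites throughout these hunks are ruff's F841 autofix: a local variable assigned but never read is dropped, keeping only the side-effecting call. A minimal sketch, with a hypothetical `FakeClient` standing in for `TestClient`:

```python
class FakeClient:
    """Hypothetical stand-in for TestClient that records requests made."""

    def __init__(self):
        self.calls = []

    def get(self, path, headers=None):
        self.calls.append(path)
        return {"status": 200}


def exercise(client):
    # Before the fix: `response = client.get("/test")` triggers F841 because
    # `response` is assigned but never read. After the autofix only the call
    # remains; the logged side effect is all these middleware tests assert on.
    client.get("/test")


client = FakeClient()
exercise(client)
assert client.calls == ["/test"]
```

Note the fix preserves the expression when it may have side effects, which is why a bare `[record.message for record in caplog.records]` survives in the correlation-ID test above; that leftover line could arguably be deleted outright.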
@@ -2,7 +2,7 @@

 import pytest
 from fastapi import FastAPI
-from fastapi.responses import HTMLResponse, JSONResponse
+from fastapi.responses import HTMLResponse
 from fastapi.testclient import TestClient
@@ -2,8 +2,6 @@

 from unittest.mock import MagicMock, patch

 import pytest


 def _mock_completed_run(content="Just a reply."):
     """Create a mock RunOutput for a completed (no tool) run."""
@@ -1,7 +1,6 @@
 """Tests for timmy/briefing.py — morning briefing engine."""

-from datetime import datetime, timedelta, timezone
-from pathlib import Path
+from datetime import UTC, datetime, timedelta
 from unittest.mock import MagicMock, patch

 import pytest
@@ -25,7 +24,7 @@ def engine(tmp_db):

 def _make_briefing(offset_minutes: int = 0) -> Briefing:
     """Create a Briefing with generated_at offset by offset_minutes from now."""
-    now = datetime.now(timezone.utc) - timedelta(minutes=offset_minutes)
+    now = datetime.now(UTC) - timedelta(minutes=offset_minutes)
     return Briefing(
         generated_at=now,
         summary="Good morning. All quiet on the swarm front.",
@@ -50,7 +49,7 @@ def test_briefing_fields():


 def test_briefing_default_period_is_6_hours():
-    b = Briefing(generated_at=datetime.now(timezone.utc), summary="test")
+    b = Briefing(generated_at=datetime.now(UTC), summary="test")
     delta = b.period_end - b.period_start
     assert abs(delta.total_seconds() - 6 * 3600) < 5  # within 5 seconds
@@ -101,10 +100,7 @@ def test_load_latest_returns_most_recent(tmp_db):
     loaded = _load_latest(db_path=tmp_db)
     assert loaded is not None
     # Should return the newer one (generated_at closest to now)
-    assert (
-        abs((loaded.generated_at.replace(tzinfo=timezone.utc) - new.generated_at).total_seconds())
-        < 5
-    )
+    assert abs((loaded.generated_at.replace(tzinfo=UTC) - new.generated_at).total_seconds()) < 5


 # ---------------------------------------------------------------------------
@@ -264,7 +260,7 @@ async def test_notify_briefing_ready_fires_when_approvals_exist():
            description="A test item",
             proposed_action="do something",
             impact="low",
-            created_at=datetime.now(timezone.utc),
+            created_at=datetime.now(UTC),
             status="pending",
         ),
     ]
@@ -79,7 +79,7 @@ def test_morning_ritual_creates_tasks_and_journal_entry(client: TestClient, db_s
     tasks = db_session.query(Task).all()
     assert len(tasks) == 4

-    mit_tasks = db_session.query(Task).filter(Task.is_mit == True).all()
+    mit_tasks = db_session.query(Task).filter(Task.is_mit).all()
     assert len(mit_tasks) == 2

     now_task = db_session.query(Task).filter(Task.state == TaskState.NOW).first()
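The `Task.is_mit == True` change is ruff's E712 fix: comparing against the `True` literal is redundant when truth-testing the value suffices. (For SQLAlchemy specifically, passing the bare column expression to `.filter()` still generates a boolean clause, so the fix is safe here; with plain Python values the equivalence is direct.) A minimal dict-based sketch of the plain-Python case:

```python
def count_mits(tasks):
    # E712: `t["is_mit"] == True` compares against the literal; truth-testing
    # the value directly is the idiomatic equivalent for Python booleans.
    return sum(1 for t in tasks if t["is_mit"])


tasks = [{"is_mit": True}, {"is_mit": False}, {"is_mit": True}]
assert count_mits(tasks) == 2
```

One caveat worth knowing: `x == True` and `bool(x)` differ for truthy non-booleans (e.g. `1 == True` but `2 != True`), so E712 fixes should be reviewed where non-boolean values can occur.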
@@ -2,8 +2,6 @@

 from unittest.mock import patch

 import pytest


 class TestExperimentsRoute:
     """Tests for /experiments endpoints."""
@@ -235,9 +235,9 @@ def test_L501_no_innerhtml_with_user_input(client):
     # Check for dangerous patterns: innerHTML += `${message}` etc.
     blocks = re.findall(r"innerHTML\s*\+=?\s*`([^`]*)`", html, re.DOTALL)
     for block in blocks:
-        assert (
-            "${message}" not in block
-        ), "innerHTML template literal contains ${message} — XSS vulnerability"
+        assert "${message}" not in block, (
+            "innerHTML template literal contains ${message} — XSS vulnerability"
+        )


 def test_L502_uses_textcontent_for_messages(client):
@@ -2,8 +2,6 @@

 from unittest.mock import patch

 import pytest


 class TestSovereigntyEndpoint:
     """Tests for /health/sovereignty endpoint."""
@@ -221,9 +221,9 @@
     """TimmyAirLLMAgent must expose run() so the dashboard route can call it."""
     from timmy.backends import TimmyAirLLMAgent

-    assert hasattr(
-        TimmyAirLLMAgent, "run"
-    ), "TimmyAirLLMAgent is missing run() — dashboard will fail with AirLLM backend"
+    assert hasattr(TimmyAirLLMAgent, "run"), (
+        "TimmyAirLLMAgent is missing run() — dashboard will fail with AirLLM backend"
+    )


 def test_M602_airllm_run_returns_content_attribute():
@@ -322,9 +322,9 @@ def test_M701_mobile_chat_no_raw_message_interpolation():
     html = _mobile_html()
     # The vulnerable pattern is `${message}` inside a template literal assigned to innerHTML
     # After the fix, message must only appear via textContent assignment
-    assert (
-        "textContent = message" in html or "textContent=message" in html
-    ), "mobile.html still uses innerHTML + ${message} interpolation — XSS vulnerability"
+    assert "textContent = message" in html or "textContent=message" in html, (
+        "mobile.html still uses innerHTML + ${message} interpolation — XSS vulnerability"
+    )


 def test_M702_mobile_chat_user_input_not_in_innerhtml_template_literal():
@@ -333,23 +333,23 @@ def test_M702_mobile_chat_user_input_not_in_innerhtml_template_literal():
     # Find all innerHTML += `...` blocks and verify none contain ${message}
     blocks = re.findall(r"innerHTML\s*\+=?\s*`([^`]*)`", html, re.DOTALL)
     for block in blocks:
-        assert (
-            "${message}" not in block
-        ), "innerHTML template literal still contains ${message} — XSS vulnerability"
+        assert "${message}" not in block, (
+            "innerHTML template literal still contains ${message} — XSS vulnerability"
+        )


 def test_M703_swarm_live_agent_name_not_interpolated_in_innerhtml():
     """swarm_live.html must not put ${agent.name} inside innerHTML template literals."""
     html = _swarm_live_html()
     blocks = re.findall(r"innerHTML\s*=\s*agents\.map\([^;]+\)\.join\([^)]*\)", html, re.DOTALL)
-    assert (
-        len(blocks) == 0
-    ), "swarm_live.html still uses innerHTML=agents.map(…) with interpolated agent data — XSS vulnerability"
+    assert len(blocks) == 0, (
+        "swarm_live.html still uses innerHTML=agents.map(…) with interpolated agent data — XSS vulnerability"
+    )


 def test_M704_swarm_live_uses_textcontent_for_agent_data():
     """swarm_live.html must use textContent (not innerHTML) to set agent name/description."""
     html = _swarm_live_html()
-    assert (
-        "textContent" in html
-    ), "swarm_live.html does not use textContent — agent data may be raw-interpolated into DOM"
+    assert "textContent" in html, (
+        "swarm_live.html does not use textContent — agent data may be raw-interpolated into DOM"
+    )
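The repeated assert rewrites in these hunks are purely a formatter style change: ruff's formatter keeps the condition bare and parenthesizes only the long failure message. One motivation for keeping the condition outside parentheses is a classic pitfall ruff separately flags as F631: parenthesizing the whole `condition, message` pair creates a tuple, which is always truthy, so the assert can never fail. A small sketch:

```python
# A non-empty tuple is always truthy, so this "assertion" could never fail;
# ruff flags the shape `assert (cond, "msg")` as F631 (assert on a tuple).
suspicious = ("x" in "abc", "a message that was meant for the assert")
assert bool(suspicious) is True  # truthy regardless of the first element

# The formatter's preferred shape keeps the condition bare and wraps only the
# long message, so the comma keeps its role as the assert separator:
value = 3
assert value == 3, (
    "value should be 3; this message is evaluated only on failure"
)
```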
@@ -2,8 +2,6 @@

 from unittest.mock import AsyncMock, MagicMock, patch

 from integrations.paperclip.models import PaperclipAgent, PaperclipGoal, PaperclipIssue

 # ── GET /api/paperclip/status ────────────────────────────────────────────────
@@ -100,7 +100,6 @@ def test_swarm_live_page_returns_200(client):

 def test_swarm_live_websocket_sends_initial_state(client):
     """WebSocket at /swarm/live sends initial_state on connect."""
-    import json

     with client.websocket_connect("/swarm/live") as ws:
         data = ws.receive_json()
@@ -169,8 +168,10 @@ def test_notifications_bell_dropdown_in_html(client):

 def test_create_timmy_uses_timeout_not_request_timeout():
     """create_timmy() should pass timeout=300, not request_timeout."""
-    with patch("timmy.agent.Ollama") as mock_ollama, patch("timmy.agent.SqliteDb"), patch(
-        "timmy.agent.Agent"
+    with (
+        patch("timmy.agent.Ollama") as mock_ollama,
+        patch("timmy.agent.SqliteDb"),
+        patch("timmy.agent.Agent"),
     ):
         mock_ollama.return_value = MagicMock()
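The `with` rewrites here and in the agentic-loop tests below use parenthesized context managers, legal since Python 3.10's PEG parser: multiple managers go one per line with a trailing comma instead of the awkward mid-call line break the old formatting produced. A sketch of the same shape with a hypothetical `managed()` stand-in for `unittest.mock.patch(...)`:

```python
from contextlib import contextmanager

events = []


@contextmanager
def managed(name):
    # Hypothetical stand-in for unittest.mock.patch(...): records entry/exit.
    events.append(f"enter {name}")
    try:
        yield name
    finally:
        events.append(f"exit {name}")


# Python 3.10+ parenthesized form, matching the diff's formatting: one
# context manager per line, trailing comma, shared closing `):`.
with (
    managed("Ollama") as a,
    managed("SqliteDb"),
    managed("Agent"),
):
    assert a == "Ollama"

# Managers exit in reverse order of entry, same as the comma-chained form.
assert events[0] == "enter Ollama"
assert events[-1] == "exit Ollama"
```

Behaviour is identical to the old comma-chained `with`; only the layout changes, so the rewrite is safe wherever the interpreter is 3.10 or newer.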
@@ -1,6 +1,5 @@
 """Test security headers middleware in FastAPI app."""

 import pytest
 from fastapi.testclient import TestClient
@@ -32,8 +32,9 @@ async def test_multistep_chain_completes_all_steps():
         ]
     )

-    with patch("timmy.agentic_loop._get_loop_agent", return_value=mock_agent), patch(
-        "timmy.agentic_loop._broadcast_progress", new_callable=AsyncMock
+    with (
+        patch("timmy.agentic_loop._get_loop_agent", return_value=mock_agent),
+        patch("timmy.agentic_loop._broadcast_progress", new_callable=AsyncMock),
     ):
         result = await run_agentic_loop("Search AI news and write summary to file")
@@ -57,8 +58,9 @@ async def test_multistep_chain_adapts_on_failure():
         ]
     )

-    with patch("timmy.agentic_loop._get_loop_agent", return_value=mock_agent), patch(
-        "timmy.agentic_loop._broadcast_progress", new_callable=AsyncMock
+    with (
+        patch("timmy.agentic_loop._get_loop_agent", return_value=mock_agent),
+        patch("timmy.agentic_loop._broadcast_progress", new_callable=AsyncMock),
     ):
         result = await run_agentic_loop("Update config timeout to 60")
@@ -79,8 +81,9 @@ async def test_max_steps_enforced():
         ]
     )

-    with patch("timmy.agentic_loop._get_loop_agent", return_value=mock_agent), patch(
-        "timmy.agentic_loop._broadcast_progress", new_callable=AsyncMock
+    with (
+        patch("timmy.agentic_loop._get_loop_agent", return_value=mock_agent),
+        patch("timmy.agentic_loop._broadcast_progress", new_callable=AsyncMock),
     ):
         result = await run_agentic_loop("Do 5 things", max_steps=2)
@@ -106,8 +109,9 @@ async def test_progress_events_fire():
         ]
     )

-    with patch("timmy.agentic_loop._get_loop_agent", return_value=mock_agent), patch(
-        "timmy.agentic_loop._broadcast_progress", new_callable=AsyncMock
+    with (
+        patch("timmy.agentic_loop._get_loop_agent", return_value=mock_agent),
+        patch("timmy.agentic_loop._broadcast_progress", new_callable=AsyncMock),
     ):
         await run_agentic_loop("Do A and B", on_progress=on_progress)