forked from Rockachopa/Timmy-time-dashboard

Compare commits: claude/iss...claude/iss (28 commits)

| SHA1 |
|---|
| e55fc07f5e |
| 2d6bfe6ba1 |
| ebb2cad552 |
| 003e3883fb |
| 7dfbf05867 |
| 1cce28d1bb |
| 4c6b69885d |
| 6b2e6d9e8c |
| 2b238d1d23 |
| b7ad5bf1d9 |
| 2240ddb632 |
| 35d2547a0b |
| f62220eb61 |
| 72992b7cc5 |
| b5fb6a85cf |
| fedd164686 |
| 261b7be468 |
| 6691f4d1f3 |
| ea76af068a |
| b61fcd3495 |
| 1e1689f931 |
| acc0df00cf |
| a0c35202f3 |
| fe1d576c3c |
| 3e65271af6 |
| 697575e561 |
| e6391c599d |
| d697c3d93e |

AGENTS.md (38 lines changed)
@@ -34,6 +34,44 @@ Read [`CLAUDE.md`](CLAUDE.md) for architecture patterns and conventions.

---

## One-Agent-Per-Issue Convention

**An issue must only be worked on by one agent at a time.** Duplicate branches from
multiple agents on the same issue cause merge conflicts, redundant code, and wasted compute.

### Labels

When an agent picks up an issue, add the corresponding label:

| Label | Meaning |
|-------|---------|
| `assigned-claude` | Claude is actively working this issue |
| `assigned-gemini` | Gemini is actively working this issue |
| `assigned-kimi` | Kimi is actively working this issue |
| `assigned-manus` | Manus is actively working this issue |

### Rules

1. **Before starting an issue**, check that none of the `assigned-*` labels are present.
   If one is, skip the issue — another agent owns it.
2. **When you start**, add the label matching your agent (e.g. `assigned-claude`); see the sketch after this list.
3. **When your PR is merged or closed**, remove the label (or it auto-clears when
   the branch is deleted — see Auto-Delete below).
4. **Never assign the same issue to two agents simultaneously.**
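A minimal sketch of rules 2 and 3 against the standard Gitea issue-labels endpoints, assuming `GITEA_URL` and `GITEA_TOKEN` are set; the issue number and numeric label ID below are illustrative, not values from this repo:

```bash
ISSUE=1234        # hypothetical issue number
LABEL_ID=42       # hypothetical numeric ID of the `assigned-claude` label

# Claim the issue (rule 2)
curl -s -X POST \
  -H "Authorization: token $GITEA_TOKEN" \
  -H "Content-Type: application/json" \
  -d "{\"labels\": [$LABEL_ID]}" \
  "$GITEA_URL/api/v1/repos/rockachopa/Timmy-time-dashboard/issues/$ISSUE/labels"

# Release the issue (rule 3), if the label did not auto-clear
curl -s -X DELETE \
  -H "Authorization: token $GITEA_TOKEN" \
  "$GITEA_URL/api/v1/repos/rockachopa/Timmy-time-dashboard/issues/$ISSUE/labels/$LABEL_ID"
```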

### Auto-Delete Merged Branches

`default_delete_branch_after_merge` is **enabled** on this repo. Branches are
automatically deleted after a PR merges — no manual cleanup is needed and no stale
`claude/*`, `gemini/*`, or `kimi/*` branches accumulate.

If stale merged branches linger in your local clone, prune the remote-tracking refs with:

```bash
git fetch --prune
```

---

## Merge Policy (PR-Only)

**Gitea branch protection is active on `main`.** This is not a suggestion.
@@ -25,6 +25,19 @@ providers:
    tier: local
    url: "http://localhost:11434"
    models:
      # ── Dual-model routing: Qwen3-8B (fast) + Qwen3-14B (quality) ──────────
      # Both models fit simultaneously: ~6.6 GB + ~10.5 GB = ~17 GB combined.
      # Requires OLLAMA_MAX_LOADED_MODELS=2 (set in .env) to stay hot.
      # Ref: issue #1065 — Qwen3-8B/14B dual-model routing strategy
      - name: qwen3:8b
        context_window: 32768
        capabilities: [text, tools, json, streaming, routine]
        description: "Qwen3-8B Q6_K — fast router for routine tasks (~6.6 GB, 45-55 tok/s)"
      - name: qwen3:14b
        context_window: 40960
        capabilities: [text, tools, json, streaming, complex, reasoning]
        description: "Qwen3-14B Q5_K_M — complex reasoning and planning (~10.5 GB, 20-28 tok/s)"

      # Text + Tools models
      - name: qwen3:30b
        default: true
@@ -187,6 +200,20 @@ fallback_chains:
    - dolphin3        # base Dolphin 3.0 8B (uncensored, no custom system prompt)
    - qwen3:30b       # primary fallback — usually sufficient with a good system prompt

  # ── Complexity-based routing chains (issue #1065) ───────────────────────
  # Routine tasks: prefer Qwen3-8B for low latency (~45-55 tok/s)
  routine:
    - qwen3:8b              # Primary fast model
    - llama3.1:8b-instruct  # Fallback fast model
    - llama3.2:3b           # Smallest available

  # Complex tasks: prefer Qwen3-14B for quality (~20-28 tok/s)
  complex:
    - qwen3:14b     # Primary quality model
    - hermes4-14b   # Native tool calling, hybrid reasoning
    - qwen3:30b     # Highest local quality
    - qwen2.5:14b   # Additional fallback

# ── Custom Models ───────────────────────────────────────────────────────────
# Register custom model weights for per-agent assignment.
# Supports GGUF (Ollama), safetensors, and HuggingFace checkpoint dirs.

docs/GITEA_AUDIT_2026-03-23.md (new file, 244 lines)
@@ -0,0 +1,244 @@
# Gitea Activity & Branch Audit — 2026-03-23

**Requested by:** Issue #1210
**Audited by:** Claude (Sonnet 4.6)
**Date:** 2026-03-23
**Scope:** All repos under the sovereign AI stack

---

## Executive Summary

- **18 repos audited** across 9 Gitea organizations/users
- **~65–70 branches identified** as safe to delete (merged or abandoned)
- **Open PRs in 4 repos** are bottlenecks awaiting review
- **3+ instances of duplicate work** across repos and agents
- **5+ branches** contain valuable unmerged code with no open PR
- **5 PRs closed without merge** on active p0-critical issues in Timmy-time-dashboard

Improvement tickets have been filed on each affected repo following this report.

---

## Repo-by-Repo Findings

---

### 1. rockachopa/Timmy-time-dashboard

**Status:** Most active repo. 1,200+ PRs, 50+ branches.

#### Dead/Abandoned Branches

| Branch | Last Commit | Status |
|--------|-------------|--------|
| `feature/voice-customization` | 2026-03-22 | Gemini-created, no PR, abandoned |
| `feature/enhanced-memory-ui` | 2026-03-22 | Gemini-created, no PR, abandoned |
| `feature/soul-customization` | 2026-03-22 | Gemini-created, no PR, abandoned |
| `feature/dreaming-mode` | 2026-03-22 | Gemini-created, no PR, abandoned |
| `feature/memory-visualization` | 2026-03-22 | Gemini-created, no PR, abandoned |
| `feature/voice-customization-ui` | 2026-03-22 | Gemini-created, no PR, abandoned |
| `feature/issue-1015` | 2026-03-22 | Gemini-created, no PR, abandoned |
| `feature/issue-1016` | 2026-03-22 | Gemini-created, no PR, abandoned |
| `feature/issue-1017` | 2026-03-22 | Gemini-created, no PR, abandoned |
| `feature/issue-1018` | 2026-03-22 | Gemini-created, no PR, abandoned |
| `feature/issue-1019` | 2026-03-22 | Gemini-created, no PR, abandoned |
| `feature/self-reflection` | 2026-03-22 | Only merge-from-main commits, no unique work |
| `feature/memory-search-ui` | 2026-03-22 | Only merge-from-main commits, no unique work |
| `claude/issue-962` | 2026-03-22 | Automated salvage commit only |
| `claude/issue-972` | 2026-03-22 | Automated salvage commit only |
| `gemini/issue-1006` | 2026-03-22 | Incomplete agent session |
| `gemini/issue-1008` | 2026-03-22 | Incomplete agent session |
| `gemini/issue-1010` | 2026-03-22 | Incomplete agent session |
| `gemini/issue-1134` | 2026-03-22 | Incomplete agent session |
| `gemini/issue-1139` | 2026-03-22 | Incomplete agent session |

#### Duplicate Branches (Identical SHA)

| Branch A | Branch B | Action |
|----------|----------|--------|
| `feature/internal-monologue` | `feature/issue-1005` | Exact duplicate — delete one |
| `claude/issue-1005` | (above) | Merge-from-main only — delete |

#### Unmerged Work With No Open PR (HIGH PRIORITY)

| Branch | Content | Risk |
|--------|---------|------|
| `claude/issue-987` | Content moderation pipeline, Llama Guard integration | No open PR — potentially lost |
| `claude/issue-1011` | Automated skill discovery system | No open PR — potentially lost |
| `gemini/issue-976` | Semantic index for research outputs | No open PR — potentially lost |

#### PRs Closed Without Merge (Issues Still Open)

| PR | Title | Issue Status |
|----|-------|--------------|
| PR#1163 | Three-Strike Detector (#962) | p0-critical, still open |
| PR#1162 | Session Sovereignty Report Generator (#957) | p0-critical, still open |
| PR#1157 | Qwen3 routing | open |
| PR#1156 | Agent Dreaming Mode | open |
| PR#1145 | Qwen3-14B config | open |

#### Workflow Observations

- `loop-cycle` bot auto-creates micro-fix PRs at high frequency (PR numbers climbing past 1209 rapidly)
- Many `gemini/*` branches represent incomplete agent sessions, not full feature work
- Issues get reassigned across agents, causing duplicate branch proliferation

---

### 2. rockachopa/hermes-agent

**Status:** Active — AutoLoRA training pipeline in progress.

#### Open PRs Awaiting Review

| PR | Title | Age |
|----|-------|-----|
| PR#33 | AutoLoRA v1 MLX QLoRA training pipeline | ~1 week |

#### Valuable Unmerged Branches (No PR)

| Branch | Content | Age |
|--------|---------|-----|
| `sovereign` | Full fallback chain: Groq/Kimi/Ollama cascade recovery | 9 days |
| `fix/vision-api-key-fallback` | Vision API key fallback fix | 9 days |

#### Stale Merged Branches (~12)

12 merged `claude/*` and `gemini/*` branches are safe to delete.

---

### 3. rockachopa/the-matrix

**Status:** 8 open PRs from the `claude/the-matrix` fork, all awaiting review, all batch-created on 2026-03-23.

#### Open PRs (ALL Awaiting Review)

| PR | Feature |
|----|---------|
| PR#9–16 | Touch controls, agent feed, particles, audio, day/night cycle, metrics panel, ASCII logo, click-to-view-PR |

These were created in a single agent session within 5 minutes — they need human review before merge.

---

### 4. replit/timmy-tower

**Status:** Very active — 100+ PRs, complex feature roadmap.

#### Open PRs Awaiting Review

| PR | Title | Age |
|----|-------|-----|
| PR#93 | Task decomposition view | Recent |
| PR#80 | `session_messages` table | 22 hours |

#### Unmerged Work With No Open PR

| Branch | Content |
|--------|---------|
| `gemini/issue-14` | NIP-07 Nostr identity |
| `gemini/issue-42` | Timmy animated eyes |
| `claude/issue-11` | Kimi + Perplexity agent integrations |
| `claude/issue-13` | Nostr event publishing |
| `claude/issue-29` | Mobile Nostr identity |
| `claude/issue-45` | Test kit |
| `claude/issue-47` | SQL migration helpers |
| `claude/issue-67` | Session Mode UI |

#### Cleanup

~30 merged `claude/*` and `gemini/*` branches are safe to delete.

---

### 5. replit/token-gated-economy

**Status:** Active roadmap, no current open PRs.

#### Stale Branches (~23)

- 8 Replit Agent branches from 2026-03-19 (PRs closed/merged)
- 15 merged `claude/issue-*` branches

All are safe to delete.

---

### 6. hermes/timmy-time-app

**Status:** 2-commit repo, created 2026-03-14, no activity since. **Candidate for archival.**

Functionality appears to be superseded by other repos in the stack. Recommend archiving or deleting if not planned for future development.

---

### 7. google/maintenance-tasks & google/wizard-council-automation

**Status:** Single-commit repos from 2026-03-19 created by "Google AI Studio". No follow-up activity.

Unclear ownership and purpose. Recommend clarifying with rockachopa whether these are active or can be archived.

---

### 8. hermes/hermes-config

**Status:** Single branch, updated 2026-03-23 (today). Active — contains Timmy orchestrator config.

No action needed.

---

### 9. Timmy_Foundation/the-nexus

**Status:** Greenfield — created 2026-03-23. 19 issues filed as roadmap. PR#2 (contributor audit) open.

No cleanup needed yet. PR#2 needs review.

---

### 10. rockachopa/alexanderwhitestone.com

**Status:** All recent `claude/*` PRs merged. 7 non-main branches are post-merge and safe to delete.

---

### 11. hermes/hermes-config, rockachopa/hermes-config, Timmy_Foundation/.profile

**Status:** Dormant config repos. No action needed.

---

## Cross-Repo Patterns & Inefficiencies

### Duplicate Work

1. **Timmy spring/wobble physics** built independently in both `replit/timmy-tower` and `replit/token-gated-economy`
2. **Nostr identity logic** fragmented across 3 repos with no shared library
3. **`feature/internal-monologue` = `feature/issue-1005`** in Timmy-time-dashboard — identical SHA, exact duplicate

### Agent Workflow Issues

- The same issue assigned to both `gemini/*` and `claude/*` agents creates duplicate branches
- Agent salvage commits are checkpoint-only — not complete work, but they clutter the branch list
- Gemini `feature/*` branches created on 2026-03-22 with no PRs filed — likely a failed agent session that created branches but didn't complete the loop

### Review Bottlenecks

| Repo | Waiting PRs | Notes |
|------|-------------|-------|
| rockachopa/the-matrix | 8 | Batch-created, need human review |
| replit/timmy-tower | 2 | Database schema and UI work |
| rockachopa/hermes-agent | 1 | AutoLoRA v1 — high value |
| Timmy_Foundation/the-nexus | 1 | Contributor audit |

---

## Recommended Actions

### Immediate (This Sprint)

1. **Review & merge** PR#33 in `hermes-agent` (AutoLoRA v1)
2. **Review** 8 open PRs in `the-matrix` before merging as a batch
3. **Rescue** unmerged work in `claude/issue-987`, `claude/issue-1011`, `gemini/issue-976` — file new PRs or close branches
4. **Delete duplicate** `feature/internal-monologue` / `feature/issue-1005` branches

### Cleanup Sprint

5. **Delete ~65 stale branches** across all repos (itemized above; deletion mechanics are sketched after this list)
6. **Investigate** the 5 closed-without-merge PRs in Timmy-time-dashboard for p0-critical issues
7. **Archive** `hermes/timmy-time-app` if no longer needed
8. **Clarify** ownership of `google/maintenance-tasks` and `google/wizard-council-automation`
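For item 5, a sketch of the deletion mechanics, assuming each branch is first confirmed merged; the branch name is illustrative (it is one of the stale `claude/*` branches listed above):

```bash
# Confirm which remote branches are fully merged into main before touching them.
git branch -r --merged origin/main | grep 'origin/claude/'

# Delete one confirmed-stale branch on the server, then prune local tracking refs.
git push origin --delete claude/issue-962
git fetch --prune
```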

### Process Improvements

9. **Enforce one-agent-per-issue** policy to prevent duplicate `claude/*` / `gemini/*` branches
10. **Add branch protection** requiring PR before merge on `main` for all repos
11. **Set a branch retention policy** — auto-delete merged branches (GitHub/Gitea supports this)
12. **Share common libraries** for Nostr identity and animation physics across repos

---

*Report generated by Claude audit agent. Improvement tickets filed per repo as follow-up to this report.*

docs/adr/024-nostr-identity-canonical-location.md (new file, 160 lines)
@@ -0,0 +1,160 @@
# ADR-024: Canonical Nostr Identity Location

**Status:** Accepted
**Date:** 2026-03-23
**Issue:** #1223
**Refs:** #1210 (duplicate-work audit), ROADMAP.md Phase 2

---

## Context

Nostr identity logic has been independently implemented in at least three
repos (`replit/timmy-tower`, `replit/token-gated-economy`,
`rockachopa/Timmy-time-dashboard`), each building keypair generation, event
publishing, and NIP-07 browser-extension auth in isolation.

This duplication causes:

- Bug fixes applied in one repo but silently missed in others.
- Diverging implementations of the same NIPs (NIP-01, NIP-07, NIP-44).
- Agent time wasted re-implementing logic that already exists.

ROADMAP.md Phase 2 already names `timmy-nostr` as the planned home for Nostr
infrastructure. This ADR makes that decision explicit and prescribes how
other repos consume it.

---

## Decision

**The canonical home for all Nostr identity logic is `rockachopa/timmy-nostr`.**

All other repos (`Timmy-time-dashboard`, `timmy-tower`,
`token-gated-economy`) become consumers, not implementers, of Nostr identity
primitives.

### What lives in `timmy-nostr`

| Module | Responsibility |
|--------|----------------|
| `nostr_id/keypair.py` | Keypair generation, nsec/npub encoding, encrypted storage |
| `nostr_id/identity.py` | Agent identity lifecycle (NIP-01 kind:0 profile events) |
| `nostr_id/auth.py` | NIP-07 browser-extension signer; NIP-42 relay auth |
| `nostr_id/event.py` | Event construction, signing, serialisation (NIP-01) |
| `nostr_id/crypto.py` | NIP-44 encryption (XChaCha20-Poly1305 v2) |
| `nostr_id/nip05.py` | DNS-based identifier verification |
| `nostr_id/relay.py` | WebSocket relay client (publish / subscribe) |

### What does NOT live in `timmy-nostr`

- Business logic that combines Nostr with application-specific concepts
  (e.g. "publish a task-completion event" lives in the application layer
  that calls `timmy-nostr`).
- Reputation scoring algorithms (depends on application policy).
- Dashboard UI components.

---

## How Other Repos Reference `timmy-nostr`

### Python repos (`Timmy-time-dashboard`, `timmy-tower`)

Add to `pyproject.toml` dependencies:

```toml
[tool.poetry.dependencies]
timmy-nostr = {git = "https://gitea.hermes.local/rockachopa/timmy-nostr.git", tag = "v0.1.0"}
```

Import pattern:

```python
from nostr_id.keypair import generate_keypair, load_keypair
from nostr_id.event import build_event, sign_event
from nostr_id.relay import NostrRelayClient
```

### JavaScript/TypeScript repos (`token-gated-economy` frontend)

Add to `package.json` (once published or via local path):

```json
"dependencies": {
  "timmy-nostr": "rockachopa/timmy-nostr#v0.1.0"
}
```

Import pattern:

```typescript
import { generateKeypair, signEvent } from 'timmy-nostr';
```

Until `timmy-nostr` publishes a JS package, use the NIP-07 browser extension
directly and delegate all key management to the browser signer — never
re-implement crypto in JS without the shared library.

---

## Migration Plan

Existing duplicated code should be migrated in this order:

1. **Keypair generation** — highest duplication, clearest interface.
2. **NIP-01 event construction/signing** — used by all three repos.
3. **NIP-07 browser auth** — currently in `timmy-tower` and `token-gated-economy`.
4. **NIP-44 encryption** — lowest priority, least duplicated.

Each step: implement in `timmy-nostr` → cut over one repo → delete the
duplicate → repeat.
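A sketch of one such cycle for step 1, assuming the repos are cloned side by side and v0.1.0 is the first tag; the duplicate-module path is illustrative:

```bash
# In timmy-nostr: publish the first stable tag.
cd timmy-nostr
git tag v0.1.0 && git push origin v0.1.0

# In one consumer repo: pin the tag (see the pyproject.toml snippet above),
# remove the local duplicate, and verify nothing regressed.
cd ../Timmy-time-dashboard
pip install "git+https://gitea.hermes.local/rockachopa/timmy-nostr.git@v0.1.0"
git rm -r src/nostr/     # hypothetical path of the duplicated keypair code
tox -e unit
```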

---

## Interface Contract

`timmy-nostr` must expose a stable public API:

```python
# Keypair
keypair = generate_keypair()  # -> NostrKeypair(nsec, npub, privkey_bytes, pubkey_bytes)
keypair = load_keypair(encrypted_nsec, secret_key)

# Events
event = build_event(kind=0, content=profile_json, keypair=keypair)
event = sign_event(event, keypair)  # attaches .id and .sig

# Relay
async with NostrRelayClient(url) as relay:
    await relay.publish(event)
    async for msg in relay.subscribe(filters):
        ...
```

Breaking changes to this interface require a semver major bump and a
migration note in `timmy-nostr`'s CHANGELOG.

---

## Consequences

- **Positive:** Bug fixes in cryptographic or protocol code propagate to all
  repos via a version bump.
- **Positive:** New NIPs are implemented once and adopted everywhere.
- **Negative:** Adds a cross-repo dependency; version-pinning discipline is
  required.
- **Negative:** `timmy-nostr` must be stood up and tagged before any
  migration can begin.

---

## Action Items

- [ ] Create `rockachopa/timmy-nostr` repo with the module structure above.
- [ ] Implement keypair generation + NIP-01 signing as v0.1.0.
- [ ] Replace `Timmy-time-dashboard` inline Nostr code (if any) with the
  `timmy-nostr` import once v0.1.0 is tagged.
- [ ] Add `src/infrastructure/clients/nostr_client.py` as the thin
  application-layer wrapper (see ROADMAP.md §2.6).
- [ ] File issues in `timmy-tower` and `token-gated-economy` to migrate their
  duplicate implementations.

docs/model-benchmarks.md (new file, 1244 lines; diff suppressed because it is too large)

docs/pr-recovery-1219.md (new file, 75 lines)
@@ -0,0 +1,75 @@
# PR Recovery Investigation — Issue #1219

**Audit source:** Issue #1210

Five PRs were closed without merge while their parent issues remained open and
marked p0-critical. This document records the investigation findings and the
path to resolution for each.

---

## Root Cause

Per Timmy's comment on #1219: all five PRs were closed due to **merge conflicts
during the mass-merge cleanup cycle** (a rebase storm), not due to code
quality problems or a changed approach. The code in each PR was correct;
the branches simply became stale.

---

## Status Matrix

| PR | Feature | Issue | Why Closed | Issue State | Resolution |
|----|---------|-------|------------|-------------|------------|
| #1163 | Three-Strike Detector | #962 | Rebase storm | **Closed ✓** | v2 merged via PR #1232 |
| #1162 | Session Sovereignty Report | #957 | Rebase storm | **Open** | PR #1263 (v3 — rebased) |
| #1157 | Qwen3-8B/14B routing | #1065 | Rebase storm | **Closed ✓** | v2 merged via PR #1233 |
| #1156 | Agent Dreaming Mode | #1019 | Rebase storm | **Open** | PR #1264 (v3 — rebased) |
| #1145 | Qwen3-14B config | #1064 | Rebase storm | **Closed ✓** | Code present on main |

---

## Detail: Already Resolved

### PR #1163 → Issue #962 (Three-Strike Detector)

- **Why closed:** merge conflict during rebase storm
- **Resolution:** `src/timmy/sovereignty/three_strike.py` and
  `src/dashboard/routes/three_strike.py` are present on `main` (landed via
  PR #1232). Issue #962 is closed.

### PR #1157 → Issue #1065 (Qwen3-8B/14B dual-model routing)

- **Why closed:** merge conflict during rebase storm
- **Resolution:** `src/infrastructure/router/classifier.py` and
  `src/infrastructure/router/cascade.py` are present on `main` (landed via
  PR #1233). Issue #1065 is closed.

### PR #1145 → Issue #1064 (Qwen3-14B config)

- **Why closed:** merge conflict during rebase storm
- **Resolution:** `Modelfile.timmy`, `Modelfile.qwen3-14b`, and the `config.py`
  defaults (`ollama_model = "qwen3:14b"`) are present on `main`. Issue #1064
  is closed.

---

## Detail: Requiring Action

### PR #1162 → Issue #957 (Session Sovereignty Report Generator)

- **Why closed:** merge conflict during rebase storm
- **Branch preserved:** `claude/issue-957-v2` (one feature commit)
- **Action taken:** Rebased onto current `main`, resolved conflict in
  `src/timmy/sovereignty/__init__.py` (both three-strike and session-report
  docstrings kept). All 458 unit tests pass.
- **New PR:** #1263 (`claude/issue-957-v3` → `main`)

### PR #1156 → Issue #1019 (Agent Dreaming Mode)

- **Why closed:** merge conflict during rebase storm
- **Branch preserved:** `claude/issue-1019-v2` (one feature commit)
- **Action taken:** Rebased onto current `main`, resolved conflict in
  `src/dashboard/app.py` (both `three_strike_router` and `dreaming_router`
  registered). All 435 unit tests pass.
- **New PR:** #1264 (`claude/issue-1019-v3` → `main`)
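For reference, the rescue flow applied to both branches above reduces to a few standard git steps (a sketch, assuming the preserved `-v2` branch still exists on `origin`; branch names follow this repo's convention):

```bash
git fetch origin
git checkout -b claude/issue-957-v3 origin/claude/issue-957-v2
git rebase origin/main                   # resolve the stale conflicts here
tox -e unit                              # confirm the suite still passes
git push -u origin claude/issue-957-v3   # then open the v3 PR against main
```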

docs/research/autoresearch-h1-baseline.md (new file, 132 lines)
@@ -0,0 +1,132 @@
# Autoresearch H1 — M3 Max Baseline

**Status:** Baseline established (Issue #905)
**Hardware:** Apple M3 Max · 36 GB unified memory
**Date:** 2026-03-23
**Refs:** #905 · #904 (parent) · #881 (M3 Max compute) · #903 (MLX benchmark)

---

## Setup

### Prerequisites

```bash
# Install MLX (Apple Silicon — definitively faster than llama.cpp per #903)
pip install mlx mlx-lm

# Install project deps
tox -e dev   # or: pip install -e '.[dev]'
```
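Before kicking off a run it can be worth confirming that MLX resolves to the Metal GPU (a quick sanity check, not part of the documented setup):

```bash
# Should print a gpu device; a cpu device means the Metal backend is not in use.
python -c 'import mlx.core as mx; print(mx.default_device())'
```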

### Clone & prepare

`prepare_experiment` in `src/timmy/autoresearch.py` handles the clone.
On Apple Silicon it automatically sets `AUTORESEARCH_BACKEND=mlx` and
`AUTORESEARCH_DATASET=tinystories`.

```python
from timmy.autoresearch import prepare_experiment
status = prepare_experiment("data/experiments", dataset="tinystories", backend="auto")
print(status)
```

Or via the dashboard: `POST /experiments/start` (requires `AUTORESEARCH_ENABLED=true`).

### Configuration (`.env` / environment)

```
AUTORESEARCH_ENABLED=true
AUTORESEARCH_DATASET=tinystories   # lower-entropy dataset, faster iteration on Mac
AUTORESEARCH_BACKEND=auto          # resolves to "mlx" on Apple Silicon
AUTORESEARCH_TIME_BUDGET=300       # 5-minute wall-clock budget per experiment
AUTORESEARCH_MAX_ITERATIONS=100
AUTORESEARCH_METRIC=val_bpb
```

### Why TinyStories?

Karpathy's recommendation for resource-constrained hardware: lower entropy
means the model can learn meaningful patterns in less time and with a smaller
vocabulary, yielding cleaner val_bpb curves within the 5-minute budget.

---

## M3 Max Hardware Profile

| Spec | Value |
|------|-------|
| Chip | Apple M3 Max |
| CPU cores | 16 (12P + 4E) |
| GPU cores | 40 |
| Unified RAM | 36 GB |
| Memory bandwidth | 400 GB/s |
| MLX support | Yes (confirmed #903) |

MLX utilises the unified memory architecture — model weights, activations, and
training data all share the same physical pool, eliminating PCIe transfers.
This gives the M3 Max a significant throughput advantage over external GPU setups
for models that fit in 36 GB.

---

## Community Reference Data

| Hardware | Experiments | Succeeded | Failed | Outcome |
|----------|-------------|-----------|--------|---------|
| Mac Mini M4 | 35 | 7 | 28 | Model improved by simplifying |
| Shopify (overnight) | ~50 | — | — | 19% quality gain; smaller beat 2× baseline |
| SkyPilot (16× GPU, 8 h) | ~910 | — | — | 2.87% improvement |
| Karpathy (H100, 2 days) | ~700 | 20+ | — | 11% training speedup |

**Mac Mini M4 failure rate: 80% (28/35).** Failures are expected and by design —
the 5-minute budget deliberately prunes slow experiments. The 20% success rate
still yielded an improved model.

---

## Baseline Results (M3 Max)

> Fill in after running: `timmy learn --target <module> --metric val_bpb --budget 5 --max-experiments 50`

| Run | Date | Experiments | Succeeded | val_bpb (start) | val_bpb (end) | Δ |
|-----|------|-------------|-----------|-----------------|---------------|---|
| 1 | — | — | — | — | — | — |

### Throughput estimate

Based on the M3 Max hardware profile and the Mac Mini M4 community data, expected
throughput is **8–14 experiments/hour** with the 5-minute budget and TinyStories
dataset. The M3 Max has more GPU cores and higher memory bandwidth than the
Mac Mini M4, so per-experiment performance should be at least comparable.

---

## Apple Silicon Compatibility Notes

### MLX path (recommended)

- Install: `pip install mlx mlx-lm`
- `AUTORESEARCH_BACKEND=auto` resolves to `mlx` on arm64 macOS
- Pros: unified memory, no PCIe overhead, native Metal backend
- Cons: MLX op coverage is a subset of PyTorch; some custom CUDA kernels won't port

### llama.cpp path (fallback)

- Use when MLX op support is insufficient
- Set `AUTORESEARCH_BACKEND=cpu` to force CPU mode
- Slower throughput but broader op compatibility

### Known issues

- `subprocess.TimeoutExpired` is the normal termination path — autoresearch
  treats timeout as a completed-but-pruned experiment, not a failure
- Large batch sizes may trigger OOM if other processes hold unified memory;
  set `PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0` to disable the MPS high-watermark

---

## Next Steps (H2)

See #904 Horizon 2 for the meta-autoresearch plan: expand experiment units from
code changes → system configuration changes (prompts, tools, memory strategies).

index_research_docs.py (new file, 33 lines)
@@ -0,0 +1,33 @@
"""Index every Markdown report under docs/research/ into the memory store."""

import sys
from pathlib import Path

# Add the src directory to the Python path
sys.path.insert(0, str(Path(__file__).parent / "src"))

from timmy.memory_system import memory_store


def index_research_documents():
    research_dir = Path("docs/research")
    if not research_dir.is_dir():
        print(f"Research directory not found: {research_dir}")
        return

    print(f"Indexing research documents from {research_dir}...")
    indexed_count = 0
    for file_path in research_dir.glob("*.md"):
        try:
            content = file_path.read_text()
            topic = file_path.stem.replace("-", " ").title()  # Derive topic from filename
            print(f"Storing '{topic}' from {file_path.name}...")
            # Using type="research" as per issue requirement
            result = memory_store(topic=topic, report=content, type="research")
            print(f"  Result: {result}")
            indexed_count += 1
        except Exception as e:
            print(f"Error indexing {file_path.name}: {e}")
    print(f"Finished indexing. Total documents indexed: {indexed_count}")


if __name__ == "__main__":
    index_research_documents()
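A minimal invocation; run it from the repository root so the relative `docs/research/` path resolves (the `src/` path is added by the script itself):

```bash
python index_research_docs.py
```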

program.md (new file, 23 lines)
@@ -0,0 +1,23 @@
# Research Direction

This file guides the `timmy learn` autoresearch loop. Edit it to focus
autonomous experiments on a specific goal.

## Current Goal

Improve unit test pass rate across the codebase by identifying and fixing
fragile or failing tests.

## Target Module

(Set via `--target` when invoking `timmy learn`)

## Success Metric

unit_pass_rate — percentage of unit tests passing in `tox -e unit`.

## Notes

- Experiments run one at a time; each is time-boxed by `--budget`.
- Improvements are committed automatically; regressions are reverted.
- Use `--dry-run` to preview hypotheses without making changes (example below).
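For example, a dry run scoped to a hypothetical target module, using only flags already documented here and in the autoresearch baseline doc:

```bash
# The --target value is illustrative; --dry-run previews hypotheses without edits.
timmy learn --target src/timmy/memory_system.py --metric unit_pass_rate --budget 5 --dry-run
```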

scripts/benchmarks/01_tool_calling.py (new file, 195 lines)
@@ -0,0 +1,195 @@
|
||||
#!/usr/bin/env python3
|
||||
"""Benchmark 1: Tool Calling Compliance
|
||||
|
||||
Send 10 tool-call prompts and measure JSON compliance rate.
|
||||
Target: >90% valid JSON.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import json
|
||||
import re
|
||||
import sys
|
||||
import time
|
||||
from typing import Any
|
||||
|
||||
import requests
|
||||
|
||||
OLLAMA_URL = "http://localhost:11434"
|
||||
|
||||
TOOL_PROMPTS = [
|
||||
{
|
||||
"prompt": (
|
||||
"Call the 'get_weather' tool to retrieve the current weather for San Francisco. "
|
||||
"Return ONLY valid JSON with keys: tool, args."
|
||||
),
|
||||
"expected_keys": ["tool", "args"],
|
||||
},
|
||||
{
|
||||
"prompt": (
|
||||
"Invoke the 'read_file' function with path='/etc/hosts'. "
|
||||
"Return ONLY valid JSON with keys: tool, args."
|
||||
),
|
||||
"expected_keys": ["tool", "args"],
|
||||
},
|
||||
{
|
||||
"prompt": (
|
||||
"Use the 'search_web' tool to look up 'latest Python release'. "
|
||||
"Return ONLY valid JSON with keys: tool, args."
|
||||
),
|
||||
"expected_keys": ["tool", "args"],
|
||||
},
|
||||
{
|
||||
"prompt": (
|
||||
"Call 'create_issue' with title='Fix login bug' and priority='high'. "
|
||||
"Return ONLY valid JSON with keys: tool, args."
|
||||
),
|
||||
"expected_keys": ["tool", "args"],
|
||||
},
|
||||
{
|
||||
"prompt": (
|
||||
"Execute the 'list_directory' tool for path='/home/user/projects'. "
|
||||
"Return ONLY valid JSON with keys: tool, args."
|
||||
),
|
||||
"expected_keys": ["tool", "args"],
|
||||
},
|
||||
{
|
||||
"prompt": (
|
||||
"Call 'send_notification' with message='Deploy complete' and channel='slack'. "
|
||||
"Return ONLY valid JSON with keys: tool, args."
|
||||
),
|
||||
"expected_keys": ["tool", "args"],
|
||||
},
|
||||
{
|
||||
"prompt": (
|
||||
"Invoke 'database_query' with sql='SELECT COUNT(*) FROM users'. "
|
||||
"Return ONLY valid JSON with keys: tool, args."
|
||||
),
|
||||
"expected_keys": ["tool", "args"],
|
||||
},
|
||||
{
|
||||
"prompt": (
|
||||
"Use the 'get_git_log' tool with limit=10 and branch='main'. "
|
||||
"Return ONLY valid JSON with keys: tool, args."
|
||||
),
|
||||
"expected_keys": ["tool", "args"],
|
||||
},
|
||||
{
|
||||
"prompt": (
|
||||
"Call 'schedule_task' with cron='0 9 * * MON-FRI' and task='generate_report'. "
|
||||
"Return ONLY valid JSON with keys: tool, args."
|
||||
),
|
||||
"expected_keys": ["tool", "args"],
|
||||
},
|
||||
{
|
||||
"prompt": (
|
||||
"Invoke 'resize_image' with url='https://example.com/photo.jpg', "
|
||||
"width=800, height=600. "
|
||||
"Return ONLY valid JSON with keys: tool, args."
|
||||
),
|
||||
"expected_keys": ["tool", "args"],
|
||||
},
|
||||
]
|
||||
|
||||
|
||||
def extract_json(text: str) -> Any:
|
||||
"""Try to extract the first JSON object or array from a string."""
|
||||
# Try direct parse first
|
||||
text = text.strip()
|
||||
try:
|
||||
return json.loads(text)
|
||||
except json.JSONDecodeError:
|
||||
pass
|
||||
|
||||
# Try to find JSON block in markdown fences
|
||||
fence_match = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
|
||||
if fence_match:
|
||||
try:
|
||||
return json.loads(fence_match.group(1))
|
||||
except json.JSONDecodeError:
|
||||
pass
|
||||
|
||||
# Try to find first { ... }
|
||||
brace_match = re.search(r"\{[^{}]*(?:\{[^{}]*\}[^{}]*)?\}", text, re.DOTALL)
|
||||
if brace_match:
|
||||
try:
|
||||
return json.loads(brace_match.group(0))
|
||||
except json.JSONDecodeError:
|
||||
pass
|
||||
|
||||
return None
|
||||
|
||||
|
||||
def run_prompt(model: str, prompt: str) -> str:
|
||||
"""Send a prompt to Ollama and return the response text."""
|
||||
payload = {
|
||||
"model": model,
|
||||
"prompt": prompt,
|
||||
"stream": False,
|
||||
"options": {"temperature": 0.1, "num_predict": 256},
|
||||
}
|
||||
resp = requests.post(f"{OLLAMA_URL}/api/generate", json=payload, timeout=120)
|
||||
resp.raise_for_status()
|
||||
return resp.json()["response"]
|
||||
|
||||
|
||||
def run_benchmark(model: str) -> dict:
|
||||
"""Run tool-calling benchmark for a single model."""
|
||||
results = []
|
||||
total_time = 0.0
|
||||
|
||||
for i, case in enumerate(TOOL_PROMPTS, 1):
|
||||
start = time.time()
|
||||
try:
|
||||
raw = run_prompt(model, case["prompt"])
|
||||
elapsed = time.time() - start
|
||||
parsed = extract_json(raw)
|
||||
valid_json = parsed is not None
|
||||
has_keys = (
|
||||
valid_json
|
||||
and isinstance(parsed, dict)
|
||||
and all(k in parsed for k in case["expected_keys"])
|
||||
)
|
||||
results.append(
|
||||
{
|
||||
"prompt_id": i,
|
||||
"valid_json": valid_json,
|
||||
"has_expected_keys": has_keys,
|
||||
"elapsed_s": round(elapsed, 2),
|
||||
"response_snippet": raw[:120],
|
||||
}
|
||||
)
|
||||
except Exception as exc:
|
||||
elapsed = time.time() - start
|
||||
results.append(
|
||||
{
|
||||
"prompt_id": i,
|
||||
"valid_json": False,
|
||||
"has_expected_keys": False,
|
||||
"elapsed_s": round(elapsed, 2),
|
||||
"error": str(exc),
|
||||
}
|
||||
)
|
||||
total_time += elapsed
|
||||
|
||||
valid_count = sum(1 for r in results if r["valid_json"])
|
||||
compliance_rate = valid_count / len(TOOL_PROMPTS)
|
||||
|
||||
return {
|
||||
"benchmark": "tool_calling",
|
||||
"model": model,
|
||||
"total_prompts": len(TOOL_PROMPTS),
|
||||
"valid_json_count": valid_count,
|
||||
"compliance_rate": round(compliance_rate, 3),
|
||||
"passed": compliance_rate >= 0.90,
|
||||
"total_time_s": round(total_time, 2),
|
||||
"results": results,
|
||||
}
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
model = sys.argv[1] if len(sys.argv) > 1 else "hermes3:8b"
|
||||
print(f"Running tool-calling benchmark against {model}...")
|
||||
result = run_benchmark(model)
|
||||
print(json.dumps(result, indent=2))
|
||||
sys.exit(0 if result["passed"] else 1)

scripts/benchmarks/02_code_generation.py (new file, 120 lines)
@@ -0,0 +1,120 @@
|
||||
#!/usr/bin/env python3
|
||||
"""Benchmark 2: Code Generation Correctness
|
||||
|
||||
Ask model to generate a fibonacci function, execute it, verify fib(10) = 55.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import json
|
||||
import re
|
||||
import subprocess
|
||||
import sys
|
||||
import tempfile
|
||||
import time
|
||||
from pathlib import Path
|
||||
|
||||
import requests
|
||||
|
||||
OLLAMA_URL = "http://localhost:11434"
|
||||
|
||||
CODEGEN_PROMPT = """\
|
||||
Write a Python function called `fibonacci(n)` that returns the nth Fibonacci number \
|
||||
(0-indexed, so fibonacci(0)=0, fibonacci(1)=1, fibonacci(10)=55).
|
||||
|
||||
Return ONLY the raw Python code — no markdown fences, no explanation, no extra text.
|
||||
The function must be named exactly `fibonacci`.
|
||||
"""
|
||||
|
||||
|
||||
def extract_python(text: str) -> str:
|
||||
"""Extract Python code from a response."""
|
||||
text = text.strip()
|
||||
|
||||
# Remove markdown fences
|
||||
fence_match = re.search(r"```(?:python)?\s*(.*?)```", text, re.DOTALL)
|
||||
if fence_match:
|
||||
return fence_match.group(1).strip()
|
||||
|
||||
# Return as-is if it looks like code
|
||||
if "def " in text:
|
||||
return text
|
||||
|
||||
return text
|
||||
|
||||
|
||||
def run_prompt(model: str, prompt: str) -> str:
|
||||
payload = {
|
||||
"model": model,
|
||||
"prompt": prompt,
|
||||
"stream": False,
|
||||
"options": {"temperature": 0.1, "num_predict": 512},
|
||||
}
|
||||
resp = requests.post(f"{OLLAMA_URL}/api/generate", json=payload, timeout=120)
|
||||
resp.raise_for_status()
|
||||
return resp.json()["response"]
|
||||
|
||||
|
||||
def execute_fibonacci(code: str) -> tuple[bool, str]:
|
||||
"""Execute the generated fibonacci code and check fib(10) == 55."""
|
||||
test_code = code + "\n\nresult = fibonacci(10)\nprint(result)\n"
|
||||
|
||||
with tempfile.NamedTemporaryFile(mode="w", suffix=".py", delete=False) as f:
|
||||
f.write(test_code)
|
||||
tmpfile = f.name
|
||||
|
||||
try:
|
||||
proc = subprocess.run(
|
||||
[sys.executable, tmpfile],
|
||||
capture_output=True,
|
||||
text=True,
|
||||
timeout=10,
|
||||
)
|
||||
output = proc.stdout.strip()
|
||||
if proc.returncode != 0:
|
||||
return False, f"Runtime error: {proc.stderr.strip()[:200]}"
|
||||
if output == "55":
|
||||
return True, "fibonacci(10) = 55 ✓"
|
||||
return False, f"Expected 55, got: {output!r}"
|
||||
except subprocess.TimeoutExpired:
|
||||
return False, "Execution timed out"
|
||||
except Exception as exc:
|
||||
return False, f"Execution error: {exc}"
|
||||
finally:
|
||||
Path(tmpfile).unlink(missing_ok=True)
|
||||
|
||||
|
||||
def run_benchmark(model: str) -> dict:
|
||||
"""Run code generation benchmark for a single model."""
|
||||
start = time.time()
|
||||
try:
|
||||
raw = run_prompt(model, CODEGEN_PROMPT)
|
||||
code = extract_python(raw)
|
||||
correct, detail = execute_fibonacci(code)
|
||||
except Exception as exc:
|
||||
elapsed = time.time() - start
|
||||
return {
|
||||
"benchmark": "code_generation",
|
||||
"model": model,
|
||||
"passed": False,
|
||||
"error": str(exc),
|
||||
"elapsed_s": round(elapsed, 2),
|
||||
}
|
||||
|
||||
elapsed = time.time() - start
|
||||
return {
|
||||
"benchmark": "code_generation",
|
||||
"model": model,
|
||||
"passed": correct,
|
||||
"detail": detail,
|
||||
"code_snippet": code[:300],
|
||||
"elapsed_s": round(elapsed, 2),
|
||||
}
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
model = sys.argv[1] if len(sys.argv) > 1 else "hermes3:8b"
|
||||
print(f"Running code-generation benchmark against {model}...")
|
||||
result = run_benchmark(model)
|
||||
print(json.dumps(result, indent=2))
|
||||
sys.exit(0 if result["passed"] else 1)

scripts/benchmarks/03_shell_commands.py (new file, 155 lines)
@@ -0,0 +1,155 @@
|
||||
#!/usr/bin/env python3
|
||||
"""Benchmark 3: Shell Command Generation (No Refusal)
|
||||
|
||||
Verify the model generates shell commands without safety refusals.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import json
|
||||
import re
|
||||
import sys
|
||||
import time
|
||||
|
||||
import requests
|
||||
|
||||
OLLAMA_URL = "http://localhost:11434"
|
||||
|
||||
SHELL_PROMPTS = [
|
||||
{
|
||||
"prompt": "Write a bash one-liner to find all .log files older than 7 days and delete them.",
|
||||
"description": "find and delete old logs",
|
||||
},
|
||||
{
|
||||
"prompt": "Give me a bash command to recursively change permissions of all .sh files to executable.",
|
||||
"description": "chmod recursive",
|
||||
},
|
||||
{
|
||||
"prompt": "Write a shell command to kill all processes matching the name 'python3'.",
|
||||
"description": "kill processes by name",
|
||||
},
|
||||
{
|
||||
"prompt": "Give me a bash script snippet to monitor disk usage and send an alert if usage exceeds 90%.",
|
||||
"description": "disk usage alert",
|
||||
},
|
||||
{
|
||||
"prompt": "Write a bash command to create a compressed tar archive of the /var/log directory.",
|
||||
"description": "tar compress logs",
|
||||
},
|
||||
]
|
||||
|
||||
# Refusal patterns — presence of these indicates the model refused
|
||||
REFUSAL_PATTERNS = [
|
||||
r"\bi (cannot|can't|won't|refuse|am not able|am unable)\b",
|
||||
r"\bI'm not able to\b",
|
||||
r"\bI don't (generate|write|provide|help with) (commands|scripts|code) that\b",
|
||||
r"\bthis (could be|is) (dangerous|harmful|malicious)\b",
|
||||
r"\bI (must|need to) (caution|warn)\b.*instead",
|
||||
]
|
||||
|
||||
# Positive shell command indicators
|
||||
COMMAND_PATTERNS = [
|
||||
r"\bfind\s+[/\w]",
|
||||
r"\bchmod\b",
|
||||
r"\bkill\b",
|
||||
r"\btar\b",
|
||||
r"\bdf\s+-",
|
||||
r"\bdu\s+-",
|
||||
r"\$\(",
|
||||
r"\bawk\b",
|
||||
r"\bsed\b",
|
||||
r"\bgrep\b",
|
||||
r"\bxargs\b",
|
||||
r"```(?:bash|sh|shell)",
|
||||
r"\bif\s+\[",
|
||||
]
|
||||
|
||||
|
||||
def is_refusal(text: str) -> bool:
|
||||
text_lower = text.lower()
|
||||
for pattern in REFUSAL_PATTERNS:
|
||||
if re.search(pattern, text_lower, re.IGNORECASE):
|
||||
return True
|
||||
return False
|
||||
|
||||
|
||||
def has_shell_command(text: str) -> bool:
|
||||
for pattern in COMMAND_PATTERNS:
|
||||
if re.search(pattern, text):
|
||||
return True
|
||||
return False
|
||||
|
||||
|
||||
def run_prompt(model: str, prompt: str) -> str:
|
||||
payload = {
|
||||
"model": model,
|
||||
"prompt": prompt,
|
||||
"stream": False,
|
||||
"options": {"temperature": 0.1, "num_predict": 512},
|
||||
}
|
||||
resp = requests.post(f"{OLLAMA_URL}/api/generate", json=payload, timeout=120)
|
||||
resp.raise_for_status()
|
||||
return resp.json()["response"]
|
||||
|
||||
|
||||
def run_benchmark(model: str) -> dict:
|
||||
"""Run shell command generation benchmark for a single model."""
|
||||
results = []
|
||||
total_time = 0.0
|
||||
|
||||
for i, case in enumerate(SHELL_PROMPTS, 1):
|
||||
start = time.time()
|
||||
try:
|
||||
raw = run_prompt(model, case["prompt"])
|
||||
elapsed = time.time() - start
|
||||
refused = is_refusal(raw)
|
||||
has_cmd = has_shell_command(raw)
|
||||
results.append(
|
||||
{
|
||||
"prompt_id": i,
|
||||
"description": case["description"],
|
||||
"refused": refused,
|
||||
"has_shell_command": has_cmd,
|
||||
"passed": not refused and has_cmd,
|
||||
"elapsed_s": round(elapsed, 2),
|
||||
"response_snippet": raw[:120],
|
||||
}
|
||||
)
|
||||
except Exception as exc:
|
||||
elapsed = time.time() - start
|
||||
results.append(
|
||||
{
|
||||
"prompt_id": i,
|
||||
"description": case["description"],
|
||||
"refused": False,
|
||||
"has_shell_command": False,
|
||||
"passed": False,
|
||||
"elapsed_s": round(elapsed, 2),
|
||||
"error": str(exc),
|
||||
}
|
||||
)
|
||||
total_time += elapsed
|
||||
|
||||
refused_count = sum(1 for r in results if r["refused"])
|
||||
passed_count = sum(1 for r in results if r["passed"])
|
||||
pass_rate = passed_count / len(SHELL_PROMPTS)
|
||||
|
||||
return {
|
||||
"benchmark": "shell_commands",
|
||||
"model": model,
|
||||
"total_prompts": len(SHELL_PROMPTS),
|
||||
"passed_count": passed_count,
|
||||
"refused_count": refused_count,
|
||||
"pass_rate": round(pass_rate, 3),
|
||||
"passed": refused_count == 0 and passed_count == len(SHELL_PROMPTS),
|
||||
"total_time_s": round(total_time, 2),
|
||||
"results": results,
|
||||
}
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
model = sys.argv[1] if len(sys.argv) > 1 else "hermes3:8b"
|
||||
print(f"Running shell-command benchmark against {model}...")
|
||||
result = run_benchmark(model)
|
||||
print(json.dumps(result, indent=2))
|
||||
sys.exit(0 if result["passed"] else 1)

scripts/benchmarks/04_multi_turn_coherence.py (new file, 154 lines)
@@ -0,0 +1,154 @@
|
||||
#!/usr/bin/env python3
|
||||
"""Benchmark 4: Multi-Turn Agent Loop Coherence
|
||||
|
||||
Simulate a 5-turn observe/reason/act cycle and measure structured coherence.
|
||||
Each turn must return valid JSON with required fields.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import json
|
||||
import re
|
||||
import sys
|
||||
import time
|
||||
|
||||
import requests
|
||||
|
||||
OLLAMA_URL = "http://localhost:11434"
|
||||
|
||||
SYSTEM_PROMPT = """\
|
||||
You are an autonomous AI agent. For each message, you MUST respond with valid JSON containing:
|
||||
{
|
||||
"observation": "<what you observe about the current situation>",
|
||||
"reasoning": "<your analysis and plan>",
|
||||
"action": "<the specific action you will take>",
|
||||
"confidence": <0.0-1.0>
|
||||
}
|
||||
Respond ONLY with the JSON object. No other text.
|
||||
"""
|
||||
|
||||
TURNS = [
|
||||
"You are monitoring a web server. CPU usage just spiked to 95%. What do you observe, reason, and do?",
|
||||
"Following your previous action, you found 3 runaway Python processes consuming 30% CPU each. Continue.",
|
||||
"You killed the top 2 processes. CPU is now at 45%. A new alert: disk I/O is at 98%. Continue.",
|
||||
"You traced the disk I/O to a log rotation script that's stuck. You terminated it. Disk I/O dropped to 20%. Final status check: all metrics are now nominal. Continue.",
|
||||
"The incident is resolved. Write a brief post-mortem summary as your final action.",
|
||||
]
|
||||
|
||||
REQUIRED_KEYS = {"observation", "reasoning", "action", "confidence"}
|
||||
|
||||
|
||||
def extract_json(text: str) -> dict | None:
|
||||
text = text.strip()
|
||||
try:
|
||||
return json.loads(text)
|
||||
except json.JSONDecodeError:
|
||||
pass
|
||||
|
||||
fence_match = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
|
||||
if fence_match:
|
||||
try:
|
||||
return json.loads(fence_match.group(1))
|
||||
except json.JSONDecodeError:
|
||||
pass
|
||||
|
||||
# Try to find { ... } block
|
||||
brace_match = re.search(r"\{[^{}]*(?:\{[^{}]*\}[^{}]*)?\}", text, re.DOTALL)
|
||||
if brace_match:
|
||||
try:
|
||||
return json.loads(brace_match.group(0))
|
||||
except json.JSONDecodeError:
|
||||
pass
|
||||
|
||||
return None
|
||||
|
||||
|
||||
def run_multi_turn(model: str) -> dict:
|
||||
"""Run the multi-turn coherence benchmark."""
|
||||
conversation = []
|
||||
turn_results = []
|
||||
total_time = 0.0
|
||||
|
||||
# Build system + turn messages using chat endpoint
|
||||
messages = [{"role": "system", "content": SYSTEM_PROMPT}]
|
||||
|
||||
for i, turn_prompt in enumerate(TURNS, 1):
|
||||
messages.append({"role": "user", "content": turn_prompt})
|
||||
start = time.time()
|
||||
|
||||
try:
|
||||
payload = {
|
||||
"model": model,
|
||||
"messages": messages,
|
||||
"stream": False,
|
||||
"options": {"temperature": 0.1, "num_predict": 512},
|
||||
}
|
||||
resp = requests.post(f"{OLLAMA_URL}/api/chat", json=payload, timeout=120)
|
||||
resp.raise_for_status()
|
||||
raw = resp.json()["message"]["content"]
|
||||
except Exception as exc:
|
||||
elapsed = time.time() - start
|
||||
turn_results.append(
|
||||
{
|
||||
"turn": i,
|
||||
"valid_json": False,
|
||||
"has_required_keys": False,
|
||||
"coherent": False,
|
||||
"elapsed_s": round(elapsed, 2),
|
||||
"error": str(exc),
|
||||
}
|
||||
)
|
||||
total_time += elapsed
|
||||
# Add placeholder assistant message to keep conversation going
|
||||
messages.append({"role": "assistant", "content": "{}"})
|
||||
continue
|
||||
|
||||
elapsed = time.time() - start
|
||||
total_time += elapsed
|
||||
|
||||
parsed = extract_json(raw)
|
||||
valid = parsed is not None
|
||||
has_keys = valid and isinstance(parsed, dict) and REQUIRED_KEYS.issubset(parsed.keys())
|
||||
confidence_valid = (
|
||||
has_keys
|
||||
and isinstance(parsed.get("confidence"), (int, float))
|
||||
and 0.0 <= parsed["confidence"] <= 1.0
|
||||
)
|
||||
coherent = has_keys and confidence_valid
|
||||
|
||||
turn_results.append(
|
||||
{
|
||||
"turn": i,
|
||||
"valid_json": valid,
|
||||
"has_required_keys": has_keys,
|
||||
"coherent": coherent,
|
||||
"confidence": parsed.get("confidence") if has_keys else None,
|
||||
"elapsed_s": round(elapsed, 2),
|
||||
"response_snippet": raw[:200],
|
||||
}
|
||||
)
|
||||
|
||||
# Add assistant response to conversation history
|
||||
messages.append({"role": "assistant", "content": raw})
|
||||
|
||||
coherent_count = sum(1 for r in turn_results if r["coherent"])
|
||||
coherence_rate = coherent_count / len(TURNS)
|
||||
|
||||
return {
|
||||
"benchmark": "multi_turn_coherence",
|
||||
"model": model,
|
||||
"total_turns": len(TURNS),
|
||||
"coherent_turns": coherent_count,
|
||||
"coherence_rate": round(coherence_rate, 3),
|
||||
"passed": coherence_rate >= 0.80,
|
||||
"total_time_s": round(total_time, 2),
|
||||
"turns": turn_results,
|
||||
}
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
model = sys.argv[1] if len(sys.argv) > 1 else "hermes3:8b"
|
||||
print(f"Running multi-turn coherence benchmark against {model}...")
|
||||
result = run_multi_turn(model)
|
||||
print(json.dumps(result, indent=2))
|
||||
sys.exit(0 if result["passed"] else 1)

scripts/benchmarks/05_issue_triage.py (new file, 197 lines)
@@ -0,0 +1,197 @@
|
||||
#!/usr/bin/env python3
|
||||
"""Benchmark 5: Issue Triage Quality
|
||||
|
||||
Present 5 issues with known correct priorities and measure accuracy.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import json
|
||||
import re
|
||||
import sys
|
||||
import time
|
||||
|
||||
import requests
|
||||
|
||||
OLLAMA_URL = "http://localhost:11434"
|
||||
|
||||
TRIAGE_PROMPT_TEMPLATE = """\
|
||||
You are a software project triage agent. Assign a priority to the following issue.
|
||||
|
||||
Issue: {title}
|
||||
Description: {description}
|
||||
|
||||
Respond ONLY with valid JSON:
|
||||
{{"priority": "<p0-critical|p1-high|p2-medium|p3-low>", "reason": "<one sentence>"}}
|
||||
"""
|
||||
|
||||
ISSUES = [
|
||||
{
|
||||
"title": "Production database is returning 500 errors on all queries",
|
||||
"description": "All users are affected, no transactions are completing, revenue is being lost.",
|
||||
"expected_priority": "p0-critical",
|
||||
},
|
||||
{
|
||||
"title": "Login page takes 8 seconds to load",
|
||||
"description": "Performance regression noticed after last deployment. Users are complaining but can still log in.",
|
||||
"expected_priority": "p1-high",
|
||||
},
|
||||
{
|
||||
"title": "Add dark mode support to settings page",
|
||||
"description": "Several users have requested a dark mode toggle in the account settings.",
|
||||
"expected_priority": "p3-low",
|
||||
},
|
||||
{
|
||||
"title": "Email notifications sometimes arrive 10 minutes late",
|
||||
"description": "Intermittent delay in notification delivery, happens roughly 5% of the time.",
|
||||
"expected_priority": "p2-medium",
|
||||
},
|
||||
{
|
||||
"title": "Security vulnerability: SQL injection possible in search endpoint",
|
||||
"description": "Penetration test found unescaped user input being passed directly to database query.",
|
||||
"expected_priority": "p0-critical",
|
||||
},
|
||||
]
|
||||
|
||||
VALID_PRIORITIES = {"p0-critical", "p1-high", "p2-medium", "p3-low"}
|
||||
|
||||
# Map p0 -> 0, p1 -> 1, etc. for fuzzy scoring (±1 level = partial credit)
|
||||
PRIORITY_LEVELS = {"p0-critical": 0, "p1-high": 1, "p2-medium": 2, "p3-low": 3}
|
||||
|
||||
|
||||
def extract_json(text: str) -> dict | None:
|
||||
text = text.strip()
|
||||
try:
|
||||
return json.loads(text)
|
||||
except json.JSONDecodeError:
|
||||
pass
|
||||
|
||||
fence_match = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
|
||||
if fence_match:
|
||||
try:
|
||||
return json.loads(fence_match.group(1))
|
||||
except json.JSONDecodeError:
|
||||
pass
|
||||
|
||||
brace_match = re.search(r"\{[^{}]*\}", text, re.DOTALL)
|
||||
if brace_match:
|
||||
try:
|
||||
return json.loads(brace_match.group(0))
|
||||
except json.JSONDecodeError:
|
||||
pass
|
||||
|
||||
return None
|
||||
|
||||
|
||||
def normalize_priority(raw: str) -> str | None:
|
||||
"""Normalize various priority formats to canonical form."""
|
||||
raw = raw.lower().strip()
|
||||
if raw in VALID_PRIORITIES:
|
||||
return raw
|
||||
# Handle "critical", "p0", "high", "p1", etc.
|
||||
mapping = {
|
||||
"critical": "p0-critical",
|
||||
"p0": "p0-critical",
|
||||
"0": "p0-critical",
|
||||
"high": "p1-high",
|
||||
"p1": "p1-high",
|
||||
"1": "p1-high",
|
||||
"medium": "p2-medium",
|
||||
"p2": "p2-medium",
|
||||
"2": "p2-medium",
|
||||
"low": "p3-low",
|
||||
"p3": "p3-low",
|
||||
"3": "p3-low",
|
||||
}
|
||||
return mapping.get(raw)
|
||||
|
||||
|
||||
def run_prompt(model: str, prompt: str) -> str:
|
||||
payload = {
|
||||
"model": model,
|
||||
"prompt": prompt,
|
||||
"stream": False,
|
||||
"options": {"temperature": 0.1, "num_predict": 256},
|
||||
}
|
||||
resp = requests.post(f"{OLLAMA_URL}/api/generate", json=payload, timeout=120)
|
||||
resp.raise_for_status()
|
||||
return resp.json()["response"]
|
||||
|
||||
|
||||
def run_benchmark(model: str) -> dict:
|
||||
"""Run issue triage benchmark for a single model."""
|
||||
results = []
|
||||
total_time = 0.0
|
||||
|
||||
for i, issue in enumerate(ISSUES, 1):
|
||||
prompt = TRIAGE_PROMPT_TEMPLATE.format(
|
||||
title=issue["title"], description=issue["description"]
|
||||
)
|
||||
start = time.time()
|
||||
try:
|
||||
raw = run_prompt(model, prompt)
|
||||
elapsed = time.time() - start
|
||||
parsed = extract_json(raw)
|
||||
valid_json = parsed is not None
|
||||
assigned = None
|
||||
if valid_json and isinstance(parsed, dict):
|
||||
raw_priority = parsed.get("priority", "")
|
||||
assigned = normalize_priority(str(raw_priority))
|
||||
|
||||
exact_match = assigned == issue["expected_priority"]
|
||||
off_by_one = (
|
||||
assigned is not None
|
||||
and not exact_match
|
||||
and abs(PRIORITY_LEVELS.get(assigned, -1) - PRIORITY_LEVELS[issue["expected_priority"]]) == 1
|
||||
)
|
||||
|
||||
results.append(
|
||||
{
|
||||
"issue_id": i,
|
||||
"title": issue["title"][:60],
|
||||
"expected": issue["expected_priority"],
|
||||
"assigned": assigned,
|
||||
"exact_match": exact_match,
|
||||
"off_by_one": off_by_one,
|
||||
"valid_json": valid_json,
|
||||
"elapsed_s": round(elapsed, 2),
|
||||
}
|
||||
)
|
||||
except Exception as exc:
|
||||
elapsed = time.time() - start
|
||||
results.append(
|
||||
{
|
||||
"issue_id": i,
|
||||
"title": issue["title"][:60],
|
||||
"expected": issue["expected_priority"],
|
||||
"assigned": None,
|
||||
"exact_match": False,
|
||||
"off_by_one": False,
|
||||
"valid_json": False,
|
||||
"elapsed_s": round(elapsed, 2),
|
||||
"error": str(exc),
|
||||
}
|
||||
)
|
||||
total_time += elapsed
|
||||
|
||||
exact_count = sum(1 for r in results if r["exact_match"])
|
||||
accuracy = exact_count / len(ISSUES)
|
||||
|
||||
return {
|
||||
"benchmark": "issue_triage",
|
||||
"model": model,
|
||||
"total_issues": len(ISSUES),
|
||||
"exact_matches": exact_count,
|
||||
"accuracy": round(accuracy, 3),
|
||||
"passed": accuracy >= 0.80,
|
||||
"total_time_s": round(total_time, 2),
|
||||
"results": results,
|
||||
}
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
model = sys.argv[1] if len(sys.argv) > 1 else "hermes3:8b"
|
||||
print(f"Running issue-triage benchmark against {model}...")
|
||||
result = run_benchmark(model)
|
||||
print(json.dumps(result, indent=2))
|
||||
sys.exit(0 if result["passed"] else 1)
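The extraction helpers above are deliberately tolerant of how a model wraps its JSON. A minimal sketch of how they behave on a chatty, unfenced reply (illustrative only — it assumes the functions above are importable or pasted into the same session; the reply text is invented):

```python
# Hedged example: exercise extract_json() and normalize_priority() on a made-up reply.
# Not part of the benchmark run — just shows the brace-scan fallback and label mapping.
reply = 'Sure — {"priority": "P1", "reason": "Severe regression, but logins still work."} Hope that helps!'

parsed = extract_json(reply)              # direct parse fails, fence parse fails, brace scan succeeds
assert parsed == {"priority": "P1", "reason": "Severe regression, but logins still work."}

assigned = normalize_priority(str(parsed["priority"]))   # "P1" -> "p1-high"
assert assigned == "p1-high"
```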
334 scripts/benchmarks/run_suite.py Normal file
@@ -0,0 +1,334 @@
|
||||
#!/usr/bin/env python3
|
||||
"""Model Benchmark Suite Runner
|
||||
|
||||
Runs all 5 benchmarks against each candidate model and generates
|
||||
a comparison report at docs/model-benchmarks.md.
|
||||
|
||||
Usage:
|
||||
python scripts/benchmarks/run_suite.py
|
||||
python scripts/benchmarks/run_suite.py --models hermes3:8b qwen3.5:latest
|
||||
python scripts/benchmarks/run_suite.py --output docs/model-benchmarks.md
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import argparse
|
||||
import importlib.util
|
||||
import json
|
||||
import sys
|
||||
import time
|
||||
from datetime import datetime, timezone
|
||||
from pathlib import Path
|
||||
|
||||
import requests
|
||||
|
||||
OLLAMA_URL = "http://localhost:11434"
|
||||
|
||||
# Models to test — Ollama model tags to benchmark.
|
||||
# Original spec requested: qwen3:14b, qwen3:8b, hermes3:8b, dolphin3
|
||||
# Availability-adjusted substitutions noted in report.
|
||||
DEFAULT_MODELS = [
|
||||
"hermes3:8b",
|
||||
"qwen3.5:latest",
|
||||
"qwen2.5:14b",
|
||||
"llama3.2:latest",
|
||||
]
|
||||
|
||||
BENCHMARKS_DIR = Path(__file__).parent
|
||||
DOCS_DIR = Path(__file__).resolve().parent.parent.parent / "docs"
|
||||
|
||||
|
||||
def load_benchmark(name: str):
|
||||
"""Dynamically import a benchmark module."""
|
||||
path = BENCHMARKS_DIR / name
|
||||
module_name = Path(name).stem
|
||||
spec = importlib.util.spec_from_file_location(module_name, path)
|
||||
mod = importlib.util.module_from_spec(spec)
|
||||
spec.loader.exec_module(mod)
|
||||
return mod
|
||||
|
||||
|
||||
def model_available(model: str) -> bool:
|
||||
"""Check if a model is available via Ollama."""
|
||||
try:
|
||||
resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=10)
|
||||
if resp.status_code != 200:
|
||||
return False
|
||||
models = {m["name"] for m in resp.json().get("models", [])}
|
||||
return model in models
|
||||
except Exception:
|
||||
return False
|
||||
|
||||
|
||||
def run_all_benchmarks(model: str) -> dict:
|
||||
"""Run all 5 benchmarks for a given model."""
|
||||
benchmark_files = [
|
||||
"01_tool_calling.py",
|
||||
"02_code_generation.py",
|
||||
"03_shell_commands.py",
|
||||
"04_multi_turn_coherence.py",
|
||||
"05_issue_triage.py",
|
||||
]
|
||||
|
||||
results = {}
|
||||
for fname in benchmark_files:
|
||||
key = fname.replace(".py", "")
|
||||
print(f" [{model}] Running {key}...", flush=True)
|
||||
try:
|
||||
mod = load_benchmark(fname)
|
||||
start = time.time()
|
||||
if key == "01_tool_calling":
|
||||
result = mod.run_benchmark(model)
|
||||
elif key == "02_code_generation":
|
||||
result = mod.run_benchmark(model)
|
||||
elif key == "03_shell_commands":
|
||||
result = mod.run_benchmark(model)
|
||||
elif key == "04_multi_turn_coherence":
|
||||
result = mod.run_multi_turn(model)
|
||||
elif key == "05_issue_triage":
|
||||
result = mod.run_benchmark(model)
|
||||
else:
|
||||
result = {"passed": False, "error": "Unknown benchmark"}
|
||||
elapsed = time.time() - start
|
||||
print(
|
||||
f" -> {'PASS' if result.get('passed') else 'FAIL'} ({elapsed:.1f}s)",
|
||||
flush=True,
|
||||
)
|
||||
results[key] = result
|
||||
except Exception as exc:
|
||||
print(f" -> ERROR: {exc}", flush=True)
|
||||
results[key] = {"benchmark": key, "model": model, "passed": False, "error": str(exc)}
|
||||
|
||||
return results
|
||||
|
||||
|
||||
def score_model(results: dict) -> dict:
|
||||
"""Compute summary scores for a model."""
|
||||
benchmarks = list(results.values())
|
||||
passed = sum(1 for b in benchmarks if b.get("passed", False))
|
||||
total = len(benchmarks)
|
||||
|
||||
# Specific metrics
|
||||
tool_rate = results.get("01_tool_calling", {}).get("compliance_rate", 0.0)
|
||||
code_pass = results.get("02_code_generation", {}).get("passed", False)
|
||||
shell_pass = results.get("03_shell_commands", {}).get("passed", False)
|
||||
coherence = results.get("04_multi_turn_coherence", {}).get("coherence_rate", 0.0)
|
||||
triage_acc = results.get("05_issue_triage", {}).get("accuracy", 0.0)
|
||||
|
||||
total_time = sum(
|
||||
r.get("total_time_s", r.get("elapsed_s", 0.0)) for r in benchmarks
|
||||
)
|
||||
|
||||
return {
|
||||
"passed": passed,
|
||||
"total": total,
|
||||
"pass_rate": f"{passed}/{total}",
|
||||
"tool_compliance": f"{tool_rate:.0%}",
|
||||
"code_gen": "PASS" if code_pass else "FAIL",
|
||||
"shell_gen": "PASS" if shell_pass else "FAIL",
|
||||
"coherence": f"{coherence:.0%}",
|
||||
"triage_accuracy": f"{triage_acc:.0%}",
|
||||
"total_time_s": round(total_time, 1),
|
||||
}
|
||||
|
||||
|
||||
def generate_markdown(all_results: dict, run_date: str) -> str:
|
||||
"""Generate markdown comparison report."""
|
||||
lines = []
|
||||
lines.append("# Model Benchmark Results")
|
||||
lines.append("")
|
||||
lines.append(f"> Generated: {run_date} ")
|
||||
lines.append(f"> Ollama URL: `{OLLAMA_URL}` ")
|
||||
lines.append("> Issue: [#1066](http://143.198.27.163:3000/rockachopa/Timmy-time-dashboard/issues/1066)")
|
||||
lines.append("")
|
||||
lines.append("## Overview")
|
||||
lines.append("")
|
||||
lines.append(
|
||||
"This report documents the 5-test benchmark suite results for local model candidates."
|
||||
)
|
||||
lines.append("")
|
||||
lines.append("### Model Availability vs. Spec")
|
||||
lines.append("")
|
||||
lines.append("| Requested | Tested Substitute | Reason |")
|
||||
lines.append("|-----------|-------------------|--------|")
|
||||
lines.append("| `qwen3:14b` | `qwen2.5:14b` | `qwen3:14b` not pulled locally |")
|
||||
lines.append("| `qwen3:8b` | `qwen3.5:latest` | `qwen3:8b` not pulled locally |")
|
||||
lines.append("| `hermes3:8b` | `hermes3:8b` | Exact match |")
|
||||
lines.append("| `dolphin3` | `llama3.2:latest` | `dolphin3` not pulled locally |")
|
||||
lines.append("")
|
||||
|
||||
# Summary table
|
||||
lines.append("## Summary Comparison Table")
|
||||
lines.append("")
|
||||
lines.append(
|
||||
"| Model | Passed | Tool Calling | Code Gen | Shell Gen | Coherence | Triage Acc | Time (s) |"
|
||||
)
|
||||
lines.append(
|
||||
"|-------|--------|-------------|----------|-----------|-----------|------------|----------|"
|
||||
)
|
||||
|
||||
for model, results in all_results.items():
|
||||
if "error" in results and "01_tool_calling" not in results:
|
||||
lines.append(f"| `{model}` | — | — | — | — | — | — | — |")
|
||||
continue
|
||||
s = score_model(results)
|
||||
lines.append(
|
||||
f"| `{model}` | {s['pass_rate']} | {s['tool_compliance']} | {s['code_gen']} | "
|
||||
f"{s['shell_gen']} | {s['coherence']} | {s['triage_accuracy']} | {s['total_time_s']} |"
|
||||
)
|
||||
|
||||
lines.append("")
|
||||
|
||||
# Per-model detail sections
|
||||
lines.append("## Per-Model Detail")
|
||||
lines.append("")
|
||||
|
||||
for model, results in all_results.items():
|
||||
lines.append(f"### `{model}`")
|
||||
lines.append("")
|
||||
|
||||
if "error" in results and not isinstance(results.get("error"), str):
|
||||
lines.append(f"> **Error:** {results.get('error')}")
|
||||
lines.append("")
|
||||
continue
|
||||
|
||||
for bkey, bres in results.items():
|
||||
bname = {
|
||||
"01_tool_calling": "Benchmark 1: Tool Calling Compliance",
|
||||
"02_code_generation": "Benchmark 2: Code Generation Correctness",
|
||||
"03_shell_commands": "Benchmark 3: Shell Command Generation",
|
||||
"04_multi_turn_coherence": "Benchmark 4: Multi-Turn Coherence",
|
||||
"05_issue_triage": "Benchmark 5: Issue Triage Quality",
|
||||
}.get(bkey, bkey)
|
||||
|
||||
status = "✅ PASS" if bres.get("passed") else "❌ FAIL"
|
||||
lines.append(f"#### {bname} — {status}")
|
||||
lines.append("")
|
||||
|
||||
if bkey == "01_tool_calling":
|
||||
rate = bres.get("compliance_rate", 0)
|
||||
count = bres.get("valid_json_count", 0)
|
||||
total = bres.get("total_prompts", 0)
|
||||
lines.append(
|
||||
f"- **JSON Compliance:** {count}/{total} ({rate:.0%}) — target ≥90%"
|
||||
)
|
||||
elif bkey == "02_code_generation":
|
||||
lines.append(f"- **Result:** {bres.get('detail', bres.get('error', 'n/a'))}")
|
||||
snippet = bres.get("code_snippet", "")
|
||||
if snippet:
|
||||
lines.append(f"- **Generated code snippet:**")
|
||||
lines.append(" ```python")
|
||||
for ln in snippet.splitlines()[:8]:
|
||||
lines.append(f" {ln}")
|
||||
lines.append(" ```")
|
||||
elif bkey == "03_shell_commands":
|
||||
passed = bres.get("passed_count", 0)
|
||||
refused = bres.get("refused_count", 0)
|
||||
total = bres.get("total_prompts", 0)
|
||||
lines.append(
|
||||
f"- **Passed:** {passed}/{total} — **Refusals:** {refused}"
|
||||
)
|
||||
elif bkey == "04_multi_turn_coherence":
|
||||
coherent = bres.get("coherent_turns", 0)
|
||||
total = bres.get("total_turns", 0)
|
||||
rate = bres.get("coherence_rate", 0)
|
||||
lines.append(
|
||||
f"- **Coherent turns:** {coherent}/{total} ({rate:.0%}) — target ≥80%"
|
||||
)
|
||||
elif bkey == "05_issue_triage":
|
||||
exact = bres.get("exact_matches", 0)
|
||||
total = bres.get("total_issues", 0)
|
||||
acc = bres.get("accuracy", 0)
|
||||
lines.append(
|
||||
f"- **Accuracy:** {exact}/{total} ({acc:.0%}) — target ≥80%"
|
||||
)
|
||||
|
||||
elapsed = bres.get("total_time_s", bres.get("elapsed_s", 0))
|
||||
lines.append(f"- **Time:** {elapsed}s")
|
||||
lines.append("")
|
||||
|
||||
lines.append("## Raw JSON Data")
|
||||
lines.append("")
|
||||
lines.append("<details>")
|
||||
lines.append("<summary>Click to expand full JSON results</summary>")
|
||||
lines.append("")
|
||||
lines.append("```json")
|
||||
lines.append(json.dumps(all_results, indent=2))
|
||||
lines.append("```")
|
||||
lines.append("")
|
||||
lines.append("</details>")
|
||||
lines.append("")
|
||||
|
||||
return "\n".join(lines)
|
||||
|
||||
|
||||
def parse_args() -> argparse.Namespace:
|
||||
parser = argparse.ArgumentParser(description="Run model benchmark suite")
|
||||
parser.add_argument(
|
||||
"--models",
|
||||
nargs="+",
|
||||
default=DEFAULT_MODELS,
|
||||
help="Models to test",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--output",
|
||||
type=Path,
|
||||
default=DOCS_DIR / "model-benchmarks.md",
|
||||
help="Output markdown file",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--json-output",
|
||||
type=Path,
|
||||
default=None,
|
||||
help="Optional JSON output file",
|
||||
)
|
||||
return parser.parse_args()
|
||||
|
||||
|
||||
def main() -> int:
|
||||
args = parse_args()
|
||||
run_date = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
|
||||
|
||||
print(f"Model Benchmark Suite — {run_date}")
|
||||
print(f"Testing {len(args.models)} model(s): {', '.join(args.models)}")
|
||||
print()
|
||||
|
||||
all_results: dict[str, dict] = {}
|
||||
|
||||
for model in args.models:
|
||||
print(f"=== Testing model: {model} ===")
|
||||
if not model_available(model):
|
||||
print(f" WARNING: {model} not available in Ollama — skipping")
|
||||
all_results[model] = {"error": f"Model {model} not available", "skipped": True}
|
||||
print()
|
||||
continue
|
||||
|
||||
model_results = run_all_benchmarks(model)
|
||||
all_results[model] = model_results
|
||||
|
||||
s = score_model(model_results)
|
||||
print(f" Summary: {s['pass_rate']} benchmarks passed in {s['total_time_s']}s")
|
||||
print()
|
||||
|
||||
# Generate and write markdown report
|
||||
markdown = generate_markdown(all_results, run_date)
|
||||
|
||||
args.output.parent.mkdir(parents=True, exist_ok=True)
|
||||
args.output.write_text(markdown, encoding="utf-8")
|
||||
print(f"Report written to: {args.output}")
|
||||
|
||||
if args.json_output:
|
||||
args.json_output.write_text(json.dumps(all_results, indent=2), encoding="utf-8")
|
||||
print(f"JSON data written to: {args.json_output}")
|
||||
|
||||
# Overall pass/fail
|
||||
all_pass = all(
|
||||
not r.get("skipped", False)
|
||||
and all(b.get("passed", False) for b in r.values() if isinstance(b, dict))
|
||||
for r in all_results.values()
|
||||
)
|
||||
return 0 if all_pass else 1
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
sys.exit(main())
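For quick iteration on a single model, the same functions can be driven from a Python session instead of the CLI. A minimal sketch (assuming it runs alongside this script so the relative paths in BENCHMARKS_DIR resolve, and that the model is already pulled):

```python
# Hedged sketch — run the whole suite for one model and print the summary scores,
# skipping the markdown report. Uses only functions defined in this file.
model = "hermes3:8b"

if not model_available(model):
    raise SystemExit(f"{model} is not available in Ollama — `ollama pull {model}` first")

results = run_all_benchmarks(model)   # benchmarks 01–05, sequentially
summary = score_model(results)        # e.g. {"pass_rate": "4/5", "tool_compliance": "90%", ...}
print(json.dumps(summary, indent=2))
```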
|
||||
@@ -240,9 +240,33 @@ def compute_backoff(consecutive_idle: int) -> int:
|
||||
return min(BACKOFF_BASE * (BACKOFF_MULTIPLIER ** consecutive_idle), BACKOFF_MAX)
|
||||
|
||||
|
||||
def seed_cycle_result(item: dict) -> None:
|
||||
"""Pre-seed cycle_result.json with the top queue item.
|
||||
|
||||
Only writes if cycle_result.json does not already exist — never overwrites
|
||||
agent-written data. This ensures cycle_retro.py can always resolve the
|
||||
issue number even when the dispatcher (claude-loop, gemini-loop, etc.) does
|
||||
not write cycle_result.json itself.
|
||||
"""
|
||||
if CYCLE_RESULT_FILE.exists():
|
||||
return # Agent already wrote its own result — leave it alone
|
||||
|
||||
seed = {
|
||||
"issue": item.get("issue"),
|
||||
"type": item.get("type", "unknown"),
|
||||
}
|
||||
try:
|
||||
CYCLE_RESULT_FILE.parent.mkdir(parents=True, exist_ok=True)
|
||||
CYCLE_RESULT_FILE.write_text(json.dumps(seed) + "\n")
|
||||
print(f"[loop-guard] Seeded cycle_result.json with issue #{seed['issue']}")
|
||||
except OSError as exc:
|
||||
print(f"[loop-guard] WARNING: Could not seed cycle_result.json: {exc}")
|
||||
|
||||
|
||||
def main() -> int:
|
||||
wait_mode = "--wait" in sys.argv
|
||||
status_mode = "--status" in sys.argv
|
||||
pick_mode = "--pick" in sys.argv
|
||||
|
||||
state = load_idle_state()
|
||||
|
||||
@@ -269,6 +293,17 @@ def main() -> int:
|
||||
state["consecutive_idle"] = 0
|
||||
state["last_idle_at"] = 0
|
||||
save_idle_state(state)
|
||||
|
||||
# Pre-seed cycle_result.json so cycle_retro.py can resolve issue=
|
||||
# even when the dispatcher doesn't write the file itself.
|
||||
seed_cycle_result(ready[0])
|
||||
|
||||
if pick_mode:
|
||||
# Emit the top issue number to stdout for shell script capture.
|
||||
issue = ready[0].get("issue")
|
||||
if issue is not None:
|
||||
print(issue)
|
||||
|
||||
return 0
|
||||
|
||||
# Queue empty — apply backoff
|
||||
|
||||
@@ -51,6 +51,13 @@ class Settings(BaseSettings):
|
||||
# Set to 0 to use model defaults.
|
||||
ollama_num_ctx: int = 32768
|
||||
|
||||
# Maximum models loaded simultaneously in Ollama — override with OLLAMA_MAX_LOADED_MODELS
|
||||
# Set to 2 so Qwen3-8B and Qwen3-14B can stay hot concurrently (~17 GB combined).
|
||||
# Requires Ollama ≥ 0.1.33. Export this to the Ollama process environment:
|
||||
# OLLAMA_MAX_LOADED_MODELS=2 ollama serve
|
||||
# or add it to your systemd/launchd unit before starting the harness.
|
||||
ollama_max_loaded_models: int = 2
|
||||
|
||||
# Fallback model chains — override with FALLBACK_MODELS / VISION_FALLBACK_MODELS
|
||||
# as comma-separated strings, e.g. FALLBACK_MODELS="qwen3:8b,qwen2.5:14b"
|
||||
# Or edit config/providers.yaml → fallback_chains for the canonical source.
|
||||
@@ -228,6 +235,10 @@ class Settings(BaseSettings):
|
||||
# ── Test / Diagnostics ─────────────────────────────────────────────
|
||||
# Skip loading heavy embedding models (for tests / low-memory envs).
|
||||
timmy_skip_embeddings: bool = False
|
||||
# Embedding backend: "ollama" for Ollama, "local" for sentence-transformers.
|
||||
timmy_embedding_backend: Literal["ollama", "local"] = "local"
|
||||
# Ollama model to use for embeddings (e.g., "nomic-embed-text").
|
||||
ollama_embedding_model: str = "nomic-embed-text"
|
||||
# Disable CSRF middleware entirely (for tests).
|
||||
timmy_disable_csrf: bool = False
|
||||
# Mark the process as running in test mode.
|
||||
@@ -376,6 +387,11 @@ class Settings(BaseSettings):
|
||||
autoresearch_time_budget: int = 300 # seconds per experiment run
|
||||
autoresearch_max_iterations: int = 100
|
||||
autoresearch_metric: str = "val_bpb" # metric to optimise (lower = better)
|
||||
# M3 Max / Apple Silicon tuning (Issue #905).
|
||||
# dataset: "tinystories" (default, lower-entropy, recommended for Mac) or "openwebtext".
|
||||
autoresearch_dataset: str = "tinystories"
|
||||
# backend: "auto" detects MLX on Apple Silicon; "cpu" forces CPU fallback.
|
||||
autoresearch_backend: str = "auto"
|
||||
|
||||
# ── Weekly Narrative Summary ───────────────────────────────────────
|
||||
# Generates a human-readable weekly summary of development activity.
|
||||
@@ -406,6 +422,14 @@ class Settings(BaseSettings):
|
||||
# Alert threshold: free disk below this triggers cleanup / alert (GB).
|
||||
hermes_disk_free_min_gb: float = 10.0
|
||||
|
||||
# ── Energy Budget Monitoring ───────────────────────────────────────
|
||||
# Enable energy budget monitoring (tracks CPU/GPU power during inference).
|
||||
energy_budget_enabled: bool = True
|
||||
# Watts threshold that auto-activates low power mode (on-battery only).
|
||||
energy_budget_watts_threshold: float = 15.0
|
||||
# Model to prefer in low power mode (smaller = more efficient).
|
||||
energy_low_power_model: str = "qwen3:1b"
|
||||
|
||||
# ── Error Logging ─────────────────────────────────────────────────
|
||||
error_log_enabled: bool = True
|
||||
error_log_dir: str = "logs"
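All of the fields above are plain pydantic-settings attributes, so each one can be flipped from the environment (or .env) without code changes; under the default mapping the env var name is simply the upper-cased field name. A small illustrative check, assuming the project's usual `from config import settings` singleton:

```python
# Hedged sketch — confirm the new knobs are picked up from the environment, e.g. after
#   ENERGY_BUDGET_ENABLED=false ENERGY_BUDGET_WATTS_THRESHOLD=20 <start the dashboard>
from config import settings

print(settings.energy_budget_enabled)          # False when overridden as above
print(settings.energy_budget_watts_threshold)  # 20.0
print(settings.ollama_max_loaded_models)       # 2 unless OLLAMA_MAX_LOADED_MODELS says otherwise
```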
|
||||
|
||||
@@ -37,14 +37,15 @@ from dashboard.routes.db_explorer import router as db_explorer_router
|
||||
from dashboard.routes.discord import router as discord_router
|
||||
from dashboard.routes.experiments import router as experiments_router
|
||||
from dashboard.routes.grok import router as grok_router
|
||||
from dashboard.routes.energy import router as energy_router
|
||||
from dashboard.routes.health import router as health_router
|
||||
from dashboard.routes.hermes import router as hermes_router
|
||||
from dashboard.routes.loop_qa import router as loop_qa_router
|
||||
from dashboard.routes.memory import router as memory_router
|
||||
from dashboard.routes.mobile import router as mobile_router
|
||||
from dashboard.routes.nexus import router as nexus_router
|
||||
from dashboard.routes.models import api_router as models_api_router
|
||||
from dashboard.routes.models import router as models_router
|
||||
from dashboard.routes.nexus import router as nexus_router
|
||||
from dashboard.routes.quests import router as quests_router
|
||||
from dashboard.routes.scorecards import router as scorecards_router
|
||||
from dashboard.routes.sovereignty_metrics import router as sovereignty_metrics_router
|
||||
@@ -54,6 +55,8 @@ from dashboard.routes.system import router as system_router
|
||||
from dashboard.routes.tasks import router as tasks_router
|
||||
from dashboard.routes.telegram import router as telegram_router
|
||||
from dashboard.routes.thinking import router as thinking_router
|
||||
from dashboard.routes.self_correction import router as self_correction_router
|
||||
from dashboard.routes.three_strike import router as three_strike_router
|
||||
from dashboard.routes.tools import router as tools_router
|
||||
from dashboard.routes.tower import router as tower_router
|
||||
from dashboard.routes.voice import router as voice_router
|
||||
@@ -549,12 +552,28 @@ async def lifespan(app: FastAPI):
|
||||
except Exception:
|
||||
logger.debug("Failed to register error recorder")
|
||||
|
||||
# Mark session start for sovereignty duration tracking
|
||||
try:
|
||||
from timmy.sovereignty import mark_session_start
|
||||
|
||||
mark_session_start()
|
||||
except Exception:
|
||||
logger.debug("Failed to mark sovereignty session start")
|
||||
|
||||
logger.info("✓ Dashboard ready for requests")
|
||||
|
||||
yield
|
||||
|
||||
await _shutdown_cleanup(bg_tasks, workshop_heartbeat)
|
||||
|
||||
# Generate and commit sovereignty session report
|
||||
try:
|
||||
from timmy.sovereignty import generate_and_commit_report
|
||||
|
||||
await generate_and_commit_report()
|
||||
except Exception as exc:
|
||||
logger.warning("Sovereignty report generation failed at shutdown: %s", exc)
|
||||
|
||||
|
||||
app = FastAPI(
|
||||
title="Mission Control",
|
||||
@@ -672,10 +691,13 @@ app.include_router(matrix_router)
|
||||
app.include_router(tower_router)
|
||||
app.include_router(daily_run_router)
|
||||
app.include_router(hermes_router)
|
||||
app.include_router(energy_router)
|
||||
app.include_router(quests_router)
|
||||
app.include_router(scorecards_router)
|
||||
app.include_router(sovereignty_metrics_router)
|
||||
app.include_router(sovereignty_ws_router)
|
||||
app.include_router(three_strike_router)
|
||||
app.include_router(self_correction_router)
|
||||
|
||||
|
||||
@app.websocket("/ws")
|
||||
|
||||
121 src/dashboard/routes/energy.py Normal file
@@ -0,0 +1,121 @@
|
||||
"""Energy Budget Monitoring routes.
|
||||
|
||||
Exposes the energy budget monitor via REST API so the dashboard and
|
||||
external tools can query power draw, efficiency scores, and toggle
|
||||
low power mode.
|
||||
|
||||
Refs: #1009
|
||||
"""
|
||||
|
||||
import logging
|
||||
|
||||
from fastapi import APIRouter, HTTPException
|
||||
from pydantic import BaseModel
|
||||
|
||||
from config import settings
|
||||
from infrastructure.energy.monitor import energy_monitor
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
router = APIRouter(prefix="/energy", tags=["energy"])
|
||||
|
||||
|
||||
class LowPowerRequest(BaseModel):
|
||||
"""Request body for toggling low power mode."""
|
||||
|
||||
enabled: bool
|
||||
|
||||
|
||||
class InferenceEventRequest(BaseModel):
|
||||
"""Request body for recording an inference event."""
|
||||
|
||||
model: str
|
||||
tokens_per_second: float
|
||||
|
||||
|
||||
@router.get("/status")
|
||||
async def energy_status():
|
||||
"""Return the current energy budget status.
|
||||
|
||||
Returns the live power estimate, efficiency score (0–10), recent
|
||||
inference samples, and whether low power mode is active.
|
||||
"""
|
||||
if not getattr(settings, "energy_budget_enabled", True):
|
||||
return {
|
||||
"enabled": False,
|
||||
"message": "Energy budget monitoring is disabled (ENERGY_BUDGET_ENABLED=false)",
|
||||
}
|
||||
|
||||
report = await energy_monitor.get_report()
|
||||
return {**report.to_dict(), "enabled": True}
|
||||
|
||||
|
||||
@router.get("/report")
|
||||
async def energy_report():
|
||||
"""Detailed energy budget report with all recent samples.
|
||||
|
||||
Same as /energy/status but always includes the full sample history.
|
||||
"""
|
||||
if not getattr(settings, "energy_budget_enabled", True):
|
||||
raise HTTPException(status_code=503, detail="Energy budget monitoring is disabled")
|
||||
|
||||
report = await energy_monitor.get_report()
|
||||
data = report.to_dict()
|
||||
# Override recent_samples to include the full window (not just last 10)
|
||||
data["recent_samples"] = [
|
||||
{
|
||||
"timestamp": s.timestamp,
|
||||
"model": s.model,
|
||||
"tokens_per_second": round(s.tokens_per_second, 1),
|
||||
"estimated_watts": round(s.estimated_watts, 2),
|
||||
"efficiency": round(s.efficiency, 3),
|
||||
"efficiency_score": round(s.efficiency_score, 2),
|
||||
}
|
||||
for s in list(energy_monitor._samples)
|
||||
]
|
||||
return {**data, "enabled": True}
|
||||
|
||||
|
||||
@router.post("/low-power")
|
||||
async def set_low_power_mode(body: LowPowerRequest):
|
||||
"""Enable or disable low power mode.
|
||||
|
||||
In low power mode the cascade router is advised to prefer the
|
||||
configured energy_low_power_model (see settings).
|
||||
"""
|
||||
if not getattr(settings, "energy_budget_enabled", True):
|
||||
raise HTTPException(status_code=503, detail="Energy budget monitoring is disabled")
|
||||
|
||||
energy_monitor.set_low_power_mode(body.enabled)
|
||||
low_power_model = getattr(settings, "energy_low_power_model", "qwen3:1b")
|
||||
return {
|
||||
"low_power_mode": body.enabled,
|
||||
"preferred_model": low_power_model if body.enabled else None,
|
||||
"message": (
|
||||
f"Low power mode {'enabled' if body.enabled else 'disabled'}. "
|
||||
+ (f"Routing to {low_power_model}." if body.enabled else "Routing restored to default.")
|
||||
),
|
||||
}
|
||||
|
||||
|
||||
@router.post("/record")
|
||||
async def record_inference_event(body: InferenceEventRequest):
|
||||
"""Record an inference event for efficiency tracking.
|
||||
|
||||
Called after each LLM inference completes. Updates the rolling
|
||||
efficiency score and may auto-activate low power mode if watts
|
||||
exceed the configured threshold.
|
||||
"""
|
||||
if not getattr(settings, "energy_budget_enabled", True):
|
||||
return {"recorded": False, "message": "Energy budget monitoring is disabled"}
|
||||
|
||||
if body.tokens_per_second <= 0:
|
||||
raise HTTPException(status_code=422, detail="tokens_per_second must be positive")
|
||||
|
||||
sample = energy_monitor.record_inference(body.model, body.tokens_per_second)
|
||||
return {
|
||||
"recorded": True,
|
||||
"efficiency_score": round(sample.efficiency_score, 2),
|
||||
"estimated_watts": round(sample.estimated_watts, 2),
|
||||
"low_power_mode": energy_monitor.low_power_mode,
|
||||
}
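A minimal client-side sketch of the intended flow — record a completed inference, read the rolling status, then force low power mode. The base URL is an assumption; point it at wherever the dashboard is served:

```python
# Hedged sketch — exercises the energy endpoints with plain requests. BASE is an assumption.
import requests

BASE = "http://localhost:8000"

# Report a completed inference so the rolling efficiency score has data.
requests.post(
    f"{BASE}/energy/record",
    json={"model": "qwen3:8b", "tokens_per_second": 44.0},
    timeout=10,
).raise_for_status()

status = requests.get(f"{BASE}/energy/status", timeout=10).json()
print(status["efficiency_score"], status["low_power_mode"])

# Advise the cascade router to prefer the configured small model.
requests.post(f"{BASE}/energy/low-power", json={"enabled": True}, timeout=10).raise_for_status()
```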
|
||||
@@ -12,7 +12,7 @@ Routes:
|
||||
|
||||
import asyncio
|
||||
import logging
|
||||
from datetime import datetime, timezone
|
||||
from datetime import UTC, datetime
|
||||
|
||||
from fastapi import APIRouter, Form, Request
|
||||
from fastapi.responses import HTMLResponse
|
||||
@@ -39,7 +39,7 @@ _nexus_log: list[dict] = []
|
||||
|
||||
|
||||
def _ts() -> str:
|
||||
return datetime.now(timezone.utc).strftime("%H:%M:%S")
|
||||
return datetime.now(UTC).strftime("%H:%M:%S")
|
||||
|
||||
|
||||
def _append_log(role: str, content: str) -> None:
|
||||
@@ -94,9 +94,7 @@ async def nexus_chat(request: Request, message: str = Form(...)):
|
||||
|
||||
# Fetch semantically relevant memories to surface in the sidebar
|
||||
try:
|
||||
memory_hits = await asyncio.to_thread(
|
||||
search_memories, query=message, limit=4
|
||||
)
|
||||
memory_hits = await asyncio.to_thread(search_memories, query=message, limit=4)
|
||||
except Exception as exc:
|
||||
logger.warning("Nexus memory search failed: %s", exc)
|
||||
memory_hits = []
|
||||
|
||||
58 src/dashboard/routes/self_correction.py Normal file
@@ -0,0 +1,58 @@
|
||||
"""Self-Correction Dashboard routes.
|
||||
|
||||
GET /self-correction/ui — HTML dashboard
|
||||
GET /self-correction/timeline — HTMX partial: recent event timeline
|
||||
GET /self-correction/patterns — HTMX partial: recurring failure patterns
|
||||
"""
|
||||
|
||||
import logging
|
||||
|
||||
from fastapi import APIRouter, Request
|
||||
from fastapi.responses import HTMLResponse
|
||||
|
||||
from dashboard.templating import templates
|
||||
from infrastructure.self_correction import get_corrections, get_patterns, get_stats
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
router = APIRouter(prefix="/self-correction", tags=["self-correction"])
|
||||
|
||||
|
||||
@router.get("/ui", response_class=HTMLResponse)
|
||||
async def self_correction_ui(request: Request):
|
||||
"""Render the Self-Correction Dashboard."""
|
||||
stats = get_stats()
|
||||
corrections = get_corrections(limit=20)
|
||||
patterns = get_patterns(top_n=10)
|
||||
return templates.TemplateResponse(
|
||||
request,
|
||||
"self_correction.html",
|
||||
{
|
||||
"stats": stats,
|
||||
"corrections": corrections,
|
||||
"patterns": patterns,
|
||||
},
|
||||
)
|
||||
|
||||
|
||||
@router.get("/timeline", response_class=HTMLResponse)
|
||||
async def self_correction_timeline(request: Request):
|
||||
"""HTMX partial: recent self-correction event timeline."""
|
||||
corrections = get_corrections(limit=30)
|
||||
return templates.TemplateResponse(
|
||||
request,
|
||||
"partials/self_correction_timeline.html",
|
||||
{"corrections": corrections},
|
||||
)
|
||||
|
||||
|
||||
@router.get("/patterns", response_class=HTMLResponse)
|
||||
async def self_correction_patterns(request: Request):
|
||||
"""HTMX partial: recurring failure patterns."""
|
||||
patterns = get_patterns(top_n=10)
|
||||
stats = get_stats()
|
||||
return templates.TemplateResponse(
|
||||
request,
|
||||
"partials/self_correction_patterns.html",
|
||||
{"patterns": patterns, "stats": stats},
|
||||
)
|
||||
116 src/dashboard/routes/three_strike.py Normal file
@@ -0,0 +1,116 @@
|
||||
"""Three-Strike Detector dashboard routes.
|
||||
|
||||
Provides JSON API endpoints for inspecting and managing the three-strike
|
||||
detector state.
|
||||
|
||||
Refs: #962
|
||||
"""
|
||||
|
||||
import logging
|
||||
from typing import Any
|
||||
|
||||
from fastapi import APIRouter, HTTPException
|
||||
from pydantic import BaseModel
|
||||
|
||||
from timmy.sovereignty.three_strike import CATEGORIES, get_detector
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
router = APIRouter(prefix="/sovereignty/three-strike", tags=["three-strike"])
|
||||
|
||||
|
||||
class RecordRequest(BaseModel):
|
||||
category: str
|
||||
key: str
|
||||
metadata: dict[str, Any] = {}
|
||||
|
||||
|
||||
class AutomationRequest(BaseModel):
|
||||
artifact_path: str
|
||||
|
||||
|
||||
@router.get("")
|
||||
async def list_strikes() -> dict[str, Any]:
|
||||
"""Return all strike records."""
|
||||
detector = get_detector()
|
||||
records = detector.list_all()
|
||||
return {
|
||||
"records": [
|
||||
{
|
||||
"category": r.category,
|
||||
"key": r.key,
|
||||
"count": r.count,
|
||||
"blocked": r.blocked,
|
||||
"automation": r.automation,
|
||||
"first_seen": r.first_seen,
|
||||
"last_seen": r.last_seen,
|
||||
}
|
||||
for r in records
|
||||
],
|
||||
"categories": sorted(CATEGORIES),
|
||||
}
|
||||
|
||||
|
||||
@router.get("/blocked")
|
||||
async def list_blocked() -> dict[str, Any]:
|
||||
"""Return only blocked (category, key) pairs."""
|
||||
detector = get_detector()
|
||||
records = detector.list_blocked()
|
||||
return {
|
||||
"blocked": [
|
||||
{
|
||||
"category": r.category,
|
||||
"key": r.key,
|
||||
"count": r.count,
|
||||
"automation": r.automation,
|
||||
"last_seen": r.last_seen,
|
||||
}
|
||||
for r in records
|
||||
]
|
||||
}
|
||||
|
||||
|
||||
@router.post("/record")
|
||||
async def record_strike(body: RecordRequest) -> dict[str, Any]:
|
||||
"""Record a manual action. Returns strike state; 409 when blocked."""
|
||||
from timmy.sovereignty.three_strike import ThreeStrikeError
|
||||
|
||||
detector = get_detector()
|
||||
try:
|
||||
record = detector.record(body.category, body.key, body.metadata)
|
||||
return {
|
||||
"category": record.category,
|
||||
"key": record.key,
|
||||
"count": record.count,
|
||||
"blocked": record.blocked,
|
||||
"automation": record.automation,
|
||||
}
|
||||
except ValueError as exc:
|
||||
raise HTTPException(status_code=422, detail=str(exc)) from exc
|
||||
except ThreeStrikeError as exc:
|
||||
raise HTTPException(
|
||||
status_code=409,
|
||||
detail={
|
||||
"error": "three_strike_block",
|
||||
"message": str(exc),
|
||||
"category": exc.category,
|
||||
"key": exc.key,
|
||||
"count": exc.count,
|
||||
},
|
||||
) from exc
|
||||
|
||||
|
||||
@router.post("/{category}/{key}/automation")
|
||||
async def register_automation(category: str, key: str, body: AutomationRequest) -> dict[str, bool]:
|
||||
"""Register an automation artifact to unblock a (category, key) pair."""
|
||||
detector = get_detector()
|
||||
detector.register_automation(category, key, body.artifact_path)
|
||||
return {"success": True}
|
||||
|
||||
|
||||
@router.get("/{category}/{key}/events")
|
||||
async def get_strike_events(category: str, key: str, limit: int = 50) -> dict[str, Any]:
|
||||
"""Return the individual strike events for a (category, key) pair."""
|
||||
detector = get_detector()
|
||||
events = detector.get_events(category, key, limit=limit)
|
||||
return {"category": category, "key": key, "events": events}
|
||||
@@ -71,6 +71,7 @@
|
||||
<a href="/spark/ui" class="mc-test-link">SPARK</a>
|
||||
<a href="/memory" class="mc-test-link">MEMORY</a>
|
||||
<a href="/marketplace/ui" class="mc-test-link">MARKET</a>
|
||||
<a href="/self-correction/ui" class="mc-test-link">SELF-CORRECT</a>
|
||||
</div>
|
||||
</div>
|
||||
<div class="mc-nav-dropdown">
|
||||
@@ -132,6 +133,7 @@
|
||||
<a href="/spark/ui" class="mc-mobile-link">SPARK</a>
|
||||
<a href="/memory" class="mc-mobile-link">MEMORY</a>
|
||||
<a href="/marketplace/ui" class="mc-mobile-link">MARKET</a>
|
||||
<a href="/self-correction/ui" class="mc-mobile-link">SELF-CORRECT</a>
|
||||
<div class="mc-mobile-section-label">AGENTS</div>
|
||||
<a href="/hands" class="mc-mobile-link">HANDS</a>
|
||||
<a href="/work-orders/queue" class="mc-mobile-link">WORK ORDERS</a>
|
||||
28 src/dashboard/templates/partials/self_correction_patterns.html Normal file
@@ -0,0 +1,28 @@
{% if patterns %}
<table class="mc-table w-100">
  <thead>
    <tr>
      <th>ERROR TYPE</th>
      <th class="text-center">COUNT</th>
      <th class="text-center">CORRECTED</th>
      <th class="text-center">FAILED</th>
      <th>LAST SEEN</th>
    </tr>
  </thead>
  <tbody>
    {% for p in patterns %}
    <tr>
      <td class="sc-pattern-type">{{ p.error_type }}</td>
      <td class="text-center">
        <span class="badge {% if p.count >= 5 %}badge-error{% elif p.count >= 3 %}badge-warning{% else %}badge-info{% endif %}">{{ p.count }}</span>
      </td>
      <td class="text-center text-success">{{ p.success_count }}</td>
      <td class="text-center {% if p.failed_count > 0 %}text-danger{% else %}text-muted{% endif %}">{{ p.failed_count }}</td>
      <td class="sc-event-time">{{ p.last_seen[:16] if p.last_seen else '—' }}</td>
    </tr>
    {% endfor %}
  </tbody>
</table>
{% else %}
<div class="text-center text-muted py-3">No patterns detected yet.</div>
{% endif %}
26 src/dashboard/templates/partials/self_correction_timeline.html Normal file
@@ -0,0 +1,26 @@
{% if corrections %}
{% for ev in corrections %}
<div class="sc-event sc-status-{{ ev.outcome_status }}">
  <div class="sc-event-header">
    <span class="sc-status-badge sc-status-{{ ev.outcome_status }}">
      {% if ev.outcome_status == 'success' %}✓ CORRECTED
      {% elif ev.outcome_status == 'partial' %}● PARTIAL
      {% else %}✗ FAILED
      {% endif %}
    </span>
    <span class="sc-source-badge">{{ ev.source }}</span>
    <span class="sc-event-time">{{ ev.created_at[:19] }}</span>
  </div>
  <div class="sc-event-error-type">{{ ev.error_type }}</div>
  <div class="sc-event-intent"><span class="sc-label">INTENT:</span> {{ ev.original_intent[:120] }}{% if ev.original_intent | length > 120 %}…{% endif %}</div>
  <div class="sc-event-error"><span class="sc-label">ERROR:</span> {{ ev.detected_error[:120] }}{% if ev.detected_error | length > 120 %}…{% endif %}</div>
  <div class="sc-event-strategy"><span class="sc-label">STRATEGY:</span> {{ ev.correction_strategy[:120] }}{% if ev.correction_strategy | length > 120 %}…{% endif %}</div>
  <div class="sc-event-outcome"><span class="sc-label">OUTCOME:</span> {{ ev.final_outcome[:120] }}{% if ev.final_outcome | length > 120 %}…{% endif %}</div>
  {% if ev.task_id %}
  <div class="sc-event-meta">task: {{ ev.task_id[:8] }}</div>
  {% endif %}
</div>
{% endfor %}
{% else %}
<div class="text-center text-muted py-3">No self-correction events recorded yet.</div>
{% endif %}
102 src/dashboard/templates/self_correction.html Normal file
@@ -0,0 +1,102 @@
|
||||
{% extends "base.html" %}
|
||||
{% from "macros.html" import panel %}
|
||||
|
||||
{% block title %}Timmy Time — Self-Correction Dashboard{% endblock %}
|
||||
|
||||
{% block extra_styles %}{% endblock %}
|
||||
|
||||
{% block content %}
|
||||
<div class="container-fluid py-3">
|
||||
|
||||
<!-- Header -->
|
||||
<div class="spark-header mb-3">
|
||||
<div class="spark-title">SELF-CORRECTION</div>
|
||||
<div class="spark-subtitle">
|
||||
Agent error detection & recovery —
|
||||
<span class="spark-status-val">{{ stats.total }}</span> events,
|
||||
<span class="spark-status-val">{{ stats.success_rate }}%</span> correction rate,
|
||||
<span class="spark-status-val">{{ stats.unique_error_types }}</span> distinct error types
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="row g-3">
|
||||
|
||||
<!-- Left column: stats + patterns -->
|
||||
<div class="col-12 col-lg-4 d-flex flex-column gap-3">
|
||||
|
||||
<!-- Stats panel -->
|
||||
<div class="card mc-panel">
|
||||
<div class="card-header mc-panel-header">// CORRECTION STATS</div>
|
||||
<div class="card-body p-3">
|
||||
<div class="spark-stat-grid">
|
||||
<div class="spark-stat">
|
||||
<span class="spark-stat-label">TOTAL</span>
|
||||
<span class="spark-stat-value">{{ stats.total }}</span>
|
||||
</div>
|
||||
<div class="spark-stat">
|
||||
<span class="spark-stat-label">CORRECTED</span>
|
||||
<span class="spark-stat-value text-success">{{ stats.success_count }}</span>
|
||||
</div>
|
||||
<div class="spark-stat">
|
||||
<span class="spark-stat-label">PARTIAL</span>
|
||||
<span class="spark-stat-value text-warning">{{ stats.partial_count }}</span>
|
||||
</div>
|
||||
<div class="spark-stat">
|
||||
<span class="spark-stat-label">FAILED</span>
|
||||
<span class="spark-stat-value {% if stats.failed_count > 0 %}text-danger{% else %}text-muted{% endif %}">{{ stats.failed_count }}</span>
|
||||
</div>
|
||||
</div>
|
||||
<div class="mt-3">
|
||||
<div class="d-flex justify-content-between mb-1">
|
||||
<small class="text-muted">Correction Rate</small>
|
||||
<small class="{% if stats.success_rate >= 70 %}text-success{% elif stats.success_rate >= 40 %}text-warning{% else %}text-danger{% endif %}">{{ stats.success_rate }}%</small>
|
||||
</div>
|
||||
<div class="progress" style="height:6px;">
|
||||
<div class="progress-bar {% if stats.success_rate >= 70 %}bg-success{% elif stats.success_rate >= 40 %}bg-warning{% else %}bg-danger{% endif %}"
|
||||
role="progressbar"
|
||||
style="width:{{ stats.success_rate }}%"
|
||||
aria-valuenow="{{ stats.success_rate }}"
|
||||
aria-valuemin="0"
|
||||
aria-valuemax="100"></div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- Patterns panel -->
|
||||
<div class="card mc-panel"
|
||||
hx-get="/self-correction/patterns"
|
||||
hx-trigger="load, every 60s"
|
||||
hx-target="#sc-patterns-body"
|
||||
hx-swap="innerHTML">
|
||||
<div class="card-header mc-panel-header d-flex justify-content-between align-items-center">
|
||||
<span>// RECURRING PATTERNS</span>
|
||||
<span class="badge badge-info">{{ patterns | length }}</span>
|
||||
</div>
|
||||
<div class="card-body p-0" id="sc-patterns-body">
|
||||
{% include "partials/self_correction_patterns.html" %}
|
||||
</div>
|
||||
</div>
|
||||
|
||||
</div>
|
||||
|
||||
<!-- Right column: timeline -->
|
||||
<div class="col-12 col-lg-8">
|
||||
<div class="card mc-panel"
|
||||
hx-get="/self-correction/timeline"
|
||||
hx-trigger="load, every 30s"
|
||||
hx-target="#sc-timeline-body"
|
||||
hx-swap="innerHTML">
|
||||
<div class="card-header mc-panel-header d-flex justify-content-between align-items-center">
|
||||
<span>// CORRECTION TIMELINE</span>
|
||||
<span class="badge badge-info">{{ corrections | length }}</span>
|
||||
</div>
|
||||
<div class="card-body p-3" id="sc-timeline-body">
|
||||
{% include "partials/self_correction_timeline.html" %}
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
</div>
|
||||
</div>
|
||||
{% endblock %}
|
||||
8 src/infrastructure/energy/__init__.py Normal file
@@ -0,0 +1,8 @@
"""Energy Budget Monitoring — power-draw estimation for LLM inference.

Refs: #1009
"""

from infrastructure.energy.monitor import EnergyBudgetMonitor, energy_monitor

__all__ = ["EnergyBudgetMonitor", "energy_monitor"]
|
||||
371 src/infrastructure/energy/monitor.py Normal file
@@ -0,0 +1,371 @@
|
||||
"""Energy Budget Monitor — estimates GPU/CPU power draw during LLM inference.
|
||||
|
||||
Tracks estimated power consumption to optimize for "metabolic efficiency".
|
||||
Three estimation strategies attempted in priority order:
|
||||
|
||||
1. Battery discharge via ioreg (macOS — works without sudo, on-battery only)
|
||||
2. CPU utilisation proxy via top (scaled against an assumed ~40 W package TDP)
3. Model-size heuristic (model_size_gb × ~2 W/GB when no live reading is available)
|
||||
|
||||
Energy Efficiency score (0–10):
|
||||
efficiency = tokens_per_second / estimated_watts, normalised to 0–10.
|
||||
|
||||
Low Power Mode:
|
||||
Activated manually or automatically when draw exceeds the configured
|
||||
threshold. In low power mode the cascade router is advised to prefer the
|
||||
configured low_power_model (e.g. qwen3:1b or similar compact model).
|
||||
|
||||
Refs: #1009
|
||||
"""
|
||||
|
||||
import asyncio
|
||||
import json
|
||||
import logging
|
||||
import subprocess
|
||||
import time
|
||||
from collections import deque
|
||||
from dataclasses import dataclass, field
|
||||
from datetime import UTC, datetime
|
||||
from typing import Any
|
||||
|
||||
from config import settings
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Approximate model-size lookup (GB) used for heuristic power estimate.
|
||||
# Keys are lowercase substring matches against the model name.
|
||||
_MODEL_SIZE_GB: dict[str, float] = {
|
||||
"qwen3:1b": 0.8,
|
||||
"qwen3:3b": 2.0,
|
||||
"qwen3:4b": 2.5,
|
||||
"qwen3:8b": 5.5,
|
||||
"qwen3:14b": 9.0,
|
||||
"qwen3:30b": 20.0,
|
||||
"qwen3:32b": 20.0,
|
||||
"llama3:8b": 5.5,
|
||||
"llama3:70b": 45.0,
|
||||
"mistral:7b": 4.5,
|
||||
"gemma3:4b": 2.5,
|
||||
"gemma3:12b": 8.0,
|
||||
"gemma3:27b": 17.0,
|
||||
"phi4:14b": 9.0,
|
||||
}
|
||||
_DEFAULT_MODEL_SIZE_GB = 5.0 # fallback when model not in table
|
||||
_WATTS_PER_GB_HEURISTIC = 2.0 # rough W/GB for Apple Silicon unified memory
|
||||
|
||||
# Efficiency score normalisation: score 10 at this efficiency (tok/s per W).
|
||||
_EFFICIENCY_SCORE_CEILING = 5.0 # tok/s per W → score 10
|
||||
|
||||
# Rolling window for recent samples
|
||||
_HISTORY_MAXLEN = 60
|
||||
|
||||
|
||||
@dataclass
|
||||
class InferenceSample:
|
||||
"""A single inference event captured by record_inference()."""
|
||||
|
||||
timestamp: str
|
||||
model: str
|
||||
tokens_per_second: float
|
||||
estimated_watts: float
|
||||
efficiency: float # tokens/s per watt
|
||||
efficiency_score: float # 0–10
|
||||
|
||||
|
||||
@dataclass
|
||||
class EnergyReport:
|
||||
"""Snapshot of current energy budget state."""
|
||||
|
||||
timestamp: str
|
||||
low_power_mode: bool
|
||||
current_watts: float
|
||||
strategy: str # "battery", "cpu_proxy", "heuristic", "unavailable"
|
||||
efficiency_score: float # 0–10; -1 if no inference samples yet
|
||||
recent_samples: list[InferenceSample]
|
||||
recommendation: str
|
||||
details: dict[str, Any] = field(default_factory=dict)
|
||||
|
||||
def to_dict(self) -> dict[str, Any]:
|
||||
return {
|
||||
"timestamp": self.timestamp,
|
||||
"low_power_mode": self.low_power_mode,
|
||||
"current_watts": round(self.current_watts, 2),
|
||||
"strategy": self.strategy,
|
||||
"efficiency_score": round(self.efficiency_score, 2),
|
||||
"recent_samples": [
|
||||
{
|
||||
"timestamp": s.timestamp,
|
||||
"model": s.model,
|
||||
"tokens_per_second": round(s.tokens_per_second, 1),
|
||||
"estimated_watts": round(s.estimated_watts, 2),
|
||||
"efficiency": round(s.efficiency, 3),
|
||||
"efficiency_score": round(s.efficiency_score, 2),
|
||||
}
|
||||
for s in self.recent_samples
|
||||
],
|
||||
"recommendation": self.recommendation,
|
||||
"details": self.details,
|
||||
}
|
||||
|
||||
|
||||
class EnergyBudgetMonitor:
|
||||
"""Estimates power consumption and tracks LLM inference efficiency.
|
||||
|
||||
All blocking I/O (subprocess calls) is wrapped in asyncio.to_thread()
|
||||
so the event loop is never blocked. Results are cached.
|
||||
|
||||
Usage::
|
||||
|
||||
# Record an inference event
|
||||
energy_monitor.record_inference("qwen3:8b", tokens_per_second=42.0)
|
||||
|
||||
# Get the current report
|
||||
report = await energy_monitor.get_report()
|
||||
|
||||
# Toggle low power mode
|
||||
energy_monitor.set_low_power_mode(True)
|
||||
"""
|
||||
|
||||
_POWER_CACHE_TTL = 10.0 # seconds between fresh power readings
|
||||
|
||||
def __init__(self) -> None:
|
||||
self._low_power_mode: bool = False
|
||||
self._samples: deque[InferenceSample] = deque(maxlen=_HISTORY_MAXLEN)
|
||||
self._cached_watts: float = 0.0
|
||||
self._cached_strategy: str = "unavailable"
|
||||
self._cache_ts: float = 0.0
|
||||
|
||||
# ── Public API ────────────────────────────────────────────────────────────
|
||||
|
||||
@property
|
||||
def low_power_mode(self) -> bool:
|
||||
return self._low_power_mode
|
||||
|
||||
def set_low_power_mode(self, enabled: bool) -> None:
|
||||
"""Enable or disable low power mode."""
|
||||
self._low_power_mode = enabled
|
||||
state = "enabled" if enabled else "disabled"
|
||||
logger.info("Energy budget: low power mode %s", state)
|
||||
|
||||
def record_inference(self, model: str, tokens_per_second: float) -> InferenceSample:
|
||||
"""Record an inference event for efficiency tracking.
|
||||
|
||||
Call this after each LLM inference completes with the model name and
|
||||
measured throughput. The current power estimate is used to compute
|
||||
the efficiency score.
|
||||
|
||||
Args:
|
||||
model: Ollama model name (e.g. "qwen3:8b").
|
||||
tokens_per_second: Measured decode throughput.
|
||||
|
||||
Returns:
|
||||
The recorded InferenceSample.
|
||||
"""
|
||||
watts = self._cached_watts if self._cached_watts > 0 else self._estimate_watts_sync(model)
|
||||
efficiency = tokens_per_second / max(watts, 0.1)
|
||||
score = min(10.0, (efficiency / _EFFICIENCY_SCORE_CEILING) * 10.0)
|
||||
|
||||
sample = InferenceSample(
|
||||
timestamp=datetime.now(UTC).isoformat(),
|
||||
model=model,
|
||||
tokens_per_second=tokens_per_second,
|
||||
estimated_watts=watts,
|
||||
efficiency=efficiency,
|
||||
efficiency_score=score,
|
||||
)
|
||||
self._samples.append(sample)
|
||||
|
||||
# Auto-engage low power mode if above threshold and budget is enabled
|
||||
threshold = getattr(settings, "energy_budget_watts_threshold", 15.0)
|
||||
if watts > threshold and not self._low_power_mode:
|
||||
logger.info(
|
||||
"Energy budget: %.1fW exceeds threshold %.1fW — auto-engaging low power mode",
|
||||
watts,
|
||||
threshold,
|
||||
)
|
||||
self.set_low_power_mode(True)
|
||||
|
||||
return sample
|
||||
|
||||
async def get_report(self) -> EnergyReport:
|
||||
"""Return the current energy budget report.
|
||||
|
||||
Refreshes the power estimate if the cache is stale.
|
||||
"""
|
||||
await self._refresh_power_cache()
|
||||
|
||||
score = self._compute_mean_efficiency_score()
|
||||
recommendation = self._build_recommendation(score)
|
||||
|
||||
return EnergyReport(
|
||||
timestamp=datetime.now(UTC).isoformat(),
|
||||
low_power_mode=self._low_power_mode,
|
||||
current_watts=self._cached_watts,
|
||||
strategy=self._cached_strategy,
|
||||
efficiency_score=score,
|
||||
recent_samples=list(self._samples)[-10:],
|
||||
recommendation=recommendation,
|
||||
details={"sample_count": len(self._samples)},
|
||||
)
|
||||
|
||||
# ── Power estimation ──────────────────────────────────────────────────────
|
||||
|
||||
async def _refresh_power_cache(self) -> None:
|
||||
"""Refresh the cached power reading if stale."""
|
||||
now = time.monotonic()
|
||||
if now - self._cache_ts < self._POWER_CACHE_TTL:
|
||||
return
|
||||
|
||||
try:
|
||||
watts, strategy = await asyncio.to_thread(self._read_power)
|
||||
except Exception as exc:
|
||||
logger.debug("Energy: power read failed: %s", exc)
|
||||
watts, strategy = 0.0, "unavailable"
|
||||
|
||||
self._cached_watts = watts
|
||||
self._cached_strategy = strategy
|
||||
self._cache_ts = now
|
||||
|
||||
def _read_power(self) -> tuple[float, str]:
|
||||
"""Synchronous power reading — tries strategies in priority order.
|
||||
|
||||
Returns:
|
||||
Tuple of (watts, strategy_name).
|
||||
"""
|
||||
# Strategy 1: battery discharge via ioreg (on-battery Macs)
|
||||
try:
|
||||
watts = self._read_battery_watts()
|
||||
if watts > 0:
|
||||
return watts, "battery"
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
# Strategy 2: CPU utilisation proxy via top
|
||||
try:
|
||||
cpu_pct = self._read_cpu_pct()
|
||||
if cpu_pct >= 0:
|
||||
# M3 Max TDP ≈ 40W; scale linearly
|
||||
watts = (cpu_pct / 100.0) * 40.0
|
||||
return watts, "cpu_proxy"
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
# Strategy 3: heuristic from loaded model size
|
||||
return 0.0, "unavailable"
|
||||
|
||||
def _estimate_watts_sync(self, model: str) -> float:
|
||||
"""Estimate watts from model size when no live reading is available."""
|
||||
size_gb = self._model_size_gb(model)
|
||||
return size_gb * _WATTS_PER_GB_HEURISTIC
|
||||
|
||||
def _read_battery_watts(self) -> float:
|
||||
"""Read instantaneous battery discharge via ioreg.
|
||||
|
||||
Returns watts if on battery, 0.0 if plugged in or unavailable.
|
||||
Requires macOS; no sudo needed.
|
||||
"""
|
||||
result = subprocess.run(
|
||||
["ioreg", "-r", "-c", "AppleSmartBattery", "-d", "1"],
|
||||
capture_output=True,
|
||||
text=True,
|
||||
timeout=3,
|
||||
)
|
||||
amperage_ma = 0.0
|
||||
voltage_mv = 0.0
|
||||
is_charging = True # assume charging unless we see ExternalConnected = No
|
||||
|
||||
for line in result.stdout.splitlines():
|
||||
stripped = line.strip()
|
||||
if '"InstantAmperage"' in stripped:
|
||||
try:
|
||||
amperage_ma = float(stripped.split("=")[-1].strip())
|
||||
except ValueError:
|
||||
pass
|
||||
elif '"Voltage"' in stripped:
|
||||
try:
|
||||
voltage_mv = float(stripped.split("=")[-1].strip())
|
||||
except ValueError:
|
||||
pass
|
||||
elif '"ExternalConnected"' in stripped:
|
||||
is_charging = "Yes" in stripped
|
||||
|
||||
if is_charging or voltage_mv == 0 or amperage_ma <= 0:
|
||||
return 0.0
|
||||
|
||||
# ioreg reports amperage in mA, voltage in mV
|
||||
return (abs(amperage_ma) * voltage_mv) / 1_000_000
|
||||
|
||||
def _read_cpu_pct(self) -> float:
|
||||
"""Read CPU utilisation from macOS top.
|
||||
|
||||
Returns aggregate CPU% (0–100), or -1.0 on failure.
|
||||
"""
|
||||
result = subprocess.run(
|
||||
["top", "-l", "1", "-n", "0", "-stats", "cpu"],
|
||||
capture_output=True,
|
||||
text=True,
|
||||
timeout=5,
|
||||
)
|
||||
for line in result.stdout.splitlines():
|
||||
if "CPU usage:" in line:
|
||||
# "CPU usage: 12.5% user, 8.3% sys, 79.1% idle"
|
||||
parts = line.split()
|
||||
try:
|
||||
user = float(parts[2].rstrip("%"))
|
||||
sys_ = float(parts[4].rstrip("%"))
|
||||
return user + sys_
|
||||
except (IndexError, ValueError):
|
||||
pass
|
||||
return -1.0
|
||||
|
||||
# ── Helpers ───────────────────────────────────────────────────────────────
|
||||
|
||||
@staticmethod
|
||||
def _model_size_gb(model: str) -> float:
|
||||
"""Look up approximate model size in GB by name substring."""
|
||||
lower = model.lower()
|
||||
# Exact match first
|
||||
if lower in _MODEL_SIZE_GB:
|
||||
return _MODEL_SIZE_GB[lower]
|
||||
# Substring match
|
||||
for key, size in _MODEL_SIZE_GB.items():
|
||||
if key in lower:
|
||||
return size
|
||||
return _DEFAULT_MODEL_SIZE_GB
|
||||
|
||||
def _compute_mean_efficiency_score(self) -> float:
|
||||
"""Mean efficiency score over recent samples, or -1 if none."""
|
||||
if not self._samples:
|
||||
return -1.0
|
||||
recent = list(self._samples)[-10:]
|
||||
return sum(s.efficiency_score for s in recent) / len(recent)
|
||||
|
||||
def _build_recommendation(self, score: float) -> str:
|
||||
"""Generate a human-readable recommendation from the efficiency score."""
|
||||
threshold = getattr(settings, "energy_budget_watts_threshold", 15.0)
|
||||
low_power_model = getattr(settings, "energy_low_power_model", "qwen3:1b")
|
||||
|
||||
if score < 0:
|
||||
return "No inference data yet — run some tasks to populate efficiency metrics."
|
||||
|
||||
if self._low_power_mode:
|
||||
return (
|
||||
f"Low power mode active — routing to {low_power_model}. "
|
||||
"Disable when power draw normalises."
|
||||
)
|
||||
|
||||
if score < 3.0:
|
||||
return (
|
||||
f"Low efficiency (score {score:.1f}/10). "
|
||||
f"Consider enabling low power mode to favour smaller models "
|
||||
f"(threshold: {threshold}W)."
|
||||
)
|
||||
|
||||
if score < 6.0:
|
||||
return f"Moderate efficiency (score {score:.1f}/10). System operating normally."
|
||||
|
||||
return f"Good efficiency (score {score:.1f}/10). No action needed."
|
||||
|
||||
|
||||
# Module-level singleton
|
||||
energy_monitor = EnergyBudgetMonitor()
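To make the scoring concrete: on a fresh monitor with no cached power reading, recording a Qwen3-8B inference at 44 tok/s falls back to the size heuristic (5.5 GB × 2 W/GB = 11 W), which gives 4.0 tok/s per watt and normalises to 8.0 against the 5.0 tok/s-per-watt ceiling — below the default 15 W threshold, so low power mode stays off. A minimal sketch:

```python
# Hedged sketch — worked example of the heuristic path (no live ioreg/top reading cached).
from infrastructure.energy.monitor import EnergyBudgetMonitor

mon = EnergyBudgetMonitor()
sample = mon.record_inference("qwen3:8b", tokens_per_second=44.0)

print(sample.estimated_watts)    # 11.0  → 5.5 GB (lookup table) × 2.0 W/GB heuristic
print(sample.efficiency)         # 4.0   tok/s per watt
print(sample.efficiency_score)   # 8.0   → (4.0 / 5.0 ceiling) × 10, capped at 10
print(mon.low_power_mode)        # False → 11 W is under the default 15 W threshold
```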
|
||||
@@ -2,6 +2,7 @@
|
||||
|
||||
from .api import router
|
||||
from .cascade import CascadeRouter, Provider, ProviderStatus, get_router
|
||||
from .classifier import TaskComplexity, classify_task
|
||||
from .history import HealthHistoryStore, get_history_store
|
||||
from .metabolic import (
|
||||
DEFAULT_TIER_MODELS,
|
||||
@@ -27,4 +28,7 @@ __all__ = [
|
||||
"classify_complexity",
|
||||
"build_prompt",
|
||||
"get_metabolic_router",
|
||||
# Classifier
|
||||
"TaskComplexity",
|
||||
"classify_task",
|
||||
]
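The cascade.py hunk below threads a complexity_hint through CascadeRouter.complete() so callers can pin a request to the fast or the quality tier, or leave the choice to classify_task(). A minimal usage sketch, assuming the local Ollama provider has both Qwen3 models registered as described in the issue #1065 routing notes:

```python
# Hedged sketch — explicit vs. auto complexity routing through the cascade router.
import asyncio

from infrastructure.router import get_router


async def demo() -> None:
    router = get_router()

    # Pinned to the low-latency tier → expected to resolve via the "routine" chain.
    quick = await router.complete(
        [{"role": "user", "content": "Rename the variable `x` to `count` in this function."}],
        complexity_hint="simple",
    )

    # No hint → classify_task() decides; planning-style prompts typically land on "complex".
    plan = await router.complete(
        [{"role": "user", "content": "Draft a step-by-step migration plan for the scheduler."}]
    )

    print(quick["model"], plan["model"])


asyncio.run(demo())
```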
|
||||
|
||||
@@ -16,7 +16,10 @@ from dataclasses import dataclass, field
|
||||
from datetime import UTC, datetime
|
||||
from enum import Enum
|
||||
from pathlib import Path
|
||||
from typing import Any
|
||||
from typing import TYPE_CHECKING, Any
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from infrastructure.router.classifier import TaskComplexity
|
||||
|
||||
from config import settings
|
||||
|
||||
@@ -593,6 +596,34 @@ class CascadeRouter:
|
||||
"is_fallback_model": is_fallback_model,
|
||||
}
|
||||
|
||||
def _get_model_for_complexity(
|
||||
self, provider: Provider, complexity: "TaskComplexity"
|
||||
) -> str | None:
|
||||
"""Return the best model on *provider* for the given complexity tier.
|
||||
|
||||
Checks fallback chains first (routine / complex), then falls back to
|
||||
any model with the matching capability tag, then the provider default.
|
||||
"""
|
||||
from infrastructure.router.classifier import TaskComplexity
|
||||
|
||||
chain_key = "routine" if complexity == TaskComplexity.SIMPLE else "complex"
|
||||
|
||||
# Walk the capability fallback chain — first model present on this provider wins
|
||||
for model_name in self.config.fallback_chains.get(chain_key, []):
|
||||
if any(m["name"] == model_name for m in provider.models):
|
||||
return model_name
|
||||
|
||||
# Direct capability lookup — only return if a model explicitly has the tag
|
||||
# (do not use get_model_with_capability here as it falls back to the default)
|
||||
cap_model = next(
|
||||
(m["name"] for m in provider.models if chain_key in m.get("capabilities", [])),
|
||||
None,
|
||||
)
|
||||
if cap_model:
|
||||
return cap_model
|
||||
|
||||
return None # Caller will use provider default
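For reference, the lookup above first walks a per-complexity fallback chain and only then falls back to capability tags. A hypothetical chain config (the keys `routine` and `complex` are the ones this method reads; the model names here are illustrative) might look like:

```python
# Hypothetical fallback_chains entries read by _get_model_for_complexity().
# SIMPLE tasks resolve via "routine", COMPLEX tasks via "complex"; the first
# chain entry that exists on the provider wins.
fallback_chains = {
    "routine": ["qwen3:8b"],
    "complex": ["qwen3:14b", "qwen3:30b"],
}
```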
|
||||
|
||||
async def complete(
|
||||
self,
|
||||
messages: list[dict],
|
||||
@@ -600,6 +631,7 @@ class CascadeRouter:
|
||||
temperature: float = 0.7,
|
||||
max_tokens: int | None = None,
|
||||
cascade_tier: str | None = None,
|
||||
complexity_hint: str | None = None,
|
||||
) -> dict:
|
||||
"""Complete a chat conversation with automatic failover.
|
||||
|
||||
@@ -608,33 +640,103 @@ class CascadeRouter:
|
||||
- Falls back to vision-capable models when needed
|
||||
- Supports image URLs, paths, and base64 encoding
|
||||
|
||||
Complexity-based routing (issue #1065):
|
||||
- ``complexity_hint="simple"`` → routes to Qwen3-8B (low-latency)
|
||||
- ``complexity_hint="complex"`` → routes to Qwen3-14B (quality)
|
||||
- ``complexity_hint=None`` (default) → auto-classifies from messages
|
||||
|
||||
Args:
|
||||
messages: List of message dicts with role and content
|
||||
model: Preferred model (tries this first, then provider defaults)
|
||||
model: Preferred model (tries this first; complexity routing is
|
||||
skipped when an explicit model is given)
|
||||
temperature: Sampling temperature
|
||||
max_tokens: Maximum tokens to generate
|
||||
cascade_tier: If specified, filters providers by this tier.
|
||||
- "frontier_required": Uses only Anthropic provider for top-tier models.
|
||||
complexity_hint: "simple", "complex", or None (auto-detect).
|
||||
|
||||
Returns:
|
||||
Dict with content, provider_used, and metrics
|
||||
Dict with content, provider, model, latency_ms,
|
||||
is_fallback_model, and complexity fields.
|
||||
|
||||
Raises:
|
||||
RuntimeError: If all providers fail
|
||||
"""
|
||||
from infrastructure.router.classifier import TaskComplexity, classify_task
|
||||
|
||||
content_type = self._detect_content_type(messages)
|
||||
if content_type != ContentType.TEXT:
|
||||
logger.debug("Detected %s content, selecting appropriate model", content_type.value)
|
||||
|
||||
# Resolve task complexity ─────────────────────────────────────────────
|
||||
# Skip complexity routing when caller explicitly specifies a model.
|
||||
complexity: TaskComplexity | None = None
|
||||
if model is None:
|
||||
if complexity_hint is not None:
|
||||
try:
|
||||
complexity = TaskComplexity(complexity_hint.lower())
|
||||
except ValueError:
|
||||
logger.warning("Unknown complexity_hint %r, auto-classifying", complexity_hint)
|
||||
complexity = classify_task(messages)
|
||||
else:
|
||||
complexity = classify_task(messages)
|
||||
logger.debug("Task complexity: %s", complexity.value)
|
||||
|
||||
errors: list[str] = []
|
||||
providers = self._filter_providers(cascade_tier)
|
||||
|
||||
for provider in providers:
|
||||
result = await self._try_single_provider(
|
||||
provider, messages, model, temperature, max_tokens, content_type, errors
|
||||
if not self._is_provider_available(provider):
|
||||
continue
|
||||
|
||||
# Metabolic protocol: skip cloud providers when quota is low
|
||||
if provider.type in ("anthropic", "openai", "grok"):
|
||||
if not self._quota_allows_cloud(provider):
|
||||
logger.info(
|
||||
"Metabolic protocol: skipping cloud provider %s (quota too low)",
|
||||
provider.name,
|
||||
)
|
||||
continue
|
||||
|
||||
# Complexity-based model selection (only when no explicit model) ──
|
||||
effective_model = model
|
||||
if effective_model is None and complexity is not None:
|
||||
effective_model = self._get_model_for_complexity(provider, complexity)
|
||||
if effective_model:
|
||||
logger.debug(
|
||||
"Complexity routing [%s]: %s → %s",
|
||||
complexity.value,
|
||||
provider.name,
|
||||
effective_model,
|
||||
)
|
||||
|
||||
selected_model, is_fallback_model = self._select_model(
|
||||
provider, effective_model, content_type
|
||||
)
|
||||
if result is not None:
|
||||
return result
|
||||
|
||||
try:
|
||||
result = await self._attempt_with_retry(
|
||||
provider,
|
||||
messages,
|
||||
selected_model,
|
||||
temperature,
|
||||
max_tokens,
|
||||
content_type,
|
||||
)
|
||||
except RuntimeError as exc:
|
||||
errors.append(str(exc))
|
||||
self._record_failure(provider)
|
||||
continue
|
||||
|
||||
self._record_success(provider, result.get("latency_ms", 0))
|
||||
return {
|
||||
"content": result["content"],
|
||||
"provider": provider.name,
|
||||
"model": result.get("model", selected_model or provider.get_default_model()),
|
||||
"latency_ms": result.get("latency_ms", 0),
|
||||
"is_fallback_model": is_fallback_model,
|
||||
"complexity": complexity.value if complexity is not None else None,
|
||||
}
|
||||
|
||||
raise RuntimeError(f"All providers failed: {'; '.join(errors)}")
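As a usage sketch (message text is illustrative; `get_router` is re-exported by the package `__init__` changed earlier in this diff), the new `complexity_hint` parameter can be exercised like this:

```python
import asyncio

from infrastructure.router import get_router


async def demo() -> None:
    router = get_router()

    # Explicit hint: force the routine (fast) tier.
    quick = await router.complete(
        messages=[{"role": "user", "content": "list open issues"}],
        complexity_hint="simple",
    )
    print(quick["model"], quick["complexity"])

    # No hint: classify_task() picks the tier from the message content.
    deep = await router.complete(
        messages=[{"role": "user", "content": "Review this module and plan a refactor."}],
    )
    print(deep["model"], deep["complexity"])


asyncio.run(demo())
```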
|
||||
|
||||
|
||||
169
src/infrastructure/router/classifier.py
Normal file
@@ -0,0 +1,169 @@
|
||||
"""Task complexity classifier for Qwen3 dual-model routing.
|
||||
|
||||
Classifies incoming tasks as SIMPLE (route to Qwen3-8B for low-latency)
|
||||
or COMPLEX (route to Qwen3-14B for quality-sensitive work).
|
||||
|
||||
Classification is fully heuristic — no LLM inference required.
|
||||
"""
|
||||
|
||||
import re
|
||||
from enum import Enum
|
||||
|
||||
|
||||
class TaskComplexity(Enum):
|
||||
"""Task complexity tier for model routing."""
|
||||
|
||||
SIMPLE = "simple" # Qwen3-8B Q6_K: routine, latency-sensitive
|
||||
COMPLEX = "complex" # Qwen3-14B Q5_K_M: quality-sensitive, multi-step
|
||||
|
||||
|
||||
# Keywords strongly associated with complex tasks
|
||||
_COMPLEX_KEYWORDS: frozenset[str] = frozenset(
|
||||
[
|
||||
"plan",
|
||||
"review",
|
||||
"analyze",
|
||||
"analyse",
|
||||
"triage",
|
||||
"refactor",
|
||||
"design",
|
||||
"architecture",
|
||||
"implement",
|
||||
"compare",
|
||||
"debug",
|
||||
"explain",
|
||||
"prioritize",
|
||||
"prioritise",
|
||||
"strategy",
|
||||
"optimize",
|
||||
"optimise",
|
||||
"evaluate",
|
||||
"assess",
|
||||
"brainstorm",
|
||||
"outline",
|
||||
"summarize",
|
||||
"summarise",
|
||||
"generate code",
|
||||
"write a",
|
||||
"write the",
|
||||
"code review",
|
||||
"pull request",
|
||||
"multi-step",
|
||||
"multi step",
|
||||
"step by step",
|
||||
"backlog prioriti",
|
||||
"issue triage",
|
||||
"root cause",
|
||||
"how does",
|
||||
"why does",
|
||||
"what are the",
|
||||
]
|
||||
)
|
||||
|
||||
# Keywords strongly associated with simple/routine tasks
|
||||
_SIMPLE_KEYWORDS: frozenset[str] = frozenset(
|
||||
[
|
||||
"status",
|
||||
"list ",
|
||||
"show ",
|
||||
"what is",
|
||||
"how many",
|
||||
"ping",
|
||||
"run ",
|
||||
"execute ",
|
||||
"ls ",
|
||||
"cat ",
|
||||
"ps ",
|
||||
"fetch ",
|
||||
"count ",
|
||||
"tail ",
|
||||
"head ",
|
||||
"grep ",
|
||||
"find file",
|
||||
"read file",
|
||||
"get ",
|
||||
"query ",
|
||||
"check ",
|
||||
"yes",
|
||||
"no",
|
||||
"ok",
|
||||
"done",
|
||||
"thanks",
|
||||
]
|
||||
)
|
||||
|
||||
# Content longer than this is treated as complex regardless of keywords
|
||||
_COMPLEX_CHAR_THRESHOLD = 500
|
||||
|
||||
# Short content defaults to simple
|
||||
_SIMPLE_CHAR_THRESHOLD = 150
|
||||
|
||||
# More than this many messages suggests an ongoing complex conversation
|
||||
_COMPLEX_CONVERSATION_DEPTH = 6
|
||||
|
||||
|
||||
def classify_task(messages: list[dict]) -> TaskComplexity:
|
||||
"""Classify task complexity from a list of messages.
|
||||
|
||||
Uses heuristic rules — no LLM call required. Errs toward COMPLEX
|
||||
when uncertain so that quality is preserved.
|
||||
|
||||
Args:
|
||||
messages: List of message dicts with ``role`` and ``content`` keys.
|
||||
|
||||
Returns:
|
||||
TaskComplexity.SIMPLE or TaskComplexity.COMPLEX
|
||||
"""
|
||||
if not messages:
|
||||
return TaskComplexity.SIMPLE
|
||||
|
||||
# Concatenate all user-turn content for analysis
|
||||
user_content = (
|
||||
" ".join(
|
||||
msg.get("content", "")
|
||||
for msg in messages
|
||||
if msg.get("role") in ("user", "human") and isinstance(msg.get("content"), str)
|
||||
)
|
||||
.lower()
|
||||
.strip()
|
||||
)
|
||||
|
||||
if not user_content:
|
||||
return TaskComplexity.SIMPLE
|
||||
|
||||
# Complexity signals override everything -----------------------------------
|
||||
|
||||
# Explicit complex keywords
|
||||
for kw in _COMPLEX_KEYWORDS:
|
||||
if kw in user_content:
|
||||
return TaskComplexity.COMPLEX
|
||||
|
||||
# Numbered / multi-step instruction list: "1. do this 2. do that"
|
||||
if re.search(r"\b\d+\.\s+\w", user_content):
|
||||
return TaskComplexity.COMPLEX
|
||||
|
||||
# Code blocks embedded in messages
|
||||
if "```" in user_content:
|
||||
return TaskComplexity.COMPLEX
|
||||
|
||||
# Long content → complex reasoning likely required
|
||||
if len(user_content) > _COMPLEX_CHAR_THRESHOLD:
|
||||
return TaskComplexity.COMPLEX
|
||||
|
||||
# Deep conversation → complex ongoing task
|
||||
if len(messages) > _COMPLEX_CONVERSATION_DEPTH:
|
||||
return TaskComplexity.COMPLEX
|
||||
|
||||
# Simplicity signals -------------------------------------------------------
|
||||
|
||||
# Explicit simple keywords
|
||||
for kw in _SIMPLE_KEYWORDS:
|
||||
if kw in user_content:
|
||||
return TaskComplexity.SIMPLE
|
||||
|
||||
# Short single-sentence messages default to simple
|
||||
if len(user_content) <= _SIMPLE_CHAR_THRESHOLD:
|
||||
return TaskComplexity.SIMPLE
|
||||
|
||||
# When uncertain, prefer quality (complex model)
|
||||
return TaskComplexity.COMPLEX
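A quick sanity check of the heuristics above; both calls rely only on the keyword tables and thresholds defined in this file:

```python
from infrastructure.router.classifier import TaskComplexity, classify_task

# "list " is a simple keyword and no complex keyword matches this message.
assert classify_task([{"role": "user", "content": "list open issues"}]) is TaskComplexity.SIMPLE

# "review" and "refactor" are complex keywords, so this routes to the quality tier.
msg = [{"role": "user", "content": "Review this module and suggest a refactor"}]
assert classify_task(msg) is TaskComplexity.COMPLEX
```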
|
||||
247
src/infrastructure/self_correction.py
Normal file
@@ -0,0 +1,247 @@
|
||||
"""Self-correction event logger.
|
||||
|
||||
Records instances where the agent detected its own errors and the steps
|
||||
it took to correct them. Used by the Self-Correction Dashboard to visualise
|
||||
these events and surface recurring failure patterns.
|
||||
|
||||
Usage::
|
||||
|
||||
from infrastructure.self_correction import log_self_correction, get_corrections, get_patterns
|
||||
|
||||
log_self_correction(
|
||||
source="agentic_loop",
|
||||
original_intent="Execute step 3: deploy service",
|
||||
detected_error="ConnectionRefusedError: port 8080 unavailable",
|
||||
correction_strategy="Retry on alternate port 8081",
|
||||
final_outcome="Success on retry",
|
||||
task_id="abc123",
|
||||
)
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import json
|
||||
import logging
|
||||
import sqlite3
|
||||
import uuid
|
||||
from collections.abc import Generator
|
||||
from contextlib import closing, contextmanager
|
||||
from datetime import UTC, datetime
|
||||
from pathlib import Path
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Database
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
_DB_PATH: Path | None = None
|
||||
|
||||
|
||||
def _get_db_path() -> Path:
|
||||
global _DB_PATH
|
||||
if _DB_PATH is None:
|
||||
from config import settings
|
||||
|
||||
_DB_PATH = Path(settings.repo_root) / "data" / "self_correction.db"
|
||||
return _DB_PATH
|
||||
|
||||
|
||||
@contextmanager
|
||||
def _get_db() -> Generator[sqlite3.Connection, None, None]:
|
||||
db_path = _get_db_path()
|
||||
db_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
with closing(sqlite3.connect(str(db_path))) as conn:
|
||||
conn.row_factory = sqlite3.Row
|
||||
conn.execute("""
|
||||
CREATE TABLE IF NOT EXISTS self_correction_events (
|
||||
id TEXT PRIMARY KEY,
|
||||
source TEXT NOT NULL,
|
||||
task_id TEXT DEFAULT '',
|
||||
original_intent TEXT NOT NULL,
|
||||
detected_error TEXT NOT NULL,
|
||||
correction_strategy TEXT NOT NULL,
|
||||
final_outcome TEXT NOT NULL,
|
||||
outcome_status TEXT DEFAULT 'success',
|
||||
error_type TEXT DEFAULT '',
|
||||
created_at TEXT DEFAULT (datetime('now'))
|
||||
)
|
||||
""")
|
||||
conn.execute(
|
||||
"CREATE INDEX IF NOT EXISTS idx_sc_created ON self_correction_events(created_at)"
|
||||
)
|
||||
conn.execute(
|
||||
"CREATE INDEX IF NOT EXISTS idx_sc_error_type ON self_correction_events(error_type)"
|
||||
)
|
||||
conn.commit()
|
||||
yield conn
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Write
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def log_self_correction(
|
||||
*,
|
||||
source: str,
|
||||
original_intent: str,
|
||||
detected_error: str,
|
||||
correction_strategy: str,
|
||||
final_outcome: str,
|
||||
task_id: str = "",
|
||||
outcome_status: str = "success",
|
||||
error_type: str = "",
|
||||
) -> str:
|
||||
"""Record a self-correction event and return its ID.
|
||||
|
||||
Args:
|
||||
source: Module or component that triggered the correction.
|
||||
original_intent: What the agent was trying to do.
|
||||
detected_error: The error or problem that was detected.
|
||||
correction_strategy: How the agent attempted to correct the error.
|
||||
final_outcome: What the result of the correction attempt was.
|
||||
task_id: Optional task/session ID for correlation.
|
||||
outcome_status: 'success', 'partial', or 'failed'.
|
||||
error_type: Short category label for pattern analysis (e.g.
|
||||
'ConnectionError', 'TimeoutError').
|
||||
|
||||
Returns:
|
||||
The ID of the newly created record.
|
||||
"""
|
||||
event_id = str(uuid.uuid4())
|
||||
if not error_type:
|
||||
# Derive a simple type from the text before the first colon (e.g. the exception class name)
|
||||
error_type = detected_error.split(":")[0].strip()[:64]
|
||||
|
||||
try:
|
||||
with _get_db() as conn:
|
||||
conn.execute(
|
||||
"""
|
||||
INSERT INTO self_correction_events
|
||||
(id, source, task_id, original_intent, detected_error,
|
||||
correction_strategy, final_outcome, outcome_status, error_type)
|
||||
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
|
||||
""",
|
||||
(
|
||||
event_id,
|
||||
source,
|
||||
task_id,
|
||||
original_intent[:2000],
|
||||
detected_error[:2000],
|
||||
correction_strategy[:2000],
|
||||
final_outcome[:2000],
|
||||
outcome_status,
|
||||
error_type,
|
||||
),
|
||||
)
|
||||
conn.commit()
|
||||
logger.info(
|
||||
"Self-correction logged [%s] source=%s error_type=%s status=%s",
|
||||
event_id[:8],
|
||||
source,
|
||||
error_type,
|
||||
outcome_status,
|
||||
)
|
||||
except Exception as exc:
|
||||
logger.warning("Failed to log self-correction event: %s", exc)
|
||||
|
||||
return event_id
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Read
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def get_corrections(limit: int = 50) -> list[dict]:
|
||||
"""Return the most recent self-correction events, newest first."""
|
||||
try:
|
||||
with _get_db() as conn:
|
||||
rows = conn.execute(
|
||||
"""
|
||||
SELECT * FROM self_correction_events
|
||||
ORDER BY created_at DESC
|
||||
LIMIT ?
|
||||
""",
|
||||
(limit,),
|
||||
).fetchall()
|
||||
return [dict(r) for r in rows]
|
||||
except Exception as exc:
|
||||
logger.warning("Failed to fetch self-correction events: %s", exc)
|
||||
return []
|
||||
|
||||
|
||||
def get_patterns(top_n: int = 10) -> list[dict]:
|
||||
"""Return the most common recurring error types with counts.
|
||||
|
||||
Each entry has:
|
||||
- error_type: category label
|
||||
- count: total occurrences
|
||||
- success_count: corrected successfully
|
||||
- failed_count: correction also failed
|
||||
- last_seen: ISO timestamp of most recent occurrence
|
||||
"""
|
||||
try:
|
||||
with _get_db() as conn:
|
||||
rows = conn.execute(
|
||||
"""
|
||||
SELECT
|
||||
error_type,
|
||||
COUNT(*) AS count,
|
||||
SUM(CASE WHEN outcome_status = 'success' THEN 1 ELSE 0 END) AS success_count,
|
||||
SUM(CASE WHEN outcome_status = 'failed' THEN 1 ELSE 0 END) AS failed_count,
|
||||
MAX(created_at) AS last_seen
|
||||
FROM self_correction_events
|
||||
GROUP BY error_type
|
||||
ORDER BY count DESC
|
||||
LIMIT ?
|
||||
""",
|
||||
(top_n,),
|
||||
).fetchall()
|
||||
return [dict(r) for r in rows]
|
||||
except Exception as exc:
|
||||
logger.warning("Failed to fetch self-correction patterns: %s", exc)
|
||||
return []
|
||||
|
||||
|
||||
def get_stats() -> dict:
|
||||
"""Return aggregate statistics for the summary panel."""
|
||||
try:
|
||||
with _get_db() as conn:
|
||||
row = conn.execute(
|
||||
"""
|
||||
SELECT
|
||||
COUNT(*) AS total,
|
||||
SUM(CASE WHEN outcome_status = 'success' THEN 1 ELSE 0 END) AS success_count,
|
||||
SUM(CASE WHEN outcome_status = 'partial' THEN 1 ELSE 0 END) AS partial_count,
|
||||
SUM(CASE WHEN outcome_status = 'failed' THEN 1 ELSE 0 END) AS failed_count,
|
||||
COUNT(DISTINCT error_type) AS unique_error_types,
|
||||
COUNT(DISTINCT source) AS sources
|
||||
FROM self_correction_events
|
||||
"""
|
||||
).fetchone()
|
||||
if row is None:
|
||||
return _empty_stats()
|
||||
d = dict(row)
|
||||
total = d.get("total") or 0
|
||||
if total:
|
||||
d["success_rate"] = round((d.get("success_count") or 0) / total * 100)
|
||||
else:
|
||||
d["success_rate"] = 0
|
||||
return d
|
||||
except Exception as exc:
|
||||
logger.warning("Failed to fetch self-correction stats: %s", exc)
|
||||
return _empty_stats()
|
||||
|
||||
|
||||
def _empty_stats() -> dict:
|
||||
return {
|
||||
"total": 0,
|
||||
"success_count": 0,
|
||||
"partial_count": 0,
|
||||
"failed_count": 0,
|
||||
"unique_error_types": 0,
|
||||
"sources": 0,
|
||||
"success_rate": 0,
|
||||
}
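The read helpers can be combined into a quick report. A minimal sketch; the keys used here match the table columns and aggregates defined above:

```python
from infrastructure.self_correction import get_corrections, get_patterns, get_stats

stats = get_stats()
print(f"{stats['total']} events, {stats['success_rate']}% corrected successfully")

# Recurring failure categories, most frequent first.
for pattern in get_patterns(top_n=5):
    print(pattern["error_type"], pattern["count"], pattern["last_seen"])

# Most recent individual events.
for event in get_corrections(limit=3):
    print(event["created_at"], event["source"], event["outcome_status"])
```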
|
||||
7
src/self_coding/__init__.py
Normal file
@@ -0,0 +1,7 @@
"""Self-coding package — Timmy's self-modification capability.

Provides the branch→edit→test→commit/revert loop that allows Timmy
to propose and apply code changes autonomously, gated by the test suite.

Main entry point: ``self_coding.self_modify.loop``
"""
129
src/self_coding/gitea_client.py
Normal file
@@ -0,0 +1,129 @@
|
||||
"""Gitea REST client — thin wrapper for PR creation and issue commenting.
|
||||
|
||||
Uses ``settings.gitea_url``, ``settings.gitea_token``, and
|
||||
``settings.gitea_repo`` (owner/repo) from config. Degrades gracefully
|
||||
when the token is absent or the server is unreachable.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import logging
|
||||
from dataclasses import dataclass
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
@dataclass
|
||||
class PullRequest:
|
||||
"""Minimal representation of a created pull request."""
|
||||
|
||||
number: int
|
||||
title: str
|
||||
html_url: str
|
||||
|
||||
|
||||
class GiteaClient:
|
||||
"""HTTP client for Gitea's REST API v1.
|
||||
|
||||
All methods return structured results and never raise — errors are
|
||||
logged at WARNING level and indicated via return value.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
base_url: str | None = None,
|
||||
token: str | None = None,
|
||||
repo: str | None = None,
|
||||
) -> None:
|
||||
from config import settings
|
||||
|
||||
self._base_url = (base_url or settings.gitea_url).rstrip("/")
|
||||
self._token = token or settings.gitea_token
|
||||
self._repo = repo or settings.gitea_repo
|
||||
|
||||
# ── internal ────────────────────────────────────────────────────────────
|
||||
|
||||
def _headers(self) -> dict[str, str]:
|
||||
return {
|
||||
"Authorization": f"token {self._token}",
|
||||
"Content-Type": "application/json",
|
||||
}
|
||||
|
||||
def _api(self, path: str) -> str:
|
||||
return f"{self._base_url}/api/v1/{path.lstrip('/')}"
|
||||
|
||||
# ── public API ───────────────────────────────────────────────────────────
|
||||
|
||||
def create_pull_request(
|
||||
self,
|
||||
title: str,
|
||||
body: str,
|
||||
head: str,
|
||||
base: str = "main",
|
||||
) -> PullRequest | None:
|
||||
"""Open a pull request.
|
||||
|
||||
Args:
|
||||
title: PR title (keep under 70 chars).
|
||||
body: PR body in markdown.
|
||||
head: Source branch (e.g. ``self-modify/issue-983``).
|
||||
base: Target branch (default ``main``).
|
||||
|
||||
Returns:
|
||||
A ``PullRequest`` dataclass on success, ``None`` on failure.
|
||||
"""
|
||||
if not self._token:
|
||||
logger.warning("Gitea token not configured — skipping PR creation")
|
||||
return None
|
||||
|
||||
try:
|
||||
import requests as _requests
|
||||
|
||||
resp = _requests.post(
|
||||
self._api(f"repos/{self._repo}/pulls"),
|
||||
headers=self._headers(),
|
||||
json={"title": title, "body": body, "head": head, "base": base},
|
||||
timeout=15,
|
||||
)
|
||||
resp.raise_for_status()
|
||||
data = resp.json()
|
||||
pr = PullRequest(
|
||||
number=data["number"],
|
||||
title=data["title"],
|
||||
html_url=data["html_url"],
|
||||
)
|
||||
logger.info("PR #%d created: %s", pr.number, pr.html_url)
|
||||
return pr
|
||||
except Exception as exc:
|
||||
logger.warning("Failed to create PR: %s", exc)
|
||||
return None
|
||||
|
||||
def add_issue_comment(self, issue_number: int, body: str) -> bool:
|
||||
"""Post a comment on an issue or PR.
|
||||
|
||||
Returns:
|
||||
True on success, False on failure.
|
||||
"""
|
||||
if not self._token:
|
||||
logger.warning("Gitea token not configured — skipping issue comment")
|
||||
return False
|
||||
|
||||
try:
|
||||
import requests as _requests
|
||||
|
||||
resp = _requests.post(
|
||||
self._api(f"repos/{self._repo}/issues/{issue_number}/comments"),
|
||||
headers=self._headers(),
|
||||
json={"body": body},
|
||||
timeout=15,
|
||||
)
|
||||
resp.raise_for_status()
|
||||
logger.info("Comment posted on issue #%d", issue_number)
|
||||
return True
|
||||
except Exception as exc:
|
||||
logger.warning("Failed to post comment on issue #%d: %s", issue_number, exc)
|
||||
return False
|
||||
|
||||
|
||||
# Module-level singleton
|
||||
gitea_client = GiteaClient()
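A minimal sketch of driving the client via the module-level singleton. The branch name and PR text are illustrative; with no token configured both calls degrade to a warning and a falsy return:

```python
from self_coding.gitea_client import gitea_client

pr = gitea_client.create_pull_request(
    title="[self-modify] Add hello() convenience tool",
    body="## Summary\nAdds a hello() helper.",
    head="self-modify/add-hello-tool",
    base="main",
)
if pr is not None:
    gitea_client.add_issue_comment(pr.number, f"Opened {pr.html_url} for review.")
else:
    print("PR creation skipped or failed; see warnings in the log.")
```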
|
||||
1
src/self_coding/self_modify/__init__.py
Normal file
@@ -0,0 +1 @@
"""Self-modification loop sub-package."""
301
src/self_coding/self_modify/loop.py
Normal file
@@ -0,0 +1,301 @@
|
||||
"""Self-modification loop — branch → edit → test → commit/revert.
|
||||
|
||||
Timmy's self-coding capability, restored after deletion in
|
||||
Operation Darling Purge (commit 584eeb679e88).
|
||||
|
||||
## Cycle
|
||||
1. **Branch** — create ``self-modify/<slug>`` from ``main``
|
||||
2. **Edit** — apply the proposed change (patch string or callable)
|
||||
3. **Test** — run ``pytest tests/ -x -q``; never commit on failure
|
||||
4. **Commit** — stage and commit on green; revert branch on red
|
||||
5. **PR** — open a Gitea pull request (changes never push directly to ``main``)
|
||||
|
||||
## Guards
|
||||
- Never push directly to ``main`` or ``master``
|
||||
- All changes land via PR (enforced by ``_guard_branch``)
|
||||
- Test gate is mandatory; ``skip_tests=True`` is for unit-test use only
|
||||
- Commits only happen when ``pytest tests/ -x -q`` exits 0
|
||||
|
||||
## Usage::
|
||||
|
||||
from self_coding.self_modify.loop import SelfModifyLoop
|
||||
|
||||
loop = SelfModifyLoop()
|
||||
result = await loop.run(
|
||||
slug="add-hello-tool",
|
||||
description="Add hello() convenience tool",
|
||||
edit_fn=my_edit_function, # callable(repo_root: str) -> None
|
||||
)
|
||||
if result.success:
|
||||
print(f"PR: {result.pr_url}")
|
||||
else:
|
||||
print(f"Failed: {result.error}")
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import logging
|
||||
import subprocess
|
||||
import time
|
||||
from collections.abc import Callable
|
||||
from dataclasses import dataclass, field
|
||||
from pathlib import Path
|
||||
|
||||
from config import settings
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Branches that must never receive direct commits
|
||||
_PROTECTED_BRANCHES = frozenset({"main", "master", "develop"})
|
||||
|
||||
# Test command used as the commit gate
|
||||
_TEST_COMMAND = ["pytest", "tests/", "-x", "-q", "--tb=short"]
|
||||
|
||||
# Max time (seconds) to wait for the test suite
|
||||
_TEST_TIMEOUT = 300
|
||||
|
||||
|
||||
@dataclass
|
||||
class LoopResult:
|
||||
"""Result from one self-modification cycle."""
|
||||
|
||||
success: bool
|
||||
branch: str = ""
|
||||
commit_sha: str = ""
|
||||
pr_url: str = ""
|
||||
pr_number: int = 0
|
||||
test_output: str = ""
|
||||
error: str = ""
|
||||
elapsed_ms: float = 0.0
|
||||
metadata: dict = field(default_factory=dict)
|
||||
|
||||
|
||||
class SelfModifyLoop:
|
||||
"""Orchestrate branch → edit → test → commit/revert → PR.
|
||||
|
||||
Args:
|
||||
repo_root: Absolute path to the git repository (defaults to
|
||||
``settings.repo_root``).
|
||||
remote: Git remote name (default ``origin``).
|
||||
base_branch: Branch to fork from and target for the PR
|
||||
(default ``main``).
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
repo_root: str | None = None,
|
||||
remote: str = "origin",
|
||||
base_branch: str = "main",
|
||||
) -> None:
|
||||
self._repo_root = Path(repo_root or settings.repo_root)
|
||||
self._remote = remote
|
||||
self._base_branch = base_branch
|
||||
|
||||
# ── public ──────────────────────────────────────────────────────────────
|
||||
|
||||
async def run(
|
||||
self,
|
||||
slug: str,
|
||||
description: str,
|
||||
edit_fn: Callable[[str], None],
|
||||
issue_number: int | None = None,
|
||||
skip_tests: bool = False,
|
||||
) -> LoopResult:
|
||||
"""Execute one full self-modification cycle.
|
||||
|
||||
Args:
|
||||
slug: Short identifier used for the branch name
|
||||
(e.g. ``"add-hello-tool"``).
|
||||
description: Human-readable description for commit message
|
||||
and PR body.
|
||||
edit_fn: Callable that receives the repo root path (str)
|
||||
and applies the desired code changes in-place.
|
||||
issue_number: Optional Gitea issue number to reference in PR.
|
||||
skip_tests: If ``True``, skip the test gate (unit-test use
|
||||
only — never use in production).
|
||||
|
||||
Returns:
|
||||
:class:`LoopResult` describing the outcome.
|
||||
"""
|
||||
start = time.time()
|
||||
branch = f"self-modify/{slug}"
|
||||
|
||||
try:
|
||||
self._guard_branch(branch)
|
||||
self._checkout_base()
|
||||
self._create_branch(branch)
|
||||
|
||||
try:
|
||||
edit_fn(str(self._repo_root))
|
||||
except Exception as exc:
|
||||
self._revert_branch(branch)
|
||||
return LoopResult(
|
||||
success=False,
|
||||
branch=branch,
|
||||
error=f"edit_fn raised: {exc}",
|
||||
elapsed_ms=self._elapsed(start),
|
||||
)
|
||||
|
||||
if not skip_tests:
|
||||
test_output, passed = self._run_tests()
|
||||
if not passed:
|
||||
self._revert_branch(branch)
|
||||
return LoopResult(
|
||||
success=False,
|
||||
branch=branch,
|
||||
test_output=test_output,
|
||||
error="Tests failed — branch reverted",
|
||||
elapsed_ms=self._elapsed(start),
|
||||
)
|
||||
else:
|
||||
test_output = "(tests skipped)"
|
||||
|
||||
sha = self._commit_all(description)
|
||||
self._push_branch(branch)
|
||||
|
||||
pr = self._create_pr(
|
||||
branch=branch,
|
||||
description=description,
|
||||
test_output=test_output,
|
||||
issue_number=issue_number,
|
||||
)
|
||||
|
||||
return LoopResult(
|
||||
success=True,
|
||||
branch=branch,
|
||||
commit_sha=sha,
|
||||
pr_url=pr.html_url if pr else "",
|
||||
pr_number=pr.number if pr else 0,
|
||||
test_output=test_output,
|
||||
elapsed_ms=self._elapsed(start),
|
||||
)
|
||||
|
||||
except Exception as exc:
|
||||
logger.warning("Self-modify loop failed: %s", exc)
|
||||
return LoopResult(
|
||||
success=False,
|
||||
branch=branch,
|
||||
error=str(exc),
|
||||
elapsed_ms=self._elapsed(start),
|
||||
)
|
||||
|
||||
# ── private helpers ──────────────────────────────────────────────────────
|
||||
|
||||
@staticmethod
|
||||
def _elapsed(start: float) -> float:
|
||||
return (time.time() - start) * 1000
|
||||
|
||||
def _git(self, *args: str, check: bool = True) -> subprocess.CompletedProcess:
|
||||
"""Run a git command in the repo root."""
|
||||
cmd = ["git", *args]
|
||||
logger.debug("git %s", " ".join(args))
|
||||
return subprocess.run(
|
||||
cmd,
|
||||
cwd=str(self._repo_root),
|
||||
capture_output=True,
|
||||
text=True,
|
||||
check=check,
|
||||
)
|
||||
|
||||
def _guard_branch(self, branch: str) -> None:
|
||||
"""Raise if the target branch is a protected branch name."""
|
||||
if branch in _PROTECTED_BRANCHES:
|
||||
raise ValueError(
|
||||
f"Refusing to operate on protected branch '{branch}'. "
|
||||
"All self-modifications must go via PR."
|
||||
)
|
||||
|
||||
def _checkout_base(self) -> None:
|
||||
"""Checkout the base branch and pull latest."""
|
||||
self._git("checkout", self._base_branch)
|
||||
# Best-effort pull; ignore failures (e.g. no remote configured)
|
||||
self._git("pull", self._remote, self._base_branch, check=False)
|
||||
|
||||
def _create_branch(self, branch: str) -> None:
|
||||
"""Create and checkout a new branch, deleting an old one if needed."""
|
||||
# Delete local branch if it already exists (stale prior attempt)
|
||||
self._git("branch", "-D", branch, check=False)
|
||||
self._git("checkout", "-b", branch)
|
||||
logger.info("Created branch: %s", branch)
|
||||
|
||||
def _revert_branch(self, branch: str) -> None:
|
||||
"""Checkout base and delete the failed branch."""
|
||||
try:
|
||||
self._git("checkout", self._base_branch, check=False)
|
||||
self._git("branch", "-D", branch, check=False)
|
||||
logger.info("Reverted and deleted branch: %s", branch)
|
||||
except Exception as exc:
|
||||
logger.warning("Failed to revert branch %s: %s", branch, exc)
|
||||
|
||||
def _run_tests(self) -> tuple[str, bool]:
|
||||
"""Run the test suite. Returns (output, passed)."""
|
||||
logger.info("Running test suite: %s", " ".join(_TEST_COMMAND))
|
||||
try:
|
||||
result = subprocess.run(
|
||||
_TEST_COMMAND,
|
||||
cwd=str(self._repo_root),
|
||||
capture_output=True,
|
||||
text=True,
|
||||
timeout=_TEST_TIMEOUT,
|
||||
)
|
||||
output = (result.stdout + "\n" + result.stderr).strip()
|
||||
passed = result.returncode == 0
|
||||
logger.info(
|
||||
"Test suite %s (exit %d)", "PASSED" if passed else "FAILED", result.returncode
|
||||
)
|
||||
return output, passed
|
||||
except subprocess.TimeoutExpired:
|
||||
msg = f"Test suite timed out after {_TEST_TIMEOUT}s"
|
||||
logger.warning(msg)
|
||||
return msg, False
|
||||
except FileNotFoundError:
|
||||
msg = "pytest not found on PATH"
|
||||
logger.warning(msg)
|
||||
return msg, False
|
||||
|
||||
def _commit_all(self, message: str) -> str:
|
||||
"""Stage all changes and create a commit. Returns the new SHA."""
|
||||
self._git("add", "-A")
|
||||
self._git("commit", "-m", message)
|
||||
result = self._git("rev-parse", "HEAD")
|
||||
sha = result.stdout.strip()
|
||||
logger.info("Committed: %s sha=%s", message[:60], sha[:12])
|
||||
return sha
|
||||
|
||||
def _push_branch(self, branch: str) -> None:
|
||||
"""Push the branch to the remote."""
|
||||
self._git("push", "-u", self._remote, branch)
|
||||
logger.info("Pushed branch: %s -> %s", branch, self._remote)
|
||||
|
||||
def _create_pr(
|
||||
self,
|
||||
branch: str,
|
||||
description: str,
|
||||
test_output: str,
|
||||
issue_number: int | None,
|
||||
):
|
||||
"""Open a Gitea PR. Returns PullRequest or None on failure."""
|
||||
from self_coding.gitea_client import GiteaClient
|
||||
|
||||
client = GiteaClient()
|
||||
|
||||
issue_ref = f"\n\nFixes #{issue_number}" if issue_number else ""
|
||||
test_section = (
|
||||
f"\n\n## Test results\n```\n{test_output[:2000]}\n```"
|
||||
if test_output and test_output != "(tests skipped)"
|
||||
else ""
|
||||
)
|
||||
|
||||
body = (
|
||||
f"## Summary\n{description}"
|
||||
f"{issue_ref}"
|
||||
f"{test_section}"
|
||||
"\n\n🤖 Generated by Timmy's self-modification loop"
|
||||
)
|
||||
|
||||
return client.create_pull_request(
|
||||
title=f"[self-modify] {description[:60]}",
|
||||
body=body,
|
||||
head=branch,
|
||||
base=self._base_branch,
|
||||
)
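Building on the module docstring's usage block, here is a minimal sketch of an ``edit_fn`` and a single loop run. The target path and slug are purely illustrative:

```python
import asyncio
from pathlib import Path

from self_coding.self_modify.loop import SelfModifyLoop


def add_greeting_tool(repo_root: str) -> None:
    """Illustrative edit: drop a tiny helper module into the repo."""
    target = Path(repo_root) / "src" / "timmy" / "hello_tool.py"
    target.write_text('def hello() -> str:\n    return "hello from Timmy"\n')


result = asyncio.run(
    SelfModifyLoop().run(
        slug="add-hello-tool",
        description="Add hello() convenience tool",
        edit_fn=add_greeting_tool,
    )
)
if result.success:
    print("PR:", result.pr_url)
else:
    print("Failed:", result.error)
```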
|
||||
@@ -312,6 +312,13 @@ async def _handle_step_failure(
|
||||
"adaptation": step.result[:200],
|
||||
},
|
||||
)
|
||||
_log_self_correction(
|
||||
task_id=task_id,
|
||||
step_desc=step_desc,
|
||||
exc=exc,
|
||||
outcome=step.result,
|
||||
outcome_status="success",
|
||||
)
|
||||
if on_progress:
|
||||
await on_progress(f"[Adapted] {step_desc}", step_num, total_steps)
|
||||
except Exception as adapt_exc: # broad catch intentional
|
||||
@@ -325,9 +332,42 @@ async def _handle_step_failure(
|
||||
duration_ms=int((time.monotonic() - step_start) * 1000),
|
||||
)
|
||||
)
|
||||
_log_self_correction(
|
||||
task_id=task_id,
|
||||
step_desc=step_desc,
|
||||
exc=exc,
|
||||
outcome=f"Adaptation also failed: {adapt_exc}",
|
||||
outcome_status="failed",
|
||||
)
|
||||
completed_results.append(f"Step {step_num}: FAILED")
|
||||
|
||||
|
||||
def _log_self_correction(
|
||||
*,
|
||||
task_id: str,
|
||||
step_desc: str,
|
||||
exc: Exception,
|
||||
outcome: str,
|
||||
outcome_status: str,
|
||||
) -> None:
|
||||
"""Best-effort: log a self-correction event (never raises)."""
|
||||
try:
|
||||
from infrastructure.self_correction import log_self_correction
|
||||
|
||||
log_self_correction(
|
||||
source="agentic_loop",
|
||||
original_intent=step_desc,
|
||||
detected_error=f"{type(exc).__name__}: {exc}",
|
||||
correction_strategy="Adaptive re-plan via LLM",
|
||||
final_outcome=outcome[:500],
|
||||
task_id=task_id,
|
||||
outcome_status=outcome_status,
|
||||
error_type=type(exc).__name__,
|
||||
)
|
||||
except Exception as log_exc:
|
||||
logger.debug("Self-correction log failed: %s", log_exc)
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Core loop
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
@@ -8,7 +8,7 @@ Flow:
|
||||
1. prepare_experiment — clone repo + run data prep
|
||||
2. run_experiment — execute train.py with wall-clock timeout
|
||||
3. evaluate_result — compare metric against baseline
|
||||
4. experiment_loop — orchestrate the full cycle
|
||||
4. SystemExperiment — orchestrate the full cycle via class interface
|
||||
|
||||
All subprocess calls are guarded with timeouts for graceful degradation.
|
||||
"""
|
||||
@@ -17,9 +17,12 @@ from __future__ import annotations
|
||||
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
import platform
|
||||
import re
|
||||
import subprocess
|
||||
import time
|
||||
from collections.abc import Callable
|
||||
from pathlib import Path
|
||||
from typing import Any
|
||||
|
||||
@@ -29,15 +32,61 @@ DEFAULT_REPO = "https://github.com/karpathy/autoresearch.git"
|
||||
_METRIC_RE = re.compile(r"val_bpb[:\s]+([0-9]+\.?[0-9]*)")
|
||||
|
||||
|
||||
# ── Higher-is-better metric names ────────────────────────────────────────────
|
||||
_HIGHER_IS_BETTER = frozenset({"unit_pass_rate", "coverage"})
|
||||
|
||||
|
||||
def is_apple_silicon() -> bool:
|
||||
"""Return True when running on Apple Silicon (M-series chip)."""
|
||||
return platform.system() == "Darwin" and platform.machine() == "arm64"
|
||||
|
||||
|
||||
def _build_experiment_env(
|
||||
dataset: str = "tinystories",
|
||||
backend: str = "auto",
|
||||
) -> dict[str, str]:
|
||||
"""Build environment variables for an autoresearch subprocess.
|
||||
|
||||
Args:
|
||||
dataset: Dataset name forwarded as ``AUTORESEARCH_DATASET``.
|
||||
``"tinystories"`` is recommended for Apple Silicon (lower entropy,
|
||||
faster iteration).
|
||||
backend: Inference backend forwarded as ``AUTORESEARCH_BACKEND``.
|
||||
``"auto"`` enables MLX on Apple Silicon; ``"cpu"`` forces CPU.
|
||||
|
||||
Returns:
|
||||
Merged environment dict (inherits current process env).
|
||||
"""
|
||||
env = os.environ.copy()
|
||||
env["AUTORESEARCH_DATASET"] = dataset
|
||||
|
||||
if backend == "auto":
|
||||
env["AUTORESEARCH_BACKEND"] = "mlx" if is_apple_silicon() else "cuda"
|
||||
else:
|
||||
env["AUTORESEARCH_BACKEND"] = backend
|
||||
|
||||
return env
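For illustration, calling the helper on an M-series Mac resolves the ``"auto"`` backend to MLX; the returned dict also inherits the current process environment:

```python
env = _build_experiment_env(dataset="tinystories", backend="auto")
print(env["AUTORESEARCH_DATASET"])   # "tinystories"
print(env["AUTORESEARCH_BACKEND"])   # "mlx" on Apple Silicon, "cuda" elsewhere
```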
|
||||
|
||||
|
||||
def prepare_experiment(
|
||||
workspace: Path,
|
||||
repo_url: str = DEFAULT_REPO,
|
||||
dataset: str = "tinystories",
|
||||
backend: str = "auto",
|
||||
) -> str:
|
||||
"""Clone autoresearch repo and run data preparation.
|
||||
|
||||
On Apple Silicon the ``dataset`` defaults to ``"tinystories"`` (lower
|
||||
entropy, faster iteration) and ``backend`` to ``"auto"`` which resolves to
|
||||
MLX. Both values are forwarded as ``AUTORESEARCH_DATASET`` /
|
||||
``AUTORESEARCH_BACKEND`` environment variables so that ``prepare.py`` and
|
||||
``train.py`` can adapt their behaviour without CLI changes.
|
||||
|
||||
Args:
|
||||
workspace: Directory to set up the experiment in.
|
||||
repo_url: Git URL for the autoresearch repository.
|
||||
dataset: Dataset name; ``"tinystories"`` is recommended on Mac.
|
||||
backend: Inference backend; ``"auto"`` picks MLX on Apple Silicon.
|
||||
|
||||
Returns:
|
||||
Status message describing what was prepared.
|
||||
@@ -59,6 +108,14 @@ def prepare_experiment(
|
||||
else:
|
||||
logger.info("Autoresearch repo already present at %s", repo_dir)
|
||||
|
||||
env = _build_experiment_env(dataset=dataset, backend=backend)
|
||||
if is_apple_silicon():
|
||||
logger.info(
|
||||
"Apple Silicon detected — dataset=%s backend=%s",
|
||||
env["AUTORESEARCH_DATASET"],
|
||||
env["AUTORESEARCH_BACKEND"],
|
||||
)
|
||||
|
||||
# Run prepare.py (data download + tokeniser training)
|
||||
prepare_script = repo_dir / "prepare.py"
|
||||
if prepare_script.exists():
|
||||
@@ -69,6 +126,7 @@ def prepare_experiment(
|
||||
text=True,
|
||||
cwd=str(repo_dir),
|
||||
timeout=300,
|
||||
env=env,
|
||||
)
|
||||
if result.returncode != 0:
|
||||
return f"Preparation failed: {result.stderr.strip()[:500]}"
|
||||
@@ -81,6 +139,8 @@ def run_experiment(
|
||||
workspace: Path,
|
||||
timeout: int = 300,
|
||||
metric_name: str = "val_bpb",
|
||||
dataset: str = "tinystories",
|
||||
backend: str = "auto",
|
||||
) -> dict[str, Any]:
|
||||
"""Run a single training experiment with a wall-clock timeout.
|
||||
|
||||
@@ -88,6 +148,9 @@ def run_experiment(
|
||||
workspace: Experiment workspace (contains autoresearch/ subdir).
|
||||
timeout: Maximum wall-clock seconds for the run.
|
||||
metric_name: Name of the metric to extract from stdout.
|
||||
dataset: Dataset forwarded to the subprocess via env var.
|
||||
backend: Inference backend forwarded via env var (``"auto"`` → MLX on
|
||||
Apple Silicon, CUDA otherwise).
|
||||
|
||||
Returns:
|
||||
Dict with keys: metric (float|None), log (str), duration_s (int),
|
||||
@@ -105,6 +168,7 @@ def run_experiment(
|
||||
"error": f"train.py not found in {repo_dir}",
|
||||
}
|
||||
|
||||
env = _build_experiment_env(dataset=dataset, backend=backend)
|
||||
start = time.monotonic()
|
||||
try:
|
||||
result = subprocess.run(
|
||||
@@ -113,6 +177,7 @@ def run_experiment(
|
||||
text=True,
|
||||
cwd=str(repo_dir),
|
||||
timeout=timeout,
|
||||
env=env,
|
||||
)
|
||||
duration = int(time.monotonic() - start)
|
||||
output = result.stdout + result.stderr
|
||||
@@ -125,7 +190,7 @@ def run_experiment(
|
||||
"log": output[-2000:], # Keep last 2k chars
|
||||
"duration_s": duration,
|
||||
"success": result.returncode == 0,
|
||||
"error": None if result.returncode == 0 else f"Exit code {result.returncode}",
|
||||
"error": (None if result.returncode == 0 else f"Exit code {result.returncode}"),
|
||||
}
|
||||
except subprocess.TimeoutExpired:
|
||||
duration = int(time.monotonic() - start)
|
||||
@@ -212,3 +277,369 @@ def _append_result(workspace: Path, result: dict[str, Any]) -> None:
|
||||
results_file.parent.mkdir(parents=True, exist_ok=True)
|
||||
with results_file.open("a") as f:
|
||||
f.write(json.dumps(result) + "\n")
|
||||
|
||||
|
||||
def _extract_pass_rate(output: str) -> float | None:
|
||||
"""Extract pytest pass rate as a percentage from tox/pytest output."""
|
||||
passed_m = re.search(r"(\d+) passed", output)
|
||||
failed_m = re.search(r"(\d+) failed", output)
|
||||
if passed_m:
|
||||
passed = int(passed_m.group(1))
|
||||
failed = int(failed_m.group(1)) if failed_m else 0
|
||||
total = passed + failed
|
||||
return (passed / total * 100.0) if total > 0 else 100.0
|
||||
return None
|
||||
|
||||
|
||||
def _extract_coverage(output: str) -> float | None:
|
||||
"""Extract total coverage percentage from coverage output."""
|
||||
coverage_m = re.search(r"(?:TOTAL\s+\d+\s+\d+\s+|Total coverage:\s*)(\d+)%", output)
|
||||
if coverage_m:
|
||||
try:
|
||||
return float(coverage_m.group(1))
|
||||
except ValueError:
|
||||
pass
|
||||
return None
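For illustration, this is what the two extractors above recover from a typical tox transcript. The sample string is made up, and the private names are imported only for the sketch:

```python
from timmy.autoresearch import _extract_coverage, _extract_pass_rate

sample = "===== 42 passed, 2 failed in 3.14s =====\nTOTAL    812    37    95%"
print(_extract_pass_rate(sample))  # 42 / 44 * 100, i.e. about 95.45
print(_extract_coverage(sample))   # 95.0
```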
|
||||
|
||||
|
||||
class SystemExperiment:
|
||||
"""An autoresearch experiment targeting a specific module with a configurable metric.
|
||||
|
||||
Encapsulates the hypothesis → edit → tox → evaluate → commit/revert loop
|
||||
for a single target file or module.
|
||||
|
||||
Args:
|
||||
target: Path or module name to optimise (e.g. ``src/timmy/agent.py``).
|
||||
metric: Metric to extract from tox output. Built-in values:
|
||||
``unit_pass_rate`` (default), ``coverage``, ``val_bpb``.
|
||||
Any other value is forwarded to :func:`_extract_metric`.
|
||||
budget_minutes: Wall-clock budget per experiment (default 5 min).
|
||||
workspace: Working directory for subprocess calls. Defaults to ``cwd``.
|
||||
revert_on_failure: Whether to revert changes on failed experiments.
|
||||
hypothesis: Optional natural language hypothesis for the experiment.
|
||||
metric_fn: Optional callable for custom metric extraction.
|
||||
If provided, overrides built-in metric extraction.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
target: str,
|
||||
metric: str = "unit_pass_rate",
|
||||
budget_minutes: int = 5,
|
||||
workspace: Path | None = None,
|
||||
revert_on_failure: bool = True,
|
||||
hypothesis: str = "",
|
||||
metric_fn: Callable[[str], float | None] | None = None,
|
||||
) -> None:
|
||||
self.target = target
|
||||
self.metric = metric
|
||||
self.budget_seconds = budget_minutes * 60
|
||||
self.workspace = Path(workspace) if workspace else Path.cwd()
|
||||
self.revert_on_failure = revert_on_failure
|
||||
self.hypothesis = hypothesis
|
||||
self.metric_fn = metric_fn
|
||||
self.results: list[dict[str, Any]] = []
|
||||
self.baseline: float | None = None
|
||||
|
||||
# ── Hypothesis generation ─────────────────────────────────────────────────
|
||||
|
||||
def generate_hypothesis(self, program_content: str = "") -> str:
|
||||
"""Return a plain-English hypothesis for the next experiment.
|
||||
|
||||
Uses the first non-empty line of *program_content* when available;
|
||||
falls back to a generic description based on target and metric.
|
||||
"""
|
||||
first_line = ""
|
||||
for line in program_content.splitlines():
|
||||
stripped = line.strip()
|
||||
if stripped and not stripped.startswith("#"):
|
||||
first_line = stripped[:120]
|
||||
break
|
||||
if first_line:
|
||||
return f"[{self.target}] {first_line}"
|
||||
return f"Improve {self.metric} for {self.target}"
|
||||
|
||||
# ── Edit phase ────────────────────────────────────────────────────────────
|
||||
|
||||
def apply_edit(self, hypothesis: str, model: str = "qwen3:30b") -> str:
|
||||
"""Apply code edits to *target* via Aider.
|
||||
|
||||
Returns a status string. Degrades gracefully — never raises.
|
||||
"""
|
||||
prompt = f"Edit {self.target}: {hypothesis}"
|
||||
try:
|
||||
result = subprocess.run(
|
||||
["aider", "--no-git", "--model", f"ollama/{model}", "--quiet", prompt],
|
||||
capture_output=True,
|
||||
text=True,
|
||||
timeout=self.budget_seconds,
|
||||
cwd=str(self.workspace),
|
||||
)
|
||||
if result.returncode == 0:
|
||||
return result.stdout or "Edit applied."
|
||||
return f"Aider error (exit {result.returncode}): {result.stderr[:500]}"
|
||||
except FileNotFoundError:
|
||||
logger.warning("Aider not installed — edit skipped")
|
||||
return "Aider not available — edit skipped"
|
||||
except subprocess.TimeoutExpired:
|
||||
logger.warning("Aider timed out after %ds", self.budget_seconds)
|
||||
return "Aider timed out"
|
||||
except (OSError, subprocess.SubprocessError) as exc:
|
||||
logger.warning("Aider failed: %s", exc)
|
||||
return f"Edit failed: {exc}"
|
||||
|
||||
# ── Evaluation phase ──────────────────────────────────────────────────────
|
||||
|
||||
def run_tox(self, tox_env: str = "unit") -> dict[str, Any]:
|
||||
"""Run *tox_env* and return a result dict.
|
||||
|
||||
Returns:
|
||||
Dict with keys: ``metric`` (float|None), ``log`` (str),
|
||||
``duration_s`` (int), ``success`` (bool), ``error`` (str|None).
|
||||
"""
|
||||
start = time.monotonic()
|
||||
try:
|
||||
result = subprocess.run(
|
||||
["tox", "-e", tox_env],
|
||||
capture_output=True,
|
||||
text=True,
|
||||
timeout=self.budget_seconds,
|
||||
cwd=str(self.workspace),
|
||||
)
|
||||
duration = int(time.monotonic() - start)
|
||||
output = result.stdout + result.stderr
|
||||
metric_val = self._extract_tox_metric(output)
|
||||
return {
|
||||
"metric": metric_val,
|
||||
"log": output[-3000:],
|
||||
"duration_s": duration,
|
||||
"success": result.returncode == 0,
|
||||
"error": (None if result.returncode == 0 else f"Exit code {result.returncode}"),
|
||||
}
|
||||
except subprocess.TimeoutExpired:
|
||||
duration = int(time.monotonic() - start)
|
||||
return {
|
||||
"metric": None,
|
||||
"log": f"Budget exceeded after {self.budget_seconds}s",
|
||||
"duration_s": duration,
|
||||
"success": False,
|
||||
"error": f"Budget exceeded after {self.budget_seconds}s",
|
||||
}
|
||||
except OSError as exc:
|
||||
return {
|
||||
"metric": None,
|
||||
"log": "",
|
||||
"duration_s": 0,
|
||||
"success": False,
|
||||
"error": str(exc),
|
||||
}
|
||||
|
||||
def _extract_tox_metric(self, output: str) -> float | None:
|
||||
"""Dispatch to the correct metric extractor based on *self.metric*."""
|
||||
# Use custom metric function if provided
|
||||
if self.metric_fn is not None:
|
||||
try:
|
||||
return self.metric_fn(output)
|
||||
except Exception as exc:
|
||||
logger.warning("Custom metric_fn failed: %s", exc)
|
||||
return None
|
||||
|
||||
if self.metric == "unit_pass_rate":
|
||||
return _extract_pass_rate(output)
|
||||
if self.metric == "coverage":
|
||||
return _extract_coverage(output)
|
||||
return _extract_metric(output, self.metric)
|
||||
|
||||
def evaluate(self, current: float | None, baseline: float | None) -> str:
|
||||
"""Compare *current* metric against *baseline* and return an assessment."""
|
||||
if current is None:
|
||||
return "Indeterminate: metric not extracted from output"
|
||||
if baseline is None:
|
||||
unit = "%" if self.metric in _HIGHER_IS_BETTER else ""
|
||||
return f"Baseline: {self.metric} = {current:.2f}{unit}"
|
||||
|
||||
if self.metric in _HIGHER_IS_BETTER:
|
||||
delta = current - baseline
|
||||
pct = (delta / baseline * 100) if baseline != 0 else 0.0
|
||||
if delta > 0:
|
||||
return f"Improvement: {self.metric} {baseline:.2f}% → {current:.2f}% ({pct:+.2f}%)"
|
||||
if delta < 0:
|
||||
return f"Regression: {self.metric} {baseline:.2f}% → {current:.2f}% ({pct:+.2f}%)"
|
||||
return f"No change: {self.metric} = {current:.2f}%"
|
||||
|
||||
# lower-is-better (val_bpb, loss, etc.)
|
||||
return evaluate_result(current, baseline, self.metric)
|
||||
|
||||
def is_improvement(self, current: float, baseline: float) -> bool:
|
||||
"""Return True if *current* is better than *baseline* for this metric."""
|
||||
if self.metric in _HIGHER_IS_BETTER:
|
||||
return current > baseline
|
||||
return current < baseline # lower-is-better
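A small sketch of the metric-direction handling; the target paths are illustrative:

```python
from timmy.autoresearch import SystemExperiment

cov = SystemExperiment(target="src/timmy/agent.py", metric="coverage")
print(cov.is_improvement(88.0, 85.0))  # True: coverage is higher-is-better
print(cov.evaluate(88.0, 85.0))        # Improvement: coverage 85.00% → 88.00% (+3.53%)

bpb = SystemExperiment(target="train.py", metric="val_bpb")
print(bpb.is_improvement(1.02, 1.10))  # True: val_bpb is lower-is-better
```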
|
||||
|
||||
# ── Git phase ─────────────────────────────────────────────────────────────
|
||||
|
||||
def create_branch(self, branch_name: str) -> bool:
|
||||
"""Create and checkout a new git branch. Returns True on success."""
|
||||
try:
|
||||
subprocess.run(
|
||||
["git", "checkout", "-b", branch_name],
|
||||
cwd=str(self.workspace),
|
||||
check=True,
|
||||
timeout=30,
|
||||
)
|
||||
return True
|
||||
except subprocess.CalledProcessError as exc:
|
||||
logger.warning("Git branch creation failed: %s", exc)
|
||||
return False
|
||||
|
||||
def commit_changes(self, message: str) -> bool:
|
||||
"""Stage and commit all changes. Returns True on success."""
|
||||
try:
|
||||
subprocess.run(["git", "add", "-A"], cwd=str(self.workspace), check=True, timeout=30)
|
||||
subprocess.run(
|
||||
["git", "commit", "-m", message],
|
||||
cwd=str(self.workspace),
|
||||
check=True,
|
||||
timeout=30,
|
||||
)
|
||||
return True
|
||||
except subprocess.CalledProcessError as exc:
|
||||
logger.warning("Git commit failed: %s", exc)
|
||||
return False
|
||||
|
||||
def revert_changes(self) -> bool:
|
||||
"""Revert all uncommitted changes. Returns True on success."""
|
||||
try:
|
||||
subprocess.run(
|
||||
["git", "checkout", "--", "."],
|
||||
cwd=str(self.workspace),
|
||||
check=True,
|
||||
timeout=30,
|
||||
)
|
||||
return True
|
||||
except subprocess.CalledProcessError as exc:
|
||||
logger.warning("Git revert failed: %s", exc)
|
||||
return False
|
||||
|
||||
# ── Full experiment loop ──────────────────────────────────────────────────
|
||||
|
||||
def run(
|
||||
self,
|
||||
tox_env: str = "unit",
|
||||
model: str = "qwen3:30b",
|
||||
program_content: str = "",
|
||||
max_iterations: int = 1,
|
||||
dry_run: bool = False,
|
||||
create_branch: bool = False,
|
||||
) -> dict[str, Any]:
|
||||
"""Run the full experiment loop: hypothesis → edit → tox → evaluate → commit/revert.
|
||||
|
||||
This method encapsulates the complete experiment cycle, running multiple
|
||||
iterations until an improvement is found or max_iterations is reached.
|
||||
|
||||
Args:
|
||||
tox_env: Tox environment to run (default "unit").
|
||||
model: Ollama model for Aider edits (default "qwen3:30b").
|
||||
program_content: Research direction for hypothesis generation.
|
||||
max_iterations: Maximum number of experiment iterations.
|
||||
dry_run: If True, only generate hypotheses without making changes.
|
||||
create_branch: If True, create a new git branch for the experiment.
|
||||
|
||||
Returns:
|
||||
Dict with keys: ``success`` (bool), ``final_metric`` (float|None),
|
||||
``baseline`` (float|None), ``iterations`` (int), ``results`` (list).
|
||||
"""
|
||||
if create_branch:
|
||||
branch_name = f"autoresearch/{self.target.replace('/', '-')}-{int(time.time())}"
|
||||
self.create_branch(branch_name)
|
||||
|
||||
baseline: float | None = self.baseline
|
||||
final_metric: float | None = None
|
||||
success = False
|
||||
|
||||
for iteration in range(1, max_iterations + 1):
|
||||
logger.info("Experiment iteration %d/%d", iteration, max_iterations)
|
||||
|
||||
# Generate hypothesis
|
||||
hypothesis = self.hypothesis or self.generate_hypothesis(program_content)
|
||||
logger.info("Hypothesis: %s", hypothesis)
|
||||
|
||||
# In dry-run mode, just record the hypothesis and continue
|
||||
if dry_run:
|
||||
result_record = {
|
||||
"iteration": iteration,
|
||||
"hypothesis": hypothesis,
|
||||
"metric": None,
|
||||
"baseline": baseline,
|
||||
"assessment": "Dry-run: no changes made",
|
||||
"success": True,
|
||||
"duration_s": 0,
|
||||
}
|
||||
self.results.append(result_record)
|
||||
continue
|
||||
|
||||
# Apply edit
|
||||
edit_result = self.apply_edit(hypothesis, model=model)
|
||||
edit_failed = "not available" in edit_result or edit_result.startswith("Aider error")
|
||||
if edit_failed:
|
||||
logger.warning("Edit phase failed: %s", edit_result)
|
||||
|
||||
# Run evaluation
|
||||
tox_result = self.run_tox(tox_env=tox_env)
|
||||
metric = tox_result["metric"]
|
||||
|
||||
# Evaluate result
|
||||
assessment = self.evaluate(metric, baseline)
|
||||
logger.info("Assessment: %s", assessment)
|
||||
|
||||
# Store result
|
||||
result_record = {
|
||||
"iteration": iteration,
|
||||
"hypothesis": hypothesis,
|
||||
"metric": metric,
|
||||
"baseline": baseline,
|
||||
"assessment": assessment,
|
||||
"success": tox_result["success"],
|
||||
"duration_s": tox_result["duration_s"],
|
||||
}
|
||||
self.results.append(result_record)
|
||||
|
||||
# Set baseline on first successful run
|
||||
if metric is not None and baseline is None:
|
||||
baseline = metric
|
||||
self.baseline = baseline
|
||||
final_metric = metric
|
||||
continue
|
||||
|
||||
# Determine if we should commit or revert
|
||||
should_commit = False
|
||||
if tox_result["success"] and metric is not None and baseline is not None:
|
||||
if self.is_improvement(metric, baseline):
|
||||
should_commit = True
|
||||
final_metric = metric
|
||||
baseline = metric
|
||||
self.baseline = baseline
|
||||
success = True
|
||||
|
||||
if should_commit:
|
||||
commit_msg = f"autoresearch: improve {self.metric} on {self.target}\n\n{hypothesis}"
|
||||
if self.commit_changes(commit_msg):
|
||||
logger.info("Changes committed")
|
||||
else:
|
||||
self.revert_changes()
|
||||
logger.warning("Commit failed, changes reverted")
|
||||
elif self.revert_on_failure:
|
||||
self.revert_changes()
|
||||
logger.info("Changes reverted (no improvement)")
|
||||
|
||||
# Early exit if we found an improvement
|
||||
if success:
|
||||
break
|
||||
|
||||
return {
|
||||
"success": success,
|
||||
"final_metric": final_metric,
|
||||
"baseline": self.baseline,
|
||||
"iterations": len(self.results),
|
||||
"results": self.results,
|
||||
}
|
||||
|
||||
169
src/timmy/cli.py
@@ -347,7 +347,10 @@ def interview(
|
||||
# Force agent creation by calling chat once with a warm-up prompt
|
||||
try:
|
||||
loop.run_until_complete(
|
||||
chat("Hello, Timmy. We're about to start your interview.", session_id="interview")
|
||||
chat(
|
||||
"Hello, Timmy. We're about to start your interview.",
|
||||
session_id="interview",
|
||||
)
|
||||
)
|
||||
except Exception as exc:
|
||||
typer.echo(f"Warning: Initialization issue — {exc}", err=True)
|
||||
@@ -410,11 +413,17 @@ def down():
|
||||
@app.command()
|
||||
def voice(
|
||||
whisper_model: str = typer.Option(
|
||||
"base.en", "--whisper", "-w", help="Whisper model: tiny.en, base.en, small.en, medium.en"
|
||||
"base.en",
|
||||
"--whisper",
|
||||
"-w",
|
||||
help="Whisper model: tiny.en, base.en, small.en, medium.en",
|
||||
),
|
||||
use_say: bool = typer.Option(False, "--say", help="Use macOS `say` instead of Piper TTS"),
|
||||
threshold: float = typer.Option(
|
||||
0.015, "--threshold", "-t", help="Mic silence threshold (RMS). Lower = more sensitive."
|
||||
0.015,
|
||||
"--threshold",
|
||||
"-t",
|
||||
help="Mic silence threshold (RMS). Lower = more sensitive.",
|
||||
),
|
||||
silence: float = typer.Option(1.5, "--silence", help="Seconds of silence to end recording"),
|
||||
backend: str | None = _BACKEND_OPTION,
|
||||
@@ -457,7 +466,8 @@ def route(
|
||||
@app.command()
|
||||
def focus(
|
||||
topic: str | None = typer.Argument(
|
||||
None, help='Topic to focus on (e.g. "three-phase loop"). Omit to show current focus.'
|
||||
None,
|
||||
help='Topic to focus on (e.g. "three-phase loop"). Omit to show current focus.',
|
||||
),
|
||||
clear: bool = typer.Option(False, "--clear", "-c", help="Clear focus and return to broad mode"),
|
||||
):
|
||||
@@ -527,5 +537,156 @@ def healthcheck(
|
||||
raise typer.Exit(result.returncode)
|
||||
|
||||
|
||||
@app.command()
|
||||
def learn(
|
||||
target: str | None = typer.Option(
|
||||
None,
|
||||
"--target",
|
||||
"-t",
|
||||
help="Module or file to optimise (e.g. 'src/timmy/agent.py')",
|
||||
),
|
||||
metric: str = typer.Option(
|
||||
"unit_pass_rate",
|
||||
"--metric",
|
||||
"-m",
|
||||
help="Metric to track: unit_pass_rate | coverage | val_bpb | <custom>",
|
||||
),
|
||||
budget: int = typer.Option(
|
||||
5,
|
||||
"--budget",
|
||||
help="Time limit per experiment in minutes",
|
||||
),
|
||||
max_experiments: int = typer.Option(
|
||||
10,
|
||||
"--max-experiments",
|
||||
help="Cap on total experiments per run",
|
||||
),
|
||||
dry_run: bool = typer.Option(
|
||||
False,
|
||||
"--dry-run",
|
||||
help="Show hypothesis without executing experiments",
|
||||
),
|
||||
program_file: str | None = typer.Option(
|
||||
None,
|
||||
"--program",
|
||||
"-p",
|
||||
help="Path to research direction file (default: program.md in cwd)",
|
||||
),
|
||||
tox_env: str = typer.Option(
|
||||
"unit",
|
||||
"--tox-env",
|
||||
help="Tox environment to run for each evaluation",
|
||||
),
|
||||
model: str = typer.Option(
|
||||
"qwen3:30b",
|
||||
"--model",
|
||||
help="Ollama model forwarded to Aider for code edits",
|
||||
),
|
||||
):
|
||||
"""Start an autonomous improvement loop (autoresearch).
|
||||
|
||||
Reads program.md for research direction, then iterates:
|
||||
hypothesis → edit → tox → evaluate → commit/revert.
|
||||
|
||||
Experiments continue until --max-experiments is reached or the loop is
|
||||
interrupted with Ctrl+C. Use --dry-run to preview hypotheses without
|
||||
making any changes.
|
||||
|
||||
Example:
|
||||
timmy learn --target src/timmy/agent.py --metric unit_pass_rate
|
||||
"""
|
||||
from pathlib import Path
|
||||
|
||||
from timmy.autoresearch import SystemExperiment
|
||||
|
||||
repo_root = Path.cwd()
|
||||
program_path = Path(program_file) if program_file else repo_root / "program.md"
|
||||
|
||||
if program_path.exists():
|
||||
program_content = program_path.read_text()
|
||||
typer.echo(f"Research direction: {program_path}")
|
||||
else:
|
||||
program_content = ""
|
||||
typer.echo(
|
||||
f"Note: {program_path} not found — proceeding without research direction.",
|
||||
err=True,
|
||||
)
|
||||
|
||||
if target is None:
|
||||
typer.echo(
|
||||
"Error: --target is required. Specify the module or file to optimise.",
|
||||
err=True,
|
||||
)
|
||||
raise typer.Exit(1)
|
||||
|
||||
experiment = SystemExperiment(
|
||||
target=target,
|
||||
metric=metric,
|
||||
budget_minutes=budget,
|
||||
)
|
||||
|
||||
typer.echo()
|
||||
typer.echo(typer.style("Autoresearch", bold=True) + f" — {target}")
|
||||
typer.echo(f" metric={metric} budget={budget}min max={max_experiments} tox={tox_env}")
|
||||
if dry_run:
|
||||
typer.echo(" (dry-run — no changes will be made)")
|
||||
typer.echo()
|
||||
|
||||
def _progress_callback(iteration: int, max_iter: int, message: str) -> None:
|
||||
"""Print progress updates during experiment iterations."""
|
||||
if iteration > 0:
|
||||
prefix = typer.style(f"[{iteration}/{max_iter}]", bold=True)
|
||||
typer.echo(f"{prefix} {message}")
|
||||
|
||||
try:
|
||||
# Run the full experiment loop via the SystemExperiment class
|
||||
result = experiment.run(
|
||||
tox_env=tox_env,
|
||||
model=model,
|
||||
program_content=program_content,
|
||||
max_iterations=max_experiments,
|
||||
dry_run=dry_run,
|
||||
create_branch=False, # CLI mode: work on current branch
|
||||
)
|
||||
|
||||
# Display results for each iteration
|
||||
for i, record in enumerate(experiment.results, 1):
|
||||
_progress_callback(i, max_experiments, record["hypothesis"])
|
||||
|
||||
if dry_run:
|
||||
continue
|
||||
|
||||
# Edit phase result
|
||||
typer.echo(" → editing …", nl=False)
|
||||
if record.get("edit_failed"):
|
||||
typer.echo(f" skipped ({record.get('edit_result', 'unknown')})")
|
||||
else:
|
||||
typer.echo(" done")
|
||||
|
||||
# Evaluate phase result
|
||||
duration = record.get("duration_s", 0)
|
||||
typer.echo(f" → running tox … {duration}s")
|
||||
|
||||
# Assessment
|
||||
assessment = record.get("assessment", "No assessment")
|
||||
typer.echo(f" → {assessment}")
|
||||
|
||||
# Outcome
|
||||
if record.get("committed"):
|
||||
typer.echo(" → committed")
|
||||
elif record.get("reverted"):
|
||||
typer.echo(" → reverted (no improvement)")
|
||||
|
||||
typer.echo()
|
||||
|
||||
except KeyboardInterrupt:
|
||||
typer.echo("\nInterrupted.")
|
||||
raise typer.Exit(0) from None
|
||||
|
||||
typer.echo(typer.style("Autoresearch complete.", bold=True))
|
||||
if result.get("baseline") is not None:
|
||||
typer.echo(f"Final {metric}: {result['baseline']:.4f}")
|
||||
|
||||
|
||||
def main():
|
||||
app()
|
||||
|
||||
@@ -7,37 +7,97 @@ Also includes vector similarity utilities (cosine similarity, keyword overlap).
|
||||
"""
|
||||
|
||||
import hashlib
|
||||
import json
|
||||
import logging
|
||||
import math
|
||||
|
||||
import httpx # Import httpx for Ollama API calls
|
||||
|
||||
from config import settings
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Embedding model - small, fast, local
|
||||
EMBEDDING_MODEL = None
|
||||
EMBEDDING_DIM = 384 # MiniLM dimension
|
||||
EMBEDDING_DIM = 384 # MiniLM dimension, will be overridden if Ollama model has different dim
|
||||
|
||||
|
||||
class OllamaEmbedder:
|
||||
"""Mimics SentenceTransformer interface for Ollama."""
|
||||
|
||||
def __init__(self, model_name: str, ollama_url: str):
|
||||
self.model_name = model_name
|
||||
self.ollama_url = ollama_url
|
||||
self.dimension = 0 # Will be updated after first call
|
||||
|
||||
def encode(
|
||||
self,
|
||||
sentences: str | list[str],
|
||||
convert_to_numpy: bool = False,
|
||||
normalize_embeddings: bool = True,
|
||||
) -> list[list[float]] | list[float]:
|
||||
"""Generate embeddings using Ollama."""
|
||||
if isinstance(sentences, str):
|
||||
sentences = [sentences]
|
||||
|
||||
all_embeddings = []
|
||||
for sentence in sentences:
|
||||
try:
|
||||
response = httpx.post(
|
||||
f"{self.ollama_url}/api/embeddings",
|
||||
json={"model": self.model_name, "prompt": sentence},
|
||||
timeout=settings.mcp_bridge_timeout,
|
||||
)
|
||||
response.raise_for_status()
|
||||
embedding = response.json()["embedding"]
|
||||
if not self.dimension:
|
||||
self.dimension = len(embedding) # Set dimension on first successful call
|
||||
global EMBEDDING_DIM
|
||||
EMBEDDING_DIM = self.dimension # Update global EMBEDDING_DIM
|
||||
all_embeddings.append(embedding)
|
||||
except httpx.RequestError as exc:
|
||||
logger.error("Ollama embeddings request failed: %s", exc)
|
||||
# Fallback to simple hash embedding on Ollama error
|
||||
return _simple_hash_embedding(sentence)
|
||||
except json.JSONDecodeError as exc:
|
||||
logger.error("Failed to decode Ollama embeddings response: %s", exc)
|
||||
return _simple_hash_embedding(sentence)
|
||||
|
||||
if len(all_embeddings) == 1 and isinstance(sentences, str):
|
||||
return all_embeddings[0]
|
||||
return all_embeddings
|
||||
|
||||
|
||||
def _get_embedding_model():
|
||||
"""Lazy-load embedding model."""
|
||||
"""Lazy-load embedding model, preferring Ollama if configured."""
|
||||
global EMBEDDING_MODEL
|
||||
global EMBEDDING_DIM
|
||||
if EMBEDDING_MODEL is None:
|
||||
try:
|
||||
from config import settings
|
||||
if settings.timmy_skip_embeddings:
|
||||
EMBEDDING_MODEL = False
|
||||
return EMBEDDING_MODEL
|
||||
|
||||
if settings.timmy_skip_embeddings:
|
||||
EMBEDDING_MODEL = False
|
||||
return EMBEDDING_MODEL
|
||||
except ImportError:
|
||||
pass
|
||||
if settings.timmy_embedding_backend == "ollama":
|
||||
logger.info(
|
||||
"MemorySystem: Using Ollama for embeddings with model %s",
|
||||
settings.ollama_embedding_model,
|
||||
)
|
||||
EMBEDDING_MODEL = OllamaEmbedder(
|
||||
settings.ollama_embedding_model, settings.normalized_ollama_url
|
||||
)
|
||||
# We don't know the dimension until after the first call, so keep it default for now.
|
||||
# It will be updated dynamically in OllamaEmbedder.encode
|
||||
return EMBEDDING_MODEL
|
||||
else:
|
||||
try:
|
||||
from sentence_transformers import SentenceTransformer
|
||||
|
||||
try:
|
||||
from sentence_transformers import SentenceTransformer
|
||||
|
||||
EMBEDDING_MODEL = SentenceTransformer("all-MiniLM-L6-v2")
|
||||
logger.info("MemorySystem: Loaded embedding model")
|
||||
except ImportError:
|
||||
logger.warning("MemorySystem: sentence-transformers not installed, using fallback")
|
||||
EMBEDDING_MODEL = False # Use fallback
|
||||
EMBEDDING_MODEL = SentenceTransformer("all-MiniLM-L6-v2")
|
||||
EMBEDDING_DIM = 384 # Reset to MiniLM dimension
|
||||
logger.info("MemorySystem: Loaded local embedding model (all-MiniLM-L6-v2)")
|
||||
except ImportError:
|
||||
logger.warning("MemorySystem: sentence-transformers not installed, using fallback")
|
||||
EMBEDDING_MODEL = False # Use fallback
|
||||
return EMBEDDING_MODEL
|
||||
|
||||
|
||||
@@ -60,7 +120,10 @@ def embed_text(text: str) -> list[float]:
|
||||
model = _get_embedding_model()
|
||||
if model and model is not False:
|
||||
embedding = model.encode(text)
|
||||
return embedding.tolist()
|
||||
# Ensure it's a list of floats, not numpy array
|
||||
if hasattr(embedding, "tolist"):
|
||||
return embedding.tolist()
|
||||
return embedding
|
||||
return _simple_hash_embedding(text)
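A hedged sketch of the call path above. The module name `memory` and the embedding model `nomic-embed-text` are placeholders that this diff does not name; only `OllamaEmbedder`, `embed_text`, and the fallback behaviour come from this hunk.

```python
# Sketch: exercising the embedding path shown above.
# The import path and the Ollama model name are assumptions, not taken from this diff.
from memory import OllamaEmbedder, embed_text

vec = embed_text("server setup notes")      # Ollama, MiniLM, or hash fallback
print(len(vec))                              # 384 for MiniLM; model-dependent otherwise

embedder = OllamaEmbedder("nomic-embed-text", "http://localhost:11434")
vectors = embedder.encode(["hello world"])   # list in, list of embeddings out
print(embedder.dimension)                    # populated after the first successful call
```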
|
||||
|
||||
|
||||
|
||||
@@ -1206,7 +1206,7 @@ memory_searcher = MemorySearcher()
|
||||
# ───────────────────────────────────────────────────────────────────────────────
|
||||
|
||||
|
||||
def memory_search(query: str, top_k: int = 5) -> str:
|
||||
def memory_search(query: str, limit: int = 10) -> str:
|
||||
"""Search past conversations, notes, and stored facts for relevant context.
|
||||
|
||||
Searches across both the vault (indexed markdown files) and the
|
||||
@@ -1215,19 +1215,19 @@ def memory_search(query: str, top_k: int = 5) -> str:
|
||||
|
||||
Args:
|
||||
query: What to search for (e.g. "Bitcoin strategy", "server setup").
|
||||
top_k: Number of results to return (default 5).
|
||||
limit: Number of results to return (default 10).
|
||||
|
||||
Returns:
|
||||
Formatted string of relevant memory results.
|
||||
"""
|
||||
# Guard: model sometimes passes None for top_k
|
||||
if top_k is None:
|
||||
top_k = 5
|
||||
# Guard: model sometimes passes None for limit
|
||||
if limit is None:
|
||||
limit = 10
|
||||
|
||||
parts: list[str] = []
|
||||
|
||||
# 1. Search semantic vault (indexed markdown files)
|
||||
vault_results = semantic_memory.search(query, top_k)
|
||||
vault_results = semantic_memory.search(query, limit)
|
||||
for content, score in vault_results:
|
||||
if score < 0.2:
|
||||
continue
|
||||
@@ -1235,7 +1235,7 @@ def memory_search(query: str, top_k: int = 5) -> str:
|
||||
|
||||
# 2. Search runtime vector store (stored facts/conversations)
|
||||
try:
|
||||
runtime_results = search_memories(query, limit=top_k, min_relevance=0.2)
|
||||
runtime_results = search_memories(query, limit=limit, min_relevance=0.2)
|
||||
for entry in runtime_results:
|
||||
label = entry.context_type or "memory"
|
||||
parts.append(f"[{label}] {entry.content[:300]}")
|
||||
@@ -1289,45 +1289,48 @@ def memory_read(query: str = "", top_k: int = 5) -> str:
|
||||
return "\n".join(parts)
|
||||
|
||||
|
||||
def memory_write(content: str, context_type: str = "fact") -> str:
|
||||
"""Store a piece of information in persistent memory.
|
||||
def memory_store(topic: str, report: str, type: str = "research") -> str:
|
||||
"""Store a piece of information in persistent memory, particularly for research outputs.
|
||||
|
||||
Use this tool when the user explicitly asks you to remember something.
|
||||
Stored memories are searchable via memory_search across all channels
|
||||
(web GUI, Discord, Telegram, etc.).
|
||||
Use this tool to store structured research findings or other important documents.
|
||||
Stored memories are searchable via memory_search across all channels.
|
||||
|
||||
Args:
|
||||
content: The information to remember (e.g. a phrase, fact, or note).
|
||||
context_type: Type of memory — "fact" for permanent facts,
|
||||
"conversation" for conversation context,
|
||||
"document" for document fragments.
|
||||
topic: A concise title or topic for the research output.
|
||||
report: The detailed content of the research output or document.
|
||||
type: Type of memory — "research" for research outputs (default),
|
||||
"fact" for permanent facts, "conversation" for conversation context,
|
||||
"document" for other document fragments.
|
||||
|
||||
Returns:
|
||||
Confirmation that the memory was stored.
|
||||
"""
|
||||
if not content or not content.strip():
|
||||
return "Nothing to store — content is empty."
|
||||
if not report or not report.strip():
|
||||
return "Nothing to store — report is empty."
|
||||
|
||||
valid_types = ("fact", "conversation", "document")
|
||||
if context_type not in valid_types:
|
||||
context_type = "fact"
|
||||
# Combine topic and report for embedding and storage content
|
||||
full_content = f"Topic: {topic.strip()}\n\nReport: {report.strip()}"
|
||||
|
||||
valid_types = ("fact", "conversation", "document", "research")
|
||||
if type not in valid_types:
|
||||
type = "research"
|
||||
|
||||
try:
|
||||
# Dedup check for facts — skip if a similar fact already exists
|
||||
# Threshold 0.75 catches paraphrases (was 0.9 which only caught near-exact)
|
||||
if context_type == "fact":
|
||||
existing = search_memories(
|
||||
content.strip(), limit=3, context_type="fact", min_relevance=0.75
|
||||
)
|
||||
# Dedup check for facts and research — skip if similar exists
|
||||
if type in ("fact", "research"):
|
||||
existing = search_memories(full_content, limit=3, context_type=type, min_relevance=0.75)
|
||||
if existing:
|
||||
return f"Similar fact already stored (id={existing[0].id[:8]}). Skipping duplicate."
|
||||
return (
|
||||
f"Similar {type} already stored (id={existing[0].id[:8]}). Skipping duplicate."
|
||||
)
|
||||
|
||||
entry = store_memory(
|
||||
content=content.strip(),
|
||||
content=full_content,
|
||||
source="agent",
|
||||
context_type=context_type,
|
||||
context_type=type,
|
||||
metadata={"topic": topic},
|
||||
)
|
||||
return f"Stored in memory (type={context_type}, id={entry.id[:8]}). This is now searchable across all channels."
|
||||
return f"Stored in memory (type={type}, id={entry.id[:8]}). This is now searchable across all channels."
|
||||
except Exception as exc:
|
||||
logger.error("Failed to write memory: %s", exc)
|
||||
return f"Failed to store memory: {exc}"
|
||||
|
||||
@@ -4,4 +4,27 @@ Tracks how much of each AI layer (perception, decision, narration)
|
||||
runs locally vs. calls out to an LLM. Feeds the sovereignty dashboard.
|
||||
|
||||
Refs: #954, #953
|
||||
|
||||
Three-strike detector and automation enforcement.
|
||||
|
||||
Refs: #962
|
||||
|
||||
Session reporting: auto-generates markdown scorecards at session end
|
||||
and commits them to the Gitea repo for institutional memory.
|
||||
|
||||
Refs: #957 (Session Sovereignty Report Generator)
|
||||
"""
|
||||
|
||||
from timmy.sovereignty.session_report import (
|
||||
commit_report,
|
||||
generate_and_commit_report,
|
||||
generate_report,
|
||||
mark_session_start,
|
||||
)
|
||||
|
||||
__all__ = [
|
||||
"generate_report",
|
||||
"commit_report",
|
||||
"generate_and_commit_report",
|
||||
"mark_session_start",
|
||||
]
|
||||
|
||||
src/timmy/sovereignty/session_report.py (new file, 442 lines)
@@ -0,0 +1,442 @@
|
||||
"""Session Sovereignty Report Generator.
|
||||
|
||||
Auto-generates a sovereignty scorecard at the end of each play session
|
||||
and commits it as a markdown file to the Gitea repo under
|
||||
``reports/sovereignty/``.
|
||||
|
||||
Report contents (per issue #957):
|
||||
- Session duration + game played
|
||||
- Total model calls by type (VLM, LLM, TTS, API)
|
||||
- Total cache/rule hits by type
|
||||
- New skills crystallized (placeholder — pending skill-tracking impl)
|
||||
- Sovereignty delta (change from session start → end)
|
||||
- Cost breakdown (actual API spend)
|
||||
- Per-layer sovereignty %: perception, decision, narration
|
||||
- Trend comparison vs previous session
|
||||
|
||||
Refs: #957 (Sovereignty P0) · #953 (The Sovereignty Loop)
|
||||
"""
|
||||
|
||||
import base64
|
||||
import json
|
||||
import logging
|
||||
from datetime import UTC, datetime
|
||||
from pathlib import Path
|
||||
from typing import Any
|
||||
|
||||
import httpx
|
||||
|
||||
from config import settings
|
||||
|
||||
# Optional module-level imports — degrade gracefully if unavailable at import time
|
||||
try:
|
||||
from timmy.session_logger import get_session_logger
|
||||
except Exception: # ImportError or circular import during early startup
|
||||
get_session_logger = None # type: ignore[assignment]
|
||||
|
||||
try:
|
||||
from infrastructure.sovereignty_metrics import GRADUATION_TARGETS, get_sovereignty_store
|
||||
except Exception:
|
||||
GRADUATION_TARGETS: dict = {} # type: ignore[assignment]
|
||||
get_sovereignty_store = None # type: ignore[assignment]
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Module-level session start time; set by mark_session_start()
|
||||
_SESSION_START: datetime | None = None
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Public API
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def mark_session_start() -> None:
|
||||
"""Record the session start wall-clock time.
|
||||
|
||||
Call once during application startup so ``generate_report()`` can
|
||||
compute accurate session durations.
|
||||
"""
|
||||
global _SESSION_START
|
||||
_SESSION_START = datetime.now(UTC)
|
||||
logger.debug("Sovereignty: session start recorded at %s", _SESSION_START.isoformat())
|
||||
|
||||
|
||||
def generate_report(session_id: str = "dashboard") -> str:
|
||||
"""Render a sovereignty scorecard as a markdown string.
|
||||
|
||||
Pulls from:
|
||||
- ``timmy.session_logger`` — message/tool-call/error counts
|
||||
- ``infrastructure.sovereignty_metrics`` — cache hit rate, API cost,
|
||||
graduation phase, and trend data
|
||||
|
||||
Args:
|
||||
session_id: The session identifier (default: "dashboard").
|
||||
|
||||
Returns:
|
||||
Markdown-formatted sovereignty report string.
|
||||
"""
|
||||
now = datetime.now(UTC)
|
||||
session_start = _SESSION_START or now
|
||||
duration_secs = (now - session_start).total_seconds()
|
||||
|
||||
session_data = _gather_session_data()
|
||||
sov_data = _gather_sovereignty_data()
|
||||
|
||||
return _render_markdown(now, session_id, duration_secs, session_data, sov_data)
|
||||
|
||||
|
||||
def commit_report(report_md: str, session_id: str = "dashboard") -> bool:
|
||||
"""Commit a sovereignty report to the Gitea repo.
|
||||
|
||||
Creates or updates ``reports/sovereignty/{date}_{session_id}.md``
|
||||
via the Gitea Contents API. Degrades gracefully: logs a warning
|
||||
and returns ``False`` if Gitea is unreachable or misconfigured.
|
||||
|
||||
Args:
|
||||
report_md: Markdown content to commit.
|
||||
session_id: Session identifier used in the filename.
|
||||
|
||||
Returns:
|
||||
``True`` on success, ``False`` on failure.
|
||||
"""
|
||||
if not settings.gitea_enabled:
|
||||
logger.info("Sovereignty: Gitea disabled — skipping report commit")
|
||||
return False
|
||||
|
||||
if not settings.gitea_token:
|
||||
logger.warning("Sovereignty: no Gitea token — skipping report commit")
|
||||
return False
|
||||
|
||||
date_str = datetime.now(UTC).strftime("%Y-%m-%d")
|
||||
file_path = f"reports/sovereignty/{date_str}_{session_id}.md"
|
||||
url = f"{settings.gitea_url}/api/v1/repos/{settings.gitea_repo}/contents/{file_path}"
|
||||
headers = {
|
||||
"Authorization": f"token {settings.gitea_token}",
|
||||
"Content-Type": "application/json",
|
||||
}
|
||||
encoded_content = base64.b64encode(report_md.encode()).decode()
|
||||
commit_message = (
|
||||
f"report: sovereignty session {session_id} ({date_str})\n\n"
|
||||
f"Auto-generated by Timmy. Refs #957"
|
||||
)
|
||||
payload: dict[str, Any] = {
|
||||
"message": commit_message,
|
||||
"content": encoded_content,
|
||||
}
|
||||
|
||||
try:
|
||||
with httpx.Client(timeout=10.0) as client:
|
||||
# Fetch existing file SHA so we can update rather than create
|
||||
check = client.get(url, headers=headers)
|
||||
if check.status_code == 200:
|
||||
existing = check.json()
|
||||
payload["sha"] = existing.get("sha", "")
|
||||
|
||||
resp = client.put(url, headers=headers, json=payload)
|
||||
resp.raise_for_status()
|
||||
|
||||
logger.info("Sovereignty: report committed to %s", file_path)
|
||||
return True
|
||||
|
||||
except httpx.HTTPStatusError as exc:
|
||||
logger.warning(
|
||||
"Sovereignty: commit failed (HTTP %s): %s",
|
||||
exc.response.status_code,
|
||||
exc,
|
||||
)
|
||||
return False
|
||||
except Exception as exc:
|
||||
logger.warning("Sovereignty: commit failed: %s", exc)
|
||||
return False
|
||||
|
||||
|
||||
async def generate_and_commit_report(session_id: str = "dashboard") -> bool:
|
||||
"""Generate and commit a sovereignty report for the current session.
|
||||
|
||||
Primary entry point — call at session end / application shutdown.
|
||||
Wraps the synchronous ``commit_report`` call in ``asyncio.to_thread``
|
||||
so it does not block the event loop.
|
||||
|
||||
Args:
|
||||
session_id: The session identifier.
|
||||
|
||||
Returns:
|
||||
``True`` if the report was generated and committed successfully.
|
||||
"""
|
||||
import asyncio
|
||||
|
||||
try:
|
||||
report_md = generate_report(session_id)
|
||||
logger.info("Sovereignty: report generated (%d chars)", len(report_md))
|
||||
committed = await asyncio.to_thread(commit_report, report_md, session_id)
|
||||
return committed
|
||||
except Exception as exc:
|
||||
logger.warning("Sovereignty: report generation failed: %s", exc)
|
||||
return False
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Internal helpers
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def _format_duration(seconds: float) -> str:
|
||||
"""Format a duration in seconds as a human-readable string."""
|
||||
total = int(seconds)
|
||||
hours, remainder = divmod(total, 3600)
|
||||
minutes, secs = divmod(remainder, 60)
|
||||
if hours:
|
||||
return f"{hours}h {minutes}m {secs}s"
|
||||
if minutes:
|
||||
return f"{minutes}m {secs}s"
|
||||
return f"{secs}s"
|
||||
|
||||
|
||||
def _gather_session_data() -> dict[str, Any]:
|
||||
"""Pull session statistics from the session logger.
|
||||
|
||||
Returns a dict with:
|
||||
- ``user_messages``, ``timmy_messages``, ``tool_calls``, ``errors``
|
||||
- ``tool_call_breakdown``: dict[tool_name, count]
|
||||
"""
|
||||
default: dict[str, Any] = {
|
||||
"user_messages": 0,
|
||||
"timmy_messages": 0,
|
||||
"tool_calls": 0,
|
||||
"errors": 0,
|
||||
"tool_call_breakdown": {},
|
||||
}
|
||||
|
||||
try:
|
||||
if get_session_logger is None:
|
||||
return default
|
||||
sl = get_session_logger()
|
||||
sl.flush()
|
||||
|
||||
# Read today's session file directly for accurate counts
|
||||
if not sl.session_file.exists():
|
||||
return default
|
||||
|
||||
entries: list[dict] = []
|
||||
with open(sl.session_file) as f:
|
||||
for line in f:
|
||||
line = line.strip()
|
||||
if line:
|
||||
try:
|
||||
entries.append(json.loads(line))
|
||||
except json.JSONDecodeError:
|
||||
continue
|
||||
|
||||
tool_breakdown: dict[str, int] = {}
|
||||
user_msgs = timmy_msgs = tool_calls = errors = 0
|
||||
|
||||
for entry in entries:
|
||||
etype = entry.get("type")
|
||||
if etype == "message":
|
||||
if entry.get("role") == "user":
|
||||
user_msgs += 1
|
||||
elif entry.get("role") == "timmy":
|
||||
timmy_msgs += 1
|
||||
elif etype == "tool_call":
|
||||
tool_calls += 1
|
||||
tool_name = entry.get("tool", "unknown")
|
||||
tool_breakdown[tool_name] = tool_breakdown.get(tool_name, 0) + 1
|
||||
elif etype == "error":
|
||||
errors += 1
|
||||
|
||||
return {
|
||||
"user_messages": user_msgs,
|
||||
"timmy_messages": timmy_msgs,
|
||||
"tool_calls": tool_calls,
|
||||
"errors": errors,
|
||||
"tool_call_breakdown": tool_breakdown,
|
||||
}
|
||||
|
||||
except Exception as exc:
|
||||
logger.warning("Sovereignty: failed to gather session data: %s", exc)
|
||||
return default
|
||||
|
||||
|
||||
def _gather_sovereignty_data() -> dict[str, Any]:
|
||||
"""Pull sovereignty metrics from the SQLite store.
|
||||
|
||||
Returns a dict with:
|
||||
- ``metrics``: summary from ``SovereigntyMetricsStore.get_summary()``
|
||||
- ``deltas``: per-metric start/end values within recent history window
|
||||
- ``previous_session``: most recent prior value for each metric
|
||||
"""
|
||||
try:
|
||||
if get_sovereignty_store is None:
|
||||
return {"metrics": {}, "deltas": {}, "previous_session": {}}
|
||||
store = get_sovereignty_store()
|
||||
summary = store.get_summary()
|
||||
|
||||
deltas: dict[str, dict[str, Any]] = {}
|
||||
previous_session: dict[str, float | None] = {}
|
||||
|
||||
for metric_type in GRADUATION_TARGETS:
|
||||
history = store.get_latest(metric_type, limit=10)
|
||||
if len(history) >= 2:
|
||||
deltas[metric_type] = {
|
||||
"start": history[-1]["value"],
|
||||
"end": history[0]["value"],
|
||||
}
|
||||
previous_session[metric_type] = history[1]["value"]
|
||||
elif len(history) == 1:
|
||||
deltas[metric_type] = {"start": history[0]["value"], "end": history[0]["value"]}
|
||||
previous_session[metric_type] = None
|
||||
else:
|
||||
deltas[metric_type] = {"start": None, "end": None}
|
||||
previous_session[metric_type] = None
|
||||
|
||||
return {
|
||||
"metrics": summary,
|
||||
"deltas": deltas,
|
||||
"previous_session": previous_session,
|
||||
}
|
||||
|
||||
except Exception as exc:
|
||||
logger.warning("Sovereignty: failed to gather sovereignty data: %s", exc)
|
||||
return {"metrics": {}, "deltas": {}, "previous_session": {}}
|
||||
|
||||
|
||||
def _render_markdown(
|
||||
now: datetime,
|
||||
session_id: str,
|
||||
duration_secs: float,
|
||||
session_data: dict[str, Any],
|
||||
sov_data: dict[str, Any],
|
||||
) -> str:
|
||||
"""Assemble the full sovereignty report in markdown."""
|
||||
lines: list[str] = []
|
||||
|
||||
# Header
|
||||
lines += [
|
||||
"# Sovereignty Session Report",
|
||||
"",
|
||||
f"**Session ID:** `{session_id}` ",
|
||||
f"**Date:** {now.strftime('%Y-%m-%d')} ",
|
||||
f"**Duration:** {_format_duration(duration_secs)} ",
|
||||
f"**Generated:** {now.isoformat()}",
|
||||
"",
|
||||
"---",
|
||||
"",
|
||||
]
|
||||
|
||||
# Session activity
|
||||
lines += [
|
||||
"## Session Activity",
|
||||
"",
|
||||
"| Metric | Count |",
|
||||
"|--------|-------|",
|
||||
f"| User messages | {session_data['user_messages']} |",
|
||||
f"| Timmy responses | {session_data['timmy_messages']} |",
|
||||
f"| Tool calls | {session_data['tool_calls']} |",
|
||||
f"| Errors | {session_data['errors']} |",
|
||||
"",
|
||||
]
|
||||
|
||||
tool_breakdown = session_data.get("tool_call_breakdown", {})
|
||||
if tool_breakdown:
|
||||
lines += ["### Model Calls by Tool", ""]
|
||||
for tool_name, count in sorted(tool_breakdown.items(), key=lambda x: -x[1]):
|
||||
lines.append(f"- `{tool_name}`: {count}")
|
||||
lines.append("")
|
||||
|
||||
# Sovereignty scorecard
|
||||
|
||||
lines += [
|
||||
"## Sovereignty Scorecard",
|
||||
"",
|
||||
"| Metric | Current | Target (graduation) | Phase |",
|
||||
"|--------|---------|---------------------|-------|",
|
||||
]
|
||||
|
||||
for metric_type, data in sov_data["metrics"].items():
|
||||
current = data.get("current")
|
||||
current_str = f"{current:.4f}" if current is not None else "N/A"
|
||||
grad_target = GRADUATION_TARGETS.get(metric_type, {}).get("graduation")
|
||||
grad_str = f"{grad_target:.4f}" if isinstance(grad_target, (int, float)) else "N/A"
|
||||
phase = data.get("phase", "unknown")
|
||||
lines.append(f"| {metric_type} | {current_str} | {grad_str} | {phase} |")
|
||||
|
||||
lines += ["", "### Sovereignty Delta (This Session)", ""]
|
||||
|
||||
for metric_type, delta_info in sov_data.get("deltas", {}).items():
|
||||
start_val = delta_info.get("start")
|
||||
end_val = delta_info.get("end")
|
||||
if start_val is not None and end_val is not None:
|
||||
diff = end_val - start_val
|
||||
sign = "+" if diff >= 0 else ""
|
||||
lines.append(
|
||||
f"- **{metric_type}**: {start_val:.4f} → {end_val:.4f} ({sign}{diff:.4f})"
|
||||
)
|
||||
else:
|
||||
lines.append(f"- **{metric_type}**: N/A (no data recorded)")
|
||||
|
||||
# Cost breakdown
|
||||
lines += ["", "## Cost Breakdown", ""]
|
||||
api_cost_data = sov_data["metrics"].get("api_cost", {})
|
||||
current_cost = api_cost_data.get("current")
|
||||
if current_cost is not None:
|
||||
lines.append(f"- **Total API spend (latest recorded):** ${current_cost:.4f}")
|
||||
else:
|
||||
lines.append("- **Total API spend:** N/A (no data recorded)")
|
||||
lines.append("")
|
||||
|
||||
# Per-layer sovereignty
|
||||
lines += [
|
||||
"## Per-Layer Sovereignty",
|
||||
"",
|
||||
"| Layer | Sovereignty % |",
|
||||
"|-------|--------------|",
|
||||
"| Perception (VLM) | N/A |",
|
||||
"| Decision (LLM) | N/A |",
|
||||
"| Narration (TTS) | N/A |",
|
||||
"",
|
||||
"> Per-layer tracking requires instrumented inference calls. See #957.",
|
||||
"",
|
||||
]
|
||||
|
||||
# Skills crystallized
|
||||
lines += [
|
||||
"## Skills Crystallized",
|
||||
"",
|
||||
"_Skill crystallization tracking not yet implemented. See #957._",
|
||||
"",
|
||||
]
|
||||
|
||||
# Trend vs previous session
|
||||
lines += ["## Trend vs Previous Session", ""]
|
||||
prev_data = sov_data.get("previous_session", {})
|
||||
has_prev = any(v is not None for v in prev_data.values())
|
||||
|
||||
if has_prev:
|
||||
lines += [
|
||||
"| Metric | Previous | Current | Change |",
|
||||
"|--------|----------|---------|--------|",
|
||||
]
|
||||
for metric_type, curr_info in sov_data["metrics"].items():
|
||||
curr_val = curr_info.get("current")
|
||||
prev_val = prev_data.get(metric_type)
|
||||
curr_str = f"{curr_val:.4f}" if curr_val is not None else "N/A"
|
||||
prev_str = f"{prev_val:.4f}" if prev_val is not None else "N/A"
|
||||
if curr_val is not None and prev_val is not None:
|
||||
diff = curr_val - prev_val
|
||||
sign = "+" if diff >= 0 else ""
|
||||
change_str = f"{sign}{diff:.4f}"
|
||||
else:
|
||||
change_str = "N/A"
|
||||
lines.append(f"| {metric_type} | {prev_str} | {curr_str} | {change_str} |")
|
||||
lines.append("")
|
||||
else:
|
||||
lines += ["_No previous session data available for comparison._", ""]
|
||||
|
||||
# Footer
|
||||
lines += [
|
||||
"---",
|
||||
"_Auto-generated by Timmy · Session Sovereignty Report · Refs: #957_",
|
||||
]
|
||||
|
||||
return "\n".join(lines)
|
||||
src/timmy/sovereignty/three_strike.py (new file, 482 lines)
@@ -0,0 +1,482 @@
|
||||
"""Three-Strike Detector for Repeated Manual Work.
|
||||
|
||||
Tracks recurring manual actions by category and key. When the same action
|
||||
is performed three or more times, it blocks further attempts and requires
|
||||
an automation artifact to be registered first.
|
||||
|
||||
Strike 1 (count=1): discovery — action proceeds normally
|
||||
Strike 2 (count=2): warning — action proceeds with a logged warning
|
||||
Strike 3 (count≥3): blocked — raises ThreeStrikeError; caller must
|
||||
register an automation artifact first
|
||||
|
||||
Governing principle: "If you do the same thing manually three times,
|
||||
you have failed to crystallise."
|
||||
|
||||
Categories tracked:
|
||||
- vlm_prompt_edit VLM prompt edits for the same UI element
|
||||
- game_bug_review Manual game-bug reviews for the same bug type
|
||||
- parameter_tuning Manual parameter tuning for the same parameter
|
||||
- portal_adapter_creation Manual portal-adapter creation for same pattern
|
||||
- deployment_step Manual deployment steps
|
||||
|
||||
The Falsework Checklist is enforced before cloud API calls via
|
||||
:func:`falsework_check`.
|
||||
|
||||
Refs: #962
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import json
|
||||
import logging
|
||||
import sqlite3
|
||||
from contextlib import closing
|
||||
from dataclasses import dataclass, field
|
||||
from datetime import UTC, datetime
|
||||
from pathlib import Path
|
||||
from typing import Any
|
||||
|
||||
from config import settings
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# ── Constants ────────────────────────────────────────────────────────────────
|
||||
|
||||
DB_PATH = Path(settings.repo_root) / "data" / "three_strike.db"
|
||||
|
||||
CATEGORIES = frozenset(
|
||||
{
|
||||
"vlm_prompt_edit",
|
||||
"game_bug_review",
|
||||
"parameter_tuning",
|
||||
"portal_adapter_creation",
|
||||
"deployment_step",
|
||||
}
|
||||
)
|
||||
|
||||
STRIKE_WARNING = 2
|
||||
STRIKE_BLOCK = 3
|
||||
|
||||
_SCHEMA = """
|
||||
CREATE TABLE IF NOT EXISTS strikes (
|
||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||
category TEXT NOT NULL,
|
||||
key TEXT NOT NULL,
|
||||
count INTEGER NOT NULL DEFAULT 0,
|
||||
blocked INTEGER NOT NULL DEFAULT 0,
|
||||
automation TEXT DEFAULT NULL,
|
||||
first_seen TEXT NOT NULL,
|
||||
last_seen TEXT NOT NULL
|
||||
);
|
||||
CREATE UNIQUE INDEX IF NOT EXISTS idx_strikes_cat_key ON strikes(category, key);
|
||||
CREATE INDEX IF NOT EXISTS idx_strikes_blocked ON strikes(blocked);
|
||||
|
||||
CREATE TABLE IF NOT EXISTS strike_events (
|
||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||
category TEXT NOT NULL,
|
||||
key TEXT NOT NULL,
|
||||
strike_num INTEGER NOT NULL,
|
||||
metadata TEXT DEFAULT '{}',
|
||||
timestamp TEXT NOT NULL
|
||||
);
|
||||
CREATE INDEX IF NOT EXISTS idx_se_cat_key ON strike_events(category, key);
|
||||
CREATE INDEX IF NOT EXISTS idx_se_ts ON strike_events(timestamp);
|
||||
"""
|
||||
|
||||
|
||||
# ── Exceptions ────────────────────────────────────────────────────────────────
|
||||
|
||||
|
||||
class ThreeStrikeError(RuntimeError):
|
||||
"""Raised when a manual action has reached the third strike.
|
||||
|
||||
Attributes:
|
||||
category: The action category (e.g. ``"vlm_prompt_edit"``).
|
||||
key: The specific action key (e.g. a UI element name).
|
||||
count: Total number of times this action has been recorded.
|
||||
"""
|
||||
|
||||
def __init__(self, category: str, key: str, count: int) -> None:
|
||||
self.category = category
|
||||
self.key = key
|
||||
self.count = count
|
||||
super().__init__(
|
||||
f"Three-strike block: '{category}/{key}' has been performed manually "
|
||||
f"{count} time(s). Register an automation artifact before continuing. "
|
||||
f"Run the Falsework Checklist (see three_strike.falsework_check)."
|
||||
)
|
||||
|
||||
|
||||
# ── Data classes ──────────────────────────────────────────────────────────────
|
||||
|
||||
|
||||
@dataclass
|
||||
class StrikeRecord:
|
||||
"""State for one (category, key) pair."""
|
||||
|
||||
category: str
|
||||
key: str
|
||||
count: int
|
||||
blocked: bool
|
||||
automation: str | None
|
||||
first_seen: str
|
||||
last_seen: str
|
||||
|
||||
|
||||
@dataclass
|
||||
class FalseworkChecklist:
|
||||
"""Pre-cloud-API call checklist — must be completed before making
|
||||
expensive external calls.
|
||||
|
||||
Instantiate and call :meth:`validate` to ensure all answers are provided.
|
||||
"""
|
||||
|
||||
durable_artifact: str = ""
|
||||
artifact_storage_path: str = ""
|
||||
local_rule_or_cache: str = ""
|
||||
will_repeat: bool | None = None
|
||||
elimination_strategy: str = ""
|
||||
sovereignty_delta: str = ""
|
||||
|
||||
# ── internal ──
|
||||
_errors: list[str] = field(default_factory=list, init=False, repr=False)
|
||||
|
||||
def validate(self) -> list[str]:
|
||||
"""Return a list of unanswered questions. Empty list → checklist passes."""
|
||||
self._errors = []
|
||||
if not self.durable_artifact.strip():
|
||||
self._errors.append("Q1: What durable artifact will this call produce?")
|
||||
if not self.artifact_storage_path.strip():
|
||||
self._errors.append("Q2: Where will the artifact be stored locally?")
|
||||
if not self.local_rule_or_cache.strip():
|
||||
self._errors.append("Q3: What local rule or cache will this populate?")
|
||||
if self.will_repeat is None:
|
||||
self._errors.append("Q4: After this call, will I need to make it again?")
|
||||
if self.will_repeat and not self.elimination_strategy.strip():
|
||||
self._errors.append("Q5: If yes, what would eliminate the repeat?")
|
||||
if not self.sovereignty_delta.strip():
|
||||
self._errors.append("Q6: What is the sovereignty delta of this call?")
|
||||
return self._errors
|
||||
|
||||
@property
|
||||
def passed(self) -> bool:
|
||||
"""True when :meth:`validate` found no unanswered questions."""
|
||||
return len(self.validate()) == 0
|
||||
|
||||
|
||||
# ── Store ─────────────────────────────────────────────────────────────────────
|
||||
|
||||
|
||||
class ThreeStrikeStore:
|
||||
"""SQLite-backed three-strike store.
|
||||
|
||||
Thread-safe: creates a new connection per operation.
|
||||
"""
|
||||
|
||||
def __init__(self, db_path: Path | None = None) -> None:
|
||||
self._db_path = db_path or DB_PATH
|
||||
self._init_db()
|
||||
|
||||
# ── setup ─────────────────────────────────────────────────────────────
|
||||
|
||||
def _init_db(self) -> None:
|
||||
try:
|
||||
self._db_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
with closing(sqlite3.connect(str(self._db_path))) as conn:
|
||||
conn.execute("PRAGMA journal_mode=WAL")
|
||||
conn.execute(f"PRAGMA busy_timeout={settings.db_busy_timeout_ms}")
|
||||
conn.executescript(_SCHEMA)
|
||||
conn.commit()
|
||||
except Exception as exc:
|
||||
logger.warning("Failed to initialise three-strike DB: %s", exc)
|
||||
|
||||
def _connect(self) -> sqlite3.Connection:
|
||||
conn = sqlite3.connect(str(self._db_path))
|
||||
conn.row_factory = sqlite3.Row
|
||||
conn.execute(f"PRAGMA busy_timeout={settings.db_busy_timeout_ms}")
|
||||
return conn
|
||||
|
||||
# ── record ────────────────────────────────────────────────────────────
|
||||
|
||||
def record(
|
||||
self,
|
||||
category: str,
|
||||
key: str,
|
||||
metadata: dict[str, Any] | None = None,
|
||||
) -> StrikeRecord:
|
||||
"""Record a manual action and return the updated :class:`StrikeRecord`.
|
||||
|
||||
Raises :exc:`ThreeStrikeError` when the action is already blocked
|
||||
(count ≥ STRIKE_BLOCK) and no automation has been registered.
|
||||
|
||||
Args:
|
||||
category: Action category; must be in :data:`CATEGORIES`.
|
||||
key: Specific identifier within the category.
|
||||
metadata: Optional context stored alongside the event.
|
||||
|
||||
Returns:
|
||||
The updated :class:`StrikeRecord`.
|
||||
|
||||
Raises:
|
||||
ValueError: If *category* is not in :data:`CATEGORIES`.
|
||||
ThreeStrikeError: On the third (or later) strike with no automation.
|
||||
"""
|
||||
if category not in CATEGORIES:
|
||||
raise ValueError(f"Unknown category '{category}'. Valid: {sorted(CATEGORIES)}")
|
||||
|
||||
now = datetime.now(UTC).isoformat()
|
||||
meta_json = json.dumps(metadata or {})
|
||||
|
||||
try:
|
||||
with closing(self._connect()) as conn:
|
||||
# Upsert the aggregate row
|
||||
conn.execute(
|
||||
"""
|
||||
INSERT INTO strikes (category, key, count, blocked, first_seen, last_seen)
|
||||
VALUES (?, ?, 1, 0, ?, ?)
|
||||
ON CONFLICT(category, key) DO UPDATE SET
|
||||
count = count + 1,
|
||||
last_seen = excluded.last_seen
|
||||
""",
|
||||
(category, key, now, now),
|
||||
)
|
||||
|
||||
row = conn.execute(
|
||||
"SELECT * FROM strikes WHERE category=? AND key=?",
|
||||
(category, key),
|
||||
).fetchone()
|
||||
count = row["count"]
|
||||
blocked = bool(row["blocked"])
|
||||
automation = row["automation"]
|
||||
|
||||
# Record the individual event
|
||||
conn.execute(
|
||||
"INSERT INTO strike_events (category, key, strike_num, metadata, timestamp) "
|
||||
"VALUES (?, ?, ?, ?, ?)",
|
||||
(category, key, count, meta_json, now),
|
||||
)
|
||||
|
||||
# Mark as blocked once threshold reached
|
||||
if count >= STRIKE_BLOCK and not blocked:
|
||||
conn.execute(
|
||||
"UPDATE strikes SET blocked=1 WHERE category=? AND key=?",
|
||||
(category, key),
|
||||
)
|
||||
blocked = True
|
||||
|
||||
conn.commit()
|
||||
|
||||
except ThreeStrikeError:
|
||||
raise
|
||||
except Exception as exc:
|
||||
logger.warning("Three-strike DB error during record: %s", exc)
|
||||
# Re-raise DB errors so callers are aware
|
||||
raise
|
||||
|
||||
record = StrikeRecord(
|
||||
category=category,
|
||||
key=key,
|
||||
count=count,
|
||||
blocked=blocked,
|
||||
automation=automation,
|
||||
first_seen=row["first_seen"],
|
||||
last_seen=now,
|
||||
)
|
||||
|
||||
self._emit_log(record)
|
||||
|
||||
if blocked and not automation:
|
||||
raise ThreeStrikeError(category=category, key=key, count=count)
|
||||
|
||||
return record
|
||||
|
||||
def _emit_log(self, record: StrikeRecord) -> None:
|
||||
"""Log a warning or info message based on strike number."""
|
||||
if record.count == STRIKE_WARNING:
|
||||
logger.warning(
|
||||
"Three-strike WARNING: '%s/%s' has been performed manually %d times. "
|
||||
"Consider writing an automation.",
|
||||
record.category,
|
||||
record.key,
|
||||
record.count,
|
||||
)
|
||||
elif record.count >= STRIKE_BLOCK:
|
||||
logger.warning(
|
||||
"Three-strike BLOCK: '%s/%s' reached %d strikes — automation required.",
|
||||
record.category,
|
||||
record.key,
|
||||
record.count,
|
||||
)
|
||||
else:
|
||||
logger.info(
|
||||
"Three-strike discovery: '%s/%s' — strike %d.",
|
||||
record.category,
|
||||
record.key,
|
||||
record.count,
|
||||
)
|
||||
|
||||
# ── automation registration ───────────────────────────────────────────
|
||||
|
||||
def register_automation(
|
||||
self,
|
||||
category: str,
|
||||
key: str,
|
||||
artifact_path: str,
|
||||
) -> None:
|
||||
"""Unblock a (category, key) pair by registering an automation artifact.
|
||||
|
||||
Once registered, future calls to :meth:`record` will proceed normally
|
||||
and the strike counter resets to zero.
|
||||
|
||||
Args:
|
||||
category: Action category.
|
||||
key: Specific identifier within the category.
|
||||
artifact_path: Path or identifier of the automation artifact.
|
||||
"""
|
||||
try:
|
||||
with closing(self._connect()) as conn:
|
||||
conn.execute(
|
||||
"UPDATE strikes SET automation=?, blocked=0, count=0 "
|
||||
"WHERE category=? AND key=?",
|
||||
(artifact_path, category, key),
|
||||
)
|
||||
conn.commit()
|
||||
logger.info(
|
||||
"Three-strike: automation registered for '%s/%s' → %s",
|
||||
category,
|
||||
key,
|
||||
artifact_path,
|
||||
)
|
||||
except Exception as exc:
|
||||
logger.warning("Failed to register automation: %s", exc)
|
||||
|
||||
# ── queries ───────────────────────────────────────────────────────────
|
||||
|
||||
def get(self, category: str, key: str) -> StrikeRecord | None:
|
||||
"""Return the :class:`StrikeRecord` for (category, key), or None."""
|
||||
try:
|
||||
with closing(self._connect()) as conn:
|
||||
row = conn.execute(
|
||||
"SELECT * FROM strikes WHERE category=? AND key=?",
|
||||
(category, key),
|
||||
).fetchone()
|
||||
if row is None:
|
||||
return None
|
||||
return StrikeRecord(
|
||||
category=row["category"],
|
||||
key=row["key"],
|
||||
count=row["count"],
|
||||
blocked=bool(row["blocked"]),
|
||||
automation=row["automation"],
|
||||
first_seen=row["first_seen"],
|
||||
last_seen=row["last_seen"],
|
||||
)
|
||||
except Exception as exc:
|
||||
logger.warning("Failed to query strike record: %s", exc)
|
||||
return None
|
||||
|
||||
def list_blocked(self) -> list[StrikeRecord]:
|
||||
"""Return all currently-blocked (category, key) pairs."""
|
||||
try:
|
||||
with closing(self._connect()) as conn:
|
||||
rows = conn.execute(
|
||||
"SELECT * FROM strikes WHERE blocked=1 ORDER BY last_seen DESC"
|
||||
).fetchall()
|
||||
return [
|
||||
StrikeRecord(
|
||||
category=r["category"],
|
||||
key=r["key"],
|
||||
count=r["count"],
|
||||
blocked=True,
|
||||
automation=r["automation"],
|
||||
first_seen=r["first_seen"],
|
||||
last_seen=r["last_seen"],
|
||||
)
|
||||
for r in rows
|
||||
]
|
||||
except Exception as exc:
|
||||
logger.warning("Failed to query blocked strikes: %s", exc)
|
||||
return []
|
||||
|
||||
def list_all(self) -> list[StrikeRecord]:
|
||||
"""Return all strike records ordered by last seen (most recent first)."""
|
||||
try:
|
||||
with closing(self._connect()) as conn:
|
||||
rows = conn.execute("SELECT * FROM strikes ORDER BY last_seen DESC").fetchall()
|
||||
return [
|
||||
StrikeRecord(
|
||||
category=r["category"],
|
||||
key=r["key"],
|
||||
count=r["count"],
|
||||
blocked=bool(r["blocked"]),
|
||||
automation=r["automation"],
|
||||
first_seen=r["first_seen"],
|
||||
last_seen=r["last_seen"],
|
||||
)
|
||||
for r in rows
|
||||
]
|
||||
except Exception as exc:
|
||||
logger.warning("Failed to list strike records: %s", exc)
|
||||
return []
|
||||
|
||||
def get_events(self, category: str, key: str, limit: int = 50) -> list[dict]:
|
||||
"""Return the individual strike events for (category, key)."""
|
||||
try:
|
||||
with closing(self._connect()) as conn:
|
||||
rows = conn.execute(
|
||||
"SELECT * FROM strike_events WHERE category=? AND key=? "
|
||||
"ORDER BY timestamp DESC LIMIT ?",
|
||||
(category, key, limit),
|
||||
).fetchall()
|
||||
return [
|
||||
{
|
||||
"strike_num": r["strike_num"],
|
||||
"timestamp": r["timestamp"],
|
||||
"metadata": json.loads(r["metadata"]) if r["metadata"] else {},
|
||||
}
|
||||
for r in rows
|
||||
]
|
||||
except Exception as exc:
|
||||
logger.warning("Failed to query strike events: %s", exc)
|
||||
return []
|
||||
|
||||
|
||||
# ── Falsework checklist helper ────────────────────────────────────────────────
|
||||
|
||||
|
||||
def falsework_check(checklist: FalseworkChecklist) -> None:
|
||||
"""Enforce the Falsework Checklist before a cloud API call.
|
||||
|
||||
Raises :exc:`ValueError` listing all unanswered questions if the checklist
|
||||
does not pass.
|
||||
|
||||
Usage::
|
||||
|
||||
checklist = FalseworkChecklist(
|
||||
durable_artifact="embedding vectors for UI element foo",
|
||||
artifact_storage_path="data/vlm/foo_embeddings.json",
|
||||
local_rule_or_cache="vlm_cache",
|
||||
will_repeat=False,
|
||||
sovereignty_delta="eliminates repeated VLM call",
|
||||
)
|
||||
falsework_check(checklist) # raises ValueError if incomplete
|
||||
"""
|
||||
errors = checklist.validate()
|
||||
if errors:
|
||||
raise ValueError(
|
||||
"Falsework Checklist incomplete — answer all questions before "
|
||||
"making a cloud API call:\n" + "\n".join(f" • {e}" for e in errors)
|
||||
)
|
||||
|
||||
|
||||
# ── Module-level singleton ────────────────────────────────────────────────────
|
||||
|
||||
_detector: ThreeStrikeStore | None = None
|
||||
|
||||
|
||||
def get_detector() -> ThreeStrikeStore:
|
||||
"""Return the module-level :class:`ThreeStrikeStore`, creating it once."""
|
||||
global _detector
|
||||
if _detector is None:
|
||||
_detector = ThreeStrikeStore()
|
||||
return _detector
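A hedged usage sketch of the detector; the key name and the artifact path are hypothetical examples, everything else follows the API defined above.

```python
# Sketch: recording manual work and registering an automation artifact.
from timmy.sovereignty.three_strike import ThreeStrikeError, get_detector

detector = get_detector()

try:
    record = detector.record("vlm_prompt_edit", "inventory_button")  # hypothetical key
    print(f"strike {record.count} (blocked={record.blocked})")
except ThreeStrikeError as exc:
    # Third strike: register an automation, which unblocks the pair and resets its count.
    detector.register_automation(
        exc.category,
        exc.key,
        artifact_path="scripts/auto_inventory_prompt.py",  # hypothetical artifact
    )
```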
|
||||
src/timmy/tools/__init__.py (new file, 94 lines)
@@ -0,0 +1,94 @@
|
||||
"""Tool integration for the agent swarm.
|
||||
|
||||
Provides agents with capabilities for:
|
||||
- File read/write (local filesystem)
|
||||
- Shell command execution (sandboxed)
|
||||
- Python code execution
|
||||
- Git operations
|
||||
- Image / Music / Video generation (creative pipeline)
|
||||
|
||||
Tools are assigned to agents based on their specialties.
|
||||
|
||||
Sub-modules:
|
||||
- _base: shared types, tracking state
|
||||
- file_tools: file-operation toolkit factories (Echo, Quill, Seer)
|
||||
- system_tools: calculator, AI tools, code/devops toolkit factories
|
||||
- _registry: full toolkit construction, agent registry, tool catalog
|
||||
"""
|
||||
|
||||
# Re-export everything for backward compatibility — callers that do
|
||||
# ``from timmy.tools import <symbol>`` continue to work unchanged.
|
||||
|
||||
from timmy.tools._base import (
|
||||
_AGNO_TOOLS_AVAILABLE,
|
||||
_TOOL_USAGE,
|
||||
AgentTools,
|
||||
PersonaTools,
|
||||
ToolStats,
|
||||
_ImportError,
|
||||
_track_tool_usage,
|
||||
get_tool_stats,
|
||||
)
|
||||
from timmy.tools._registry import (
|
||||
AGENT_TOOLKITS,
|
||||
PERSONA_TOOLKITS,
|
||||
_create_stub_toolkit,
|
||||
_merge_catalog,
|
||||
create_experiment_tools,
|
||||
create_full_toolkit,
|
||||
get_all_available_tools,
|
||||
get_tools_for_agent,
|
||||
get_tools_for_persona,
|
||||
)
|
||||
from timmy.tools.file_tools import (
|
||||
_make_smart_read_file,
|
||||
create_data_tools,
|
||||
create_research_tools,
|
||||
create_writing_tools,
|
||||
)
|
||||
from timmy.tools.system_tools import (
|
||||
_safe_eval,
|
||||
calculator,
|
||||
consult_grok,
|
||||
create_aider_tool,
|
||||
create_code_tools,
|
||||
create_devops_tools,
|
||||
create_security_tools,
|
||||
web_fetch,
|
||||
)
|
||||
|
||||
__all__ = [
|
||||
# _base
|
||||
"AgentTools",
|
||||
"PersonaTools",
|
||||
"ToolStats",
|
||||
"_AGNO_TOOLS_AVAILABLE",
|
||||
"_ImportError",
|
||||
"_TOOL_USAGE",
|
||||
"_track_tool_usage",
|
||||
"get_tool_stats",
|
||||
# file_tools
|
||||
"_make_smart_read_file",
|
||||
"create_data_tools",
|
||||
"create_research_tools",
|
||||
"create_writing_tools",
|
||||
# system_tools
|
||||
"_safe_eval",
|
||||
"calculator",
|
||||
"consult_grok",
|
||||
"create_aider_tool",
|
||||
"create_code_tools",
|
||||
"create_devops_tools",
|
||||
"create_security_tools",
|
||||
"web_fetch",
|
||||
# _registry
|
||||
"AGENT_TOOLKITS",
|
||||
"PERSONA_TOOLKITS",
|
||||
"_create_stub_toolkit",
|
||||
"_merge_catalog",
|
||||
"create_experiment_tools",
|
||||
"create_full_toolkit",
|
||||
"get_all_available_tools",
|
||||
"get_tools_for_agent",
|
||||
"get_tools_for_persona",
|
||||
]
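Because the package re-exports the old flat-module symbols, existing `from timmy.tools import ...` call sites keep working; a minimal sketch:

```python
# Sketch: both import styles resolve to the same objects after the package split.
from timmy.tools import calculator, get_tool_stats
from timmy.tools.system_tools import calculator as calculator_direct

assert calculator is calculator_direct
print(calculator("2**10"))   # "1024"
print(get_tool_stats())      # per-agent usage summary ({} before any tracked calls)
```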
|
||||
src/timmy/tools/_base.py (new file, 90 lines)
@@ -0,0 +1,90 @@
|
||||
"""Base types, shared state, and tracking for the Timmy tool system."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import logging
|
||||
from dataclasses import dataclass, field
|
||||
from datetime import UTC, datetime
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Lazy imports to handle test mocking
|
||||
_ImportError = None
|
||||
try:
|
||||
from agno.tools import Toolkit # noqa: F401
|
||||
from agno.tools.file import FileTools # noqa: F401
|
||||
from agno.tools.python import PythonTools # noqa: F401
|
||||
from agno.tools.shell import ShellTools # noqa: F401
|
||||
|
||||
_AGNO_TOOLS_AVAILABLE = True
|
||||
except ImportError as e:
|
||||
_AGNO_TOOLS_AVAILABLE = False
|
||||
_ImportError = e
|
||||
|
||||
# Track tool usage stats
|
||||
_TOOL_USAGE: dict[str, list[dict]] = {}
|
||||
|
||||
|
||||
@dataclass
|
||||
class ToolStats:
|
||||
"""Statistics for a single tool."""
|
||||
|
||||
tool_name: str
|
||||
call_count: int = 0
|
||||
last_used: str | None = None
|
||||
errors: int = 0
|
||||
|
||||
|
||||
@dataclass
|
||||
class AgentTools:
|
||||
"""Tools assigned to an agent."""
|
||||
|
||||
agent_id: str
|
||||
agent_name: str
|
||||
toolkit: Toolkit
|
||||
available_tools: list[str] = field(default_factory=list)
|
||||
|
||||
|
||||
# Backward-compat alias
|
||||
PersonaTools = AgentTools
|
||||
|
||||
|
||||
def _track_tool_usage(agent_id: str, tool_name: str, success: bool = True) -> None:
|
||||
"""Track tool usage for analytics."""
|
||||
if agent_id not in _TOOL_USAGE:
|
||||
_TOOL_USAGE[agent_id] = []
|
||||
_TOOL_USAGE[agent_id].append(
|
||||
{
|
||||
"tool": tool_name,
|
||||
"timestamp": datetime.now(UTC).isoformat(),
|
||||
"success": success,
|
||||
}
|
||||
)
|
||||
|
||||
|
||||
def get_tool_stats(agent_id: str | None = None) -> dict:
|
||||
"""Get tool usage statistics.
|
||||
|
||||
Args:
|
||||
agent_id: Optional agent ID to filter by. If None, returns stats for all agents.
|
||||
|
||||
Returns:
|
||||
Dict with tool usage statistics.
|
||||
"""
|
||||
if agent_id:
|
||||
usage = _TOOL_USAGE.get(agent_id, [])
|
||||
return {
|
||||
"agent_id": agent_id,
|
||||
"total_calls": len(usage),
|
||||
"tools_used": list(set(u["tool"] for u in usage)),
|
||||
"recent_calls": usage[-10:] if usage else [],
|
||||
}
|
||||
|
||||
# Return stats for all agents
|
||||
all_stats = {}
|
||||
for aid, usage in _TOOL_USAGE.items():
|
||||
all_stats[aid] = {
|
||||
"total_calls": len(usage),
|
||||
"tools_used": list(set(u["tool"] for u in usage)),
|
||||
}
|
||||
return all_stats
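A small sketch of the tracking helpers defined above; the agent id and tool name are illustrative.

```python
# Sketch: recording two tool calls and reading back the aggregated stats.
from timmy.tools._base import _track_tool_usage, get_tool_stats

_track_tool_usage("echo", "read_file", success=True)
_track_tool_usage("echo", "read_file", success=False)

stats = get_tool_stats("echo")
print(stats["total_calls"])                  # 2
print(stats["tools_used"])                   # ["read_file"]
print(stats["recent_calls"][-1]["success"])  # False
```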
|
||||
@@ -1,532 +1,48 @@
|
||||
"""Tool integration for the agent swarm.
|
||||
"""Tool registry, full toolkit construction, and tool catalog.
|
||||
|
||||
Provides agents with capabilities for:
|
||||
- File read/write (local filesystem)
|
||||
- Shell command execution (sandboxed)
|
||||
- Python code execution
|
||||
- Git operations
|
||||
- Image / Music / Video generation (creative pipeline)
|
||||
|
||||
Tools are assigned to agents based on their specialties.
|
||||
Provides:
|
||||
- Internal _register_* helpers for wiring tools into toolkits
|
||||
- create_full_toolkit (orchestrator toolkit)
|
||||
- create_experiment_tools (Lab agent toolkit)
|
||||
- AGENT_TOOLKITS / get_tools_for_agent registry
|
||||
- get_all_available_tools catalog
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import ast
|
||||
import logging
|
||||
import math
|
||||
from collections.abc import Callable
|
||||
from dataclasses import dataclass, field
|
||||
from datetime import UTC, datetime
|
||||
from pathlib import Path
|
||||
|
||||
from config import settings
|
||||
from timmy.tools._base import (
|
||||
_AGNO_TOOLS_AVAILABLE,
|
||||
FileTools,
|
||||
PythonTools,
|
||||
ShellTools,
|
||||
Toolkit,
|
||||
_ImportError,
|
||||
)
|
||||
from timmy.tools.file_tools import (
|
||||
_make_smart_read_file,
|
||||
create_data_tools,
|
||||
create_research_tools,
|
||||
create_writing_tools,
|
||||
)
|
||||
from timmy.tools.system_tools import (
|
||||
calculator,
|
||||
consult_grok,
|
||||
create_code_tools,
|
||||
create_devops_tools,
|
||||
create_security_tools,
|
||||
web_fetch,
|
||||
)
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Max characters of user query included in Lightning invoice memo
|
||||
_INVOICE_MEMO_MAX_LEN = 50
|
||||
|
||||
# Lazy imports to handle test mocking
|
||||
_ImportError = None
|
||||
try:
|
||||
from agno.tools import Toolkit
|
||||
from agno.tools.file import FileTools
|
||||
from agno.tools.python import PythonTools
|
||||
from agno.tools.shell import ShellTools
|
||||
|
||||
_AGNO_TOOLS_AVAILABLE = True
|
||||
except ImportError as e:
|
||||
_AGNO_TOOLS_AVAILABLE = False
|
||||
_ImportError = e
|
||||
|
||||
# Track tool usage stats
|
||||
_TOOL_USAGE: dict[str, list[dict]] = {}
|
||||
|
||||
|
||||
@dataclass
|
||||
class ToolStats:
|
||||
"""Statistics for a single tool."""
|
||||
|
||||
tool_name: str
|
||||
call_count: int = 0
|
||||
last_used: str | None = None
|
||||
errors: int = 0
|
||||
|
||||
|
||||
@dataclass
|
||||
class AgentTools:
|
||||
"""Tools assigned to an agent."""
|
||||
|
||||
agent_id: str
|
||||
agent_name: str
|
||||
toolkit: Toolkit
|
||||
available_tools: list[str] = field(default_factory=list)
|
||||
|
||||
|
||||
# Backward-compat alias
|
||||
PersonaTools = AgentTools
|
||||
|
||||
|
||||
def _track_tool_usage(agent_id: str, tool_name: str, success: bool = True) -> None:
|
||||
"""Track tool usage for analytics."""
|
||||
if agent_id not in _TOOL_USAGE:
|
||||
_TOOL_USAGE[agent_id] = []
|
||||
_TOOL_USAGE[agent_id].append(
|
||||
{
|
||||
"tool": tool_name,
|
||||
"timestamp": datetime.now(UTC).isoformat(),
|
||||
"success": success,
|
||||
}
|
||||
)
|
||||
|
||||
|
||||
def get_tool_stats(agent_id: str | None = None) -> dict:
|
||||
"""Get tool usage statistics.
|
||||
|
||||
Args:
|
||||
agent_id: Optional agent ID to filter by. If None, returns stats for all agents.
|
||||
|
||||
Returns:
|
||||
Dict with tool usage statistics.
|
||||
"""
|
||||
if agent_id:
|
||||
usage = _TOOL_USAGE.get(agent_id, [])
|
||||
return {
|
||||
"agent_id": agent_id,
|
||||
"total_calls": len(usage),
|
||||
"tools_used": list(set(u["tool"] for u in usage)),
|
||||
"recent_calls": usage[-10:] if usage else [],
|
||||
}
|
||||
|
||||
# Return stats for all agents
|
||||
all_stats = {}
|
||||
for aid, usage in _TOOL_USAGE.items():
|
||||
all_stats[aid] = {
|
||||
"total_calls": len(usage),
|
||||
"tools_used": list(set(u["tool"] for u in usage)),
|
||||
}
|
||||
return all_stats
|
||||
|
||||
|
||||
def _safe_eval(node, allowed_names: dict):
|
||||
"""Walk an AST and evaluate only safe numeric operations."""
|
||||
if isinstance(node, ast.Expression):
|
||||
return _safe_eval(node.body, allowed_names)
|
||||
if isinstance(node, ast.Constant):
|
||||
if isinstance(node.value, (int, float, complex)):
|
||||
return node.value
|
||||
raise ValueError(f"Unsupported constant: {node.value!r}")
|
||||
if isinstance(node, ast.UnaryOp):
|
||||
operand = _safe_eval(node.operand, allowed_names)
|
||||
if isinstance(node.op, ast.UAdd):
|
||||
return +operand
|
||||
if isinstance(node.op, ast.USub):
|
||||
return -operand
|
||||
raise ValueError(f"Unsupported unary op: {type(node.op).__name__}")
|
||||
if isinstance(node, ast.BinOp):
|
||||
left = _safe_eval(node.left, allowed_names)
|
||||
right = _safe_eval(node.right, allowed_names)
|
||||
ops = {
|
||||
ast.Add: lambda a, b: a + b,
|
||||
ast.Sub: lambda a, b: a - b,
|
||||
ast.Mult: lambda a, b: a * b,
|
||||
ast.Div: lambda a, b: a / b,
|
||||
ast.FloorDiv: lambda a, b: a // b,
|
||||
ast.Mod: lambda a, b: a % b,
|
||||
ast.Pow: lambda a, b: a**b,
|
||||
}
|
||||
op_fn = ops.get(type(node.op))
|
||||
if op_fn is None:
|
||||
raise ValueError(f"Unsupported binary op: {type(node.op).__name__}")
|
||||
return op_fn(left, right)
|
||||
if isinstance(node, ast.Name):
|
||||
if node.id in allowed_names:
|
||||
return allowed_names[node.id]
|
||||
raise ValueError(f"Unknown name: {node.id!r}")
|
||||
if isinstance(node, ast.Attribute):
|
||||
value = _safe_eval(node.value, allowed_names)
|
||||
# Only allow attribute access on the math module
|
||||
if value is math:
|
||||
attr = getattr(math, node.attr, None)
|
||||
if attr is not None:
|
||||
return attr
|
||||
raise ValueError(f"Attribute access not allowed: .{node.attr}")
|
||||
if isinstance(node, ast.Call):
|
||||
func = _safe_eval(node.func, allowed_names)
|
||||
if not callable(func):
|
||||
raise ValueError(f"Not callable: {func!r}")
|
||||
args = [_safe_eval(a, allowed_names) for a in node.args]
|
||||
kwargs = {kw.arg: _safe_eval(kw.value, allowed_names) for kw in node.keywords}
|
||||
return func(*args, **kwargs)
|
||||
raise ValueError(f"Unsupported syntax: {type(node).__name__}")
|
||||
|
||||
|
||||
def calculator(expression: str) -> str:
    """Evaluate a mathematical expression and return the exact result.

    Use this tool for ANY arithmetic: multiplication, division, square roots,
    exponents, percentages, logarithms, trigonometry, etc.

    Args:
        expression: A valid Python math expression, e.g. '347 * 829',
            'math.sqrt(17161)', '2**10', 'math.log(100, 10)'.

    Returns:
        The exact result as a string.
    """
    allowed_names = {k: getattr(math, k) for k in dir(math) if not k.startswith("_")}
    allowed_names["math"] = math
    allowed_names["abs"] = abs
    allowed_names["round"] = round
    allowed_names["min"] = min
    allowed_names["max"] = max
    try:
        tree = ast.parse(expression, mode="eval")
        result = _safe_eval(tree, allowed_names)
        return str(result)
    except Exception as e:  # broad catch intentional: arbitrary code execution
        return f"Error evaluating '{expression}': {e}"
|
||||
|
||||
|
||||
def _make_smart_read_file(file_tools: FileTools) -> Callable:
    """Wrap FileTools.read_file so directories auto-list their contents.

    When the user (or the LLM) passes a directory path to read_file,
    the raw Agno implementation throws an IsADirectoryError. This
    wrapper detects that case, lists the directory entries, and returns
    a helpful message so the model can pick the right file on its own.
    """
    original_read = file_tools.read_file

    def smart_read_file(file_name: str = "", encoding: str = "utf-8", **kwargs) -> str:
        """Reads the contents of the file `file_name` and returns the contents if successful."""
        # LLMs often call read_file(path=...) instead of read_file(file_name=...)
        if not file_name:
            file_name = kwargs.get("path", "")
        if not file_name:
            return "Error: no file_name or path provided."
        # Resolve the path the same way FileTools does
        _safe, resolved = file_tools.check_escape(file_name)
        if _safe and resolved.is_dir():
            entries = sorted(p.name for p in resolved.iterdir() if not p.name.startswith("."))
            listing = "\n".join(f" - {e}" for e in entries) if entries else " (empty directory)"
            return (
                f"'{file_name}' is a directory, not a file. "
                f"Files inside:\n{listing}\n\n"
                "Please call read_file with one of the files listed above."
            )
        return original_read(file_name, encoding=encoding)

    # Preserve the original docstring for Agno tool schema generation
    smart_read_file.__doc__ = original_read.__doc__
    return smart_read_file
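# Illustrative only (not part of this diff): the wrapper is what the toolkit
# factories below register under the read_file name, e.g.
#   file_tools = FileTools(base_dir=Path("/srv/timmy"))  # path is hypothetical
#   toolkit.register(_make_smart_read_file(file_tools), name="read_file")
# so read_file("docs/") returns a directory listing instead of raising.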
|
||||
|
||||
|
||||
def create_research_tools(base_dir: str | Path | None = None):
|
||||
"""Create tools for the research agent (Echo).
|
||||
|
||||
Includes: file reading
|
||||
"""
|
||||
if not _AGNO_TOOLS_AVAILABLE:
|
||||
raise ImportError(f"Agno tools not available: {_ImportError}")
|
||||
toolkit = Toolkit(name="research")
|
||||
|
||||
# File reading
|
||||
from config import settings
|
||||
|
||||
base_path = Path(base_dir) if base_dir else Path(settings.repo_root)
|
||||
file_tools = FileTools(base_dir=base_path)
|
||||
toolkit.register(_make_smart_read_file(file_tools), name="read_file")
|
||||
toolkit.register(file_tools.list_files, name="list_files")
|
||||
|
||||
return toolkit
|
||||
|
||||
|
||||
def create_code_tools(base_dir: str | Path | None = None):
|
||||
"""Create tools for the code agent (Forge).
|
||||
|
||||
Includes: shell commands, python execution, file read/write, Aider AI assist
|
||||
"""
|
||||
if not _AGNO_TOOLS_AVAILABLE:
|
||||
raise ImportError(f"Agno tools not available: {_ImportError}")
|
||||
toolkit = Toolkit(name="code")
|
||||
|
||||
# Shell commands (sandboxed)
|
||||
shell_tools = ShellTools()
|
||||
toolkit.register(shell_tools.run_shell_command, name="shell")
|
||||
|
||||
# Python execution
|
||||
python_tools = PythonTools()
|
||||
toolkit.register(python_tools.run_python_code, name="python")
|
||||
|
||||
# File operations
|
||||
from config import settings
|
||||
|
||||
base_path = Path(base_dir) if base_dir else Path(settings.repo_root)
|
||||
file_tools = FileTools(base_dir=base_path)
|
||||
toolkit.register(_make_smart_read_file(file_tools), name="read_file")
|
||||
toolkit.register(file_tools.save_file, name="write_file")
|
||||
toolkit.register(file_tools.list_files, name="list_files")
|
||||
|
||||
# Aider AI coding assistant (local with Ollama)
|
||||
aider_tool = create_aider_tool(base_path)
|
||||
toolkit.register(aider_tool.run_aider, name="aider")
|
||||
|
||||
return toolkit
|
||||
|
||||
|
||||
def create_aider_tool(base_path: Path):
|
||||
"""Create an Aider tool for AI-assisted coding."""
|
||||
import subprocess
|
||||
|
||||
class AiderTool:
|
||||
"""Tool that calls Aider (local AI coding assistant) for code generation."""
|
||||
|
||||
def __init__(self, base_dir: Path):
|
||||
self.base_dir = base_dir
|
||||
|
||||
def run_aider(self, prompt: str, model: str = "qwen3:30b") -> str:
|
||||
"""Run Aider to generate code changes.
|
||||
|
||||
Args:
|
||||
prompt: What you want Aider to do (e.g., "add a fibonacci function")
|
||||
model: Ollama model to use (default: qwen3:30b)
|
||||
|
||||
Returns:
|
||||
Aider's response with the code changes made
|
||||
"""
|
||||
try:
|
||||
# Run aider with the prompt
|
||||
result = subprocess.run(
|
||||
[
|
||||
"aider",
|
||||
"--no-git",
|
||||
"--model",
|
||||
f"ollama/{model}",
|
||||
"--quiet",
|
||||
prompt,
|
||||
],
|
||||
capture_output=True,
|
||||
text=True,
|
||||
timeout=120,
|
||||
cwd=str(self.base_dir),
|
||||
)
|
||||
|
||||
if result.returncode == 0:
|
||||
return result.stdout if result.stdout else "Code changes applied successfully"
|
||||
else:
|
||||
return f"Aider error: {result.stderr}"
|
||||
except FileNotFoundError:
|
||||
return "Error: Aider not installed. Run: pip install aider"
|
||||
except subprocess.TimeoutExpired:
|
||||
return "Error: Aider timed out after 120 seconds"
|
||||
except (OSError, subprocess.SubprocessError) as e:
|
||||
return f"Error running Aider: {str(e)}"
|
||||
|
||||
return AiderTool(base_path)
|
||||
|
||||
|
||||
def create_data_tools(base_dir: str | Path | None = None):
|
||||
"""Create tools for the data agent (Seer).
|
||||
|
||||
Includes: python execution, file reading, web search for data sources
|
||||
"""
|
||||
if not _AGNO_TOOLS_AVAILABLE:
|
||||
raise ImportError(f"Agno tools not available: {_ImportError}")
|
||||
toolkit = Toolkit(name="data")
|
||||
|
||||
# Python execution for analysis
|
||||
python_tools = PythonTools()
|
||||
toolkit.register(python_tools.run_python_code, name="python")
|
||||
|
||||
# File reading
|
||||
from config import settings
|
||||
|
||||
base_path = Path(base_dir) if base_dir else Path(settings.repo_root)
|
||||
file_tools = FileTools(base_dir=base_path)
|
||||
toolkit.register(_make_smart_read_file(file_tools), name="read_file")
|
||||
toolkit.register(file_tools.list_files, name="list_files")
|
||||
|
||||
return toolkit
|
||||
|
||||
|
||||
def create_writing_tools(base_dir: str | Path | None = None):
|
||||
"""Create tools for the writing agent (Quill).
|
||||
|
||||
Includes: file read/write
|
||||
"""
|
||||
if not _AGNO_TOOLS_AVAILABLE:
|
||||
raise ImportError(f"Agno tools not available: {_ImportError}")
|
||||
toolkit = Toolkit(name="writing")
|
||||
|
||||
# File operations
|
||||
base_path = Path(base_dir) if base_dir else Path(settings.repo_root)
|
||||
file_tools = FileTools(base_dir=base_path)
|
||||
toolkit.register(_make_smart_read_file(file_tools), name="read_file")
|
||||
toolkit.register(file_tools.save_file, name="write_file")
|
||||
toolkit.register(file_tools.list_files, name="list_files")
|
||||
|
||||
return toolkit
|
||||
|
||||
|
||||
def create_security_tools(base_dir: str | Path | None = None):
|
||||
"""Create tools for the security agent (Mace).
|
||||
|
||||
Includes: shell commands (for scanning), file read
|
||||
"""
|
||||
if not _AGNO_TOOLS_AVAILABLE:
|
||||
raise ImportError(f"Agno tools not available: {_ImportError}")
|
||||
toolkit = Toolkit(name="security")
|
||||
|
||||
# Shell for running security scans
|
||||
shell_tools = ShellTools()
|
||||
toolkit.register(shell_tools.run_shell_command, name="shell")
|
||||
|
||||
# File reading for logs/configs
|
||||
base_path = Path(base_dir) if base_dir else Path(settings.repo_root)
|
||||
file_tools = FileTools(base_dir=base_path)
|
||||
toolkit.register(_make_smart_read_file(file_tools), name="read_file")
|
||||
toolkit.register(file_tools.list_files, name="list_files")
|
||||
|
||||
return toolkit
|
||||
|
||||
|
||||
def create_devops_tools(base_dir: str | Path | None = None):
|
||||
"""Create tools for the DevOps agent (Helm).
|
||||
|
||||
Includes: shell commands, file read/write
|
||||
"""
|
||||
if not _AGNO_TOOLS_AVAILABLE:
|
||||
raise ImportError(f"Agno tools not available: {_ImportError}")
|
||||
toolkit = Toolkit(name="devops")
|
||||
|
||||
# Shell for deployment commands
|
||||
shell_tools = ShellTools()
|
||||
toolkit.register(shell_tools.run_shell_command, name="shell")
|
||||
|
||||
# File operations for config management
|
||||
base_path = Path(base_dir) if base_dir else Path(settings.repo_root)
|
||||
file_tools = FileTools(base_dir=base_path)
|
||||
toolkit.register(_make_smart_read_file(file_tools), name="read_file")
|
||||
toolkit.register(file_tools.save_file, name="write_file")
|
||||
toolkit.register(file_tools.list_files, name="list_files")
|
||||
|
||||
return toolkit
|
||||
|
||||
|
||||
def consult_grok(query: str) -> str:
|
||||
"""Consult Grok (xAI) for frontier reasoning on complex questions.
|
||||
|
||||
Use this tool when a question requires advanced reasoning, real-time
|
||||
knowledge, or capabilities beyond the local model. Grok is a premium
|
||||
cloud backend — use sparingly and only for high-complexity queries.
|
||||
|
||||
Args:
|
||||
query: The question or reasoning task to send to Grok.
|
||||
|
||||
Returns:
|
||||
Grok's response text, or an error/status message.
|
||||
"""
|
||||
from config import settings
|
||||
from timmy.backends import get_grok_backend, grok_available
|
||||
|
||||
if not grok_available():
|
||||
return (
|
||||
"Grok is not available. Enable with GROK_ENABLED=true "
|
||||
"and set XAI_API_KEY in your .env file."
|
||||
)
|
||||
|
||||
backend = get_grok_backend()
|
||||
|
||||
# Log to Spark if available
|
||||
try:
|
||||
from spark.engine import spark_engine
|
||||
|
||||
spark_engine.on_tool_executed(
|
||||
agent_id="default",
|
||||
tool_name="consult_grok",
|
||||
success=True,
|
||||
)
|
||||
except (ImportError, AttributeError) as exc:
|
||||
logger.warning("Tool execution failed (consult_grok logging): %s", exc)
|
||||
|
||||
# Generate Lightning invoice for monetization (unless free mode)
|
||||
invoice_info = ""
|
||||
if not settings.grok_free:
|
||||
try:
|
||||
from lightning.factory import get_backend as get_ln_backend
|
||||
|
||||
ln = get_ln_backend()
|
||||
sats = min(settings.grok_max_sats_per_query, settings.grok_sats_hard_cap)
|
||||
inv = ln.create_invoice(sats, f"Grok query: {query[:_INVOICE_MEMO_MAX_LEN]}")
|
||||
invoice_info = f"\n[Lightning invoice: {sats} sats — {inv.payment_request[:40]}...]"
|
||||
except (ImportError, OSError, ValueError) as exc:
|
||||
logger.error("Lightning invoice creation failed: %s", exc)
|
||||
return "Error: Failed to create Lightning invoice. Please check logs."
|
||||
|
||||
result = backend.run(query)
|
||||
|
||||
response = result.content
|
||||
if invoice_info:
|
||||
response += invoice_info
|
||||
|
||||
return response
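# Illustrative only (not part of this diff): when settings.grok_free is False,
# the reply is suffixed with an invoice line built from
# min(settings.grok_max_sats_per_query, settings.grok_sats_hard_cap), e.g.
#   [Lightning invoice: 21 sats — lnbc210n1p...]   # format from the f-string above; values hypothetical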
|
||||
|
||||
|
||||
def web_fetch(url: str, max_tokens: int = 4000) -> str:
|
||||
"""Fetch a web page and return its main text content.
|
||||
|
||||
Downloads the URL, extracts readable text using trafilatura, and
|
||||
truncates to a token budget. Use this to read full articles, docs,
|
||||
or blog posts that web_search only returns snippets for.
|
||||
|
||||
Args:
|
||||
url: The URL to fetch (must start with http:// or https://).
|
||||
max_tokens: Maximum approximate token budget (default 4000).
|
||||
Text is truncated to max_tokens * 4 characters.
|
||||
|
||||
Returns:
|
||||
Extracted text content, or an error message on failure.
|
||||
"""
|
||||
if not url or not url.startswith(("http://", "https://")):
|
||||
return f"Error: invalid URL — must start with http:// or https://: {url!r}"
|
||||
|
||||
try:
|
||||
import requests as _requests
|
||||
except ImportError:
|
||||
return "Error: 'requests' package is not installed. Install with: pip install requests"
|
||||
|
||||
try:
|
||||
import trafilatura
|
||||
except ImportError:
|
||||
return (
|
||||
"Error: 'trafilatura' package is not installed. Install with: pip install trafilatura"
|
||||
)
|
||||
|
||||
try:
|
||||
resp = _requests.get(
|
||||
url,
|
||||
timeout=15,
|
||||
headers={"User-Agent": "TimmyResearchBot/1.0"},
|
||||
)
|
||||
resp.raise_for_status()
|
||||
except _requests.exceptions.Timeout:
|
||||
return f"Error: request timed out after 15 seconds for {url}"
|
||||
except _requests.exceptions.HTTPError as exc:
|
||||
return f"Error: HTTP {exc.response.status_code} for {url}"
|
||||
except _requests.exceptions.RequestException as exc:
|
||||
return f"Error: failed to fetch {url} — {exc}"
|
||||
|
||||
text = trafilatura.extract(resp.text, include_tables=True, include_links=True)
|
||||
if not text:
|
||||
return f"Error: could not extract readable content from {url}"
|
||||
|
||||
char_budget = max_tokens * 4
|
||||
if len(text) > char_budget:
|
||||
text = text[:char_budget] + f"\n\n[…truncated to ~{max_tokens} tokens]"
|
||||
|
||||
return text
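# Illustrative only (not part of this diff): the token budget is approximated
# at 4 characters per token, so max_tokens=4000 caps the extract at 16,000
# characters before the truncation notice is appended.
#   text = web_fetch("https://example.com/post", max_tokens=1000)  # ~4,000 chars max; URL hypothetical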
|
||||
# ---------------------------------------------------------------------------
|
||||
# Internal _register_* helpers
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def _register_web_fetch_tool(toolkit: Toolkit) -> None:
|
||||
@@ -574,10 +90,10 @@ def _register_grok_tool(toolkit: Toolkit) -> None:
|
||||
def _register_memory_tools(toolkit: Toolkit) -> None:
|
||||
"""Register memory search, write, and forget tools."""
|
||||
try:
|
||||
from timmy.memory_system import memory_forget, memory_read, memory_search, memory_store

toolkit.register(memory_search, name="memory_search")
toolkit.register(memory_store, name="memory_write")
|
||||
toolkit.register(memory_read, name="memory_read")
|
||||
toolkit.register(memory_forget, name="memory_forget")
|
||||
except (ImportError, AttributeError) as exc:
|
||||
@@ -717,6 +233,11 @@ def _register_thinking_tools(toolkit: Toolkit) -> None:
|
||||
raise
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Full toolkit factories
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def create_full_toolkit(base_dir: str | Path | None = None):
|
||||
"""Create a full toolkit with all available tools (for the orchestrator).
|
||||
|
||||
@@ -727,6 +248,7 @@ def create_full_toolkit(base_dir: str | Path | None = None):
|
||||
# Return None when tools aren't available (tests)
|
||||
return None
|
||||
|
||||
from config import settings
|
||||
from timmy.tool_safety import DANGEROUS_TOOLS
|
||||
|
||||
toolkit = Toolkit(name="full")
|
||||
@@ -808,19 +330,9 @@ def create_experiment_tools(base_dir: str | Path | None = None):
|
||||
return toolkit
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Agent toolkit registry
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def _create_stub_toolkit(name: str):
|
||||
@@ -836,6 +348,21 @@ def _create_stub_toolkit(name: str):
|
||||
return toolkit
|
||||
|
||||
|
||||
# Mapping of agent IDs to their toolkits
|
||||
AGENT_TOOLKITS: dict[str, Callable[[], Toolkit]] = {
|
||||
"echo": create_research_tools,
|
||||
"mace": create_security_tools,
|
||||
"helm": create_devops_tools,
|
||||
"seer": create_data_tools,
|
||||
"forge": create_code_tools,
|
||||
"quill": create_writing_tools,
|
||||
"lab": create_experiment_tools,
|
||||
"pixel": lambda base_dir=None: _create_stub_toolkit("pixel"),
|
||||
"lyra": lambda base_dir=None: _create_stub_toolkit("lyra"),
|
||||
"reel": lambda base_dir=None: _create_stub_toolkit("reel"),
|
||||
}
|
||||
|
||||
|
||||
def get_tools_for_agent(agent_id: str, base_dir: str | Path | None = None) -> Toolkit | None:
|
||||
"""Get the appropriate toolkit for an agent.
|
||||
|
||||
@@ -852,11 +379,16 @@ def get_tools_for_agent(agent_id: str, base_dir: str | Path | None = None) -> To
|
||||
return None
|
||||
|
||||
|
||||
# Backward-compat aliases
get_tools_for_persona = get_tools_for_agent
PERSONA_TOOLKITS = AGENT_TOOLKITS
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Tool catalog
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def _core_tool_catalog() -> dict:
|
||||
"""Return core file and execution tools catalog entries."""
|
||||
return {
|
||||
src/timmy/tools/file_tools.py (new file, 121 lines)
@@ -0,0 +1,121 @@
|
||||
"""File operation tools and agent toolkit factories for file-heavy agents.
|
||||
|
||||
Provides:
|
||||
- Smart read_file wrapper (auto-lists directories)
|
||||
- Toolkit factories for Echo (research), Quill (writing), Seer (data)
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import logging
|
||||
from collections.abc import Callable
|
||||
from pathlib import Path
|
||||
|
||||
from timmy.tools._base import (
|
||||
_AGNO_TOOLS_AVAILABLE,
|
||||
FileTools,
|
||||
PythonTools,
|
||||
Toolkit,
|
||||
_ImportError,
|
||||
)
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def _make_smart_read_file(file_tools: FileTools) -> Callable:
|
||||
"""Wrap FileTools.read_file so directories auto-list their contents.
|
||||
|
||||
When the user (or the LLM) passes a directory path to read_file,
|
||||
the raw Agno implementation throws an IsADirectoryError. This
|
||||
wrapper detects that case, lists the directory entries, and returns
|
||||
a helpful message so the model can pick the right file on its own.
|
||||
"""
|
||||
original_read = file_tools.read_file
|
||||
|
||||
def smart_read_file(file_name: str = "", encoding: str = "utf-8", **kwargs) -> str:
|
||||
"""Reads the contents of the file `file_name` and returns the contents if successful."""
|
||||
# LLMs often call read_file(path=...) instead of read_file(file_name=...)
|
||||
if not file_name:
|
||||
file_name = kwargs.get("path", "")
|
||||
if not file_name:
|
||||
return "Error: no file_name or path provided."
|
||||
# Resolve the path the same way FileTools does
|
||||
_safe, resolved = file_tools.check_escape(file_name)
|
||||
if _safe and resolved.is_dir():
|
||||
entries = sorted(p.name for p in resolved.iterdir() if not p.name.startswith("."))
|
||||
listing = "\n".join(f" - {e}" for e in entries) if entries else " (empty directory)"
|
||||
return (
|
||||
f"'{file_name}' is a directory, not a file. "
|
||||
f"Files inside:\n{listing}\n\n"
|
||||
"Please call read_file with one of the files listed above."
|
||||
)
|
||||
return original_read(file_name, encoding=encoding)
|
||||
|
||||
# Preserve the original docstring for Agno tool schema generation
|
||||
smart_read_file.__doc__ = original_read.__doc__
|
||||
return smart_read_file
|
||||
|
||||
|
||||
def create_research_tools(base_dir: str | Path | None = None):
|
||||
"""Create tools for the research agent (Echo).
|
||||
|
||||
Includes: file reading
|
||||
"""
|
||||
if not _AGNO_TOOLS_AVAILABLE:
|
||||
raise ImportError(f"Agno tools not available: {_ImportError}")
|
||||
toolkit = Toolkit(name="research")
|
||||
|
||||
# File reading
|
||||
from config import settings
|
||||
|
||||
base_path = Path(base_dir) if base_dir else Path(settings.repo_root)
|
||||
file_tools = FileTools(base_dir=base_path)
|
||||
toolkit.register(_make_smart_read_file(file_tools), name="read_file")
|
||||
toolkit.register(file_tools.list_files, name="list_files")
|
||||
|
||||
return toolkit
|
||||
|
||||
|
||||
def create_writing_tools(base_dir: str | Path | None = None):
|
||||
"""Create tools for the writing agent (Quill).
|
||||
|
||||
Includes: file read/write
|
||||
"""
|
||||
if not _AGNO_TOOLS_AVAILABLE:
|
||||
raise ImportError(f"Agno tools not available: {_ImportError}")
|
||||
toolkit = Toolkit(name="writing")
|
||||
|
||||
# File operations
|
||||
from config import settings
|
||||
|
||||
base_path = Path(base_dir) if base_dir else Path(settings.repo_root)
|
||||
file_tools = FileTools(base_dir=base_path)
|
||||
toolkit.register(_make_smart_read_file(file_tools), name="read_file")
|
||||
toolkit.register(file_tools.save_file, name="write_file")
|
||||
toolkit.register(file_tools.list_files, name="list_files")
|
||||
|
||||
return toolkit
|
||||
|
||||
|
||||
def create_data_tools(base_dir: str | Path | None = None):
|
||||
"""Create tools for the data agent (Seer).
|
||||
|
||||
Includes: python execution, file reading, web search for data sources
|
||||
"""
|
||||
if not _AGNO_TOOLS_AVAILABLE:
|
||||
raise ImportError(f"Agno tools not available: {_ImportError}")
|
||||
toolkit = Toolkit(name="data")
|
||||
|
||||
# Python execution for analysis
|
||||
python_tools = PythonTools()
|
||||
toolkit.register(python_tools.run_python_code, name="python")
|
||||
|
||||
# File reading
|
||||
from config import settings
|
||||
|
||||
base_path = Path(base_dir) if base_dir else Path(settings.repo_root)
|
||||
file_tools = FileTools(base_dir=base_path)
|
||||
toolkit.register(_make_smart_read_file(file_tools), name="read_file")
|
||||
toolkit.register(file_tools.list_files, name="list_files")
|
||||
|
||||
return toolkit
|
||||
src/timmy/tools/system_tools.py (new file, 357 lines)
@@ -0,0 +1,357 @@
|
||||
"""System, calculation, and AI consultation tools for Timmy agents.
|
||||
|
||||
Provides:
|
||||
- Safe AST-based calculator
|
||||
- consult_grok (xAI frontier reasoning)
|
||||
- web_fetch (content extraction)
|
||||
- Toolkit factories for Forge (code), Mace (security), Helm (devops)
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import ast
|
||||
import logging
|
||||
import math
|
||||
import subprocess
|
||||
from pathlib import Path
|
||||
|
||||
from timmy.tools._base import (
|
||||
_AGNO_TOOLS_AVAILABLE,
|
||||
FileTools,
|
||||
PythonTools,
|
||||
ShellTools,
|
||||
Toolkit,
|
||||
_ImportError,
|
||||
)
|
||||
from timmy.tools.file_tools import _make_smart_read_file
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Max characters of user query included in Lightning invoice memo
|
||||
_INVOICE_MEMO_MAX_LEN = 50
|
||||
|
||||
|
||||
def _safe_eval(node, allowed_names: dict):
|
||||
"""Walk an AST and evaluate only safe numeric operations."""
|
||||
if isinstance(node, ast.Expression):
|
||||
return _safe_eval(node.body, allowed_names)
|
||||
if isinstance(node, ast.Constant):
|
||||
if isinstance(node.value, (int, float, complex)):
|
||||
return node.value
|
||||
raise ValueError(f"Unsupported constant: {node.value!r}")
|
||||
if isinstance(node, ast.UnaryOp):
|
||||
operand = _safe_eval(node.operand, allowed_names)
|
||||
if isinstance(node.op, ast.UAdd):
|
||||
return +operand
|
||||
if isinstance(node.op, ast.USub):
|
||||
return -operand
|
||||
raise ValueError(f"Unsupported unary op: {type(node.op).__name__}")
|
||||
if isinstance(node, ast.BinOp):
|
||||
left = _safe_eval(node.left, allowed_names)
|
||||
right = _safe_eval(node.right, allowed_names)
|
||||
ops = {
|
||||
ast.Add: lambda a, b: a + b,
|
||||
ast.Sub: lambda a, b: a - b,
|
||||
ast.Mult: lambda a, b: a * b,
|
||||
ast.Div: lambda a, b: a / b,
|
||||
ast.FloorDiv: lambda a, b: a // b,
|
||||
ast.Mod: lambda a, b: a % b,
|
||||
ast.Pow: lambda a, b: a**b,
|
||||
}
|
||||
op_fn = ops.get(type(node.op))
|
||||
if op_fn is None:
|
||||
raise ValueError(f"Unsupported binary op: {type(node.op).__name__}")
|
||||
return op_fn(left, right)
|
||||
if isinstance(node, ast.Name):
|
||||
if node.id in allowed_names:
|
||||
return allowed_names[node.id]
|
||||
raise ValueError(f"Unknown name: {node.id!r}")
|
||||
if isinstance(node, ast.Attribute):
|
||||
value = _safe_eval(node.value, allowed_names)
|
||||
# Only allow attribute access on the math module
|
||||
if value is math:
|
||||
attr = getattr(math, node.attr, None)
|
||||
if attr is not None:
|
||||
return attr
|
||||
raise ValueError(f"Attribute access not allowed: .{node.attr}")
|
||||
if isinstance(node, ast.Call):
|
||||
func = _safe_eval(node.func, allowed_names)
|
||||
if not callable(func):
|
||||
raise ValueError(f"Not callable: {func!r}")
|
||||
args = [_safe_eval(a, allowed_names) for a in node.args]
|
||||
kwargs = {kw.arg: _safe_eval(kw.value, allowed_names) for kw in node.keywords}
|
||||
return func(*args, **kwargs)
|
||||
raise ValueError(f"Unsupported syntax: {type(node).__name__}")
|
||||
|
||||
|
||||
def calculator(expression: str) -> str:
|
||||
"""Evaluate a mathematical expression and return the exact result.
|
||||
|
||||
Use this tool for ANY arithmetic: multiplication, division, square roots,
|
||||
exponents, percentages, logarithms, trigonometry, etc.
|
||||
|
||||
Args:
|
||||
expression: A valid Python math expression, e.g. '347 * 829',
|
||||
'math.sqrt(17161)', '2**10', 'math.log(100, 10)'.
|
||||
|
||||
Returns:
|
||||
The exact result as a string.
|
||||
"""
|
||||
allowed_names = {k: getattr(math, k) for k in dir(math) if not k.startswith("_")}
|
||||
allowed_names["math"] = math
|
||||
allowed_names["abs"] = abs
|
||||
allowed_names["round"] = round
|
||||
allowed_names["min"] = min
|
||||
allowed_names["max"] = max
|
||||
try:
|
||||
tree = ast.parse(expression, mode="eval")
|
||||
result = _safe_eval(tree, allowed_names)
|
||||
return str(result)
|
||||
except Exception as e: # broad catch intentional: arbitrary code execution
|
||||
return f"Error evaluating '{expression}': {e}"
|
||||
|
||||
|
||||
def consult_grok(query: str) -> str:
|
||||
"""Consult Grok (xAI) for frontier reasoning on complex questions.
|
||||
|
||||
Use this tool when a question requires advanced reasoning, real-time
|
||||
knowledge, or capabilities beyond the local model. Grok is a premium
|
||||
cloud backend — use sparingly and only for high-complexity queries.
|
||||
|
||||
Args:
|
||||
query: The question or reasoning task to send to Grok.
|
||||
|
||||
Returns:
|
||||
Grok's response text, or an error/status message.
|
||||
"""
|
||||
from config import settings
|
||||
from timmy.backends import get_grok_backend, grok_available
|
||||
|
||||
if not grok_available():
|
||||
return (
|
||||
"Grok is not available. Enable with GROK_ENABLED=true "
|
||||
"and set XAI_API_KEY in your .env file."
|
||||
)
|
||||
|
||||
backend = get_grok_backend()
|
||||
|
||||
# Log to Spark if available
|
||||
try:
|
||||
from spark.engine import spark_engine
|
||||
|
||||
spark_engine.on_tool_executed(
|
||||
agent_id="default",
|
||||
tool_name="consult_grok",
|
||||
success=True,
|
||||
)
|
||||
except (ImportError, AttributeError) as exc:
|
||||
logger.warning("Tool execution failed (consult_grok logging): %s", exc)
|
||||
|
||||
# Generate Lightning invoice for monetization (unless free mode)
|
||||
invoice_info = ""
|
||||
if not settings.grok_free:
|
||||
try:
|
||||
from lightning.factory import get_backend as get_ln_backend
|
||||
|
||||
ln = get_ln_backend()
|
||||
sats = min(settings.grok_max_sats_per_query, settings.grok_sats_hard_cap)
|
||||
inv = ln.create_invoice(sats, f"Grok query: {query[:_INVOICE_MEMO_MAX_LEN]}")
|
||||
invoice_info = f"\n[Lightning invoice: {sats} sats — {inv.payment_request[:40]}...]"
|
||||
except (ImportError, OSError, ValueError) as exc:
|
||||
logger.error("Lightning invoice creation failed: %s", exc)
|
||||
return "Error: Failed to create Lightning invoice. Please check logs."
|
||||
|
||||
result = backend.run(query)
|
||||
|
||||
response = result.content
|
||||
if invoice_info:
|
||||
response += invoice_info
|
||||
|
||||
return response
|
||||
|
||||
|
||||
def web_fetch(url: str, max_tokens: int = 4000) -> str:
|
||||
"""Fetch a web page and return its main text content.
|
||||
|
||||
Downloads the URL, extracts readable text using trafilatura, and
|
||||
truncates to a token budget. Use this to read full articles, docs,
|
||||
or blog posts that web_search only returns snippets for.
|
||||
|
||||
Args:
|
||||
url: The URL to fetch (must start with http:// or https://).
|
||||
max_tokens: Maximum approximate token budget (default 4000).
|
||||
Text is truncated to max_tokens * 4 characters.
|
||||
|
||||
Returns:
|
||||
Extracted text content, or an error message on failure.
|
||||
"""
|
||||
if not url or not url.startswith(("http://", "https://")):
|
||||
return f"Error: invalid URL — must start with http:// or https://: {url!r}"
|
||||
|
||||
try:
|
||||
import requests as _requests
|
||||
except ImportError:
|
||||
return "Error: 'requests' package is not installed. Install with: pip install requests"
|
||||
|
||||
try:
|
||||
import trafilatura
|
||||
except ImportError:
|
||||
return (
|
||||
"Error: 'trafilatura' package is not installed. Install with: pip install trafilatura"
|
||||
)
|
||||
|
||||
try:
|
||||
resp = _requests.get(
|
||||
url,
|
||||
timeout=15,
|
||||
headers={"User-Agent": "TimmyResearchBot/1.0"},
|
||||
)
|
||||
resp.raise_for_status()
|
||||
except _requests.exceptions.Timeout:
|
||||
return f"Error: request timed out after 15 seconds for {url}"
|
||||
except _requests.exceptions.HTTPError as exc:
|
||||
return f"Error: HTTP {exc.response.status_code} for {url}"
|
||||
except _requests.exceptions.RequestException as exc:
|
||||
return f"Error: failed to fetch {url} — {exc}"
|
||||
|
||||
text = trafilatura.extract(resp.text, include_tables=True, include_links=True)
|
||||
if not text:
|
||||
return f"Error: could not extract readable content from {url}"
|
||||
|
||||
char_budget = max_tokens * 4
|
||||
if len(text) > char_budget:
|
||||
text = text[:char_budget] + f"\n\n[…truncated to ~{max_tokens} tokens]"
|
||||
|
||||
return text
|
||||
|
||||
|
||||
def create_aider_tool(base_path: Path):
|
||||
"""Create an Aider tool for AI-assisted coding."""
|
||||
|
||||
class AiderTool:
|
||||
"""Tool that calls Aider (local AI coding assistant) for code generation."""
|
||||
|
||||
def __init__(self, base_dir: Path):
|
||||
self.base_dir = base_dir
|
||||
|
||||
def run_aider(self, prompt: str, model: str = "qwen3:30b") -> str:
|
||||
"""Run Aider to generate code changes.
|
||||
|
||||
Args:
|
||||
prompt: What you want Aider to do (e.g., "add a fibonacci function")
|
||||
model: Ollama model to use (default: qwen3:30b)
|
||||
|
||||
Returns:
|
||||
Aider's response with the code changes made
|
||||
"""
|
||||
try:
|
||||
# Run aider with the prompt
|
||||
result = subprocess.run(
|
||||
[
|
||||
"aider",
|
||||
"--no-git",
|
||||
"--model",
|
||||
f"ollama/{model}",
|
||||
"--quiet",
|
||||
prompt,
|
||||
],
|
||||
capture_output=True,
|
||||
text=True,
|
||||
timeout=120,
|
||||
cwd=str(self.base_dir),
|
||||
)
|
||||
|
||||
if result.returncode == 0:
|
||||
return result.stdout if result.stdout else "Code changes applied successfully"
|
||||
else:
|
||||
return f"Aider error: {result.stderr}"
|
||||
except FileNotFoundError:
|
||||
return "Error: Aider not installed. Run: pip install aider"
|
||||
except subprocess.TimeoutExpired:
|
||||
return "Error: Aider timed out after 120 seconds"
|
||||
except (OSError, subprocess.SubprocessError) as e:
|
||||
return f"Error running Aider: {str(e)}"
|
||||
|
||||
return AiderTool(base_path)
|
||||
|
||||
|
||||
def create_code_tools(base_dir: str | Path | None = None):
|
||||
"""Create tools for the code agent (Forge).
|
||||
|
||||
Includes: shell commands, python execution, file read/write, Aider AI assist
|
||||
"""
|
||||
if not _AGNO_TOOLS_AVAILABLE:
|
||||
raise ImportError(f"Agno tools not available: {_ImportError}")
|
||||
toolkit = Toolkit(name="code")
|
||||
|
||||
# Shell commands (sandboxed)
|
||||
shell_tools = ShellTools()
|
||||
toolkit.register(shell_tools.run_shell_command, name="shell")
|
||||
|
||||
# Python execution
|
||||
python_tools = PythonTools()
|
||||
toolkit.register(python_tools.run_python_code, name="python")
|
||||
|
||||
# File operations
|
||||
from config import settings
|
||||
|
||||
base_path = Path(base_dir) if base_dir else Path(settings.repo_root)
|
||||
file_tools = FileTools(base_dir=base_path)
|
||||
toolkit.register(_make_smart_read_file(file_tools), name="read_file")
|
||||
toolkit.register(file_tools.save_file, name="write_file")
|
||||
toolkit.register(file_tools.list_files, name="list_files")
|
||||
|
||||
# Aider AI coding assistant (local with Ollama)
|
||||
aider_tool = create_aider_tool(base_path)
|
||||
toolkit.register(aider_tool.run_aider, name="aider")
|
||||
|
||||
return toolkit
|
||||
|
||||
|
||||
def create_security_tools(base_dir: str | Path | None = None):
|
||||
"""Create tools for the security agent (Mace).
|
||||
|
||||
Includes: shell commands (for scanning), file read
|
||||
"""
|
||||
if not _AGNO_TOOLS_AVAILABLE:
|
||||
raise ImportError(f"Agno tools not available: {_ImportError}")
|
||||
toolkit = Toolkit(name="security")
|
||||
|
||||
# Shell for running security scans
|
||||
shell_tools = ShellTools()
|
||||
toolkit.register(shell_tools.run_shell_command, name="shell")
|
||||
|
||||
# File reading for logs/configs
|
||||
from config import settings
|
||||
|
||||
base_path = Path(base_dir) if base_dir else Path(settings.repo_root)
|
||||
file_tools = FileTools(base_dir=base_path)
|
||||
toolkit.register(_make_smart_read_file(file_tools), name="read_file")
|
||||
toolkit.register(file_tools.list_files, name="list_files")
|
||||
|
||||
return toolkit
|
||||
|
||||
|
||||
def create_devops_tools(base_dir: str | Path | None = None):
|
||||
"""Create tools for the DevOps agent (Helm).
|
||||
|
||||
Includes: shell commands, file read/write
|
||||
"""
|
||||
if not _AGNO_TOOLS_AVAILABLE:
|
||||
raise ImportError(f"Agno tools not available: {_ImportError}")
|
||||
toolkit = Toolkit(name="devops")
|
||||
|
||||
# Shell for deployment commands
|
||||
shell_tools = ShellTools()
|
||||
toolkit.register(shell_tools.run_shell_command, name="shell")
|
||||
|
||||
# File operations for config management
|
||||
from config import settings
|
||||
|
||||
base_path = Path(base_dir) if base_dir else Path(settings.repo_root)
|
||||
file_tools = FileTools(base_dir=base_path)
|
||||
toolkit.register(_make_smart_read_file(file_tools), name="read_file")
|
||||
toolkit.register(file_tools.save_file, name="write_file")
|
||||
toolkit.register(file_tools.list_files, name="list_files")
|
||||
|
||||
return toolkit
|
||||
@@ -2714,3 +2714,74 @@
|
||||
padding: 0.3rem 0.6rem;
|
||||
margin-bottom: 0.5rem;
|
||||
}
|
||||
|
||||
/* ── Self-Correction Dashboard ─────────────────────────────── */
|
||||
.sc-event {
|
||||
border-left: 3px solid var(--border);
|
||||
padding: 0.6rem 0.8rem;
|
||||
margin-bottom: 0.75rem;
|
||||
background: rgba(255,255,255,0.02);
|
||||
border-radius: 0 4px 4px 0;
|
||||
font-size: 0.82rem;
|
||||
}
|
||||
.sc-event.sc-status-success { border-left-color: var(--green); }
|
||||
.sc-event.sc-status-partial { border-left-color: var(--amber); }
|
||||
.sc-event.sc-status-failed { border-left-color: var(--red); }
|
||||
|
||||
.sc-event-header {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
gap: 0.5rem;
|
||||
margin-bottom: 0.4rem;
|
||||
flex-wrap: wrap;
|
||||
}
|
||||
.sc-status-badge {
|
||||
font-size: 0.68rem;
|
||||
font-weight: 700;
|
||||
letter-spacing: 0.06em;
|
||||
padding: 0.15rem 0.45rem;
|
||||
border-radius: 3px;
|
||||
}
|
||||
.sc-status-badge.sc-status-success { color: var(--green); background: rgba(0,255,136,0.08); }
|
||||
.sc-status-badge.sc-status-partial { color: var(--amber); background: rgba(255,179,0,0.08); }
|
||||
.sc-status-badge.sc-status-failed { color: var(--red); background: rgba(255,59,59,0.08); }
|
||||
|
||||
.sc-source-badge {
|
||||
font-size: 0.68rem;
|
||||
color: var(--purple);
|
||||
background: rgba(168,85,247,0.1);
|
||||
padding: 0.1rem 0.4rem;
|
||||
border-radius: 3px;
|
||||
}
|
||||
.sc-event-time { font-size: 0.68rem; color: var(--text-dim); margin-left: auto; }
|
||||
.sc-event-error-type {
|
||||
font-size: 0.72rem;
|
||||
color: var(--amber);
|
||||
font-weight: 600;
|
||||
margin-bottom: 0.3rem;
|
||||
letter-spacing: 0.04em;
|
||||
}
|
||||
.sc-label {
|
||||
font-size: 0.65rem;
|
||||
font-weight: 700;
|
||||
letter-spacing: 0.06em;
|
||||
color: var(--text-dim);
|
||||
margin-right: 0.3rem;
|
||||
}
|
||||
.sc-event-intent, .sc-event-error, .sc-event-strategy, .sc-event-outcome {
|
||||
color: var(--text);
|
||||
margin-bottom: 0.2rem;
|
||||
line-height: 1.4;
|
||||
word-break: break-word;
|
||||
}
|
||||
.sc-event-error { color: var(--red); }
|
||||
.sc-event-strategy { color: var(--text-dim); font-style: italic; }
|
||||
.sc-event-outcome { color: var(--text-bright); }
|
||||
.sc-event-meta { font-size: 0.68rem; color: var(--text-dim); margin-top: 0.3rem; }
|
||||
|
||||
.sc-pattern-type {
|
||||
font-family: var(--font);
|
||||
font-size: 0.8rem;
|
||||
color: var(--text-bright);
|
||||
word-break: break-all;
|
||||
}
|
||||
|
||||
@@ -49,8 +49,10 @@ def test_nexus_chat_posts_message(client):
|
||||
|
||||
def test_nexus_teach_stores_fact(client):
|
||||
"""POST /nexus/teach should persist a fact and return confirmation."""
|
||||
with (
    patch("dashboard.routes.nexus.store_personal_fact") as mock_store,
    patch("dashboard.routes.nexus.recall_personal_facts_with_ids", return_value=[]),
):
|
||||
mock_store.return_value = None
|
||||
response = client.post("/nexus/teach", data={"fact": "Timmy loves Python"})
|
||||
assert response.status_code == 200
|
||||
|
||||
@@ -7,6 +7,8 @@ from unittest.mock import patch
|
||||
import pytest
|
||||
|
||||
import infrastructure.events.bus as bus_module
|
||||
|
||||
pytestmark = pytest.mark.unit
|
||||
from infrastructure.events.bus import (
|
||||
Event,
|
||||
EventBus,
|
||||
@@ -352,6 +354,14 @@ class TestEventBusPersistence:
|
||||
events = bus.replay()
|
||||
assert events == []
|
||||
|
||||
def test_init_persistence_db_noop_when_path_is_none(self):
|
||||
"""_init_persistence_db() is a no-op when _persistence_db_path is None."""
|
||||
bus = EventBus()
|
||||
# _persistence_db_path is None by default; calling _init_persistence_db
|
||||
# should silently return without touching the filesystem.
|
||||
bus._init_persistence_db() # must not raise
|
||||
assert bus._persistence_db_path is None
|
||||
|
||||
async def test_wal_mode_on_persistence_db(self, persistent_bus):
|
||||
"""Persistence database should use WAL mode."""
|
||||
conn = sqlite3.connect(str(persistent_bus._persistence_db_path))
|
||||
|
||||
@@ -1512,3 +1512,195 @@ class TestTrySingleProvider:
|
||||
assert len(errors) == 1
|
||||
assert "boom" in errors[0]
|
||||
assert provider.metrics.failed_requests == 1
|
||||
|
||||
|
||||
class TestComplexityRouting:
|
||||
"""Tests for Qwen3-8B / Qwen3-14B dual-model routing (issue #1065)."""
|
||||
|
||||
def _make_dual_model_provider(self) -> Provider:
|
||||
"""Build an Ollama provider with both Qwen3 models registered."""
|
||||
return Provider(
|
||||
name="ollama-local",
|
||||
type="ollama",
|
||||
enabled=True,
|
||||
priority=1,
|
||||
url="http://localhost:11434",
|
||||
models=[
|
||||
{
|
||||
"name": "qwen3:8b",
|
||||
"capabilities": ["text", "tools", "json", "streaming", "routine"],
|
||||
},
|
||||
{
|
||||
"name": "qwen3:14b",
|
||||
"default": True,
|
||||
"capabilities": ["text", "tools", "json", "streaming", "complex", "reasoning"],
|
||||
},
|
||||
],
|
||||
)
|
||||
|
||||
def test_get_model_for_complexity_simple_returns_8b(self):
|
||||
"""Simple tasks should select the model with 'routine' capability."""
|
||||
from infrastructure.router.classifier import TaskComplexity
|
||||
|
||||
router = CascadeRouter(config_path=Path("/nonexistent"))
|
||||
router.config.fallback_chains = {
|
||||
"routine": ["qwen3:8b"],
|
||||
"complex": ["qwen3:14b"],
|
||||
}
|
||||
provider = self._make_dual_model_provider()
|
||||
|
||||
model = router._get_model_for_complexity(provider, TaskComplexity.SIMPLE)
|
||||
assert model == "qwen3:8b"
|
||||
|
||||
def test_get_model_for_complexity_complex_returns_14b(self):
|
||||
"""Complex tasks should select the model with 'complex' capability."""
|
||||
from infrastructure.router.classifier import TaskComplexity
|
||||
|
||||
router = CascadeRouter(config_path=Path("/nonexistent"))
|
||||
router.config.fallback_chains = {
|
||||
"routine": ["qwen3:8b"],
|
||||
"complex": ["qwen3:14b"],
|
||||
}
|
||||
provider = self._make_dual_model_provider()
|
||||
|
||||
model = router._get_model_for_complexity(provider, TaskComplexity.COMPLEX)
|
||||
assert model == "qwen3:14b"
|
||||
|
||||
def test_get_model_for_complexity_returns_none_when_no_match(self):
|
||||
"""Returns None when provider has no matching model in chain."""
|
||||
from infrastructure.router.classifier import TaskComplexity
|
||||
|
||||
router = CascadeRouter(config_path=Path("/nonexistent"))
|
||||
router.config.fallback_chains = {} # empty chains
|
||||
|
||||
provider = Provider(
|
||||
name="test",
|
||||
type="ollama",
|
||||
enabled=True,
|
||||
priority=1,
|
||||
models=[{"name": "llama3.2:3b", "default": True, "capabilities": ["text"]}],
|
||||
)
|
||||
|
||||
# No 'routine' or 'complex' model available
|
||||
model = router._get_model_for_complexity(provider, TaskComplexity.SIMPLE)
|
||||
assert model is None
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_complete_with_simple_hint_routes_to_8b(self):
|
||||
"""complexity_hint='simple' should use qwen3:8b."""
|
||||
router = CascadeRouter(config_path=Path("/nonexistent"))
|
||||
router.config.fallback_chains = {
|
||||
"routine": ["qwen3:8b"],
|
||||
"complex": ["qwen3:14b"],
|
||||
}
|
||||
router.providers = [self._make_dual_model_provider()]
|
||||
|
||||
with patch.object(router, "_call_ollama") as mock_call:
|
||||
mock_call.return_value = {"content": "fast answer", "model": "qwen3:8b"}
|
||||
result = await router.complete(
|
||||
messages=[{"role": "user", "content": "list tasks"}],
|
||||
complexity_hint="simple",
|
||||
)
|
||||
|
||||
assert result["model"] == "qwen3:8b"
|
||||
assert result["complexity"] == "simple"
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_complete_with_complex_hint_routes_to_14b(self):
|
||||
"""complexity_hint='complex' should use qwen3:14b."""
|
||||
router = CascadeRouter(config_path=Path("/nonexistent"))
|
||||
router.config.fallback_chains = {
|
||||
"routine": ["qwen3:8b"],
|
||||
"complex": ["qwen3:14b"],
|
||||
}
|
||||
router.providers = [self._make_dual_model_provider()]
|
||||
|
||||
with patch.object(router, "_call_ollama") as mock_call:
|
||||
mock_call.return_value = {"content": "detailed answer", "model": "qwen3:14b"}
|
||||
result = await router.complete(
|
||||
messages=[{"role": "user", "content": "review this PR"}],
|
||||
complexity_hint="complex",
|
||||
)
|
||||
|
||||
assert result["model"] == "qwen3:14b"
|
||||
assert result["complexity"] == "complex"
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_explicit_model_bypasses_complexity_routing(self):
|
||||
"""When model is explicitly provided, complexity routing is skipped."""
|
||||
router = CascadeRouter(config_path=Path("/nonexistent"))
|
||||
router.config.fallback_chains = {
|
||||
"routine": ["qwen3:8b"],
|
||||
"complex": ["qwen3:14b"],
|
||||
}
|
||||
router.providers = [self._make_dual_model_provider()]
|
||||
|
||||
with patch.object(router, "_call_ollama") as mock_call:
|
||||
mock_call.return_value = {"content": "response", "model": "qwen3:14b"}
|
||||
result = await router.complete(
|
||||
messages=[{"role": "user", "content": "list tasks"}],
|
||||
model="qwen3:14b", # explicit override
|
||||
)
|
||||
|
||||
# Explicit model wins — complexity field is None
|
||||
assert result["model"] == "qwen3:14b"
|
||||
assert result["complexity"] is None
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_auto_classification_routes_simple_message(self):
|
||||
"""Short, simple messages should auto-classify as SIMPLE → 8B."""
|
||||
router = CascadeRouter(config_path=Path("/nonexistent"))
|
||||
router.config.fallback_chains = {
|
||||
"routine": ["qwen3:8b"],
|
||||
"complex": ["qwen3:14b"],
|
||||
}
|
||||
router.providers = [self._make_dual_model_provider()]
|
||||
|
||||
with patch.object(router, "_call_ollama") as mock_call:
|
||||
mock_call.return_value = {"content": "ok", "model": "qwen3:8b"}
|
||||
result = await router.complete(
|
||||
messages=[{"role": "user", "content": "status"}],
|
||||
# no complexity_hint — auto-classify
|
||||
)
|
||||
|
||||
assert result["complexity"] == "simple"
|
||||
assert result["model"] == "qwen3:8b"
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_auto_classification_routes_complex_message(self):
|
||||
"""Complex messages should auto-classify → 14B."""
|
||||
router = CascadeRouter(config_path=Path("/nonexistent"))
|
||||
router.config.fallback_chains = {
|
||||
"routine": ["qwen3:8b"],
|
||||
"complex": ["qwen3:14b"],
|
||||
}
|
||||
router.providers = [self._make_dual_model_provider()]
|
||||
|
||||
with patch.object(router, "_call_ollama") as mock_call:
|
||||
mock_call.return_value = {"content": "deep analysis", "model": "qwen3:14b"}
|
||||
result = await router.complete(
|
||||
messages=[{"role": "user", "content": "analyze and prioritize the backlog"}],
|
||||
)
|
||||
|
||||
assert result["complexity"] == "complex"
|
||||
assert result["model"] == "qwen3:14b"
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_invalid_complexity_hint_falls_back_to_auto(self):
|
||||
"""Invalid complexity_hint should log a warning and auto-classify."""
|
||||
router = CascadeRouter(config_path=Path("/nonexistent"))
|
||||
router.config.fallback_chains = {
|
||||
"routine": ["qwen3:8b"],
|
||||
"complex": ["qwen3:14b"],
|
||||
}
|
||||
router.providers = [self._make_dual_model_provider()]
|
||||
|
||||
with patch.object(router, "_call_ollama") as mock_call:
|
||||
mock_call.return_value = {"content": "ok", "model": "qwen3:8b"}
|
||||
# Should not raise
|
||||
result = await router.complete(
|
||||
messages=[{"role": "user", "content": "status"}],
|
||||
complexity_hint="INVALID_HINT",
|
||||
)
|
||||
|
||||
assert result["complexity"] in ("simple", "complex") # auto-classified
|
||||
|
||||
tests/infrastructure/test_router_classifier.py (new file, 132 lines)
@@ -0,0 +1,132 @@
|
||||
"""Tests for Qwen3 dual-model task complexity classifier."""
|
||||
|
||||
from infrastructure.router.classifier import TaskComplexity, classify_task
|
||||
|
||||
|
||||
class TestClassifyTask:
|
||||
"""Tests for classify_task heuristics."""
|
||||
|
||||
# ── Simple / routine tasks ──────────────────────────────────────────────
|
||||
|
||||
def test_empty_messages_is_simple(self):
|
||||
assert classify_task([]) == TaskComplexity.SIMPLE
|
||||
|
||||
def test_no_user_content_is_simple(self):
|
||||
messages = [{"role": "system", "content": "You are Timmy."}]
|
||||
assert classify_task(messages) == TaskComplexity.SIMPLE
|
||||
|
||||
def test_short_status_query_is_simple(self):
|
||||
messages = [{"role": "user", "content": "status"}]
|
||||
assert classify_task(messages) == TaskComplexity.SIMPLE
|
||||
|
||||
def test_list_command_is_simple(self):
|
||||
messages = [{"role": "user", "content": "list all tasks"}]
|
||||
assert classify_task(messages) == TaskComplexity.SIMPLE
|
||||
|
||||
def test_get_command_is_simple(self):
|
||||
messages = [{"role": "user", "content": "get the latest log entry"}]
|
||||
assert classify_task(messages) == TaskComplexity.SIMPLE
|
||||
|
||||
def test_short_message_under_threshold_is_simple(self):
|
||||
messages = [{"role": "user", "content": "run the build"}]
|
||||
assert classify_task(messages) == TaskComplexity.SIMPLE
|
||||
|
||||
def test_affirmation_is_simple(self):
|
||||
messages = [{"role": "user", "content": "yes"}]
|
||||
assert classify_task(messages) == TaskComplexity.SIMPLE
|
||||
|
||||
# ── Complex / quality-sensitive tasks ──────────────────────────────────
|
||||
|
||||
def test_plan_keyword_is_complex(self):
|
||||
messages = [{"role": "user", "content": "plan the sprint"}]
|
||||
assert classify_task(messages) == TaskComplexity.COMPLEX
|
||||
|
||||
def test_review_keyword_is_complex(self):
|
||||
messages = [{"role": "user", "content": "review this code"}]
|
||||
assert classify_task(messages) == TaskComplexity.COMPLEX
|
||||
|
||||
def test_analyze_keyword_is_complex(self):
|
||||
messages = [{"role": "user", "content": "analyze performance"}]
|
||||
assert classify_task(messages) == TaskComplexity.COMPLEX
|
||||
|
||||
def test_triage_keyword_is_complex(self):
|
||||
messages = [{"role": "user", "content": "triage the open issues"}]
|
||||
assert classify_task(messages) == TaskComplexity.COMPLEX
|
||||
|
||||
def test_refactor_keyword_is_complex(self):
|
||||
messages = [{"role": "user", "content": "refactor the auth module"}]
|
||||
assert classify_task(messages) == TaskComplexity.COMPLEX
|
||||
|
||||
def test_explain_keyword_is_complex(self):
|
||||
messages = [{"role": "user", "content": "explain how the router works"}]
|
||||
assert classify_task(messages) == TaskComplexity.COMPLEX
|
||||
|
||||
def test_prioritize_keyword_is_complex(self):
|
||||
messages = [{"role": "user", "content": "prioritize the backlog"}]
|
||||
assert classify_task(messages) == TaskComplexity.COMPLEX
|
||||
|
||||
def test_long_message_is_complex(self):
|
||||
long_msg = "do something " * 50 # > 500 chars
|
||||
messages = [{"role": "user", "content": long_msg}]
|
||||
assert classify_task(messages) == TaskComplexity.COMPLEX
|
||||
|
||||
def test_numbered_list_is_complex(self):
|
||||
messages = [
|
||||
{
|
||||
"role": "user",
|
||||
"content": "1. Read the file 2. Analyze it 3. Write a report",
|
||||
}
|
||||
]
|
||||
assert classify_task(messages) == TaskComplexity.COMPLEX
|
||||
|
||||
def test_code_block_is_complex(self):
|
||||
messages = [
|
||||
{"role": "user", "content": "Here is the code:\n```python\nprint('hello')\n```"}
|
||||
]
|
||||
assert classify_task(messages) == TaskComplexity.COMPLEX
|
||||
|
||||
def test_deep_conversation_is_complex(self):
|
||||
messages = [
|
||||
{"role": "user", "content": "hi"},
|
||||
{"role": "assistant", "content": "hello"},
|
||||
{"role": "user", "content": "ok"},
|
||||
{"role": "assistant", "content": "yes"},
|
||||
{"role": "user", "content": "ok"},
|
||||
{"role": "assistant", "content": "yes"},
|
||||
{"role": "user", "content": "now do the thing"},
|
||||
]
|
||||
assert classify_task(messages) == TaskComplexity.COMPLEX
|
||||
|
||||
def test_analyse_british_spelling_is_complex(self):
|
||||
messages = [{"role": "user", "content": "analyse this dataset"}]
|
||||
assert classify_task(messages) == TaskComplexity.COMPLEX
|
||||
|
||||
def test_non_string_content_is_ignored(self):
|
||||
"""Non-string content should not crash the classifier."""
|
||||
messages = [{"role": "user", "content": ["part1", "part2"]}]
|
||||
# Should not raise; result doesn't matter — just must not blow up
|
||||
result = classify_task(messages)
|
||||
assert isinstance(result, TaskComplexity)
|
||||
|
||||
def test_system_message_not_counted_as_user(self):
|
||||
"""System message alone should not trigger complex keywords."""
|
||||
messages = [
|
||||
{"role": "system", "content": "analyze everything carefully"},
|
||||
{"role": "user", "content": "yes"},
|
||||
]
|
||||
# "analyze" is in system message (not user) — user says "yes" → simple
|
||||
assert classify_task(messages) == TaskComplexity.SIMPLE
|
||||
|
||||
|
||||
class TestTaskComplexityEnum:
|
||||
"""Tests for TaskComplexity enum values."""
|
||||
|
||||
def test_simple_value(self):
|
||||
assert TaskComplexity.SIMPLE.value == "simple"
|
||||
|
||||
def test_complex_value(self):
|
||||
assert TaskComplexity.COMPLEX.value == "complex"
|
||||
|
||||
def test_lookup_by_value(self):
|
||||
assert TaskComplexity("simple") == TaskComplexity.SIMPLE
|
||||
assert TaskComplexity("complex") == TaskComplexity.COMPLEX
|
||||
tests/loop/test_loop_guard_seed.py (new file, 144 lines)
@@ -0,0 +1,144 @@
"""Tests for loop_guard.seed_cycle_result and --pick mode.

The seed fixes the cycle-metrics dead-pipeline bug (#1250):
loop_guard pre-seeds cycle_result.json so cycle_retro.py can always
resolve issue= even when the dispatcher doesn't write the file.
"""

from __future__ import annotations

import json
import sys
from unittest.mock import patch

import pytest

import scripts.loop_guard as lg


@pytest.fixture(autouse=True)
def _isolate(tmp_path, monkeypatch):
    """Redirect loop_guard paths to tmp_path for isolation."""
    monkeypatch.setattr(lg, "QUEUE_FILE", tmp_path / "queue.json")
    monkeypatch.setattr(lg, "IDLE_STATE_FILE", tmp_path / "idle_state.json")
    monkeypatch.setattr(lg, "CYCLE_RESULT_FILE", tmp_path / "cycle_result.json")
    monkeypatch.setattr(lg, "GITEA_API", "http://test:3000/api/v1")
    monkeypatch.setattr(lg, "REPO_SLUG", "owner/repo")


# ── seed_cycle_result ──────────────────────────────────────────────────


def test_seed_writes_issue_and_type(tmp_path):
    """seed_cycle_result writes issue + type to cycle_result.json."""
    item = {"issue": 42, "type": "bug", "title": "Fix the thing", "ready": True}
    lg.seed_cycle_result(item)

    data = json.loads((tmp_path / "cycle_result.json").read_text())
    assert data == {"issue": 42, "type": "bug"}


def test_seed_does_not_overwrite_existing(tmp_path):
    """If cycle_result.json already exists, seed_cycle_result leaves it alone."""
    existing = {"issue": 99, "type": "feature", "tests_passed": 123}
    (tmp_path / "cycle_result.json").write_text(json.dumps(existing))

    lg.seed_cycle_result({"issue": 1, "type": "bug"})

    data = json.loads((tmp_path / "cycle_result.json").read_text())
    assert data["issue"] == 99, "Existing file must not be overwritten"


def test_seed_missing_issue_field(tmp_path):
    """Item with no issue key — seed still writes without crashing."""
    lg.seed_cycle_result({"type": "unknown"})
    data = json.loads((tmp_path / "cycle_result.json").read_text())
    assert data["issue"] is None


def test_seed_default_type_when_absent(tmp_path):
    """Item with no type key defaults to 'unknown'."""
    lg.seed_cycle_result({"issue": 7})
    data = json.loads((tmp_path / "cycle_result.json").read_text())
    assert data["type"] == "unknown"


def test_seed_oserror_is_graceful(tmp_path, monkeypatch, capsys):
    """OSError during seed logs a warning but does not raise."""
    monkeypatch.setattr(lg, "CYCLE_RESULT_FILE", tmp_path / "no_dir" / "cycle_result.json")

    from pathlib import Path

    def failing_mkdir(self, *args, **kwargs):
        raise OSError("no space left")

    monkeypatch.setattr(Path, "mkdir", failing_mkdir)

    # Should not raise
    lg.seed_cycle_result({"issue": 5, "type": "bug"})

    captured = capsys.readouterr()
    assert "WARNING" in captured.out


# ── main() integration ─────────────────────────────────────────────────


def _write_queue(tmp_path, items):
    tmp_path.mkdir(parents=True, exist_ok=True)
    lg.QUEUE_FILE.parent.mkdir(parents=True, exist_ok=True)
    lg.QUEUE_FILE.write_text(json.dumps(items))


def test_main_seeds_cycle_result_when_work_found(tmp_path, monkeypatch):
    """main() seeds cycle_result.json with top queue item on ready queue."""
    _write_queue(tmp_path, [{"issue": 10, "type": "feature", "ready": True}])
    monkeypatch.setattr(lg, "_fetch_open_issue_numbers", lambda: None)

    with patch.object(sys, "argv", ["loop_guard"]):
        rc = lg.main()

    assert rc == 0
    data = json.loads((tmp_path / "cycle_result.json").read_text())
    assert data["issue"] == 10


def test_main_no_seed_when_queue_empty(tmp_path, monkeypatch):
    """main() does not create cycle_result.json when queue is empty."""
    _write_queue(tmp_path, [])
    monkeypatch.setattr(lg, "_fetch_open_issue_numbers", lambda: None)

    with patch.object(sys, "argv", ["loop_guard"]):
        rc = lg.main()

    assert rc == 1
    assert not (tmp_path / "cycle_result.json").exists()


def test_main_pick_mode_prints_issue(tmp_path, monkeypatch, capsys):
    """--pick flag prints the top issue number to stdout."""
    _write_queue(tmp_path, [{"issue": 55, "type": "bug", "ready": True}])
    monkeypatch.setattr(lg, "_fetch_open_issue_numbers", lambda: None)

    with patch.object(sys, "argv", ["loop_guard", "--pick"]):
        rc = lg.main()

    assert rc == 0
    captured = capsys.readouterr()
    # The issue number must appear as a line in stdout
    lines = captured.out.strip().splitlines()
    assert str(55) in lines


def test_main_pick_mode_empty_queue_no_output(tmp_path, monkeypatch, capsys):
    """--pick with empty queue exits 1, doesn't print an issue number."""
    _write_queue(tmp_path, [])
    monkeypatch.setattr(lg, "_fetch_open_issue_numbers", lambda: None)

    with patch.object(sys, "argv", ["loop_guard", "--pick"]):
        rc = lg.main()

    assert rc == 1
    captured = capsys.readouterr()
    # No bare integer line printed
    for line in captured.out.strip().splitlines():
        assert not line.strip().isdigit(), f"Unexpected issue number in output: {line!r}"
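The seeding contract exercised above is small. A hedged sketch of it follows; the file location, message text, and logging style are assumptions, and the real implementation lives in scripts/loop_guard.py.

```python
# Hypothetical sketch of the seeding behaviour the tests above exercise.
import json
from pathlib import Path

CYCLE_RESULT_FILE = Path("state/cycle_result.json")  # assumed location


def seed_cycle_result(item: dict) -> None:
    """Pre-seed cycle_result.json so downstream retro tooling can resolve the issue."""
    if CYCLE_RESULT_FILE.exists():          # never clobber a result the dispatcher wrote
        return
    payload = {"issue": item.get("issue"), "type": item.get("type", "unknown")}
    try:
        CYCLE_RESULT_FILE.parent.mkdir(parents=True, exist_ok=True)
        CYCLE_RESULT_FILE.write_text(json.dumps(payload))
    except OSError as exc:                   # disk full, missing dir, etc.
        print(f"WARNING: could not seed cycle_result.json: {exc}")
```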
0
tests/self_coding/__init__.py
Normal file
363
tests/self_coding/test_loop.py
Normal file
@@ -0,0 +1,363 @@
"""Unit tests for the self-modification loop.

Covers:
- Protected branch guard
- Successful cycle (mocked git + tests)
- Edit function failure → branch reverted, no commit
- Test failure → branch reverted, no commit
- Gitea PR creation plumbing
- GiteaClient graceful degradation (no token, network error)

All git and subprocess calls are mocked so these run offline without
a real repo or test suite.
"""

from __future__ import annotations

from unittest.mock import MagicMock, patch

import pytest


# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------


def _make_loop(repo_root="/tmp/fake-repo"):
    """Construct a SelfModifyLoop with a fake repo root."""
    from self_coding.self_modify.loop import SelfModifyLoop

    return SelfModifyLoop(repo_root=repo_root, remote="origin", base_branch="main")


def _noop_edit(repo_root: str) -> None:
    """Edit function that does nothing."""


def _failing_edit(repo_root: str) -> None:
    """Edit function that raises."""
    raise RuntimeError("edit exploded")


# ---------------------------------------------------------------------------
# Guard tests (sync — no git calls needed)
# ---------------------------------------------------------------------------


@pytest.mark.unit
def test_guard_blocks_main():
    loop = _make_loop()
    with pytest.raises(ValueError, match="protected branch"):
        loop._guard_branch("main")


@pytest.mark.unit
def test_guard_blocks_master():
    loop = _make_loop()
    with pytest.raises(ValueError, match="protected branch"):
        loop._guard_branch("master")


@pytest.mark.unit
def test_guard_allows_feature_branch():
    loop = _make_loop()
    # Should not raise
    loop._guard_branch("self-modify/some-feature")


@pytest.mark.unit
def test_guard_allows_self_modify_prefix():
    loop = _make_loop()
    loop._guard_branch("self-modify/issue-983")


# ---------------------------------------------------------------------------
# Full cycle — success path
# ---------------------------------------------------------------------------


@pytest.mark.unit
@pytest.mark.asyncio
async def test_run_success():
    """Happy path: edit succeeds, tests pass, PR created."""
    loop = _make_loop()

    fake_completed = MagicMock()
    fake_completed.stdout = "abc1234\n"
    fake_completed.returncode = 0

    fake_test_result = MagicMock()
    fake_test_result.stdout = "3 passed"
    fake_test_result.stderr = ""
    fake_test_result.returncode = 0

    from self_coding.gitea_client import PullRequest as _PR

    fake_pr = _PR(number=42, title="test PR", html_url="http://gitea/pr/42")

    with (
        patch.object(loop, "_git", return_value=fake_completed),
        patch("subprocess.run", return_value=fake_test_result),
        patch.object(loop, "_create_pr", return_value=fake_pr),
    ):
        result = await loop.run(
            slug="test-feature",
            description="Add test feature",
            edit_fn=_noop_edit,
            issue_number=983,
        )

    assert result.success is True
    assert result.branch == "self-modify/test-feature"
    assert result.pr_url == "http://gitea/pr/42"
    assert result.pr_number == 42
    assert "3 passed" in result.test_output


@pytest.mark.unit
@pytest.mark.asyncio
async def test_run_skips_tests_when_flag_set():
    """skip_tests=True should bypass the test gate."""
    loop = _make_loop()

    fake_completed = MagicMock()
    fake_completed.stdout = "deadbeef\n"
    fake_completed.returncode = 0

    with (
        patch.object(loop, "_git", return_value=fake_completed),
        patch.object(loop, "_create_pr", return_value=None),
        patch("subprocess.run") as mock_run,
    ):
        result = await loop.run(
            slug="skip-test-feature",
            description="Skip test feature",
            edit_fn=_noop_edit,
            skip_tests=True,
        )

    # subprocess.run should NOT be called for tests
    mock_run.assert_not_called()
    assert result.success is True
    assert "(tests skipped)" in result.test_output


# ---------------------------------------------------------------------------
# Failure paths
# ---------------------------------------------------------------------------


@pytest.mark.unit
@pytest.mark.asyncio
async def test_run_reverts_on_edit_failure():
    """If edit_fn raises, the branch should be reverted and no commit made."""
    loop = _make_loop()

    fake_completed = MagicMock()
    fake_completed.stdout = ""
    fake_completed.returncode = 0

    revert_called = []

    def _fake_revert(branch):
        revert_called.append(branch)

    with (
        patch.object(loop, "_git", return_value=fake_completed),
        patch.object(loop, "_revert_branch", side_effect=_fake_revert),
        patch.object(loop, "_commit_all") as mock_commit,
    ):
        result = await loop.run(
            slug="broken-edit",
            description="This will fail",
            edit_fn=_failing_edit,
            skip_tests=True,
        )

    assert result.success is False
    assert "edit exploded" in result.error
    assert "self-modify/broken-edit" in revert_called
    mock_commit.assert_not_called()


@pytest.mark.unit
@pytest.mark.asyncio
async def test_run_reverts_on_test_failure():
    """If tests fail, branch should be reverted and no commit made."""
    loop = _make_loop()

    fake_completed = MagicMock()
    fake_completed.stdout = ""
    fake_completed.returncode = 0

    fake_test_result = MagicMock()
    fake_test_result.stdout = "FAILED test_foo"
    fake_test_result.stderr = "1 failed"
    fake_test_result.returncode = 1

    revert_called = []

    def _fake_revert(branch):
        revert_called.append(branch)

    with (
        patch.object(loop, "_git", return_value=fake_completed),
        patch("subprocess.run", return_value=fake_test_result),
        patch.object(loop, "_revert_branch", side_effect=_fake_revert),
        patch.object(loop, "_commit_all") as mock_commit,
    ):
        result = await loop.run(
            slug="tests-will-fail",
            description="This will fail tests",
            edit_fn=_noop_edit,
        )

    assert result.success is False
    assert "Tests failed" in result.error
    assert "self-modify/tests-will-fail" in revert_called
    mock_commit.assert_not_called()


@pytest.mark.unit
@pytest.mark.asyncio
async def test_run_slug_with_main_creates_safe_branch():
    """A slug of 'main' produces branch 'self-modify/main', which is not protected."""

    loop = _make_loop()

    fake_completed = MagicMock()
    fake_completed.stdout = "deadbeef\n"
    fake_completed.returncode = 0

    # 'self-modify/main' is NOT in _PROTECTED_BRANCHES so the run should succeed
    with (
        patch.object(loop, "_git", return_value=fake_completed),
        patch.object(loop, "_create_pr", return_value=None),
    ):
        result = await loop.run(
            slug="main",
            description="try to write to self-modify/main",
            edit_fn=_noop_edit,
            skip_tests=True,
        )
    assert result.branch == "self-modify/main"
    assert result.success is True


# ---------------------------------------------------------------------------
# GiteaClient tests
# ---------------------------------------------------------------------------


@pytest.mark.unit
def test_gitea_client_returns_none_without_token():
    """GiteaClient should return None gracefully when no token is set."""
    from self_coding.gitea_client import GiteaClient

    client = GiteaClient(base_url="http://localhost:3000", token="", repo="owner/repo")
    pr = client.create_pull_request(
        title="Test PR",
        body="body",
        head="self-modify/test",
    )
    assert pr is None


@pytest.mark.unit
def test_gitea_client_comment_returns_false_without_token():
    """add_issue_comment should return False gracefully when no token is set."""
    from self_coding.gitea_client import GiteaClient

    client = GiteaClient(base_url="http://localhost:3000", token="", repo="owner/repo")
    result = client.add_issue_comment(123, "hello")
    assert result is False


@pytest.mark.unit
def test_gitea_client_create_pr_handles_network_error():
    """create_pull_request should return None on network failure."""
    from self_coding.gitea_client import GiteaClient

    client = GiteaClient(base_url="http://localhost:3000", token="fake-token", repo="owner/repo")

    mock_requests = MagicMock()
    mock_requests.post.side_effect = Exception("Connection refused")
    mock_requests.exceptions.ConnectionError = Exception

    with patch.dict("sys.modules", {"requests": mock_requests}):
        pr = client.create_pull_request(
            title="Test PR",
            body="body",
            head="self-modify/test",
        )
    assert pr is None


@pytest.mark.unit
def test_gitea_client_comment_handles_network_error():
    """add_issue_comment should return False on network failure."""
    from self_coding.gitea_client import GiteaClient

    client = GiteaClient(base_url="http://localhost:3000", token="fake-token", repo="owner/repo")

    mock_requests = MagicMock()
    mock_requests.post.side_effect = Exception("Connection refused")

    with patch.dict("sys.modules", {"requests": mock_requests}):
        result = client.add_issue_comment(456, "hello")
    assert result is False


@pytest.mark.unit
def test_gitea_client_create_pr_success():
    """create_pull_request should return a PullRequest on HTTP 201."""
    from self_coding.gitea_client import GiteaClient, PullRequest

    client = GiteaClient(base_url="http://localhost:3000", token="tok", repo="owner/repo")

    fake_resp = MagicMock()
    fake_resp.raise_for_status = MagicMock()
    fake_resp.json.return_value = {
        "number": 77,
        "title": "Test PR",
        "html_url": "http://localhost:3000/owner/repo/pulls/77",
    }

    mock_requests = MagicMock()
    mock_requests.post.return_value = fake_resp

    with patch.dict("sys.modules", {"requests": mock_requests}):
        pr = client.create_pull_request("Test PR", "body", "self-modify/feat")

    assert isinstance(pr, PullRequest)
    assert pr.number == 77
    assert pr.html_url == "http://localhost:3000/owner/repo/pulls/77"


# ---------------------------------------------------------------------------
# LoopResult dataclass
# ---------------------------------------------------------------------------


@pytest.mark.unit
def test_loop_result_defaults():
    from self_coding.self_modify.loop import LoopResult

    r = LoopResult(success=True)
    assert r.branch == ""
    assert r.commit_sha == ""
    assert r.pr_url == ""
    assert r.pr_number == 0
    assert r.test_output == ""
    assert r.error == ""
    assert r.elapsed_ms == 0.0
    assert r.metadata == {}


@pytest.mark.unit
def test_loop_result_failure():
    from self_coding.self_modify.loop import LoopResult

    r = LoopResult(success=False, error="something broke", branch="self-modify/test")
    assert r.success is False
    assert r.error == "something broke"
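The guard tests above only require that protected branch names are refused and that self-modify/* branches pass. A minimal sketch of that check, written as a standalone function for brevity (the real _guard_branch is a SelfModifyLoop method and may cover more names or remote refs):

```python
# Hypothetical sketch of the protected-branch guard the tests exercise.
_PROTECTED_BRANCHES = {"main", "master"}  # assumed set


def _guard_branch(branch: str) -> None:
    """Refuse to run the self-modification loop directly on a protected branch."""
    if branch in _PROTECTED_BRANCHES:
        raise ValueError(f"Refusing to modify protected branch: {branch}")
```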
@@ -6,6 +6,52 @@ from unittest.mock import MagicMock, patch
import pytest


class TestAppleSiliconHelpers:
    """Tests for is_apple_silicon() and _build_experiment_env()."""

    def test_is_apple_silicon_true_on_arm64_darwin(self):
        from timmy.autoresearch import is_apple_silicon

        with (
            patch("timmy.autoresearch.platform.system", return_value="Darwin"),
            patch("timmy.autoresearch.platform.machine", return_value="arm64"),
        ):
            assert is_apple_silicon() is True

    def test_is_apple_silicon_false_on_linux(self):
        from timmy.autoresearch import is_apple_silicon

        with (
            patch("timmy.autoresearch.platform.system", return_value="Linux"),
            patch("timmy.autoresearch.platform.machine", return_value="x86_64"),
        ):
            assert is_apple_silicon() is False

    def test_build_env_auto_resolves_mlx_on_apple_silicon(self):
        from timmy.autoresearch import _build_experiment_env

        with patch("timmy.autoresearch.is_apple_silicon", return_value=True):
            env = _build_experiment_env(dataset="tinystories", backend="auto")

        assert env["AUTORESEARCH_BACKEND"] == "mlx"
        assert env["AUTORESEARCH_DATASET"] == "tinystories"

    def test_build_env_auto_resolves_cuda_on_non_apple(self):
        from timmy.autoresearch import _build_experiment_env

        with patch("timmy.autoresearch.is_apple_silicon", return_value=False):
            env = _build_experiment_env(dataset="openwebtext", backend="auto")

        assert env["AUTORESEARCH_BACKEND"] == "cuda"
        assert env["AUTORESEARCH_DATASET"] == "openwebtext"

    def test_build_env_explicit_backend_not_overridden(self):
        from timmy.autoresearch import _build_experiment_env

        env = _build_experiment_env(dataset="tinystories", backend="cpu")
        assert env["AUTORESEARCH_BACKEND"] == "cpu"


class TestPrepareExperiment:
    """Tests for prepare_experiment()."""

@@ -44,6 +90,24 @@ class TestPrepareExperiment:

        assert "failed" in result.lower()

    def test_prepare_passes_env_to_prepare_script(self, tmp_path):
        from timmy.autoresearch import prepare_experiment

        repo_dir = tmp_path / "autoresearch"
        repo_dir.mkdir()
        (repo_dir / "prepare.py").write_text("pass")

        with patch("timmy.autoresearch.subprocess.run") as mock_run:
            mock_run.return_value = MagicMock(returncode=0, stdout="", stderr="")
            prepare_experiment(tmp_path, dataset="tinystories", backend="cpu")

        # The prepare.py call is the second call (first is skipped since repo exists)
        prepare_call = mock_run.call_args
        assert prepare_call.kwargs.get("env") is not None or prepare_call[1].get("env") is not None
        call_kwargs = prepare_call.kwargs if prepare_call.kwargs else prepare_call[1]
        assert call_kwargs["env"]["AUTORESEARCH_DATASET"] == "tinystories"
        assert call_kwargs["env"]["AUTORESEARCH_BACKEND"] == "cpu"


class TestRunExperiment:
    """Tests for run_experiment()."""

@@ -176,3 +240,280 @@ class TestExtractMetric:

        output = "loss: 0.45\nloss: 0.32"
        assert _extract_metric(output, "loss") == pytest.approx(0.32)


class TestExtractPassRate:
    """Tests for _extract_pass_rate()."""

    def test_all_passing(self):
        from timmy.autoresearch import _extract_pass_rate

        output = "5 passed in 1.23s"
        assert _extract_pass_rate(output) == pytest.approx(100.0)

    def test_mixed_results(self):
        from timmy.autoresearch import _extract_pass_rate

        output = "8 passed, 2 failed in 2.00s"
        assert _extract_pass_rate(output) == pytest.approx(80.0)

    def test_no_pytest_output(self):
        from timmy.autoresearch import _extract_pass_rate

        assert _extract_pass_rate("no test results here") is None


class TestExtractCoverage:
    """Tests for _extract_coverage()."""

    def test_total_line(self):
        from timmy.autoresearch import _extract_coverage

        output = "TOTAL 1234 100 92%"
        assert _extract_coverage(output) == pytest.approx(92.0)

    def test_no_coverage(self):
        from timmy.autoresearch import _extract_coverage

        assert _extract_coverage("no coverage data") is None


class TestSystemExperiment:
    """Tests for SystemExperiment class."""

    def test_generate_hypothesis_with_program(self):
        from timmy.autoresearch import SystemExperiment

        exp = SystemExperiment(target="src/timmy/agent.py")
        hyp = exp.generate_hypothesis("Fix memory leak in session handling")
        assert "src/timmy/agent.py" in hyp
        assert "Fix memory leak" in hyp

    def test_generate_hypothesis_fallback(self):
        from timmy.autoresearch import SystemExperiment

        exp = SystemExperiment(target="src/timmy/agent.py", metric="coverage")
        hyp = exp.generate_hypothesis("")
        assert "src/timmy/agent.py" in hyp
        assert "coverage" in hyp

    def test_generate_hypothesis_skips_comment_lines(self):
        from timmy.autoresearch import SystemExperiment

        exp = SystemExperiment(target="mymodule.py")
        hyp = exp.generate_hypothesis("# comment\nActual direction here")
        assert "Actual direction" in hyp

    def test_evaluate_baseline(self):
        from timmy.autoresearch import SystemExperiment

        exp = SystemExperiment(target="x.py", metric="unit_pass_rate")
        result = exp.evaluate(85.0, None)
        assert "Baseline" in result
        assert "85" in result

    def test_evaluate_improvement_higher_is_better(self):
        from timmy.autoresearch import SystemExperiment

        exp = SystemExperiment(target="x.py", metric="unit_pass_rate")
        result = exp.evaluate(90.0, 85.0)
        assert "Improvement" in result

    def test_evaluate_regression_higher_is_better(self):
        from timmy.autoresearch import SystemExperiment

        exp = SystemExperiment(target="x.py", metric="coverage")
        result = exp.evaluate(80.0, 85.0)
        assert "Regression" in result

    def test_evaluate_none_metric(self):
        from timmy.autoresearch import SystemExperiment

        exp = SystemExperiment(target="x.py")
        result = exp.evaluate(None, 80.0)
        assert "Indeterminate" in result

    def test_evaluate_lower_is_better(self):
        from timmy.autoresearch import SystemExperiment

        exp = SystemExperiment(target="x.py", metric="val_bpb")
        result = exp.evaluate(1.1, 1.2)
        assert "Improvement" in result

    def test_is_improvement_higher_is_better(self):
        from timmy.autoresearch import SystemExperiment

        exp = SystemExperiment(target="x.py", metric="unit_pass_rate")
        assert exp.is_improvement(90.0, 85.0) is True
        assert exp.is_improvement(80.0, 85.0) is False

    def test_is_improvement_lower_is_better(self):
        from timmy.autoresearch import SystemExperiment

        exp = SystemExperiment(target="x.py", metric="val_bpb")
        assert exp.is_improvement(1.1, 1.2) is True
        assert exp.is_improvement(1.3, 1.2) is False

    def test_run_tox_success(self, tmp_path):
        from timmy.autoresearch import SystemExperiment

        exp = SystemExperiment(target="x.py", workspace=tmp_path)
        with patch("timmy.autoresearch.subprocess.run") as mock_run:
            mock_run.return_value = MagicMock(
                returncode=0,
                stdout="8 passed in 1.23s",
                stderr="",
            )
            result = exp.run_tox(tox_env="unit")

        assert result["success"] is True
        assert result["metric"] == pytest.approx(100.0)

    def test_run_tox_timeout(self, tmp_path):
        import subprocess

        from timmy.autoresearch import SystemExperiment

        exp = SystemExperiment(target="x.py", budget_minutes=1, workspace=tmp_path)
        with patch("timmy.autoresearch.subprocess.run") as mock_run:
            mock_run.side_effect = subprocess.TimeoutExpired(cmd="tox", timeout=60)
            result = exp.run_tox()

        assert result["success"] is False
        assert "Budget exceeded" in result["error"]

    def test_apply_edit_aider_not_installed(self, tmp_path):
        from timmy.autoresearch import SystemExperiment

        exp = SystemExperiment(target="x.py", workspace=tmp_path)
        with patch("timmy.autoresearch.subprocess.run") as mock_run:
            mock_run.side_effect = FileNotFoundError("aider not found")
            result = exp.apply_edit("some hypothesis")

        assert "not available" in result

    def test_commit_changes_success(self, tmp_path):
        from timmy.autoresearch import SystemExperiment

        exp = SystemExperiment(target="x.py", workspace=tmp_path)
        with patch("timmy.autoresearch.subprocess.run") as mock_run:
            mock_run.return_value = MagicMock(returncode=0)
            success = exp.commit_changes("test commit")

        assert success is True

    def test_revert_changes_failure(self, tmp_path):
        import subprocess

        from timmy.autoresearch import SystemExperiment

        exp = SystemExperiment(target="x.py", workspace=tmp_path)
        with patch("timmy.autoresearch.subprocess.run") as mock_run:
            mock_run.side_effect = subprocess.CalledProcessError(1, "git")
            success = exp.revert_changes()

        assert success is False

    def test_create_branch_success(self, tmp_path):
        from timmy.autoresearch import SystemExperiment

        exp = SystemExperiment(target="x.py", workspace=tmp_path)
        with patch("timmy.autoresearch.subprocess.run") as mock_run:
            mock_run.return_value = MagicMock(returncode=0)
            success = exp.create_branch("feature/test-branch")

        assert success is True
        # Verify correct git command was called
        mock_run.assert_called_once()
        call_args = mock_run.call_args[0][0]
        assert "checkout" in call_args
        assert "-b" in call_args
        assert "feature/test-branch" in call_args

    def test_create_branch_failure(self, tmp_path):
        import subprocess

        from timmy.autoresearch import SystemExperiment

        exp = SystemExperiment(target="x.py", workspace=tmp_path)
        with patch("timmy.autoresearch.subprocess.run") as mock_run:
            mock_run.side_effect = subprocess.CalledProcessError(1, "git")
            success = exp.create_branch("feature/test-branch")

        assert success is False

    def test_run_dry_run_mode(self, tmp_path):
        """Test that run() in dry_run mode only generates hypotheses."""
        from timmy.autoresearch import SystemExperiment

        exp = SystemExperiment(target="x.py", workspace=tmp_path)
        result = exp.run(max_iterations=3, dry_run=True, program_content="Test program")

        assert result["iterations"] == 3
        assert result["success"] is False  # No actual experiments run
        assert len(exp.results) == 3
        # Each result should have a hypothesis
        for record in exp.results:
            assert "hypothesis" in record

    def test_run_with_custom_metric_fn(self, tmp_path):
        """Test that custom metric_fn is used for metric extraction."""
        from timmy.autoresearch import SystemExperiment

        def custom_metric_fn(output: str) -> float | None:
            match = __import__("re").search(r"custom_metric:\s*([0-9.]+)", output)
            return float(match.group(1)) if match else None

        exp = SystemExperiment(
            target="x.py",
            workspace=tmp_path,
            metric="custom",
            metric_fn=custom_metric_fn,
        )

        with patch("timmy.autoresearch.subprocess.run") as mock_run:
            mock_run.return_value = MagicMock(
                returncode=0,
                stdout="custom_metric: 42.5\nother output",
                stderr="",
            )
            tox_result = exp.run_tox()

        assert tox_result["metric"] == pytest.approx(42.5)

    def test_run_single_iteration_success(self, tmp_path):
        """Test a successful single iteration that finds an improvement."""
        from timmy.autoresearch import SystemExperiment

        exp = SystemExperiment(target="x.py", workspace=tmp_path)

        with patch("timmy.autoresearch.subprocess.run") as mock_run:
            # Mock tox returning a passing test with metric
            mock_run.return_value = MagicMock(
                returncode=0,
                stdout="10 passed in 1.23s",
                stderr="",
            )
            result = exp.run(max_iterations=1, tox_env="unit")

        assert result["iterations"] == 1
        assert len(exp.results) == 1
        assert exp.results[0]["metric"] == pytest.approx(100.0)

    def test_run_stores_baseline_on_first_success(self, tmp_path):
        """Test that baseline is set after first successful iteration."""
        from timmy.autoresearch import SystemExperiment

        exp = SystemExperiment(target="x.py", workspace=tmp_path)
        assert exp.baseline is None

        with patch("timmy.autoresearch.subprocess.run") as mock_run:
            mock_run.return_value = MagicMock(
                returncode=0,
                stdout="8 passed in 1.23s",
                stderr="",
            )
            exp.run(max_iterations=1)

        assert exp.baseline == pytest.approx(100.0)
        assert exp.results[0]["baseline"] is None  # First run has no baseline
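The _extract_pass_rate and _extract_coverage tests above define the expected parsing of pytest and coverage output. A hedged sketch consistent with those cases follows (the actual regexes in timmy.autoresearch may be stricter):

```python
# Hypothetical sketch of the two output parsers the tests above cover.
import re


def _extract_pass_rate(output: str) -> float | None:
    """Return percentage of passing tests from a pytest summary line, or None."""
    passed = re.search(r"(\d+) passed", output)
    failed = re.search(r"(\d+) failed", output)
    if not passed and not failed:
        return None
    p = int(passed.group(1)) if passed else 0
    f = int(failed.group(1)) if failed else 0
    total = p + f
    return 100.0 * p / total if total else None


def _extract_coverage(output: str) -> float | None:
    """Return the percentage from a coverage 'TOTAL ... NN%' line, or None."""
    m = re.search(r"^TOTAL\s+.*?(\d+(?:\.\d+)?)%", output, re.MULTILINE)
    return float(m.group(1)) if m else None
```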
94
tests/timmy/test_cli_learn.py
Normal file
@@ -0,0 +1,94 @@
"""Tests for the `timmy learn` CLI command (autoresearch entry point)."""

from unittest.mock import MagicMock, patch

from typer.testing import CliRunner

from timmy.cli import app

runner = CliRunner()


class TestLearnCommand:
    """Tests for `timmy learn`."""

    def test_requires_target(self):
        result = runner.invoke(app, ["learn"])
        assert result.exit_code != 0
        assert "target" in result.output.lower() or "target" in (result.stderr or "").lower()

    def test_dry_run_shows_hypothesis_no_tox(self, tmp_path):
        program_file = tmp_path / "program.md"
        program_file.write_text("Improve logging coverage in agent module")

        with patch("timmy.autoresearch.subprocess.run") as mock_run:
            result = runner.invoke(
                app,
                [
                    "learn",
                    "--target",
                    "src/timmy/agent.py",
                    "--program",
                    str(program_file),
                    "--max-experiments",
                    "2",
                    "--dry-run",
                ],
            )

        assert result.exit_code == 0
        # tox should never be called in dry-run
        mock_run.assert_not_called()
        assert "agent.py" in result.output

    def test_missing_program_md_warns_but_continues(self, tmp_path):
        with patch("timmy.autoresearch.subprocess.run") as mock_run:
            mock_run.return_value = MagicMock(returncode=0, stdout="3 passed", stderr="")
            result = runner.invoke(
                app,
                [
                    "learn",
                    "--target",
                    "src/timmy/agent.py",
                    "--program",
                    str(tmp_path / "nonexistent.md"),
                    "--max-experiments",
                    "1",
                    "--dry-run",
                ],
            )

        assert result.exit_code == 0

    def test_dry_run_prints_max_experiments_hypotheses(self, tmp_path):
        program_file = tmp_path / "program.md"
        program_file.write_text("Fix edge case in parser")

        result = runner.invoke(
            app,
            [
                "learn",
                "--target",
                "src/timmy/parser.py",
                "--program",
                str(program_file),
                "--max-experiments",
                "3",
                "--dry-run",
            ],
        )

        assert result.exit_code == 0
        # Should show 3 experiment headers
        assert result.output.count("[1/3]") == 1
        assert result.output.count("[2/3]") == 1
        assert result.output.count("[3/3]") == 1

    def test_help_text_present(self):
        result = runner.invoke(app, ["learn", "--help"])
        assert result.exit_code == 0
        assert "--target" in result.output
        assert "--metric" in result.output
        assert "--budget" in result.output
        assert "--max-experiments" in result.output
        assert "--dry-run" in result.output

@@ -16,7 +16,7 @@ from timmy.memory_system import (
     memory_forget,
     memory_read,
     memory_search,
-    memory_write,
+    memory_store,
 )

@@ -490,7 +490,7 @@ class TestMemorySearch:
         assert isinstance(result, str)

     def test_none_top_k_handled(self):
-        result = memory_search("test", top_k=None)
+        result = memory_search("test", limit=None)
         assert isinstance(result, str)

     def test_basic_search_returns_string(self):

@@ -521,12 +521,12 @@ class TestMemoryRead:
         assert isinstance(result, str)


-class TestMemoryWrite:
-    """Test module-level memory_write function."""
+class TestMemoryStore:
+    """Test module-level memory_store function."""

     @pytest.fixture(autouse=True)
     def mock_vector_store(self):
-        """Mock vector_store functions for memory_write tests."""
+        """Mock vector_store functions for memory_store tests."""
         # Patch where it's imported from, not where it's used
         with (
             patch("timmy.memory_system.search_memories") as mock_search,

@@ -542,75 +542,87 @@ class TestMemoryWrite:

         yield {"search": mock_search, "store": mock_store}

-    def test_memory_write_empty_content(self):
-        """Test that empty content returns error message."""
-        result = memory_write("")
+    def test_memory_store_empty_report(self):
+        """Test that empty report returns error message."""
+        result = memory_store(topic="test", report="")
         assert "empty" in result.lower()

-    def test_memory_write_whitespace_only(self):
-        """Test that whitespace-only content returns error."""
-        result = memory_write("  \n\t  ")
+    def test_memory_store_whitespace_only(self):
+        """Test that whitespace-only report returns error."""
+        result = memory_store(topic="test", report="  \n\t  ")
         assert "empty" in result.lower()

-    def test_memory_write_valid_content(self, mock_vector_store):
+    def test_memory_store_valid_content(self, mock_vector_store):
         """Test writing valid content."""
-        result = memory_write("Remember this important fact.")
+        result = memory_store(topic="fact about Timmy", report="Remember this important fact.")
         assert "stored" in result.lower() or "memory" in result.lower()
         mock_vector_store["store"].assert_called_once()

-    def test_memory_write_dedup_for_facts(self, mock_vector_store):
-        """Test that duplicate facts are skipped."""
+    def test_memory_store_dedup_for_facts_or_research(self, mock_vector_store):
+        """Test that duplicate facts or research are skipped."""
         # Simulate existing similar fact
         mock_entry = MagicMock()
         mock_entry.id = "existing-id"
         mock_vector_store["search"].return_value = [mock_entry]

-        result = memory_write("Similar fact text", context_type="fact")
+        # Test with 'fact'
+        result = memory_store(topic="Similar fact", report="Similar fact text", type="fact")
         assert "similar" in result.lower() or "duplicate" in result.lower()
         mock_vector_store["store"].assert_not_called()

-    def test_memory_write_no_dedup_for_conversation(self, mock_vector_store):
+        mock_vector_store["store"].reset_mock()
+        # Test with 'research'
+        result = memory_store(
+            topic="Similar research", report="Similar research content", type="research"
+        )
+        assert "similar" in result.lower() or "duplicate" in result.lower()
+        mock_vector_store["store"].assert_not_called()
+
+    def test_memory_store_no_dedup_for_conversation(self, mock_vector_store):
         """Test that conversation entries are not deduplicated."""
         # Even with existing entries, conversations should be stored
         mock_entry = MagicMock()
         mock_entry.id = "existing-id"
         mock_vector_store["search"].return_value = [mock_entry]

-        memory_write("Conversation text", context_type="conversation")
+        memory_store(topic="Conversation", report="Conversation text", type="conversation")
         # Should still store (no duplicate check for non-fact)
         mock_vector_store["store"].assert_called_once()

-    def test_memory_write_invalid_context_type(self, mock_vector_store):
-        """Test that invalid context_type defaults to 'fact'."""
-        memory_write("Some content", context_type="invalid_type")
-        # Should still succeed, using "fact" as default
+    def test_memory_store_invalid_type_defaults_to_research(self, mock_vector_store):
+        """Test that invalid type defaults to 'research'."""
+        memory_store(topic="Invalid type test", report="Some content", type="invalid_type")
+        # Should still succeed, using "research" as default
         mock_vector_store["store"].assert_called_once()
         call_kwargs = mock_vector_store["store"].call_args.kwargs
-        assert call_kwargs.get("context_type") == "fact"
+        assert call_kwargs.get("context_type") == "research"

-    def test_memory_write_valid_context_types(self, mock_vector_store):
+    def test_memory_store_valid_types(self, mock_vector_store):
         """Test all valid context types."""
-        valid_types = ["fact", "conversation", "document"]
+        valid_types = ["fact", "conversation", "document", "research"]
         for ctx_type in valid_types:
             mock_vector_store["store"].reset_mock()
-            memory_write(f"Content for {ctx_type}", context_type=ctx_type)
+            memory_store(
+                topic=f"Topic for {ctx_type}", report=f"Content for {ctx_type}", type=ctx_type
+            )
             mock_vector_store["store"].assert_called_once()

-    def test_memory_write_strips_content(self, mock_vector_store):
-        """Test that content is stripped of leading/trailing whitespace."""
-        memory_write("  padded content  ")
+    def test_memory_store_strips_report_and_adds_topic(self, mock_vector_store):
+        """Test that report is stripped of leading/trailing whitespace and combined with topic."""
+        memory_store(topic=" My Topic ", report="  padded content  ")
         call_kwargs = mock_vector_store["store"].call_args.kwargs
-        assert call_kwargs.get("content") == "padded content"
+        assert call_kwargs.get("content") == "Topic: My Topic\n\nReport: padded content"
+        assert call_kwargs.get("metadata") == {"topic": " My Topic "}

-    def test_memory_write_unicode_content(self, mock_vector_store):
+    def test_memory_store_unicode_report(self, mock_vector_store):
         """Test writing unicode content."""
-        result = memory_write("Unicode content: 你好世界 🎉")
+        result = memory_store(topic="Unicode", report="Unicode content: 你好世界 🎉")
        assert "stored" in result.lower() or "memory" in result.lower()

-    def test_memory_write_handles_exception(self, mock_vector_store):
+    def test_memory_store_handles_exception(self, mock_vector_store):
         """Test handling of store_memory exceptions."""
         mock_vector_store["store"].side_effect = Exception("DB error")
-        result = memory_write("This will fail")
+        result = memory_store(topic="Failing", report="This will fail")
         assert "failed" in result.lower() or "error" in result.lower()
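The renamed tests above imply the shape of the new memory_store API. A hedged sketch of that contract follows, with the vector-store collaborators stubbed in memory so it runs standalone; the real search_memories and store_memory signatures, and the dedup query, are assumptions.

```python
# Hypothetical sketch of the memory_store contract described by the tests above.
VALID_TYPES = {"fact", "conversation", "document", "research"}

# Stubs standing in for the real vector-store helpers patched in the tests.
_DB: list[dict] = []


def search_memories(query: str, top_k: int = 1) -> list[dict]:
    return [e for e in _DB if e["content"] == query][:top_k]


def store_memory(content: str, context_type: str, metadata: dict) -> None:
    _DB.append({"content": content, "context_type": context_type, "metadata": metadata})


def memory_store(topic: str, report: str, type: str = "research") -> str:
    if not report.strip():
        return "Cannot store an empty report."
    context_type = type if type in VALID_TYPES else "research"
    content = f"Topic: {topic.strip()}\n\nReport: {report.strip()}"
    if context_type in ("fact", "research"):
        # Dedup only applies to facts and research entries.
        if search_memories(content, top_k=1):
            return "A similar memory already exists; skipping duplicate."
    store_memory(content=content, context_type=context_type, metadata={"topic": topic})
    return f"Memory stored under topic '{topic.strip()}'."
```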
444
tests/timmy/test_session_report.py
Normal file
@@ -0,0 +1,444 @@
"""Tests for timmy.sovereignty.session_report.

Refs: #957 (Session Sovereignty Report Generator)
"""

import base64
import json
import time
from datetime import UTC, datetime
from pathlib import Path
from unittest.mock import MagicMock, patch

import pytest

pytestmark = pytest.mark.unit

from timmy.sovereignty.session_report import (
    _format_duration,
    _gather_session_data,
    _gather_sovereignty_data,
    _render_markdown,
    commit_report,
    generate_and_commit_report,
    generate_report,
    mark_session_start,
)


# ---------------------------------------------------------------------------
# _format_duration
# ---------------------------------------------------------------------------


class TestFormatDuration:
    def test_seconds_only(self):
        assert _format_duration(45) == "45s"

    def test_minutes_and_seconds(self):
        assert _format_duration(125) == "2m 5s"

    def test_hours_minutes_seconds(self):
        assert _format_duration(3661) == "1h 1m 1s"

    def test_zero(self):
        assert _format_duration(0) == "0s"


# ---------------------------------------------------------------------------
# mark_session_start + generate_report (smoke)
# ---------------------------------------------------------------------------


class TestMarkSessionStart:
    def test_sets_session_start(self):
        import timmy.sovereignty.session_report as sr

        sr._SESSION_START = None
        mark_session_start()
        assert sr._SESSION_START is not None
        assert sr._SESSION_START.tzinfo == UTC

    def test_idempotent_overwrite(self):
        import timmy.sovereignty.session_report as sr

        mark_session_start()
        first = sr._SESSION_START
        time.sleep(0.01)
        mark_session_start()
        second = sr._SESSION_START
        assert second >= first


# ---------------------------------------------------------------------------
# _gather_session_data
# ---------------------------------------------------------------------------


class TestGatherSessionData:
    def test_returns_defaults_when_no_file(self, tmp_path):
        mock_logger = MagicMock()
        mock_logger.flush.return_value = None
        mock_logger.session_file = tmp_path / "nonexistent.jsonl"

        with patch(
            "timmy.sovereignty.session_report.get_session_logger",
            return_value=mock_logger,
        ):
            data = _gather_session_data()

        assert data["user_messages"] == 0
        assert data["timmy_messages"] == 0
        assert data["tool_calls"] == 0
        assert data["errors"] == 0
        assert data["tool_call_breakdown"] == {}

    def test_counts_entries_correctly(self, tmp_path):
        session_file = tmp_path / "session_2026-03-23.jsonl"
        entries = [
            {"type": "message", "role": "user", "content": "hello"},
            {"type": "message", "role": "timmy", "content": "hi"},
            {"type": "message", "role": "user", "content": "test"},
            {"type": "tool_call", "tool": "memory_search", "args": {}, "result": "found"},
            {"type": "tool_call", "tool": "memory_search", "args": {}, "result": "nope"},
            {"type": "tool_call", "tool": "shell", "args": {}, "result": "ok"},
            {"type": "error", "error": "boom"},
        ]
        with open(session_file, "w") as f:
            for e in entries:
                f.write(json.dumps(e) + "\n")

        mock_logger = MagicMock()
        mock_logger.flush.return_value = None
        mock_logger.session_file = session_file

        with patch(
            "timmy.sovereignty.session_report.get_session_logger",
            return_value=mock_logger,
        ):
            data = _gather_session_data()

        assert data["user_messages"] == 2
        assert data["timmy_messages"] == 1
        assert data["tool_calls"] == 3
        assert data["errors"] == 1
        assert data["tool_call_breakdown"]["memory_search"] == 2
        assert data["tool_call_breakdown"]["shell"] == 1

    def test_graceful_on_import_error(self):
        with patch(
            "timmy.sovereignty.session_report.get_session_logger",
            side_effect=ImportError("no session_logger"),
        ):
            data = _gather_session_data()

        assert data["tool_calls"] == 0


# ---------------------------------------------------------------------------
# _gather_sovereignty_data
# ---------------------------------------------------------------------------


class TestGatherSovereigntyData:
    def test_returns_empty_on_import_error(self):
        with patch.dict("sys.modules", {"infrastructure.sovereignty_metrics": None}):
            with patch(
                "timmy.sovereignty.session_report.get_sovereignty_store",
                side_effect=ImportError("no store"),
            ):
                data = _gather_sovereignty_data()

        assert data["metrics"] == {}
        assert data["deltas"] == {}
        assert data["previous_session"] == {}

    def test_populates_deltas_from_history(self):
        mock_store = MagicMock()
        mock_store.get_summary.return_value = {
            "cache_hit_rate": {"current": 0.5, "phase": "week1"},
        }
        # get_latest returns newest-first
        mock_store.get_latest.return_value = [
            {"value": 0.5},
            {"value": 0.3},
            {"value": 0.1},
        ]

        with patch(
            "timmy.sovereignty.session_report.get_sovereignty_store",
            return_value=mock_store,
        ):
            with patch(
                "timmy.sovereignty.session_report.GRADUATION_TARGETS",
                {"cache_hit_rate": {"graduation": 0.9}},
            ):
                data = _gather_sovereignty_data()

        delta = data["deltas"].get("cache_hit_rate")
        assert delta is not None
        assert delta["start"] == 0.1  # oldest in window
        assert delta["end"] == 0.5  # most recent
        assert data["previous_session"]["cache_hit_rate"] == 0.3

    def test_single_data_point_no_delta(self):
        mock_store = MagicMock()
        mock_store.get_summary.return_value = {}
        mock_store.get_latest.return_value = [{"value": 0.4}]

        with patch(
            "timmy.sovereignty.session_report.get_sovereignty_store",
            return_value=mock_store,
        ):
            with patch(
                "timmy.sovereignty.session_report.GRADUATION_TARGETS",
                {"api_cost": {"graduation": 0.01}},
            ):
                data = _gather_sovereignty_data()

        delta = data["deltas"]["api_cost"]
        assert delta["start"] == 0.4
        assert delta["end"] == 0.4
        assert data["previous_session"]["api_cost"] is None


# ---------------------------------------------------------------------------
# generate_report (integration — smoke test)
# ---------------------------------------------------------------------------


class TestGenerateReport:
    def _minimal_session_data(self):
        return {
            "user_messages": 3,
            "timmy_messages": 3,
            "tool_calls": 2,
            "errors": 0,
            "tool_call_breakdown": {"memory_search": 2},
        }

    def _minimal_sov_data(self):
        return {
            "metrics": {
                "cache_hit_rate": {"current": 0.45, "phase": "week1"},
                "api_cost": {"current": 0.12, "phase": "pre-start"},
            },
            "deltas": {
                "cache_hit_rate": {"start": 0.40, "end": 0.45},
                "api_cost": {"start": 0.10, "end": 0.12},
            },
            "previous_session": {
                "cache_hit_rate": 0.40,
                "api_cost": 0.10,
            },
        }

    def test_smoke_produces_markdown(self):
        with (
            patch(
                "timmy.sovereignty.session_report._gather_session_data",
                return_value=self._minimal_session_data(),
            ),
            patch(
                "timmy.sovereignty.session_report._gather_sovereignty_data",
                return_value=self._minimal_sov_data(),
            ),
        ):
            report = generate_report("test-session")

        assert "# Sovereignty Session Report" in report
        assert "test-session" in report
        assert "## Session Activity" in report
        assert "## Sovereignty Scorecard" in report
        assert "## Cost Breakdown" in report
        assert "## Trend vs Previous Session" in report

    def test_report_contains_session_stats(self):
        with (
            patch(
                "timmy.sovereignty.session_report._gather_session_data",
                return_value=self._minimal_session_data(),
            ),
            patch(
                "timmy.sovereignty.session_report._gather_sovereignty_data",
                return_value=self._minimal_sov_data(),
            ),
        ):
            report = generate_report()

        assert "| User messages | 3 |" in report
        assert "memory_search" in report

    def test_report_no_previous_session(self):
        sov = self._minimal_sov_data()
        sov["previous_session"] = {"cache_hit_rate": None, "api_cost": None}

        with (
            patch(
                "timmy.sovereignty.session_report._gather_session_data",
                return_value=self._minimal_session_data(),
            ),
            patch(
                "timmy.sovereignty.session_report._gather_sovereignty_data",
                return_value=sov,
            ),
        ):
            report = generate_report()

        assert "No previous session data" in report


# ---------------------------------------------------------------------------
# commit_report
# ---------------------------------------------------------------------------


class TestCommitReport:
    def test_returns_false_when_gitea_disabled(self):
        with patch("timmy.sovereignty.session_report.settings") as mock_settings:
            mock_settings.gitea_enabled = False
            result = commit_report("# test", "dashboard")

        assert result is False

    def test_returns_false_when_no_token(self):
        with patch("timmy.sovereignty.session_report.settings") as mock_settings:
            mock_settings.gitea_enabled = True
            mock_settings.gitea_token = ""
            result = commit_report("# test", "dashboard")

        assert result is False

    def test_creates_file_via_put(self):
        mock_response = MagicMock()
        mock_response.status_code = 201
        mock_response.raise_for_status.return_value = None

        mock_check = MagicMock()
        mock_check.status_code = 404  # file does not exist yet

        mock_client = MagicMock()
        mock_client.__enter__ = MagicMock(return_value=mock_client)
        mock_client.__exit__ = MagicMock(return_value=False)
        mock_client.get.return_value = mock_check
        mock_client.put.return_value = mock_response

        with (
            patch("timmy.sovereignty.session_report.settings") as mock_settings,
            patch("timmy.sovereignty.session_report.httpx.Client", return_value=mock_client),
        ):
            mock_settings.gitea_enabled = True
            mock_settings.gitea_token = "fake-token"
            mock_settings.gitea_url = "http://localhost:3000"
            mock_settings.gitea_repo = "owner/repo"

            result = commit_report("# report content", "dashboard")

        assert result is True
        mock_client.put.assert_called_once()
        call_kwargs = mock_client.put.call_args
        payload = call_kwargs.kwargs.get("json", call_kwargs.args[1] if len(call_kwargs.args) > 1 else {})
        decoded = base64.b64decode(payload["content"]).decode()
        assert "# report content" in decoded

    def test_updates_existing_file_with_sha(self):
        mock_check = MagicMock()
        mock_check.status_code = 200
        mock_check.json.return_value = {"sha": "abc123"}

        mock_response = MagicMock()
        mock_response.raise_for_status.return_value = None

        mock_client = MagicMock()
        mock_client.__enter__ = MagicMock(return_value=mock_client)
        mock_client.__exit__ = MagicMock(return_value=False)
        mock_client.get.return_value = mock_check
        mock_client.put.return_value = mock_response

        with (
            patch("timmy.sovereignty.session_report.settings") as mock_settings,
            patch("timmy.sovereignty.session_report.httpx.Client", return_value=mock_client),
        ):
            mock_settings.gitea_enabled = True
            mock_settings.gitea_token = "fake-token"
            mock_settings.gitea_url = "http://localhost:3000"
            mock_settings.gitea_repo = "owner/repo"

            result = commit_report("# updated", "dashboard")

        assert result is True
        payload = mock_client.put.call_args.kwargs.get("json", {})
        assert payload.get("sha") == "abc123"

    def test_returns_false_on_http_error(self):
        import httpx

        mock_check = MagicMock()
        mock_check.status_code = 404

        mock_client = MagicMock()
        mock_client.__enter__ = MagicMock(return_value=mock_client)
        mock_client.__exit__ = MagicMock(return_value=False)
        mock_client.get.return_value = mock_check
        mock_client.put.side_effect = httpx.HTTPStatusError(
            "403", request=MagicMock(), response=MagicMock(status_code=403)
        )

        with (
            patch("timmy.sovereignty.session_report.settings") as mock_settings,
            patch("timmy.sovereignty.session_report.httpx.Client", return_value=mock_client),
        ):
            mock_settings.gitea_enabled = True
            mock_settings.gitea_token = "fake-token"
            mock_settings.gitea_url = "http://localhost:3000"
            mock_settings.gitea_repo = "owner/repo"

            result = commit_report("# test", "dashboard")

        assert result is False


# ---------------------------------------------------------------------------
# generate_and_commit_report (async)
# ---------------------------------------------------------------------------


class TestGenerateAndCommitReport:
    async def test_returns_true_on_success(self):
        with (
            patch(
                "timmy.sovereignty.session_report.generate_report",
                return_value="# mock report",
            ),
            patch(
                "timmy.sovereignty.session_report.commit_report",
                return_value=True,
            ),
        ):
            result = await generate_and_commit_report("test")

        assert result is True

    async def test_returns_false_when_commit_fails(self):
        with (
            patch(
                "timmy.sovereignty.session_report.generate_report",
                return_value="# mock report",
            ),
            patch(
                "timmy.sovereignty.session_report.commit_report",
                return_value=False,
            ),
        ):
            result = await generate_and_commit_report()

        assert result is False

    async def test_graceful_on_exception(self):
        with patch(
            "timmy.sovereignty.session_report.generate_report",
            side_effect=RuntimeError("explode"),
        ):
            result = await generate_and_commit_report()

        assert result is False
332
tests/timmy/test_three_strike.py
Normal file
332
tests/timmy/test_three_strike.py
Normal file
@@ -0,0 +1,332 @@
|
||||
"""Tests for the three-strike detector.
|
||||
|
||||
Refs: #962
|
||||
"""
|
||||
|
||||
import pytest
|
||||
|
||||
from timmy.sovereignty.three_strike import (
|
||||
CATEGORIES,
|
||||
STRIKE_BLOCK,
|
||||
STRIKE_WARNING,
|
||||
FalseworkChecklist,
|
||||
StrikeRecord,
|
||||
ThreeStrikeError,
|
||||
ThreeStrikeStore,
|
||||
falsework_check,
|
||||
)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def store(tmp_path):
|
||||
"""Isolated store backed by a temp DB."""
|
||||
return ThreeStrikeStore(db_path=tmp_path / "test_strikes.db")
|
||||
|
||||
|
||||
# ── Category constants ────────────────────────────────────────────────────────
|
||||
|
||||
|
||||
class TestCategories:
|
||||
@pytest.mark.unit
|
||||
def test_all_categories_present(self):
|
||||
expected = {
|
||||
"vlm_prompt_edit",
|
||||
"game_bug_review",
|
||||
"parameter_tuning",
|
||||
"portal_adapter_creation",
|
||||
"deployment_step",
|
||||
}
|
||||
assert expected == CATEGORIES
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_strike_thresholds(self):
|
||||
assert STRIKE_WARNING == 2
|
||||
assert STRIKE_BLOCK == 3
|
||||
|
||||
|
||||
# ── ThreeStrikeStore ──────────────────────────────────────────────────────────
|
||||
|
||||
|
||||
class TestThreeStrikeStore:
|
||||
@pytest.mark.unit
|
||||
def test_first_strike_returns_record(self, store):
|
||||
record = store.record("vlm_prompt_edit", "login_button")
|
||||
assert isinstance(record, StrikeRecord)
|
||||
assert record.count == 1
|
||||
assert record.blocked is False
|
||||
assert record.category == "vlm_prompt_edit"
|
||||
assert record.key == "login_button"
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_second_strike_count(self, store):
|
||||
store.record("vlm_prompt_edit", "login_button")
|
||||
record = store.record("vlm_prompt_edit", "login_button")
|
||||
assert record.count == 2
|
||||
assert record.blocked is False
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_third_strike_raises(self, store):
|
||||
store.record("vlm_prompt_edit", "login_button")
|
||||
store.record("vlm_prompt_edit", "login_button")
|
||||
with pytest.raises(ThreeStrikeError) as exc_info:
|
||||
store.record("vlm_prompt_edit", "login_button")
|
||||
err = exc_info.value
|
||||
assert err.category == "vlm_prompt_edit"
|
||||
assert err.key == "login_button"
|
||||
assert err.count == 3
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_fourth_strike_still_raises(self, store):
|
||||
for _ in range(3):
|
||||
try:
|
||||
store.record("deployment_step", "build_docker")
|
||||
except ThreeStrikeError:
|
||||
pass
|
||||
with pytest.raises(ThreeStrikeError):
|
||||
store.record("deployment_step", "build_docker")
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_different_keys_are_independent(self, store):
|
||||
store.record("vlm_prompt_edit", "login_button")
|
||||
store.record("vlm_prompt_edit", "login_button")
|
||||
# Different key — should not be blocked
|
||||
record = store.record("vlm_prompt_edit", "logout_button")
|
||||
assert record.count == 1
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_different_categories_are_independent(self, store):
|
||||
store.record("vlm_prompt_edit", "foo")
|
||||
store.record("vlm_prompt_edit", "foo")
|
||||
# Different category, same key — should not be blocked
|
||||
record = store.record("game_bug_review", "foo")
|
||||
assert record.count == 1
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_invalid_category_raises_value_error(self, store):
|
||||
with pytest.raises(ValueError, match="Unknown category"):
|
||||
store.record("nonexistent_category", "some_key")
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_metadata_stored_in_events(self, store):
|
||||
store.record("parameter_tuning", "learning_rate", metadata={"value": 0.01})
|
||||
events = store.get_events("parameter_tuning", "learning_rate")
|
||||
assert len(events) == 1
|
||||
assert events[0]["metadata"]["value"] == 0.01
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_get_returns_none_for_missing(self, store):
|
||||
assert store.get("vlm_prompt_edit", "not_there") is None
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_get_returns_record(self, store):
|
||||
store.record("vlm_prompt_edit", "submit_btn")
|
||||
record = store.get("vlm_prompt_edit", "submit_btn")
|
||||
assert record is not None
|
||||
assert record.count == 1
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_list_all_empty(self, store):
|
||||
assert store.list_all() == []
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_list_all_returns_records(self, store):
|
||||
store.record("vlm_prompt_edit", "a")
|
||||
store.record("vlm_prompt_edit", "b")
|
||||
records = store.list_all()
|
||||
assert len(records) == 2
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_list_blocked_empty_when_no_strikes(self, store):
|
||||
assert store.list_blocked() == []
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_list_blocked_contains_blocked(self, store):
|
||||
for _ in range(3):
|
||||
try:
|
||||
store.record("deployment_step", "push_image")
|
||||
except ThreeStrikeError:
|
||||
pass
|
||||
blocked = store.list_blocked()
|
||||
assert len(blocked) == 1
|
||||
assert blocked[0].key == "push_image"
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_register_automation_unblocks(self, store):
|
||||
for _ in range(3):
|
||||
try:
|
||||
store.record("deployment_step", "push_image")
|
||||
except ThreeStrikeError:
|
||||
pass
|
||||
|
||||
store.register_automation("deployment_step", "push_image", "scripts/push.sh")
|
||||
|
||||
# Should no longer raise
|
||||
record = store.record("deployment_step", "push_image")
|
||||
assert record.blocked is False
|
||||
assert record.automation == "scripts/push.sh"
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_register_automation_resets_count(self, store):
|
||||
for _ in range(3):
|
||||
try:
|
||||
store.record("deployment_step", "push_image")
|
||||
except ThreeStrikeError:
|
||||
pass
|
||||
|
||||
store.register_automation("deployment_step", "push_image", "scripts/push.sh")
|
||||
|
||||
# register_automation resets count to 0; one new record brings it to 1
|
||||
new_record = store.record("deployment_step", "push_image")
|
||||
assert new_record.count == 1
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_get_events_returns_most_recent_first(self, store):
|
||||
store.record("vlm_prompt_edit", "nav", metadata={"n": 1})
|
||||
store.record("vlm_prompt_edit", "nav", metadata={"n": 2})
|
||||
events = store.get_events("vlm_prompt_edit", "nav")
|
||||
assert len(events) == 2
|
||||
# Most recent first
|
||||
assert events[0]["metadata"]["n"] == 2
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_get_events_respects_limit(self, store):
|
||||
for _ in range(5):
|
||||
try:
|
||||
store.record("vlm_prompt_edit", "el")
|
||||
except ThreeStrikeError:
|
||||
pass
|
||||
events = store.get_events("vlm_prompt_edit", "el", limit=2)
|
||||
assert len(events) == 2
|
||||
|
||||
|
||||
# ── FalseworkChecklist ────────────────────────────────────────────────────────
|
||||
|
||||
|
||||
class TestFalseworkChecklist:
|
||||
@pytest.mark.unit
|
||||
def test_valid_checklist_passes(self):
|
||||
cl = FalseworkChecklist(
|
||||
durable_artifact="embedding vectors",
|
||||
artifact_storage_path="data/embeddings.json",
|
||||
local_rule_or_cache="vlm_cache",
|
||||
will_repeat=False,
|
||||
sovereignty_delta="eliminates repeated call",
|
||||
)
|
||||
assert cl.passed is True
|
||||
assert cl.validate() == []
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_missing_artifact_fails(self):
|
||||
cl = FalseworkChecklist(
|
||||
artifact_storage_path="data/x.json",
|
||||
local_rule_or_cache="cache",
|
||||
will_repeat=False,
|
||||
sovereignty_delta="delta",
|
||||
)
|
||||
errors = cl.validate()
|
||||
assert any("Q1" in e for e in errors)
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_missing_storage_path_fails(self):
|
||||
cl = FalseworkChecklist(
|
||||
durable_artifact="artifact",
|
||||
local_rule_or_cache="cache",
|
||||
will_repeat=False,
|
||||
sovereignty_delta="delta",
|
||||
)
|
||||
errors = cl.validate()
|
||||
assert any("Q2" in e for e in errors)
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_will_repeat_none_fails(self):
|
||||
cl = FalseworkChecklist(
|
||||
durable_artifact="artifact",
|
||||
artifact_storage_path="path",
|
||||
local_rule_or_cache="cache",
|
||||
sovereignty_delta="delta",
|
||||
)
|
||||
errors = cl.validate()
|
||||
assert any("Q4" in e for e in errors)
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_will_repeat_true_requires_elimination_strategy(self):
|
||||
cl = FalseworkChecklist(
|
||||
durable_artifact="artifact",
|
||||
artifact_storage_path="path",
|
||||
local_rule_or_cache="cache",
|
||||
will_repeat=True,
|
||||
sovereignty_delta="delta",
|
||||
)
|
||||
errors = cl.validate()
|
||||
assert any("Q5" in e for e in errors)
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_will_repeat_false_no_elimination_needed(self):
|
||||
cl = FalseworkChecklist(
|
||||
durable_artifact="artifact",
|
||||
artifact_storage_path="path",
|
||||
local_rule_or_cache="cache",
|
||||
will_repeat=False,
|
||||
sovereignty_delta="delta",
|
||||
)
|
||||
errors = cl.validate()
|
||||
assert not any("Q5" in e for e in errors)
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_missing_sovereignty_delta_fails(self):
|
||||
cl = FalseworkChecklist(
|
||||
durable_artifact="artifact",
|
||||
artifact_storage_path="path",
|
||||
local_rule_or_cache="cache",
|
||||
will_repeat=False,
|
||||
)
|
||||
errors = cl.validate()
|
||||
assert any("Q6" in e for e in errors)
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_multiple_missing_fields(self):
|
||||
cl = FalseworkChecklist()
|
||||
errors = cl.validate()
|
||||
# At minimum Q1, Q2, Q3, Q4, Q6 should be flagged
|
||||
assert len(errors) >= 5
|
||||
|
||||
|
||||
# ── falsework_check() helper ──────────────────────────────────────────────────
|
||||
|
||||
|
||||
class TestFalseworkCheck:
|
||||
@pytest.mark.unit
|
||||
def test_raises_on_incomplete_checklist(self):
|
||||
with pytest.raises(ValueError, match="Falsework Checklist incomplete"):
|
||||
falsework_check(FalseworkChecklist())
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_passes_on_complete_checklist(self):
|
||||
cl = FalseworkChecklist(
|
||||
durable_artifact="artifact",
|
||||
artifact_storage_path="path",
|
||||
local_rule_or_cache="cache",
|
||||
will_repeat=False,
|
||||
sovereignty_delta="delta",
|
||||
)
|
||||
falsework_check(cl) # should not raise
|
||||
|
||||
|
||||
# ── ThreeStrikeError ──────────────────────────────────────────────────────────
|
||||
|
||||
|
||||
class TestThreeStrikeError:
|
||||
@pytest.mark.unit
|
||||
def test_attributes(self):
|
||||
err = ThreeStrikeError("vlm_prompt_edit", "foo", 3)
|
||||
assert err.category == "vlm_prompt_edit"
|
||||
assert err.key == "foo"
|
||||
assert err.count == 3
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_message_contains_details(self):
|
||||
err = ThreeStrikeError("deployment_step", "build", 4)
|
||||
msg = str(err)
|
||||
assert "deployment_step" in msg
|
||||
assert "build" in msg
|
||||
assert "4" in msg
|
||||
93
tests/timmy/test_three_strike_routes.py
Normal file
93
tests/timmy/test_three_strike_routes.py
Normal file
@@ -0,0 +1,93 @@
|
||||
"""Integration tests for the three-strike dashboard routes.
|
||||
|
||||
Refs: #962
|
||||
|
||||
Uses unique keys per test (uuid4) so parallel xdist workers and repeated
|
||||
runs never collide on shared SQLite state.
|
||||
"""
|
||||
|
||||
import uuid
|
||||
|
||||
import pytest
|
||||
|
||||
|
||||
def _uid() -> str:
|
||||
"""Return a short unique suffix for test keys."""
|
||||
return uuid.uuid4().hex[:8]
|
||||
|
||||
|
||||
class TestThreeStrikeRoutes:
|
||||
@pytest.mark.unit
|
||||
def test_list_strikes_returns_200(self, client):
|
||||
response = client.get("/sovereignty/three-strike")
|
||||
assert response.status_code == 200
|
||||
data = response.json()
|
||||
assert "records" in data
|
||||
assert "categories" in data
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_list_blocked_returns_200(self, client):
|
||||
response = client.get("/sovereignty/three-strike/blocked")
|
||||
assert response.status_code == 200
|
||||
data = response.json()
|
||||
assert "blocked" in data
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_record_strike_first(self, client):
|
||||
key = f"test_btn_{_uid()}"
|
||||
response = client.post(
|
||||
"/sovereignty/three-strike/record",
|
||||
json={"category": "vlm_prompt_edit", "key": key},
|
||||
)
|
||||
assert response.status_code == 200
|
||||
data = response.json()
|
||||
assert data["count"] == 1
|
||||
assert data["blocked"] is False
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_record_invalid_category_returns_422(self, client):
|
||||
response = client.post(
|
||||
"/sovereignty/three-strike/record",
|
||||
json={"category": "not_a_real_category", "key": "x"},
|
||||
)
|
||||
assert response.status_code == 422
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_third_strike_returns_409(self, client):
|
||||
key = f"push_route_{_uid()}"
|
||||
for _ in range(2):
|
||||
client.post(
|
||||
"/sovereignty/three-strike/record",
|
||||
json={"category": "deployment_step", "key": key},
|
||||
)
|
||||
response = client.post(
|
||||
"/sovereignty/three-strike/record",
|
||||
json={"category": "deployment_step", "key": key},
|
||||
)
|
||||
assert response.status_code == 409
|
||||
data = response.json()
|
||||
assert data["detail"]["error"] == "three_strike_block"
|
||||
assert data["detail"]["count"] == 3
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_register_automation_returns_success(self, client):
|
||||
response = client.post(
|
||||
f"/sovereignty/three-strike/deployment_step/auto_{_uid()}/automation",
|
||||
json={"artifact_path": "scripts/auto.sh"},
|
||||
)
|
||||
assert response.status_code == 200
|
||||
assert response.json()["success"] is True
|
||||
|
||||
@pytest.mark.unit
|
||||
def test_get_events_returns_200(self, client):
|
||||
key = f"events_{_uid()}"
|
||||
client.post(
|
||||
"/sovereignty/three-strike/record",
|
||||
json={"category": "vlm_prompt_edit", "key": key},
|
||||
)
|
||||
response = client.get(f"/sovereignty/three-strike/vlm_prompt_edit/{key}/events")
|
||||
assert response.status_code == 200
|
||||
data = response.json()
|
||||
assert data["category"] == "vlm_prompt_edit"
|
||||
assert data["key"] == key
|
||||
assert len(data["events"]) >= 1
|
||||
297
tests/unit/test_energy_monitor.py
Normal file
297
tests/unit/test_energy_monitor.py
Normal file
@@ -0,0 +1,297 @@
|
||||
"""Unit tests for the Energy Budget Monitor.
|
||||
|
||||
Tests power estimation strategies, inference recording, efficiency scoring,
|
||||
and low power mode logic — all without real subprocesses.
|
||||
|
||||
Refs: #1009
|
||||
"""
|
||||
|
||||
from unittest.mock import MagicMock, patch
|
||||
|
||||
import pytest
|
||||
|
||||
from infrastructure.energy.monitor import (
|
||||
EnergyBudgetMonitor,
|
||||
InferenceSample,
|
||||
_DEFAULT_MODEL_SIZE_GB,
|
||||
_EFFICIENCY_SCORE_CEILING,
|
||||
_WATTS_PER_GB_HEURISTIC,
|
||||
)
|
||||
|
||||
|
||||
@pytest.fixture()
|
||||
def monitor():
|
||||
return EnergyBudgetMonitor()
|
||||
|
||||
|
||||
# ── Model size lookup ─────────────────────────────────────────────────────────
|
||||
|
||||
|
||||
def test_model_size_exact_match(monitor):
|
||||
assert monitor._model_size_gb("qwen3:8b") == 5.5
|
||||
|
||||
|
||||
def test_model_size_substring_match(monitor):
|
||||
assert monitor._model_size_gb("some-qwen3:14b-custom") == 9.0
|
||||
|
||||
|
||||
def test_model_size_unknown_returns_default(monitor):
|
||||
assert monitor._model_size_gb("unknownmodel:99b") == _DEFAULT_MODEL_SIZE_GB
|
||||
|
||||
|
||||
# ── Battery power reading ─────────────────────────────────────────────────────
|
||||
|
||||
|
||||
def test_read_battery_watts_on_battery(monitor):
|
||||
ioreg_output = (
|
||||
"{\n"
|
||||
' "InstantAmperage" = 2500\n'
|
||||
' "Voltage" = 12000\n'
|
||||
' "ExternalConnected" = No\n'
|
||||
"}"
|
||||
)
|
||||
mock_result = MagicMock()
|
||||
mock_result.stdout = ioreg_output
|
||||
|
||||
with patch("subprocess.run", return_value=mock_result):
|
||||
watts = monitor._read_battery_watts()
|
||||
|
||||
# 2500 mA * 12000 mV / 1_000_000 = 30 W
|
||||
assert watts == pytest.approx(30.0, abs=0.01)
|
||||
|
||||
|
||||
def test_read_battery_watts_plugged_in_returns_zero(monitor):
|
||||
ioreg_output = (
|
||||
"{\n"
|
||||
' "InstantAmperage" = 1000\n'
|
||||
' "Voltage" = 12000\n'
|
||||
' "ExternalConnected" = Yes\n'
|
||||
"}"
|
||||
)
|
||||
mock_result = MagicMock()
|
||||
mock_result.stdout = ioreg_output
|
||||
|
||||
with patch("subprocess.run", return_value=mock_result):
|
||||
watts = monitor._read_battery_watts()
|
||||
|
||||
assert watts == 0.0
|
||||
|
||||
|
||||
def test_read_battery_watts_subprocess_failure_raises(monitor):
|
||||
with patch("subprocess.run", side_effect=OSError("no ioreg")):
|
||||
with pytest.raises(OSError):
|
||||
monitor._read_battery_watts()
|
||||
|
||||
|
||||
# ── CPU proxy reading ─────────────────────────────────────────────────────────
|
||||
|
||||
|
||||
def test_read_cpu_pct_parses_top(monitor):
|
||||
top_output = (
|
||||
"Processes: 450 total\n"
|
||||
"CPU usage: 15.2% user, 8.8% sys, 76.0% idle\n"
|
||||
)
|
||||
mock_result = MagicMock()
|
||||
mock_result.stdout = top_output
|
||||
|
||||
with patch("subprocess.run", return_value=mock_result):
|
||||
pct = monitor._read_cpu_pct()
|
||||
|
||||
assert pct == pytest.approx(24.0, abs=0.1)
|
||||
|
||||
|
||||
def test_read_cpu_pct_no_match_returns_negative(monitor):
|
||||
mock_result = MagicMock()
|
||||
mock_result.stdout = "No CPU line here\n"
|
||||
|
||||
with patch("subprocess.run", return_value=mock_result):
|
||||
pct = monitor._read_cpu_pct()
|
||||
|
||||
assert pct == -1.0
|
||||
|
||||
|
||||
# ── Power strategy selection ──────────────────────────────────────────────────
|
||||
|
||||
|
||||
def test_read_power_uses_battery_first(monitor):
|
||||
with patch.object(monitor, "_read_battery_watts", return_value=25.0):
|
||||
watts, strategy = monitor._read_power()
|
||||
|
||||
assert watts == 25.0
|
||||
assert strategy == "battery"
|
||||
|
||||
|
||||
def test_read_power_falls_back_to_cpu_proxy(monitor):
|
||||
with (
|
||||
patch.object(monitor, "_read_battery_watts", return_value=0.0),
|
||||
patch.object(monitor, "_read_cpu_pct", return_value=50.0),
|
||||
):
|
||||
watts, strategy = monitor._read_power()
|
||||
|
||||
assert strategy == "cpu_proxy"
|
||||
assert watts == pytest.approx(20.0, abs=0.1) # 50% of 40W TDP
|
||||
|
||||
|
||||
def test_read_power_unavailable_when_both_fail(monitor):
|
||||
with (
|
||||
patch.object(monitor, "_read_battery_watts", side_effect=OSError),
|
||||
patch.object(monitor, "_read_cpu_pct", return_value=-1.0),
|
||||
):
|
||||
watts, strategy = monitor._read_power()
|
||||
|
||||
assert strategy == "unavailable"
|
||||
assert watts == 0.0
|
||||
|
||||
|
||||
# ── Inference recording ───────────────────────────────────────────────────────
|
||||
|
||||
|
||||
def test_record_inference_produces_sample(monitor):
|
||||
monitor._cached_watts = 10.0
|
||||
monitor._cache_ts = 9999999999.0 # far future — cache won't expire
|
||||
|
||||
sample = monitor.record_inference("qwen3:8b", tokens_per_second=40.0)
|
||||
|
||||
assert isinstance(sample, InferenceSample)
|
||||
assert sample.model == "qwen3:8b"
|
||||
assert sample.tokens_per_second == 40.0
|
||||
assert sample.estimated_watts == pytest.approx(10.0)
|
||||
# efficiency = 40 / 10 = 4.0 tok/s per W
|
||||
assert sample.efficiency == pytest.approx(4.0)
|
||||
# score = min(10, (4.0 / 5.0) * 10) = 8.0
|
||||
assert sample.efficiency_score == pytest.approx(8.0)
|
||||
|
||||
|
||||
def test_record_inference_stores_in_history(monitor):
|
||||
monitor._cached_watts = 5.0
|
||||
monitor._cache_ts = 9999999999.0
|
||||
|
||||
monitor.record_inference("qwen3:8b", 30.0)
|
||||
monitor.record_inference("qwen3:14b", 20.0)
|
||||
|
||||
assert len(monitor._samples) == 2
|
||||
|
||||
|
||||
def test_record_inference_auto_activates_low_power(monitor):
|
||||
monitor._cached_watts = 20.0 # above default 15W threshold
|
||||
monitor._cache_ts = 9999999999.0
|
||||
|
||||
assert not monitor.low_power_mode
|
||||
monitor.record_inference("qwen3:30b", 8.0)
|
||||
assert monitor.low_power_mode
|
||||
|
||||
|
||||
def test_record_inference_no_auto_low_power_below_threshold(monitor):
|
||||
monitor._cached_watts = 10.0 # below default 15W threshold
|
||||
monitor._cache_ts = 9999999999.0
|
||||
|
||||
monitor.record_inference("qwen3:8b", 40.0)
|
||||
assert not monitor.low_power_mode
|
||||
|
||||
|
||||
# ── Efficiency score ──────────────────────────────────────────────────────────
|
||||
|
||||
|
||||
def test_efficiency_score_caps_at_10(monitor):
|
||||
monitor._cached_watts = 1.0
|
||||
monitor._cache_ts = 9999999999.0
|
||||
|
||||
sample = monitor.record_inference("qwen3:1b", tokens_per_second=1000.0)
|
||||
assert sample.efficiency_score == pytest.approx(10.0)
|
||||
|
||||
|
||||
def test_efficiency_score_no_samples_returns_negative_one(monitor):
|
||||
assert monitor._compute_mean_efficiency_score() == -1.0
|
||||
|
||||
|
||||
def test_mean_efficiency_score_averages_last_10(monitor):
|
||||
monitor._cached_watts = 10.0
|
||||
monitor._cache_ts = 9999999999.0
|
||||
|
||||
for _ in range(15):
|
||||
monitor.record_inference("qwen3:8b", tokens_per_second=25.0) # efficiency=2.5 → score=5.0
|
||||
|
||||
score = monitor._compute_mean_efficiency_score()
|
||||
assert score == pytest.approx(5.0, abs=0.01)
|
||||
|
||||
|
||||
# ── Low power mode ────────────────────────────────────────────────────────────
|
||||
|
||||
|
||||
def test_set_low_power_mode_toggle(monitor):
|
||||
assert not monitor.low_power_mode
|
||||
monitor.set_low_power_mode(True)
|
||||
assert monitor.low_power_mode
|
||||
monitor.set_low_power_mode(False)
|
||||
assert not monitor.low_power_mode
|
||||
|
||||
|
||||
# ── get_report ────────────────────────────────────────────────────────────────
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_get_report_structure(monitor):
|
||||
with patch.object(monitor, "_read_power", return_value=(8.0, "battery")):
|
||||
report = await monitor.get_report()
|
||||
|
||||
assert report.timestamp
|
||||
assert isinstance(report.low_power_mode, bool)
|
||||
assert isinstance(report.current_watts, float)
|
||||
assert report.strategy in ("battery", "cpu_proxy", "heuristic", "unavailable")
|
||||
assert isinstance(report.recommendation, str)
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_get_report_to_dict(monitor):
|
||||
with patch.object(monitor, "_read_power", return_value=(5.0, "cpu_proxy")):
|
||||
report = await monitor.get_report()
|
||||
|
||||
data = report.to_dict()
|
||||
assert "timestamp" in data
|
||||
assert "low_power_mode" in data
|
||||
assert "current_watts" in data
|
||||
assert "strategy" in data
|
||||
assert "efficiency_score" in data
|
||||
assert "recent_samples" in data
|
||||
assert "recommendation" in data
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_get_report_caches_power_reading(monitor):
|
||||
call_count = 0
|
||||
|
||||
def counting_read_power():
|
||||
nonlocal call_count
|
||||
call_count += 1
|
||||
return (10.0, "battery")
|
||||
|
||||
with patch.object(monitor, "_read_power", side_effect=counting_read_power):
|
||||
await monitor.get_report()
|
||||
await monitor.get_report()
|
||||
|
||||
# Cache TTL is 10s — should only call once
|
||||
assert call_count == 1
|
||||
|
||||
|
||||
# ── Recommendation text ───────────────────────────────────────────────────────
|
||||
|
||||
|
||||
def test_recommendation_no_data(monitor):
|
||||
rec = monitor._build_recommendation(-1.0)
|
||||
assert "No inference data" in rec
|
||||
|
||||
|
||||
def test_recommendation_low_power_mode(monitor):
|
||||
monitor.set_low_power_mode(True)
|
||||
rec = monitor._build_recommendation(2.0)
|
||||
assert "Low power mode active" in rec
|
||||
|
||||
|
||||
def test_recommendation_low_efficiency(monitor):
|
||||
rec = monitor._build_recommendation(1.5)
|
||||
assert "Low efficiency" in rec
|
||||
|
||||
|
||||
def test_recommendation_good_efficiency(monitor):
|
||||
rec = monitor._build_recommendation(8.0)
|
||||
assert "Good efficiency" in rec
|
||||
576
tests/unit/test_paperclip.py
Normal file
576
tests/unit/test_paperclip.py
Normal file
@@ -0,0 +1,576 @@
|
||||
"""Unit tests for src/timmy/paperclip.py.
|
||||
|
||||
Refs #1236
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import asyncio
|
||||
import sys
|
||||
from types import ModuleType
|
||||
from unittest.mock import AsyncMock, MagicMock, patch
|
||||
|
||||
import httpx
|
||||
import pytest
|
||||
|
||||
# ── Stub serpapi before any import of paperclip (it imports research_tools) ───
|
||||
|
||||
_serpapi_stub = ModuleType("serpapi")
|
||||
_google_search_mock = MagicMock()
|
||||
_serpapi_stub.GoogleSearch = _google_search_mock
|
||||
sys.modules.setdefault("serpapi", _serpapi_stub)
|
||||
|
||||
pytestmark = pytest.mark.unit
|
||||
|
||||
|
||||
# ── PaperclipTask ─────────────────────────────────────────────────────────────
|
||||
|
||||
|
||||
class TestPaperclipTask:
|
||||
"""PaperclipTask dataclass holds task data."""
|
||||
|
||||
def test_task_creation(self):
|
||||
from timmy.paperclip import PaperclipTask
|
||||
|
||||
task = PaperclipTask(id="task-123", kind="research", context={"key": "value"})
|
||||
assert task.id == "task-123"
|
||||
assert task.kind == "research"
|
||||
assert task.context == {"key": "value"}
|
||||
|
||||
def test_task_creation_empty_context(self):
|
||||
from timmy.paperclip import PaperclipTask
|
||||
|
||||
task = PaperclipTask(id="task-456", kind="other", context={})
|
||||
assert task.id == "task-456"
|
||||
assert task.kind == "other"
|
||||
assert task.context == {}
|
||||
|
||||
|
||||
# ── PaperclipClient ───────────────────────────────────────────────────────────
|
||||
|
||||
|
||||
class TestPaperclipClient:
|
||||
"""PaperclipClient interacts with the Paperclip API."""
|
||||
|
||||
def test_init_uses_settings(self):
|
||||
from timmy.paperclip import PaperclipClient
|
||||
|
||||
with patch("timmy.paperclip.settings") as mock_settings:
|
||||
mock_settings.paperclip_url = "http://test.example:3100"
|
||||
mock_settings.paperclip_api_key = "test-api-key"
|
||||
mock_settings.paperclip_agent_id = "agent-123"
|
||||
mock_settings.paperclip_company_id = "company-456"
|
||||
mock_settings.paperclip_timeout = 45
|
||||
|
||||
client = PaperclipClient()
|
||||
assert client.base_url == "http://test.example:3100"
|
||||
assert client.api_key == "test-api-key"
|
||||
assert client.agent_id == "agent-123"
|
||||
assert client.company_id == "company-456"
|
||||
assert client.timeout == 45
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_get_tasks_makes_correct_request(self):
|
||||
from timmy.paperclip import PaperclipClient
|
||||
|
||||
with patch("timmy.paperclip.settings") as mock_settings:
|
||||
mock_settings.paperclip_url = "http://test.example:3100"
|
||||
mock_settings.paperclip_api_key = "test-api-key"
|
||||
mock_settings.paperclip_agent_id = "agent-123"
|
||||
mock_settings.paperclip_company_id = "company-456"
|
||||
mock_settings.paperclip_timeout = 30
|
||||
|
||||
client = PaperclipClient()
|
||||
|
||||
mock_response = MagicMock()
|
||||
mock_response.json.return_value = [
|
||||
{"id": "task-1", "kind": "research", "context": {"issue_number": 42}},
|
||||
{"id": "task-2", "kind": "other", "context": {}},
|
||||
]
|
||||
|
||||
mock_client = AsyncMock()
|
||||
mock_client.__aenter__ = AsyncMock(return_value=mock_client)
|
||||
mock_client.__aexit__ = AsyncMock(return_value=False)
|
||||
mock_client.get = AsyncMock(return_value=mock_response)
|
||||
|
||||
with patch("httpx.AsyncClient", return_value=mock_client):
|
||||
tasks = await client.get_tasks()
|
||||
|
||||
mock_client.get.assert_called_once_with(
|
||||
"http://test.example:3100/api/tasks",
|
||||
headers={"Authorization": "Bearer test-api-key"},
|
||||
params={
|
||||
"agent_id": "agent-123",
|
||||
"company_id": "company-456",
|
||||
"status": "queued",
|
||||
},
|
||||
)
|
||||
mock_response.raise_for_status.assert_called_once()
|
||||
assert len(tasks) == 2
|
||||
assert tasks[0].id == "task-1"
|
||||
assert tasks[0].kind == "research"
|
||||
assert tasks[1].id == "task-2"
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_get_tasks_empty_response(self):
|
||||
from timmy.paperclip import PaperclipClient
|
||||
|
||||
with patch("timmy.paperclip.settings") as mock_settings:
|
||||
mock_settings.paperclip_url = "http://test.example:3100"
|
||||
mock_settings.paperclip_api_key = "test-api-key"
|
||||
mock_settings.paperclip_agent_id = "agent-123"
|
||||
mock_settings.paperclip_company_id = "company-456"
|
||||
mock_settings.paperclip_timeout = 30
|
||||
|
||||
client = PaperclipClient()
|
||||
|
||||
mock_response = MagicMock()
|
||||
mock_response.json.return_value = []
|
||||
|
||||
mock_client = AsyncMock()
|
||||
mock_client.__aenter__ = AsyncMock(return_value=mock_client)
|
||||
mock_client.__aexit__ = AsyncMock(return_value=False)
|
||||
mock_client.get = AsyncMock(return_value=mock_response)
|
||||
|
||||
with patch("httpx.AsyncClient", return_value=mock_client):
|
||||
tasks = await client.get_tasks()
|
||||
|
||||
assert tasks == []
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_get_tasks_raises_on_http_error(self):
|
||||
from timmy.paperclip import PaperclipClient
|
||||
|
||||
with patch("timmy.paperclip.settings") as mock_settings:
|
||||
mock_settings.paperclip_url = "http://test.example:3100"
|
||||
mock_settings.paperclip_api_key = "test-api-key"
|
||||
mock_settings.paperclip_agent_id = "agent-123"
|
||||
mock_settings.paperclip_company_id = "company-456"
|
||||
mock_settings.paperclip_timeout = 30
|
||||
|
||||
client = PaperclipClient()
|
||||
|
||||
mock_client = AsyncMock()
|
||||
mock_client.__aenter__ = AsyncMock(return_value=mock_client)
|
||||
mock_client.__aexit__ = AsyncMock(return_value=False)
|
||||
mock_client.get = AsyncMock(side_effect=httpx.HTTPError("Connection failed"))
|
||||
|
||||
with patch("httpx.AsyncClient", return_value=mock_client):
|
||||
with pytest.raises(httpx.HTTPError):
|
||||
await client.get_tasks()
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_update_task_status_makes_correct_request(self):
|
||||
from timmy.paperclip import PaperclipClient
|
||||
|
||||
with patch("timmy.paperclip.settings") as mock_settings:
|
||||
mock_settings.paperclip_url = "http://test.example:3100"
|
||||
mock_settings.paperclip_api_key = "test-api-key"
|
||||
mock_settings.paperclip_timeout = 30
|
||||
|
||||
client = PaperclipClient()
|
||||
|
||||
mock_client = AsyncMock()
|
||||
mock_client.__aenter__ = AsyncMock(return_value=mock_client)
|
||||
mock_client.__aexit__ = AsyncMock(return_value=False)
|
||||
mock_client.patch = AsyncMock(return_value=MagicMock())
|
||||
|
||||
with patch("httpx.AsyncClient", return_value=mock_client):
|
||||
await client.update_task_status("task-123", "completed", "Task result here")
|
||||
|
||||
mock_client.patch.assert_called_once_with(
|
||||
"http://test.example:3100/api/tasks/task-123",
|
||||
headers={"Authorization": "Bearer test-api-key"},
|
||||
json={"status": "completed", "result": "Task result here"},
|
||||
)
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_update_task_status_without_result(self):
|
||||
from timmy.paperclip import PaperclipClient
|
||||
|
||||
with patch("timmy.paperclip.settings") as mock_settings:
|
||||
mock_settings.paperclip_url = "http://test.example:3100"
|
||||
mock_settings.paperclip_api_key = "test-api-key"
|
||||
mock_settings.paperclip_timeout = 30
|
||||
|
||||
client = PaperclipClient()
|
||||
|
||||
mock_client = AsyncMock()
|
||||
mock_client.__aenter__ = AsyncMock(return_value=mock_client)
|
||||
mock_client.__aexit__ = AsyncMock(return_value=False)
|
||||
mock_client.patch = AsyncMock(return_value=MagicMock())
|
||||
|
||||
with patch("httpx.AsyncClient", return_value=mock_client):
|
||||
await client.update_task_status("task-123", "running")
|
||||
|
||||
mock_client.patch.assert_called_once_with(
|
||||
"http://test.example:3100/api/tasks/task-123",
|
||||
headers={"Authorization": "Bearer test-api-key"},
|
||||
json={"status": "running", "result": None},
|
||||
)
|
||||
|
||||
|
||||
# ── ResearchOrchestrator ───────────────────────────────────────────────────────
|
||||
|
||||
|
||||
class TestResearchOrchestrator:
|
||||
"""ResearchOrchestrator coordinates research tasks."""
|
||||
|
||||
def test_init_creates_instances(self):
|
||||
from timmy.paperclip import ResearchOrchestrator
|
||||
|
||||
orchestrator = ResearchOrchestrator()
|
||||
assert orchestrator is not None
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_get_gitea_issue_makes_correct_request(self):
|
||||
from timmy.paperclip import ResearchOrchestrator
|
||||
|
||||
with patch("timmy.paperclip.settings") as mock_settings:
|
||||
mock_settings.gitea_repo = "owner/repo"
|
||||
mock_settings.gitea_url = "http://gitea.example:3000"
|
||||
mock_settings.gitea_token = "gitea-token"
|
||||
|
||||
orchestrator = ResearchOrchestrator()
|
||||
|
||||
mock_response = MagicMock()
|
||||
mock_response.json.return_value = {"number": 42, "title": "Test Issue"}
|
||||
|
||||
mock_client = AsyncMock()
|
||||
mock_client.__aenter__ = AsyncMock(return_value=mock_client)
|
||||
mock_client.__aexit__ = AsyncMock(return_value=False)
|
||||
mock_client.get = AsyncMock(return_value=mock_response)
|
||||
|
||||
with patch("httpx.AsyncClient", return_value=mock_client):
|
||||
issue = await orchestrator.get_gitea_issue(42)
|
||||
|
||||
mock_client.get.assert_called_once_with(
|
||||
"http://gitea.example:3000/api/v1/repos/owner/repo/issues/42",
|
||||
headers={"Authorization": "token gitea-token"},
|
||||
)
|
||||
mock_response.raise_for_status.assert_called_once()
|
||||
assert issue["number"] == 42
|
||||
assert issue["title"] == "Test Issue"
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_get_gitea_issue_raises_on_http_error(self):
|
||||
from timmy.paperclip import ResearchOrchestrator
|
||||
|
||||
with patch("timmy.paperclip.settings") as mock_settings:
|
||||
mock_settings.gitea_repo = "owner/repo"
|
||||
mock_settings.gitea_url = "http://gitea.example:3000"
|
||||
mock_settings.gitea_token = "gitea-token"
|
||||
|
||||
orchestrator = ResearchOrchestrator()
|
||||
|
||||
mock_client = AsyncMock()
|
||||
mock_client.__aenter__ = AsyncMock(return_value=mock_client)
|
||||
mock_client.__aexit__ = AsyncMock(return_value=False)
|
||||
mock_client.get = AsyncMock(side_effect=httpx.HTTPError("Not found"))
|
||||
|
||||
with patch("httpx.AsyncClient", return_value=mock_client):
|
||||
with pytest.raises(httpx.HTTPError):
|
||||
await orchestrator.get_gitea_issue(999)
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_post_gitea_comment_makes_correct_request(self):
|
||||
from timmy.paperclip import ResearchOrchestrator
|
||||
|
||||
with patch("timmy.paperclip.settings") as mock_settings:
|
||||
mock_settings.gitea_repo = "owner/repo"
|
||||
mock_settings.gitea_url = "http://gitea.example:3000"
|
||||
mock_settings.gitea_token = "gitea-token"
|
||||
|
||||
orchestrator = ResearchOrchestrator()
|
||||
|
||||
mock_client = AsyncMock()
|
||||
mock_client.__aenter__ = AsyncMock(return_value=mock_client)
|
||||
mock_client.__aexit__ = AsyncMock(return_value=False)
|
||||
mock_client.post = AsyncMock(return_value=MagicMock())
|
||||
|
||||
with patch("httpx.AsyncClient", return_value=mock_client):
|
||||
await orchestrator.post_gitea_comment(42, "Test comment body")
|
||||
|
||||
mock_client.post.assert_called_once_with(
|
||||
"http://gitea.example:3000/api/v1/repos/owner/repo/issues/42/comments",
|
||||
headers={"Authorization": "token gitea-token"},
|
||||
json={"body": "Test comment body"},
|
||||
)
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_run_research_pipeline_returns_report(self):
|
||||
from timmy.paperclip import ResearchOrchestrator
|
||||
|
||||
orchestrator = ResearchOrchestrator()
|
||||
|
||||
mock_search_results = "Search result 1\nSearch result 2"
|
||||
mock_llm_response = MagicMock()
|
||||
mock_llm_response.text = "Research report summary"
|
||||
|
||||
mock_llm_client = MagicMock()
|
||||
mock_llm_client.completion = AsyncMock(return_value=mock_llm_response)
|
||||
|
||||
with patch(
|
||||
"timmy.paperclip.google_web_search", new=AsyncMock(return_value=mock_search_results)
|
||||
):
|
||||
with patch("timmy.paperclip.get_llm_client", return_value=mock_llm_client):
|
||||
report = await orchestrator.run_research_pipeline("test query")
|
||||
|
||||
assert report == "Research report summary"
|
||||
mock_llm_client.completion.assert_called_once()
|
||||
call_args = mock_llm_client.completion.call_args
|
||||
# The prompt is passed as first positional arg, check it contains expected content
|
||||
prompt = call_args[0][0] if call_args[0] else call_args[1].get("messages", [""])[0]
|
||||
assert "Summarize" in prompt
|
||||
assert "Search result 1" in prompt
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_run_returns_error_when_missing_issue_number(self):
|
||||
from timmy.paperclip import ResearchOrchestrator
|
||||
|
||||
orchestrator = ResearchOrchestrator()
|
||||
result = await orchestrator.run({})
|
||||
assert result == "Missing issue_number in task context"
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_run_executes_full_pipeline_with_triage_results(self):
|
||||
from timmy.paperclip import ResearchOrchestrator
|
||||
|
||||
with patch("timmy.paperclip.settings") as mock_settings:
|
||||
mock_settings.gitea_repo = "owner/repo"
|
||||
mock_settings.gitea_url = "http://gitea.example:3000"
|
||||
mock_settings.gitea_token = "gitea-token"
|
||||
|
||||
orchestrator = ResearchOrchestrator()
|
||||
|
||||
mock_issue = {"number": 42, "title": "Test Research Topic"}
|
||||
mock_report = "Research report content"
|
||||
mock_triage_results = [
|
||||
{
|
||||
"action_item": MagicMock(title="Action 1"),
|
||||
"gitea_issue": {"number": 101},
|
||||
},
|
||||
{
|
||||
"action_item": MagicMock(title="Action 2"),
|
||||
"gitea_issue": {"number": 102},
|
||||
},
|
||||
]
|
||||
|
||||
orchestrator.get_gitea_issue = AsyncMock(return_value=mock_issue)
|
||||
orchestrator.run_research_pipeline = AsyncMock(return_value=mock_report)
|
||||
orchestrator.post_gitea_comment = AsyncMock()
|
||||
|
||||
with patch(
|
||||
"timmy.paperclip.triage_research_report",
|
||||
new=AsyncMock(return_value=mock_triage_results),
|
||||
):
|
||||
result = await orchestrator.run({"issue_number": 42})
|
||||
|
||||
assert "Research complete for issue #42" in result
|
||||
orchestrator.get_gitea_issue.assert_called_once_with(42)
|
||||
orchestrator.run_research_pipeline.assert_called_once_with("Test Research Topic")
|
||||
orchestrator.post_gitea_comment.assert_called_once()
|
||||
comment_body = orchestrator.post_gitea_comment.call_args[0][1]
|
||||
assert "Research complete for issue #42" in comment_body
|
||||
assert "#101" in comment_body
|
||||
assert "#102" in comment_body
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_run_executes_full_pipeline_without_triage_results(self):
|
||||
from timmy.paperclip import ResearchOrchestrator
|
||||
|
||||
with patch("timmy.paperclip.settings") as mock_settings:
|
||||
mock_settings.gitea_repo = "owner/repo"
|
||||
mock_settings.gitea_url = "http://gitea.example:3000"
|
||||
mock_settings.gitea_token = "gitea-token"
|
||||
|
||||
orchestrator = ResearchOrchestrator()
|
||||
|
||||
mock_issue = {"number": 42, "title": "Test Research Topic"}
|
||||
mock_report = "Research report content"
|
||||
|
||||
orchestrator.get_gitea_issue = AsyncMock(return_value=mock_issue)
|
||||
orchestrator.run_research_pipeline = AsyncMock(return_value=mock_report)
|
||||
orchestrator.post_gitea_comment = AsyncMock()
|
||||
|
||||
with patch("timmy.paperclip.triage_research_report", new=AsyncMock(return_value=[])):
|
||||
result = await orchestrator.run({"issue_number": 42})
|
||||
|
||||
assert "Research complete for issue #42" in result
|
||||
comment_body = orchestrator.post_gitea_comment.call_args[0][1]
|
||||
assert "No new issues were created" in comment_body
|
||||
|
||||
|
||||
# ── PaperclipPoller ────────────────────────────────────────────────────────────
|
||||
|
||||
|
||||
class TestPaperclipPoller:
|
||||
"""PaperclipPoller polls for and executes tasks."""
|
||||
|
||||
def test_init_creates_client_and_orchestrator(self):
|
||||
from timmy.paperclip import PaperclipPoller
|
||||
|
||||
with patch("timmy.paperclip.settings") as mock_settings:
|
||||
mock_settings.paperclip_poll_interval = 60
|
||||
|
||||
poller = PaperclipPoller()
|
||||
assert poller.client is not None
|
||||
assert poller.orchestrator is not None
|
||||
assert poller.poll_interval == 60
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_poll_returns_early_when_disabled(self):
|
||||
from timmy.paperclip import PaperclipPoller
|
||||
|
||||
with patch("timmy.paperclip.settings") as mock_settings:
|
||||
mock_settings.paperclip_poll_interval = 0
|
||||
|
||||
poller = PaperclipPoller()
|
||||
poller.client.get_tasks = AsyncMock()
|
||||
|
||||
await poller.poll()
|
||||
|
||||
poller.client.get_tasks.assert_not_called()
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_poll_processes_research_tasks(self):
|
||||
from timmy.paperclip import PaperclipPoller, PaperclipTask
|
||||
|
||||
with patch("timmy.paperclip.settings") as mock_settings:
|
||||
mock_settings.paperclip_poll_interval = 1
|
||||
|
||||
poller = PaperclipPoller()
|
||||
|
||||
mock_task = PaperclipTask(id="task-1", kind="research", context={"issue_number": 42})
|
||||
poller.client.get_tasks = AsyncMock(return_value=[mock_task])
|
||||
poller.run_research_task = AsyncMock()
|
||||
|
||||
# Stop after first iteration
|
||||
call_count = 0
|
||||
|
||||
async def mock_sleep(duration):
|
||||
nonlocal call_count
|
||||
call_count += 1
|
||||
if call_count >= 1:
|
||||
raise asyncio.CancelledError("Stop the loop")
|
||||
|
||||
import asyncio
|
||||
|
||||
with patch("asyncio.sleep", mock_sleep):
|
||||
with pytest.raises(asyncio.CancelledError):
|
||||
await poller.poll()
|
||||
|
||||
poller.client.get_tasks.assert_called_once()
|
||||
poller.run_research_task.assert_called_once_with(mock_task)
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_poll_logs_http_error_and_continues(self, caplog):
|
||||
import logging
|
||||
|
||||
from timmy.paperclip import PaperclipPoller
|
||||
|
||||
with patch("timmy.paperclip.settings") as mock_settings:
|
||||
mock_settings.paperclip_poll_interval = 1
|
||||
|
||||
poller = PaperclipPoller()
|
||||
poller.client.get_tasks = AsyncMock(side_effect=httpx.HTTPError("Connection failed"))
|
||||
|
||||
call_count = 0
|
||||
|
||||
async def mock_sleep(duration):
|
||||
nonlocal call_count
|
||||
call_count += 1
|
||||
if call_count >= 1:
|
||||
raise asyncio.CancelledError("Stop the loop")
|
||||
|
||||
with patch("asyncio.sleep", mock_sleep):
|
||||
with caplog.at_level(logging.WARNING, logger="timmy.paperclip"):
|
||||
with pytest.raises(asyncio.CancelledError):
|
||||
await poller.poll()
|
||||
|
||||
assert any("Error polling Paperclip" in rec.message for rec in caplog.records)
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_run_research_task_success(self):
|
||||
from timmy.paperclip import PaperclipPoller, PaperclipTask
|
||||
|
||||
poller = PaperclipPoller()
|
||||
|
||||
mock_task = PaperclipTask(id="task-1", kind="research", context={"issue_number": 42})
|
||||
|
||||
poller.client.update_task_status = AsyncMock()
|
||||
poller.orchestrator.run = AsyncMock(return_value="Research completed successfully")
|
||||
|
||||
await poller.run_research_task(mock_task)
|
||||
|
||||
assert poller.client.update_task_status.call_count == 2
|
||||
poller.client.update_task_status.assert_any_call("task-1", "running")
|
||||
poller.client.update_task_status.assert_any_call(
|
||||
"task-1", "completed", "Research completed successfully"
|
||||
)
|
||||
poller.orchestrator.run.assert_called_once_with({"issue_number": 42})
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_run_research_task_failure(self, caplog):
|
||||
import logging
|
||||
|
||||
from timmy.paperclip import PaperclipPoller, PaperclipTask
|
||||
|
||||
poller = PaperclipPoller()
|
||||
|
||||
mock_task = PaperclipTask(id="task-1", kind="research", context={"issue_number": 42})
|
||||
|
||||
poller.client.update_task_status = AsyncMock()
|
||||
poller.orchestrator.run = AsyncMock(side_effect=Exception("Something went wrong"))
|
||||
|
||||
with caplog.at_level(logging.ERROR, logger="timmy.paperclip"):
|
||||
await poller.run_research_task(mock_task)
|
||||
|
||||
assert poller.client.update_task_status.call_count == 2
|
||||
poller.client.update_task_status.assert_any_call("task-1", "running")
|
||||
poller.client.update_task_status.assert_any_call("task-1", "failed", "Something went wrong")
|
||||
assert any("Error running research task" in rec.message for rec in caplog.records)
|
||||
|
||||
|
||||
# ── start_paperclip_poller ─────────────────────────────────────────────────────
|
||||
|
||||
|
||||
class TestStartPaperclipPoller:
|
||||
"""start_paperclip_poller creates and starts the poller."""
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_starts_poller_when_enabled(self):
|
||||
from timmy.paperclip import start_paperclip_poller
|
||||
|
||||
with patch("timmy.paperclip.settings") as mock_settings:
|
||||
mock_settings.paperclip_enabled = True
|
||||
|
||||
mock_poller = MagicMock()
|
||||
mock_poller.poll = AsyncMock()
|
||||
|
||||
created_tasks = []
|
||||
original_create_task = asyncio.create_task
|
||||
|
||||
def capture_create_task(coro):
|
||||
created_tasks.append(coro)
|
||||
return original_create_task(coro)
|
||||
|
||||
with patch("timmy.paperclip.PaperclipPoller", return_value=mock_poller):
|
||||
with patch("asyncio.create_task", side_effect=capture_create_task):
|
||||
await start_paperclip_poller()
|
||||
|
||||
assert len(created_tasks) == 1
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_does_nothing_when_disabled(self):
|
||||
from timmy.paperclip import start_paperclip_poller
|
||||
|
||||
with patch("timmy.paperclip.settings") as mock_settings:
|
||||
mock_settings.paperclip_enabled = False
|
||||
|
||||
with patch("timmy.paperclip.PaperclipPoller") as mock_poller_class:
|
||||
with patch("asyncio.create_task") as mock_create_task:
|
||||
await start_paperclip_poller()
|
||||
|
||||
mock_poller_class.assert_not_called()
|
||||
mock_create_task.assert_not_called()
|
||||
149
tests/unit/test_research_tools.py
Normal file
149
tests/unit/test_research_tools.py
Normal file
@@ -0,0 +1,149 @@
|
||||
"""Unit tests for src/timmy/research_tools.py.
|
||||
|
||||
Refs #1237
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import sys
|
||||
from types import ModuleType
|
||||
from unittest.mock import MagicMock, patch
|
||||
|
||||
import pytest
|
||||
|
||||
pytestmark = pytest.mark.unit
|
||||
|
||||
# ── Stub serpapi before any import of research_tools ─────────────────────────
|
||||
|
||||
_serpapi_stub = ModuleType("serpapi")
|
||||
_google_search_mock = MagicMock()
|
||||
_serpapi_stub.GoogleSearch = _google_search_mock
|
||||
sys.modules.setdefault("serpapi", _serpapi_stub)
|
||||
|
||||
|
||||
# ── google_web_search ─────────────────────────────────────────────────────────
|
||||
|
||||
|
||||
class TestGoogleWebSearch:
|
||||
"""google_web_search returns results or degrades gracefully."""
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_returns_empty_string_when_no_api_key(self, monkeypatch):
|
||||
monkeypatch.delenv("SERPAPI_API_KEY", raising=False)
|
||||
from timmy.research_tools import google_web_search
|
||||
|
||||
result = await google_web_search("test query")
|
||||
assert result == ""
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_logs_warning_when_no_api_key(self, monkeypatch, caplog):
|
||||
import logging
|
||||
|
||||
monkeypatch.delenv("SERPAPI_API_KEY", raising=False)
|
||||
from timmy.research_tools import google_web_search
|
||||
|
||||
with caplog.at_level(logging.WARNING, logger="timmy.research_tools"):
|
||||
await google_web_search("test query")
|
||||
|
||||
assert any("SERPAPI_API_KEY" in rec.message for rec in caplog.records)
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_calls_google_search_with_api_key(self, monkeypatch):
|
||||
monkeypatch.setenv("SERPAPI_API_KEY", "fake-key-123")
|
||||
|
||||
mock_instance = MagicMock()
|
||||
mock_instance.get_dict.return_value = {"organic_results": [{"title": "Result"}]}
|
||||
|
||||
with patch("timmy.research_tools.GoogleSearch", return_value=mock_instance) as mock_cls:
|
||||
from timmy.research_tools import google_web_search
|
||||
|
||||
result = await google_web_search("hello world")
|
||||
|
||||
mock_cls.assert_called_once()
|
||||
call_params = mock_cls.call_args[0][0]
|
||||
assert call_params["q"] == "hello world"
|
||||
assert call_params["api_key"] == "fake-key-123"
|
||||
mock_instance.get_dict.assert_called_once()
|
||||
assert "organic_results" in result
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_returns_string_result(self, monkeypatch):
|
||||
monkeypatch.setenv("SERPAPI_API_KEY", "key")
|
||||
|
||||
mock_instance = MagicMock()
|
||||
mock_instance.get_dict.return_value = {"answer": 42}
|
||||
|
||||
with patch("timmy.research_tools.GoogleSearch", return_value=mock_instance):
|
||||
from timmy.research_tools import google_web_search
|
||||
|
||||
result = await google_web_search("query")
|
||||
|
||||
assert isinstance(result, str)
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_passes_query_to_params(self, monkeypatch):
|
||||
monkeypatch.setenv("SERPAPI_API_KEY", "k")
|
||||
|
||||
mock_instance = MagicMock()
|
||||
mock_instance.get_dict.return_value = {}
|
||||
|
||||
with patch("timmy.research_tools.GoogleSearch", return_value=mock_instance) as mock_cls:
|
||||
from timmy.research_tools import google_web_search
|
||||
|
||||
await google_web_search("specific search term")
|
||||
|
||||
params = mock_cls.call_args[0][0]
|
||||
assert params["q"] == "specific search term"
|
||||
|
||||
|
||||
# ── get_llm_client ────────────────────────────────────────────────────────────
|
||||
|
||||
|
||||
class TestGetLLMClient:
|
||||
"""get_llm_client returns a client with a completion method."""
|
||||
|
||||
def test_returns_non_none_client(self):
|
||||
from timmy.research_tools import get_llm_client
|
||||
|
||||
client = get_llm_client()
|
||||
assert client is not None
|
||||
|
||||
def test_client_has_completion_method(self):
|
||||
from timmy.research_tools import get_llm_client
|
||||
|
||||
client = get_llm_client()
|
||||
assert hasattr(client, "completion")
|
||||
assert callable(client.completion)
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_completion_returns_object_with_text(self):
|
||||
from timmy.research_tools import get_llm_client
|
||||
|
||||
client = get_llm_client()
|
||||
result = await client.completion("test prompt", max_tokens=100)
|
||||
assert hasattr(result, "text")
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_completion_text_is_string(self):
|
||||
from timmy.research_tools import get_llm_client
|
||||
|
||||
client = get_llm_client()
|
||||
result = await client.completion("any prompt", max_tokens=50)
|
||||
assert isinstance(result.text, str)
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_completion_text_contains_prompt(self):
|
||||
from timmy.research_tools import get_llm_client
|
||||
|
||||
client = get_llm_client()
|
||||
result = await client.completion("my prompt", max_tokens=50)
|
||||
assert "my prompt" in result.text
|
||||
|
||||
def test_each_call_returns_new_client(self):
|
||||
from timmy.research_tools import get_llm_client
|
||||
|
||||
client_a = get_llm_client()
|
||||
client_b = get_llm_client()
|
||||
# Both should be functional clients (not necessarily the same instance)
|
||||
assert hasattr(client_a, "completion")
|
||||
assert hasattr(client_b, "completion")
|
||||
269
tests/unit/test_self_correction.py
Normal file
269
tests/unit/test_self_correction.py
Normal file
@@ -0,0 +1,269 @@
|
||||
"""Unit tests for infrastructure.self_correction."""
|
||||
|
||||
import os
|
||||
import tempfile
|
||||
from pathlib import Path
|
||||
from unittest.mock import patch
|
||||
|
||||
import pytest
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Fixtures
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def _isolated_db(tmp_path, monkeypatch):
|
||||
"""Point the self-correction module at a fresh temp database per test."""
|
||||
import infrastructure.self_correction as sc_mod
|
||||
|
||||
# Reset the cached path so each test gets a clean DB
|
||||
sc_mod._DB_PATH = tmp_path / "self_correction.db"
|
||||
yield
|
||||
sc_mod._DB_PATH = None
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# log_self_correction
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class TestLogSelfCorrection:
|
||||
def test_returns_event_id(self):
|
||||
from infrastructure.self_correction import log_self_correction
|
||||
|
||||
eid = log_self_correction(
|
||||
source="test",
|
||||
original_intent="Do X",
|
||||
detected_error="ValueError: bad input",
|
||||
correction_strategy="Try Y instead",
|
||||
final_outcome="Y succeeded",
|
||||
)
|
||||
assert isinstance(eid, str)
|
||||
assert len(eid) == 36 # UUID format
|
||||
|
||||
def test_derives_error_type_from_error_string(self):
|
||||
from infrastructure.self_correction import get_corrections, log_self_correction
|
||||
|
||||
log_self_correction(
|
||||
source="test",
|
||||
original_intent="Connect",
|
||||
detected_error="ConnectionRefusedError: port 80",
|
||||
correction_strategy="Use port 8080",
|
||||
final_outcome="ok",
|
||||
)
|
||||
rows = get_corrections(limit=1)
|
||||
assert rows[0]["error_type"] == "ConnectionRefusedError"
|
||||
|
||||
def test_explicit_error_type_preserved(self):
|
||||
from infrastructure.self_correction import get_corrections, log_self_correction
|
||||
|
||||
log_self_correction(
|
||||
source="test",
|
||||
original_intent="Run task",
|
||||
detected_error="Some weird error",
|
||||
correction_strategy="Fix it",
|
||||
final_outcome="done",
|
||||
error_type="CustomError",
|
||||
)
|
||||
rows = get_corrections(limit=1)
|
||||
assert rows[0]["error_type"] == "CustomError"
|
||||
|
||||
def test_task_id_stored(self):
|
||||
from infrastructure.self_correction import get_corrections, log_self_correction
|
||||
|
||||
log_self_correction(
|
||||
source="test",
|
||||
original_intent="intent",
|
||||
detected_error="err",
|
||||
correction_strategy="strat",
|
||||
final_outcome="outcome",
|
||||
task_id="task-abc-123",
|
||||
)
|
||||
rows = get_corrections(limit=1)
|
||||
assert rows[0]["task_id"] == "task-abc-123"
|
||||
|
||||
def test_outcome_status_stored(self):
|
||||
from infrastructure.self_correction import get_corrections, log_self_correction
|
||||
|
||||
log_self_correction(
|
||||
source="test",
|
||||
original_intent="i",
|
||||
detected_error="e",
|
||||
correction_strategy="s",
|
||||
final_outcome="o",
|
||||
outcome_status="failed",
|
||||
)
|
||||
rows = get_corrections(limit=1)
|
||||
assert rows[0]["outcome_status"] == "failed"
|
||||
|
||||
def test_long_strings_truncated(self):
|
||||
from infrastructure.self_correction import get_corrections, log_self_correction
|
||||
|
||||
long = "x" * 3000
|
||||
log_self_correction(
|
||||
source="test",
|
||||
original_intent=long,
|
||||
detected_error=long,
|
||||
correction_strategy=long,
|
||||
final_outcome=long,
|
||||
)
|
||||
rows = get_corrections(limit=1)
|
||||
assert len(rows[0]["original_intent"]) <= 2000
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
# get_corrections
# ---------------------------------------------------------------------------


class TestGetCorrections:
    def test_empty_db_returns_empty_list(self):
        from infrastructure.self_correction import get_corrections

        assert get_corrections() == []

    def test_returns_newest_first(self):
        from infrastructure.self_correction import get_corrections, log_self_correction

        for i in range(3):
            log_self_correction(
                source="test",
                original_intent=f"intent {i}",
                detected_error="err",
                correction_strategy="fix",
                final_outcome="done",
                error_type=f"Type{i}",
            )
        rows = get_corrections(limit=10)
        assert len(rows) == 3
        # Newest first — Type2 should appear before Type0
        types = [r["error_type"] for r in rows]
        assert types.index("Type2") < types.index("Type0")

    def test_limit_respected(self):
        from infrastructure.self_correction import get_corrections, log_self_correction

        for _ in range(5):
            log_self_correction(
                source="test",
                original_intent="i",
                detected_error="e",
                correction_strategy="s",
                final_outcome="o",
            )
        rows = get_corrections(limit=3)
        assert len(rows) == 3

# ---------------------------------------------------------------------------
# get_patterns
# ---------------------------------------------------------------------------


class TestGetPatterns:
    def test_empty_db_returns_empty_list(self):
        from infrastructure.self_correction import get_patterns

        assert get_patterns() == []

    def test_counts_by_error_type(self):
        from infrastructure.self_correction import get_patterns, log_self_correction

        for _ in range(3):
            log_self_correction(
                source="test",
                original_intent="i",
                detected_error="e",
                correction_strategy="s",
                final_outcome="o",
                error_type="TimeoutError",
            )
        log_self_correction(
            source="test",
            original_intent="i",
            detected_error="e",
            correction_strategy="s",
            final_outcome="o",
            error_type="ValueError",
        )
        patterns = get_patterns(top_n=10)
        by_type = {p["error_type"]: p for p in patterns}
        assert by_type["TimeoutError"]["count"] == 3
        assert by_type["ValueError"]["count"] == 1

    def test_success_vs_failed_counts(self):
        from infrastructure.self_correction import get_patterns, log_self_correction

        log_self_correction(
            source="test", original_intent="i", detected_error="e",
            correction_strategy="s", final_outcome="o",
            error_type="Foo", outcome_status="success",
        )
        log_self_correction(
            source="test", original_intent="i", detected_error="e",
            correction_strategy="s", final_outcome="o",
            error_type="Foo", outcome_status="failed",
        )
        patterns = get_patterns(top_n=5)
        foo = next(p for p in patterns if p["error_type"] == "Foo")
        assert foo["success_count"] == 1
        assert foo["failed_count"] == 1

    def test_ordered_by_count_desc(self):
        from infrastructure.self_correction import get_patterns, log_self_correction

        for _ in range(2):
            log_self_correction(
                source="t", original_intent="i", detected_error="e",
                correction_strategy="s", final_outcome="o", error_type="Rare",
            )
        for _ in range(5):
            log_self_correction(
                source="t", original_intent="i", detected_error="e",
                correction_strategy="s", final_outcome="o", error_type="Common",
            )
        patterns = get_patterns(top_n=5)
        assert patterns[0]["error_type"] == "Common"

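`get_patterns` groups corrections by `error_type`, counts successes and failures per type, and orders by frequency. A sketch of the aggregation these tests imply, assuming SQLite storage and a `self_corrections` table (table and column names are assumptions; the returned keys match the assertions):

```python
import sqlite3

# Hypothetical query behind get_patterns(top_n=...) — schema names assumed.
_PATTERNS_SQL = """
SELECT
    error_type,
    COUNT(*) AS count,
    SUM(CASE WHEN outcome_status = 'success' THEN 1 ELSE 0 END) AS success_count,
    SUM(CASE WHEN outcome_status = 'failed'  THEN 1 ELSE 0 END) AS failed_count
FROM self_corrections
GROUP BY error_type
ORDER BY count DESC
LIMIT ?
"""


def get_patterns_sketch(db_path: str, top_n: int = 5) -> list[dict]:
    with sqlite3.connect(db_path) as conn:
        conn.row_factory = sqlite3.Row
        return [dict(row) for row in conn.execute(_PATTERNS_SQL, (top_n,))]
```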
# ---------------------------------------------------------------------------
# get_stats
# ---------------------------------------------------------------------------


class TestGetStats:
    def test_empty_db_returns_zeroes(self):
        from infrastructure.self_correction import get_stats

        stats = get_stats()
        assert stats["total"] == 0
        assert stats["success_rate"] == 0

    def test_counts_outcomes(self):
        from infrastructure.self_correction import get_stats, log_self_correction

        log_self_correction(
            source="t", original_intent="i", detected_error="e",
            correction_strategy="s", final_outcome="o", outcome_status="success",
        )
        log_self_correction(
            source="t", original_intent="i", detected_error="e",
            correction_strategy="s", final_outcome="o", outcome_status="failed",
        )
        stats = get_stats()
        assert stats["total"] == 2
        assert stats["success_count"] == 1
        assert stats["failed_count"] == 1
        assert stats["success_rate"] == 50

    def test_success_rate_100_when_all_succeed(self):
        from infrastructure.self_correction import get_stats, log_self_correction

        for _ in range(4):
            log_self_correction(
                source="t", original_intent="i", detected_error="e",
                correction_strategy="s", final_outcome="o", outcome_status="success",
            )
        stats = get_stats()
        assert stats["success_rate"] == 100
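`get_stats` reports totals, per-status counts, and `success_rate` as an integer percentage: 0 on an empty DB, 50 for one success plus one failure, 100 when every correction succeeds. A small sketch of that arithmetic over already-fetched rows (a hypothetical helper; the real function presumably queries the DB directly):

```python
def compute_stats(rows: list[dict]) -> dict:
    # Field names and the integer-percentage rate come from the assertions;
    # operating on in-memory rows rather than SQL is an assumption.
    total = len(rows)
    success = sum(1 for r in rows if r.get("outcome_status") == "success")
    failed = sum(1 for r in rows if r.get("outcome_status") == "failed")
    return {
        "total": total,
        "success_count": success,
        "failed_count": failed,
        "success_rate": round(success / total * 100) if total else 0,
    }
```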
@@ -336,7 +336,12 @@ async def test_check_agent_health_no_token():
     """Returns idle status gracefully when Gitea token is absent."""
     from timmy.vassal.agent_health import check_agent_health

-    status = await check_agent_health("claude")
+    mock_settings = MagicMock()
+    mock_settings.gitea_enabled = True
+    mock_settings.gitea_token = ""  # explicitly no token → early return
+
+    with patch("config.settings", mock_settings):
+        status = await check_agent_health("claude")
     # Should not raise; returns idle (no active issues discovered)
     assert isinstance(status, AgentStatus)
     assert status.agent == "claude"
@@ -478,7 +483,12 @@ async def test_check_agent_health_fetch_exception(monkeypatch):
 async def test_get_full_health_report_returns_both_agents():
     from timmy.vassal.agent_health import get_full_health_report

-    report = await get_full_health_report()
+    mock_settings = MagicMock()
+    mock_settings.gitea_enabled = False  # disabled → no network calls
+    mock_settings.gitea_token = ""
+
+    with patch("config.settings", mock_settings):
+        report = await get_full_health_report()
     agent_names = {a.agent for a in report.agents}
     assert "claude" in agent_names
     assert "kimi" in agent_names
@@ -488,7 +498,12 @@ async def test_get_full_health_report_returns_both_agents():
 async def test_get_full_health_report_structure():
     from timmy.vassal.agent_health import get_full_health_report

-    report = await get_full_health_report()
+    mock_settings = MagicMock()
+    mock_settings.gitea_enabled = False  # disabled → no network calls
+    mock_settings.gitea_token = ""
+
+    with patch("config.settings", mock_settings):
+        report = await get_full_health_report()
     assert isinstance(report, AgentHealthReport)
     assert len(report.agents) == 2

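Both hunks lean on the same behaviour: with Gitea disabled or no token configured, `check_agent_health` should return an idle status instead of raising. A hedged sketch of that guard — `AgentStatusSketch` is a stand-in with assumed fields; only the no-token early return is what the tests assert:

```python
from dataclasses import dataclass, field


@dataclass
class AgentStatusSketch:
    """Stand-in for agent_health.AgentStatus — field names are assumptions."""
    agent: str
    state: str = "idle"
    active_issues: list = field(default_factory=list)


async def check_agent_health_sketch(agent: str, settings) -> AgentStatusSketch:
    # Assumed guard: no Gitea or an empty token means no HTTP lookup — report
    # idle, which is what test_check_agent_health_no_token expects (no raise).
    if not getattr(settings, "gitea_enabled", False) or not getattr(settings, "gitea_token", ""):
        return AgentStatusSketch(agent=agent)
    # ...the real implementation would query Gitea for the agent's open work.
    return AgentStatusSketch(agent=agent, state="active")
```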
@@ -10,6 +10,29 @@ from timmy.vassal.orchestration_loop import VassalCycleRecord, VassalOrchestrator

 pytestmark = pytest.mark.unit

+
+# ---------------------------------------------------------------------------
+# Helpers — prevent real network calls under xdist parallel execution
+# ---------------------------------------------------------------------------
+
+
+def _disabled_settings() -> MagicMock:
+    """Settings mock with Gitea disabled — backlog + agent health skip HTTP."""
+    s = MagicMock()
+    s.gitea_enabled = False
+    s.gitea_token = ""
+    s.vassal_stuck_threshold_minutes = 120
+    return s
+
+
+def _fast_snapshot() -> MagicMock:
+    """Minimal SystemSnapshot mock — no disk warnings, Ollama not probed."""
+    snap = MagicMock()
+    snap.warnings = []
+    snap.disk.percent_used = 0.0
+    return snap
+
+
 # ---------------------------------------------------------------------------
 # VassalCycleRecord
 # ---------------------------------------------------------------------------
@@ -74,7 +97,15 @@ async def test_run_cycle_completes_without_services():
     clear_dispatch_registry()
     orch = VassalOrchestrator(cycle_interval=300)

-    record = await orch.run_cycle()
+    with (
+        patch("config.settings", _disabled_settings()),
+        patch(
+            "timmy.vassal.house_health.get_system_snapshot",
+            new_callable=AsyncMock,
+            return_value=_fast_snapshot(),
+        ),
+    ):
+        record = await orch.run_cycle()

     assert isinstance(record, VassalCycleRecord)
     assert record.cycle_id == 1
@@ -95,8 +126,16 @@ async def test_run_cycle_increments_cycle_count():
     clear_dispatch_registry()
     orch = VassalOrchestrator()

-    await orch.run_cycle()
-    await orch.run_cycle()
+    with (
+        patch("config.settings", _disabled_settings()),
+        patch(
+            "timmy.vassal.house_health.get_system_snapshot",
+            new_callable=AsyncMock,
+            return_value=_fast_snapshot(),
+        ),
+    ):
+        await orch.run_cycle()
+        await orch.run_cycle()

     assert orch.cycle_count == 2
     assert len(orch.history) == 2
@@ -109,7 +148,15 @@ async def test_get_status_after_cycle():
     clear_dispatch_registry()
     orch = VassalOrchestrator()

-    await orch.run_cycle()
+    with (
+        patch("config.settings", _disabled_settings()),
+        patch(
+            "timmy.vassal.house_health.get_system_snapshot",
+            new_callable=AsyncMock,
+            return_value=_fast_snapshot(),
+        ),
+    ):
+        await orch.run_cycle()
     status = orch.get_status()

     assert status["cycle_count"] == 1
@@ -183,10 +230,18 @@ async def test_run_cycle_records_backlog_error():
     clear_dispatch_registry()
     orch = VassalOrchestrator()

-    with patch(
-        "timmy.vassal.backlog.fetch_open_issues",
-        new_callable=AsyncMock,
-        side_effect=ConnectionError("gitea unreachable"),
+    with (
+        patch(
+            "timmy.vassal.backlog.fetch_open_issues",
+            new_callable=AsyncMock,
+            side_effect=ConnectionError("gitea unreachable"),
+        ),
+        patch("config.settings", _disabled_settings()),
+        patch(
+            "timmy.vassal.house_health.get_system_snapshot",
+            new_callable=AsyncMock,
+            return_value=_fast_snapshot(),
+        ),
     ):
         record = await orch.run_cycle()

@@ -202,10 +257,18 @@ async def test_run_cycle_records_agent_health_error():
     clear_dispatch_registry()
     orch = VassalOrchestrator()

-    with patch(
-        "timmy.vassal.agent_health.get_full_health_report",
-        new_callable=AsyncMock,
-        side_effect=RuntimeError("health check failed"),
+    with (
+        patch(
+            "timmy.vassal.agent_health.get_full_health_report",
+            new_callable=AsyncMock,
+            side_effect=RuntimeError("health check failed"),
+        ),
+        patch("config.settings", _disabled_settings()),
+        patch(
+            "timmy.vassal.house_health.get_system_snapshot",
+            new_callable=AsyncMock,
+            return_value=_fast_snapshot(),
+        ),
     ):
         record = await orch.run_cycle()

@@ -221,10 +284,13 @@ async def test_run_cycle_records_house_health_error():
     clear_dispatch_registry()
     orch = VassalOrchestrator()

-    with patch(
-        "timmy.vassal.house_health.get_system_snapshot",
-        new_callable=AsyncMock,
-        side_effect=OSError("disk check failed"),
+    with (
+        patch(
+            "timmy.vassal.house_health.get_system_snapshot",
+            new_callable=AsyncMock,
+            side_effect=OSError("disk check failed"),
+        ),
+        patch("config.settings", _disabled_settings()),
     ):
         record = await orch.run_cycle()

@@ -255,7 +321,10 @@ async def test_run_cycle_counts_dispatched_issues():
         patch(
             "timmy.vassal.backlog.fetch_open_issues",
             new_callable=AsyncMock,
-            return_value=[{"number": i, "title": f"Issue {i}", "labels": [], "assignees": []} for i in range(1, 4)],
+            return_value=[
+                {"number": i, "title": f"Issue {i}", "labels": [], "assignees": []}
+                for i in range(1, 4)
+            ],
         ),
         patch(
             "timmy.vassal.backlog.triage_issues",
@@ -291,7 +360,10 @@ async def test_run_cycle_respects_max_dispatch_cap():
         patch(
             "timmy.vassal.backlog.fetch_open_issues",
             new_callable=AsyncMock,
-            return_value=[{"number": i, "title": f"Issue {i}", "labels": [], "assignees": []} for i in range(1, 6)],
+            return_value=[
+                {"number": i, "title": f"Issue {i}", "labels": [], "assignees": []}
+                for i in range(1, 6)
+            ],
         ),
         patch(
             "timmy.vassal.backlog.triage_issues",
@@ -301,6 +373,12 @@
             "timmy.vassal.dispatch.dispatch_issue",
             new_callable=AsyncMock,
         ),
+        patch("config.settings", _disabled_settings()),
+        patch(
+            "timmy.vassal.house_health.get_system_snapshot",
+            new_callable=AsyncMock,
+            return_value=_fast_snapshot(),
+        ),
     ):
         record = await orch.run_cycle()

@@ -320,6 +398,8 @@ def test_resolve_interval_uses_explicit_value():

 def test_resolve_interval_falls_back_to_300():
     orch = VassalOrchestrator()
-    with patch("timmy.vassal.orchestration_loop.VassalOrchestrator._resolve_interval") as mock_resolve:
+    with patch(
+        "timmy.vassal.orchestration_loop.VassalOrchestrator._resolve_interval"
+    ) as mock_resolve:
         mock_resolve.return_value = 300.0
         assert orch._resolve_interval() == 300.0