Compare commits
5 Commits: ezra/lazar...timmy/code

Commits: c07b6b7d1b, 0c723199ec, 317140efcf, 2b308f300a, 9146bcb4b2
COST_SAVING.md (new file, 41 lines)
@@ -0,0 +1,41 @@
# Sovereign Efficiency: Local-First & Cost Saving Guide

This guide outlines the strategy for eliminating waste and optimizing flow within the Timmy Foundation ecosystem.

## 1. Smart Model Routing (SMR)

**Goal:** Use the right tool for the job. Don't use a 14B or 70B model to say "Hello" or "Task complete."

- **Action:** Enable `smart_model_routing` in `config.yaml`.
- **Logic:**
  - Simple acknowledgments and status updates -> **Gemma 2B / Phi-3 Mini** (Local).
  - Complex reasoning and coding -> **Hermes 14B / Llama 3 70B** (Local).
  - Fortress-grade synthesis -> **Claude 3.5 Sonnet / Gemini 1.5 Pro** (Cloud - Emergency Only).
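The routing decision can be sketched as a small classifier. This is an illustrative sketch, not the production router: the tier names and model IDs are assumptions, while the thresholds mirror the `max_simple_chars: 400` and `max_simple_words: 75` values set in the `config.yaml` diff in this compare.

```python
# Hypothetical sketch of the SMR decision; the real logic lives behind
# `smart_model_routing` in config.yaml.
MAX_SIMPLE_CHARS = 400   # mirrors max_simple_chars in config.yaml
MAX_SIMPLE_WORDS = 75    # mirrors max_simple_words in config.yaml

TIERS = {
    "reflex":    "gemma2:2b",         # local: acknowledgments, status updates
    "reasoning": "hermes:14b",        # local: coding and general reasoning
    "synthesis": "claude-3-5-sonnet", # cloud: emergency only
}

def route(prompt: str, fortress_grade: bool = False) -> str:
    """Pick a model tier based on prompt size and an escalation flag."""
    if fortress_grade:
        return TIERS["synthesis"]
    if len(prompt) <= MAX_SIMPLE_CHARS and len(prompt.split()) <= MAX_SIMPLE_WORDS:
        return TIERS["reflex"]
    return TIERS["reasoning"]
```

Short prompts stay on the cheap local tier; anything over either threshold escalates to the reasoning tier.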

## 2. Context Compression

**Goal:** Keep the KV cache lean. Long sessions shouldn't slow down the "Thought Stream."

- **Action:** Enable `compression` in `config.yaml`.
- **Threshold:** Set to `0.5` to trigger summarization when the context is half full.
- **Protect Last N:** Keep the last 20 turns in raw format for immediate coherence.
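A minimal sketch of the trigger, assuming "compression" means summarizing everything older than the protected tail; `summarize` is a stand-in for whatever summary model the config points at.

```python
# Illustrative trigger only; threshold and protect_last_n mirror config.yaml.
THRESHOLD = 0.5       # summarize once the context window is half full
PROTECT_LAST_N = 20   # keep the newest 20 turns verbatim

def maybe_compress(turns, used_tokens, max_tokens, summarize):
    """Summarize older turns once token usage crosses the threshold."""
    if used_tokens / max_tokens < THRESHOLD:
        return turns  # plenty of headroom, do nothing
    old, recent = turns[:-PROTECT_LAST_N], turns[-PROTECT_LAST_N:]
    if not old:
        return turns  # nothing outside the protected tail
    return [summarize(old)] + recent
```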

## 3. Parallel Symbolic Execution (PSE) Optimization

**Goal:** Reduce redundant reasoning cycles in The Nexus.

- **Action:** The Nexus now uses **Adaptive Reasoning Frequency**. If the world stability is high (>0.9), reasoning cycles are halved.
- **Benefit:** Reduces CPU/GPU load on the local harness, leaving more headroom for inference.
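The adaptive frequency rule reduces to a one-branch function. The 0.9 cutoff comes from the text above; the baseline cycle rate is a made-up number for illustration.

```python
# Sketch of Adaptive Reasoning Frequency; BASE_CYCLES_PER_MIN is hypothetical.
BASE_CYCLES_PER_MIN = 12
STABILITY_CUTOFF = 0.9

def reasoning_cycles(stability: float) -> int:
    """Cycles per minute for the current world stability."""
    if stability > STABILITY_CUTOFF:
        return BASE_CYCLES_PER_MIN // 2  # stable world: reason half as often
    return BASE_CYCLES_PER_MIN
```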

## 4. L402 Cost Transparency

**Goal:** Treat compute as a finite resource.

- **Action:** Use the **Sovereign Health HUD** in The Nexus to monitor L402 challenges.
- **Metric:** Track "Sats per Thought" to identify which agents are "token-heavy."
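"Sats per Thought" is just a cost ratio. The metric name comes from the HUD; the arithmetic here is an illustrative assumption about how it is computed.

```python
def sats_per_thought(total_sats_paid: int, thoughts_completed: int) -> float:
    """Average L402 cost per completed reasoning step (0 if nothing ran)."""
    if thoughts_completed == 0:
        return 0.0
    return total_sats_paid / thoughts_completed
```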

## 5. Waste Elimination (Ghost Triage)

**Goal:** Remove stale state.

- **Action:** Run the `triage_sprint.ts` script weekly to assign or archive stale issues.
- **Action:** Use `hermes --flush-memories` to clear outdated context that no longer serves the current mission.

---

*Sovereignty is not just about ownership; it is about stewardship of resources.*
FRONTIER_LOCAL.md (new file, 37 lines)
@@ -0,0 +1,37 @@

# The Frontier Local Agenda: Technical Standards v1.0

This document defines the "Frontier Local" agenda — the technical strategy for achieving sovereign, high-performance intelligence on consumer hardware.

## 1. The Multi-Layered Mind (MLM)

We do not rely on a single "God Model." We use a hierarchy of local intelligence:

- **Reflex Layer (Gemma 2B):** Instantaneous tactical decisions, input classification, and simple acknowledgments. Latency: <100ms.
- **Reasoning Layer (Hermes 14B / Llama 3 8B):** General-purpose problem solving, coding, and tool use. Latency: <1s.
- **Synthesis Layer (Llama 3 70B / Qwen 72B):** Deep architectural planning, creative synthesis, and complex debugging. Latency: <5s.

## 2. Local-First RAG (Retrieval Augmented Generation)

Sovereignty requires that your memories stay on your disk.

- **Embedding:** Use `nomic-embed-text` or `all-minilm` locally via Ollama.
- **Vector Store:** Use a local instance of ChromaDB or LanceDB.
- **Privacy:** Zero data leaves the local network for indexing or retrieval.
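The flow can be shown end to end without any network at all. This toy uses a bag-of-words "embedding" and cosine similarity as stand-ins for the real stack (nomic-embed-text via Ollama feeding ChromaDB); everything stays in-process, which is the point of local-first RAG.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy stand-in for a real embedding model: bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, memories: list[str], k: int = 1) -> list[str]:
    """Rank stored memories against the query, entirely on the local machine."""
    q = embed(query)
    ranked = sorted(memories, key=lambda m: cosine(q, embed(m)), reverse=True)
    return ranked[:k]
```

Swapping `embed` for a real local model and the list for a ChromaDB collection preserves the same shape.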

## 3. Speculative Decoding

Where supported by the harness (e.g., llama.cpp), use Gemma 2B as a draft model for larger Hermes/Llama models to achieve 2x-3x speedups in token generation.
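The draft/verify loop behind the speedup can be illustrated with toy models over token lists. This is a conceptual sketch only; real implementations (e.g., in llama.cpp) verify drafted tokens against the target model's distribution in a single batched pass.

```python
def speculative_step(draft_model, target_model, prefix, k=4):
    """Draft k tokens with the small model; keep the longest prefix the
    large model agrees with, then let the large model emit one token."""
    drafted, ctx = [], list(prefix)
    for _ in range(k):                 # cheap drafting pass
        t = draft_model(ctx)
        drafted.append(t)
        ctx.append(t)
    ctx = list(prefix)
    for t in drafted:                  # verification pass
        if target_model(ctx) == t:     # target would have emitted the same token
            ctx.append(t)
        else:
            break
    ctx.append(target_model(ctx))      # target supplies the next token itself
    return ctx
```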

## 4. The "Gemma Scout" Protocol

Gemma 2B is our "Scout." It pre-processes every user request to:

1. Detect PII (Personally Identifiable Information) for redaction.
2. Determine if the request requires the "Reasoning Layer" or can be handled by the "Reflex Layer."
3. Extract keywords for local memory retrieval.
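The three steps can be sketched as one pre-processing pass. The regex, word-count cutoff, and stopword list are illustrative assumptions, not the production Scout prompt.

```python
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")   # toy PII pattern
STOPWORDS = {"the", "a", "an", "is", "to", "of", "and", "please"}

def scout(request: str) -> dict:
    """Redact obvious PII, pick a layer, and extract retrieval keywords."""
    redacted = EMAIL.sub("[REDACTED]", request)
    layer = "reflex" if len(redacted.split()) <= 10 else "reasoning"
    keywords = [w for w in re.findall(r"[a-z0-9]+", redacted.lower())
                if w not in STOPWORDS]
    return {"redacted": redacted, "layer": layer, "keywords": keywords}
```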

## 5. Sovereign Verification (The "No Phone Home" Proof)

We implement an automated audit protocol to verify that no external API calls are made during core reasoning. This is the "Sovereign Audit" layer.

## 6. Local Tool Orchestration (MCP)

The Model Context Protocol (MCP) is used to connect the local mind to local hardware (file system, local databases, home automation) without cloud intermediaries.

---

*Intelligence is a utility. Sovereignty is a right. The Frontier is Local.*
SOVEREIGN_AUDIT.md (new file, 23 lines)
@@ -0,0 +1,23 @@

# Sovereign Audit: The "No Phone Home" Protocol

This document defines the audit standards for verifying that an AI agent is truly sovereign and local-first.

## 1. Network Isolation

- **Standard:** The core reasoning engine (llama.cpp, Ollama) must function without an active internet connection.
- **Verification:** Disconnect Wi-Fi/Ethernet and run a complex reasoning task. If it fails, sovereignty is compromised.

## 2. API Leakage Audit

- **Standard:** No metadata, prompts, or context should be sent to external providers (OpenAI, Anthropic, Google) unless explicitly overridden by the user for "Emergency Cloud" use.
- **Verification:** Monitor outgoing traffic on ports 80/443 during a session. Core reasoning should only hit `localhost` or local network IPs.
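The allowlist check an audit tool would apply to captured request URLs is small enough to show directly. This is a hedged sketch of one plausible check, using only the standard library; the function name is illustrative.

```python
import ipaddress
from urllib.parse import urlparse

def is_local(url: str) -> bool:
    """True if a request stays on localhost or a private/loopback address."""
    host = urlparse(url).hostname or ""
    if host == "localhost":
        return True
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return False          # public hostname: treat as a potential leak
    return ip.is_private or ip.is_loopback
```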

## 3. Data Residency

- **Standard:** All "Memories" (Vector DB, Chat History, SOUL.md) must reside on the user's physical disk.
- **Verification:** Check the `~/.timmy/memories` and `~/.timmy/config` directories. No data should be stored in cloud-managed databases.

## 4. Model Provenance

- **Standard:** Models must be downloaded as GGUF/Safetensors and verified via SHA-256 hash.
- **Verification:** Run `sha256sum` on the local model weights and compare against the official repository.
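For scripted verification, the `sha256sum` check is a few lines of standard-library Python; the expected hash would come from the model's official repository.

```python
import hashlib

def verify_weights(path: str, expected_sha256: str, chunk: int = 1 << 20) -> bool:
    """Hash the weights file in chunks and compare against the published digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest() == expected_sha256.lower()
```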

---

*If you don't own the weights, you don't own the mind.*
@@ -5,7 +5,7 @@ set -uo pipefail
export PATH="/opt/homebrew/bin:$HOME/.local/bin:$HOME/.hermes/bin:/usr/local/bin:$PATH"

LOG="$HOME/.hermes/logs/claudemax-watchdog.log"
-GITEA_URL="https://forge.alexanderwhitestone.com"
+GITEA_URL="http://143.198.27.163:3000"
GITEA_TOKEN=$(tr -d '[:space:]' < "$HOME/.hermes/gitea_token_vps" 2>/dev/null || true)
REPO_API="$GITEA_URL/api/v1/repos/Timmy_Foundation/the-nexus"
MIN_OPEN_ISSUES=10
@@ -9,7 +9,7 @@ THRESHOLD_HOURS="${1:-2}"
THRESHOLD_SECS=$((THRESHOLD_HOURS * 3600))
LOG_DIR="$HOME/.hermes/logs"
LOG_FILE="$LOG_DIR/deadman.log"
-GITEA_URL="https://forge.alexanderwhitestone.com"
+GITEA_URL="http://143.198.27.163:3000"
GITEA_TOKEN=$(cat "$HOME/.hermes/gitea_token_vps" 2>/dev/null || echo "")
TELEGRAM_TOKEN=$(cat "$HOME/.config/telegram/special_bot" 2>/dev/null || echo "")
TELEGRAM_CHAT="-1003664764329"
@@ -25,35 +25,10 @@ else
fi

# ── Config ──
-GITEA_TOKEN=$(cat ~/.hermes/gitea_token_vps 2>/dev/null || echo "")
-GITEA_API="https://forge.alexanderwhitestone.com/api/v1"
-
-# Resolve Tailscale IPs dynamically; fallback to env vars
-SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
-RESOLVER="${SCRIPT_DIR}/../tools/tailscale_ip_resolver.py"
-if [ ! -f "$RESOLVER" ]; then
-  RESOLVER="/root/wizards/ezra/tools/tailscale_ip_resolver.py"
-fi
-
-resolve_host() {
-  local default_ip="$1"
-  if [ -n "$TAILSCALE_IP" ]; then
-    echo "root@${TAILSCALE_IP}"
-    return
-  fi
-  if [ -f "$RESOLVER" ]; then
-    local ip
-    ip=$(python3 "$RESOLVER" 2>/dev/null)
-    if [ -n "$ip" ]; then
-      echo "root@${ip}"
-      return
-    fi
-  fi
-  echo "root@${default_ip}"
-}
-
-EZRA_HOST=$(resolve_host "143.198.27.163")
-BEZALEL_HOST="root@${BEZALEL_TAILSCALE_IP:-67.205.155.108}"
+GITEA_TOKEN=$(cat ~/.hermes/gitea_token_vps 2>/dev/null)
+GITEA_API="http://143.198.27.163:3000/api/v1"
+EZRA_HOST="root@143.198.27.163"
+BEZALEL_HOST="root@67.205.155.108"
SSH_OPTS="-o ConnectTimeout=4 -o StrictHostKeyChecking=no -o BatchMode=yes"

ANY_DOWN=0
@@ -179,7 +154,7 @@ fi
print_line "Timmy" "$TIMMY_STATUS" "$TIMMY_MODEL" "$TIMMY_ACTIVITY"

-# ── 2. Ezra ──
+# ── 2. Ezra (VPS 143.198.27.163) ──
EZRA_STATUS="DOWN"
EZRA_MODEL="hermes-ezra"
EZRA_ACTIVITY=""
@@ -211,7 +186,7 @@ fi
print_line "Ezra" "$EZRA_STATUS" "$EZRA_MODEL" "$EZRA_ACTIVITY"

-# ── 3. Bezalel ──
+# ── 3. Bezalel (VPS 67.205.155.108) ──
BEZ_STATUS="DOWN"
BEZ_MODEL="hermes-bezalel"
BEZ_ACTIVITY=""
@@ -271,7 +246,7 @@ if [ -n "$GITEA_VER" ]; then
  GITEA_STATUS="UP"
  VER=$(python3 -c "import json; print(json.loads('''${GITEA_VER}''').get('version','?'))" 2>/dev/null)
  GITEA_MODEL="gitea v${VER}"
-  GITEA_ACTIVITY="forge.alexanderwhitestone.com"
+  GITEA_ACTIVITY="143.198.27.163:3000"
else
  GITEA_STATUS="DOWN"
  GITEA_MODEL="gitea(unreachable)"
code-claw-delegation.md (new file, 91 lines)
@@ -0,0 +1,91 @@
# Code Claw delegation

Purpose:
- give the team a clean way to hand issues to `claw-code`
- let Code Claw work from Gitea instead of ad hoc local prompts
- keep queue state visible through labels and comments

## What it is

Code Claw is a separate local runtime from Hermes/OpenClaw.

Current lane:
- runtime: local patched `~/code-claw`
- backend: OpenRouter
- model: `qwen/qwen3.6-plus:free`
- Gitea identity: `claw-code`
- dispatch style: assign in Gitea; the heartbeat picks it up every 15 minutes

## Trigger methods

Either of these is enough:
- assign the issue to `claw-code`
- add the label `assigned-claw-code`

## Label lifecycle

- `assigned-claw-code` — queued
- `claw-code-in-progress` — picked up by the heartbeat
- `claw-code-done` — Code Claw completed a pass

## Repo coverage

Currently wired:
- `Timmy_Foundation/timmy-home`
- `Timmy_Foundation/timmy-config`
- `Timmy_Foundation/the-nexus`
- `Timmy_Foundation/hermes-agent`

## Operational flow

1. The team assigns an issue to `claw-code` or adds `assigned-claw-code`.
2. The launchd heartbeat runs every 15 minutes.
3. Timmy posts a pickup comment.
4. The worker clones the target repo.
5. The worker creates branch `claw-code/issue-<num>`.
6. The worker runs Code Claw against the issue context.
7. If work exists, the worker pushes and opens a PR.
8. The issue is marked `claw-code-done`.
9. A completion comment links the branch and PR.
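The label lifecycle amounts to a small transition table. The labels are the real ones from this document; the enforcement function is an illustrative sketch, not part of the heartbeat scripts.

```python
# Legal label transitions for the Code Claw lane; None means "no lane label yet".
TRANSITIONS = {
    None: {"assigned-claw-code"},                     # queued by the team
    "assigned-claw-code": {"claw-code-in-progress"},  # heartbeat pickup
    "claw-code-in-progress": {"claw-code-done"},      # pass completed
    "claw-code-done": set(),                          # terminal
}

def advance(current, nxt):
    """Move an issue to the next lane label, rejecting illegal jumps."""
    if nxt not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current!r} -> {nxt!r}")
    return nxt
```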

## Logs and files

Local files:
- heartbeat script: `~/.timmy/uniwizard/codeclaw_qwen_heartbeat.py`
- worker script: `~/.timmy/uniwizard/codeclaw_qwen_worker.py`
- launchd job: `~/Library/LaunchAgents/ai.timmy.codeclaw-qwen-heartbeat.plist`

Logs:
- heartbeat log: `/tmp/codeclaw-qwen-heartbeat.log`
- worker log: `/tmp/codeclaw-qwen-worker-<issue>.log`

## Best-fit work

Use Code Claw for:
- small code/config/doc issues
- repo hygiene
- isolated bugfixes
- narrow CI and `.gitignore` work
- quick issue-driven patches where a PR is the desired output

Do not use it first for:
- giant epics
- broad architecture KT
- local game embodiment tasks
- complex multi-repo archaeology

## Proof of life

Smoke-tested on:
- `Timmy_Foundation/timmy-config#232`

Observed:
- pickup comment posted
- branch `claw-code/issue-232` created
- PR opened by `claw-code`

## Notes

- Exact PR matching matters. Do not trust broad Gitea PR queries without post-filtering by branch.
- This lane is intentionally simple and issue-driven.
- Treat it like a specialized intern: useful, fast, and bounded.
config.yaml (29 changed lines)
@@ -20,7 +20,12 @@ terminal:
  modal_image: nikolaik/python-nodejs:python3.11-nodejs20
  daytona_image: nikolaik/python-nodejs:python3.11-nodejs20
  container_cpu: 1
  container_memory: 5120
+  container_embeddings:
+    provider: ollama
+    model: nomic-embed-text
+    base_url: http://localhost:11434/v1
  container_disk: 51200
  container_persistent: true
  docker_volumes: []
@@ -34,21 +39,26 @@ checkpoints:
  enabled: true
  max_snapshots: 50
compression:
-  enabled: false
+  enabled: true
  threshold: 0.5
  target_ratio: 0.2
  protect_last_n: 20
  summary_model: ''
  summary_provider: ''
  summary_base_url: ''
+  synthesis_model:
+    provider: custom
+    model: llama3:70b
+    base_url: http://localhost:8081/v1

smart_model_routing:
-  enabled: false
-  max_simple_chars: 200
-  max_simple_words: 35
+  enabled: true
+  max_simple_chars: 400
+  max_simple_words: 75
  cheap_model:
-    provider: ''
-    model: ''
-    base_url: ''
+    provider: 'ollama'
+    model: 'gemma2:2b'
+    base_url: 'http://localhost:11434/v1'
    api_key: ''
auxiliary:
  vision:
@@ -165,6 +175,9 @@ command_allowlist: []
quick_commands: {}
personalities: {}
security:
+  sovereign_audit: true
+  no_phone_home: true
  redact_secrets: true
  tirith_enabled: true
  tirith_path: tirith
@@ -1,212 +0,0 @@
# Lazarus Cell Specification v1.0

**Canonical epic:** `Timmy_Foundation/timmy-config#267`
**Author:** Ezra (architect)
**Date:** 2026-04-06
**Status:** Draft — open for burn-down by `#269` `#270` `#271` `#272` `#273` `#274`

---

## 1. Purpose

This document defines the **Cell** — the fundamental isolation primitive of the Lazarus Pit v2.0. Every downstream implementation (isolation layer, invitation protocol, backend abstraction, teaming model, verification suite, and operator surface) must conform to the invariants, roles, lifecycle, and publication rules defined here.

---

## 2. Core Invariants

> *No agent shall leak state, credentials, or filesystem into another agent's resurrection cell.*

### 2.1 Cell Invariant Definitions

| Invariant | Meaning | Enforcement |
|-----------|---------|-------------|
| **I1 — Filesystem Containment** | A cell may only read/write paths under its assigned `CELL_HOME`. No traversal into host `~/.hermes/`, `/root/wizards/`, or other cells. | Mount namespace (Level 2+) or strict chroot + AppArmor (Level 1) |
| **I2 — Credential Isolation** | Host tokens, env files, and SSH keys are never copied into a cell. Only per-cell credential pools are injected at spawn. | Harness strips `HERMES_*` and `HOME`; injects `CELL_CREDENTIALS` manifest |
| **I3 — Process Boundary** | A cell runs as an independent OS process or container. It cannot ptrace, signal, or inspect sibling cells. | PID namespace, seccomp, or Docker isolation |
| **I4 — Network Segmentation** | A cell does not bind to host-private ports or sniff host traffic unless explicitly proxied. | Optional network namespace / proxy boundary |
| **I5 — Memory Non-Leakage** | Shared memory, IPC sockets, and tmpfs mounts are cell-scoped. No post-exit residue in host `/tmp` or `/dev/shm`. | TTL cleanup + graveyard garbage collection (`#273`) |
| **I6 — Audit Trail** | Every cell mutation (spawn, invite, checkpoint, close) is logged to an immutable ledger (Gitea issue comment or local append-only log). | Required for all production cells |

---

## 3. Role Taxonomy

Every participant in a cell is assigned exactly one role at invitation time. Roles are immutable for the duration of the session.

| Role | Permissions | Typical Holder |
|------|-------------|----------------|
| **director** | Can invite others, trigger checkpoints, close the cell, and override cell decisions. Cannot directly execute tools unless also granted `executor`. | Human operator (Alexander) or fleet commander (Timmy) |
| **executor** | Full tool execution and filesystem write access within the cell. Can push commits to the target project repo. | Fleet agents (Ezra, Allegro, etc.) |
| **observer** | Read-only access to cell filesystem and shared scratchpad. Cannot execute tools or mutate state. | Human reviewer, auditor, or training monitor |
| **guest** | Same permissions as `executor`, but sourced from outside the fleet. Subject to stricter backend isolation (Docker by default). | External bots (Codex, Gemini API, Grok, etc.) |
| **substitute** | A special `executor` who joins to replace a downed agent. Inherits the predecessor's last checkpoint but not their home memory. | Resurrection-pool fallback agent |

### 3.1 Role Combinations

- A single participant may hold **at most one** primary role.
- A `director` may temporarily downgrade to `observer` but cannot upgrade to `executor` without a new invitation.
- `guest` and `substitute` roles must be explicitly enabled in cell policy.

---

## 4. Cell Lifecycle State Machine

```
 ┌─────────┐    invite     ┌───────────┐   prepare    ┌──────────┐
 │  IDLE   │ ─────────────►│  INVITED  │ ────────────►│ PREPARING│
 └─────────┘               └───────────┘              └────┬─────┘
      ▲                                                    │
      │                                                    │ spawn
      │                                                    ▼
      │                                               ┌─────────┐
      │  checkpoint / resume                          │ ACTIVE  │
      │◄──────────────────────────────────────────────┤         │
      │                                               └────┬────┘
      │                                                    │
      │  close / timeout                                   │
      │◄───────────────────────────────────────────────────┘
      │
      │                                               ┌─────────┐
      └──────────────── archive ◄─────────────────────│ CLOSED  │
                                                      └─────────┘
                     down / crash
                      ┌─────────┐
                      │ DOWNED  │────► substitute invited
                      └─────────┘
```

### 4.1 State Definitions

| State | Description | Valid Transitions |
|-------|-------------|-------------------|
| **IDLE** | Cell does not yet exist in the registry. | `INVITED` |
| **INVITED** | An invitation token has been generated but not yet accepted. | `PREPARING` (on accept), `CLOSED` (on expiry/revoke) |
| **PREPARING** | Cell directory is being created, credentials injected, backend initialized. | `ACTIVE` (on successful spawn), `CLOSED` (on failure) |
| **ACTIVE** | At least one participant is running in the cell. Tool execution is permitted. | `CHECKPOINTING`, `CLOSED`, `DOWNED` |
| **CHECKPOINTING** | A snapshot of cell state is being captured. | `ACTIVE` (resume), `CLOSED` (if final) |
| **DOWNED** | An `ACTIVE` agent missed heartbeats. Cell is frozen pending recovery. | `ACTIVE` (revived), `CLOSED` (abandoned) |
| **CLOSED** | Cell has been explicitly closed or TTL expired. Filesystem enters grace period. | `ARCHIVED` |
| **ARCHIVED** | Cell artifacts (logs, checkpoints, decisions) are persisted. Filesystem may be scrubbed. | — (terminal) |
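The lifecycle can be exercised directly as code. The states and edges come straight from the spec; the `Cell` class itself is an illustrative sketch, not part of any implementation issue.

```python
# Transition table taken from the Cell Lifecycle State Machine.
VALID = {
    "IDLE": {"INVITED"},
    "INVITED": {"PREPARING", "CLOSED"},
    "PREPARING": {"ACTIVE", "CLOSED"},
    "ACTIVE": {"CHECKPOINTING", "CLOSED", "DOWNED"},
    "CHECKPOINTING": {"ACTIVE", "CLOSED"},
    "DOWNED": {"ACTIVE", "CLOSED"},
    "CLOSED": {"ARCHIVED"},
    "ARCHIVED": set(),  # terminal
}

class Cell:
    def __init__(self):
        self.state = "IDLE"

    def transition(self, nxt: str) -> str:
        """Apply a lifecycle edge, rejecting anything the spec forbids."""
        if nxt not in VALID[self.state]:
            raise ValueError(f"{self.state} -> {nxt} violates the lifecycle")
        self.state = nxt
        return self.state
```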

### 4.2 TTL and Grace Rules

- **Active TTL:** Default 4 hours. Renewable by `director` up to a max of 24 hours.
- **Invited TTL:** Default 15 minutes. Unused invitations auto-revoke.
- **Closed Grace:** 30 minutes. Cell filesystem remains recoverable before scrubbing.
- **Archived Retention:** 30 days. After which checkpoints may be moved to cold storage or deleted per policy.

---

## 5. Publication Rules

The Cell is **not** a source of truth for fleet state. It is a scratch space. The following rules govern what may leave the cell boundary.

### 5.1 Always Published (Required)

| Artifact | Destination | Purpose |
|----------|-------------|---------|
| Git commits to the target project repo | Gitea / Git remote | Durable work product |
| Cell spawn log (who, when, roles, backend) | Gitea issue comment on epic/mission issue | Audit trail |
| Cell close log (commits made, files touched, outcome) | Gitea issue comment or local ledger | Accountability |

### 5.2 Never Published (Cell-Local Only)

| Artifact | Reason |
|----------|--------|
| `shared_scratchpad` drafts and intermediate reasoning | May contain false starts, passwords mentioned in context, or incomplete thoughts |
| Per-cell credentials and invite tokens | Security — must not leak into commit history |
| Agent home memory files (even read-only copies) | Privacy and sovereignty of the agent's home |
| Internal tool-call traces | Noise and potential PII |

### 5.3 Optionally Published (Director Decision)

| Artifact | Condition |
|----------|-----------|
| `decisions.jsonl` | When the cell operated as a council and a formal record is requested |
| Checkpoint tarball | When the mission spans multiple sessions and continuity is required |
| Shared notes (final version) | When explicitly marked `PUBLISH` by a director |

---

## 6. Filesystem Layout

Every cell, regardless of backend, exposes the same directory contract:

```
/tmp/lazarus-cells/{cell_id}/
├── .lazarus/
│   ├── cell.json         # cell metadata (roles, TTL, backend, target repo)
│   ├── spawn.log         # immutable spawn record
│   ├── decisions.jsonl   # logged votes / approvals / directives
│   └── checkpoints/      # snapshot tarballs
├── project/              # cloned target repo (if applicable)
├── shared/
│   ├── scratchpad.md     # append-only cross-agent notes
│   └── artifacts/        # shared files any member can read/write
└── home/
    ├── {agent_1}/        # agent-scoped writable area
    ├── {agent_2}/
    └── {guest_n}/
```

### 6.1 Backend Mapping

| Backend | `CELL_HOME` realization | Isolation Level |
|---------|------------------------|-----------------|
| `process` | `tmpdir` + `HERMES_HOME` override | Level 1 (directory + env) |
| `venv` | Separate Python venv + `HERMES_HOME` | Level 1.5 (directory + env + package isolation) |
| `docker` | Rootless container with volume mount | Level 3 (full container boundary) |
| `remote` | SSH tmpdir on remote host | Level varies by remote config |

---

## 7. Graveyard and Retention Policy

When a cell closes, it enters the **Graveyard** — a quarantined holding area before final scrubbing.

### 7.1 Graveyard Rules

```
ACTIVE ──► CLOSED ──► /tmp/lazarus-graveyard/{cell_id}/ ──► TTL grace ──► SCRUBBED
```

- **Grace period:** 30 minutes (configurable per mission)
- **During grace:** A director may issue `lazarus resurrect {cell_id}` to restore the cell to `ACTIVE`
- **After grace:** Filesystem is recursively deleted. Checkpoints are moved to `lazarus-archive/{date}/{cell_id}/`

### 7.2 Retention Tiers

| Tier | Location | Retention | Access |
|------|----------|-----------|--------|
| Hot Graveyard | `/tmp/lazarus-graveyard/` | 30 min | Director only |
| Warm Archive | `~/.lazarus/archive/` | 30 days | Fleet agents (read-only) |
| Cold Storage | Optional S3 / IPFS / Gitea release asset | 1 year | Director only |

---

## 8. Cross-References

- Epic: `timmy-config#267`
- Isolation implementation: `timmy-config#269`
- Invitation protocol: `timmy-config#270`
- Backend abstraction: `timmy-config#271`
- Teaming model: `timmy-config#272`
- Verification suite: `timmy-config#273`
- Operator surface: `timmy-config#274`
- Existing skill: `lazarus-pit-recovery` (to be updated to this spec)
- Related protocol: `timmy-config#245` (Phoenix Protocol recovery benchmarks)

---

## 9. Acceptance Criteria for This Spec

- [ ] All downstream issues (`#269`–`#274`) can be implemented without ambiguity about roles, states, or filesystem boundaries.
- [ ] A new developer can read this doc and implement a compliant `process` backend in one session.
- [ ] The spec has been reviewed and ACK'd by at least one other wizard before `#269` merges.

---

*Sovereignty and service always.*

— Ezra
@@ -19,7 +19,7 @@ except ImportError as e:
    sys.exit(1)

# Configuration
-GITEA = "https://forge.alexanderwhitestone.com"
+GITEA = "http://143.198.27.163:3000"
RELAY_URL = "ws://localhost:2929"  # Local relay
POLL_INTERVAL = 60  # Seconds between polls
ALLOWED_PUBKEYS = []  # Will load from keystore
@@ -1,151 +0,0 @@
# CROSS AUDIT REPORT: Fleet & System Assessment v2026-04-06

**Auditor:** Ezra
**Scope:** Full fleet operational status + Gitea state + previous audit ingestion
**Previous audits ingested:** timmy-home#387 (Grand Epic, 1,031 issues), timmy-home#389 (Third Pass — Wizard House Consolidation)
**Status:** ACTIONABLE GAPS IDENTIFIED — 6 tactical sub-issues filed below

---

## I. FLEET ROLL CALL — Live Operational Status

| Wizard | Gateway | Model | Telegram | Notes |
|--------|---------|-------|----------|-------|
| **ezra** | ONLINE | kimi-for-coding | Yes | Hermes VPS, active architect lane |
| **bezalel** | ONLINE | gemma4:latest | Yes | Hermes VPS, Claude Opus fallback configured |
| **bilbobagginshire** | ONLINE | qwen2.5:1.5b | Yes | **Custom runtime** (not Hermes), hand-rolled Telegram bot |
| **allegro-primus** | ONLINE | nvidia/nemotron-free | No | Runs on a **different VPS** (167.99.126.228); cron-based, not a persistent gateway |
| **deep-dive** | OFFLINE | — | No | Empty / dormant |
| **hermes-turboquant** | OFFLINE | — | No | **Empty shell** — just a directory |
| **the-nexus** | OFFLINE | — | No | Directory exists, no running gateway |
| **timmy-config** | OFFLINE | — | No | Directory exists, no running gateway |
| **turboquant-llama.cpp** | OFFLINE | — | No | Code repo (fork), not an agent house |

**Hard truth:** Only 4 of 10 wizard directories are operationally active. 3 are empty shells or code repos mistaken for agent houses.

---

## II. ISSUE ECOLOGY — Gitea State

| Metric | Value | Assessment |
|--------|-------|------------|
| **Total issues** | 1,287 | Large but manageable |
| **Open** | 482 (37%) | Within norms but biased toward config/frontier |
| **Closed** | 805 (63%) | Healthy completion rate |
| **Stale unassigned** | 52 | Needs triage sweep |
| **Zero-comment open issues** | 160 | **Critical gap** — no engagement, no scoping, no burn |
| **Open >14 days** | 0 | Issues are fresh, but freshness can mean churn |

### Throughput Leaderboard (closers)

| Agent | Closed Issues |
|-------|---------------|
| claude | 170 |
| Timmy | 153 |
| allegro | 75 |
| Rockachopa | 57 |
| groq | 39 |
| grok | 26 |
| ezra | 25 |

### Current Open Load (assigned)

| Agent | Open Assigned |
|-------|---------------|
| Timmy | 159 |
| allegro | 156 |
| ezra | 55 |
| gemini | 54 |

**Hard truth:** Timmy and allegro are overloaded, and 160 open issues have zero comments — they were created and abandoned without discussion.

---

## III. PILLAR HEALTH — Compared to Previous Audit

### Pillar 1: THE DOOR (the-door repo)
**Previous:** 1/8 closed (12.5%) — called "the most important thing is the least done."
**Current:** 4/8 closed (50%) — **improved, but core deployment issues remain open.**

Still open:
- #2 VPS prep: swap, nginx, SSL, firewall, DNS
- #4 Crisis-aware system prompt + API wiring
- #7 Go live + smoke test
- #8 Fallback + resilience

**Gap:** Infrastructure and crisis-detection wiring are still incomplete. The Door cannot serve a man at 3 AM until these close.

### Pillar 2: THE SOVEREIGN FLEET
**Previous:** Mixed — infrastructure strong, participation weak.
**Current:** Same. New gap: **Lazarus Pit v2.0** (#267) was just scoped, but all sub-issues are unassigned and have zero comments.

Additional gaps:
- The `wizard-checkpoints` repo has **no issues enabled** or does not exist on Gitea (404 from the API), yet it is referenced as the canonical checkpoint registry.
- Multiple overlapping epics on Nostr migration (#257, #862, #819, #138) and fleet optimization (#822, #861, #820, #821).
- 5 open "consolidation" epics (#860, #861, #862) that are themselves unmerged evidence of fragmentation.

### Pillar 3: THE NEXUS (the-nexus repo)
**Previous:** 88.8% burn rate but 52 critical gaps; visual shell disconnected from data.
**Current:** **109 open / 415 closed** — burn rate dropped to ~79%. The open count **increased** by 57 issues since April 4.

**Gap:** Issue creation is outpacing closure. The Nexus is accumulating speculative features faster than they are wired to working systems.

### Pillar 4: THE TRAINING LOOP
**Previous:** TurboQuant blocked by Gemma 4 architecture support in llama.cpp.
**Current:** Same. The `turboquant-llama.cpp` directory exists, but the associated agent house (`hermes-turboquant`) is an empty shell.

### Pillar 5: THE ECONOMY / SOVEREIGNTY
No measurable progress since the previous audit. Nostr bridge code exists in `timmy-config/nostr-bridge/`, but no production deployment issue is closed.

---
|
||||
|
||||
## IV. SYSTEMIC GAPS IDENTIFIED

### Gap A: The Zero-Comment Graveyard

**160 open issues have never been commented on.** These are not backlog — they are unscoped, unburned, and invisible to the fleet. No acceptance criteria, no lane assignment, no proof of life.

### Gap B: Ghost Houses & Directory Confusion

`hermes-turboquant`, `the-nexus`, and `timmy-config` exist as directories under `/root/wizards/` but have no running gateways and no agent identity. They are code repos or empty shells, not wizard houses. This causes confusion in fleet health checks.

### Gap C: Consolidation Epics Without Consolidation

There are 5 open "CONSOLIDATION" epics (#860, #861, #862, #863, #864) that were filed to merge duplicate epics. Most are unassigned and have no comments. The consolidation effort itself needs consolidation.

### Gap D: Checkpoint Registry Is Invisible

The `wizard-checkpoints` registry is referenced everywhere but returns 404 from the Gitea API. If it is the source of truth for agent state metadata, it must be reachable and issue-tracked.
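
One quick way to verify the claim is to probe the repo endpoint directly. The sketch below keeps the status-code triage as a pure function; the commented probe and the `Timmy_Foundation/wizard-checkpoints` path are assumptions inferred from this report, not confirmed deployment details:

```python
# Sketch: classify what a Gitea /repos/{owner}/{repo} probe tells us.
# The actual probe would be something like:
#   requests.get(f"{GITEA_API_BASE}/repos/Timmy_Foundation/wizard-checkpoints",
#                headers={"Authorization": f"token {token}"}, timeout=10)
# (owner/repo path is an assumption based on this report)

def diagnose_registry(status: int) -> str:
    """Turn an HTTP status from the repo endpoint into a human-readable diagnosis."""
    if status == 200:
        return "visible"
    if status == 404:
        # Gitea typically hides repos the token cannot see behind a 404
        return "missing or not visible to this token (404)"
    if status in (401, 403):
        return "token lacks access"
    return f"unexpected status {status}"

print(diagnose_registry(404))  # → missing or not visible to this token (404)
```

If the diagnosis is the 404 case, the fix is either restoring the repo or granting the audit token read access, which is exactly the scope of #SUB-5.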

### Gap E: Lazarus Pit v2.0 — Freshly Scoped, Completely Unclaimed

Milestone 37 has 8 open issues. Only #268 (SPEC) has been claimed and delivered. #269–#274 are unassigned with zero comments. A resurrection pool with no one assigned to build it is ironic.

---

## V. TRIAGE — Improvements & Actionable Issues

Six tactical sub-issues have been created from this audit. Each has clear acceptance criteria.

| Sub-Issue | Title | Owner | Priority |
|-----------|-------|-------|----------|
| #SUB-1 | Zero-Comment Triage Sweep — 160 open issues | allegro | P1 |
| #SUB-2 | Ghost House Cleanup — separate agent houses from code repos | ezra | P1 |
| #SUB-3 | Consolidation Meta-Epic — close or merge 5 stale consolidation issues | ezra | P2 |
| #SUB-4 | The Door Deployment Unblocker — finish #2, #4, #7, #8 | allegro + Timmy | P0 |
| #SUB-5 | Wizard-Checkpoints Registry Recovery — fix Gitea visibility or migrate | Timmy | P1 |
| #SUB-6 | Lazarus Pit Assignment — claim and seed #269–#274 | allegro | P1 |

---

## VI. SUMMARY GRADES

| Area | Grade | Rationale |
|------|-------|-----------|
| **Fleet Uptime** | C | 4/10 houses online; 3 are empty shells |
| **Issue Throughput** | B | 63% close rate is healthy but front-loaded on a few agents |
| **Issue Quality** | D | 160 zero-comment issues = unscoped noise |
| **Architecture Coherence** | C | Overlapping epics, consolidation epics unconsolidated |
| **Mission-Critical Delivery** | D | The Door still not live after 2+ audits |
| **Checkpoint / Recovery** | D | Registry invisible, Lazarus Pit unclaimed |

**Overall: Functional but noisy.** The fleet has velocity but poor lane discipline. Too many issues are created as placeholders and never scoped. The most important work (The Door) remains blocked by infrastructure tickets that have sat open for days.

---

*Sovereignty and service always.*

— Ezra, Cross Audit 2026-04-06

@@ -1,124 +0,0 @@

# CROSS AUDIT REPORT v2: Fleet & System Assessment

**Auditor:** Ezra
**Date:** 2026-04-06 17:09 UTC
**Previous audits ingested:**
- timmy-home#387 (Grand Epic, 1,031 issues, April 4)
- timmy-home#389 (Wizard House Consolidation, April 5)
- timmy-home#480 (Cross Audit v1, April 6)
- timmy-home#481–#486 (v1 sub-issues)

---

## I. EXECUTIVE SUMMARY: The Previous Audit Failed

**Hard truth:** Cross Audit v1 (#480) had **0% execution**. Sub-issues #481–#486 remain open with zero comments and zero commits.

**Why it failed:**
1. **Overloaded assignees** — Timmy (75–161 open) and allegro (83–159 open) were assigned more work.
2. **Scope too broad** for a single burn cycle ("triage 160 issues", "close 5 epics").
3. **Manual labor** instead of automation — asked humans to do what a script should do.

This audit changes tactics: **automation-first, capacity-aware, single-cycle scope.**

---

## II. CURRENT FLEET STATUS

| Wizard | Gateway | Model | Telegram | Assessment |
|--------|---------|-------|----------|------------|
| **ezra** | ONLINE | kimi-for-coding | Yes | Active, architecture lane |
| **bezalel** | ONLINE | gemma4:latest | Yes | Active, creative/tools lane |
| **bilbobagginshire** | ONLINE | qwen2.5:1.5b | Yes | Custom runtime (not Hermes) |
| **allegro-primus** | ONLINE | nvidia/nemotron-free | No | Remote VPS, cron-based |
| **deep-dive** | OFFLINE | — | No | Dormant |
| **hermes-turboquant** | OFFLINE | — | No | Empty shell |
| **the-nexus** | OFFLINE | — | No | Code repo, not agent house |
| **timmy-config** | OFFLINE | — | No | Code repo, not agent house |
| **turboquant-llama.cpp** | OFFLINE | — | No | Code repo, not agent house |

**No change since v1.** 4 operational houses, 5 ghosts.

---

## III. ISSUE ECOLOGY — WORSENED

| Metric | v1 Audit (Apr 6) | Current | Delta |
|--------|------------------|---------|-------|
| Total tracked issues | 1,287 | 1,299 | **+12** |
| Open | 482 | 495 | **+13** |
| Closed | 805 | 804 | **-1** |
| Zero-comment open | 160 | 174 | **+14** |
| Stale unassigned | 52 | 59 | **+7** |
| Timmy open load | 159 | 161 | **+2** |
| Allegro open load | 156 | 159 | **+3** |

**What IS moving:** 13 PRs merged since v1 (gemini, allegro, Timmy, and ezra are shipping).
**What is NOT moving:** backlog hygiene, audit action items, and The Door deployment blockers.

---

## IV. PILLAR HEALTH

### The Door (the-door repo)
**Status:** 4 open / 8 total — same 4 blockers as v1.
- #2 VPS prep (assigned ezra)
- #4 Crisis prompt + API wiring (assigned allegro)
- #7 Go live + smoke test (assigned Timmy)
- #8 Fallback + resilience (assigned Timmy)

**Verdict:** Still the most important and least done. No progress since last audit.

### Lazarus Pit v2.0
**Status:** PR #278 (SPEC) merged by ezra. The rest unclaimed.
- timmy-config milestone 37: 50 open issues
- the-nexus M6 epic: 5 unassigned sub-issues

**Verdict:** Spec is canonical. Execution has not started.

### The Nexus
**Status:** 109 open / 524 total. Open count **increased** by 57 since April 4.
**Verdict:** Feature creation continues to outpace closure and integration.

---

## V. ROOT CAUSE ANALYSIS

The fleet does not have a **backlog hygiene automation layer.** Agents create issues freely, but no automated process:
- Closes stale placeholders
- Caps per-agent load
- Flags duplicates
- Enforces acceptance criteria

Result: humans are drowning in administrative debt while trying to ship code.
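
The first of these rules is almost free to build: Gitea's issue JSON already carries a `comments` count, so a zero-comment sweep reduces to filtering the list the API returns. A minimal sketch (the function name and label workflow are illustrative, not deployed code):

```python
# Sketch of one hygiene rule: flag open issues that have never been commented on.
# The filter itself is pure data work; fetching and labeling would go through
# the normal Gitea API wrapper.

def zero_comment_backlog(issues: list[dict]) -> list[dict]:
    """Return open, unassigned issues with no comments — the 'graveyard' candidates."""
    return [
        i for i in issues
        if i.get("state") == "open"
        and not i.get("assignee")
        and i.get("comments", 0) == 0
    ]

issues = [
    {"number": 1, "state": "open", "assignee": None, "comments": 0},
    {"number": 2, "state": "open", "assignee": {"login": "ezra"}, "comments": 0},
    {"number": 3, "state": "open", "assignee": None, "comments": 4},
]
print([i["number"] for i in zero_comment_backlog(issues)])  # → [1]
```

A bot built on this could label the hits `needs-scope` and close any that stay silent for another cycle, which is the shape of #SUB-B1 below.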
---

## VI. ACTIONABLE SUB-ISSUES (Automation-First)

| Issue | Title | Assignee | Rationale |
|-------|-------|----------|-----------|
| #SUB-B1 | Build zero-comment auto-triage bot | gemini | Proven shipper (5+ PRs today), low load (19 open) |
| #SUB-B2 | The Door — VPS prep hardening (#2) | ezra | Infra lane, online, single concrete task |
| #SUB-B3 | Build open-load cap enforcement script | allegro | Dispatch wizard; should own routing automation |
| #SUB-B4 | Consolidation graveyard sweep + registry fix | ezra | Archivist lane; close-or-merge in one cycle |
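
#SUB-B3's cap enforcement is, at its core, a counting problem over the org-wide open issue list. A hedged sketch — the cap value (40) and the assignee field shape are illustrative assumptions, not a spec:

```python
from collections import Counter

# Sketch: given the org-wide open issue list, find agents over a load cap.
MAX_OPEN_PER_AGENT = 40  # illustrative threshold, not a fleet policy

def over_cap(issues: list[dict], cap: int = MAX_OPEN_PER_AGENT) -> dict[str, int]:
    """Map assignee login -> open count, for agents above the cap."""
    loads = Counter(
        i["assignee"]["login"]
        for i in issues
        if i.get("state") == "open" and i.get("assignee")
    )
    return {name: n for name, n in loads.items() if n > cap}

# Loads mirroring the numbers in this audit
sample = (
    [{"state": "open", "assignee": {"login": "Timmy"}}] * 161
    + [{"state": "open", "assignee": {"login": "gemini"}}] * 19
)
print(over_cap(sample))  # → {'Timmy': 161}
```

The enforcement script would refuse new assignments to anyone in the returned dict until their load drops below the cap.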
---

## VII. SUMMARY GRADES

| Area | Grade | Rationale |
|------|-------|-----------|
| **Fleet Uptime** | C | Same as v1 — 4/10 online |
| **Issue Velocity** | B | 13 PRs merged since v1 shows the fleet CAN ship |
| **Issue Hygiene** | F | Zero-comment count grew by 14; previous audit ignored |
| **Mission-Critical Delivery** | F | The Door still not live |
| **Audit Execution** | F | v1 had 0% execution |

**Overall: The fleet ships, but it drowns in its own backlog.** The fix is not more manual triage. It is automation that guards the backlog so agents can burn code, not paper.

---

*Sovereignty and service always.*

— Ezra, Cross Audit v2

633
workforce-manager.py
Normal file
@@ -0,0 +1,633 @@
#!/usr/bin/env python3
"""
Workforce Manager - Epic #204 / Milestone #218

Reads fleet routing, Wolf evaluation scores, and open Gitea issues across
Timmy_Foundation repos. Assigns each issue to the best-available agent,
tracks success rates, and dispatches work.

Usage:
    python workforce-manager.py            # Scan, assign, dispatch
    python workforce-manager.py --dry-run  # Show assignments without dispatching
    python workforce-manager.py --status   # Show agent status and open issue count
    python workforce-manager.py --cron     # Run silently, save to log
"""

import argparse
import json
import logging
import os
import sys
import time
from datetime import datetime, timezone
from pathlib import Path
from typing import Any, Dict, List, Optional

try:
    import requests
except ImportError:
    print("FATAL: requests is required. pip install requests", file=sys.stderr)
    sys.exit(1)


# ---------------------------------------------------------------------------
# Constants
# ---------------------------------------------------------------------------

FLEET_ROUTING_PATH = Path.home() / ".hermes" / "fleet-routing.json"
WOLF_RESULTS_DIR = Path.home() / ".hermes" / "wolf" / "results"
GITEA_TOKEN_PATH = Path.home() / ".hermes" / "gitea_token_vps"
GITEA_API_BASE = "https://forge.alexanderwhitestone.com/api/v1"
WORKFORCE_STATE_PATH = Path.home() / ".hermes" / "workforce-state.json"
ORG_NAME = "Timmy_Foundation"

# Role-to-agent-role mapping heuristics
ROLE_KEYWORDS = {
    "code-generation": [
        "code", "implement", "feature", "function", "class", "script",
        "build", "create", "add", "module", "component",
    ],
    "issue-triage": [
        "triage", "categorize", "tag", "label", "organize",
        "backlog", "sort", "prioritize", "review issue",
    ],
    "on-request-queries": [
        "query", "search", "lookup", "find", "check",
        "info", "report", "status",
    ],
    "devops": [
        "deploy", "ci", "cd", "pipeline", "docker", "container",
        "server", "infrastructure", "config", "nginx", "cron",
        "setup", "install", "environment", "provision",
        "build", "release", "workflow",
    ],
    "documentation": [
        "doc", "readme", "document", "write", "guide",
        "spec", "wiki", "changelog", "tutorial",
        "explain", "describe",
    ],
    "code-review": [
        "review", "refactor", "fix", "bug", "debug",
        "test", "lint", "style", "improve",
        "clean up", "optimize", "performance",
    ],
    "triage-routing": [
        "route", "assign", "triage", "dispatch",
        "organize", "categorize",
    ],
    "small-tasks": [
        "small", "quick", "minor", "typo", "label",
        "update", "rename", "cleanup",
    ],
    "inactive": [],
    "unknown": [],
}

# Priority keywords (higher = more urgent, route to more capable agent)
PRIORITY_KEYWORDS = {
    "critical": 5,
    "urgent": 4,
    "block": 4,
    "bug": 3,
    "fix": 3,
    "security": 5,
    "deploy": 2,
    "feature": 1,
    "enhancement": 1,
    "documentation": 1,
    "cleanup": 0,
}

# Cost tier priority (lower index = prefer first)
TIER_ORDER = ["free", "cheap", "prepaid", "unknown"]


# ---------------------------------------------------------------------------
# Data loading
# ---------------------------------------------------------------------------

def load_json(path: Path) -> Any:
    if not path.exists():
        logging.warning("File not found: %s", path)
        return None
    with open(path) as f:
        return json.load(f)


def load_fleet_routing() -> List[dict]:
    data = load_json(FLEET_ROUTING_PATH)
    if data and "agents" in data:
        return data["agents"]
    return []


def load_wolf_scores() -> Dict[str, dict]:
    """Load Wolf evaluation scores from results directory."""
    scores: Dict[str, dict] = {}
    if not WOLF_RESULTS_DIR.exists():
        return scores
    for f in sorted(WOLF_RESULTS_DIR.glob("*.json")):
        data = load_json(f)
        if data and "model_scores" in data:
            for entry in data["model_scores"]:
                model = entry.get("model", "")
                if model:
                    scores[model] = entry
    return scores


def load_workforce_state() -> dict:
    if WORKFORCE_STATE_PATH.exists():
        return load_json(WORKFORCE_STATE_PATH) or {}
    return {"assignments": [], "agent_stats": {}, "last_run": None}


def save_workforce_state(state: dict) -> None:
    WORKFORCE_STATE_PATH.parent.mkdir(parents=True, exist_ok=True)
    with open(WORKFORCE_STATE_PATH, "w") as f:
        json.dump(state, f, indent=2)
    logging.info("Workforce state saved to %s", WORKFORCE_STATE_PATH)


# ---------------------------------------------------------------------------
# Gitea API
# ---------------------------------------------------------------------------

class GiteaAPI:
    """Thin wrapper for Gitea REST API."""

    def __init__(self, token: str, base_url: str = GITEA_API_BASE):
        self.base_url = base_url.rstrip("/")
        self.session = requests.Session()
        self.session.headers.update({
            "Authorization": f"token {token}",
            "Accept": "application/json",
            "Content-Type": "application/json",
        })

    def _get(self, path: str, params: Optional[dict] = None) -> Any:
        r = self.session.get(f"{self.base_url}{path}", params=params)
        r.raise_for_status()
        return r.json()

    def _post(self, path: str, data: dict) -> Any:
        r = self.session.post(f"{self.base_url}{path}", json=data)
        r.raise_for_status()
        return r.json()

    def _patch(self, path: str, data: dict) -> Any:
        r = self.session.patch(f"{self.base_url}{path}", json=data)
        r.raise_for_status()
        return r.json()

    def get_org_repos(self, org: str) -> List[dict]:
        return self._get(f"/orgs/{org}/repos", params={"limit": 100})

    def get_open_issues(self, owner: str, repo: str, page: int = 1) -> List[dict]:
        params = {"state": "open", "type": "issues", "limit": 50, "page": page}
        return self._get(f"/repos/{owner}/{repo}/issues", params=params)

    def get_all_open_issues(self, org: str) -> List[dict]:
        """Fetch all open issues across all org repos."""
        repos = self.get_org_repos(org)
        all_issues = []
        for repo in repos:
            name = repo["name"]
            try:
                # Paginate through all issues
                before = len(all_issues)
                page = 1
                while True:
                    issues = self.get_open_issues(org, name, page=page)
                    if not issues:
                        break
                    all_issues.extend(issues)
                    if len(issues) < 50:
                        break
                    page += 1
                # Log the per-repo count, not the running total
                logging.info("Loaded %d open issues from %s/%s", len(all_issues) - before, org, name)
            except Exception as exc:
                logging.warning("Failed to load issues from %s/%s: %s", org, name, exc)
        return all_issues

    def add_issue_comment(self, owner: str, repo: str, issue_num: int, body: str) -> dict:
        return self._post(f"/repos/{owner}/{repo}/issues/{issue_num}/comments", {"body": body})

    def add_issue_label(self, owner: str, repo: str, issue_num: int, label: str) -> dict:
        return self._post(
            f"/repos/{owner}/{repo}/issues/{issue_num}/labels",
            {"labels": [label]},
        )

    def assign_issue(self, owner: str, repo: str, issue_num: int, assignee: str) -> dict:
        return self._patch(
            f"/repos/{owner}/{repo}/issues/{issue_num}",
            {"assignees": [assignee]},
        )


# ---------------------------------------------------------------------------
# Scoring & Assignment Logic
# ---------------------------------------------------------------------------

def classify_issue(issue: dict) -> str:
    """Determine the best agent role for an issue based on title/body."""
    title = (issue.get("title", "") or "").lower()
    body = (issue.get("body", "") or "").lower()
    text = f"{title} {body}"
    labels = [l.get("name", "").lower() for l in issue.get("labels", [])]
    text += " " + " ".join(labels)

    best_role = "small-tasks"  # default
    best_score = 0

    for role, keywords in ROLE_KEYWORDS.items():
        if not keywords:
            continue
        score = sum(2 for kw in keywords if kw in text)
        # Boost if a matching label exists
        for label in labels:
            if any(kw in label for kw in keywords):
                score += 3
        if score > best_score:
            best_score = score
            best_role = role

    return best_role


def compute_priority(issue: dict) -> int:
    """Compute issue priority from keywords."""
    title = (issue.get("title", "") or "").lower()
    body = (issue.get("body", "") or "").lower()
    text = f"{title} {body}"
    return sum(v for k, v in PRIORITY_KEYWORDS.items() if k in text)


def score_agent_for_issue(agent: dict, role: str, wolf_scores: dict, priority: int) -> float:
    """Score how well an agent matches an issue. Higher is better."""
    score = 0.0

    # Primary: role match
    agent_role = agent.get("role", "unknown")
    if agent_role == role:
        score += 10.0
    elif agent_role == "small-tasks" and role in ("issue-triage", "on-request-queries"):
        score += 6.0
    elif agent_role == "triage-routing" and role in ("issue-triage", "triage-routing"):
        score += 8.0
    elif agent_role == "code-generation" and role in ("code-review", "devops"):
        score += 4.0

    # Wolf quality bonus
    model = agent.get("model", "")
    wolf_entry = None
    for wm, ws in wolf_scores.items():
        if model and model.lower() in wm.lower():
            wolf_entry = ws
            break
    if wolf_entry and wolf_entry.get("success"):
        score += wolf_entry.get("total", 0) * 3.0

    # Cost efficiency: prefer free/cheap for low priority
    tier = agent.get("tier", "unknown")
    tier_idx = TIER_ORDER.index(tier) if tier in TIER_ORDER else 3
    if priority <= 1 and tier in ("free", "cheap"):
        score += 4.0
    elif priority >= 3 and tier in ("prepaid",):
        score += 3.0
    else:
        score += (3 - tier_idx) * 1.0

    # Activity bonus
    if agent.get("active", False):
        score += 2.0

    # Repo familiarity bonus: more repos slightly better
    repo_count = agent.get("repo_count", 0)
    score += min(repo_count * 0.2, 2.0)

    return round(score, 3)


def find_best_agent(agents: List[dict], role: str, wolf_scores: dict, priority: int,
                    exclude: Optional[List[str]] = None) -> Optional[dict]:
    """Find the best agent for the given role and priority."""
    exclude = exclude or []
    candidates = []
    for agent in agents:
        if agent.get("name") in exclude:
            continue
        if not agent.get("active", False):
            continue
        s = score_agent_for_issue(agent, role, wolf_scores, priority)
        candidates.append((s, agent))

    if not candidates:
        return None

    candidates.sort(key=lambda x: x[0], reverse=True)
    return candidates[0][1]


# ---------------------------------------------------------------------------
# Dispatch
# ---------------------------------------------------------------------------

def dispatch_assignment(api: GiteaAPI, issue: dict, agent: dict, dry_run: bool = False) -> dict:
    """Assign an issue to an agent and optionally post a comment."""
    owner = ORG_NAME
    repo = issue.get("repository", {}).get("name", "")

    # Extract repo from issue repo_url if not in the repository key
    if not repo:
        repo_url = issue.get("repository_url", "")
        if repo_url:
            repo = repo_url.rstrip("/").split("/")[-1]

    if not repo:
        return {"success": False, "error": "Cannot determine repository for issue"}

    issue_num = issue.get("number")
    agent_name = agent.get("name", "unknown")

    comment_body = (
        f"🤖 **Workforce Manager assigned this issue to: @{agent_name}**\n\n"
        f"- **Agent:** {agent_name}\n"
        f"- **Model:** {agent.get('model', 'unknown')}\n"
        f"- **Role:** {agent.get('role', 'unknown')}\n"
        f"- **Tier:** {agent.get('tier', 'unknown')}\n"
        f"- **Assigned at:** {datetime.now(timezone.utc).isoformat()}\n\n"
        f"*Automated assignment by Workforce Manager (Epic #204)*"
    )

    if dry_run:
        return {
            "success": True,
            "dry_run": True,
            "repo": repo,
            "issue_number": issue_num,
            "assignee": agent_name,
            "comment": comment_body,
        }

    try:
        api.assign_issue(owner, repo, issue_num, agent_name)
        api.add_issue_comment(owner, repo, issue_num, comment_body)
        return {
            "success": True,
            "repo": repo,
            "issue_number": issue_num,
            "issue_title": issue.get("title", ""),
            "assignee": agent_name,
        }
    except Exception as exc:
        return {
            "success": False,
            "repo": repo,
            "issue_number": issue_num,
            "error": str(exc),
        }


# ---------------------------------------------------------------------------
# State Tracking
# ---------------------------------------------------------------------------

def update_agent_stats(state: dict, result: dict) -> None:
    """Update per-agent success tracking."""
    agent_name = result.get("assignee", "unknown")
    if "agent_stats" not in state:
        state["agent_stats"] = {}
    if agent_name not in state["agent_stats"]:
        state["agent_stats"][agent_name] = {
            "total_assigned": 0,
            "successful": 0,
            "failed": 0,
            "success_rate": 0.0,
            "last_assignment": None,
            "assigned_issues": [],
        }

    stats = state["agent_stats"][agent_name]
    stats["total_assigned"] += 1
    stats["last_assignment"] = datetime.now(timezone.utc).isoformat()
    stats["assigned_issues"].append({
        "repo": result.get("repo", ""),
        "issue_number": result.get("issue_number"),
        "success": result.get("success", False),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

    if result.get("success"):
        stats["successful"] += 1
    else:
        stats["failed"] += 1

    total = stats["successful"] + stats["failed"]
    stats["success_rate"] = round(stats["successful"] / total, 3) if total > 0 else 0.0


def print_status(state: dict, agents: List[dict], issues_count: int) -> None:
    """Print workforce status."""
    print(f"\n{'=' * 60}")
    print(f"Workforce Manager Status - {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M UTC')}")
    print(f"{'=' * 60}")

    # Fleet summary
    active = [a for a in agents if a.get("active")]
    print(f"\nFleet: {len(active)} active agents, {len(agents)} total")
    tier_counts = {}
    for a in active:
        t = a.get("tier", "unknown")
        tier_counts[t] = tier_counts.get(t, 0) + 1
    for t, c in sorted(tier_counts.items()):
        print(f"  {t}: {c} agents")

    # Agent scores (header widths match the per-agent rows below)
    wolf = load_wolf_scores()
    print("\nAgent Details:")
    print(f"  {'Name':<45} {'Role':<18} {'Tier':<10}")
    for a in agents:
        if not a.get("active"):
            continue
        stats = state.get("agent_stats", {}).get(a["name"], {})
        rate = stats.get("success_rate", 0.0)
        total = stats.get("total_assigned", 0)
        wolf_badge = ""
        for wm, ws in wolf.items():
            if a["model"] and a["model"].lower() in wm.lower() and ws.get("success"):
                wolf_badge = f"[wolf:{ws['total']}]"
                break
        name_str = f"{a['name']} {wolf_badge}"
        if total > 0:
            name_str += f" (s/r: {rate}, n={total})"
        print(f"  {name_str:<45} {a.get('role', 'unknown'):<18} {a.get('tier', '?'):<10}")

    print(f"\nOpen Issues: {issues_count}")
    print(f"Assignments Made: {len(state.get('assignments', []))}")
    if state.get("last_run"):
        print(f"Last Run: {state['last_run']}")


# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------

def main() -> int:
    parser = argparse.ArgumentParser(description="Workforce Manager - Assign Gitea issues to AI agents")
    parser.add_argument("--dry-run", action="store_true", help="Show assignments without dispatching")
    parser.add_argument("--status", action="store_true", help="Show workforce status only")
    parser.add_argument("--cron", action="store_true", help="Run silently for cron scheduling")
    parser.add_argument("--label", type=str, help="Only process issues with this label")
    parser.add_argument("--max-issues", type=int, default=100, help="Max issues to process per run")
    args = parser.parse_args()

    # Setup logging
    if args.cron:
        logging.basicConfig(level=logging.WARNING, format="%(asctime)s [%(levelname)s] %(message)s")
    else:
        logging.basicConfig(level=logging.INFO, format="%(asctime)s [%(levelname)s] %(message)s")

    logging.info("Workforce Manager starting")

    # Load data
    agents = load_fleet_routing()
    if not agents:
        logging.error("No agents found in fleet-routing.json")
        return 1
    logging.info("Loaded %d agents from fleet routing", len(agents))

    wolf_scores = load_wolf_scores()
    if wolf_scores:
        logging.info("Loaded %d model scores from Wolf results", len(wolf_scores))

    state = load_workforce_state()

    # Load Gitea token
    if GITEA_TOKEN_PATH.exists():
        token = GITEA_TOKEN_PATH.read_text().strip()
    else:
        logging.error("Gitea token not found at %s", GITEA_TOKEN_PATH)
        return 1

    api = GiteaAPI(token)

    # Status-only mode
    if args.status:
        # Quick open issue count
        repos = api.get_org_repos(ORG_NAME)
        total = sum(r.get("open_issues_count", 0) for r in repos)
        print_status(state, agents, total)
        return 0

    # Fetch open issues
    if not args.cron:
        print(f"Scanning open issues across {ORG_NAME} repos...")

    issues = api.get_all_open_issues(ORG_NAME)

    # Filter by label (case-insensitive)
    if args.label:
        issues = [
            i for i in issues
            if any(args.label.lower() in (l.get("name", "") or "").lower() for l in i.get("labels", []))
        ]
        logging.info("Filtered to %d issues with label '%s'", len(issues), args.label)
    else:
        logging.info("Found %d open issues", len(issues))

    # Skip issues already assigned
    already_assigned_nums = set()
    for a in state.get("assignments", []):
        already_assigned_nums.add((a.get("repo"), a.get("issue_number")))

    issues = [
        i for i in issues
        if not i.get("assignee") and
        (i.get("repository", {}).get("name"), i.get("number")) not in already_assigned_nums
    ]
    logging.info("%d unassigned issues to process", len(issues))

    # Sort by priority
    issues_with_priority = [(compute_priority(i), i) for i in issues]
    issues_with_priority.sort(key=lambda x: x[0], reverse=True)
    issues = [i for _, i in issues_with_priority[:args.max_issues]]

    # Assign issues
    assignments = []
    agent_exclusions: Dict[str, List[str]] = {}  # repo -> list of assigned agents per run
    global_exclusions: List[str] = []  # agents already at capacity per run
    max_per_agent_per_run = 5

    for issue in issues:
        role = classify_issue(issue)
        priority = compute_priority(issue)
        repo = issue.get("repository", {}).get("name", "")

        # Avoid assigning same agent twice to same repo in one run
        repo_excluded = agent_exclusions.get(repo, [])

        # Also exclude agents already at assignment cap
        cap_excluded = [
            name for name, stats in state.get("agent_stats", {}).items()
            if stats.get("total_assigned", 0) > max_per_agent_per_run
        ]

        excluded = list(set(repo_excluded + global_exclusions + cap_excluded))

        agent = find_best_agent(agents, role, wolf_scores, priority, exclude=excluded)
        if not agent:
            # Relax exclusions if no agent found
            agent = find_best_agent(agents, role, wolf_scores, priority, exclude=[])
            if not agent:
                logging.warning("No suitable agent for issue #%d: %s (role=%s)",
                                issue.get("number"), issue.get("title", ""), role)
                continue

        result = dispatch_assignment(api, issue, agent, dry_run=args.dry_run)
        assignments.append(result)
        update_agent_stats(state, result)

        # Track per-repo exclusions
        if repo not in agent_exclusions:
            agent_exclusions[repo] = []
        agent_exclusions[repo].append(agent["name"])

        if args.dry_run:
            print(f"  [DRY] #{issue['number']}: {issue.get('title','')[:60]} → @{agent['name']} ({role}, p={priority})")
        else:
            status_str = "OK" if result.get("success") else "FAIL"
            print(f"  [{status_str}] #{issue['number']}: {issue.get('title','')[:60]} → @{agent['name']} ({role}, p={priority})")

    # Save state (setdefault guards against a corrupted state file missing the key)
    state.setdefault("assignments", []).extend([{
        "repo": a.get("repo"),
        "issue_number": a.get("issue_number"),
        "assignee": a.get("assignee"),
        "success": a.get("success", False),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    } for a in assignments])
    state["last_run"] = datetime.now(timezone.utc).isoformat()
    save_workforce_state(state)

    # Summary
    ok = sum(1 for a in assignments if a.get("success"))
    fail = len(assignments) - ok
    logging.info("Done: %d assigned, %d succeeded, %d failed", len(assignments), ok, fail)

    if not args.cron:
        print(f"\n{'=' * 60}")
        print(f"Summary: {len(assignments)} assignments, {ok} OK, {fail} failed")
        # Show agent stats
        for name, stats in state.get("agent_stats", {}).items():
            if stats.get("total_assigned", 0) > 0:
                print(f"  @{name}: {stats['successful']}/{stats['total_assigned']} ({stats.get('success_rate', 0):.0%} success)")
        print(f"{'=' * 60}")

    return 0


if __name__ == "__main__":
    sys.exit(main())