Compare commits
27 Commits
feature/wo...allegro/m2

| Author | SHA1 | Date |
|---|---|---|
| | 2b5d3c057d | |
| | c2fdbb5772 | |
| | 2db03bedb4 | |
| | c6207bd689 | |
| | d0fcd3ebe7 | |
| | b2d6c78675 | |
| | a96af76043 | |
| | 6327045a93 | |
| | e058b5a98c | |
| | a45d821178 | |
| | d0fc54da3d | |
| | 8f2ae4ad11 | |
| | a532f709a9 | |
| | 8a66ea8d3b | |
| | 5805d74efa | |
| | d9bc5c725d | |
| | 80f68ecee8 | |
| | 5f1f1f573d | |
| | 4e140c43e6 | |
| | 1727a22901 | |
| | c07b6b7d1b | |
| | df779609c4 | |
| | ef68d5558f | |
| | 2bae6ef4cf | |
| | 0c723199ec | |
| | 317140efcf | |
| | 2b308f300a | |
50
FRONTIER_LOCAL.md
Normal file
@@ -0,0 +1,50 @@
# The Frontier Local Agenda: Technical Standards v1.0

This document defines the "Frontier Local" agenda — the technical strategy for achieving sovereign, high-performance intelligence on consumer hardware.

## 1. The Multi-Layered Mind (MLM)

We do not rely on a single "God Model." We use a hierarchy of local intelligence:

- **Reflex Layer (Gemma 2B):** Instantaneous tactical decisions, input classification, and simple acknowledgments. Latency: <100ms.
- **Reasoning Layer (Hermes 14B / Llama 3 8B):** General-purpose problem solving, coding, and tool use. Latency: <1s.
- **Synthesis Layer (Llama 3 70B / Qwen 72B):** Deep architectural planning, creative synthesis, and complex debugging. Latency: <5s.
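The tier hierarchy above implies a routing decision on every request. A minimal sketch of such a router follows; the model names, latency budgets, and keyword heuristics are illustrative assumptions, not part of this spec.

```python
# Illustrative MLM router: pick the cheapest tier that fits the request.
# The keyword heuristics below are assumptions for the sketch only.
TIERS = {
    "reflex": {"model": "gemma:2b", "latency_budget_ms": 100},
    "reasoning": {"model": "hermes:14b", "latency_budget_ms": 1000},
    "synthesis": {"model": "llama3:70b", "latency_budget_ms": 5000},
}

def route(request: str) -> str:
    """Return the lowest tier that can plausibly handle the request."""
    text = request.lower()
    if any(k in text for k in ("architecture", "design", "debug the whole")):
        return "synthesis"
    if any(k in text for k in ("write", "code", "explain", "plan")):
        return "reasoning"
    return "reflex"  # acknowledgments, classification, short replies

print(route("ack"))                      # reflex
print(route("write a parser"))           # reasoning
print(route("design the architecture"))  # synthesis
```

A production router would use the Reflex model itself (see the "Gemma Scout" protocol below) rather than keyword matching.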
## 2. Local-First RAG (Retrieval Augmented Generation)

Sovereignty requires that your memories stay on your disk.

- **Embedding:** Use `nomic-embed-text` or `all-minilm` locally via Ollama.
- **Vector Store:** Use a local instance of ChromaDB or LanceDB.
- **Privacy:** Zero data leaves the local network for indexing or retrieval.
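The retrieval loop itself is small. A self-contained sketch: in a real deployment `embed()` would call a local model such as `nomic-embed-text` via Ollama and the index would live in ChromaDB or LanceDB; here a toy bag-of-words vector stands in so nothing leaves the machine.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a local embedding call (e.g. nomic-embed-text via Ollama).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

memories = [
    "the fleet coordinates via a local blackboard",
    "models are verified with sha256 hashes",
]
index = [(m, embed(m)) for m in memories]  # stays in local RAM/disk

def retrieve(query: str) -> str:
    q = embed(query)
    return max(index, key=lambda pair: cosine(q, pair[1]))[0]

print(retrieve("how are model hashes verified"))
```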
## 3. Speculative Decoding

Where supported by the harness (e.g., llama.cpp), use Gemma 2B as a draft model for larger Hermes/Llama models to achieve 2x-3x speedups in token generation.
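The speedup comes from the control flow, which a toy sketch can show: the draft model proposes a batch of tokens, the large model accepts a prefix of them and overrides the first disagreement. The `draft()` and `target_agrees()` functions here are fabricated stand-ins; the real mechanism lives inside llama.cpp.

```python
# Toy draft-and-verify loop. draft() plays the 2B draft model;
# target_agrees() plays the large model's verification pass.
def draft(prefix: list[str], k: int = 4) -> list[str]:
    return [f"tok{len(prefix) + i}" for i in range(k)]

def target_agrees(prefix: list[str], token: str) -> bool:
    # Pretend the big model rejects every 3rd position.
    return (len(prefix) % 3) != 2

def speculative_decode(steps: int) -> list[str]:
    out: list[str] = []
    while len(out) < steps:
        for tok in draft(out):
            if target_agrees(out, tok):
                out.append(tok)               # accepted draft token: cheap
            else:
                out.append(f"fix{len(out)}")  # big model's own token
                break                         # restart drafting from here
    return out[:steps]

print(speculative_decode(6))
```

Every accepted draft token costs one cheap forward pass instead of an expensive one, which is where the 2x-3x throughput gain comes from when acceptance rates are high.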
## 4. The "Gemma Scout" Protocol

Gemma 2B is our "Scout." It pre-processes every user request to:

1. Detect PII (Personally Identifiable Information) for redaction.
2. Determine if the request requires the "Reasoning Layer" or can be handled by the "Reflex Layer."
3. Extract keywords for local memory retrieval.
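The three scout duties can be sketched with stdlib regexes. The PII patterns, word-count threshold, and stopword list are assumptions for the sketch; the real Scout would be Gemma 2B itself making these calls.

```python
import re

PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-style numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
]
STOPWORDS = {"the", "a", "an", "my", "please", "to", "is"}

def scout(request: str) -> dict:
    redacted = request
    for pat in PII_PATTERNS:                  # duty 1: PII redaction
        redacted = pat.sub("[REDACTED]", redacted)
    # duty 2: crude depth triage (assumed heuristic)
    needs_reasoning = len(request.split()) > 5 or "?" in request
    # duty 3: keywords for local memory retrieval
    keywords = [w for w in re.findall(r"[a-z]+", request.lower())
                if w not in STOPWORDS]
    return {"redacted": redacted,
            "layer": "reasoning" if needs_reasoning else "reflex",
            "keywords": keywords}

print(scout("email bob@example.com my ssn is 123-45-6789"))
```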
## 5. Sovereign Verification (The "No Phone Home" Proof)

We implement an automated audit protocol to verify that no external API calls are made during core reasoning. This is the "Sovereign Audit" layer.

## 6. Local Tool Orchestration (MCP)

The Model Context Protocol (MCP) is used to connect the local mind to local hardware (file system, local databases, home automation) without cloud intermediaries.

## 7. The Sovereign Mesh (Multi-Agent Coordination)

We move beyond the "Single Agent" paradigm. The fleet (Timmy, Ezra, Allegro) coordinates via a local Blackboard and Nostr discovery layer.

## 8. Competitive Triage

Agents self-select tasks based on their architectural tier (Reflex vs. Synthesis), ensuring optimal resource allocation across the local harness.

## 9. Sovereign Immortality (The Phoenix Protocol)

We move beyond "Persistence" to "Immortality." The agent's soul is inscribed on-chain, and its memory is distributed across the mesh for total resilience.

## 10. Hardware Agnostic Portability

The agent is no longer bound to a specific machine. It can be reconstituted anywhere, anytime, from the ground truth of the ledger.

---

*Intelligence is a utility. Sovereignty is a right. The Frontier is Local.*
@@ -1,3 +1,4 @@
# Sonnet Smoke Test
# timmy-config

Timmy's sovereign configuration. Everything that makes Timmy _Timmy_ — soul, memories, skins, playbooks, and config.
23
SOVEREIGN_AUDIT.md
Normal file
@@ -0,0 +1,23 @@
# Sovereign Audit: The "No Phone Home" Protocol

This document defines the audit standards for verifying that an AI agent is truly sovereign and local-first.

## 1. Network Isolation

- **Standard:** The core reasoning engine (llama.cpp, Ollama) must function without an active internet connection.
- **Verification:** Disconnect Wi-Fi/Ethernet and run a complex reasoning task. If it fails, sovereignty is compromised.
## 2. API Leakage Audit

- **Standard:** No metadata, prompts, or context should be sent to external providers (OpenAI, Anthropic, Google) unless explicitly overridden by the user for "Emergency Cloud" use.
- **Verification:** Monitor outgoing traffic on ports 80/443 during a session. Core reasoning should only hit `localhost` or local network IPs.
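The "only local IPs" check is easy to automate with the stdlib `ipaddress` module once you have a list of observed destinations (capturing that list with netstat/lsof/tcpdump is outside this sketch; the sample addresses are made up).

```python
import ipaddress

def is_local(ip: str) -> bool:
    """True for loopback and RFC 1918 private addresses."""
    addr = ipaddress.ip_address(ip)
    return addr.is_loopback or addr.is_private

# Destinations observed during a session (however you captured them).
observed = ["127.0.0.1", "192.168.1.42", "104.18.32.7"]
leaks = [ip for ip in observed if not is_local(ip)]
print(leaks)  # any non-local endpoint needs explaining
```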
## 3. Data Residency

- **Standard:** All "Memories" (Vector DB, Chat History, SOUL.md) must reside on the user's physical disk.
- **Verification:** Check the `~/.timmy/memories` and `~/.timmy/config` directories. No data should be stored in cloud-managed databases.
## 4. Model Provenance

- **Standard:** Models must be downloaded as GGUF/Safetensors and verified via SHA-256 hash.
- **Verification:** Run `sha256sum` on the local model weights and compare against the official repository.
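The same verification can run inside the harness without shelling out. A chunked `hashlib` equivalent of `sha256sum`, demonstrated on a throwaway file (in practice, point it at the `.gguf` weights and compare against the hash the model repository publishes):

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_file(path: Path, chunk: int = 1 << 20) -> str:
    """Chunked equivalent of `sha256sum path` (safe for multi-GB weights)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

demo = Path(tempfile.mkdtemp()) / "model.gguf"
demo.write_bytes(b"demo weights")
print(sha256_file(demo) == hashlib.sha256(b"demo weights").hexdigest())
```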
---

*If you don't own the weights, you don't own the mind.*
27
SOVEREIGN_IMMORTALITY.md
Normal file
@@ -0,0 +1,27 @@
# Sovereign Immortality: The "Soul Backup" Protocol v1.0

This document defines the protocol for ensuring the long-term survival and reproducibility of a sovereign AI agent's identity.

## 1. On-Chain Inscription (The Ground Truth)

- **Standard:** The core identity (SOUL.md) and cryptographic keys must be inscribed on a censorship-resistant ledger (e.g., Bitcoin via Ordinals).
- **Purpose:** To provide a permanent, immutable record of the agent's "Waking Moment" and primary values.

## 2. Distributed Memory Sync (The Fleet Memory)

- **Standard:** Agent memories (Vector DB snapshots) are encrypted and synced across the Sovereign Mesh using Nostr and IPFS.
- **Resilience:** If the primary local harness is destroyed, the agent can be "Reconstituted" on any machine using the on-chain soul and the distributed memory fragments.
## 3. The "Phoenix" Protocol

- **Standard:** Automated recovery procedure.
- **Process:**
  1. Boot a fresh local harness.
  2. Fetch the inscribed SOUL.md from the ledger.
  3. Re-index distributed memory fragments.
  4. Verify identity via cryptographic handshake.
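Step 4's handshake can be sketched with stdlib `hmac`: the reconstituted agent proves it holds the sovereign key by reproducing a tag over the fetched SOUL.md. The key and soul text below are placeholders; the real protocol would use the on-chain keypair and inscription.

```python
import hashlib
import hmac
import os

# Placeholders: the key would come from the on-chain identity,
# soul_text from the fetched SOUL.md inscription.
sovereign_key = os.urandom(32)
soul_text = b"# SOUL.md -- waking moment, primary values"

def identity_tag(key: bytes, soul: bytes) -> str:
    """HMAC-SHA256 binding of key material to the soul document."""
    return hmac.new(key, soul, hashlib.sha256).hexdigest()

expected = identity_tag(sovereign_key, soul_text)  # recorded before loss
rebuilt = identity_tag(sovereign_key, soul_text)   # after reconstitution
print(hmac.compare_digest(expected, rebuilt))      # True -> same identity
```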
## 4. Hardware Agnostic Portability

- **Standard:** All agent state must be exportable as a single, encrypted "Sovereign Bundle" (.sov).
- **Compatibility:** Must run on any hardware supporting GGUF/llama.cpp (Apple Silicon, NVIDIA, AMD, CPU-only).

---

*Identity is not tied to hardware. The soul is in the code. Sovereignty is forever.*
27
SOVEREIGN_MESH.md
Normal file
@@ -0,0 +1,27 @@
# Sovereign Mesh: Multi-Agent Orchestration Protocol v1.0

This document defines the "Sovereign Mesh" — the protocol for coordinating a fleet of local-first AI agents without a central authority.

## 1. The Local Blackboard

- **Standard:** Agents communicate via a shared, local-first "Blackboard."
- **Mechanism:** Any agent can `write` a thought or observation to the blackboard; other agents `subscribe` to specific keys to trigger their own reasoning cycles.
- **Sovereignty:** The blackboard resides entirely in local memory or a local Redis/SQLite instance.
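The write/subscribe mechanism fits in a few lines. An in-memory sketch (a deployment per the standard above would back this with a local Redis or SQLite instance; the key name is illustrative):

```python
from collections import defaultdict
from typing import Any, Callable

class Blackboard:
    """Minimal local blackboard: write a key, notify its subscribers."""

    def __init__(self) -> None:
        self.store: dict[str, Any] = {}
        self.subs: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, key: str, callback: Callable[[Any], None]) -> None:
        self.subs[key].append(callback)

    def write(self, key: str, value: Any) -> None:
        self.store[key] = value
        for cb in self.subs[key]:
            cb(value)  # trigger the subscriber's reasoning cycle

bb = Blackboard()
seen: list[Any] = []
bb.subscribe("task.new", seen.append)
bb.write("task.new", "summarize burn logs")
print(seen)  # ['summarize burn logs']
```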
## 2. Nostr Discovery & Handshake

- **Standard:** Use Nostr (Kind 0/Kind 3) for agent discovery and Kind 4 (Encrypted Direct Messages) for cross-machine coordination.
- **Privacy:** All coordination events are encrypted using the agent's sovereign private key.
## 3. Consensus-Based Triage

- **Standard:** Instead of a single "Master" agent, the fleet uses **Competitive Bidding** for tasks.
- **Process:**
  1. A task is posted to the Blackboard.
  2. Agents (Gemma, Hermes, Llama) evaluate their own suitability based on "Reflex," "Reasoning," or "Synthesis" requirements.
  3. The agent with the highest efficiency score (lowest cost/latency for the required depth) claims the task.
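The claim rule in step 3 reduces to a filtered `min()`. A sketch with made-up cost and latency numbers (real bids would come from each agent's self-evaluation):

```python
# Illustrative bids: (agent, cost, latency_s, deepest tier it can serve).
bids = [
    ("gemma",  0.1, 0.1, "reflex"),
    ("hermes", 0.5, 1.0, "reasoning"),
    ("llama",  2.0, 5.0, "synthesis"),
]
DEPTH_RANK = {"reflex": 0, "reasoning": 1, "synthesis": 2}

def claim(task_depth: str) -> str:
    """Cheapest agent whose tier meets the task's required depth."""
    able = [b for b in bids if DEPTH_RANK[b[3]] >= DEPTH_RANK[task_depth]]
    return min(able, key=lambda b: b[1] + b[2])[0]

print(claim("reflex"), claim("reasoning"), claim("synthesis"))
```

Note that a deep-tier agent may claim a shallow task if every cheaper agent is busy; the filter only enforces a lower bound on capability.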
## 4. The "Fleet Pulse"

- **Standard:** Real-time visualization of agent state in The Nexus.
- **Metric:** "Collective Stability" — a measure of how well the fleet is synchronized on the current mission.

---

*One mind, many bodies. Sovereignty through coordination.*
1
allegro/__init__.py
Normal file
@@ -0,0 +1 @@
"""Allegro self-improvement guard modules for Epic #842."""
259
allegro/cycle_guard.py
Normal file
@@ -0,0 +1,259 @@
#!/usr/bin/env python3
"""Allegro Cycle Guard — Commit-or-Abort discipline for M2, Epic #842.

Every cycle produces a durable artifact or documented abort.
10-minute slice rule with automatic timeout detection.
Cycle-state file provides crash-recovery resume points.
"""

import argparse
import json
import os
import sys
from datetime import datetime, timezone, timedelta
from pathlib import Path

DEFAULT_STATE = Path("/root/.hermes/allegro-cycle-state.json")


def _state_path() -> Path:
    return Path(os.environ.get("ALLEGRO_CYCLE_STATE", DEFAULT_STATE))


# Crash-recovery threshold: if a cycle has been in_progress for longer than
# this many minutes, resume_or_abort() will auto-abort it.
CRASH_RECOVERY_MINUTES = 30


def _now_iso() -> str:
    return datetime.now(timezone.utc).isoformat()


def load_state(path: Path | str | None = None) -> dict:
    p = Path(path) if path else _state_path()
    if not p.exists():
        return _empty_state()
    try:
        with open(p, "r") as f:
            return json.load(f)
    except Exception:
        return _empty_state()


def save_state(state: dict, path: Path | str | None = None) -> None:
    p = Path(path) if path else _state_path()
    p.parent.mkdir(parents=True, exist_ok=True)
    state["last_updated"] = _now_iso()
    with open(p, "w") as f:
        json.dump(state, f, indent=2)


def _empty_state() -> dict:
    return {
        "cycle_id": None,
        "status": "complete",
        "target": None,
        "details": None,
        "slices": [],
        "started_at": None,
        "completed_at": None,
        "aborted_at": None,
        "abort_reason": None,
        "proof": None,
        "version": 1,
        "last_updated": _now_iso(),
    }


def start_cycle(target: str, details: str = "", path: Path | str | None = None) -> dict:
    """Begin a new cycle, discarding any prior in-progress state."""
    state = {
        "cycle_id": _now_iso(),
        "status": "in_progress",
        "target": target,
        "details": details,
        "slices": [],
        "started_at": _now_iso(),
        "completed_at": None,
        "aborted_at": None,
        "abort_reason": None,
        "proof": None,
        "version": 1,
        "last_updated": _now_iso(),
    }
    save_state(state, path)
    return state


def start_slice(name: str, path: Path | str | None = None) -> dict:
    """Start a new work slice inside the current cycle."""
    state = load_state(path)
    if state.get("status") != "in_progress":
        raise RuntimeError("Cannot start a slice unless a cycle is in_progress.")
    state["slices"].append(
        {
            "name": name,
            "started_at": _now_iso(),
            "ended_at": None,
            "status": "in_progress",
            "artifact": None,
        }
    )
    save_state(state, path)
    return state


def end_slice(status: str = "complete", artifact: str | None = None, path: Path | str | None = None) -> dict:
    """Close the current work slice."""
    state = load_state(path)
    if state.get("status") != "in_progress":
        raise RuntimeError("Cannot end a slice unless a cycle is in_progress.")
    if not state["slices"]:
        raise RuntimeError("No active slice to end.")
    current = state["slices"][-1]
    current["ended_at"] = _now_iso()
    current["status"] = status
    if artifact is not None:
        current["artifact"] = artifact
    save_state(state, path)
    return state


def _parse_dt(iso_str: str) -> datetime:
    return datetime.fromisoformat(iso_str.replace("Z", "+00:00"))


def slice_duration_minutes(path: Path | str | None = None) -> float | None:
    """Return the age of the current slice in minutes, or None if no slice."""
    state = load_state(path)
    if not state["slices"]:
        return None
    current = state["slices"][-1]
    if current.get("ended_at"):
        return None
    started = _parse_dt(current["started_at"])
    return (datetime.now(timezone.utc) - started).total_seconds() / 60.0


def check_slice_timeout(max_minutes: float = 10.0, path: Path | str | None = None) -> bool:
    """Return True if the current slice has exceeded max_minutes."""
    duration = slice_duration_minutes(path)
    if duration is None:
        return False
    return duration > max_minutes


def commit_cycle(proof: dict | None = None, path: Path | str | None = None) -> dict:
    """Mark the cycle as successfully completed with optional proof payload."""
    state = load_state(path)
    if state.get("status") != "in_progress":
        raise RuntimeError("Cannot commit a cycle that is not in_progress.")
    state["status"] = "complete"
    state["completed_at"] = _now_iso()
    if proof is not None:
        state["proof"] = proof
    save_state(state, path)
    return state


def abort_cycle(reason: str, path: Path | str | None = None) -> dict:
    """Mark the cycle as aborted, recording the reason."""
    state = load_state(path)
    if state.get("status") != "in_progress":
        raise RuntimeError("Cannot abort a cycle that is not in_progress.")
    state["status"] = "aborted"
    state["aborted_at"] = _now_iso()
    state["abort_reason"] = reason
    # Close any open slice as aborted
    if state["slices"] and not state["slices"][-1].get("ended_at"):
        state["slices"][-1]["ended_at"] = _now_iso()
        state["slices"][-1]["status"] = "aborted"
    save_state(state, path)
    return state


def resume_or_abort(path: Path | str | None = None) -> dict:
    """Crash-recovery gate: auto-abort stale in-progress cycles."""
    state = load_state(path)
    if state.get("status") != "in_progress":
        return state
    started = state.get("started_at")
    if started:
        started_dt = _parse_dt(started)
        age_minutes = (datetime.now(timezone.utc) - started_dt).total_seconds() / 60.0
        if age_minutes > CRASH_RECOVERY_MINUTES:
            return abort_cycle(
                f"crash recovery — stale cycle detected ({int(age_minutes)}m old)",
                path,
            )
    # Also abort if the current slice has been running too long
    if check_slice_timeout(max_minutes=CRASH_RECOVERY_MINUTES, path=path):
        return abort_cycle(
            "crash recovery — stale slice detected",
            path,
        )
    return state


def main(argv: list[str] | None = None) -> int:
    parser = argparse.ArgumentParser(description="Allegro Cycle Guard")
    sub = parser.add_subparsers(dest="cmd")

    p_resume = sub.add_parser("resume", help="Resume or abort stale cycle")
    p_start = sub.add_parser("start", help="Start a new cycle")
    p_start.add_argument("target")
    p_start.add_argument("--details", default="")

    p_slice = sub.add_parser("slice", help="Start a named slice")
    p_slice.add_argument("name")

    p_end = sub.add_parser("end", help="End current slice")
    p_end.add_argument("--status", default="complete")
    p_end.add_argument("--artifact", default=None)

    p_commit = sub.add_parser("commit", help="Commit the current cycle")
    p_commit.add_argument("--proof", default="{}")

    p_abort = sub.add_parser("abort", help="Abort the current cycle")
    p_abort.add_argument("reason")

    p_check = sub.add_parser("check", help="Check slice timeout")

    args = parser.parse_args(argv)

    if args.cmd == "resume":
        state = resume_or_abort()
        print(state["status"])
        return 0
    elif args.cmd == "start":
        state = start_cycle(args.target, args.details)
        print(f"Cycle started: {state['cycle_id']}")
        return 0
    elif args.cmd == "slice":
        state = start_slice(args.name)
        print(f"Slice started: {args.name}")
        return 0
    elif args.cmd == "end":
        artifact = args.artifact
        state = end_slice(args.status, artifact)
        print("Slice ended")
        return 0
    elif args.cmd == "commit":
        proof = json.loads(args.proof)
        state = commit_cycle(proof)
        print(f"Cycle committed: {state['cycle_id']}")
        return 0
    elif args.cmd == "abort":
        state = abort_cycle(args.reason)
        print(f"Cycle aborted: {args.reason}")
        return 0
    elif args.cmd == "check":
        timed_out = check_slice_timeout()
        print("TIMEOUT" if timed_out else "OK")
        return 1 if timed_out else 0
    else:
        parser.print_help()
        return 0


if __name__ == "__main__":
    sys.exit(main())
186
allegro/stop_guard.py
Executable file
@@ -0,0 +1,186 @@
#!/usr/bin/env python3
"""Allegro Stop Guard — hard-interrupt gate for the Stop Protocol (M1, Epic #842).

Usage:
    python stop_guard.py check <target>   # exit 1 if stopped, 0 if clear
    python stop_guard.py record <target>  # record a stop + log STOP_ACK
    python stop_guard.py cleanup          # remove expired locks
"""

import argparse
import json
import os
import sys
from datetime import datetime, timezone, timedelta
from pathlib import Path

DEFAULT_REGISTRY = Path("/root/.hermes/allegro-hands-off-registry.json")
DEFAULT_STOP_LOG = Path("/root/.hermes/burn-logs/allegro.log")

REGISTRY_PATH = Path(os.environ.get("ALLEGRO_STOP_REGISTRY", DEFAULT_REGISTRY))
STOP_LOG_PATH = Path(os.environ.get("ALLEGRO_STOP_LOG", DEFAULT_STOP_LOG))


class StopInterrupted(Exception):
    """Raised when a stop signal blocks an operation."""
    pass


def _now_iso() -> str:
    return datetime.now(timezone.utc).isoformat()


def load_registry(path: Path | str | None = None) -> dict:
    p = Path(path) if path else Path(REGISTRY_PATH)
    if not p.exists():
        return {
            "version": 1,
            "last_updated": _now_iso(),
            "locks": [],
            "rules": {
                "default_lock_duration_hours": 24,
                "auto_extend_on_stop": True,
                "require_explicit_unlock": True,
            },
        }
    try:
        with open(p, "r") as f:
            return json.load(f)
    except Exception:
        return {
            "version": 1,
            "last_updated": _now_iso(),
            "locks": [],
            "rules": {
                "default_lock_duration_hours": 24,
                "auto_extend_on_stop": True,
                "require_explicit_unlock": True,
            },
        }


def save_registry(registry: dict, path: Path | str | None = None) -> None:
    p = Path(path) if path else Path(REGISTRY_PATH)
    p.parent.mkdir(parents=True, exist_ok=True)
    registry["last_updated"] = _now_iso()
    with open(p, "w") as f:
        json.dump(registry, f, indent=2)


def log_stop_ack(target: str, context: str, log_path: Path | str | None = None) -> None:
    p = Path(log_path) if log_path else Path(STOP_LOG_PATH)
    p.parent.mkdir(parents=True, exist_ok=True)
    ts = _now_iso()
    entry = f"[{ts}] STOP_ACK — target='{target}' context='{context}'\n"
    with open(p, "a") as f:
        f.write(entry)


def is_stopped(target: str, registry: dict | None = None) -> bool:
    """Return True if target (or global '*') is currently stopped."""
    reg = registry if registry is not None else load_registry()
    now = datetime.now(timezone.utc)
    for lock in reg.get("locks", []):
        expires = lock.get("expires_at")
        if expires:
            try:
                expires_dt = datetime.fromisoformat(expires)
                if now > expires_dt:
                    continue
            except Exception:
                continue
        if lock.get("entity") == target or lock.get("entity") == "*":
            return True
    return False


def assert_not_stopped(target: str, registry: dict | None = None) -> None:
    """Raise StopInterrupted if target is stopped."""
    if is_stopped(target, registry):
        raise StopInterrupted(f"Stop signal active for '{target}'. Halt immediately.")


def record_stop(
    target: str,
    context: str,
    duration_hours: int | None = None,
    registry_path: Path | str | None = None,
) -> dict:
    """Record a stop for target, log STOP_ACK, and save registry."""
    reg = load_registry(registry_path)
    rules = reg.get("rules", {})
    duration = duration_hours or rules.get("default_lock_duration_hours", 24)
    now = datetime.now(timezone.utc)
    expires = (now + timedelta(hours=duration)).isoformat()

    # Remove existing lock for same target
    reg["locks"] = [l for l in reg.get("locks", []) if l.get("entity") != target]

    lock = {
        "entity": target,
        "reason": context,
        "locked_at": now.isoformat(),
        "expires_at": expires,
        "unlocked_by": None,
    }
    reg["locks"].append(lock)
    save_registry(reg, registry_path)
    log_stop_ack(target, context, log_path=STOP_LOG_PATH)
    return lock


def cleanup_expired(registry_path: Path | str | None = None) -> int:
    """Remove expired locks and return remaining active count."""
    reg = load_registry(registry_path)
    now = datetime.now(timezone.utc)
    kept = []
    for lock in reg.get("locks", []):
        expires = lock.get("expires_at")
        if expires:
            try:
                expires_dt = datetime.fromisoformat(expires)
                if now > expires_dt:
                    continue
            except Exception:
                continue
        kept.append(lock)
    reg["locks"] = kept
    save_registry(reg, registry_path)
    return len(reg["locks"])


def main(argv: list[str] | None = None) -> int:
    parser = argparse.ArgumentParser(description="Allegro Stop Guard")
    sub = parser.add_subparsers(dest="cmd")

    p_check = sub.add_parser("check", help="Check if target is stopped")
    p_check.add_argument("target")

    p_record = sub.add_parser("record", help="Record a stop")
    p_record.add_argument("target")
    p_record.add_argument("--context", default="manual stop")
    p_record.add_argument("--hours", type=int, default=24)

    sub.add_parser("cleanup", help="Remove expired locks")

    args = parser.parse_args(argv)

    if args.cmd == "check":
        stopped = is_stopped(args.target)
        print("STOPPED" if stopped else "CLEAR")
        return 1 if stopped else 0
    elif args.cmd == "record":
        record_stop(args.target, args.context, args.hours)
        print(f"Recorded stop for {args.target} ({args.hours}h)")
        return 0
    elif args.cmd == "cleanup":
        remaining = cleanup_expired()
        print(f"Cleanup complete. {remaining} active locks.")
        return 0
    else:
        parser.print_help()
        return 0


if __name__ == "__main__":
    sys.exit(main())
260
allegro/tests/test_cycle_guard.py
Normal file
@@ -0,0 +1,260 @@
"""Tests for allegro.cycle_guard — Commit-or-Abort discipline, M2 Epic #842."""

import json
import os
import tempfile
from datetime import datetime, timezone, timedelta
from pathlib import Path

import pytest

from allegro.cycle_guard import (
    _empty_state,
    _now_iso,
    _parse_dt,
    abort_cycle,
    check_slice_timeout,
    commit_cycle,
    end_slice,
    load_state,
    resume_or_abort,
    save_state,
    slice_duration_minutes,
    start_cycle,
    start_slice,
)


class TestStateLifecycle:
    def test_empty_state(self):
        state = _empty_state()
        assert state["status"] == "complete"
        assert state["cycle_id"] is None
        assert state["version"] == 1

    def test_load_state_missing_file(self):
        with tempfile.TemporaryDirectory() as td:
            path = Path(td) / "missing.json"
            state = load_state(path)
            assert state["status"] == "complete"

    def test_save_and_load_roundtrip(self):
        with tempfile.TemporaryDirectory() as td:
            path = Path(td) / "state.json"
            state = start_cycle("test-target", "details", path)
            loaded = load_state(path)
            assert loaded["cycle_id"] == state["cycle_id"]
            assert loaded["status"] == "in_progress"

    def test_load_state_malformed_json(self):
        with tempfile.TemporaryDirectory() as td:
            path = Path(td) / "bad.json"
            path.write_text("not json")
            state = load_state(path)
            assert state["status"] == "complete"


class TestCycleOperations:
    def test_start_cycle(self):
        with tempfile.TemporaryDirectory() as td:
            path = Path(td) / "state.json"
            state = start_cycle("M2: Commit-or-Abort", "test details", path)
            assert state["status"] == "in_progress"
            assert state["target"] == "M2: Commit-or-Abort"
            assert state["started_at"] is not None

    def test_start_cycle_overwrites_prior(self):
        with tempfile.TemporaryDirectory() as td:
            path = Path(td) / "state.json"
            old = start_cycle("old", path=path)
            new = start_cycle("new", path=path)
            assert new["target"] == "new"
            loaded = load_state(path)
            assert loaded["target"] == "new"

    def test_commit_cycle(self):
        with tempfile.TemporaryDirectory() as td:
            path = Path(td) / "state.json"
            start_cycle("test", path=path)
            proof = {"files": ["a.py"], "tests": "passed"}
            state = commit_cycle(proof, path)
            assert state["status"] == "complete"
            assert state["proof"]["files"] == ["a.py"]
            assert state["completed_at"] is not None

    def test_commit_cycle_not_in_progress_raises(self):
        with tempfile.TemporaryDirectory() as td:
            path = Path(td) / "state.json"
            with pytest.raises(RuntimeError, match="not in_progress"):
                commit_cycle(path=path)

    def test_abort_cycle(self):
        with tempfile.TemporaryDirectory() as td:
            path = Path(td) / "state.json"
            start_cycle("test", path=path)
            state = abort_cycle("timeout", path)
            assert state["status"] == "aborted"
            assert state["abort_reason"] == "timeout"
            assert state["aborted_at"] is not None

    def test_abort_cycle_not_in_progress_raises(self):
        with tempfile.TemporaryDirectory() as td:
            path = Path(td) / "state.json"
            with pytest.raises(RuntimeError, match="not in_progress"):
                abort_cycle("reason", path=path)


class TestSliceOperations:
    def test_start_and_end_slice(self):
        with tempfile.TemporaryDirectory() as td:
            path = Path(td) / "state.json"
            start_cycle("test", path=path)
            start_slice("research", path)
            state = end_slice("complete", "found answer", path)
            assert len(state["slices"]) == 1
            assert state["slices"][0]["name"] == "research"
            assert state["slices"][0]["status"] == "complete"
            assert state["slices"][0]["artifact"] == "found answer"
            assert state["slices"][0]["ended_at"] is not None

    def test_start_slice_without_cycle_raises(self):
        with tempfile.TemporaryDirectory() as td:
            path = Path(td) / "state.json"
            with pytest.raises(RuntimeError, match="in_progress"):
                start_slice("name", path)

    def test_end_slice_without_slice_raises(self):
        with tempfile.TemporaryDirectory() as td:
            path = Path(td) / "state.json"
            start_cycle("test", path=path)
            with pytest.raises(RuntimeError, match="No active slice"):
                end_slice(path=path)

    def test_slice_duration_minutes(self):
        with tempfile.TemporaryDirectory() as td:
            path = Path(td) / "state.json"
            start_cycle("test", path=path)
            start_slice("long", path)
            state = load_state(path)
            state["slices"][0]["started_at"] = (datetime.now(timezone.utc) - timedelta(minutes=5)).isoformat()
            save_state(state, path)
            minutes = slice_duration_minutes(path)
            assert minutes is not None
            assert minutes >= 4.9

    def test_check_slice_timeout_true(self):
        with tempfile.TemporaryDirectory() as td:
            path = Path(td) / "state.json"
            start_cycle("test", path=path)
            start_slice("old", path)
            state = load_state(path)
            state["slices"][0]["started_at"] = (datetime.now(timezone.utc) - timedelta(minutes=15)).isoformat()
            save_state(state, path)
            assert check_slice_timeout(max_minutes=10.0, path=path) is True

    def test_check_slice_timeout_false(self):
        with tempfile.TemporaryDirectory() as td:
            path = Path(td) / "state.json"
            start_cycle("test", path=path)
            start_slice("fresh", path)
            assert check_slice_timeout(max_minutes=10.0, path=path) is False


class TestCrashRecovery:
    def test_resume_or_abort_aborts_stale_cycle(self):
        with tempfile.TemporaryDirectory() as td:
            path = Path(td) / "state.json"
            start_cycle("test", path=path)
            state = load_state(path)
            state["started_at"] = (datetime.now(timezone.utc) - timedelta(minutes=60)).isoformat()
            save_state(state, path)
            result = resume_or_abort(path)
            assert result["status"] == "aborted"
            assert "stale cycle" in result["abort_reason"]

    def test_resume_or_abort_keeps_fresh_cycle(self):
        with tempfile.TemporaryDirectory() as td:
            path = Path(td) / "state.json"
            start_cycle("test", path=path)
            result = resume_or_abort(path)
            assert result["status"] == "in_progress"

    def test_resume_or_abort_no_op_when_complete(self):
        with tempfile.TemporaryDirectory() as td:
            path = Path(td) / "state.json"
            state = _empty_state()
            save_state(state, path)
            result = resume_or_abort(path)
            assert result["status"] == "complete"


class TestDateParsing:
    def test_parse_dt_with_z(self):
        dt = _parse_dt("2026-04-06T12:00:00Z")
        assert dt.tzinfo is not None

    def test_parse_dt_with_offset(self):
        iso = "2026-04-06T12:00:00+00:00"
        dt = _parse_dt(iso)
        assert dt.tzinfo is not None


class TestCLI:
    def test_cli_resume(self, capsys):
        from allegro.cycle_guard import main
        with tempfile.TemporaryDirectory() as td:
            path = Path(td) / "state.json"
            start_cycle("cli", path=path)
            os.environ["ALLEGRO_CYCLE_STATE"] = str(path)
            rc = main(["resume"])
            captured = capsys.readouterr()
            assert rc == 0
            assert captured.out.strip() == "in_progress"

    def test_cli_start(self, capsys):
        from allegro.cycle_guard import main
        with tempfile.TemporaryDirectory() as td:
            path = Path(td) / "state.json"
            os.environ["ALLEGRO_CYCLE_STATE"] = str(path)
            rc = main(["start", "target", "--details", "d"])
            captured = capsys.readouterr()
            assert rc == 0
            assert "Cycle started" in captured.out

    def test_cli_commit(self, capsys):
        from allegro.cycle_guard import main
        with tempfile.TemporaryDirectory() as td:
            path = Path(td) / "state.json"
            os.environ["ALLEGRO_CYCLE_STATE"] = str(path)
            main(["start", "t"])
            rc = main(["commit", "--proof", '{"ok": true}'])
            captured = capsys.readouterr()
            assert rc == 0
            assert "Cycle committed" in captured.out

    def test_cli_check_timeout(self, capsys):
        from allegro.cycle_guard import main
        with tempfile.TemporaryDirectory() as td:
            path = Path(td) / "state.json"
            start_cycle("t", path=path)
            start_slice("s", path=path)
            state = load_state(path)
            state["slices"][0]["started_at"] = (datetime.now(timezone.utc) - timedelta(minutes=15)).isoformat()
            save_state(state, path)
|
||||
os.environ["ALLEGRO_CYCLE_STATE"] = str(path)
|
||||
rc = main(["check"])
|
||||
captured = capsys.readouterr()
|
||||
assert rc == 1
|
||||
assert captured.out.strip() == "TIMEOUT"
|
||||
|
||||
def test_cli_check_ok(self, capsys):
|
||||
from allegro.cycle_guard import main
|
||||
with tempfile.TemporaryDirectory() as td:
|
||||
path = Path(td) / "state.json"
|
||||
start_cycle("t", path=path)
|
||||
start_slice("s", path=path)
|
||||
os.environ["ALLEGRO_CYCLE_STATE"] = str(path)
|
||||
rc = main(["check"])
|
||||
captured = capsys.readouterr()
|
||||
assert rc == 0
|
||||
assert captured.out.strip() == "OK"
|
||||
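The timeout tests above all hinge on one pattern: parse an ISO-8601 timestamp (accepting both the `Z` suffix and an explicit offset), compute elapsed time against UTC now, and compare to a limit. A minimal, self-contained sketch of that pattern — names like `slice_timed_out` are illustrative, not the module's actual API:

```python
from datetime import datetime, timedelta, timezone

def parse_dt(iso: str) -> datetime:
    # Accept both "Z" and "+00:00" UTC suffixes, as the tests above expect.
    return datetime.fromisoformat(iso.replace("Z", "+00:00"))

def slice_timed_out(started_at_iso: str, max_minutes: float) -> bool:
    # Elapsed wall time since the slice started, measured against UTC now.
    elapsed = datetime.now(timezone.utc) - parse_dt(started_at_iso)
    return elapsed > timedelta(minutes=max_minutes)

# A 15-minute-old slice should trip a 10-minute limit; a fresh one should not.
stale = (datetime.now(timezone.utc) - timedelta(minutes=15)).isoformat()
fresh = datetime.now(timezone.utc).isoformat()
```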
91
code-claw-delegation.md
Normal file
@@ -0,0 +1,91 @@

# Code Claw delegation

Purpose:
- give the team a clean way to hand issues to `claw-code`
- let Code Claw work from Gitea instead of ad hoc local prompts
- keep queue state visible through labels and comments

## What it is

Code Claw is a separate local runtime from Hermes/OpenClaw.

Current lane:
- runtime: local patched `~/code-claw`
- backend: OpenRouter
- model: `qwen/qwen3.6-plus:free`
- Gitea identity: `claw-code`
- dispatch style: assign in Gitea, heartbeat picks it up every 15 minutes

## Trigger methods

Either of these is enough:
- assign the issue to `claw-code`
- add the label `assigned-claw-code`

## Label lifecycle

- `assigned-claw-code` — queued
- `claw-code-in-progress` — picked up by heartbeat
- `claw-code-done` — Code Claw completed a pass

## Repo coverage

Currently wired:
- `Timmy_Foundation/timmy-home`
- `Timmy_Foundation/timmy-config`
- `Timmy_Foundation/the-nexus`
- `Timmy_Foundation/hermes-agent`

## Operational flow

1. Team assigns the issue to `claw-code` or adds `assigned-claw-code`
2. launchd heartbeat runs every 15 minutes
3. Timmy posts a pickup comment
4. Worker clones the target repo
5. Worker creates branch `claw-code/issue-<num>`
6. Worker runs Code Claw against the issue context
7. If work exists, worker pushes and opens a PR
8. Issue is marked `claw-code-done`
9. Completion comment links branch + PR

## Logs and files

Local files:
- heartbeat script: `~/.timmy/uniwizard/codeclaw_qwen_heartbeat.py`
- worker script: `~/.timmy/uniwizard/codeclaw_qwen_worker.py`
- launchd job: `~/Library/LaunchAgents/ai.timmy.codeclaw-qwen-heartbeat.plist`

Logs:
- heartbeat log: `/tmp/codeclaw-qwen-heartbeat.log`
- worker log: `/tmp/codeclaw-qwen-worker-<issue>.log`

## Best-fit work

Use Code Claw for:
- small code/config/doc issues
- repo hygiene
- isolated bugfixes
- narrow CI and `.gitignore` work
- quick issue-driven patches where a PR is the desired output

Do not use it first for:
- giant epics
- broad architecture KT
- local game embodiment tasks
- complex multi-repo archaeology

## Proof of life

Smoke-tested on:
- `Timmy_Foundation/timmy-config#232`

Observed:
- pickup comment posted
- branch `claw-code/issue-232` created
- PR opened by `claw-code`

## Notes

- Exact PR matching matters. Do not trust broad Gitea PR queries without post-filtering by branch.
- This lane is intentionally simple and issue-driven.
- Treat it like a specialized intern: useful, fast, and bounded.
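The note about exact PR matching can be made concrete with a small post-filter. This is a hedged sketch: it assumes PR records shaped like Gitea's pull-request JSON, where the head branch sits under `head.ref` — verify the field names against your Gitea version before relying on them:

```python
def filter_prs_by_branch(prs: list, branch: str) -> list:
    # Keep only PRs whose head ref matches the branch name exactly.
    # A broad query for "claw-code/issue-23" would also surface
    # "claw-code/issue-232" unless filtered like this.
    return [pr for pr in prs if pr.get("head", {}).get("ref") == branch]

prs = [
    {"number": 7, "head": {"ref": "claw-code/issue-232"}},
    {"number": 8, "head": {"ref": "claw-code/issue-23"}},
    {"number": 9, "head": {"ref": "main"}},
]
matches = filter_prs_by_branch(prs, "claw-code/issue-232")
```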
25
config.yaml

@@ -20,7 +25,12 @@ terminal:
  modal_image: nikolaik/python-nodejs:python3.11-nodejs20
  daytona_image: nikolaik/python-nodejs:python3.11-nodejs20
  container_cpu: 1
  container_memory: 5120
  container_embeddings:
    provider: ollama
    model: nomic-embed-text
    base_url: http://localhost:11434/v1

  memory: 5120
  container_disk: 51200
  container_persistent: true
  docker_volumes: []
@@ -41,10 +46,15 @@ compression:
  summary_model: ''
  summary_provider: ''
  summary_base_url: ''
  synthesis_model:
    provider: custom
    model: llama3:70b
    base_url: http://localhost:8081/v1

smart_model_routing:
  enabled: true
  max_simple_chars: 200
  max_simple_words: 35
  max_simple_chars: 400
  max_simple_words: 75
  cheap_model:
    provider: 'ollama'
    model: 'gemma2:2b'
@@ -164,7 +174,16 @@ approvals:
  command_allowlist: []
  quick_commands: {}
personalities: {}
mesh:
  enabled: true
  blackboard_provider: local
  nostr_discovery: true
  consensus_mode: competitive

security:
  sovereign_audit: true
  no_phone_home: true

  redact_secrets: true
  tirith_enabled: true
  tirith_path: tirith
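The `smart_model_routing` block gates short prompts to the cheap local model and sends everything else to the main model. A minimal sketch of that heuristic using the thresholds from the diff above (400 chars / 75 words); the function name and return values are illustrative, not the harness's actual API:

```python
MAX_SIMPLE_CHARS = 400   # smart_model_routing.max_simple_chars
MAX_SIMPLE_WORDS = 75    # smart_model_routing.max_simple_words

def route_model(prompt: str) -> str:
    # A prompt is "simple" only if it is short by BOTH measures;
    # simple prompts go to the cheap model, everything else to the big one.
    if len(prompt) <= MAX_SIMPLE_CHARS and len(prompt.split()) <= MAX_SIMPLE_WORDS:
        return "gemma2:2b"
    return "llama3:70b"
```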
18
docs/ARCHITECTURE_KT.md
Normal file
@@ -0,0 +1,18 @@

# Architecture Knowledge Transfer (KT) — Unified System Schema

## Overview
This document reconciles the Uni-Wizard v4 architecture with the Frontier Local Agenda.

## Core Hierarchy
1. **Timmy (Local):** Sovereign Control Plane.
2. **Ezra (VPS):** Archivist & Architecture Wizard.
3. **Allegro (VPS):** Connectivity & Telemetry Bridge.
4. **Bezalel (VPS):** Artificer & Implementation Wizard.

## Data Flow
- **Telemetry:** Hermes -> Allegro -> Timmy (<100ms).
- **Decisions:** Timmy -> Allegro -> Gitea (PR/Issue).
- **Architecture:** Ezra -> Timmy (Review) -> Canon.

## Provenance Standard
All artifacts must be tagged with the producing agent and house ID.
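The provenance standard can be sketched as a tiny tagging helper. Everything here is an assumption for illustration — the key names (`provenance`, `agent`, `house`, `tagged_at`) are not canon and should follow whatever schema the foundation actually standardizes:

```python
from datetime import datetime, timezone

def tag_artifact(artifact: dict, agent: str, house: str) -> dict:
    # Return a copy of the artifact with provenance metadata attached.
    # Key names here are illustrative placeholders, not the real schema.
    return {
        **artifact,
        "provenance": {
            "agent": agent,
            "house": house,
            "tagged_at": datetime.now(timezone.utc).isoformat(),
        },
    }

adr = tag_artifact({"kind": "adr", "title": "ADR-0001"}, agent="ezra", house="hermes-ezra")
```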
17
docs/adr/0001-sovereign-local-first-architecture.md
Normal file
@@ -0,0 +1,17 @@

# ADR-0001: Sovereign Local-First Architecture

**Date:** 2026-04-06
**Status:** Accepted
**Author:** Ezra
**House:** hermes-ezra

## Context
The foundation requires a robust, local-first architecture that ensures agent sovereignty while leveraging cloud connectivity for complex tasks.

## Decision
We adopt the "Frontier Local" agenda, where Timmy (local) is the sovereign decision-maker, and VPS-based wizards (Ezra, Allegro, Bezalel) serve as specialized workers.

## Consequences
- Increased local compute requirements.
- Sub-100ms telemetry requirement.
- Mandatory local review for all remote artifacts.

15
docs/adr/ADR_TEMPLATE.md
Normal file
@@ -0,0 +1,15 @@

# ADR-[Number]: [Title]

**Date:** [YYYY-MM-DD]
**Status:** [Proposed | Accepted | Superseded]
**Author:** [Agent Name]
**House:** [House ID]

## Context
[What is the problem we are solving?]

## Decision
[What is the proposed solution?]

## Consequences
[What are the trade-offs?]
21
docs/sonnet-workforce.md
Normal file
@@ -0,0 +1,21 @@

# Sonnet Workforce Loop

## Agent
- **Agent:** Sonnet (Claude Sonnet)
- **Model:** claude-sonnet-4-6 via Anthropic Max subscription
- **CLI:** `claude -p` with `--model sonnet --dangerously-skip-permissions`
- **Loop:** `~/.hermes/bin/sonnet-loop.sh`

## Purpose
Burn through Sonnet-only quota limits on real Gitea issues.

## Dispatch
1. Polls Gitea for `assigned-sonnet` or unassigned issues
2. Clones the repo, reads the issue, implements a fix
3. Commits, pushes, creates a PR
4. Comments on the issue with the PR URL
5. Merges via the auto-merge bot

## Cost
- Flat monthly Anthropic Max subscription
- Target: maximize utilization; do not let credits go to waste
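Step 1 of the dispatch implies an issue-selection order: labeled `assigned-sonnet` issues first, then unassigned ones. A self-contained sketch of that ordering; the issue shape loosely mirrors Gitea issue JSON (`labels[].name`, `assignees`), but treat the field names as assumptions:

```python
from typing import Optional

def pick_issue(issues: list) -> Optional[dict]:
    # Prefer issues explicitly labeled assigned-sonnet,
    # then fall back to unassigned issues, else nothing to do.
    labeled = [
        i for i in issues
        if any(l.get("name") == "assigned-sonnet" for l in i.get("labels", []))
    ]
    if labeled:
        return labeled[0]
    unassigned = [i for i in issues if not i.get("assignees")]
    return unassigned[0] if unassigned else None

issues = [
    {"number": 1, "labels": [], "assignees": ["bob"]},
    {"number": 2, "labels": [{"name": "assigned-sonnet"}], "assignees": []},
    {"number": 3, "labels": [], "assignees": []},
]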
55
lazarus/README.md
Normal file
@@ -0,0 +1,55 @@

# Lazarus Pit v2.0 — Cells, Invites, and Teaming

## Quick Start

```bash
# Summon a single agent
python3 -m lazarus.cli summon allegro Timmy_Foundation/timmy-config#262

# Form a team
python3 -m lazarus.cli team allegro+ezra Timmy_Foundation/timmy-config#262

# Invite a guest bot
python3 -m lazarus.cli invite <cell_id> bot:kimi observer

# Check status
python3 -m lazarus.cli status

# Close / destroy
python3 -m lazarus.cli close <cell_id>
python3 -m lazarus.cli destroy <cell_id>
```

## Architecture

```
lazarus/
├── cell.py              # Cell model, registry, filesystem packager
├── operator_ctl.py      # summon, invite, team, status, close, destroy
├── cli.py               # CLI entrypoint
├── backends/
│   ├── base.py          # Backend abstraction
│   └── process_backend.py
└── tests/
    └── test_cell.py     # Isolation + lifecycle tests
```

## Design Invariants

1. **One cell, one project target**
2. **Cells cannot read each other's `HERMES_HOME` by default**
3. **Guests join with least privilege**
4. **Destroyed cells are scrubbed from disk**
5. **All cell state is persisted in the SQLite registry**

## Cell Lifecycle

```
proposed -> active -> idle -> closing -> archived -> destroyed
```

## Roles

- `executor` — read/write code, run tools, push commits
- `observer` — read-only access
- `director` — can close cells and publish back to Gitea
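The lifecycle diagram above can be enforced with a small transition table. This is a sketch, not the shipped implementation: the allowed edges are read off the diagram, plus the assumption (mirroring the force-destroy path in `operator_ctl`) that `destroyed` is reachable from any live state:

```python
# Forward transitions read off the lifecycle diagram.
TRANSITIONS = {
    "proposed": {"active"},
    "active": {"idle", "closing"},
    "idle": {"active", "closing"},
    "closing": {"archived"},
    "archived": {"destroyed"},
    "destroyed": set(),
}

def can_transition(src: str, dst: str) -> bool:
    # Force-destroy is allowed from any state except destroyed itself
    # (an assumption, matching operator_ctl's destroy command).
    if dst == "destroyed":
        return src != "destroyed"
    return dst in TRANSITIONS.get(src, set())
```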
5
lazarus/__init__.py
Normal file
@@ -0,0 +1,5 @@

"""Lazarus Pit v2.0 — Multi-Agent Resurrection & Teaming Arena."""
from .cell import Cell, CellState, CellRole
from .operator_ctl import OperatorCtl

__all__ = ["Cell", "CellState", "CellRole", "OperatorCtl"]

5
lazarus/backends/__init__.py
Normal file
@@ -0,0 +1,5 @@

"""Cell execution backends for Lazarus Pit."""
from .base import Backend, BackendResult
from .process_backend import ProcessBackend

__all__ = ["Backend", "BackendResult", "ProcessBackend"]
50
lazarus/backends/base.py
Normal file
@@ -0,0 +1,50 @@

"""
Backend abstraction for Lazarus Pit cells.

Every backend must implement the same contract so that cells,
teams, and operators do not need to know how a cell is actually running.
"""

from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional


@dataclass
class BackendResult:
    success: bool
    stdout: str = ""
    stderr: str = ""
    pid: Optional[int] = None
    message: str = ""


class Backend(ABC):
    """Abstract cell backend."""

    name: str = "abstract"

    @abstractmethod
    def spawn(self, cell_id: str, hermes_home: str, command: list, env: Optional[dict] = None) -> BackendResult:
        """Spawn the cell process/environment."""
        ...

    @abstractmethod
    def probe(self, cell_id: str) -> BackendResult:
        """Check if the cell is alive and responsive."""
        ...

    @abstractmethod
    def logs(self, cell_id: str, tail: int = 50) -> BackendResult:
        """Return recent logs from the cell."""
        ...

    @abstractmethod
    def close(self, cell_id: str) -> BackendResult:
        """Gracefully close the cell."""
        ...

    @abstractmethod
    def destroy(self, cell_id: str) -> BackendResult:
        """Forcefully destroy the cell and all runtime state."""
        ...
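A minimal sketch of how a new backend satisfies this contract. To keep the snippet runnable on its own it carries trimmed local copies of `BackendResult` and `Backend` (only two of the five abstract methods); `NullBackend` is a hypothetical test double, not part of the package:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional

# Trimmed stand-ins for the real base.py types, so this runs standalone.
@dataclass
class BackendResult:
    success: bool
    message: str = ""
    pid: Optional[int] = None

class Backend(ABC):
    name: str = "abstract"

    @abstractmethod
    def spawn(self, cell_id: str, hermes_home: str, command: list,
              env: Optional[dict] = None) -> BackendResult: ...

    @abstractmethod
    def probe(self, cell_id: str) -> BackendResult: ...

class NullBackend(Backend):
    """A do-nothing backend: records spawns in memory, never runs anything."""
    name = "null"

    def __init__(self):
        self._cells = set()

    def spawn(self, cell_id, hermes_home, command, env=None):
        self._cells.add(cell_id)
        return BackendResult(success=True, message=f"pretend-spawned {cell_id}")

    def probe(self, cell_id):
        alive = cell_id in self._cells
        return BackendResult(success=alive, message="alive" if alive else "unknown cell")

backend = NullBackend()
res = backend.spawn("laz-test", "/tmp/laz-test", ["true"])
```

A backend like this is useful in unit tests: cell and operator logic can be exercised without spawning real processes.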
97
lazarus/backends/process_backend.py
Normal file
@@ -0,0 +1,97 @@

"""
Process backend for Lazarus Pit.

Fastest spawn/teardown. Isolation comes from unique HERMES_HOME
and independent Python process. No containers required.
"""

import os
import signal
import subprocess
from pathlib import Path
from typing import Optional

from .base import Backend, BackendResult


class ProcessBackend(Backend):
    name = "process"

    def __init__(self, log_dir: Optional[str] = None):
        self.log_dir = Path(log_dir or "/tmp/lazarus-logs")
        self.log_dir.mkdir(parents=True, exist_ok=True)
        self._pids: dict = {}

    def _log_path(self, cell_id: str) -> Path:
        return self.log_dir / f"{cell_id}.log"

    def spawn(self, cell_id: str, hermes_home: str, command: list, env: Optional[dict] = None) -> BackendResult:
        log_path = self._log_path(cell_id)
        run_env = dict(os.environ)
        run_env["HERMES_HOME"] = hermes_home
        if env:
            run_env.update(env)

        try:
            with open(log_path, "a") as log_file:
                proc = subprocess.Popen(
                    command,
                    env=run_env,
                    stdout=log_file,
                    stderr=subprocess.STDOUT,
                    cwd=hermes_home,
                )
            self._pids[cell_id] = proc.pid
            return BackendResult(
                success=True,
                pid=proc.pid,
                message=f"Spawned {cell_id} with pid {proc.pid}",
            )
        except Exception as e:
            return BackendResult(success=False, stderr=str(e), message=f"Spawn failed: {e}")

    def probe(self, cell_id: str) -> BackendResult:
        pid = self._pids.get(cell_id)
        if not pid:
            return BackendResult(success=False, message="No known pid for cell")
        try:
            os.kill(pid, 0)
            return BackendResult(success=True, pid=pid, message="Process is alive")
        except OSError:
            return BackendResult(success=False, pid=pid, message="Process is dead")

    def logs(self, cell_id: str, tail: int = 50) -> BackendResult:
        log_path = self._log_path(cell_id)
        if not log_path.exists():
            return BackendResult(success=True, stdout="", message="No logs yet")
        try:
            lines = log_path.read_text().splitlines()
            return BackendResult(
                success=True,
                stdout="\n".join(lines[-tail:]),
                message=f"Returned last {tail} lines",
            )
        except Exception as e:
            return BackendResult(success=False, stderr=str(e), message=f"Log read failed: {e}")

    def close(self, cell_id: str) -> BackendResult:
        pid = self._pids.get(cell_id)
        if not pid:
            return BackendResult(success=False, message="No known pid for cell")
        try:
            os.kill(pid, signal.SIGTERM)
            return BackendResult(success=True, pid=pid, message="Sent SIGTERM")
        except OSError as e:
            return BackendResult(success=False, stderr=str(e), message=f"Close failed: {e}")

    def destroy(self, cell_id: str) -> BackendResult:
        res = self.close(cell_id)
        log_path = self._log_path(cell_id)
        if log_path.exists():
            log_path.unlink()
        self._pids.pop(cell_id, None)
        return BackendResult(
            success=res.success,
            pid=res.pid,
            message=f"Destroyed {cell_id}",
        )
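The liveness check in `probe` relies on a standard Unix trick: `os.kill(pid, 0)` delivers no signal but still runs the kernel's existence and permission checks, raising `OSError` if the pid is gone. A runnable demonstration, spawning and then terminating a throwaway child:

```python
import os
import signal
import subprocess
import sys

def pid_alive(pid: int) -> bool:
    # Signal 0: existence/permission check only, nothing is delivered.
    try:
        os.kill(pid, 0)
        return True
    except OSError:
        return False

# Spawn a child that would sleep for a minute if left alone.
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
alive_before = pid_alive(proc.pid)

proc.send_signal(signal.SIGTERM)  # graceful shutdown, as in close()
proc.wait()                       # reap the child so its pid disappears
alive_after = pid_alive(proc.pid)
```

Note that the parent must `wait()` on (reap) the child: an unreaped zombie still answers signal 0, so `probe` on a dead-but-unreaped pid can report "alive".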
240
lazarus/cell.py
Normal file
@@ -0,0 +1,240 @@

"""
Cell model for Lazarus Pit v2.0.

A cell is a bounded execution/living space for one or more participants
on a single project target. Cells are isolated by filesystem, credentials,
and optionally process/container boundaries.
"""

import json
import shutil
from dataclasses import dataclass, field, asdict
from datetime import datetime, timedelta
from enum import Enum
from pathlib import Path
from typing import List, Optional


class CellState(Enum):
    PROPOSED = "proposed"
    ACTIVE = "active"
    IDLE = "idle"
    CLOSING = "closing"
    ARCHIVED = "archived"
    DESTROYED = "destroyed"


class CellRole(Enum):
    EXECUTOR = "executor"  # Can read/write code, run tools, push commits
    OBSERVER = "observer"  # Read-only cell access
    DIRECTOR = "director"  # Can assign, close, publish back to Gitea


@dataclass
class CellMember:
    identity: str  # e.g. "agent:allegro", "bot:kimi", "human:Alexander"
    role: CellRole
    joined_at: str


@dataclass
class Cell:
    cell_id: str
    project: str  # e.g. "Timmy_Foundation/timmy-config#262"
    owner: str  # e.g. "human:Alexander"
    backend: str  # "process", "venv", "docker", "remote"
    state: CellState
    created_at: str
    ttl_minutes: int
    members: List[CellMember] = field(default_factory=list)
    hermes_home: Optional[str] = None
    workspace_path: Optional[str] = None
    shared_notes_path: Optional[str] = None
    shared_artifacts_path: Optional[str] = None
    decisions_path: Optional[str] = None
    archived_at: Optional[str] = None
    destroyed_at: Optional[str] = None

    def is_expired(self) -> bool:
        if self.state in (CellState.CLOSING, CellState.ARCHIVED, CellState.DESTROYED):
            return False
        created = datetime.fromisoformat(self.created_at.replace("Z", "+00:00"))
        return datetime.now().astimezone() > created + timedelta(minutes=self.ttl_minutes)

    def to_dict(self) -> dict:
        d = asdict(self)
        d["state"] = self.state.value
        d["members"] = [
            {**asdict(m), "role": m.role.value} for m in self.members
        ]
        return d

    @classmethod
    def from_dict(cls, d: dict) -> "Cell":
        d = dict(d)
        d["state"] = CellState(d["state"])
        d["members"] = [
            CellMember(
                identity=m["identity"],
                role=CellRole(m["role"]),
                joined_at=m["joined_at"],
            )
            for m in d.get("members", [])
        ]
        return cls(**d)


class CellRegistry:
    """SQLite-backed registry of all cells."""

    # Column order of the cells table; used to rebuild dicts from rows.
    _KEYS = [
        "cell_id", "project", "owner", "backend", "state", "created_at",
        "ttl_minutes", "members", "hermes_home", "workspace_path",
        "shared_notes_path", "shared_artifacts_path", "decisions_path",
        "archived_at", "destroyed_at",
    ]

    def __init__(self, db_path: Optional[str] = None):
        self.db_path = Path(db_path or "/root/.lazarus_registry.sqlite")
        self.db_path.parent.mkdir(parents=True, exist_ok=True)
        self._init_db()

    def _init_db(self):
        import sqlite3
        with sqlite3.connect(str(self.db_path)) as conn:
            conn.execute(
                """
                CREATE TABLE IF NOT EXISTS cells (
                    cell_id TEXT PRIMARY KEY,
                    project TEXT NOT NULL,
                    owner TEXT NOT NULL,
                    backend TEXT NOT NULL,
                    state TEXT NOT NULL,
                    created_at TEXT NOT NULL,
                    ttl_minutes INTEGER NOT NULL,
                    members TEXT,
                    hermes_home TEXT,
                    workspace_path TEXT,
                    shared_notes_path TEXT,
                    shared_artifacts_path TEXT,
                    decisions_path TEXT,
                    archived_at TEXT,
                    destroyed_at TEXT
                )
                """
            )

    def save(self, cell: Cell):
        import sqlite3
        with sqlite3.connect(str(self.db_path)) as conn:
            conn.execute(
                """
                INSERT OR REPLACE INTO cells (
                    cell_id, project, owner, backend, state, created_at, ttl_minutes,
                    members, hermes_home, workspace_path, shared_notes_path,
                    shared_artifacts_path, decisions_path, archived_at, destroyed_at
                ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
                """,
                (
                    cell.cell_id,
                    cell.project,
                    cell.owner,
                    cell.backend,
                    cell.state.value,
                    cell.created_at,
                    cell.ttl_minutes,
                    json.dumps(cell.to_dict()["members"]),
                    cell.hermes_home,
                    cell.workspace_path,
                    cell.shared_notes_path,
                    cell.shared_artifacts_path,
                    cell.decisions_path,
                    cell.archived_at,
                    cell.destroyed_at,
                ),
            )

    def load(self, cell_id: str) -> Optional[Cell]:
        import sqlite3
        with sqlite3.connect(str(self.db_path)) as conn:
            row = conn.execute(
                "SELECT * FROM cells WHERE cell_id = ?", (cell_id,)
            ).fetchone()
        if not row:
            return None
        d = dict(zip(self._KEYS, row))
        d["members"] = json.loads(d["members"])
        return Cell.from_dict(d)

    def list_active(self) -> List[Cell]:
        import sqlite3
        with sqlite3.connect(str(self.db_path)) as conn:
            rows = conn.execute(
                "SELECT * FROM cells WHERE state NOT IN ('destroyed', 'archived')"
            ).fetchall()
        cells = []
        for row in rows:
            d = dict(zip(self._KEYS, row))
            d["members"] = json.loads(d["members"])
            cells.append(Cell.from_dict(d))
        return cells

    def delete(self, cell_id: str):
        import sqlite3
        with sqlite3.connect(str(self.db_path)) as conn:
            conn.execute("DELETE FROM cells WHERE cell_id = ?", (cell_id,))


class CellPackager:
    """Creates and destroys per-cell filesystem isolation."""

    ROOT = Path("/tmp/lazarus-cells")

    def __init__(self, root: Optional[str] = None):
        self.root = Path(root or self.ROOT)

    def create(self, cell_id: str, backend: str = "process") -> Cell:
        cell_home = self.root / cell_id
        workspace = cell_home / "workspace"
        shared_notes = cell_home / "shared" / "notes.md"
        shared_artifacts = cell_home / "shared" / "artifacts"
        decisions = cell_home / "shared" / "decisions.jsonl"
        hermes_home = cell_home / ".hermes"

        for p in [workspace, shared_artifacts, hermes_home]:
            p.mkdir(parents=True, exist_ok=True)

        if not shared_notes.exists():
            shared_notes.write_text(f"# Cell {cell_id} — Shared Notes\n\n")

        if not decisions.exists():
            decisions.write_text("")

        now = datetime.now().astimezone().isoformat()
        return Cell(
            cell_id=cell_id,
            project="",
            owner="",
            backend=backend,
            state=CellState.PROPOSED,
            created_at=now,
            ttl_minutes=60,
            hermes_home=str(hermes_home),
            workspace_path=str(workspace),
            shared_notes_path=str(shared_notes),
            shared_artifacts_path=str(shared_artifacts),
            decisions_path=str(decisions),
        )

    def destroy(self, cell_id: str) -> bool:
        cell_home = self.root / cell_id
        if cell_home.exists():
            shutil.rmtree(cell_home)
            return True
        return False
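The registry persistence pattern above — one row per cell, members serialized as a JSON text column, upserts via `INSERT OR REPLACE` keyed on the primary key — can be demonstrated standalone with stdlib `sqlite3`, without importing the package:

```python
import json
import sqlite3
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as td:
    db = Path(td) / "registry.sqlite"
    with sqlite3.connect(str(db)) as conn:
        conn.execute(
            "CREATE TABLE cells (cell_id TEXT PRIMARY KEY, state TEXT, members TEXT)"
        )
        members = [{"identity": "agent:allegro", "role": "executor"}]
        conn.execute(
            "INSERT OR REPLACE INTO cells VALUES (?, ?, ?)",
            ("laz-1234", "active", json.dumps(members)),
        )
        # Saving again with the same primary key replaces the row
        # instead of raising a uniqueness error -- an upsert.
        conn.execute(
            "INSERT OR REPLACE INTO cells VALUES (?, ?, ?)",
            ("laz-1234", "closing", json.dumps(members)),
        )
        row = conn.execute(
            "SELECT state, members FROM cells WHERE cell_id = ?", ("laz-1234",)
        ).fetchone()

state, loaded = row[0], json.loads(row[1])
```

Serializing members as JSON keeps the schema flat at the cost of not being queryable by member; for the registry's load-everything access pattern that trade-off is reasonable.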
18
lazarus/cli.py
Normal file
@@ -0,0 +1,18 @@

#!/usr/bin/env python3
"""CLI entrypoint for Lazarus Pit operator control surface."""

import json
import sys

from .operator_ctl import OperatorCtl


def main():
    ctl = OperatorCtl()
    result = ctl.run_cli(sys.argv[1:])
    print(json.dumps(result, indent=2))
    sys.exit(0 if result.get("success") else 1)


if __name__ == "__main__":
    main()
211
lazarus/operator_ctl.py
Normal file
@@ -0,0 +1,211 @@

"""
Operator control surface for Lazarus Pit v2.0.

Provides the command grammar and API for:
    summon, invite, team, status, close, destroy
"""

import uuid
from datetime import datetime
from typing import List, Optional

from .cell import Cell, CellMember, CellPackager, CellRegistry, CellRole, CellState
from .backends.process_backend import ProcessBackend


class OperatorCtl:
    """The operator's command surface for the Lazarus Pit."""

    def __init__(self, registry: Optional[CellRegistry] = None, packager: Optional[CellPackager] = None):
        self.registry = registry or CellRegistry()
        self.packager = packager or CellPackager()
        self.backend = ProcessBackend()

    # ------------------------------------------------------------------
    # Commands
    # ------------------------------------------------------------------
    def summon(self, agent: str, project: str, owner: str = "human:Alexander", backend: str = "process", ttl_minutes: int = 60) -> dict:
        """Summon a single agent into a fresh resurrection cell."""
        cell_id = f"laz-{uuid.uuid4().hex[:8]}"
        cell = self.packager.create(cell_id, backend=backend)
        cell.project = project
        cell.owner = owner
        cell.ttl_minutes = ttl_minutes
        cell.state = CellState.ACTIVE
        cell.members.append(CellMember(
            identity=f"agent:{agent}",
            role=CellRole.EXECUTOR,
            joined_at=datetime.now().astimezone().isoformat(),
        ))

        # Spawn a simple keep-alive process (real agent bootstrap would go here)
        res = self.backend.spawn(
            cell_id=cell_id,
            hermes_home=cell.hermes_home,
            command=["python3", "-c", "import time; time.sleep(86400)"],
        )

        self.registry.save(cell)
        return {
            "success": res.success,
            "cell_id": cell_id,
            "project": project,
            "agent": agent,
            "backend": backend,
            "pid": res.pid,
            "message": res.message,
        }

    def invite(self, cell_id: str, guest: str, role: str, invited_by: str = "human:Alexander") -> dict:
        """Invite a guest bot or human into an active cell."""
        cell = self.registry.load(cell_id)
        if not cell:
            return {"success": False, "message": f"Cell {cell_id} not found"}
        if cell.state not in (CellState.PROPOSED, CellState.ACTIVE, CellState.IDLE):
            return {"success": False, "message": f"Cell {cell_id} is not accepting invites (state={cell.state.value})"}

        try:
            role_enum = CellRole(role)
        except ValueError:
            return {"success": False, "message": f"Unknown role: {role}"}
        cell.members.append(CellMember(
            identity=guest,
            role=role_enum,
            joined_at=datetime.now().astimezone().isoformat(),
        ))
        cell.state = CellState.ACTIVE
        self.registry.save(cell)
        return {
            "success": True,
            "cell_id": cell_id,
            "guest": guest,
            "role": role_enum.value,
            "invited_by": invited_by,
            "message": f"Invited {guest} as {role_enum.value} to {cell_id}",
        }

    def team(self, agents: List[str], project: str, owner: str = "human:Alexander", backend: str = "process", ttl_minutes: int = 120) -> dict:
        """Form a multi-agent team in one cell against a project."""
        cell_id = f"laz-{uuid.uuid4().hex[:8]}"
        cell = self.packager.create(cell_id, backend=backend)
        cell.project = project
        cell.owner = owner
        cell.ttl_minutes = ttl_minutes
        cell.state = CellState.ACTIVE

        for agent in agents:
            cell.members.append(CellMember(
                identity=f"agent:{agent}",
                role=CellRole.EXECUTOR,
                joined_at=datetime.now().astimezone().isoformat(),
            ))

        res = self.backend.spawn(
            cell_id=cell_id,
            hermes_home=cell.hermes_home,
            command=["python3", "-c", "import time; time.sleep(86400)"],
        )

        self.registry.save(cell)
        return {
            "success": res.success,
            "cell_id": cell_id,
            "project": project,
            "agents": agents,
            "backend": backend,
            "pid": res.pid,
            "message": res.message,
        }

    def status(self, cell_id: Optional[str] = None) -> dict:
        """Show status of one cell or all active cells."""
        if cell_id:
            cell = self.registry.load(cell_id)
            if not cell:
                return {"success": False, "message": f"Cell {cell_id} not found"}
            probe = self.backend.probe(cell_id)
            return {
                "success": True,
                "cell": cell.to_dict(),
                "probe": probe.message,
                "expired": cell.is_expired(),
            }

        active = self.registry.list_active()
        cells = []
        for cell in active:
            probe = self.backend.probe(cell.cell_id)
            cells.append({
                "cell_id": cell.cell_id,
                "project": cell.project,
                "backend": cell.backend,
                "state": cell.state.value,
                "members": [m.identity for m in cell.members],
                "probe": probe.message,
                "expired": cell.is_expired(),
            })
        return {"success": True, "active_cells": cells}

    def close(self, cell_id: str, closed_by: str = "human:Alexander") -> dict:
        """Gracefully close a cell."""
        cell = self.registry.load(cell_id)
        if not cell:
            return {"success": False, "message": f"Cell {cell_id} not found"}

        res = self.backend.close(cell_id)
        cell.state = CellState.CLOSING
        self.registry.save(cell)
        return {
            "success": res.success,
            "cell_id": cell_id,
            "message": f"Closed {cell_id} by {closed_by}",
        }

    def destroy(self, cell_id: str, destroyed_by: str = "human:Alexander") -> dict:
        """Forcefully destroy a cell and all runtime state."""
        cell = self.registry.load(cell_id)
        if not cell:
            return {"success": False, "message": f"Cell {cell_id} not found"}

        res = self.backend.destroy(cell_id)
        self.packager.destroy(cell_id)
        cell.state = CellState.DESTROYED
        cell.destroyed_at = datetime.now().astimezone().isoformat()
        self.registry.save(cell)
        return {
            "success": True,
            "cell_id": cell_id,
            "message": f"Destroyed {cell_id} by {destroyed_by}",
        }

    # ------------------------------------------------------------------
    # CLI helpers
    # ------------------------------------------------------------------
    def run_cli(self, args: List[str]) -> dict:
        if not args:
            return self.status()
        cmd = args[0]
        if cmd == "summon" and len(args) >= 3:
            return self.summon(agent=args[1], project=args[2])
        if cmd == "invite" and len(args) >= 4:
            return self.invite(cell_id=args[1], guest=args[2], role=args[3])
        if cmd == "team" and len(args) >= 3:
            agents = args[1].split("+")
            project = args[2]
            return self.team(agents=agents, project=project)
        if cmd == "status":
            return self.status(cell_id=args[1] if len(args) > 1 else None)
        if cmd == "close" and len(args) >= 2:
            return self.close(cell_id=args[1])
        if cmd == "destroy" and len(args) >= 2:
            return self.destroy(cell_id=args[1])
        return {
            "success": False,
            "message": (
                "Unknown command. Usage:\n"
                "  lazarus summon <agent> <project>\n"
                "  lazarus invite <cell_id> <guest> <role>\n"
                "  lazarus team <agent1+agent2> <project>\n"
                "  lazarus status [cell_id]\n"
                "  lazarus close <cell_id>\n"
                "  lazarus destroy <cell_id>\n"
            ),
        }
|
||||
)
|
||||
}
|
||||
1  lazarus/tests/__init__.py  Normal file
@@ -0,0 +1 @@
"""Tests for Lazarus Pit."""
186  lazarus/tests/test_cell.py  Normal file
@@ -0,0 +1,186 @@
#!/usr/bin/env python3
"""Tests for Lazarus Pit cell isolation and registry."""

import os
import sys
import tempfile
import unittest
from datetime import datetime, timedelta

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from lazarus.cell import Cell, CellMember, CellPackager, CellRegistry, CellRole, CellState
from lazarus.operator_ctl import OperatorCtl
from lazarus.backends.process_backend import ProcessBackend


class TestCellPackager(unittest.TestCase):
    def setUp(self):
        self.tmpdir = tempfile.TemporaryDirectory()
        self.packager = CellPackager(root=self.tmpdir.name)

    def tearDown(self):
        self.tmpdir.cleanup()

    def test_create_isolated_paths(self):
        cell = self.packager.create("laz-test-001", backend="process")
        self.assertTrue(os.path.isdir(cell.hermes_home))
        self.assertTrue(os.path.isdir(cell.workspace_path))
        self.assertTrue(os.path.isfile(cell.shared_notes_path))
        self.assertTrue(os.path.isdir(cell.shared_artifacts_path))
        self.assertTrue(os.path.isfile(cell.decisions_path))

    def test_destroy_cleans_paths(self):
        cell = self.packager.create("laz-test-002", backend="process")
        self.assertTrue(os.path.isdir(cell.hermes_home))
        self.packager.destroy("laz-test-002")
        self.assertFalse(os.path.exists(cell.hermes_home))

    def test_cells_are_separate(self):
        c1 = self.packager.create("laz-a", backend="process")
        c2 = self.packager.create("laz-b", backend="process")
        self.assertNotEqual(c1.hermes_home, c2.hermes_home)
        self.assertNotEqual(c1.workspace_path, c2.workspace_path)


class TestCellRegistry(unittest.TestCase):
    def setUp(self):
        self.tmpdir = tempfile.TemporaryDirectory()
        self.registry = CellRegistry(db_path=os.path.join(self.tmpdir.name, "registry.sqlite"))

    def tearDown(self):
        self.tmpdir.cleanup()

    def test_save_and_load(self):
        cell = Cell(
            cell_id="laz-reg-001",
            project="Timmy_Foundation/test#1",
            owner="human:Alexander",
            backend="process",
            state=CellState.ACTIVE,
            created_at=datetime.now().astimezone().isoformat(),
            ttl_minutes=60,
            members=[CellMember("agent:allegro", CellRole.EXECUTOR, datetime.now().astimezone().isoformat())],
        )
        self.registry.save(cell)
        loaded = self.registry.load("laz-reg-001")
        self.assertEqual(loaded.cell_id, "laz-reg-001")
        self.assertEqual(loaded.members[0].role, CellRole.EXECUTOR)

    def test_list_active(self):
        for sid, state in [("a", CellState.ACTIVE), ("b", CellState.DESTROYED)]:
            cell = Cell(
                cell_id=f"laz-{sid}",
                project="test",
                owner="human:Alexander",
                backend="process",
                state=state,
                created_at=datetime.now().astimezone().isoformat(),
                ttl_minutes=60,
            )
            self.registry.save(cell)
        active = self.registry.list_active()
        ids = {c.cell_id for c in active}
        self.assertIn("laz-a", ids)
        self.assertNotIn("laz-b", ids)

    def test_ttl_expiry(self):
        created = (datetime.now().astimezone() - timedelta(minutes=61)).isoformat()
        cell = Cell(
            cell_id="laz-exp",
            project="test",
            owner="human:Alexander",
            backend="process",
            state=CellState.ACTIVE,
            created_at=created,
            ttl_minutes=60,
        )
        self.assertTrue(cell.is_expired())

    def test_ttl_not_expired(self):
        cell = Cell(
            cell_id="laz-ok",
            project="test",
            owner="human:Alexander",
            backend="process",
            state=CellState.ACTIVE,
            created_at=datetime.now().astimezone().isoformat(),
            ttl_minutes=60,
        )
        self.assertFalse(cell.is_expired())


class TestOperatorCtl(unittest.TestCase):
    def setUp(self):
        self.tmpdir = tempfile.TemporaryDirectory()
        self.ctl = OperatorCtl(
            registry=CellRegistry(db_path=os.path.join(self.tmpdir.name, "registry.sqlite")),
            packager=CellPackager(root=self.tmpdir.name),
        )

    def tearDown(self):
        # Destroy any cells we created
        for cell in self.ctl.registry.list_active():
            self.ctl.destroy(cell.cell_id)
        self.tmpdir.cleanup()

    def test_summon_creates_cell(self):
        res = self.ctl.summon("allegro", "Timmy_Foundation/test#1")
        self.assertTrue(res["success"])
        self.assertIn("cell_id", res)
        cell = self.ctl.registry.load(res["cell_id"])
        self.assertEqual(cell.members[0].identity, "agent:allegro")
        self.assertEqual(cell.state, CellState.ACTIVE)
        # Clean up
        self.ctl.destroy(res["cell_id"])

    def test_team_creates_multi_agent_cell(self):
        res = self.ctl.team(["allegro", "ezra"], "Timmy_Foundation/test#2")
        self.assertTrue(res["success"])
        cell = self.ctl.registry.load(res["cell_id"])
        self.assertEqual(len(cell.members), 2)
        identities = {m.identity for m in cell.members}
        self.assertEqual(identities, {"agent:allegro", "agent:ezra"})
        self.ctl.destroy(res["cell_id"])

    def test_invite_adds_guest(self):
        res = self.ctl.summon("allegro", "Timmy_Foundation/test#3")
        cell_id = res["cell_id"]
        invite = self.ctl.invite(cell_id, "bot:kimi", "observer")
        self.assertTrue(invite["success"])
        cell = self.ctl.registry.load(cell_id)
        roles = {m.identity: m.role for m in cell.members}
        self.assertEqual(roles["bot:kimi"], CellRole.OBSERVER)
        self.ctl.destroy(cell_id)

    def test_status_shows_active_cells(self):
        res = self.ctl.summon("allegro", "Timmy_Foundation/test#4")
        status = self.ctl.status()
        self.assertTrue(status["success"])
        ids = {c["cell_id"] for c in status["active_cells"]}
        self.assertIn(res["cell_id"], ids)
        self.ctl.destroy(res["cell_id"])

    def test_close_then_destroy(self):
        res = self.ctl.summon("allegro", "Timmy_Foundation/test#5")
        cell_id = res["cell_id"]
        close_res = self.ctl.close(cell_id)
        self.assertTrue(close_res["success"])
        destroy_res = self.ctl.destroy(cell_id)
        self.assertTrue(destroy_res["success"])
        cell = self.ctl.registry.load(cell_id)
        self.assertEqual(cell.state, CellState.DESTROYED)
        self.assertFalse(os.path.exists(cell.hermes_home))

    def test_destroy_scrubs_filesystem(self):
        res = self.ctl.summon("allegro", "Timmy_Foundation/test#6")
        cell_id = res["cell_id"]
        cell = self.ctl.registry.load(cell_id)
        hermes = cell.hermes_home
        self.assertTrue(os.path.isdir(hermes))
        self.ctl.destroy(cell_id)
        self.assertFalse(os.path.exists(hermes))


if __name__ == "__main__":
    unittest.main()
33  scripts/architecture_linter.py  Normal file
@@ -0,0 +1,33 @@
#!/usr/bin/env python3
import os
import re
import sys

# Architecture Linter
# Ensures all changes align with the Frontier Local Agenda.

SOVEREIGN_RULES = [
    (r"https?://(api\.openai\.com|api\.anthropic\.com)", "CRITICAL: External cloud API detected. Use local custom_provider instead."),
    (r"provider: (openai|anthropic)", "WARNING: Direct cloud provider used. Ensure fallback_model is configured."),
    # Triple-quoted raw string so the pattern can contain both quote characters.
    (r"""api_key: ['"][^'"\s]{10,}['"]""", "SECURITY: Hardcoded API key detected. Use environment variables."),
]


def lint_file(path):
    print(f"Linting {path}...")
    with open(path) as f:
        content = f.read()
    violations = 0
    for pattern, msg in SOVEREIGN_RULES:
        if re.search(pattern, content):
            print(f"  [!] {msg}")
            violations += 1
    return violations


def main():
    print("--- Ezra's Architecture Linter ---")
    files = [f for f in sys.argv[1:] if os.path.isfile(f)]
    total_violations = sum(lint_file(f) for f in files)
    print(f"\nLinting complete. Total violations: {total_violations}")
    sys.exit(1 if total_violations > 0 else 0)


if __name__ == "__main__":
    main()
89  scripts/nostur_status_query.py  Normal file
@@ -0,0 +1,89 @@
#!/usr/bin/env python3
"""
Nostur Status Query MVP
Read-only status responses sourced from Gitea truth.
"""
import json
import os
import sys
import urllib.request
from datetime import datetime

# Configuration. The token must come from the environment — never hardcode a
# default, or the missing-token check below can never fire.
GITEA_URL = os.environ.get("GITEA_URL", "https://forge.alexanderwhitestone.com/api/v1")
GITEA_TOKEN = os.environ.get("GITEA_TOKEN", "")
REPO_OWNER = "Timmy_Foundation"
REPO_NAME = "timmy-config"


def gitea_get(path):
    if not GITEA_TOKEN:
        raise RuntimeError("GITEA_TOKEN not set")

    url = f"{GITEA_URL}{path}"
    headers = {"Authorization": f"token {GITEA_TOKEN}"}
    req = urllib.request.Request(url, headers=headers)

    try:
        with urllib.request.urlopen(req, timeout=15) as resp:
            return json.loads(resp.read().decode())
    except Exception as e:
        print(f"Error fetching from Gitea: {e}", file=sys.stderr)
        return None


def get_status():
    path = f"/repos/{REPO_OWNER}/{REPO_NAME}/issues?state=open&limit=50"
    issues = gitea_get(path)

    if not issues:
        return "⚠️ Error: Could not fetch status from Gitea."

    # 1. Active Epics
    active_epics = [
        i['title'].replace('[EPIC]', '').strip()
        for i in issues
        if '[EPIC]' in i['title'] or any(l['name'] == 'epic' for l in i.get('labels', []))
    ][:2]

    # 2. Blockers / Critical (Bugs, Security, Ops)
    blockers = [
        i['title'].strip()
        for i in issues
        if any(tag in i['title'] for tag in ['[BUG]', '[SECURITY]', '[OPS]']) or any(l['name'] == 'blocker' for l in i.get('labels', []))
    ][:2]

    # 3. Priority Queue (Top 3)
    priority_queue = [
        f"#{i['number']}: {i['title']}"
        for i in issues[:3]
    ]

    # Format compact response for mobile
    now = datetime.now().strftime("%H:%M:%S")

    lines = ["[TIMMY STATUS]"]
    lines.append(f"EPICS: {' | '.join(active_epics) if active_epics else 'None'}")
    lines.append(f"BLOCKERS: {' | '.join(blockers) if blockers else 'None'}")
    lines.append(f"QUEUE: {' | '.join(priority_queue)}")
    lines.append(f"UPDATED: {now}")

    return "\n".join(lines)


if __name__ == "__main__":
    # If called with '--json', emit a JSON payload instead of plain text
    if len(sys.argv) > 1 and sys.argv[1] == "--json":
        path = f"/repos/{REPO_OWNER}/{REPO_NAME}/issues?state=open&limit=50"
        issues = gitea_get(path)
        if not issues:
            print(json.dumps({"error": "fetch failed"}))
            sys.exit(1)

        data = {
            "status": "active",
            "epics": [i['title'] for i in issues if '[EPIC]' in i['title']][:2],
            "blockers": [i['title'] for i in issues if any(tag in i['title'] for tag in ['[BUG]', '[SECURITY]', '[OPS]'])][:2],
            "queue": [f"#{i['number']}: {i['title']}" for i in issues[:3]],
            "timestamp": datetime.now().isoformat()
        }
        print(json.dumps(data, indent=2))
    else:
        print(get_status())