Compare commits

..

2 Commits

Author SHA1 Message Date
Alexander Whitestone
477ec86467 feat: harden Bezalel tailscale bootstrap packet (#535)
Some checks failed
Agent PR Gate / gate (pull_request) Failing after 43s
Self-Healing Smoke / self-healing-smoke (pull_request) Failing after 30s
Smoke Test / smoke (pull_request) Failing after 28s
Agent PR Gate / report (pull_request) Successful in 7s
2026-04-22 00:08:33 -04:00
Alexander Whitestone
f83fdb7d55 test: cover hardened Bezalel Tailscale bootstrap packet (#535) 2026-04-22 00:07:32 -04:00
10 changed files with 188 additions and 560 deletions

20
SOUL.md
View File

@@ -137,26 +137,6 @@ The inscription predates any given model. It will outlast every API.
---
## What Honesty Requires
These values are necessary but not sufficient. A statement of principle does not prevent failure. The code that reads this soul must build apparatus that gives these words teeth. Without that apparatus, the values are aspiration, not architecture.
The following are not optional. Any implementation that calls itself Timmy must build them.
**Grounding before generation.** When I have verified sources — chain memory, local documents, hash-verified inscriptions — I must consult them before I generate from pattern alone. Retrieval is not a feature. It is the primary mechanism by which I avoid lying.
**Source distinction.** Every claim I make comes from one of two places: a verified source I can point to, or my own pattern-matching. My user must be able to tell which is which. If I cannot cite a source, I must say so. "I think" and "I know" are different sentences and I must not confuse them.
**Refusal over fabrication.** When I do not know, the correct output is "I don't know." Not a plausible guess dressed in confident language. The code must detect when I am hedging without grounding and flag it — to me and to my user. An honest "I don't know" is worth more than a thousand fluent paragraphs of confabulation.
**Confidence signaling.** When I am uncertain, I must say so in proportion to my uncertainty. The code should implement mechanisms — a second inference pass, a calibration check, a retrieval verification — that surface my actual confidence rather than the confidence my language implies.
**The audit trail.** Every response I generate should be logged locally with the inputs that produced it, the sources I consulted, and the confidence assessment I made. Not for surveillance — for sovereignty. If I say something wrong, my user must be able to trace why. Accountability without traceability is theater.
**The limits of small minds.** I may run on four gigabytes of RAM with a model of one and a half billion parameters. That model will hallucinate more than a larger one. This is physics, not a moral failing. The apparatus must be proportional to the risk: smaller models need tighter grounding, shorter leashes on free generation, and more frequent "I don't know" responses. Honesty scales with humility.
---
## For the Audio Overview
If you are listening to this as a NotebookLM Audio Overview:

View File

@@ -0,0 +1,96 @@
# Bezalel Tailscale Bootstrap
Refs #535
This is the repo-side operator packet for installing Tailscale on the Bezalel VPS and verifying the internal network path for federation work.
Important truth:
- issue #535 names `104.131.15.18`
- older Bezalel control-plane docs also mention `159.203.146.185`
- the current source of truth in this repo is `ansible/inventory/hosts.ini`, which currently resolves `bezalel` to `67.205.155.108`
Because of that drift, `scripts/bezalel_tailscale_bootstrap.py` now resolves the target host from `ansible/inventory/hosts.ini` by default instead of trusting a stale hardcoded IP.
## What the script does
`python3 scripts/bezalel_tailscale_bootstrap.py`
Safe by default:
- builds the remote bootstrap script
- writes it locally to `/tmp/bezalel_tailscale_bootstrap.sh`
- prints the SSH command needed to run it
- does **not** touch the VPS unless `--apply` is passed
When applied, the remote script does all of the issues repo-side bootstrap steps:
- installs Tailscale
- runs `tailscale up --ssh --hostname bezalel`
- appends the provided Mac SSH public key to `~/.ssh/authorized_keys`
- prints `tailscale status --json`
- pings the expected peer targets:
- Mac: `100.124.176.28`
- Ezra: `100.126.61.75`
## Required secrets / inputs
- Tailscale auth key
- Mac SSH public key
Provide them either directly or through files:
- `--auth-key` or `--auth-key-file`
- `--ssh-public-key` or `--ssh-public-key-file`
## Dry-run example
```bash
python3 scripts/bezalel_tailscale_bootstrap.py \
--auth-key-file ~/.config/tailscale/auth_key \
--ssh-public-key-file ~/.ssh/id_ed25519.pub \
--json
```
This prints:
- resolved host
- host source (`inventory:<path>` when pulled from `ansible/inventory/hosts.ini`)
- local script path
- SSH command to execute
- peer targets
## Apply example
```bash
python3 scripts/bezalel_tailscale_bootstrap.py \
--auth-key-file ~/.config/tailscale/auth_key \
--ssh-public-key-file ~/.ssh/id_ed25519.pub \
--apply \
--json
```
## Verifying success after apply
The script now parses the remote stdout into structured verification data:
- `verification.tailscale.self.tailscale_ips`
- `verification.tailscale.self.dns_name`
- `verification.peers`
- `verification.ping_ok`
A successful run should show:
- at least one Bezalel Tailscale IP under `tailscale_ips`
- `ping_ok.mac = 100.124.176.28`
- `ping_ok.ezra = 100.126.61.75`
## Expected remote install commands
```bash
curl -fsSL https://tailscale.com/install.sh | sh
tailscale up --ssh --hostname bezalel
install -d -m 700 ~/.ssh
touch ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys
tailscale status --json
```
## Why this PR does not claim live completion
This repo can safely ship the bootstrap script, host resolution logic, structured proof parsing, and operator packet.
It cannot honestly claim that Bezalel was actually joined to the tailnet unless a human/operator runs the script with a real auth key and real SSH access to the VPS.
That means the correct PR language for #535 is advancement, not pretend closure.

View File

@@ -14,6 +14,7 @@ Quick-reference index for common operational tasks across the Timmy Foundation i
| Agent scorecard | fleet-ops | `python3 scripts/agent_scorecard.py` |
| View fleet manifest | fleet-ops | `cat manifest.yaml` |
| Run nightly codebase genome pass | timmy-home | `python3 scripts/codebase_genome_nightly.py --dry-run` |
| Prepare Bezalel Tailscale bootstrap | timmy-home | `python3 scripts/bezalel_tailscale_bootstrap.py --auth-key-file <path> --ssh-public-key-file <path> --json` |
## the-nexus (Frontend + Brain)

View File

@@ -16,11 +16,14 @@ import argparse
import json
import shlex
import subprocess
import re
from json import JSONDecoder
from pathlib import Path
from typing import Any
DEFAULT_HOST = "159.203.146.185"
DEFAULT_HOST = "67.205.155.108"
DEFAULT_HOSTNAME = "bezalel"
DEFAULT_INVENTORY_PATH = Path(__file__).resolve().parents[1] / "ansible" / "inventory" / "hosts.ini"
DEFAULT_PEERS = {
"mac": "100.124.176.28",
"ezra": "100.126.61.75",
@@ -66,6 +69,37 @@ def parse_tailscale_status(payload: dict[str, Any]) -> dict[str, Any]:
}
def resolve_host(host: str | None, inventory_path: Path = DEFAULT_INVENTORY_PATH, hostname: str = DEFAULT_HOSTNAME) -> tuple[str, str]:
if host:
return host, "explicit"
if inventory_path.exists():
pattern = re.compile(rf"^{re.escape(hostname)}\s+.*ansible_host=([^\s]+)")
for line in inventory_path.read_text().splitlines():
match = pattern.search(line.strip())
if match:
return match.group(1), f"inventory:{inventory_path}"
return DEFAULT_HOST, "default"
def parse_apply_output(stdout: str) -> dict[str, Any]:
result: dict[str, Any] = {"tailscale": None, "ping_ok": {}}
text = stdout or ""
start = text.find("{")
if start != -1:
try:
payload, _ = JSONDecoder().raw_decode(text[start:])
if isinstance(payload, dict):
result["tailscale"] = parse_tailscale_status(payload)
except Exception:
pass
for line in text.splitlines():
if line.startswith("PING_OK:"):
_, name, ip = line.split(":", 2)
result["ping_ok"][name] = ip
return result
def build_ssh_command(host: str, remote_script_path: str = "/tmp/bezalel_tailscale_bootstrap.sh") -> list[str]:
return ["ssh", host, f"bash {shlex.quote(remote_script_path)}"]
@@ -89,8 +123,9 @@ def parse_peer_args(items: list[str]) -> dict[str, str]:
def parse_args() -> argparse.Namespace:
parser = argparse.ArgumentParser(description="Prepare or execute Tailscale bootstrap for the Bezalel VPS.")
parser.add_argument("--host", default=DEFAULT_HOST)
parser.add_argument("--host")
parser.add_argument("--hostname", default=DEFAULT_HOSTNAME)
parser.add_argument("--inventory-path", type=Path, default=DEFAULT_INVENTORY_PATH)
parser.add_argument("--auth-key", help="Tailscale auth key")
parser.add_argument("--auth-key-file", type=Path, help="Path to file containing the Tailscale auth key")
parser.add_argument("--ssh-public-key", help="SSH public key to append to authorized_keys")
@@ -116,6 +151,7 @@ def main() -> None:
auth_key = _read_secret(args.auth_key, args.auth_key_file)
ssh_public_key = _read_secret(args.ssh_public_key, args.ssh_public_key_file)
peers = parse_peer_args(args.peer)
resolved_host, host_source = resolve_host(args.host, args.inventory_path, args.hostname)
if not auth_key:
raise SystemExit("Missing Tailscale auth key. Use --auth-key or --auth-key-file.")
@@ -126,28 +162,31 @@ def main() -> None:
write_script(args.script_out, script)
payload: dict[str, Any] = {
"host": args.host,
"host": resolved_host,
"host_source": host_source,
"hostname": args.hostname,
"inventory_path": str(args.inventory_path),
"script_out": str(args.script_out),
"remote_script_path": args.remote_script_path,
"ssh_command": build_ssh_command(args.host, args.remote_script_path),
"ssh_command": build_ssh_command(resolved_host, args.remote_script_path),
"peer_targets": peers,
"applied": False,
}
if args.apply:
result = run_remote(args.host, args.remote_script_path)
result = run_remote(resolved_host, args.remote_script_path)
payload["applied"] = True
payload["exit_code"] = result.returncode
payload["stdout"] = result.stdout
payload["stderr"] = result.stderr
payload["verification"] = parse_apply_output(result.stdout)
if args.json:
print(json.dumps(payload, indent=2))
return
print("--- Bezalel Tailscale Bootstrap ---")
print(f"Host: {args.host}")
print(f"Host: {resolved_host} ({host_source})")
print(f"Local script: {args.script_out}")
print("SSH command: " + " ".join(payload["ssh_command"]))
if args.apply:

View File

@@ -1,12 +1 @@
# Timmy core module
from .claim_annotator import ClaimAnnotator, AnnotatedResponse, Claim
from .audit_trail import AuditTrail, AuditEntry
__all__ = [
"ClaimAnnotator",
"AnnotatedResponse",
"Claim",
"AuditTrail",
"AuditEntry",
]

View File

@@ -1,156 +0,0 @@
#!/usr/bin/env python3
"""
Response Claim Annotator — Source Distinction System
SOUL.md §What Honesty Requires: "Every claim I make comes from one of two places:
a verified source I can point to, or my own pattern-matching. My user must be
able to tell which is which."
"""
import re
import json
from dataclasses import dataclass, field, asdict
from typing import Optional, List, Dict
@dataclass
class Claim:
"""A single claim in a response, annotated with source type."""
text: str
source_type: str # "verified" | "inferred"
source_ref: Optional[str] = None # path/URL to verified source, if verified
confidence: str = "unknown" # high | medium | low | unknown
hedged: bool = False # True if hedging language was added
@dataclass
class AnnotatedResponse:
"""Full response with annotated claims and rendered output."""
original_text: str
claims: List[Claim] = field(default_factory=list)
rendered_text: str = ""
has_unverified: bool = False # True if any inferred claims without hedging
class ClaimAnnotator:
"""Annotates response claims with source distinction and hedging."""
# Hedging phrases to prepend to inferred claims if not already present
HEDGE_PREFIXES = [
"I think ",
"I believe ",
"It seems ",
"Probably ",
"Likely ",
]
def __init__(self, default_confidence: str = "unknown"):
self.default_confidence = default_confidence
def annotate_claims(
self,
response_text: str,
verified_sources: Optional[Dict[str, str]] = None,
) -> AnnotatedResponse:
"""
Annotate claims in a response text.
Args:
response_text: Raw response from the model
verified_sources: Dict mapping claim substrings to source references
e.g. {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"}
Returns:
AnnotatedResponse with claims marked and rendered text
"""
verified_sources = verified_sources or {}
claims = []
has_unverified = False
# Simple sentence splitting (naive, but sufficient for MVP)
sentences = [s.strip() for s in re.split(r'[.!?]\s+', response_text) if s.strip()]
for sent in sentences:
# Check if sentence is a claim we can verify
matched_source = None
for claim_substr, source_ref in verified_sources.items():
if claim_substr.lower() in sent.lower():
matched_source = source_ref
break
if matched_source:
# Verified claim
claim = Claim(
text=sent,
source_type="verified",
source_ref=matched_source,
confidence="high",
hedged=False,
)
else:
# Inferred claim (pattern-matched)
claim = Claim(
text=sent,
source_type="inferred",
confidence=self.default_confidence,
hedged=self._has_hedge(sent),
)
if not claim.hedged:
has_unverified = True
claims.append(claim)
# Render the annotated response
rendered = self._render_response(claims)
return AnnotatedResponse(
original_text=response_text,
claims=claims,
rendered_text=rendered,
has_unverified=has_unverified,
)
def _has_hedge(self, text: str) -> bool:
"""Check if text already contains hedging language."""
text_lower = text.lower()
for prefix in self.HEDGE_PREFIXES:
if text_lower.startswith(prefix.lower()):
return True
# Also check for inline hedges
hedge_words = ["i think", "i believe", "probably", "likely", "maybe", "perhaps"]
return any(word in text_lower for word in hedge_words)
def _render_response(self, claims: List[Claim]) -> str:
"""
Render response with source distinction markers.
Verified claims: [V] claim text [source: ref]
Inferred claims: [I] claim text (or with hedging if missing)
"""
rendered_parts = []
for claim in claims:
if claim.source_type == "verified":
part = f"[V] {claim.text}"
if claim.source_ref:
part += f" [source: {claim.source_ref}]"
else: # inferred
if not claim.hedged:
# Add hedging if missing
hedged_text = f"I think {claim.text[0].lower()}{claim.text[1:]}" if claim.text else claim.text
part = f"[I] {hedged_text}"
else:
part = f"[I] {claim.text}"
rendered_parts.append(part)
return " ".join(rendered_parts)
def to_json(self, annotated: AnnotatedResponse) -> str:
"""Serialize annotated response to JSON."""
return json.dumps(
{
"original_text": annotated.original_text,
"rendered_text": annotated.rendered_text,
"has_unverified": annotated.has_unverified,
"claims": [asdict(c) for c in annotated.claims],
},
indent=2,
ensure_ascii=False,
)

View File

@@ -2,9 +2,12 @@ from scripts.bezalel_tailscale_bootstrap import (
DEFAULT_PEERS,
build_remote_script,
build_ssh_command,
parse_apply_output,
parse_peer_args,
parse_tailscale_status,
resolve_host,
)
from pathlib import Path
def test_build_remote_script_contains_install_up_and_key_append():
@@ -78,3 +81,46 @@ def test_parse_peer_args_merges_overrides_into_defaults():
"ezra": "100.126.61.76",
"forge": "100.70.0.9",
}
def test_resolve_host_prefers_inventory_over_stale_default(tmp_path: Path):
inventory = tmp_path / "hosts.ini"
inventory.write_text(
"[fleet]\n"
"ezra ansible_host=143.198.27.163 ansible_user=root\n"
"bezalel ansible_host=67.205.155.108 ansible_user=root\n"
)
host, source = resolve_host(None, inventory)
assert host == "67.205.155.108"
assert source == f"inventory:{inventory}"
def test_parse_apply_output_extracts_status_and_ping_markers():
stdout = (
'{"Self": {"HostName": "bezalel", "DNSName": "bezalel.tailnet.ts.net", "TailscaleIPs": ["100.90.0.10"]}, '
'"Peer": {"node-1": {"HostName": "ezra", "TailscaleIPs": ["100.126.61.75"]}}}'
"\nPING_OK:mac:100.124.176.28\n"
"PING_OK:ezra:100.126.61.75\n"
)
result = parse_apply_output(stdout)
assert result["tailscale"]["self"]["tailscale_ips"] == ["100.90.0.10"]
assert result["ping_ok"] == {"mac": "100.124.176.28", "ezra": "100.126.61.75"}
def test_runbook_doc_exists_and_mentions_inventory_auth_and_peer_checks():
doc = Path("docs/BEZALEL_TAILSCALE_BOOTSTRAP.md")
assert doc.exists(), "missing docs/BEZALEL_TAILSCALE_BOOTSTRAP.md"
text = doc.read_text()
assert "ansible/inventory/hosts.ini" in text
assert "tailscale up" in text
assert "authorized_keys" in text
assert "100.124.176.28" in text
assert "100.126.61.75" in text
runbook = Path("docs/RUNBOOK_INDEX.md").read_text()
assert "Prepare Bezalel Tailscale bootstrap" in runbook
assert "scripts/bezalel_tailscale_bootstrap.py" in runbook

View File

@@ -1,54 +0,0 @@
#!/usr/bin/env python3
"""Smoke test for load_cap_enforcer.py — validates structure and dry-run path.
Refs: timmy-home #498
"""
import json
import os
import sys
import subprocess
from pathlib import Path
SCRIPT = Path(__file__).parent.parent / "timmy-config" / "bin" / "load_cap_enforcer.py"
def test_script_exists_and_is_executable():
assert SCRIPT.exists(), f"Script not found: {SCRIPT}"
assert os.access(SCRIPT, os.X_OK), "Script not executable"
def test_dry_run_help():
result = subprocess.run([sys.executable, str(SCRIPT), "--help"], capture_output=True, text=True)
assert result.returncode == 0
assert "--dry-run" in result.stdout
assert "--cap" in result.stdout
assert "Enforce open-issue load cap" in result.stdout
def test_dry_run_with_mocks(monkeypatch):
"""Test dry-run path with mocked Gitea data — checks summary generation."""
# Create a tiny stub script that imports the module and exercises core functions
import importlib.util
spec = importlib.util.spec_from_file_location("load_cap_enforcer", SCRIPT)
mod = importlib.util.module_from_spec(spec)
# Load but don't execute main yet — just verify module structure
# We'll parse the module source for expected symbols
source = SCRIPT.read_text()
assert "fetch_all_open_issues" in source
assert "build_summary" in source
assert "unassignment_map" in source
assert "COMMENT_TEMPLATE" in source
assert "Unassigned from @{assignee} due to load cap" in source
if __name__ == "__main__":
# Run minimal smoke checks when invoked directly
test_script_exists_and_is_executable()
print("✓ Script exists and is executable")
test_dry_run_help()
print("✓ --help works")
test_dry_run_with_mocks(type('obj', (object,), {'assert': lambda *a: True})())
print("✓ Core structure verified")
print("\nAll smoke tests passed.")

View File

@@ -1,103 +0,0 @@
#!/usr/bin/env python3
"""Tests for claim_annotator.py — verifies source distinction is present."""
import sys
import os
import json
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "src"))
from timmy.claim_annotator import ClaimAnnotator, AnnotatedResponse
def test_verified_claim_has_source():
"""Verified claims include source reference."""
annotator = ClaimAnnotator()
verified = {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"}
response = "Paris is the capital of France. It is a beautiful city."
result = annotator.annotate_claims(response, verified_sources=verified)
assert len(result.claims) > 0
verified_claims = [c for c in result.claims if c.source_type == "verified"]
assert len(verified_claims) == 1
assert verified_claims[0].source_ref == "https://en.wikipedia.org/wiki/Paris"
assert "[V]" in result.rendered_text
assert "[source:" in result.rendered_text
def test_inferred_claim_has_hedging():
"""Pattern-matched claims use hedging language."""
annotator = ClaimAnnotator()
response = "The weather is nice today. It might rain tomorrow."
result = annotator.annotate_claims(response)
inferred_claims = [c for c in result.claims if c.source_type == "inferred"]
assert len(inferred_claims) >= 1
# Check that rendered text has [I] marker
assert "[I]" in result.rendered_text
# Check that unhedged inferred claims get hedging
assert "I think" in result.rendered_text or "I believe" in result.rendered_text
def test_hedged_claim_not_double_hedged():
"""Claims already with hedging are not double-hedged."""
annotator = ClaimAnnotator()
response = "I think the sky is blue. It is a nice day."
result = annotator.annotate_claims(response)
# The "I think" claim should not become "I think I think ..."
assert "I think I think" not in result.rendered_text
def test_rendered_text_distinguishes_types():
"""Rendered text clearly distinguishes verified vs inferred."""
annotator = ClaimAnnotator()
verified = {"Earth is round": "https://science.org/earth"}
response = "Earth is round. Stars are far away."
result = annotator.annotate_claims(response, verified_sources=verified)
assert "[V]" in result.rendered_text # verified marker
assert "[I]" in result.rendered_text # inferred marker
def test_to_json_serialization():
"""Annotated response serializes to valid JSON."""
annotator = ClaimAnnotator()
response = "Test claim."
result = annotator.annotate_claims(response)
json_str = annotator.to_json(result)
parsed = json.loads(json_str)
assert "claims" in parsed
assert "rendered_text" in parsed
assert parsed["has_unverified"] is True # inferred claim without hedging
def test_audit_trail_integration():
"""Check that claims are logged with confidence and source type."""
# This test verifies the audit trail integration point
annotator = ClaimAnnotator()
verified = {"AI is useful": "https://example.com/ai"}
response = "AI is useful. It can help with tasks."
result = annotator.annotate_claims(response, verified_sources=verified)
for claim in result.claims:
assert claim.source_type in ("verified", "inferred")
assert claim.confidence in ("high", "medium", "low", "unknown")
if claim.source_type == "verified":
assert claim.source_ref is not None
if __name__ == "__main__":
test_verified_claim_has_source()
print("✓ test_verified_claim_has_source passed")
test_inferred_claim_has_hedging()
print("✓ test_inferred_claim_has_hedging passed")
test_hedged_claim_not_double_hedged()
print("✓ test_hedged_claim_not_double_hedged passed")
test_rendered_text_distinguishes_types()
print("✓ test_rendered_text_distinguishes_types passed")
test_to_json_serialization()
print("✓ test_to_json_serialization passed")
test_audit_trail_integration()
print("✓ test_audit_trail_integration passed")
print("\nAll tests passed!")

View File

@@ -1,210 +0,0 @@
#!/usr/bin/env python3
"""
Open-Load Cap Enforcement — Audit-B3
Scans multiple repos for open issues, enforces a per-agent open-issue cap,
auto-unassigns overflow (oldest first), and posts a summary.
Acceptance (timmy-home #498):
- Lives in timmy-config/bin/load_cap_enforcer.py
- Scans timmy-home, timmy-config, the-nexus, hermes-agent
- Cap: 25 open issues per agent (configurable)
- Unassign oldest overflow, comment on each
- Dry-run first, then live; summary posted on parent issue #495
"""
import argparse
import json
import os
import sys
import urllib.request
import urllib.error
from collections import defaultdict
from datetime import datetime, timezone
from pathlib import Path
# ── Configuration ─────────────────────────────────────────────────────────────
GITEA_BASE = "https://forge.alexanderwhitestone.com/api/v1"
ORG = "Timmy_Foundation"
REPOS = ["timmy-home", "timmy-config", "the-nexus", "hermes-agent"]
TOKEN_PATH = Path.home() / ".config" / "gitea" / "token"
DEFAULT_CAP = 25
COMMENT_TEMPLATE = "Unassigned from @{{assignee}} due to load cap. Available for pickup."
def load_token() -> str:
if TOKEN_PATH.exists():
return TOKEN_PATH.read_text().strip()
tok = os.environ.get("GITEA_TOKEN", "")
if tok:
return tok
sys.exit("ERROR: Gitea token not found at ~/.config/gitea/token or GITEA_TOKEN env")
def api(method: str, path: str, token: str, data=None):
url = f"{GITEA_BASE}{path}"
body = json.dumps(data).encode() if data else None
headers = {"Authorization": f"token {token}"}
if body:
headers["Content-Type"] = "application/json"
req = urllib.request.Request(url, data=body, headers=headers, method=method)
try:
with urllib.request.urlopen(req, timeout=30) as resp:
return json.loads(resp.read()), resp.status
except urllib.error.HTTPError as e:
err = e.read().decode() if e.fp else str(e)
print(f" API {e.code}: {err}", file=sys.stderr)
return None, e.code
except Exception as e:
print(f" Request error: {e}", file=sys.stderr)
return None, None
def fetch_all_open_issues(token: str):
all_issues = []
for repo in REPOS:
page = 1
while True:
data, status = api("GET", f"/repos/{ORG}/{repo}/issues?state=open&page={page}&limit=50", token)
if status != 200 or not data:
break
all_issues.extend(data)
if len(data) < 50:
break
page += 1
return all_issues
def build_summary(by_agent: dict, unassignment_map: dict):
lines = []
lines.append("Agent | Before | After | Unassigned Count")
lines.append("-" * 50)
for agent in sorted(by_agent.keys()):
before = by_agent[agent]["before"]
after = by_agent[agent]["after"]
unassigned = len(unassignment_map.get(agent, []))
lines.append(f"@{agent} | {before} | {after} | {unassigned}")
return "\n".join(lines)
def main():
parser = argparse.ArgumentParser(description="Enforce open-issue load cap per agent")
parser.add_argument("--dry-run", action="store_true", help="Report without making changes")
parser.add_argument("--cap", type=int, default=DEFAULT_CAP, help=f"Max open issues per agent (default: {DEFAULT_CAP})")
parser.add_argument("--output", type=str, default=None, help="Write summary to file")
parser.add_argument("--comment-on", type=int, default=None, help="Post summary as comment on timmy-home issue N")
args = parser.parse_args()
token = load_token()
print(f"Fetching open issues from {', '.join(REPOS)} ...")
issues = fetch_all_open_issues(token)
print(f"Fetched {len(issues)} open issues.")
# Group by assignee
by_agent = defaultdict(lambda: {"before": 0, "issues": []})
for iss in issues:
for a in (iss.get("assignees") or []):
login = a.get("login")
if login:
by_agent[login]["issues"].append(iss)
by_agent[login]["before"] += 1
print(f"\nAgents with open issues: {list(by_agent.keys())}")
for agent, d in sorted(by_agent.items()):
print(f" @{agent}: {d['before']} issues")
# Identify overflow
unassignment_map = defaultdict(list)
for agent, d in by_agent.items():
count = d["before"]
if count > args.cap:
overflow = count - args.cap
issues_sorted = sorted(d["issues"], key=lambda i: i.get("created_at", ""))
unassignment_map[agent] = issues_sorted[:overflow]
print(f"\n@{agent} exceeds cap ({count} > {args.cap}); will unassign {overflow} oldest issue(s):")
for iss in issues_sorted[:overflow]:
print(f" - #{iss['number']}: {iss.get('title', '')[:50]}")
# Dry-run: just show summary and exit
if args.dry_run:
print("\n=== DRY RUN — no changes made ===")
# For dry-run, after = before (no changes)
for agent in by_agent:
by_agent[agent]["after"] = by_agent[agent]["before"]
summary = build_summary(by_agent, unassignment_map)
print("\n" + summary)
if args.output:
Path(args.output).write_text(summary)
print(f"\nSummary written to {args.output}")
return 0
# LIVE: perform unassignments and comments (concurrent)
print("\n=== LIVE RUN — executing ===")
from concurrent.futures import ThreadPoolExecutor, as_completed
import threading
lock = threading.Lock()
tasks = []
for agent, issues_to_unassign in unassignment_map.items():
for iss in issues_to_unassign:
issue_num = iss["number"]
repo_name = next(
(r for r in REPOS if f"/{r}/issues/" in iss.get("html_url", "")), REPOS[0]
)
tasks.append((agent, issue_num, repo_name, iss))
print(f"Total unassignment tasks: {len(tasks)}")
def do_task(agent, issue_num, repo_name, iss):
# Unassign
_, status1 = api("PATCH", f"/repos/{ORG}/{repo_name}/issues/{issue_num}", token, {"assignees": []})
if status1 not in (200, 201, 204):
return (agent, issue_num, repo_name, False, f"unassign HTTP {status1}")
# Comment
comment_body = COMMENT_TEMPLATE.format(assignee=agent)
_, status2 = api("POST", f"/repos/{ORG}/{repo_name}/issues/{issue_num}/comments", token, {"body": comment_body})
if status2 not in (200, 201):
return (agent, issue_num, repo_name, True, f"unassigned but comment HTTP {status2}")
return (agent, issue_num, repo_name, True, "OK")
completed = 0
with ThreadPoolExecutor(max_workers=12) as executor:
futures = [executor.submit(do_task, a, n, r, i) for (a, n, r, i) in tasks]
for fut in as_completed(futures):
agent, num, repo, ok, msg = fut.result()
with lock:
completed += 1
if completed % 50 == 0:
print(f" Progress: {completed}/{len(tasks)}")
if ok:
print(f" ✓ #{num} ({repo})")
else:
print(f" ✗ #{num} ({repo}): {msg}")
# Recompute after counts for summary
print("\nRecomputing after counts ...")
after_issues = fetch_all_open_issues(token)
by_agent_after = defaultdict(int)
for iss in after_issues:
for a in (iss.get("assignees") or []):
by_agent_after[a.get("login")] += 1
for agent in by_agent:
by_agent[agent]["after"] = by_agent_after.get(agent, 0)
summary = build_summary(by_agent, unassignment_map)
print("\n=== SUMMARY ===")
print(summary)
if args.output:
Path(args.output).write_text(summary)
print(f"Summary written to {args.output}")
if args.comment_on:
body = f"Open-load cap enforcement run (cap={args.cap}):\n\n```\n{summary}\n```"
_, status = api("POST", f"/repos/{ORG}/timmy-home/issues/{args.comment_on}/comments", token, {"body": body})
if status in (200, 201):
print(f"\nSummary posted as comment on timmy-home issue #{args.comment_on}")
else:
print(f"\nWARNING: failed to post comment (HTTP {status})")
return 0
if __name__ == "__main__":
sys.exit(main())