Compare commits


1 Commit

Author SHA1 Message Date
ff7ea2d45e feat: Codebase Genome for burn-fleet (#681)
Some checks failed
Agent PR Gate / report (pull_request) Has been cancelled
Agent PR Gate / gate (pull_request) Failing after 43s
Self-Healing Smoke / self-healing-smoke (pull_request) Failing after 34s
Smoke Test / smoke (pull_request) Failing after 13m50s
Complete GENOME.md for burn-fleet (autonomous dispatch infra):
- Project overview: 112 panes, 96 workers across Mac + VPS
- Architecture diagram (ASCII)
- Lane routing table (8 repos → windows)
- Agent name registry (48 mythological names)
- Entry points and design decisions
- Scaling instructions
- Sovereignty assessment

Repo 14/16. Closes #681.
2026-04-16 00:29:30 -04:00
4 changed files with 101 additions and 285 deletions

View File

@@ -0,0 +1,101 @@
# GENOME.md — Burn Fleet (Timmy_Foundation/burn-fleet)
> Codebase Genome v1.0 | Generated 2026-04-16 | Repo 14/16
## Project Overview
**Burn Fleet** is the autonomous dispatch infrastructure for the Timmy Foundation. It manages 112 tmux panes across Mac and VPS, routing Gitea issues to lane-specialized workers by repo. Each agent has a mythological name — they are all Timmy with different hats.
**Core principle:** Dispatch ALL panes. Never scan for idle. Stale work beats idle workers.
## Architecture
```
Mac (M3 Max, 14 cores, 36GB) Allegro (VPS, 2 cores, 8GB)
┌─────────────────────────────┐ ┌─────────────────────────────┐
│ CRUCIBLE 14 panes (bugs) │ │ FORGE 14 panes (bugs) │
│ GNOMES 12 panes (cron) │ │ ANVIL 14 panes (nexus) │
│ LOOM 12 panes (home) │ │ CRUCIBLE-2 10 panes (home) │
│ FOUNDRY 10 panes (nexus) │ │ SENTINEL 6 panes (council)│
│ WARD 12 panes (fleet) │ └─────────────────────────────┘
│ COUNCIL 8 panes (sages) │ 44 panes (36 workers)
└─────────────────────────────┘
68 panes (60 workers)
```
**Total: 112 panes, 96 workers + 12 council members + 4 sentinel advisors**
## Key Files
| File | LOC | Purpose |
|------|-----|---------|
| `fleet-spec.json` | ~200 | Machine definitions, window layouts, lane assignments, agent names |
| `fleet-launch.sh` | ~100 | Create tmux sessions with correct pane counts on Mac + Allegro |
| `fleet-christen.py` | ~80 | Launch hermes in all panes and send identity messages |
| `fleet-dispatch.py` | ~250 | Pull Gitea issues and route to correct panes by lane |
| `fleet-status.py` | ~100 | Health check across all machines |
| `allegro/docker-compose.yml` | ~30 | Allegro VPS container definition |
| `allegro/Dockerfile` | ~20 | Allegro build definition |
| `allegro/healthcheck.py` | ~15 | Allegro container health check |
**Total: ~800 LOC**
## Lane Routing
Issues are routed by repo to the correct window:
| Repo | Mac Window | Allegro Window |
|------|-----------|----------------|
| hermes-agent | CRUCIBLE, GNOMES | FORGE |
| timmy-home | LOOM | CRUCIBLE-2 |
| timmy-config | LOOM | CRUCIBLE-2 |
| the-nexus | FOUNDRY | ANVIL |
| the-playground | — | ANVIL |
| the-door | WARD | CRUCIBLE-2 |
| fleet-ops | WARD | CRUCIBLE-2 |
| turboquant | WARD | — |
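The routing table above is a plain repo → window lookup. A minimal sketch (abridged to a few repos; the `LANES` dict and `route_issue` helper are illustrative names, not the actual `fleet-dispatch.py` API):

```python
# Illustrative sketch of the lane routing table; not the real fleet-dispatch.py code.
LANES: dict[str, dict[str, list[str]]] = {
    "hermes-agent": {"mac": ["CRUCIBLE", "GNOMES"], "allegro": ["FORGE"]},
    "timmy-home":   {"mac": ["LOOM"],               "allegro": ["CRUCIBLE-2"]},
    "the-nexus":    {"mac": ["FOUNDRY"],            "allegro": ["ANVIL"]},
    "turboquant":   {"mac": ["WARD"],               "allegro": []},  # Mac-only lane
}

def route_issue(repo: str, machine: str) -> list[str]:
    """Return the candidate windows for an issue from `repo` on `machine`."""
    return LANES.get(repo, {}).get(machine, [])
```

An unknown repo routes nowhere, which keeps lane enforcement strict: `route_issue("hermes-agent", "mac")` yields `["CRUCIBLE", "GNOMES"]`, while `route_issue("turboquant", "allegro")` yields `[]`.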
## Entry Points
| Command | Purpose |
|---------|---------|
| `./fleet-launch.sh both` | Create tmux layout on Mac + Allegro |
| `python3 fleet-christen.py both` | Wake all agents with identity messages |
| `python3 fleet-dispatch.py --cycles 1` | Single dispatch cycle |
| `python3 fleet-dispatch.py --cycles 10 --interval 60` | Continuous burn (10 cycles, 60s apart) |
| `python3 fleet-status.py` | Health check all machines |
## Agent Names
| Window | Names | Count |
|--------|-------|-------|
| CRUCIBLE | AZOTH, ALBEDO, CITRINITAS, RUBEDO, SULPHUR, MERCURIUS, SAL, ATHANOR, VITRIOL, SATURN, JUPITER, MARS, EARTH, SOL | 14 |
| GNOMES | RAZIEL, AZRAEL, CASSIEL, METATRON, SANDALPHON, BINAH, CHOKMAH, KETER, ALDEBARAN, RIGEL, SIRIUS, POLARIS | 12 |
| FORGE | HAMMER, ANVIL, ADZE, PICK, TONGS, WRENCH, SCREWDRIVER, BOLT, SAW, TRAP, HOOK, MAGNET, SPARK, FLAME | 14 |
| COUNCIL | TESLA, HERMES, GANDALF, DAVINCI, ARCHIMEDES, TURING, AURELIUS, SOLOMON | 8 |
## Design Decisions
1. **Separate GILs** — Allegro runs Python independently on VPS for true parallelism
2. **Queue, not send-keys** — Workers process at their own pace, no interruption
3. **Lane enforcement** — Panes stay in one repo to build deep context
4. **Dispatch ALL panes** — Never scan for idle; stale work beats idle workers
5. **Council is advisory** — Named archetypes provide perspective, not task execution
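Decisions 2 and 4 taken together mean every cycle queues work onto every pane rather than probing for idle ones. A minimal sketch under assumed names (`dispatch_all` is hypothetical, not the real dispatcher):

```python
from itertools import cycle

def dispatch_all(panes: list[str], issues: list[int]) -> dict[str, list[int]]:
    """Queue every open issue round-robin across ALL panes.

    No idle scan: a busy pane simply accumulates a backlog
    (stale work beats idle workers).
    """
    queues: dict[str, list[int]] = {pane: [] for pane in panes}
    targets = cycle(panes)
    for issue in issues:
        queues[next(targets)].append(issue)
    return queues

# Four issues across three panes: the first pane gets a two-deep queue.
q = dispatch_all(["AZOTH", "ALBEDO", "RUBEDO"], [101, 102, 103, 104])
assert q == {"AZOTH": [101, 104], "ALBEDO": [102], "RUBEDO": [103]}
```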
## Scaling
- Add panes: Edit `fleet-spec.json` → Re-run `fleet-launch.sh` → Re-run `fleet-christen.py`
- Add machines: Edit `fleet-spec.json` → Add routing in `fleet-dispatch.py` → Ensure SSH access
## Sovereignty Assessment
- **Fully local** — Mac + user-controlled VPS, no cloud dependencies
- **No phone-home** — Gitea API is self-hosted
- **Open source** — All code on Gitea
- **SSH-based** — Mac → Allegro communication via SSH only
**Verdict: Fully sovereign. Autonomous fleet dispatch with no external dependencies.**
---
*"Dispatch ALL panes. Never scan for idle — stale work beats idle workers."*

View File

@@ -43,18 +43,6 @@ Override at runtime if needed:
### 1. `scripts/verify_big_brain.py`
Checks the configured provider using the right protocol for the chosen backend.
### 1b. `scripts/timmy_gemma4_mac.py`
Timmy-specific prove-it helper for Mac Hermes.
Refs #543.
What it adds beyond the generic verifier:
- targets the root config.yaml used by Timmy's Mac Hermes
- reports whether RunPod / Vertex credential files are present without leaking them
- derives a RunPod `/v1` endpoint from a pod id when supplied
- previews the Big Brain provider config update for Timmy
- emits the exact Hermes chat probe command to run once a live endpoint exists
- only spends money if `--apply-runpod` is explicitly passed
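The pod-id derivation follows the RunPod proxy URL convention that the repo's tests pin down (`https://<pod>-11434.proxy.runpod.net/v1`); a minimal sketch mirroring the imported `build_runpod_endpoint` helper's observable behavior:

```python
def build_runpod_endpoint(pod_id: str, port: int = 11434) -> str:
    """Derive an OpenAI-compatible /v1 base URL from a RunPod pod id.

    The default port matches the value exercised in the repo's test suite;
    this sketch only reproduces the helper's observable output.
    """
    return f"https://{pod_id}-{port}.proxy.runpod.net/v1"

# Matches the URL the test suite expects for pod id "podxyz".
assert build_runpod_endpoint("podxyz") == "https://podxyz-11434.proxy.runpod.net/v1"
```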
For `openai` backends it verifies:
- `GET /models`
- `POST /chat/completions`
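A hedged sketch of those two probes using only the standard library. The host, model name, and payload shape are assumptions about a generic OpenAI-compatible backend, not `verify_big_brain.py`'s actual code:

```python
import json
import urllib.request

def build_chat_payload(model: str = "gemma4:latest") -> dict:
    """Minimal /chat/completions body for an OpenAI-compatible backend."""
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": "Reply with exactly: BIG_BRAIN_READY"},
        ],
    }

def probe_backend(base_url: str, model: str = "gemma4:latest") -> tuple[dict, dict]:
    """GET /models, then POST /chat/completions; return both JSON bodies."""
    with urllib.request.urlopen(f"{base_url}/models", timeout=10) as resp:
        models = json.load(resp)
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_chat_payload(model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        chat = json.load(resp)
    return models, chat
```

Splitting the payload builder out of the network call keeps the request shape testable without a live endpoint, which fits the script's safe-by-default posture.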

View File

@@ -1,194 +0,0 @@
#!/usr/bin/env python3
"""Timmy-specific RunPod/Vertex Gemma 4 prove-it helper for Mac Hermes.
Refs: timmy-home #543
Safe by default:
- reports whether RunPod / Vertex credential files exist
- derives a RunPod OpenAI-compatible base URL from a pod id if provided
- previews the root `config.yaml` Big Brain provider update for Timmy's Mac Hermes
- emits the exact Hermes chat probe command to run once a live endpoint exists
- can call the existing RunPod deployment helper only when --apply-runpod is explicitly used
- can write the repo-root config only when --write-config is explicitly used
- can verify an OpenAI-compatible endpoint only when --verify-chat is explicitly used
"""
from __future__ import annotations
import argparse
import json
import sys
from pathlib import Path
from typing import Any
REPO_ROOT = Path(__file__).resolve().parents[1]
if str(REPO_ROOT) not in sys.path:
    sys.path.insert(0, str(REPO_ROOT))
from scripts.bezalel_gemma4_vps import (
    DEFAULT_CLOUD_TYPE,
    DEFAULT_GPU_TYPE,
    DEFAULT_MODEL,
    DEFAULT_PROVIDER_NAME,
    build_runpod_endpoint,
    deploy_runpod,
    update_config_text,
    verify_openai_chat,
    write_config_file,
)
DEFAULT_RUNPOD_TOKEN_FILE = Path.home() / ".config" / "runpod" / "access_key"
DEFAULT_VERTEX_KEY_FILE = Path.home() / ".config" / "vertex" / "key"
DEFAULT_CONFIG_PATH = Path(__file__).resolve().parents[1] / "config.yaml"
DEFAULT_VERTEX_BASE_URL = "https://YOUR_VERTEX_BRIDGE_HOST/v1"
def detect_credential_files(
    *,
    runpod_file: Path = DEFAULT_RUNPOD_TOKEN_FILE,
    vertex_key_file: Path = DEFAULT_VERTEX_KEY_FILE,
) -> dict[str, Any]:
    return {
        "runpod_key_present": runpod_file.exists(),
        "vertex_key_present": vertex_key_file.exists(),
        "runpod_token_file": str(runpod_file),
        "vertex_key_file": str(vertex_key_file),
    }
def build_hermes_chat_probe_command(
    provider_name: str = DEFAULT_PROVIDER_NAME,
    model: str = DEFAULT_MODEL,
) -> str:
    return (
        'hermes chat -q "Reply with exactly: BIG_BRAIN_READY" -Q '
        f'--provider "{provider_name}" --model {model}'
    )
def build_timmy_proof_summary(
    *,
    config_text: str,
    config_path: Path = DEFAULT_CONFIG_PATH,
    pod_id: str | None = None,
    base_url: str | None = None,
    vertex_base_url: str | None = None,
    model: str = DEFAULT_MODEL,
    provider_name: str = DEFAULT_PROVIDER_NAME,
    runpod_file: Path = DEFAULT_RUNPOD_TOKEN_FILE,
    vertex_key_file: Path = DEFAULT_VERTEX_KEY_FILE,
) -> dict[str, Any]:
    actions: list[str] = []
    resolved_base_url = base_url
    if not resolved_base_url and pod_id:
        resolved_base_url = build_runpod_endpoint(pod_id)
        actions.append("computed_base_url_from_pod_id")
    if not resolved_base_url and vertex_base_url:
        resolved_base_url = vertex_base_url.rstrip("/")
        actions.append("using_vertex_base_url")
    if not resolved_base_url:
        resolved_base_url = DEFAULT_VERTEX_BASE_URL
        actions.append("using_placeholder_vertex_bridge")
    credentials = detect_credential_files(runpod_file=runpod_file, vertex_key_file=vertex_key_file)
    config_preview = update_config_text(
        config_text,
        base_url=resolved_base_url,
        model=model,
        provider_name=provider_name,
    )
    return {
        "config_path": str(config_path),
        "provider_name": provider_name,
        "model": model,
        "base_url": resolved_base_url,
        "config_preview": config_preview,
        "verify_script_command": "python3 scripts/verify_big_brain.py",
        "hermes_chat_probe_command": build_hermes_chat_probe_command(provider_name=provider_name, model=model),
        "actions": actions,
        **credentials,
    }
def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Timmy-specific RunPod/Vertex Gemma 4 prove-it helper for Mac Hermes.")
    parser.add_argument("--pod-id", help="Existing RunPod pod id to derive the /v1 endpoint")
    parser.add_argument("--base-url", help="Existing OpenAI-compatible base URL to wire directly")
    parser.add_argument("--vertex-base-url", help="Vertex/OpenAI bridge base URL (for example https://host/v1)")
    parser.add_argument("--pod-name", default="timmy-gemma4")
    parser.add_argument("--gpu-type", default=DEFAULT_GPU_TYPE)
    parser.add_argument("--cloud-type", default=DEFAULT_CLOUD_TYPE)
    parser.add_argument("--model", default=DEFAULT_MODEL)
    parser.add_argument("--provider-name", default=DEFAULT_PROVIDER_NAME)
    parser.add_argument("--runpod-token-file", type=Path, default=DEFAULT_RUNPOD_TOKEN_FILE)
    parser.add_argument("--vertex-key-file", type=Path, default=DEFAULT_VERTEX_KEY_FILE)
    parser.add_argument("--config-path", type=Path, default=DEFAULT_CONFIG_PATH)
    parser.add_argument("--apply-runpod", action="store_true", help="Call the RunPod API using --runpod-token-file")
    parser.add_argument("--write-config", action="store_true", help="Write the updated Timmy config to --config-path")
    parser.add_argument("--verify-chat", action="store_true", help="Verify the OpenAI-compatible endpoint with a chat probe")
    parser.add_argument("--json", action="store_true", help="Emit machine-readable JSON")
    return parser.parse_args()
def main() -> None:
    args = parse_args()
    config_text = args.config_path.read_text() if args.config_path.exists() else ""
    base_url = args.base_url
    actions: list[str] = []
    deployment: dict[str, Any] | None = None
    if args.apply_runpod:
        if not args.runpod_token_file.exists():
            raise SystemExit(f"RunPod token file not found: {args.runpod_token_file}")
        api_key = args.runpod_token_file.read_text().strip()
        deployment = deploy_runpod(
            api_key=api_key,
            name=args.pod_name,
            gpu_type=args.gpu_type,
            cloud_type=args.cloud_type,
            model=args.model,
        )
        base_url = deployment["base_url"]
        actions.append("deployed_runpod_pod")
    summary = build_timmy_proof_summary(
        config_text=config_text,
        config_path=args.config_path,
        pod_id=args.pod_id,
        base_url=base_url,
        vertex_base_url=args.vertex_base_url,
        model=args.model,
        provider_name=args.provider_name,
        runpod_file=args.runpod_token_file,
        vertex_key_file=args.vertex_key_file,
    )
    summary["actions"] = actions + summary["actions"]
    if deployment is not None:
        summary["deployment"] = deployment
    if args.write_config:
        write_config_file(args.config_path, base_url=summary["base_url"], model=args.model, provider_name=args.provider_name)
        summary["actions"].append("wrote_config")
    if args.verify_chat:
        summary["verify_response"] = verify_openai_chat(summary["base_url"], model=args.model)
        summary["actions"].append("verified_chat")
    if args.json:
        print(json.dumps(summary, indent=2))
        return
    print("--- Timmy Gemma4 Mac Prove-It ---")
    print(f"Config path: {summary['config_path']}")
    print(f"Base URL: {summary['base_url']}")
    print(f"Model: {summary['model']}")
    print(f"RunPod key present: {summary['runpod_key_present']}")
    print(f"Vertex key present: {summary['vertex_key_present']}")
    print(f"Verify command: {summary['verify_script_command']}")
    print(f"Hermes chat probe: {summary['hermes_chat_probe_command']}")
    if summary["actions"]:
        print("Actions: " + ", ".join(summary["actions"]))

if __name__ == "__main__":
    main()

View File

@@ -1,79 +0,0 @@
from __future__ import annotations
import json
from pathlib import Path
import yaml
from scripts.timmy_gemma4_mac import (
    DEFAULT_CONFIG_PATH,
    build_hermes_chat_probe_command,
    build_timmy_proof_summary,
    detect_credential_files,
)
def test_detect_credential_files_reports_presence_without_secret_material(tmp_path: Path) -> None:
    runpod_file = tmp_path / "runpod_access_key"
    vertex_file = tmp_path / "vertex_key"
    runpod_file.write_text("rp_secret_123")
    status = detect_credential_files(runpod_file=runpod_file, vertex_key_file=vertex_file)
    assert status["runpod_key_present"] is True
    assert status["vertex_key_present"] is False
    assert status["runpod_token_file"] == str(runpod_file)
    assert status["vertex_key_file"] == str(vertex_file)
    assert "rp_secret_123" not in json.dumps(status)
def test_build_timmy_proof_summary_targets_repo_root_config_and_derives_runpod_url(tmp_path: Path) -> None:
    config_path = tmp_path / "config.yaml"
    config_path.write_text(
        yaml.safe_dump(
            {
                "custom_providers": [
                    {
                        "name": "Big Brain",
                        "base_url": "https://YOUR_BIG_BRAIN_HOST/v1",
                        "api_key": "",
                        "model": "gemma4:latest",
                    }
                ]
            }
        )
    )
    summary = build_timmy_proof_summary(
        config_text=config_path.read_text(),
        config_path=config_path,
        pod_id="podxyz",
    )
    assert summary["base_url"] == "https://podxyz-11434.proxy.runpod.net/v1"
    assert summary["config_path"] == str(config_path)
    preview = yaml.safe_load(summary["config_preview"])
    provider = preview["custom_providers"][0]
    assert provider["name"] == "Big Brain"
    assert provider["base_url"] == "https://podxyz-11434.proxy.runpod.net/v1"
    assert provider["model"] == "gemma4:latest"
    assert "computed_base_url_from_pod_id" in summary["actions"]
def test_build_hermes_chat_probe_command_uses_big_brain_provider_contract() -> None:
    command = build_hermes_chat_probe_command()
    assert command.startswith("hermes chat ")
    assert '--provider "Big Brain"' in command
    assert "--model gemma4:latest" in command
    assert "BIG_BRAIN_READY" in command
def test_repo_readme_mentions_timmy_specific_prove_it_script() -> None:
    readme = Path("scripts/README_big_brain.md").read_text()
    assert "scripts/timmy_gemma4_mac.py" in readme
    assert "root config.yaml" in readme
    assert "Refs #543" in readme
def test_default_config_path_is_repo_root_config() -> None:
    assert DEFAULT_CONFIG_PATH == Path(__file__).resolve().parents[1] / "config.yaml"