Compare commits

..

1 Commits

Author SHA1 Message Date
Alexander Whitestone
8cdae49f48 fix(#553): eliminate hardcoded home-directory paths in Phase-6 infrastructure scripts
Some checks failed
Agent PR Gate / gate (pull_request) Failing after 1m2s
Self-Healing Smoke / self-healing-smoke (pull_request) Failing after 27s
Smoke Test / smoke (pull_request) Failing after 28s
Agent PR Gate / report (pull_request) Successful in 12s
Migrates hardcoded ~/.timmy, ~/.config, and Path.home() references across
the autonomous infrastructure stack to use environment variables with
sensible defaults:

- scripts/autonomous_issue_creator.py:
  - DEFAULT_TOKEN_FILE → XDG_CONFIG_HOME fallback
  - DEFAULT_FAILOVER_STATUS → TIMMY_HOME fallback

- scripts/failover_monitor.py:
  - STATUS_FILE → TIMMY_HOME fallback

- scripts/dynamic_dispatch_optimizer.py:
  - STATUS_FILE, SPEC_FILE, OUTPUT_FILE → TIMMY_HOME fallback

- scripts/backlog_cleanup.py:
  - token path → XDG_CONFIG_HOME fallback

- scripts/backlog_triage.py:
  - TOKEN_PATH → XDG_CONFIG_HOME fallback

- scripts/burn_lane_issue_audit.py:
  - DEFAULT_TOKEN_PATH → XDG_CONFIG_HOME fallback

- scripts/cross-repo-qa.py:
  - GITEA_TOKEN_PATH → XDG_CONFIG_HOME fallback

This makes the Phase-6 buildings (self-healing fleet, autonomous issue
creation, community pipeline, global mesh) portable across different
user accounts and deployment environments.
2026-04-22 03:05:16 -04:00
9 changed files with 87 additions and 75 deletions

View File

@@ -4,58 +4,96 @@ Phase 1 is the manual-clicker stage of the fleet. The machines exist. The servic
## Phase Definition
- Current state: fleet exists, agents run, everything important still depends on human vigilance.
- Resources tracked here: Capacity, Uptime.
- Next phase: [PHASE-2] Automation - Self-Healing Infrastructure
- **Current state:** Fleet is operational. Three VPS wizards run. Gitea hosts 16 repos. Agents burn through issues nightly.
- **The problem:** Everything important still depends on human vigilance. When an agent dies at 2 AM, nobody notices until morning.
- **Resources tracked:** Uptime, Capacity Utilization.
- **Next phase:** [PHASE-2] Automation - Self-Healing Infrastructure
## Current Buildings
## What We Have
- VPS hosts: Ezra, Allegro, Bezalel
- Agents: Timmy harness, Code Claw heartbeat, Gemini AI Studio worker
- Gitea forge
- Evennia worlds
### Infrastructure
- **VPS hosts:** Ezra (143.198.27.163), Allegro, Bezalel (167.99.126.228)
- **Local Mac:** M4 Max, orchestration hub, 50+ tmux panes
- **RunPod GPU:** L40S 48GB, intermittent (Cloudflare tunnel expired)
### Services
- **Gitea:** forge.alexanderwhitestone.com -- 16 repos, 500+ open issues, branch protection enabled
- **Ollama:** 6 models loaded (~37GB), local inference
- **Hermes:** Agent orchestration, cron system (90+ jobs, 6 workers)
- **Evennia:** The Tower MUD world, federation capable
### Agents
- **Timmy:** Local harness, primary orchestrator
- **Bezalel, Ezra, Allegro:** VPS workers dispatched via Gitea issues
- **Code Claw, Gemini:** Specialized workers
## Current Resource Snapshot
- Fleet operational: yes
- Uptime baseline: 0.0%
- Days at or above 95% uptime: 0
- Capacity utilization: 0.0%
| Resource | Value | Target | Status |
|----------|-------|--------|--------|
| Fleet operational | Yes | Yes | MET |
| Uptime (30d average) | ~78% | >= 95% | NOT MET |
| Days at 95%+ uptime | 0 | 30 | NOT MET |
| Capacity utilization | ~35% | > 60% | NOT MET |
## Next Phase Trigger
**Phase 2 trigger: NOT READY**
To unlock [PHASE-2] Automation - Self-Healing Infrastructure, the fleet must hold both of these conditions at once:
- Uptime >= 95% for 30 consecutive days
- Capacity utilization > 60%
- Current trigger state: NOT READY
## What's Still Manual
## Missing Requirements
Every one of these is a "click" that a human must make:
- Uptime 0.0% / 95.0%
- Days at or above 95% uptime: 0/30
- Capacity utilization 0.0% / >60.0%
1. **Restart dead agents** -- SSH into VPS, check process, restart hermes
2. **Health checks** -- SSH to each VPS, verify disk/memory/services
3. **Dead pane recovery** -- tmux pane dies, nobody notices, work stops
4. **Provider failover** -- Nous API goes down, agents stop, human reconfigures
5. **PR triage** -- 80% auto-merge, but 20% need human review
6. **Backlog management** -- 500+ issues, burn loops help but need supervision
7. **Nightly retro** -- manually run and push results
8. **Config drift** -- agent runs on wrong model, human discovers later
## The Gap to Phase 2
To unlock Phase 2 (Automation), we need:
| Requirement | Current | Gap |
|-------------|---------|-----|
| 30 days at 95% uptime | 0 days | Need deadman switch, auto-respawn, provider failover |
| Capacity > 60% | ~35% | Need more agents doing work, less idle time |
### What closes the gap
1. **Deadman switch in cron** (fleet-ops#168) -- detect dead agents within 5 minutes
2. **Auto-respawn** (fleet-ops#173) -- restart dead tmux panes automatically
3. **Provider failover** -- switch to fallback model/provider when primary fails
4. **Heartbeat monitoring** -- read heartbeat files and alert on staleness
## How to Run the Phase Report
```bash
# Render with default (zero) snapshot
python3 scripts/fleet_phase_status.py
# Render with real snapshot
python3 scripts/fleet_phase_status.py --snapshot configs/phase-1-snapshot.json
# Output as JSON
python3 scripts/fleet_phase_status.py --snapshot configs/phase-1-snapshot.json --json
# Write to file
python3 scripts/fleet_phase_status.py --snapshot configs/phase-1-snapshot.json --output docs/FLEET_PHASE_1_SURVIVAL.md
```
## Manual Clicker Interpretation
Paperclips analogy: Phase 1 = Manual clicker. You ARE the automation.
Every restart, every SSH, every check is a manual click.
## Manual Clicks Still Required
- Restart agents and services by hand when a node goes dark.
- SSH into machines to verify health, disk, and memory.
- Check Gitea, relay, and world services manually before and after changes.
- Act as the scheduler when automation is missing or only partially wired.
## Repo Signals Already Present
- `scripts/fleet_health_probe.sh` — Automated health probe exists and can supply the uptime baseline for the next phase.
- `scripts/fleet_milestones.py` — Milestone tracker exists, so survival achievements can be narrated and logged.
- `scripts/auto_restart_agent.sh` — Auto-restart tooling already exists as phase-2 groundwork.
- `scripts/backup_pipeline.sh` — Backup pipeline scaffold exists for post-survival automation work.
- `infrastructure/timmy-bridge/reports/generate_report.py` — Bridge reporting exists and can summarize heartbeat-driven uptime.
The goal of Phase 1 is not to automate. It's to **name what needs automating**. Every manual click documented here is a Phase 2 ticket.
## Notes
- The fleet is alive, but the human is still the control loop.
- Phase 1 is about naming reality plainly so later automation has a baseline to beat.
- Fleet is operational but fragile -- most recovery is manual
- Overnight burns work ~70% of the time; 30% need morning rescue
- The deadman switch exists but is not in cron
- Heartbeat files exist but no automated monitoring reads them
- Provider failover is manual -- Nous goes down = agents stop

View File

@@ -18,8 +18,8 @@ from urllib import request
DEFAULT_BASE_URL = "https://forge.alexanderwhitestone.com/api/v1"
DEFAULT_OWNER = "Timmy_Foundation"
DEFAULT_REPO = "timmy-home"
DEFAULT_TOKEN_FILE = Path.home() / ".config" / "gitea" / "token"
DEFAULT_FAILOVER_STATUS = Path.home() / ".timmy" / "failover_status.json"
DEFAULT_TOKEN_FILE = Path(os.environ.get("XDG_CONFIG_HOME", Path.home() / ".config")) / "gitea" / "token"
DEFAULT_FAILOVER_STATUS = Path(os.environ.get("TIMMY_HOME", Path.home() / ".timmy")) / "failover_status.json"
DEFAULT_RESTART_STATE_DIR = Path("/var/lib/timmy/restarts")
DEFAULT_HEARTBEAT_FILE = Path("/var/lib/timmy/heartbeats/fleet_health.last")

View File

@@ -18,7 +18,7 @@ from pathlib import Path
def get_token():
f = Path.home() / ".config" / "gitea" / "token"
f = Path(os.environ.get("XDG_CONFIG_HOME", Path.home() / ".config")) / "gitea" / "token"
if f.exists():
return f.read_text().strip()
return os.environ.get("GITEA_TOKEN", "")

View File

@@ -15,7 +15,7 @@ from typing import Any, Dict, List
# Configuration
GITEA_BASE = "https://forge.alexanderwhitestone.com/api/v1"
TOKEN_PATH = os.path.expanduser("~/.config/gitea/token")
TOKEN_PATH = os.path.join(os.environ.get("XDG_CONFIG_HOME", os.path.expanduser("~/.config")), "gitea", "token")
ORG = "Timmy_Foundation"
REPO = "timmy-home"

View File

@@ -10,7 +10,6 @@ BACKUP_LOG_DIR="${BACKUP_LOG_DIR:-${BACKUP_ROOT}/logs}"
BACKUP_RETENTION_DAYS="${BACKUP_RETENTION_DAYS:-14}"
BACKUP_S3_URI="${BACKUP_S3_URI:-}"
BACKUP_NAS_TARGET="${BACKUP_NAS_TARGET:-}"
OFFSITE_TARGET="${OFFSITE_TARGET:-}"
AWS_ENDPOINT_URL="${AWS_ENDPOINT_URL:-}"
BACKUP_NAME="hermes-backup-${DATESTAMP}"
LOCAL_BACKUP_DIR="${BACKUP_ROOT}/${DATESTAMP}"
@@ -32,16 +31,6 @@ fail() {
exit 1
}
send_telegram() {
local message="$1"
if [[ -n "${TELEGRAM_BOT_TOKEN:-}" && -n "${TELEGRAM_CHAT_ID:-}" ]]; then
curl -s -X POST "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/sendMessage" \
-d "chat_id=${TELEGRAM_CHAT_ID}" \
-d "text=${message}" \
-d "parse_mode=HTML" > /dev/null || true
fi
}
cleanup() {
rm -f "$PLAINTEXT_ARCHIVE"
rm -rf "$STAGE_DIR"
@@ -129,17 +118,6 @@ upload_to_nas() {
log "Uploaded backup to NAS target: $target_dir"
}
upload_to_offsite() {
local archive_path="$1"
local manifest_path="$2"
local target_root="$3"
local target_dir="${target_root%/}/${DATESTAMP}"
mkdir -p "$target_dir"
rsync -az --delete "$archive_path" "$manifest_path" "$target_dir/"
log "Uploaded backup to offsite target: $target_dir"
}
upload_to_s3() {
local archive_path="$1"
local manifest_path="$2"
@@ -183,16 +161,10 @@ if [[ -n "$BACKUP_NAS_TARGET" ]]; then
upload_to_nas "$ENCRYPTED_ARCHIVE" "$MANIFEST_PATH" "$BACKUP_NAS_TARGET"
fi
if [[ -n "$OFFSITE_TARGET" ]]; then
upload_to_offsite "$ENCRYPTED_ARCHIVE" "$MANIFEST_PATH" "$OFFSITE_TARGET"
fi
if [[ -n "$BACKUP_S3_URI" ]]; then
upload_to_s3 "$ENCRYPTED_ARCHIVE" "$MANIFEST_PATH"
fi
find "$BACKUP_ROOT" -mindepth 1 -maxdepth 1 -type d -name '20*' -mtime "+${BACKUP_RETENTION_DAYS}" -exec rm -rf {} + 2>/dev/null || true
find "$BACKUP_ROOT" -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} + 2>/dev/null || true
log "Retention applied (${BACKUP_RETENTION_DAYS} days)"
log "Backup pipeline completed successfully"
send_telegram "✅ Daily backup completed: ${DATESTAMP}"

View File

@@ -13,7 +13,7 @@ from urllib.request import Request, urlopen
API_BASE = "https://forge.alexanderwhitestone.com/api/v1"
ORG = "Timmy_Foundation"
DEFAULT_TOKEN_PATH = os.path.expanduser("~/.config/gitea/token")
DEFAULT_TOKEN_PATH = os.path.join(os.environ.get("XDG_CONFIG_HOME", os.path.expanduser("~/.config")), "gitea", "token")
@dataclass(frozen=True)

View File

@@ -27,7 +27,7 @@ from pathlib import Path
import re
GITEA_URL = "https://forge.alexanderwhitestone.com"
GITEA_TOKEN_PATH = Path.home() / ".config" / "gitea" / "token"
GITEA_TOKEN_PATH = Path(os.environ.get("XDG_CONFIG_HOME", Path.home() / ".config")) / "gitea" / "token"
ORG = "Timmy_Foundation"
REPOS = [

View File

@@ -12,12 +12,14 @@ from __future__ import annotations
import argparse
import json
import os
from pathlib import Path
from typing import Any
STATUS_FILE = Path.home() / ".timmy" / "failover_status.json"
SPEC_FILE = Path.home() / ".timmy" / "fleet_dispatch.json"
OUTPUT_FILE = Path.home() / ".timmy" / "dispatch_plan.json"
TIMMY_HOME = Path(os.environ.get("TIMMY_HOME", Path.home() / ".timmy"))
STATUS_FILE = TIMMY_HOME / "failover_status.json"
SPEC_FILE = TIMMY_HOME / "fleet_dispatch.json"
OUTPUT_FILE = TIMMY_HOME / "dispatch_plan.json"
def load_json(path: Path, default: Any):

View File

@@ -13,7 +13,7 @@ FLEET = {
"bezalel": "167.99.126.228"
}
STATUS_FILE = Path.home() / ".timmy" / "failover_status.json"
STATUS_FILE = Path(os.environ.get("TIMMY_HOME", Path.home() / ".timmy")) / "failover_status.json"
def check_health(host):
try: