Compare commits: fix/534-v2 ... fix/547-ph (1 commit, eb41220ae4)

@@ -4,96 +4,58 @@ Phase 1 is the manual-clicker stage of the fleet. The machines exist. The servic
## Phase Definition

- **Current state:** Fleet is operational. Three VPS wizards run. Gitea hosts 16 repos. Agents burn through issues nightly.
- **The problem:** Everything important still depends on human vigilance. When an agent dies at 2 AM, nobody notices until morning.
- **Resources tracked:** Uptime, Capacity Utilization.
- **Next phase:** [PHASE-2] Automation - Self-Healing Infrastructure
## What We Have

### Infrastructure

- **VPS hosts:** Ezra (143.198.27.163), Allegro, Bezalel (167.99.126.228)
- **Local Mac:** M4 Max, orchestration hub, 50+ tmux panes
- **RunPod GPU:** L40S 48GB, intermittent (Cloudflare tunnel expired)

### Services

- **Gitea:** forge.alexanderwhitestone.com -- 16 repos, 500+ open issues, branch protection enabled
- **Ollama:** 6 models loaded (~37GB), local inference
- **Hermes:** Agent orchestration, cron system (90+ jobs, 6 workers)
- **Evennia:** The Tower MUD world, federation capable

### Agents

- **Timmy:** Local harness, primary orchestrator
- **Bezalel, Ezra, Allegro:** VPS workers dispatched via Gitea issues
- **Code Claw, Gemini:** Specialized workers (Code Claw heartbeat, Gemini AI Studio)
## Current Resource Snapshot

| Resource | Value | Target | Status |
|----------|-------|--------|--------|
| Fleet operational | Yes | Yes | MET |
| Uptime (30d average) | ~78% | >= 95% | NOT MET |
| Days at 95%+ uptime | 0 | 30 | NOT MET |
| Capacity utilization | ~35% | > 60% | NOT MET |
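Mechanically, the phase-2 trigger is a conjunction over these snapshot values. A minimal sketch of that check, using hypothetical field names (the actual `configs/phase-1-snapshot.json` schema may differ):

```python
# Sketch only: field names are assumptions, not the real snapshot schema.
import json

def phase2_ready(snapshot: dict) -> bool:
    """Both conditions must hold at once: 30 days at 95%+ uptime
    and capacity utilization above 60%."""
    return (
        snapshot.get("days_at_95_uptime", 0) >= 30
        and snapshot.get("capacity_utilization", 0.0) > 60.0
    )

snapshot = json.loads('{"days_at_95_uptime": 0, "capacity_utilization": 35.0}')
print(phase2_ready(snapshot))  # → False
```

With the table's current numbers the conjunction fails on both legs, which is why the trigger reads NOT READY below.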
## Next Phase Trigger

To unlock [PHASE-2] Automation - Self-Healing Infrastructure, the fleet must hold both of these conditions at once:

- Uptime >= 95% for 30 consecutive days
- Capacity utilization > 60%

**Phase 2 trigger: NOT READY**

## What's Still Manual
Every one of these is a "click" that a human must make:
1. **Restart dead agents** -- SSH into VPS, check process, restart hermes
2. **Health checks** -- SSH to each VPS, verify disk/memory/services
3. **Dead pane recovery** -- tmux pane dies, nobody notices, work stops
4. **Provider failover** -- Nous API goes down, agents stop, human reconfigures
5. **PR triage** -- 80% auto-merge, but 20% need human review
6. **Backlog management** -- 500+ issues, burn loops help but need supervision
7. **Nightly retro** -- manually run and push results
8. **Config drift** -- agent runs on wrong model, human discovers later
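For click 3, tmux can report dead panes itself. A hedged sketch (assumes `remain-on-exit` is set so exited panes linger instead of closing):

```shell
# List every pane whose process has exited, across all sessions.
# pane_dead is only meaningful when remain-on-exit keeps dead panes around.
tmux list-panes -a -F '#{pane_dead} #{session_name}:#{window_index}.#{pane_index}' 2>/dev/null \
  | awk '$1 == 1 { print "DEAD:", $2 }'
```

Piping that into a respawn command would turn the manual click into a cron job, which is exactly the Phase 2 shape.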
## The Gap to Phase 2

To unlock Phase 2 (Automation), we need:

| Requirement | Current | Gap |
|-------------|---------|-----|
| 30 days at 95% uptime | 0 days | Need deadman switch, auto-respawn, provider failover |
| Capacity > 60% | ~35% | Need more agents doing work, less idle time |

### What closes the gap

1. **Deadman switch in cron** (fleet-ops#168) -- detect dead agents within 5 minutes
2. **Auto-respawn** (fleet-ops#173) -- restart dead tmux panes automatically
3. **Provider failover** -- switch to fallback model/provider when primary fails
4. **Heartbeat monitoring** -- read heartbeat files and alert on staleness
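One way the heartbeat-staleness half of the deadman switch could work, sketched under the assumption that each agent touches a per-agent file on every loop (the directory path is hypothetical, not the fleet's real layout):

```shell
# Hypothetical heartbeat directory; agents touch their own file each cycle.
HEARTBEAT_DIR="${HEARTBEAT_DIR:-/var/run/fleet/heartbeats}"
MAX_AGE_MIN=5   # matches the "detect dead agents within 5 minutes" target

# Any heartbeat file untouched for longer than MAX_AGE_MIN is a dead agent.
stale=$(find "$HEARTBEAT_DIR" -type f -mmin "+${MAX_AGE_MIN}" 2>/dev/null)
if [ -n "$stale" ]; then
  echo "STALE heartbeats:"
  echo "$stale"
fi
```

Dropped into cron every minute, this would close the "heartbeat files exist but nothing reads them" gap noted below.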
## How to Run the Phase Report

```bash
# Render with default (zero) snapshot
python3 scripts/fleet_phase_status.py

# Render with real snapshot
python3 scripts/fleet_phase_status.py --snapshot configs/phase-1-snapshot.json

# Output as JSON
python3 scripts/fleet_phase_status.py --snapshot configs/phase-1-snapshot.json --json

# Write to file
python3 scripts/fleet_phase_status.py --snapshot configs/phase-1-snapshot.json --output docs/FLEET_PHASE_1_SURVIVAL.md
```

With the default (zero) snapshot, the report shows the baseline:

- Uptime 0.0% / 95.0%
- Days at or above 95% uptime: 0/30
- Capacity utilization 0.0% / >60.0%
## Manual Clicker Interpretation

Paperclips analogy: Phase 1 = Manual clicker. You ARE the automation.
Every restart, every SSH, every check is a manual click.

The goal of Phase 1 is not to automate. It's to **name what needs automating**. Every manual click documented here is a Phase 2 ticket.

## Manual Clicks Still Required

- Restart agents and services by hand when a node goes dark.
- SSH into machines to verify health, disk, and memory.
- Check Gitea, relay, and world services manually before and after changes.
- Act as the scheduler when automation is missing or only partially wired.
## Repo Signals Already Present

- `scripts/fleet_health_probe.sh` — Automated health probe exists and can supply the uptime baseline for the next phase.
- `scripts/fleet_milestones.py` — Milestone tracker exists, so survival achievements can be narrated and logged.
- `scripts/auto_restart_agent.sh` — Auto-restart tooling already exists as phase-2 groundwork.
- `scripts/backup_pipeline.sh` — Backup pipeline scaffold exists for post-survival automation work.
- `infrastructure/timmy-bridge/reports/generate_report.py` — Bridge reporting exists and can summarize heartbeat-driven uptime.
## Notes

- Fleet is operational but fragile -- most recovery is manual
- Overnight burns work ~70% of the time; 30% need morning rescue
- The deadman switch exists but is not in cron
- Heartbeat files exist but no automated monitoring reads them
- Provider failover is manual -- Nous goes down = agents stop
- The fleet is alive, but the human is still the control loop.
- Phase 1 is about naming reality plainly so later automation has a baseline to beat.
@@ -1,87 +0,0 @@

# Bezalel World Server Configuration

This directory contains the Evennia server configuration for Bezalel, the forge-and-testbed wizard house.

## Quick Start

To fix the Evennia settings on the Bezalel VPS (104.131.15.18):

```bash
# SSH to Bezalel and run the fix script
ssh root@104.131.15.18 'bash -s' < scripts/fix_evennia_settings.sh
```
Or manually:

```bash
cd /root/wizards/bezalel/evennia/bezalel_world/server/conf

# Copy the fixed settings
cp ~/timmy-home/evennia/bezalel_world/server/conf/settings.py ./settings.py

# Clean and reinitialize DB
cd /root/wizards/bezalel/evennia/bezalel_world
rm -f server/evennia.db3
/root/wizards/bezalel/evennia/venv/bin/evennia migrate

# Create superuser
/root/wizards/bezalel/evennia/venv/bin/python3 -c "
import sys, os
sys.setrecursionlimit(5000)
os.environ['DJANGO_SETTINGS_MODULE'] = 'server.conf.settings'
import django
django.setup()
from evennia.accounts.accounts import AccountDB
AccountDB.objects.create_superuser('Timmy', 'timmy@tower.world', 'timmy123')
"

# Start Evennia
/root/wizards/bezalel/evennia/venv/bin/evennia start
```
## The Fix (Issue #534)

**Problem:** `WEBSERVER_PORTS = [(4101, None)]` — the `None` tuple value crashes Evennia's Twisted port binding with:

```
TypeError: 'NoneType' object cannot be interpreted as an integer
```

**Solution:** Port tuples MUST include a host string:

```python
WEBSERVER_PORTS = [(4001, "0.0.0.0")]
TELNET_PORTS = [(4000, "0.0.0.0")]
WEBSOCKET_PORTS = [(4002, "0.0.0.0")]
```
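A throwaway lint for this class of bug (not part of the repo, just a sketch) could scan a settings file for port tuples whose host is `None` before deploying:

```python
# Hypothetical helper: flag *_PORTS assignments whose tuple host is None,
# the exact shape that crashed issue #534.
import ast
import re

def bad_port_lines(settings_text: str) -> list[str]:
    bad = []
    for line in settings_text.splitlines():
        m = re.match(r"\s*(WEBSERVER|TELNET|WEBSOCKET)_PORTS\s*=\s*(.+)", line)
        if m:
            ports = ast.literal_eval(m.group(2))
            if any(host is None for _port, host in ports):
                bad.append(line.strip())
    return bad

print(bad_port_lines('WEBSERVER_PORTS = [(4101, None)]'))
# → ['WEBSERVER_PORTS = [(4101, None)]']
```

Tuples like `(4001, "0.0.0.0")` pass clean, so this would catch a regression without starting the server.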
## Verification

After starting Evennia:

```bash
evennia status        # Should show Portal and Server running
ss -tlnp | grep 4000  # Telnet port
ss -tlnp | grep 4001  # Web port
ss -tlnp | grep 4002  # WebSocket port
```
Test connection:

```bash
telnet 104.131.15.18 4000
```
## File Structure

```
server/
├── conf/
│   ├── __init__.py
│   └── settings.py   # Main settings file (FIXED for #534)
├── logs/             # Evennia logs
└── evennia.db3       # SQLite database (created at runtime)
```

## Reference

- Gitea Issue: [timmy-home#534](https://forge.alexanderwhitestone.com/Timmy_Foundation/timmy-home/issues/534)
- Evennia Docs: https://www.evennia.com/docs/latest/Setup/Settings-Default.html
- World Plan: docs/BEZALEL_EVENNIA_WORLD.md
@@ -1,87 +0,0 @@

r"""
Evennia settings file for Bezalel World.

This is the sovereign Evennia configuration for the Bezalel forge-and-testbed wizard.
Reference: timmy-home#534

The available options are found in the default settings file found here:
https://www.evennia.com/docs/latest/Setup/Settings-Default.html
"""

import os

# Use the defaults from Evennia unless explicitly overridden
from evennia.settings_default import *

######################################################################
# Evennia base server config
######################################################################

# Server name
SERVERNAME = "bezalel_world"

######################################################################
# Network ports - FIXED for #534
# Port tuples MUST include a host string, not None
######################################################################

# Web server port (HTTP)
WEBSERVER_PORTS = [(4001, "0.0.0.0")]

# Telnet server port
TELNET_PORTS = [(4000, "0.0.0.0")]

# WebSocket port for webclient
WEBSOCKET_PORTS = [(4002, "0.0.0.0")]

######################################################################
# Database configuration
# Using SQLite for sovereign local deployment
######################################################################

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(GAME_DIR, 'server', 'evennia.db3'),
        'USER': '',
        'PASSWORD': '',
        'HOST': '',
        'PORT': ''
    }
}

######################################################################
# Security settings
######################################################################

# To restrict access, bind to localhost in the port tuples above;
# 0.0.0.0 allows external connections.
ALLOWED_HOSTS = ['*']  # VPS needs this for external access

######################################################################
# Game world defaults
######################################################################

# Start location for new characters
DEFAULT_HOME = "#2"  # Limbo

# Start location for guests
GUEST_HOME = "#2"

######################################################################
# Telnet settings
######################################################################

TELNET_INTERFACES = ['0.0.0.0']

######################################################################
# Web server settings
######################################################################

WEBSERVER_INTERFACES = ['0.0.0.0']

######################################################################
# Settings given in secret_settings.py override those in this file.
######################################################################
try:
    from server.conf.secret_settings import *
except ImportError:
    print("secret_settings.py file not found or failed to import.")
@@ -10,6 +10,7 @@ BACKUP_LOG_DIR="${BACKUP_LOG_DIR:-${BACKUP_ROOT}/logs}"
BACKUP_RETENTION_DAYS="${BACKUP_RETENTION_DAYS:-14}"
BACKUP_S3_URI="${BACKUP_S3_URI:-}"
BACKUP_NAS_TARGET="${BACKUP_NAS_TARGET:-}"
OFFSITE_TARGET="${OFFSITE_TARGET:-}"
AWS_ENDPOINT_URL="${AWS_ENDPOINT_URL:-}"
BACKUP_NAME="hermes-backup-${DATESTAMP}"
LOCAL_BACKUP_DIR="${BACKUP_ROOT}/${DATESTAMP}"

@@ -31,6 +32,16 @@ fail() {
  exit 1
}

send_telegram() {
  local message="$1"
  if [[ -n "${TELEGRAM_BOT_TOKEN:-}" && -n "${TELEGRAM_CHAT_ID:-}" ]]; then
    curl -s -X POST "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/sendMessage" \
      -d "chat_id=${TELEGRAM_CHAT_ID}" \
      -d "text=${message}" \
      -d "parse_mode=HTML" > /dev/null || true
  fi
}

cleanup() {
  rm -f "$PLAINTEXT_ARCHIVE"
  rm -rf "$STAGE_DIR"

@@ -118,6 +129,17 @@ upload_to_nas() {
  log "Uploaded backup to NAS target: $target_dir"
}

upload_to_offsite() {
  local archive_path="$1"
  local manifest_path="$2"
  local target_root="$3"

  local target_dir="${target_root%/}/${DATESTAMP}"
  mkdir -p "$target_dir"
  rsync -az --delete "$archive_path" "$manifest_path" "$target_dir/"
  log "Uploaded backup to offsite target: $target_dir"
}

upload_to_s3() {
  local archive_path="$1"
  local manifest_path="$2"

@@ -161,10 +183,16 @@ if [[ -n "$BACKUP_NAS_TARGET" ]]; then
  upload_to_nas "$ENCRYPTED_ARCHIVE" "$MANIFEST_PATH" "$BACKUP_NAS_TARGET"
fi

if [[ -n "$OFFSITE_TARGET" ]]; then
  upload_to_offsite "$ENCRYPTED_ARCHIVE" "$MANIFEST_PATH" "$OFFSITE_TARGET"
fi

if [[ -n "$BACKUP_S3_URI" ]]; then
  upload_to_s3 "$ENCRYPTED_ARCHIVE" "$MANIFEST_PATH"
fi

find "$BACKUP_ROOT" -mindepth 1 -maxdepth 1 -type d -name '20*' -mtime "+${BACKUP_RETENTION_DAYS}" -exec rm -rf {} + 2>/dev/null || true
log "Retention applied (${BACKUP_RETENTION_DAYS} days)"
log "Backup pipeline completed successfully"
send_telegram "✅ Daily backup completed: ${DATESTAMP}"
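The retention `find` above can be exercised on a scratch directory before trusting it with real backups. This sketch is illustrative, not part of the pipeline; `BACKUP_ROOT` here is a throwaway temp dir:

```shell
# Demonstrate the retention expression on fake backup directories.
BACKUP_ROOT=$(mktemp -d)
BACKUP_RETENTION_DAYS=14
mkdir -p "$BACKUP_ROOT/20200101-000000" "$BACKUP_ROOT/20990101-000000"
touch -t 202001010000 "$BACKUP_ROOT/20200101-000000"   # age one dir past retention
find "$BACKUP_ROOT" -mindepth 1 -maxdepth 1 -type d -name '20*' \
  -mtime "+${BACKUP_RETENTION_DAYS}" -exec rm -rf {} + 2>/dev/null || true
ls "$BACKUP_ROOT"   # only the fresh directory survives
```

The `-name '20*'` guard matters: it keeps the sweep away from `logs/` and any other non-datestamped directories under the backup root.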
@@ -15,20 +15,13 @@ EVENNIA_DIR="/root/wizards/bezalel/evennia/bezalel_world"
SETTINGS="${EVENNIA_DIR}/server/conf/settings.py"
VENV_PYTHON="/root/wizards/bezalel/evennia/venv/bin/python3"
VENV_EVENNIA="/root/wizards/bezalel/evennia/venv/bin/evennia"
TIMMY_HOME="${TIMMY_HOME:-/root/timmy-home}"  # Or wherever the repo is cloned

echo "=== Fix Evennia Settings (Bezalel) ==="

# 1. Fix settings.py — prefer repo version, fallback to sed patch
echo "Fixing settings.py..."
if [ -f "${TIMMY_HOME}/evennia/bezalel_world/server/conf/settings.py" ]; then
  # Use the fixed settings from the repo
  mkdir -p "$(dirname "$SETTINGS")"
  cp "${TIMMY_HOME}/evennia/bezalel_world/server/conf/settings.py" "$SETTINGS"
  echo "Copied fixed settings from timmy-home repo."
elif [ -f "$SETTINGS" ]; then
  # Fallback: patch in place — remove broken port lines
  echo "Patching existing settings..."
  sed -i '/WEBSERVER_PORTS/d' "$SETTINGS"
  sed -i '/TELNET_PORTS/d' "$SETTINGS"
  sed -i '/WEBSOCKET_PORTS/d' "$SETTINGS"

@@ -42,7 +35,7 @@ elif [ -f "$SETTINGS" ]; then
  echo 'TELNET_PORTS = [(4000, "0.0.0.0")]' >> "$SETTINGS"
  echo 'WEBSOCKET_PORTS = [(4002, "0.0.0.0")]' >> "$SETTINGS"

  echo "Patched existing settings file."
else
  echo "ERROR: Settings file not found at $SETTINGS"
  exit 1
fi