Compare commits
2 Commits
fix/547-ph ... fix/535
| Author | SHA1 | Date |
|---|---|---|
|  | 477ec86467 |  |
|  | f83fdb7d55 |  |
docs/BEZALEL_TAILSCALE_BOOTSTRAP.md (Normal file, 96 lines)
@@ -0,0 +1,96 @@
# Bezalel Tailscale Bootstrap

Refs #535

This is the repo-side operator packet for installing Tailscale on the Bezalel VPS and verifying the internal network path for federation work.

Important truth:
- issue #535 names `104.131.15.18`
- older Bezalel control-plane docs also mention `159.203.146.185`
- the current source of truth in this repo is `ansible/inventory/hosts.ini`, which currently resolves `bezalel` to `67.205.155.108`

Because of that drift, `scripts/bezalel_tailscale_bootstrap.py` now resolves the target host from `ansible/inventory/hosts.ini` by default instead of trusting a stale hardcoded IP.
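For reference, the resolver matches the standard Ansible `ansible_host=` assignment. The inventory shape it expects (the same shape the new tests exercise) looks like:

```ini
[fleet]
ezra ansible_host=143.198.27.163 ansible_user=root
bezalel ansible_host=67.205.155.108 ansible_user=root
```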

## What the script does

`python3 scripts/bezalel_tailscale_bootstrap.py`

Safe by default:
- builds the remote bootstrap script
- writes it locally to `/tmp/bezalel_tailscale_bootstrap.sh`
- prints the SSH command needed to run it
- does **not** touch the VPS unless `--apply` is passed

When applied, the remote script performs all of the issue's repo-side bootstrap steps:
- installs Tailscale
- runs `tailscale up --ssh --hostname bezalel`
- appends the provided Mac SSH public key to `~/.ssh/authorized_keys`
- prints `tailscale status --json`
- pings the expected peer targets:
  - Mac: `100.124.176.28`
  - Ezra: `100.126.61.75`

## Required secrets / inputs

- Tailscale auth key
- Mac SSH public key

Provide them either directly or through files (a staging sketch follows the list):
- `--auth-key` or `--auth-key-file`
- `--ssh-public-key` or `--ssh-public-key-file`
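One way to stage both inputs as files, matching the paths used in the examples below (the key value here is a placeholder, not a real auth key):

```bash
# Stage the Tailscale auth key with owner-only permissions.
mkdir -p ~/.config/tailscale
printf '%s\n' 'tskey-auth-REPLACE-ME' > ~/.config/tailscale/auth_key
chmod 600 ~/.config/tailscale/auth_key

# The Mac SSH public key usually exists already:
ls ~/.ssh/id_ed25519.pub
```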

## Dry-run example

```bash
python3 scripts/bezalel_tailscale_bootstrap.py \
  --auth-key-file ~/.config/tailscale/auth_key \
  --ssh-public-key-file ~/.ssh/id_ed25519.pub \
  --json
```

This prints:
- resolved host
- host source (`inventory:<path>` when pulled from `ansible/inventory/hosts.ini`)
- local script path
- SSH command to execute
- peer targets
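Illustratively, a dry run against the current inventory prints JSON shaped like this (values are examples; the real `inventory_path` and `host_source` carry absolute paths):

```json
{
  "host": "67.205.155.108",
  "host_source": "inventory:ansible/inventory/hosts.ini",
  "hostname": "bezalel",
  "inventory_path": "ansible/inventory/hosts.ini",
  "script_out": "/tmp/bezalel_tailscale_bootstrap.sh",
  "remote_script_path": "/tmp/bezalel_tailscale_bootstrap.sh",
  "ssh_command": ["ssh", "67.205.155.108", "bash /tmp/bezalel_tailscale_bootstrap.sh"],
  "peer_targets": {"mac": "100.124.176.28", "ezra": "100.126.61.75"},
  "applied": false
}
```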

## Apply example

```bash
python3 scripts/bezalel_tailscale_bootstrap.py \
  --auth-key-file ~/.config/tailscale/auth_key \
  --ssh-public-key-file ~/.ssh/id_ed25519.pub \
  --apply \
  --json
```

## Verifying success after apply

The script now parses the remote stdout into structured verification data:
- `verification.tailscale.self.tailscale_ips`
- `verification.tailscale.self.dns_name`
- `verification.tailscale.peers`
- `verification.ping_ok`
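The `ping_ok` entries come from plain-text markers in the remote stdout; the parser picks up any line of the form `PING_OK:<name>:<ip>`, e.g.:

```
PING_OK:mac:100.124.176.28
PING_OK:ezra:100.126.61.75
```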

A successful run should show:
- at least one Bezalel Tailscale IP under `tailscale_ips`
- `ping_ok.mac = 100.124.176.28`
- `ping_ok.ezra = 100.126.61.75`
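For a quick spot-check of just the ping results, assuming `jq` is available locally:

```bash
python3 scripts/bezalel_tailscale_bootstrap.py \
  --auth-key-file ~/.config/tailscale/auth_key \
  --ssh-public-key-file ~/.ssh/id_ed25519.pub \
  --apply --json | jq '.verification.ping_ok'
```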

## Expected remote install commands

```bash
curl -fsSL https://tailscale.com/install.sh | sh
tailscale up --ssh --hostname bezalel
install -d -m 700 ~/.ssh
touch ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys
tailscale status --json
```
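After a successful apply, the same path can be confirmed from the Mac side (assuming the Mac is already on the tailnet):

```bash
tailscale status | grep -i bezalel
tailscale ping bezalel
```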

## Why this PR does not claim live completion

This repo can safely ship the bootstrap script, host resolution logic, structured proof parsing, and operator packet.
It cannot honestly claim that Bezalel was actually joined to the tailnet unless a human operator runs the script with a real auth key and real SSH access to the VPS.

That means the correct PR language for #535 is advancement, not pretend closure.

@@ -4,58 +4,96 @@ Phase 1 is the manual-clicker stage of the fleet. The machines exist. The servic

## Phase Definition

- Current state: fleet exists, agents run, everything important still depends on human vigilance.
- Resources tracked here: Capacity, Uptime.
- Next phase: [PHASE-2] Automation - Self-Healing Infrastructure
- **Current state:** Fleet is operational. Three VPS wizards run. Gitea hosts 16 repos. Agents burn through issues nightly.
- **The problem:** Everything important still depends on human vigilance. When an agent dies at 2 AM, nobody notices until morning.
- **Resources tracked:** Uptime, Capacity Utilization.
- **Next phase:** [PHASE-2] Automation - Self-Healing Infrastructure

## Current Buildings
## What We Have

- VPS hosts: Ezra, Allegro, Bezalel
- Agents: Timmy harness, Code Claw heartbeat, Gemini AI Studio worker
- Gitea forge
- Evennia worlds

### Infrastructure
- **VPS hosts:** Ezra (143.198.27.163), Allegro, Bezalel (167.99.126.228)
- **Local Mac:** M4 Max, orchestration hub, 50+ tmux panes
- **RunPod GPU:** L40S 48GB, intermittent (Cloudflare tunnel expired)

### Services
- **Gitea:** forge.alexanderwhitestone.com -- 16 repos, 500+ open issues, branch protection enabled
- **Ollama:** 6 models loaded (~37GB), local inference
- **Hermes:** Agent orchestration, cron system (90+ jobs, 6 workers)
- **Evennia:** The Tower MUD world, federation capable

### Agents
- **Timmy:** Local harness, primary orchestrator
- **Bezalel, Ezra, Allegro:** VPS workers dispatched via Gitea issues
- **Code Claw, Gemini:** Specialized workers

## Current Resource Snapshot

- Fleet operational: yes
- Uptime baseline: 0.0%
- Days at or above 95% uptime: 0
- Capacity utilization: 0.0%

| Resource | Value | Target | Status |
|----------|-------|--------|--------|
| Fleet operational | Yes | Yes | MET |
| Uptime (30d average) | ~78% | >= 95% | NOT MET |
| Days at 95%+ uptime | 0 | 30 | NOT MET |
| Capacity utilization | ~35% | > 60% | NOT MET |

## Next Phase Trigger
**Phase 2 trigger: NOT READY**

To unlock [PHASE-2] Automation - Self-Healing Infrastructure, the fleet must hold both of these conditions at once:
- Uptime >= 95% for 30 consecutive days
- Capacity utilization > 60%
- Current trigger state: NOT READY

## What's Still Manual

## Missing Requirements
Every one of these is a "click" that a human must make:

- Uptime 0.0% / 95.0%
- Days at or above 95% uptime: 0/30
- Capacity utilization 0.0% / >60.0%

1. **Restart dead agents** -- SSH into VPS, check process, restart hermes
2. **Health checks** -- SSH to each VPS, verify disk/memory/services
3. **Dead pane recovery** -- tmux pane dies, nobody notices, work stops
4. **Provider failover** -- Nous API goes down, agents stop, human reconfigures
5. **PR triage** -- 80% auto-merge, but 20% need human review
6. **Backlog management** -- 500+ issues, burn loops help but need supervision
7. **Nightly retro** -- manually run and push results
8. **Config drift** -- agent runs on wrong model, human discovers later

## The Gap to Phase 2

To unlock Phase 2 (Automation), we need:

| Requirement | Current | Gap |
|-------------|---------|-----|
| 30 days at 95% uptime | 0 days | Need deadman switch, auto-respawn, provider failover |
| Capacity > 60% | ~35% | Need more agents doing work, less idle time |

### What closes the gap

1. **Deadman switch in cron** (fleet-ops#168) -- detect dead agents within 5 minutes (see the sketch after this list)
2. **Auto-respawn** (fleet-ops#173) -- restart dead tmux panes automatically
3. **Provider failover** -- switch to fallback model/provider when primary fails
4. **Heartbeat monitoring** -- read heartbeat files and alert on staleness
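A minimal sketch of the deadman-switch idea in item 1; the heartbeat directory, file extension, and alert hook here are illustrative assumptions, not repo facts:

```bash
#!/usr/bin/env bash
# Hypothetical deadman switch: flag any heartbeat file older than 5 minutes.
# /var/run/fleet/heartbeats/*.hb and the echo alert are placeholders.
now=$(date +%s)
for hb in /var/run/fleet/heartbeats/*.hb; do
  [ -e "$hb" ] || continue
  mtime=$(stat -c %Y "$hb" 2>/dev/null || stat -f %m "$hb")
  age=$(( now - mtime ))
  if (( age > 300 )); then
    echo "STALE: $hb (${age}s old)"  # swap in auto-respawn / alerting here
  fi
done
```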

## How to Run the Phase Report

```bash
# Render with default (zero) snapshot
python3 scripts/fleet_phase_status.py

# Render with real snapshot
python3 scripts/fleet_phase_status.py --snapshot configs/phase-1-snapshot.json

# Output as JSON
python3 scripts/fleet_phase_status.py --snapshot configs/phase-1-snapshot.json --json

# Write to file
python3 scripts/fleet_phase_status.py --snapshot configs/phase-1-snapshot.json --output docs/FLEET_PHASE_1_SURVIVAL.md
```

## Manual Clicker Interpretation

Paperclips analogy: Phase 1 = Manual clicker. You ARE the automation.
Every restart, every SSH, every check is a manual click.

## Manual Clicks Still Required

- Restart agents and services by hand when a node goes dark.
- SSH into machines to verify health, disk, and memory.
- Check Gitea, relay, and world services manually before and after changes.
- Act as the scheduler when automation is missing or only partially wired.

## Repo Signals Already Present

- `scripts/fleet_health_probe.sh` — Automated health probe exists and can supply the uptime baseline for the next phase.
- `scripts/fleet_milestones.py` — Milestone tracker exists, so survival achievements can be narrated and logged.
- `scripts/auto_restart_agent.sh` — Auto-restart tooling already exists as phase-2 groundwork.
- `scripts/backup_pipeline.sh` — Backup pipeline scaffold exists for post-survival automation work.
- `infrastructure/timmy-bridge/reports/generate_report.py` — Bridge reporting exists and can summarize heartbeat-driven uptime.

The goal of Phase 1 is not to automate. It's to **name what needs automating**. Every manual click documented here is a Phase 2 ticket.

## Notes

- The fleet is alive, but the human is still the control loop.
- Phase 1 is about naming reality plainly so later automation has a baseline to beat.
- Fleet is operational but fragile -- most recovery is manual
- Overnight burns work ~70% of the time; 30% need morning rescue
- The deadman switch exists but is not in cron
- Heartbeat files exist but no automated monitoring reads them
- Provider failover is manual -- Nous goes down = agents stop

@@ -14,6 +14,7 @@ Quick-reference index for common operational tasks across the Timmy Foundation i
| Agent scorecard | fleet-ops | `python3 scripts/agent_scorecard.py` |
| View fleet manifest | fleet-ops | `cat manifest.yaml` |
| Run nightly codebase genome pass | timmy-home | `python3 scripts/codebase_genome_nightly.py --dry-run` |
| Prepare Bezalel Tailscale bootstrap | timmy-home | `python3 scripts/bezalel_tailscale_bootstrap.py --auth-key-file <path> --ssh-public-key-file <path> --json` |

## the-nexus (Frontend + Brain)
@@ -10,7 +10,6 @@ BACKUP_LOG_DIR="${BACKUP_LOG_DIR:-${BACKUP_ROOT}/logs}"
BACKUP_RETENTION_DAYS="${BACKUP_RETENTION_DAYS:-14}"
BACKUP_S3_URI="${BACKUP_S3_URI:-}"
BACKUP_NAS_TARGET="${BACKUP_NAS_TARGET:-}"
OFFSITE_TARGET="${OFFSITE_TARGET:-}"
AWS_ENDPOINT_URL="${AWS_ENDPOINT_URL:-}"
BACKUP_NAME="hermes-backup-${DATESTAMP}"
LOCAL_BACKUP_DIR="${BACKUP_ROOT}/${DATESTAMP}"
@@ -32,16 +31,6 @@ fail() {
    exit 1
}

send_telegram() {
    local message="$1"
    if [[ -n "${TELEGRAM_BOT_TOKEN:-}" && -n "${TELEGRAM_CHAT_ID:-}" ]]; then
        curl -s -X POST "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/sendMessage" \
            -d "chat_id=${TELEGRAM_CHAT_ID}" \
            -d "text=${message}" \
            -d "parse_mode=HTML" > /dev/null || true
    fi
}

cleanup() {
    rm -f "$PLAINTEXT_ARCHIVE"
    rm -rf "$STAGE_DIR"
@@ -129,17 +118,6 @@ upload_to_nas() {
    log "Uploaded backup to NAS target: $target_dir"
}

upload_to_offsite() {
    local archive_path="$1"
    local manifest_path="$2"
    local target_root="$3"

    local target_dir="${target_root%/}/${DATESTAMP}"
    mkdir -p "$target_dir"
    rsync -az --delete "$archive_path" "$manifest_path" "$target_dir/"
    log "Uploaded backup to offsite target: $target_dir"
}

upload_to_s3() {
    local archive_path="$1"
    local manifest_path="$2"
@@ -183,16 +161,10 @@ if [[ -n "$BACKUP_NAS_TARGET" ]]; then
    upload_to_nas "$ENCRYPTED_ARCHIVE" "$MANIFEST_PATH" "$BACKUP_NAS_TARGET"
fi

if [[ -n "$OFFSITE_TARGET" ]]; then
    upload_to_offsite "$ENCRYPTED_ARCHIVE" "$MANIFEST_PATH" "$OFFSITE_TARGET"
fi

if [[ -n "$BACKUP_S3_URI" ]]; then
    upload_to_s3 "$ENCRYPTED_ARCHIVE" "$MANIFEST_PATH"
fi

find "$BACKUP_ROOT" -mindepth 1 -maxdepth 1 -type d -name '20*' -mtime "+${BACKUP_RETENTION_DAYS}" -exec rm -rf {} + 2>/dev/null || true
find "$BACKUP_ROOT" -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} + 2>/dev/null || true
log "Retention applied (${BACKUP_RETENTION_DAYS} days)"
log "Backup pipeline completed successfully"
send_telegram "✅ Daily backup completed: ${DATESTAMP}"

@@ -16,11 +16,14 @@ import argparse
import json
import shlex
import subprocess
import re
from json import JSONDecoder
from pathlib import Path
from typing import Any

DEFAULT_HOST = "159.203.146.185"
DEFAULT_HOST = "67.205.155.108"
DEFAULT_HOSTNAME = "bezalel"
DEFAULT_INVENTORY_PATH = Path(__file__).resolve().parents[1] / "ansible" / "inventory" / "hosts.ini"
DEFAULT_PEERS = {
    "mac": "100.124.176.28",
    "ezra": "100.126.61.75",
@@ -66,6 +69,37 @@ def parse_tailscale_status(payload: dict[str, Any]) -> dict[str, Any]:
    }


def resolve_host(host: str | None, inventory_path: Path = DEFAULT_INVENTORY_PATH, hostname: str = DEFAULT_HOSTNAME) -> tuple[str, str]:
    if host:
        return host, "explicit"
    if inventory_path.exists():
        pattern = re.compile(rf"^{re.escape(hostname)}\s+.*ansible_host=([^\s]+)")
        for line in inventory_path.read_text().splitlines():
            match = pattern.search(line.strip())
            if match:
                return match.group(1), f"inventory:{inventory_path}"
    return DEFAULT_HOST, "default"


def parse_apply_output(stdout: str) -> dict[str, Any]:
    result: dict[str, Any] = {"tailscale": None, "ping_ok": {}}
    text = stdout or ""
    start = text.find("{")
    if start != -1:
        try:
            payload, _ = JSONDecoder().raw_decode(text[start:])
            if isinstance(payload, dict):
                result["tailscale"] = parse_tailscale_status(payload)
        except Exception:
            pass

    for line in text.splitlines():
        if line.startswith("PING_OK:"):
            _, name, ip = line.split(":", 2)
            result["ping_ok"][name] = ip
    return result


def build_ssh_command(host: str, remote_script_path: str = "/tmp/bezalel_tailscale_bootstrap.sh") -> list[str]:
    return ["ssh", host, f"bash {shlex.quote(remote_script_path)}"]

@@ -89,8 +123,9 @@ def parse_peer_args(items: list[str]) -> dict[str, str]:

def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Prepare or execute Tailscale bootstrap for the Bezalel VPS.")
    parser.add_argument("--host", default=DEFAULT_HOST)
    parser.add_argument("--host")
    parser.add_argument("--hostname", default=DEFAULT_HOSTNAME)
    parser.add_argument("--inventory-path", type=Path, default=DEFAULT_INVENTORY_PATH)
    parser.add_argument("--auth-key", help="Tailscale auth key")
    parser.add_argument("--auth-key-file", type=Path, help="Path to file containing the Tailscale auth key")
    parser.add_argument("--ssh-public-key", help="SSH public key to append to authorized_keys")
@@ -116,6 +151,7 @@ def main() -> None:
    auth_key = _read_secret(args.auth_key, args.auth_key_file)
    ssh_public_key = _read_secret(args.ssh_public_key, args.ssh_public_key_file)
    peers = parse_peer_args(args.peer)
    resolved_host, host_source = resolve_host(args.host, args.inventory_path, args.hostname)

    if not auth_key:
        raise SystemExit("Missing Tailscale auth key. Use --auth-key or --auth-key-file.")
@@ -126,28 +162,31 @@ def main() -> None:
    write_script(args.script_out, script)

    payload: dict[str, Any] = {
        "host": args.host,
        "host": resolved_host,
        "host_source": host_source,
        "hostname": args.hostname,
        "inventory_path": str(args.inventory_path),
        "script_out": str(args.script_out),
        "remote_script_path": args.remote_script_path,
        "ssh_command": build_ssh_command(args.host, args.remote_script_path),
        "ssh_command": build_ssh_command(resolved_host, args.remote_script_path),
        "peer_targets": peers,
        "applied": False,
    }

    if args.apply:
        result = run_remote(args.host, args.remote_script_path)
        result = run_remote(resolved_host, args.remote_script_path)
        payload["applied"] = True
        payload["exit_code"] = result.returncode
        payload["stdout"] = result.stdout
        payload["stderr"] = result.stderr
        payload["verification"] = parse_apply_output(result.stdout)

    if args.json:
        print(json.dumps(payload, indent=2))
        return

    print("--- Bezalel Tailscale Bootstrap ---")
    print(f"Host: {args.host}")
    print(f"Host: {resolved_host} ({host_source})")
    print(f"Local script: {args.script_out}")
    print("SSH command: " + " ".join(payload["ssh_command"]))
    if args.apply:

@@ -2,9 +2,12 @@ from scripts.bezalel_tailscale_bootstrap import (
    DEFAULT_PEERS,
    build_remote_script,
    build_ssh_command,
    parse_apply_output,
    parse_peer_args,
    parse_tailscale_status,
    resolve_host,
)
from pathlib import Path


def test_build_remote_script_contains_install_up_and_key_append():
@@ -78,3 +81,46 @@ def test_parse_peer_args_merges_overrides_into_defaults():
        "ezra": "100.126.61.76",
        "forge": "100.70.0.9",
    }


def test_resolve_host_prefers_inventory_over_stale_default(tmp_path: Path):
    inventory = tmp_path / "hosts.ini"
    inventory.write_text(
        "[fleet]\n"
        "ezra ansible_host=143.198.27.163 ansible_user=root\n"
        "bezalel ansible_host=67.205.155.108 ansible_user=root\n"
    )

    host, source = resolve_host(None, inventory)

    assert host == "67.205.155.108"
    assert source == f"inventory:{inventory}"


def test_parse_apply_output_extracts_status_and_ping_markers():
    stdout = (
        '{"Self": {"HostName": "bezalel", "DNSName": "bezalel.tailnet.ts.net", "TailscaleIPs": ["100.90.0.10"]}, '
        '"Peer": {"node-1": {"HostName": "ezra", "TailscaleIPs": ["100.126.61.75"]}}}'
        "\nPING_OK:mac:100.124.176.28\n"
        "PING_OK:ezra:100.126.61.75\n"
    )

    result = parse_apply_output(stdout)

    assert result["tailscale"]["self"]["tailscale_ips"] == ["100.90.0.10"]
    assert result["ping_ok"] == {"mac": "100.124.176.28", "ezra": "100.126.61.75"}


def test_runbook_doc_exists_and_mentions_inventory_auth_and_peer_checks():
    doc = Path("docs/BEZALEL_TAILSCALE_BOOTSTRAP.md")
    assert doc.exists(), "missing docs/BEZALEL_TAILSCALE_BOOTSTRAP.md"
    text = doc.read_text()
    assert "ansible/inventory/hosts.ini" in text
    assert "tailscale up" in text
    assert "authorized_keys" in text
    assert "100.124.176.28" in text
    assert "100.126.61.75" in text

    runbook = Path("docs/RUNBOOK_INDEX.md").read_text()
    assert "Prepare Bezalel Tailscale bootstrap" in runbook
    assert "scripts/bezalel_tailscale_bootstrap.py" in runbook