Compare commits
1 commit
timmy/issu
...
docs/autom

| Author | SHA1 | Date |
|---|---|---|
|  | ffea2964c4 |  |
@@ -1,23 +1,27 @@
# DEPRECATED — Bash Loop Scripts Removed
# DEPRECATED — policy, not proof of runtime absence

**Date:** 2026-03-25
**Reason:** Replaced by Hermes + timmy-config sidecar orchestration
Original deprecation date: 2026-03-25

## What was removed

- claude-loop.sh, gemini-loop.sh, agent-loop.sh
- timmy-orchestrator.sh, workforce-manager.py
- nexus-merge-bot.sh, claudemax-watchdog.sh, timmy-loopstat.sh

This file records the policy direction: long-running ad hoc bash loops were meant
to be replaced by Hermes-side orchestration.

## What replaces them

**Harness:** Hermes
**Overlay repo:** Timmy_Foundation/timmy-config
**Entry points:** `orchestration.py`, `tasks.py`, `deploy.sh`
**Features:** Huey + SQLite scheduling, local-model health checks, session export, DPO artifact staging

But policy and world state diverged.
Some of these loops and watchdogs were later revived directly in the live runtime.

## Why

The bash loops crash-looped, produced zero work after relaunch, had no crash
recovery, no durable export path, and required too many ad hoc scripts. The
Hermes sidecar keeps orchestration close to Timmy's actual config and training
surfaces.

Do NOT use this file as proof that something is gone.
Use `docs/automation-inventory.md` as the current world-state document.

Do NOT recreate bash loops. If orchestration is broken, fix the Hermes sidecar.

## Deprecated by policy

- old dashboard-era loop stacks
- old tmux resurrection paths
- old startup paths that recreate `timmy-loop`
- stale repo-specific automation tied to `Timmy-time-dashboard` or `the-matrix`

## Current rule

If an automation question matters, audit:

1. launchd loaded jobs
2. live process table
3. Hermes cron list
4. the automation inventory doc

Only then decide what is actually live.
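Those four steps can be sketched as a single shell pass; the `hermes cron list` subcommand name is an assumption about the Hermes CLI, so it is left commented out:

```shell
# 1. launchd loaded jobs (macOS user domain)
launchctl list 2>/dev/null | grep -Ei 'hermes|timmy|openclaw' || echo "no matching launchd jobs"

# 2. live process table
ps aux | grep -E '[l]oop\.sh|[w]atchdog|[g]ateway' || echo "no matching processes"

# 3. Hermes cron list (exact subcommand assumed; adjust to your CLI)
# hermes cron list

# 4. the automation inventory doc, read last
[ -f ~/.timmy/timmy-config/docs/automation-inventory.md ] \
  && echo "inventory doc present" || echo "inventory doc missing"
```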
12
README.md
@@ -14,8 +14,8 @@ timmy-config/
```text
├── DEPRECATED.md               ← What was removed and why
├── config.yaml                 ← Hermes harness configuration
├── channel_directory.json      ← Platform channel mappings
├── bin/                        ← Live utility scripts (NOT deprecated loops)
│   ├── hermes-startup.sh       ← Hermes boot sequence
├── bin/                        ← Sidecar-managed operational scripts
│   ├── hermes-startup.sh       ← Dormant startup path (audit before enabling)
│   ├── agent-dispatch.sh       ← Manual agent dispatch
│   ├── ops-panel.sh            ← Ops dashboard panel
│   ├── ops-gitea.sh            ← Gitea ops helpers
```

@@ -25,6 +25,7 @@ timmy-config/
```text
├── skins/                      ← UI skins (timmy skin)
├── playbooks/                  ← Agent playbooks (YAML)
├── cron/                       ← Cron job definitions
├── docs/automation-inventory.md ← Live automation + stale-state inventory
└── training/                   ← Transitional training recipes, not canonical lived data
```

@@ -40,9 +41,10 @@ If a file answers "who is Timmy?" or "how does Hermes host him?", it belongs
here. If it answers "what has Timmy done or learned?" it belongs in
`timmy-home`.

The scripts in `bin/` are live operational helpers for the Hermes sidecar.
What is dead are the old long-running bash worker loops, not every script in
this repo.
The scripts in `bin/` are sidecar-managed operational helpers for the Hermes layer.
Do NOT assume older prose about removed loops is still true at runtime.
Audit the live machine first, then read `docs/automation-inventory.md` for the
current reality and stale-state risks.

## Orchestration: Huey
@@ -1,21 +0,0 @@
```text
# Gitea
GITEA_URL=http://143.198.27.163:3000
# Prefer setting GITEA_TOKEN directly in deployment. If omitted, GITEA_TOKEN_FILE is used.
GITEA_TOKEN_FILE=~/.config/gitea/timmy-token

# Nostr relay
RELAY_URL=wss://alexanderwhitestone.com/relay/

# Bridge identity
BRIDGE_IDENTITY=allegro
KEYSTORE_PATH=~/.timmy/nostr/agent_keys.json
# Optional: set BRIDGE_NSEC directly instead of using KEYSTORE_PATH + BRIDGE_IDENTITY
# Useful when the deployment keystore does not contain the default identity name.
# BRIDGE_NSEC=

# Gitea routing
DEFAULT_REPO=Timmy_Foundation/timmy-config
STATUS_ASSIGNEE=allegro

# Comma-separated list of allowed operator npubs
AUTHORIZED_NPUBS=npub1t8exnw6sp7vtxar8q5teyr0ueq0rvtgqpq5jkzylegupqulxfqwq4j66p5
```
3
bridge/nostr-dm-bridge/.gitignore
vendored
@@ -1,3 +0,0 @@
```text
__pycache__/
*.pyc
.env
```
@@ -1,97 +0,0 @@
# Nostr DM → Gitea Bridge

Imported into repo truth from the live Allegro VPS bridge and sanitized for reproducible deployment.

This bridge lets an authorized Nostr operator send encrypted DMs from Nostur that create or update Gitea issues. Gitea remains the system of record. Nostr is operator ingress only.

## What it does

- `!status` returns the configured assignee queue from Gitea
- `!issue "Title" "Body"` creates a new Gitea issue
- `!comment #123 "Text"` comments on an existing issue
- freeform text creates an issue in the configured default repo
- every mutation replies with the canonical Gitea URL
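A minimal, self-contained sketch of the quoted-argument scan behind `!issue`, mirroring the loop inside the bridge's `parse_command`; the helper name `extract_quoted` is illustrative and not part of the bridge:

```python
def extract_quoted(parts: str) -> list[str]:
    """Collect the text between each pair of double quotes, in order."""
    quotes: list[str] = []
    current, in_quote = "", False
    for c in parts:
        if c == '"':
            if in_quote:  # closing quote: capture the finished segment
                quotes.append(current)
                current = ""
            in_quote = not in_quote
        elif in_quote:
            current += c
    return quotes

# '!issue "Fix relay" "Reconnect logic"' yields a title and a body
print(extract_quoted('"Fix relay" "Reconnect logic"'))  # ['Fix relay', 'Reconnect logic']
```

With two quoted segments the bridge uses them as title and body; with exactly one, the body falls back to a generated "Created via Nostr DM bridge" note.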

## Repo truth vs live origin

The original running bridge on Allegro proved the concept, but it contained machine-local assumptions:

- root-only token path
- root-only keystore path
- hardcoded bridge identity
- hardcoded assignee and repo
- VPS-specific systemd paths

This repo copy removes those assumptions and makes deployment explicit through environment variables.

## Configuration

Copy `.env.example` to `.env` and set the values for your host.

Required at runtime:

- `GITEA_TOKEN` or `GITEA_TOKEN_FILE`
- `BRIDGE_NSEC` or `KEYSTORE_PATH` + `BRIDGE_IDENTITY`

Common settings:

- `GITEA_URL` default: `http://143.198.27.163:3000`
- `RELAY_URL` default: `wss://alexanderwhitestone.com/relay/`
- `DEFAULT_REPO` default: `Timmy_Foundation/timmy-config`
- `AUTHORIZED_NPUBS` default: Alexander's operator npub
- `STATUS_ASSIGNEE` default: same as `BRIDGE_IDENTITY`

## Files

- `bridge_allegro.py` — bridge daemon
- `test_bridge.py` — component validation script
- `nostr-dm-bridge.service` — example systemd unit
- `.env.example` — deployment template

## Manual run

```bash
cd /opt/timmy/nostr-dm-bridge
cp .env.example .env
# edit .env
python3 bridge_allegro.py
```

## Validation

```bash
cd /opt/timmy/nostr-dm-bridge
python3 test_bridge.py
```

If the configured `BRIDGE_IDENTITY` is not present in the local keystore, the test script generates an ephemeral bridge key so parser/encryption validation still works without production secrets.

## Systemd

```bash
sudo cp nostr-dm-bridge.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now nostr-dm-bridge
sudo systemctl status nostr-dm-bridge
```

The unit expects the repo to live at `/opt/timmy/nostr-dm-bridge` and reads optional runtime config from `/opt/timmy/nostr-dm-bridge/.env`.

## Security model

1. Only configured `AUTHORIZED_NPUBS` can trigger mutations.
2. All durable work objects live in Gitea.
3. Nostr only carries commands and acknowledgments.
4. Every successful action replies with the canonical Gitea link.
5. Bridge identity is explicit and re-keyable without code edits.

## Operator flow

```text
Nostur DM (encrypted kind 4)
  -> relay subscription
  -> bridge decrypts and validates sender
  -> bridge parses command
  -> bridge calls Gitea API
  -> bridge replies with result + canonical URL
```
@@ -1,317 +0,0 @@
```python
#!/usr/bin/env python3
"""
Nostr DM → Gitea Bridge MVP for Issue #181

Imported from the live Allegro VPS bridge and sanitized for repo truth.
Uses a configurable bridge identity (defaults to Allegro) and explicit env/config
rather than hardcoded machine-local paths.
"""

import json
import os
import sys
import time
import urllib.request
import urllib.error
from pathlib import Path

# Nostr imports
from nostr.event import Event
from nostr.key import PrivateKey, PublicKey
from nostr.relay_manager import RelayManager

# === CONFIGURATION ===
GITEA_URL = os.environ.get("GITEA_URL", "http://143.198.27.163:3000").rstrip("/")
GITEA_TOKEN = os.environ.get("GITEA_TOKEN", "").strip()
GITEA_TOKEN_FILE = Path(os.environ.get("GITEA_TOKEN_FILE", "~/.config/gitea/timmy-token")).expanduser()
RELAY_URL = os.environ.get("RELAY_URL", "wss://alexanderwhitestone.com/relay/")
KEYSTORE_PATH = Path(os.environ.get("KEYSTORE_PATH", "~/.timmy/nostr/agent_keys.json")).expanduser()
BRIDGE_IDENTITY = os.environ.get("BRIDGE_IDENTITY", "allegro")
DEFAULT_REPO = os.environ.get("DEFAULT_REPO", "Timmy_Foundation/timmy-config")
AUTHORIZED_NPUBS = [x.strip() for x in os.environ.get("AUTHORIZED_NPUBS", "npub1t8exnw6sp7vtxar8q5teyr0ueq0rvtgqpq5jkzylegupqulxfqwq4j66p5").split(",") if x.strip()]
STATUS_ASSIGNEE = os.environ.get("STATUS_ASSIGNEE", BRIDGE_IDENTITY)

if not GITEA_TOKEN and GITEA_TOKEN_FILE.exists():
    GITEA_TOKEN = GITEA_TOKEN_FILE.read_text().strip()
if not GITEA_TOKEN:
    raise RuntimeError(f"Missing Gitea token. Set GITEA_TOKEN or provide {GITEA_TOKEN_FILE}")

BRIDGE_NSEC = os.environ.get("BRIDGE_NSEC", "").strip()
if not BRIDGE_NSEC:
    with open(KEYSTORE_PATH) as f:
        ks = json.load(f)
    if BRIDGE_IDENTITY not in ks:
        raise RuntimeError(f"Bridge identity '{BRIDGE_IDENTITY}' not found in {KEYSTORE_PATH}")
    BRIDGE_NSEC = ks[BRIDGE_IDENTITY]["nsec"]

bridge_key = PrivateKey.from_nsec(BRIDGE_NSEC)
BRIDGE_NPUB = bridge_key.public_key.bech32()
BRIDGE_HEX = bridge_key.public_key.hex()
AUTHORIZED_HEX = {PublicKey.from_npub(npub).hex(): npub for npub in AUTHORIZED_NPUBS}

print(f"[Bridge] Identity: {BRIDGE_IDENTITY} {BRIDGE_NPUB}")
print(f"[Bridge] Authorized operators: {', '.join(AUTHORIZED_NPUBS)}")

# === GITEA API HELPERS ===

def gitea_get(path):
    headers = {"Authorization": f"token {GITEA_TOKEN}"}
    req = urllib.request.Request(f"{GITEA_URL}/api/v1{path}", headers=headers)
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return json.loads(resp.read().decode())
    except urllib.error.HTTPError as e:
        return {"error": str(e)}

def gitea_post(path, data):
    headers = {"Authorization": f"token {GITEA_TOKEN}", "Content-Type": "application/json"}
    body = json.dumps(data).encode()
    req = urllib.request.Request(f"{GITEA_URL}/api/v1{path}", data=body, headers=headers, method="POST")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return json.loads(resp.read().decode())
    except urllib.error.HTTPError as e:
        return {"error": str(e), "code": e.code}

# === COMMAND PARSERS ===

def parse_command(text: str) -> dict:
    """Parse DM text for commands."""
    text = text.strip()

    # !issue "Title" "Body" - create new issue
    if text.startswith("!issue"):
        parts = text[6:].strip()
        if '"' in parts:
            try:
                quotes = []
                in_quote = False
                current = ""
                for c in parts:
                    if c == '"':
                        if in_quote:
                            quotes.append(current)
                            current = ""
                        in_quote = not in_quote
                    elif in_quote:
                        current += c
                if len(quotes) >= 2:
                    return {
                        "action": "create_issue",
                        "repo": DEFAULT_REPO,
                        "title": quotes[0],
                        "body": quotes[1]
                    }
                elif len(quotes) == 1:
                    return {
                        "action": "create_issue",
                        "repo": DEFAULT_REPO,
                        "title": quotes[0],
                        "body": f"Created via Nostr DM bridge ({BRIDGE_IDENTITY} operator)"
                    }
            except Exception:
                pass
        return {
            "action": "create_issue",
            "repo": DEFAULT_REPO,
            "title": parts or "Issue from Nostr",
            "body": f"Created via Nostr DM bridge ({BRIDGE_IDENTITY} operator)"
        }

    # !comment #123 "Text" - append to existing issue
    if text.startswith("!comment"):
        parts = text[8:].strip()
        if parts.startswith("#"):
            try:
                num_end = 1
                while num_end < len(parts) and parts[num_end].isdigit():
                    num_end += 1
                issue_num = int(parts[1:num_end])
                rest = parts[num_end:].strip()
                if '"' in rest:
                    body = rest.split('"')[1]
                else:
                    body = rest
                return {
                    "action": "add_comment",
                    "repo": DEFAULT_REPO,
                    "issue": issue_num,
                    "body": body
                }
            except Exception:
                pass

    # !status - get queue summary
    if text.startswith("!status"):
        return {"action": "get_status"}

    # Default: treat as freeform issue creation
    if text and not text.startswith("!"):
        return {
            "action": "create_issue",
            "repo": DEFAULT_REPO,
            "title": text[:80] + ("..." if len(text) > 80 else ""),
            "body": f"Operator message via Nostr DM:\n\n{text}\n\n---\n*Via Nostur → Gitea bridge ({BRIDGE_IDENTITY})*"
        }

    return None

# === ACTION HANDLERS ===

def handle_create_issue(cmd: dict) -> str:
    result = gitea_post(f"/repos/{cmd['repo']}/issues", {
        "title": cmd["title"],
        "body": cmd["body"]
    })
    if "error" in result:
        return f"❌ Failed to create issue: {result.get('error')}"
    url = f"{GITEA_URL}/{cmd['repo']}/issues/{result['number']}"
    return f"✅ Created issue #{result['number']}: {result['title']}\n🔗 {url}"

def handle_add_comment(cmd: dict) -> str:
    result = gitea_post(f"/repos/{cmd['repo']}/issues/{cmd['issue']}/comments", {
        "body": cmd["body"] + f"\n\n---\n*Via Nostur → Gitea bridge ({BRIDGE_IDENTITY})*"
    })
    if "error" in result:
        return f"❌ Failed to comment on #{cmd['issue']}: {result.get('error')}"
    return f"✅ Commented on issue #{cmd['issue']}\n🔗 {GITEA_URL}/{cmd['repo']}/issues/{cmd['issue']}"

def handle_get_status() -> str:
    try:
        issues = gitea_get(f"/repos/{DEFAULT_REPO}/issues?state=open&assignee={STATUS_ASSIGNEE}")
        if isinstance(issues, dict) and "error" in issues:
            return f"⚠️ Status fetch failed: {issues['error']}"

        lines = [f"📊 Current {STATUS_ASSIGNEE} Queue:", ""]
        for i in issues[:5]:
            lines.append(f"#{i['number']}: {i['title'][:50]}")
        if len(issues) > 5:
            lines.append(f"... and {len(issues) - 5} more")
        lines.append("")
        lines.append(f"🔗 {GITEA_URL}/{DEFAULT_REPO}/issues?q=assignee%3A{STATUS_ASSIGNEE}")
        return "\n".join(lines)
    except Exception as e:
        return f"⚠️ Status error: {e}"

def execute_command(cmd: dict) -> str:
    action = cmd.get("action")
    if action == "create_issue":
        return handle_create_issue(cmd)
    elif action == "add_comment":
        return handle_add_comment(cmd)
    elif action == "get_status":
        return handle_get_status()
    return "❓ Unknown command"

# === NOSTR EVENT HANDLING ===

def decrypt_dm(event: Event) -> str:
    """Decrypt DM content using the bridge identity's private key."""
    try:
        content = bridge_key.decrypt_message(event.content, event.public_key)
        return content
    except Exception as e:
        print(f"[Decrypt Error] {e}")
        return None

def send_dm(recipient_hex: str, message: str):
    """Send encrypted DM to recipient."""
    try:
        encrypted = bridge_key.encrypt_message(message, recipient_hex)
        dm_event = Event(
            kind=4,
            content=encrypted,
            tags=[["p", recipient_hex]],
            public_key=BRIDGE_HEX
        )
        bridge_key.sign_event(dm_event)

        relay_manager = RelayManager()
        relay_manager.add_relay(RELAY_URL)
        relay_manager.open_connections()
        time.sleep(1)

        relay_manager.publish_event(dm_event)
        time.sleep(1)
        relay_manager.close_connections()

        print(f"[Out] DM sent to {recipient_hex[:16]}...")
        return True
    except Exception as e:
        print(f"[Send Error] {e}")
        return False

# === MAIN LOOP ===

def process_event(event: Event):
    """Process an incoming Nostr event."""
    if event.kind != 4:
        return

    p_tags = [t[1] for t in event.tags if t[0] == "p"]
    if BRIDGE_HEX not in p_tags:
        return

    sender = event.public_key
    if sender not in AUTHORIZED_HEX:
        print(f"[Reject] DM from unauthorized key: {sender[:16]}...")
        return

    plaintext = decrypt_dm(event)
    if not plaintext:
        print("[Error] Failed to decrypt DM")
        return

    print(f"[In] DM from authorized operator: {plaintext[:60]}...")

    cmd = parse_command(plaintext)
    if not cmd:
        send_dm(sender, "❓ Commands:\n!status\n!issue \"Title\" \"Body\"\n!comment #123 \"Text\"\nOr send freeform text to create issue")
        return

    print(f"[Exec] {cmd['action']}")
    response = execute_command(cmd)

    send_dm(sender, response)
    print(f"[Out] Response: {response[:60]}...")

def run_bridge():
    print("=" * 60)
    print(f"Nostr DM → Gitea Bridge MVP ({BRIDGE_IDENTITY} identity)")
    print("=" * 60)
    print(f"Relay: {RELAY_URL}")
    print(f"Listening for DMs to: {BRIDGE_NPUB}")
    print(f"Authorized operators: {', '.join(AUTHORIZED_NPUBS)}")
    print("-" * 60)

    relay_manager = RelayManager()
    relay_manager.add_relay(RELAY_URL)

    filter_json = {
        "kinds": [4],
        "#p": [BRIDGE_HEX],
        "since": int(time.time())
    }

    relay_manager.add_subscription("dm_listener", filter_json)
    relay_manager.open_connections()

    print("[Bridge] Listening for operator DMs... (Ctrl+C to exit)")
    print(f"[Bridge] npub for Nostur contact: {BRIDGE_NPUB}")

    try:
        print("[Bridge] Event loop started. Waiting for DMs...")
        while True:
            # Poll for events without run_sync (API compatibility)
            while relay_manager.message_pool.has_events():
                event_msg = relay_manager.message_pool.get_event()
                if event_msg:
                    process_event(event_msg.event)
            time.sleep(2)
    except KeyboardInterrupt:
        print("\n[Bridge] Shutting down...")
    finally:
        relay_manager.close_connections()

if __name__ == "__main__":
    run_bridge()
```
@@ -1,19 +0,0 @@
```text
[Unit]
Description=Nostr DM to Gitea Bridge
Documentation=https://gitea.com/Timmy_Foundation/timmy-config/issues/186
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=root
WorkingDirectory=/opt/timmy/nostr-dm-bridge
EnvironmentFile=-/opt/timmy/nostr-dm-bridge/.env
Environment="HOME=/root"
Environment="PYTHONUNBUFFERED=1"
ExecStart=/usr/bin/python3 /opt/timmy/nostr-dm-bridge/bridge_allegro.py
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```
@@ -1,133 +0,0 @@
```python
#!/usr/bin/env python3
"""Validate the Nostr DM bridge configuration and core behaviors."""

import json
import os
import sys
import urllib.request
from pathlib import Path

GITEA_URL = os.environ.get("GITEA_URL", "http://143.198.27.163:3000").rstrip("/")
GITEA_TOKEN = os.environ.get("GITEA_TOKEN", "").strip()
GITEA_TOKEN_FILE = Path(os.environ.get("GITEA_TOKEN_FILE", "~/.config/gitea/timmy-token")).expanduser()
KEYSTORE_PATH = Path(os.environ.get("KEYSTORE_PATH", "~/.timmy/nostr/agent_keys.json")).expanduser()
BRIDGE_IDENTITY = os.environ.get("BRIDGE_IDENTITY", "allegro")
BRIDGE_NSEC = os.environ.get("BRIDGE_NSEC", "").strip()
DEFAULT_REPO = os.environ.get("DEFAULT_REPO", "Timmy_Foundation/timmy-config")
AUTHORIZED_NPUBS = [x.strip() for x in os.environ.get("AUTHORIZED_NPUBS", "npub1t8exnw6sp7vtxar8q5teyr0ueq0rvtgqpq5jkzylegupqulxfqwq4j66p5").split(",") if x.strip()]

print("=" * 60)
print("Nostr DM Bridge Component Test")
print("=" * 60)

if not GITEA_TOKEN and GITEA_TOKEN_FILE.exists():
    GITEA_TOKEN = GITEA_TOKEN_FILE.read_text().strip()

if not GITEA_TOKEN:
    print(f"✗ Missing Gitea token. Set GITEA_TOKEN or create {GITEA_TOKEN_FILE}")
    sys.exit(1)
print("✓ Gitea token loaded")

try:
    from nostr.key import PrivateKey, PublicKey
    print("✓ nostr library imported")
except ImportError as e:
    print(f"✗ Failed to import nostr: {e}")
    sys.exit(1)

if not BRIDGE_NSEC:
    try:
        with open(KEYSTORE_PATH) as f:
            keystore = json.load(f)
        BRIDGE_NSEC = keystore[BRIDGE_IDENTITY]["nsec"]
        print(f"✓ {BRIDGE_IDENTITY} nsec loaded from keystore")
    except Exception as e:
        bridge_key = PrivateKey()
        BRIDGE_NSEC = bridge_key.bech32()
        print(f"! Bridge identity {BRIDGE_IDENTITY!r} not available in {KEYSTORE_PATH}: {e}")
        print("✓ Generated ephemeral bridge key for local validation")
else:
    print("✓ Bridge nsec loaded from BRIDGE_NSEC")

try:
    bridge_key = PrivateKey.from_nsec(BRIDGE_NSEC)
    bridge_npub = bridge_key.public_key.bech32()
    print(f"✓ Bridge npub: {bridge_npub}")
except Exception as e:
    print(f"✗ Key derivation failed: {e}")
    sys.exit(1)

try:
    authorized_hex = [PublicKey.from_npub(npub).hex() for npub in AUTHORIZED_NPUBS]
    print(f"✓ Authorized operators parsed: {len(authorized_hex)}")
except Exception as e:
    print(f"✗ Failed to parse AUTHORIZED_NPUBS: {e}")
    sys.exit(1)

try:
    headers = {"Authorization": f"token {GITEA_TOKEN}"}
    req = urllib.request.Request(f"{GITEA_URL}/api/v1/user", headers=headers)
    with urllib.request.urlopen(req, timeout=5) as resp:
        user = json.loads(resp.read().decode())
    print(f"✓ Gitea API connected as: {user.get('login')}")
except Exception as e:
    print(f"✗ Gitea API failed: {e}")
    sys.exit(1)

os.environ.setdefault("GITEA_TOKEN", GITEA_TOKEN)
os.environ.setdefault("BRIDGE_NSEC", BRIDGE_NSEC)
os.environ.setdefault("DEFAULT_REPO", DEFAULT_REPO)
os.environ.setdefault("AUTHORIZED_NPUBS", ",".join(AUTHORIZED_NPUBS))

print("\n" + "-" * 60)
print("Testing command parsers...")

try:
    from bridge_allegro import parse_command
except Exception as e:
    print(f"✗ Failed to import bridge_allegro: {e}")
    sys.exit(1)

cases = [
    ("!status", "get_status"),
    ('!issue "Test Title" "Test Body"', "create_issue"),
    ('!comment #123 "Hello"', "add_comment"),
    ("This is a freeform message", "create_issue"),
]

for text, expected_action in cases:
    cmd = parse_command(text)
    if not cmd or cmd.get("action") != expected_action:
        print(f"✗ Parser mismatch for {text!r}: {cmd}")
        sys.exit(1)
    if expected_action in {"create_issue", "add_comment"} and cmd.get("repo") != DEFAULT_REPO:
        print(f"✗ Parser repo mismatch for {text!r}: {cmd.get('repo')} != {DEFAULT_REPO}")
        sys.exit(1)
    print(f"✓ {text!r} -> {expected_action}")

print("✓ All parser tests passed")

print("\n" + "-" * 60)
print("Testing encryption round-trip...")
try:
    test_message = "Test DM content for round-trip validation"
    recipient_hex = authorized_hex[0]
    encrypted = bridge_key.encrypt_message(test_message, recipient_hex)
    decrypted = bridge_key.decrypt_message(encrypted, recipient_hex)
    if decrypted != test_message:
        print(f"✗ Decryption mismatch: {decrypted!r}")
        sys.exit(1)
    print("✓ Encryption round-trip successful")
except Exception as e:
    print(f"✗ Encryption test failed: {e}")
    sys.exit(1)

print("\n" + "=" * 60)
print("ALL TESTS PASSED")
print("=" * 60)
print("\nBridge is ready to run:")
print("  python3 bridge_allegro.py")
print("\nFor operator testing:")
print("  1. Open Nostur")
print(f"  2. Send DM to: {bridge_npub}")
print("  3. Try: !status")
```
358
docs/automation-inventory.md
Normal file
@@ -0,0 +1,358 @@
# Automation Inventory

Last audited: 2026-04-04 15:55 EDT
Owner: Timmy sidecar / Timmy home split
Purpose: document every known automation that can restart services, revive old worktrees, reuse stale session state, or re-enter old queue state.

## Why this file exists

The failure mode is not just "a process is running".
The failure mode is:
- launchd or a watchdog restarts something behind our backs
- the restarted process reads old config, old labels, old worktrees, old session mappings, or old tmux assumptions
- the machine appears haunted because old state comes back after we thought it was gone

This file is the source of truth for what automations exist, what state they read, and how to stop or reset them safely.

## Source-of-truth split

Not all automations live in one repo.

1. timmy-config
   Path: ~/.timmy/timmy-config
   Owns: sidecar deployment, ~/.hermes/config.yaml overlay, launch-facing helper scripts in timmy-config/bin/

2. timmy-home
   Path: ~/.timmy
   Owns: Kimi heartbeat script at uniwizard/kimi-heartbeat.sh and other workspace-native automation

3. live runtime
   Path: ~/.hermes/bin
   Reality: some scripts are still only present live in ~/.hermes/bin and are NOT yet mirrored into timmy-config/bin/

Rule:
- Do not assume ~/.hermes/bin is canonical.
- Do not assume timmy-config contains every currently running automation.
- Audit runtime first, then reconcile to source control.
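One way to see the mirror gap this rule warns about is a name-level directory diff between the live runtime and repo truth; the paths follow the split above, and the one-liner is a sketch, not an existing tool:

```shell
# Scripts present live in ~/.hermes/bin but not mirrored into
# timmy-config/bin, and vice versa (names only, no content diff).
diff <(ls ~/.hermes/bin 2>/dev/null | sort) \
     <(ls ~/.timmy/timmy-config/bin 2>/dev/null | sort) \
  | sed -n 's/^< /only live: /p; s/^> /only repo: /p'
```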

## Current live automations

### A. launchd-loaded automations

These are loaded right now according to `launchctl list`.

#### 1. ai.hermes.gateway
- Plist: ~/Library/LaunchAgents/ai.hermes.gateway.plist
- Command: `python -m hermes_cli.main gateway run --replace`
- HERMES_HOME: `~/.hermes`
- Logs:
  - `~/.hermes/logs/gateway.log`
  - `~/.hermes/logs/gateway.error.log`
- KeepAlive: yes
- RunAtLoad: yes
- State it reuses:
  - `~/.hermes/config.yaml`
  - `~/.hermes/channel_directory.json`
  - `~/.hermes/sessions/sessions.json`
  - `~/.hermes/state.db`
- Old-state risk:
  - if config drifted, this gateway will faithfully revive the drift
  - if Telegram/session mappings are stale, it will continue stale conversations

Stop:
```bash
launchctl bootout gui/$(id -u) ~/Library/LaunchAgents/ai.hermes.gateway.plist
```
Start:
```bash
launchctl bootstrap gui/$(id -u) ~/Library/LaunchAgents/ai.hermes.gateway.plist
```
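After a stop or start, `launchctl print` on the service target confirms what launchd actually thinks; its output format varies by macOS version, so treat this as a sketch:

```shell
# Confirm the gateway job's load state as launchd sees it (macOS).
label="ai.hermes.gateway"
if launchctl print "gui/$(id -u)/$label" >/dev/null 2>&1; then
  echo "$label: loaded"
else
  echo "$label: not loaded"
fi
```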

#### 2. ai.hermes.gateway-fenrir
- Plist: ~/Library/LaunchAgents/ai.hermes.gateway-fenrir.plist
- Command: same gateway binary
- HERMES_HOME: `~/.hermes/profiles/fenrir`
- Logs:
  - `~/.hermes/profiles/fenrir/logs/gateway.log`
  - `~/.hermes/profiles/fenrir/logs/gateway.error.log`
- KeepAlive: yes
- RunAtLoad: yes
- Old-state risk:
  - same class as main gateway, but isolated to fenrir profile state

#### 3. ai.openclaw.gateway
- Plist: ~/Library/LaunchAgents/ai.openclaw.gateway.plist
- Command: `node .../openclaw/dist/index.js gateway --port 18789`
- Logs:
  - `~/.openclaw/logs/gateway.log`
  - `~/.openclaw/logs/gateway.err.log`
- KeepAlive: yes
- RunAtLoad: yes
- Old-state risk:
  - long-lived gateway survives toolchain assumptions and keeps accepting work even if upstream routing changed

#### 4. ai.timmy.kimi-heartbeat
- Plist: ~/Library/LaunchAgents/ai.timmy.kimi-heartbeat.plist
- Command: `/bin/bash ~/.timmy/uniwizard/kimi-heartbeat.sh`
- Interval: every 300s
- Logs:
  - `/tmp/kimi-heartbeat-launchd.log`
  - `/tmp/kimi-heartbeat-launchd.err`
  - script log: `/tmp/kimi-heartbeat.log`
- State it reuses:
  - `/tmp/kimi-heartbeat.lock`
  - Gitea labels: `assigned-kimi`, `kimi-in-progress`, `kimi-done`
  - repo issue bodies/comments as task memory
- Current behavior as of this audit:
  - stale `kimi-in-progress` tasks are now reclaimed after 1 hour of silence
- Old-state risk:
  - labels ARE the queue state; if labels are stale, the heartbeat used to starve forever
  - the heartbeat is source-controlled in timmy-home, not timmy-config

Stop:
```bash
launchctl bootout gui/$(id -u) ~/Library/LaunchAgents/ai.timmy.kimi-heartbeat.plist
```

Clear lock only if process is truly dead:
```bash
rm -f /tmp/kimi-heartbeat.lock
```
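A guarded variant of that cleanup, which refuses to remove the lock while a heartbeat process is still visible; the `pgrep` pattern is an assumption about how the script appears in the process table:

```shell
# Remove the heartbeat lock only when no heartbeat process is alive.
lock=/tmp/kimi-heartbeat.lock
if pgrep -f 'kimi-heartbeat\.sh' >/dev/null 2>&1; then
  echo "heartbeat still running; leaving $lock in place"
else
  rm -f "$lock" && echo "removed $lock"
fi
```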

#### 5. ai.timmy.claudemax-watchdog
- Plist: ~/Library/LaunchAgents/ai.timmy.claudemax-watchdog.plist
- Command: `/bin/bash ~/.hermes/bin/claudemax-watchdog.sh`
- Interval: every 300s
- Logs:
  - `~/.hermes/logs/claudemax-watchdog.log`
  - launchd wrapper: `~/.hermes/logs/claudemax-launchd.log`
- State it reuses:
  - live process table via `pgrep`
  - recent Claude logs `~/.hermes/logs/claude-*.log`
  - backlog count from Gitea
- Current behavior as of this audit:
  - will NOT restart claude-loop if recent Claude logs say `You've hit your limit`
  - will log-and-skip missing helper scripts instead of failing loudly
- Old-state risk:
  - any watchdog can resurrect a loop you meant to leave dead
  - this is the first place to check when a loop "comes back"

#### 6. com.timmy.dashboard-backend
- Plist: ~/Library/LaunchAgents/com.timmy.dashboard-backend.plist
- Command: uvicorn `dashboard.app:app`
- Working directory: `~/worktrees/kimi-repo`
- Port: 8100
- Logs: `~/.hermes/logs/dashboard-backend.log`
- KeepAlive: yes
- RunAtLoad: yes
- Old-state risk:
  - this serves code from a specific worktree, not from current repo truth in the abstract
- if `~/worktrees/kimi-repo` is stale, launchd will faithfully keep serving stale code
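
One way to spot-check that staleness (a hedged sketch; the helper name is mine, and whether the worktree tracks an `origin/main` ref is an assumption):

```bash
#!/usr/bin/env bash
# Count how many commits a worktree's HEAD is behind some reference.
# A non-zero count means launchd is serving code older than that ref.
commits_behind() {
  local worktree="$1" ref="$2"
  git -C "$worktree" rev-list --count "HEAD..$ref"
}

# e.g. commits_behind ~/worktrees/kimi-repo origin/main
```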

#### 7. com.timmy.matrix-frontend

- Plist: `~/Library/LaunchAgents/com.timmy.matrix-frontend.plist`
- Command: `npx vite --host`
- Working directory: `~/worktrees/the-matrix`
- Logs: `~/.hermes/logs/matrix-frontend.log`
- KeepAlive: yes
- RunAtLoad: yes
- Old-state risk:
  - HIGH
  - this still points at `~/worktrees/the-matrix`, even though the live 3D world work moved to `Timmy_Foundation/the-nexus`
  - if this is left loaded, it can revive the old frontend lineage

### B. running now but NOT launchd-managed

These are live processes, but not currently represented by a loaded launchd plist.
They can still persist because they were started with `nohup` or by other parent scripts.

#### 8. gemini-loop.sh

- Live process: `~/.hermes/bin/gemini-loop.sh`
- State files:
  - `~/.hermes/logs/gemini-loop.log`
  - `~/.hermes/logs/gemini-skip-list.json`
  - `~/.hermes/logs/gemini-active.json`
  - `~/.hermes/logs/gemini-locks/`
  - `~/.hermes/logs/gemini-pids/`
  - worktrees under `~/worktrees/gemini-w*`
  - per-issue logs `~/.hermes/logs/gemini-*.log`
- Old-state risk:
  - skip list suppresses issues for hours
  - lock directories can make issues look "already busy"
  - old worktrees can preserve prior branch state
  - branch naming `gemini/issue-N` continues prior work if branch exists
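
Old lock files are what make issues look "already busy". A quick way to surface that debris (a sketch; the helper name and the 120-minute threshold are illustrative, not values the loop uses):

```bash
#!/usr/bin/env bash
# List lock files older than N minutes; old locks are the "already busy" lie.
stale_locks() {
  local dir="${1:-$HOME/.hermes/logs/gemini-locks}" mins="${2:-120}"
  find "$dir" -name '*.lock' -mmin +"$mins" 2>/dev/null || true
}

stale_locks   # prints nothing when there is no stale lock debris
```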

Stop cleanly:

```bash
pkill -f 'bash /Users/apayne/.hermes/bin/gemini-loop.sh'
pkill -f 'gemini .*--yolo'
rm -rf ~/.hermes/logs/gemini-locks/*.lock ~/.hermes/logs/gemini-pids/*.pid
printf '{}\n' > ~/.hermes/logs/gemini-active.json
```

#### 9. timmy-orchestrator.sh

- Live process: `~/.hermes/bin/timmy-orchestrator.sh`
- State files:
  - `~/.hermes/logs/timmy-orchestrator.log`
  - `~/.hermes/logs/timmy-orchestrator.pid`
  - `~/.hermes/logs/timmy-reviews.log`
  - `~/.hermes/logs/workforce-manager.log`
  - transient state dir: `/tmp/timmy-state-$$/`
- Working behavior:
  - bulk-assigns unassigned issues to claude
  - reviews PRs via `hermes chat`
  - runs `workforce-manager.py`
- Old-state risk:
  - writes agent assignments back into Gitea
  - can repopulate agent queues even after you thought they were cleared
  - not represented in `timmy-config/bin` yet as of this audit
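
The `/tmp/timmy-state-$$/` path is worth decoding: `$$` is the shell's own PID, so the orchestrator gets a fresh scratch directory on every launch, and that state never survives a restart. A minimal sketch of the pattern (the trap-based cleanup is my assumption about intent, not observed behavior):

```bash
#!/usr/bin/env bash
# Per-run scratch state keyed by PID: a new directory on every launch,
# which is why this state never survives an orchestrator restart.
state_dir="/tmp/timmy-state-$$"
mkdir -p "$state_dir"
trap 'rm -rf "$state_dir"' EXIT   # a clean exit removes it; a crash leaves it behind

echo "scratch state lives in $state_dir"
```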

### C. Hermes cron automations

Current cron inventory from `cronjob(list, include_disabled=true)`:

Enabled:

- `a77a87392582` — Health Monitor — every 5m

Paused:

- `9e0624269ba7` — Triage Heartbeat
- `e29eda4a8548` — PR Review Sweep
- `5e9d952871bc` — Agent Status Check
- `36fb2f630a17` — Hermes Philosophy Loop

Old-state risk:

- paused crons are not dead forever; they are resumable state
- LLM-wrapped crons can revive old routing/model assumptions if resumed blindly

### D. file exists but NOT currently loaded

These are the ones most likely to surprise us later because they still exist and point at old realities.

#### 10. ai.hermes.startup

- Plist: `~/Library/LaunchAgents/ai.hermes.startup.plist`
- Points to: `~/.hermes/bin/hermes-startup.sh`
- Not loaded in launchctl at audit time
- High-risk notes:
  - startup script still expects `~/.hermes/bin/timmy-tmux.sh`
  - that file is MISSING at audit time
  - script also tries to start webhook listener and the old `timmy-loop` tmux world
- This is a dormant old-state resurrection path
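
Before ever re-enabling this startup path, a preflight check would catch the missing `timmy-tmux.sh` up front instead of at boot (a sketch; the helper name is illustrative):

```bash
#!/usr/bin/env bash
# Report startup dependencies that are missing or not executable.
preflight() {
  local dep missing=0
  for dep in "$@"; do
    if [ ! -x "$dep" ]; then
      echo "missing: $dep"
      missing=1
    fi
  done
  return "$missing"
}

# e.g. preflight ~/.hermes/bin/timmy-tmux.sh ~/.hermes/bin/hermes-startup.sh
```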

#### 11. com.timmy.tick

- Plist: `~/Library/LaunchAgents/com.timmy.tick.plist`
- Points to: `/Users/apayne/Timmy-time-dashboard/deploy/timmy-tick-mac.sh`
- Not loaded at audit time
- Definitely legacy dashboard-era automation

#### 12. com.tower.pr-automerge

- Plist: `~/Library/LaunchAgents/com.tower.pr-automerge.plist`
- Points to: `/Users/apayne/hermes-config/bin/pr-automerge.sh`
- Not loaded at audit time
- Separate Tower-era automation path; not part of current Timmy sidecar truth

## State carriers that make the machine feel haunted

These are the files and external states that most often "bring back old state":

### Hermes runtime state

- `~/.hermes/config.yaml`
- `~/.hermes/channel_directory.json`
- `~/.hermes/sessions/sessions.json`
- `~/.hermes/state.db`

### Loop state

- `~/.hermes/logs/claude-skip-list.json`
- `~/.hermes/logs/claude-active.json`
- `~/.hermes/logs/claude-locks/`
- `~/.hermes/logs/claude-pids/`
- `~/.hermes/logs/gemini-skip-list.json`
- `~/.hermes/logs/gemini-active.json`
- `~/.hermes/logs/gemini-locks/`
- `~/.hermes/logs/gemini-pids/`

### Kimi queue state

- Gitea labels, not local files, are the queue truth:
  - `assigned-kimi`
  - `kimi-in-progress`
  - `kimi-done`

### Worktree state

- `~/worktrees/*`
- especially old frontend/backend worktrees like:
  - `~/worktrees/the-matrix`
  - `~/worktrees/kimi-repo`

### Launchd state

- plist files in `~/Library/LaunchAgents`
- anything with `RunAtLoad` and `KeepAlive` can resurrect automatically
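
A grep-level scan for plists that can resurrect automatically (a sketch: it only string-matches the two keys and does not parse their values, so a `<false/>` next to either key would be a false positive):

```bash
#!/usr/bin/env bash
# Flag launch agent plists that declare both KeepAlive and RunAtLoad.
list_resurrectors() {
  local dir="${1:-$HOME/Library/LaunchAgents}" p
  for p in "$dir"/*.plist; do
    [ -f "$p" ] || continue
    if grep -q '<key>KeepAlive</key>' "$p" && grep -q '<key>RunAtLoad</key>' "$p"; then
      echo "auto-resurrecting: $p"
    fi
  done
  return 0
}

list_resurrectors
```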

## Audit commands

List loaded Timmy/Hermes automations:

```bash
launchctl list | egrep 'timmy|kimi|claude|max|dashboard|matrix|gateway|huey'
```

List Timmy/Hermes launch agent files:

```bash
find ~/Library/LaunchAgents -maxdepth 1 -name '*.plist' | egrep 'timmy|hermes|openclaw|tower'
```

List running loop scripts:

```bash
ps -Ao pid,ppid,etime,command | egrep '/Users/apayne/.hermes/bin/|/Users/apayne/.timmy/uniwizard/'
```

List cron jobs:

```bash
hermes cron list --include-disabled
```

## Safe reset order when old state keeps coming back

1. Stop launchd jobs first

```bash
launchctl bootout gui/$(id -u) ~/Library/LaunchAgents/ai.timmy.kimi-heartbeat.plist || true
launchctl bootout gui/$(id -u) ~/Library/LaunchAgents/ai.timmy.claudemax-watchdog.plist || true
launchctl bootout gui/$(id -u) ~/Library/LaunchAgents/ai.hermes.gateway.plist || true
launchctl bootout gui/$(id -u) ~/Library/LaunchAgents/ai.hermes.gateway-fenrir.plist || true
launchctl bootout gui/$(id -u) ~/Library/LaunchAgents/ai.openclaw.gateway.plist || true
launchctl bootout gui/$(id -u) ~/Library/LaunchAgents/com.timmy.dashboard-backend.plist || true
launchctl bootout gui/$(id -u) ~/Library/LaunchAgents/com.timmy.matrix-frontend.plist || true
```

2. Kill manual loops

```bash
pkill -f 'gemini-loop.sh' || true
pkill -f 'timmy-orchestrator.sh' || true
pkill -f 'claude-loop.sh' || true
pkill -f 'claude .*--print' || true
pkill -f 'gemini .*--yolo' || true
```

3. Clear local loop state

```bash
rm -rf ~/.hermes/logs/claude-locks/*.lock ~/.hermes/logs/claude-pids/*.pid
rm -rf ~/.hermes/logs/gemini-locks/*.lock ~/.hermes/logs/gemini-pids/*.pid
printf '{}\n' > ~/.hermes/logs/claude-active.json
printf '{}\n' > ~/.hermes/logs/gemini-active.json
rm -f /tmp/kimi-heartbeat.lock
```

4. If gateway/session drift is the problem, back up before clearing

```bash
cp ~/.hermes/config.yaml ~/.hermes/config.yaml.bak.$(date +%Y%m%d-%H%M%S)
cp ~/.hermes/sessions/sessions.json ~/.hermes/sessions/sessions.json.bak.$(date +%Y%m%d-%H%M%S)
```

5. Relaunch only what you explicitly want
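
`launchctl bootstrap` is the counterpart to the `bootout` calls in step 1. Relaunching one agent explicitly can be sketched as a dry-run helper so nothing is loaded by accident (the helper name is mine; the plist path is an example from this inventory):

```bash
#!/usr/bin/env bash
# Print (not run) the bootstrap command for one explicitly chosen plist.
relaunch_cmd() {
  printf 'launchctl bootstrap gui/%s %s\n' "$(id -u)" "$1"
}

relaunch_cmd ~/Library/LaunchAgents/ai.hermes.gateway.plist
```

Run the printed command by hand for each agent you actually want back.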

## Current contradictions to fix later

1. README still describes `bin/` as "NOT deprecated loops", but the live runtime still contains revived loop scripts.
2. `DEPRECATED.md` says claude-loop/gemini-loop/timmy-orchestrator/claudemax-watchdog were removed, but reality disagrees.
3. `com.timmy.matrix-frontend` still points at `~/worktrees/the-matrix` rather than the nexus lineage.
4. `ai.hermes.startup` still points at a startup path that expects the missing `timmy-tmux.sh`.
5. `gemini-loop.sh` and `timmy-orchestrator.sh` are live but not yet mirrored into `timmy-config/bin/`.

Until those are reconciled, trust this inventory over older prose.