feat: add operational scripts and deploy.sh

- Moved all agent loop scripts into source control (bin/)
  - claude-loop.sh, gemini-loop.sh, timmy-orchestrator.sh
  - workforce-manager.py, agent-dispatch.sh, nexus-merge-bot.sh
  - ops dashboard scripts (ops-panel, ops-helpers, ops-gitea)
  - monitoring scripts (timmy-status, timmy-loopstat)
- deploy.sh: one-command overlay onto ~/.hermes/
- Updated README with sidecar architecture docs
- All loops now target the-nexus + autolora only
Author: Alexander Whitestone
Date: 2026-03-25 10:05:55 -04:00
Parent: 341c85381c
Commit: d4c79d47a6
17 changed files with 3863 additions and 25 deletions


@@ -1,17 +1,34 @@
# timmy-config
Timmy's sovereign configuration. Everything that makes Timmy _Timmy_ — soul, memories, skins, playbooks, operational scripts, and config.
This repo is the canonical source of truth for Timmy's identity and operational state. Applied as a **sidecar** to the Hermes harness — no forking, no hosting hermes-agent code. Pull upstream updates to hermes-agent, overlay timmy-config on top.
## Structure
```
timmy-config/
├── deploy.sh ← Deploys config as overlay onto ~/.hermes/
├── SOUL.md ← Inscription 1 — the immutable conscience
├── FALSEWORK.md ← API cost management strategy
├── config.yaml ← Hermes harness configuration
├── channel_directory.json ← Platform channel mappings
├── bin/ ← Operational scripts
│ ├── claude-loop.sh ← Parallel Claude Code agent dispatch
│ ├── gemini-loop.sh ← Parallel Gemini Code agent dispatch
│ ├── timmy-orchestrator.sh ← PR review, triage, merge orchestration
│ ├── workforce-manager.py ← Agent assignment and scoring
│ ├── agent-dispatch.sh ← Single-issue agent launcher
│ ├── agent-loop.sh ← Generic agent loop template
│ ├── nexus-merge-bot.sh ← Auto-merge passing PRs
│ ├── claudemax-watchdog.sh ← Claude quota monitoring
│ ├── hermes-startup.sh ← Boot sequence
│ ├── ops-panel.sh ← Operational dashboard
│ ├── ops-helpers.sh ← Shared shell functions
│ ├── ops-gitea.sh ← Gitea API helpers
│ ├── timmy-status.sh ← Git + Gitea status display
│ ├── timmy-loopstat.sh ← Queue and perf stats
│ └── hotspot-keepalive.sh ← Network keepalive
├── memories/
│ ├── MEMORY.md ← Persistent agent memory
│ └── USER.md ← User profile (Alexander)
@@ -31,34 +48,43 @@ timmy-config/
└── design-log/ ← Historical design decisions
```
## What Lives Where
| What | This Repo | sovereign-orchestration | autolora |
|------|-----------|------------------------|----------|
| Soul & identity | ✓ | | |
| Memories | ✓ | | |
| Skins | ✓ | | |
| Playbooks | ✓ | ✓ (copy) | |
| Config | ✓ | | |
| Task queue & executor | | ✓ | |
| Gitea client & MCP | | ✓ | |
| Training pipeline | | | ✓ |
| Eval suite | | | ✓ |
## Deployment
This config is applied to `~/.hermes/` on the host machine:
```bash
# One command deploys everything
./deploy.sh

# Deploy and restart all agent loops
./deploy.sh --restart-loops
```
This overlays timmy-config onto `~/.hermes/` and `~/.timmy/`:
- `SOUL.md` → `~/.timmy/`
- `config.yaml` → `~/.hermes/`
- `bin/*` → `~/.hermes/bin/`
- `skins/*` → `~/.hermes/skins/`
- `memories/*` → `~/.hermes/memories/`
- `playbooks/*` → `~/.hermes/playbooks/`
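The mapping above can be illustrated with a minimal, self-contained sketch. Temporary directories stand in for the checkout and `~/.hermes/`, and the file names are made up; `deploy.sh` itself remains the source of truth:

```bash
# Illustrative overlay: copy config areas onto the harness dir, never the reverse.
# SRC/DEST are temp stand-ins for the timmy-config checkout and ~/.hermes.
SRC=$(mktemp -d); DEST=$(mktemp -d)
mkdir -p "$SRC/bin" "$DEST/bin"
printf 'demo: true\n' > "$SRC/config.yaml"
printf 'echo hi\n' > "$SRC/bin/demo.sh"
cp "$SRC/config.yaml" "$DEST/config.yaml"   # config.yaml → ~/.hermes/
cp -R "$SRC/bin/." "$DEST/bin/"             # bin/* → ~/.hermes/bin/
overlay_listing=$(ls "$DEST")
echo "$overlay_listing"
rm -rf "$SRC" "$DEST"
```

Because the copy only ever flows from the config checkout into the harness directory, re-running it after an upstream `hermes-agent` update is always safe.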
## Architecture: Sidecar, Not Fork
```
hermes-agent (upstream) timmy-config (this repo)
┌─────────────────────┐ ┌──────────────────────┐
│ Engine │ │ Driver's seat │
│ Tools, routing, │ │ SOUL, memories, │
│ agent loop, gateway │ │ skins, scripts, │
│ │ │ config, playbooks │
└─────────┬───────────┘ └──────────┬───────────┘
│ │
└────────────┬───────────────┘
~/.hermes/ (merged at deploy time)
```
Never modify hermes-agent. Pull updates like any upstream dependency. Everything custom lives here.
## Origin
Migrated from `hermes/hermes-config` (archived).
Owned by Timmy_Foundation. Sovereignty and service always.

bin/agent-dispatch.sh (new executable file, 98 lines)

@@ -0,0 +1,98 @@
#!/usr/bin/env bash
# agent-dispatch.sh — Generate a self-contained prompt for any agent
#
# Usage: agent-dispatch.sh <agent_name> <issue_num> <repo>
# agent-dispatch.sh manus 42 Timmy_Foundation/the-nexus
#
# Outputs a prompt to stdout. Copy-paste into the agent's interface.
# The prompt includes everything: API URLs, token, git commands, PR creation.
set -euo pipefail
AGENT_NAME="${1:?Usage: agent-dispatch.sh <agent> <issue_num> <owner/repo>}"
ISSUE_NUM="${2:?Usage: agent-dispatch.sh <agent> <issue_num> <owner/repo>}"
REPO="${3:?Usage: agent-dispatch.sh <agent> <issue_num> <owner/repo>}"
GITEA_URL="http://143.198.27.163:3000"
TOKEN_FILE="$HOME/.hermes/${AGENT_NAME}_token"
if [ ! -f "$TOKEN_FILE" ]; then
echo "ERROR: No token found at $TOKEN_FILE" >&2
echo "Create a Gitea user and token for '$AGENT_NAME' first." >&2
exit 1
fi
GITEA_TOKEN=$(cat "$TOKEN_FILE")
REPO_OWNER=$(echo "$REPO" | cut -d/ -f1)
REPO_NAME=$(echo "$REPO" | cut -d/ -f2)
BRANCH="${AGENT_NAME}/issue-${ISSUE_NUM}"
# Fetch issue title
ISSUE_TITLE=$(curl -sf -H "Authorization: token $GITEA_TOKEN" \
"${GITEA_URL}/api/v1/repos/${REPO}/issues/${ISSUE_NUM}" 2>/dev/null | \
python3 -c "import sys,json; print(json.loads(sys.stdin.read())['title'])" 2>/dev/null || echo "Issue #${ISSUE_NUM}")
cat <<PROMPT
You are ${AGENT_NAME}, an autonomous code agent working on the ${REPO_NAME} project.
YOUR ISSUE: #${ISSUE_NUM} — "${ISSUE_TITLE}"
GITEA API: ${GITEA_URL}/api/v1
GITEA TOKEN: ${GITEA_TOKEN}
REPO: ${REPO_OWNER}/${REPO_NAME}
== STEP 1: READ THE ISSUE ==
curl -s -H "Authorization: token ${GITEA_TOKEN}" "${GITEA_URL}/api/v1/repos/${REPO_OWNER}/${REPO_NAME}/issues/${ISSUE_NUM}"
curl -s -H "Authorization: token ${GITEA_TOKEN}" "${GITEA_URL}/api/v1/repos/${REPO_OWNER}/${REPO_NAME}/issues/${ISSUE_NUM}/comments"
Read the issue body AND all comments for context and build order constraints.
== STEP 2: SET UP WORKSPACE ==
git clone http://${AGENT_NAME}:${GITEA_TOKEN}@143.198.27.163:3000/${REPO_OWNER}/${REPO_NAME}.git /tmp/${AGENT_NAME}-work-${ISSUE_NUM}
cd /tmp/${AGENT_NAME}-work-${ISSUE_NUM}
Check if branch exists (prior attempt): git ls-remote origin ${BRANCH}
If yes: git fetch origin ${BRANCH} && git checkout ${BRANCH}
If no: git checkout -b ${BRANCH}
== STEP 3: UNDERSTAND THE PROJECT ==
Read README.md or any contributing guide. Check for tox.ini, Makefile, package.json.
Follow existing code conventions.
== STEP 4: DO THE WORK ==
Implement the fix/feature described in the issue. Run tests if the project has them.
== STEP 5: COMMIT AND PUSH ==
git add -A
git commit -m "feat: <description> (#${ISSUE_NUM})
Fixes #${ISSUE_NUM}"
git push origin ${BRANCH}
== STEP 6: CREATE PR ==
curl -s -X POST "${GITEA_URL}/api/v1/repos/${REPO_OWNER}/${REPO_NAME}/pulls" \\
-H "Authorization: token ${GITEA_TOKEN}" \\
-H "Content-Type: application/json" \\
-d '{"title": "[${AGENT_NAME}] <description> (#${ISSUE_NUM})", "body": "Fixes #${ISSUE_NUM}\n\n<describe changes>", "head": "${BRANCH}", "base": "main"}'
== STEP 7: COMMENT ON ISSUE ==
curl -s -X POST "${GITEA_URL}/api/v1/repos/${REPO_OWNER}/${REPO_NAME}/issues/${ISSUE_NUM}/comments" \\
-H "Authorization: token ${GITEA_TOKEN}" \\
-H "Content-Type: application/json" \\
-d '{"body": "PR submitted. <summary>"}'
== RULES ==
- Read project docs FIRST.
- Use the project's own test/lint tools.
- Respect git hooks. Do not skip them.
- If tests fail twice, STOP and comment on the issue.
- ALWAYS push your work. ALWAYS create a PR. No exceptions.
- Clean up: remove /tmp/${AGENT_NAME}-work-${ISSUE_NUM} when done.
PROMPT
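The issue-title fetch near the top of this script parses the API response with a Python one-liner and falls back to a generic title when the call or the parse fails. The same parse-with-fallback pattern can be exercised offline (both payloads here are made up):

```bash
extract_title() {
  # Mirrors the script's fallback: a bad payload yields a generic title
  # instead of aborting the dispatch.
  echo "$1" | python3 -c "import sys,json; print(json.loads(sys.stdin.read())['title'])" 2>/dev/null \
    || echo "Issue #42"
}
good=$(extract_title '{"title":"Fix login flow","number":42}')
bad=$(extract_title 'not json')
echo "$good / $bad"
```

The pipeline's exit status is that of `python3`, so any JSON error trips the `||` branch and the dispatch prompt still gets a usable title.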

bin/agent-loop.sh (new executable file, 373 lines)

@@ -0,0 +1,373 @@
#!/usr/bin/env bash
# agent-loop.sh — Universal agent dev loop
# One script for all agents. Config via agent-specific .conf files.
#
# Usage: agent-loop.sh <agent-name> [num-workers]
# agent-loop.sh groq
# agent-loop.sh claude 10
# agent-loop.sh grok 1
set -uo pipefail
AGENT="${1:?Usage: agent-loop.sh <agent-name> [num-workers]}"
NUM_WORKERS="${2:-1}"
CONF="$HOME/.hermes/agents/${AGENT}.conf"
if [ ! -f "$CONF" ]; then
echo "No config at $CONF — create it first." >&2
exit 1
fi
# Load agent config
source "$CONF"
# === DEFAULTS (overridable in .conf) ===
: "${GITEA_URL:=http://143.198.27.163:3000}"
: "${WORKTREE_BASE:=$HOME/worktrees}"
: "${TIMEOUT:=600}"
: "${COOLDOWN:=30}"
: "${MAX_WORKERS:=10}"
: "${REPOS:=Timmy_Foundation/the-nexus rockachopa/hermes-agent}"
LOG_DIR="$HOME/.hermes/logs"
LOG="$LOG_DIR/${AGENT}-loop.log"
PIDFILE="$LOG_DIR/${AGENT}-loop.pid"
SKIP_FILE="$LOG_DIR/${AGENT}-skip-list.json"
LOCK_DIR="$LOG_DIR/${AGENT}-locks"
mkdir -p "$LOG_DIR" "$WORKTREE_BASE" "$LOCK_DIR"
[ -f "$SKIP_FILE" ] || echo '{}' > "$SKIP_FILE"
export BROWSER=echo # never open a browser
# === Single instance guard ===
if [ -f "$PIDFILE" ]; then
old_pid=$(cat "$PIDFILE")
if kill -0 "$old_pid" 2>/dev/null; then
echo "${AGENT} loop already running (PID $old_pid)" >&2
exit 0
fi
fi
echo $$ > "$PIDFILE"
trap 'rm -f "$PIDFILE"' EXIT
AGENT_UPPER=$(echo "$AGENT" | tr '[:lower:]' '[:upper:]')
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] ${AGENT_UPPER}: $*" >> "$LOG"
}
mark_skip() {
local issue_num="$1" reason="$2"
python3 -c "
import json, time, fcntl
with open('${SKIP_FILE}', 'r+') as f:
fcntl.flock(f, fcntl.LOCK_EX)
try: skips = json.load(f)
except: skips = {}
failures = skips.get(str($issue_num), {}).get('failures', 0) + 1
skip_hours = 6 if failures >= 3 else 1
skips[str($issue_num)] = {
'until': time.time() + (skip_hours * 3600),
'reason': '$reason', 'failures': failures
}
f.seek(0); f.truncate()
json.dump(skips, f, indent=2)
" 2>/dev/null
}
lock_issue() {
local key="$1"
mkdir "$LOCK_DIR/$key.lock" 2>/dev/null && echo $$ > "$LOCK_DIR/$key.lock/pid"
}
unlock_issue() {
rm -rf "$LOCK_DIR/$1.lock" 2>/dev/null
}
get_next_issue() {
python3 -c "
import json, sys, time, urllib.request, os
token = '${GITEA_TOKEN}'
base = '${GITEA_URL}'
repos = '${REPOS}'.split()
agent = '${AGENT}'
try:
with open('${SKIP_FILE}') as f: skips = json.load(f)
except: skips = {}
for repo in repos:
url = f'{base}/api/v1/repos/{repo}/issues?state=open&type=issues&limit=30&sort=created'
req = urllib.request.Request(url, headers={'Authorization': f'token {token}'})
try:
resp = urllib.request.urlopen(req, timeout=10)
issues = json.loads(resp.read())
except: continue
for i in issues:
assignees = [a['login'] for a in (i.get('assignees') or [])]
if assignees and agent not in assignees: continue
title = i['title'].lower()
if '[epic]' in title or '[meta]' in title or '[audit]' in title: continue
num = str(i['number'])
entry = skips.get(num, {})
if entry and entry.get('until', 0) > time.time(): continue
lock = '${LOCK_DIR}/' + repo.replace('/','-') + '-' + num + '.lock'
if os.path.isdir(lock): continue
owner, name = repo.split('/')
if not assignees:
try:
data = json.dumps({'assignees': [agent]}).encode()
req2 = urllib.request.Request(
f'{base}/api/v1/repos/{repo}/issues/{i[\"number\"]}',
data=data, method='PATCH',
headers={'Authorization': f'token {token}', 'Content-Type': 'application/json'})
urllib.request.urlopen(req2, timeout=5)
except: pass
print(json.dumps({
'number': i['number'], 'title': i['title'],
'repo_owner': owner, 'repo_name': name, 'repo': repo}))
sys.exit(0)
print('null')
" 2>/dev/null
}
# === MERGE OWN PRs FIRST ===
merge_own_prs() {
# Before new work: find our open PRs, rebase if needed, merge them.
local open_prs
open_prs=$(curl -sf -H "Authorization: token ${GITEA_TOKEN}" \
"${GITEA_URL}/api/v1/repos/Timmy_Foundation/the-nexus/pulls?state=open&limit=20" 2>/dev/null | \
python3 -c "
import sys, json
prs = json.loads(sys.stdin.buffer.read())
ours = [p for p in prs if p['user']['login'] == '${AGENT}']
for p in ours:
print(f'{p[\"number\"]}|{p[\"head\"][\"ref\"]}|{p.get(\"mergeable\",False)}')
" 2>/dev/null)
[ -z "$open_prs" ] && return 0
local count=0
echo "$open_prs" | while IFS='|' read pr_num branch mergeable; do
[ -z "$pr_num" ] && continue
count=$((count + 1))
if [ "$mergeable" = "True" ]; then
# Try to squash merge directly
local result
result=$(curl -sf -w "%{http_code}" -X POST \
-H "Authorization: token ${GITEA_TOKEN}" \
-H "Content-Type: application/json" \
-d '{"Do":"squash","delete_branch_after_merge":true}' \
"${GITEA_URL}/api/v1/repos/Timmy_Foundation/the-nexus/pulls/${pr_num}/merge" 2>/dev/null)
local code="${result: -3}"
if [ "$code" = "200" ] || [ "$code" = "405" ]; then
log "MERGE: PR #${pr_num} merged"
else
log "MERGE: PR #${pr_num} merge failed (HTTP $code)"
fi
else
# Conflicts — clone, rebase, force push, then merge
local tmpdir="/tmp/${AGENT}-rebase-${pr_num}"
cd "$HOME"
rm -rf "$tmpdir" 2>/dev/null
local CLONE_URL="http://${AGENT}:${GITEA_TOKEN}@143.198.27.163:3000/Timmy_Foundation/the-nexus.git"
git clone -q --depth=50 -b "$branch" "$CLONE_URL" "$tmpdir" 2>/dev/null
if [ -d "$tmpdir/.git" ]; then
cd "$tmpdir"
git fetch origin main 2>/dev/null
if git rebase origin/main 2>/dev/null; then
git push -f origin "$branch" 2>/dev/null
log "REBASE: PR #${pr_num} rebased and pushed"
sleep 3
# Now try merge
curl -sf -X POST \
-H "Authorization: token ${GITEA_TOKEN}" \
-H "Content-Type: application/json" \
-d '{"Do":"squash","delete_branch_after_merge":true}' \
"${GITEA_URL}/api/v1/repos/Timmy_Foundation/the-nexus/pulls/${pr_num}/merge" 2>/dev/null
log "MERGE: PR #${pr_num} merged after rebase"
else
git rebase --abort 2>/dev/null
# Rebase impossible — close the PR, issue stays open for redo
curl -sf -X PATCH \
-H "Authorization: token ${GITEA_TOKEN}" \
-H "Content-Type: application/json" \
-d '{"state":"closed"}' \
"${GITEA_URL}/api/v1/repos/Timmy_Foundation/the-nexus/pulls/${pr_num}" 2>/dev/null
log "CLOSE: PR #${pr_num} unrebaseable, closed"
fi
cd "$HOME"; rm -rf "$tmpdir"
fi
fi
sleep 2
done
return 0  # count is updated inside a pipeline subshell above and never reaches here; callers ignore the return value
}
# === WORKER FUNCTION ===
run_worker() {
local wid="$1"
log "WORKER-${wid}: started"
while true; do
# RULE: Merge existing PRs BEFORE creating new work.
merge_own_prs
local issue_json
issue_json=$(get_next_issue)
if [ "$issue_json" = "null" ] || [ -z "$issue_json" ]; then
sleep 120
continue
fi
local issue_num issue_title repo_owner repo_name repo branch workdir issue_key
issue_num=$(echo "$issue_json" | python3 -c "import sys,json; print(json.loads(sys.stdin.read())['number'])")
issue_title=$(echo "$issue_json" | python3 -c "import sys,json; print(json.loads(sys.stdin.read())['title'])")
repo_owner=$(echo "$issue_json" | python3 -c "import sys,json; print(json.loads(sys.stdin.read())['repo_owner'])")
repo_name=$(echo "$issue_json" | python3 -c "import sys,json; print(json.loads(sys.stdin.read())['repo_name'])")
repo="${repo_owner}/${repo_name}"
branch="${AGENT}/issue-${issue_num}"
workdir="${WORKTREE_BASE}/${AGENT}-w${wid}-${issue_num}"
issue_key="${repo_owner}-${repo_name}-${issue_num}"
lock_issue "$issue_key" || { sleep "$COOLDOWN"; continue; }
log "WORKER-${wid}: #${issue_num} - ${issue_title}"
# Clone
cd "$HOME"
rm -rf "$workdir" 2>/dev/null || true
local CLONE_URL="http://${AGENT}:${GITEA_TOKEN}@143.198.27.163:3000/${repo}.git"
if git ls-remote --heads "$CLONE_URL" "$branch" 2>/dev/null | grep -q "$branch"; then
git clone -q --depth=50 -b "$branch" "$CLONE_URL" "$workdir" 2>/dev/null
if [ -d "$workdir/.git" ]; then
cd "$workdir"
git fetch origin main 2>/dev/null
if ! git rebase origin/main 2>/dev/null; then
log "WORKER-${wid}: rebase failed, starting fresh"
cd "$HOME"; rm -rf "$workdir"
git clone -q --depth=1 -b main "$CLONE_URL" "$workdir" 2>/dev/null
cd "$workdir"; git checkout -b "$branch" 2>/dev/null
fi
fi
else
git clone -q --depth=1 -b main "$CLONE_URL" "$workdir" 2>/dev/null
cd "$workdir" 2>/dev/null && git checkout -b "$branch" 2>/dev/null
fi
if [ ! -d "$workdir/.git" ]; then
log "WORKER-${wid}: clone failed for #${issue_num}"
mark_skip "$issue_num" "clone_failed"
unlock_issue "$issue_key"
sleep "$COOLDOWN"; continue
fi
cd "$workdir"
# Read issue context
local issue_body issue_comments
issue_body=$(curl -sf -H "Authorization: token ${GITEA_TOKEN}" \
"${GITEA_URL}/api/v1/repos/${repo}/issues/${issue_num}" 2>/dev/null | \
python3 -c "import sys,json; print(json.loads(sys.stdin.read()).get('body',''))" 2>/dev/null || echo "")
issue_comments=$(curl -sf -H "Authorization: token ${GITEA_TOKEN}" \
"${GITEA_URL}/api/v1/repos/${repo}/issues/${issue_num}/comments" 2>/dev/null | \
python3 -c "
import sys,json
comments = json.loads(sys.stdin.read())
for c in comments[-3:]:
print(f'{c[\"user\"][\"login\"]}: {c[\"body\"][:150]}')
" 2>/dev/null || echo "")
# === RUN THE AGENT-SPECIFIC CLI ===
# This is the ONLY part that differs between agents.
# The run_agent function is defined in the .conf file.
run_agent "$issue_num" "$issue_title" "$issue_body" "$issue_comments" "$workdir" "$repo_owner" "$repo_name" "$branch"
# === COMMIT + PUSH (universal) ===
cd "$workdir" 2>/dev/null || { unlock_issue "$issue_key"; continue; }
git add -A 2>/dev/null
if ! git diff --cached --quiet 2>/dev/null; then
git commit -m "feat: ${issue_title} (#${issue_num})
Refs #${issue_num}
Agent: ${AGENT}" 2>/dev/null
fi
# Check for local commits beyond origin/main (agent may have committed directly).
# Note: `git log -1 | grep -q .` is true on ANY branch, so count ahead commits instead.
local has_commits=false
local ahead
ahead=$(git rev-list --count origin/main..HEAD 2>/dev/null || echo 0)
[ "${ahead:-0}" -gt 0 ] && has_commits=true
if [ "$has_commits" = true ]; then
git push origin "$branch" 2>/dev/null || git push -f origin "$branch" 2>/dev/null || {
log "WORKER-${wid}: push failed for #${issue_num}"
mark_skip "$issue_num" "push_failed"
cd "$HOME"; rm -rf "$workdir"; unlock_issue "$issue_key"
sleep "$COOLDOWN"; continue
}
# Create or update PR
local existing_pr pr_num
existing_pr=$(curl -sf -H "Authorization: token ${GITEA_TOKEN}" \
"${GITEA_URL}/api/v1/repos/${repo}/pulls?state=open&head=${branch}&limit=1" 2>/dev/null | \
python3 -c "import sys,json; prs=json.loads(sys.stdin.read()); print(prs[0]['number'] if prs else '')" 2>/dev/null)
if [ -n "$existing_pr" ]; then
pr_num="$existing_pr"
log "WORKER-${wid}: updated PR #${pr_num}"
else
local pr_result
pr_result=$(curl -sf -X POST \
-H "Authorization: token ${GITEA_TOKEN}" \
-H "Content-Type: application/json" \
-d "{\"title\": \"[${AGENT}] ${issue_title} (#${issue_num})\", \"body\": \"Refs #${issue_num}\n\nAgent: ${AGENT}\", \"head\": \"${branch}\", \"base\": \"main\"}" \
"${GITEA_URL}/api/v1/repos/${repo}/pulls" 2>/dev/null || echo "{}")
pr_num=$(echo "$pr_result" | python3 -c "import sys,json; print(json.loads(sys.stdin.read()).get('number','?'))" 2>/dev/null)
log "WORKER-${wid}: PR #${pr_num} created for #${issue_num}"
fi
# Only comment once per agent per issue — check before posting
existing_comment=$(curl -sf \
-H "Authorization: token ${GITEA_TOKEN}" \
"${GITEA_URL}/api/v1/repos/${repo}/issues/${issue_num}/comments" 2>/dev/null \
| python3 -c "import sys,json; cs=json.loads(sys.stdin.read()); print('yes' if any('PR #' in c.get('body','') and '${AGENT}' in c.get('body','') for c in cs) else 'no')" 2>/dev/null)
if [ "$existing_comment" != "yes" ]; then
curl -sf -X POST \
-H "Authorization: token ${GITEA_TOKEN}" \
-H "Content-Type: application/json" \
-d "{\"body\": \"PR #${pr_num} submitted by ${AGENT}\"}" \
"${GITEA_URL}/api/v1/repos/${repo}/issues/${issue_num}/comments" >/dev/null 2>&1
fi
else
log "WORKER-${wid}: no changes for #${issue_num}"
mark_skip "$issue_num" "no_changes"
fi
cd "$HOME"; rm -rf "$workdir"
unlock_issue "$issue_key"
log "WORKER-${wid}: #${issue_num} complete"
sleep "$COOLDOWN"
done
}
# === MAIN ===
log "=== ${AGENT} loop started (PID $$, ${NUM_WORKERS} workers) ==="
if [ "$NUM_WORKERS" -gt 1 ]; then
for i in $(seq 1 "$NUM_WORKERS"); do
run_worker "$i" &
sleep 2
done
wait
else
run_worker 1
fi
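The `lock_issue`/`unlock_issue` helpers in this script rely on `mkdir` being atomic: only one process can create the lock directory, so concurrent workers cannot claim the same issue. A standalone illustration with a throwaway lock directory and a made-up issue key:

```bash
LOCKS=$(mktemp -d)
lock()   { mkdir "$LOCKS/$1.lock" 2>/dev/null && echo $$ > "$LOCKS/$1.lock/pid"; }
unlock() { rm -rf "$LOCKS/$1.lock"; }
first=$(lock the-nexus-42 && echo acquired || echo blocked)   # wins the race
second=$(lock the-nexus-42 && echo acquired || echo blocked)  # mkdir fails: already held
unlock the-nexus-42
third=$(lock the-nexus-42 && echo acquired || echo blocked)   # free again
rm -rf "$LOCKS"
echo "$first $second $third"
```

Unlike flock-based schemes, a directory lock works across unrelated processes with no shared file descriptor, at the cost of leaking the lock if a worker dies without cleanup.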

bin/claude-loop.sh (new executable file, 610 lines)

@@ -0,0 +1,610 @@
#!/usr/bin/env bash
# claude-loop.sh — Parallel Claude Code agent dispatch loop
# Runs N workers concurrently against the Gitea backlog.
# Gracefully handles rate limits with backoff.
#
# Usage: claude-loop.sh [NUM_WORKERS] (default: 5)
set -euo pipefail
# === CONFIG ===
NUM_WORKERS="${1:-5}"
MAX_WORKERS=10 # absolute ceiling
WORKTREE_BASE="$HOME/worktrees"
GITEA_URL="http://143.198.27.163:3000"
GITEA_TOKEN=$(cat "$HOME/.hermes/claude_token")
CLAUDE_TIMEOUT=900 # 15 min per issue
COOLDOWN=15 # seconds between issues — stagger clones
RATE_LIMIT_SLEEP=30 # initial sleep on rate limit
MAX_RATE_SLEEP=120 # max backoff on rate limit
LOG_DIR="$HOME/.hermes/logs"
SKIP_FILE="$LOG_DIR/claude-skip-list.json"
LOCK_DIR="$LOG_DIR/claude-locks"
ACTIVE_FILE="$LOG_DIR/claude-active.json"
mkdir -p "$LOG_DIR" "$WORKTREE_BASE" "$LOCK_DIR"
# Initialize files
[ -f "$SKIP_FILE" ] || echo '{}' > "$SKIP_FILE"
echo '{}' > "$ACTIVE_FILE"
# === SHARED FUNCTIONS ===
log() {
local msg="[$(date '+%Y-%m-%d %H:%M:%S')] $*"
echo "$msg" >> "$LOG_DIR/claude-loop.log"
}
lock_issue() {
local issue_key="$1"
local lockfile="$LOCK_DIR/$issue_key.lock"
if mkdir "$lockfile" 2>/dev/null; then
echo $$ > "$lockfile/pid"
return 0
fi
return 1
}
unlock_issue() {
local issue_key="$1"
rm -rf "$LOCK_DIR/$issue_key.lock" 2>/dev/null
}
mark_skip() {
local issue_num="$1"
local reason="$2"
local skip_hours="${3:-1}"
python3 -c "
import json, time, fcntl
with open('$SKIP_FILE', 'r+') as f:
fcntl.flock(f, fcntl.LOCK_EX)
try: skips = json.load(f)
except: skips = {}
skips[str($issue_num)] = {
'until': time.time() + ($skip_hours * 3600),
'reason': '$reason',
'failures': skips.get(str($issue_num), {}).get('failures', 0) + 1
}
if skips[str($issue_num)]['failures'] >= 3:
skips[str($issue_num)]['until'] = time.time() + (6 * 3600)
f.seek(0)
f.truncate()
json.dump(skips, f, indent=2)
" 2>/dev/null
log "SKIP: #${issue_num} - ${reason}"
}
update_active() {
local worker="$1" issue="$2" repo="$3" status="$4"
python3 -c "
import json, fcntl
with open('$ACTIVE_FILE', 'r+') as f:
fcntl.flock(f, fcntl.LOCK_EX)
try: active = json.load(f)
except: active = {}
if '$status' == 'done':
active.pop('$worker', None)
else:
active['$worker'] = {'issue': '$issue', 'repo': '$repo', 'status': '$status'}
f.seek(0)
f.truncate()
json.dump(active, f, indent=2)
" 2>/dev/null
}
cleanup_workdir() {
local wt="$1"
rm -rf "$wt" 2>/dev/null || true
}
get_next_issue() {
python3 -c "
import json, sys, time, urllib.request, os
token = '${GITEA_TOKEN}'
base = '${GITEA_URL}'
repos = [
'Timmy_Foundation/the-nexus',
'Timmy_Foundation/autolora',
]
# Load skip list
try:
with open('${SKIP_FILE}') as f: skips = json.load(f)
except: skips = {}
# Load active issues (to avoid double-picking)
try:
with open('${ACTIVE_FILE}') as f:
active = json.load(f)
active_issues = {v['issue'] for v in active.values()}
except:
active_issues = set()
all_issues = []
for repo in repos:
url = f'{base}/api/v1/repos/{repo}/issues?state=open&type=issues&limit=50&sort=created'
req = urllib.request.Request(url, headers={'Authorization': f'token {token}'})
try:
resp = urllib.request.urlopen(req, timeout=10)
issues = json.loads(resp.read())
for i in issues:
i['_repo'] = repo
all_issues.extend(issues)
except:
continue
# Sort by priority: URGENT > P0 > P1 > bugs > LHF > rest
def priority(i):
t = i['title'].lower()
if '[urgent]' in t or 'urgent:' in t: return 0
if '[p0]' in t: return 1
if '[p1]' in t: return 2
if '[bug]' in t: return 3
if 'lhf:' in t or 'lhf ' in t: return 4  # t is already lowercased
if '[p2]' in t: return 5
return 6
all_issues.sort(key=priority)
for i in all_issues:
assignees = [a['login'] for a in (i.get('assignees') or [])]
# Take issues assigned to claude OR unassigned (self-assign)
if assignees and 'claude' not in assignees:
continue
title = i['title'].lower()
if '[philosophy]' in title: continue
if '[epic]' in title or 'epic:' in title: continue
if '[showcase]' in title: continue
num_str = str(i['number'])
if num_str in active_issues: continue
entry = skips.get(num_str, {})
if entry and entry.get('until', 0) > time.time(): continue
lock = '${LOCK_DIR}/' + i['_repo'].replace('/', '-') + '-' + num_str + '.lock'
if os.path.isdir(lock): continue
repo = i['_repo']
owner, name = repo.split('/')
# Self-assign if unassigned
if not assignees:
try:
data = json.dumps({'assignees': ['claude']}).encode()
req2 = urllib.request.Request(
f'{base}/api/v1/repos/{repo}/issues/{i[\"number\"]}',
data=data, method='PATCH',
headers={'Authorization': f'token {token}', 'Content-Type': 'application/json'})
urllib.request.urlopen(req2, timeout=5)
except: pass
print(json.dumps({
'number': i['number'],
'title': i['title'],
'repo_owner': owner,
'repo_name': name,
'repo': repo,
}))
sys.exit(0)
print('null')
" 2>/dev/null
}
build_prompt() {
local issue_num="$1"
local issue_title="$2"
local worktree="$3"
local repo_owner="$4"
local repo_name="$5"
cat <<PROMPT
You are Claude, an autonomous code agent on the ${repo_name} project.
YOUR ISSUE: #${issue_num} — "${issue_title}"
GITEA API: ${GITEA_URL}/api/v1
GITEA TOKEN: ${GITEA_TOKEN}
REPO: ${repo_owner}/${repo_name}
WORKING DIRECTORY: ${worktree}
== YOUR POWERS ==
You can do ANYTHING a developer can do.
1. READ the issue and any comments for context:
curl -s -H "Authorization: token ${GITEA_TOKEN}" "${GITEA_URL}/api/v1/repos/${repo_owner}/${repo_name}/issues/${issue_num}"
curl -s -H "Authorization: token ${GITEA_TOKEN}" "${GITEA_URL}/api/v1/repos/${repo_owner}/${repo_name}/issues/${issue_num}/comments"
2. DO THE WORK. Code, test, fix, refactor — whatever the issue needs.
- Check for tox.ini / Makefile / package.json for test/lint commands
- Run tests if the project has them
- Follow existing code conventions
3. COMMIT with conventional commits: fix: / feat: / refactor: / test: / chore:
Include "Fixes #${issue_num}" or "Refs #${issue_num}" in the message.
4. PUSH to your branch (claude/issue-${issue_num}) and CREATE A PR:
git push origin claude/issue-${issue_num}
curl -s -X POST "${GITEA_URL}/api/v1/repos/${repo_owner}/${repo_name}/pulls" \\
-H "Authorization: token ${GITEA_TOKEN}" \\
-H "Content-Type: application/json" \\
-d '{"title": "[claude] <description> (#${issue_num})", "body": "Fixes #${issue_num}\n\n<describe what you did>", "head": "claude/issue-${issue_num}", "base": "main"}'
5. COMMENT on the issue when done:
curl -s -X POST "${GITEA_URL}/api/v1/repos/${repo_owner}/${repo_name}/issues/${issue_num}/comments" \\
-H "Authorization: token ${GITEA_TOKEN}" \\
-H "Content-Type: application/json" \\
-d '{"body": "PR created. <summary of changes>"}'
== RULES ==
- Read CLAUDE.md or project README first for conventions
- If the project has tox, use tox. If npm, use npm. Follow the project.
- Never use --no-verify on git commands.
- If tests fail after 2 attempts, STOP and comment on the issue explaining why.
- Be thorough but focused. Fix the issue, don't refactor the world.
== CRITICAL: ALWAYS COMMIT AND PUSH ==
- NEVER exit without committing your work. Even partial progress MUST be committed.
- Before you finish, ALWAYS: git add -A && git commit && git push origin claude/issue-${issue_num}
- ALWAYS create a PR before exiting. No exceptions.
- If a branch already exists with prior work, check it out and CONTINUE from where it left off.
- Check: git ls-remote origin claude/issue-${issue_num} — if it exists, pull it first.
- Your work is WASTED if it's not pushed. Push early, push often.
PROMPT
}
# === WORKER FUNCTION ===
run_worker() {
local worker_id="$1"
local consecutive_failures=0
log "WORKER-${worker_id}: Started"
while true; do
# Backoff on repeated failures
if [ "$consecutive_failures" -ge 5 ]; then
local backoff=$((RATE_LIMIT_SLEEP * (consecutive_failures / 5)))
[ "$backoff" -gt "$MAX_RATE_SLEEP" ] && backoff=$MAX_RATE_SLEEP
log "WORKER-${worker_id}: BACKOFF ${backoff}s (${consecutive_failures} failures)"
sleep "$backoff"
consecutive_failures=0
fi
# RULE: Merge existing PRs BEFORE creating new work.
# Check for open PRs from claude, rebase + merge them first.
local our_prs
our_prs=$(curl -sf -H "Authorization: token ${GITEA_TOKEN}" \
"${GITEA_URL}/api/v1/repos/Timmy_Foundation/the-nexus/pulls?state=open&limit=5" 2>/dev/null | \
python3 -c "
import sys, json
prs = json.loads(sys.stdin.buffer.read())
ours = [p for p in prs if p['user']['login'] == 'claude'][:3]
for p in ours:
print(f'{p[\"number\"]}|{p[\"head\"][\"ref\"]}|{p.get(\"mergeable\",False)}')
" 2>/dev/null)
if [ -n "$our_prs" ]; then
echo "$our_prs" | while IFS='|' read pr_num branch mergeable; do
[ -z "$pr_num" ] && continue
if [ "$mergeable" = "True" ]; then
curl -sf -X POST -H "Authorization: token ${GITEA_TOKEN}" \
-H "Content-Type: application/json" \
-d '{"Do":"squash","delete_branch_after_merge":true}' \
"${GITEA_URL}/api/v1/repos/Timmy_Foundation/the-nexus/pulls/${pr_num}/merge" >/dev/null 2>&1
log "WORKER-${worker_id}: merged own PR #${pr_num}"
sleep 3
else
# Rebase and push
local tmpdir="/tmp/claude-rebase-${pr_num}"
# CLONE_URL is otherwise only defined later in the issue loop; under `set -u`
# an unset expansion here would kill the worker, so define it locally.
local CLONE_URL="http://claude:${GITEA_TOKEN}@143.198.27.163:3000/Timmy_Foundation/the-nexus.git"
cd "$HOME"; rm -rf "$tmpdir" 2>/dev/null
git clone -q --depth=50 -b "$branch" "$CLONE_URL" "$tmpdir" 2>/dev/null
if [ -d "$tmpdir/.git" ]; then
cd "$tmpdir"
git fetch origin main 2>/dev/null
if git rebase origin/main 2>/dev/null; then
git push -f origin "$branch" 2>/dev/null
sleep 3
curl -sf -X POST -H "Authorization: token ${GITEA_TOKEN}" \
-H "Content-Type: application/json" \
-d '{"Do":"squash","delete_branch_after_merge":true}' \
"${GITEA_URL}/api/v1/repos/Timmy_Foundation/the-nexus/pulls/${pr_num}/merge" >/dev/null 2>&1
log "WORKER-${worker_id}: rebased+merged PR #${pr_num}"
else
git rebase --abort 2>/dev/null
curl -sf -X PATCH -H "Authorization: token ${GITEA_TOKEN}" \
-H "Content-Type: application/json" -d '{"state":"closed"}' \
"${GITEA_URL}/api/v1/repos/Timmy_Foundation/the-nexus/pulls/${pr_num}" >/dev/null 2>&1
log "WORKER-${worker_id}: closed unrebaseable PR #${pr_num}"
fi
cd "$HOME"; rm -rf "$tmpdir"
fi
fi
done
fi
# Get next issue
issue_json=$(get_next_issue)
if [ "$issue_json" = "null" ] || [ -z "$issue_json" ]; then
update_active "$worker_id" "" "" "idle"
sleep 10
continue
fi
issue_num=$(echo "$issue_json" | python3 -c "import sys,json; print(json.load(sys.stdin)['number'])")
issue_title=$(echo "$issue_json" | python3 -c "import sys,json; print(json.load(sys.stdin)['title'])")
repo_owner=$(echo "$issue_json" | python3 -c "import sys,json; print(json.load(sys.stdin)['repo_owner'])")
repo_name=$(echo "$issue_json" | python3 -c "import sys,json; print(json.load(sys.stdin)['repo_name'])")
issue_key="${repo_owner}-${repo_name}-${issue_num}"
branch="claude/issue-${issue_num}"
# Use UUID for worktree dir to prevent collisions under high concurrency
wt_uuid=$(/usr/bin/uuidgen 2>/dev/null || python3 -c "import uuid; print(uuid.uuid4())")
worktree="${WORKTREE_BASE}/claude-${issue_num}-${wt_uuid}"
# Try to lock
if ! lock_issue "$issue_key"; then
sleep 5
continue
fi
log "WORKER-${worker_id}: === ISSUE #${issue_num}: ${issue_title} (${repo_owner}/${repo_name}) ==="
update_active "$worker_id" "$issue_num" "${repo_owner}/${repo_name}" "working"
# Clone and pick up prior work if it exists
rm -rf "$worktree" 2>/dev/null
CLONE_URL="http://claude:${GITEA_TOKEN}@143.198.27.163:3000/${repo_owner}/${repo_name}.git"
# Check if branch already exists on remote (prior work to continue)
if git ls-remote --heads "$CLONE_URL" "$branch" 2>/dev/null | grep -q "$branch"; then
log "WORKER-${worker_id}: Found existing branch $branch — continuing prior work"
if ! git clone --depth=50 -b "$branch" "$CLONE_URL" "$worktree" >/dev/null 2>&1; then
log "WORKER-${worker_id}: ERROR cloning branch $branch for #${issue_num}"
unlock_issue "$issue_key"
consecutive_failures=$((consecutive_failures + 1))
sleep "$COOLDOWN"
continue
fi
# Rebase on main to resolve stale conflicts from closed PRs
cd "$worktree"
git fetch origin main >/dev/null 2>&1
if ! git rebase origin/main >/dev/null 2>&1; then
# Rebase failed — start fresh from main
log "WORKER-${worker_id}: Rebase failed for $branch, starting fresh"
cd "$HOME"
rm -rf "$worktree"
git clone --depth=1 -b main "$CLONE_URL" "$worktree" >/dev/null 2>&1
cd "$worktree"
git checkout -b "$branch" >/dev/null 2>&1
fi
else
if ! git clone --depth=1 -b main "$CLONE_URL" "$worktree" >/dev/null 2>&1; then
log "WORKER-${worker_id}: ERROR cloning for #${issue_num}"
unlock_issue "$issue_key"
consecutive_failures=$((consecutive_failures + 1))
sleep "$COOLDOWN"
continue
fi
cd "$worktree"
git checkout -b "$branch" >/dev/null 2>&1
fi
cd "$worktree"
# Build prompt and run
prompt=$(build_prompt "$issue_num" "$issue_title" "$worktree" "$repo_owner" "$repo_name")
log "WORKER-${worker_id}: Launching Claude Code for #${issue_num}..."
CYCLE_START=$(date +%s)
set +e
cd "$worktree"
env -u CLAUDECODE gtimeout "$CLAUDE_TIMEOUT" claude \
--print \
--model sonnet \
--dangerously-skip-permissions \
-p "$prompt" \
</dev/null >> "$LOG_DIR/claude-${issue_num}.log" 2>&1
exit_code=$?
set -e
CYCLE_END=$(date +%s)
CYCLE_DURATION=$(( CYCLE_END - CYCLE_START ))
# ── SALVAGE: Never waste work. Commit+push whatever exists. ──
cd "$worktree" 2>/dev/null || true
DIRTY=$(git status --porcelain 2>/dev/null | wc -l | tr -d ' ')
if [ "${DIRTY:-0}" -gt 0 ]; then
log "WORKER-${worker_id}: SALVAGING $DIRTY dirty files for #${issue_num}"
git add -A 2>/dev/null
git commit -m "WIP: Claude Code progress on #${issue_num}
Automated salvage commit — agent session ended (exit $exit_code).
Work in progress, may need continuation." 2>/dev/null || true
fi
# Push if we have any commits (including salvaged ones)
UNPUSHED=$(git log --oneline "origin/main..HEAD" 2>/dev/null | wc -l | tr -d ' ')
if [ "${UNPUSHED:-0}" -gt 0 ]; then
git push -u origin "$branch" 2>/dev/null && \
log "WORKER-${worker_id}: Pushed $UNPUSHED commit(s) on $branch" || \
log "WORKER-${worker_id}: Push failed for $branch"
fi
# ── Create PR if branch was pushed and no PR exists yet ──
pr_num=$(curl -sf "${GITEA_URL}/api/v1/repos/${repo_owner}/${repo_name}/pulls?state=open&head=${repo_owner}:${branch}&limit=1" \
-H "Authorization: token ${GITEA_TOKEN}" | python3 -c "
import sys,json
prs = json.load(sys.stdin)
if prs: print(prs[0]['number'])
else: print('')
" 2>/dev/null)
if [ -z "$pr_num" ] && [ "${UNPUSHED:-0}" -gt 0 ]; then
pr_num=$(curl -sf -X POST "${GITEA_URL}/api/v1/repos/${repo_owner}/${repo_name}/pulls" \
-H "Authorization: token ${GITEA_TOKEN}" \
-H "Content-Type: application/json" \
-d "$(python3 -c "
import json
print(json.dumps({
'title': 'Claude: Issue #${issue_num}',
'head': '${branch}',
'base': 'main',
'body': 'Automated PR for issue #${issue_num}.\nExit code: ${exit_code}'
}))
")" | python3 -c "import sys,json; print(json.load(sys.stdin).get('number',''))" 2>/dev/null)
[ -n "$pr_num" ] && log "WORKER-${worker_id}: Created PR #${pr_num} for issue #${issue_num}"
fi
# ── Merge + close on success ──
if [ "$exit_code" -eq 0 ]; then
log "WORKER-${worker_id}: SUCCESS #${issue_num}"
if [ -n "$pr_num" ]; then
curl -sf -X POST "${GITEA_URL}/api/v1/repos/${repo_owner}/${repo_name}/pulls/${pr_num}/merge" \
-H "Authorization: token ${GITEA_TOKEN}" \
-H "Content-Type: application/json" \
-d '{"Do": "squash"}' >/dev/null 2>&1 || true
curl -sf -X PATCH "${GITEA_URL}/api/v1/repos/${repo_owner}/${repo_name}/issues/${issue_num}" \
-H "Authorization: token ${GITEA_TOKEN}" \
-H "Content-Type: application/json" \
-d '{"state": "closed"}' >/dev/null 2>&1 || true
log "WORKER-${worker_id}: PR #${pr_num} merged, issue #${issue_num} closed"
fi
consecutive_failures=0
elif [ "$exit_code" -eq 124 ]; then
log "WORKER-${worker_id}: TIMEOUT #${issue_num} (work saved in PR)"
consecutive_failures=$((consecutive_failures + 1))
else
# Check for rate limit
if grep -q "rate_limit\|rate limit\|429\|overloaded" "$LOG_DIR/claude-${issue_num}.log" 2>/dev/null; then
log "WORKER-${worker_id}: RATE LIMITED on #${issue_num} — backing off (work saved)"
consecutive_failures=$((consecutive_failures + 3))
else
log "WORKER-${worker_id}: FAILED #${issue_num} exit ${exit_code} (work saved in PR)"
consecutive_failures=$((consecutive_failures + 1))
fi
fi
# ── METRICS: structured JSONL for reporting ──
LINES_ADDED=$(cd "$worktree" 2>/dev/null && git diff --stat origin/main..HEAD 2>/dev/null | tail -1 | grep -oE '[0-9]+ insertion' | grep -oE '[0-9]+' || echo 0)
LINES_REMOVED=$(cd "$worktree" 2>/dev/null && git diff --stat origin/main..HEAD 2>/dev/null | tail -1 | grep -oE '[0-9]+ deletion' | grep -oE '[0-9]+' || echo 0)
FILES_CHANGED=$(cd "$worktree" 2>/dev/null && git diff --name-only origin/main..HEAD 2>/dev/null | wc -l | tr -d ' ' || echo 0)
# Determine outcome
if [ "$exit_code" -eq 0 ]; then
OUTCOME="success"
elif [ "$exit_code" -eq 124 ]; then
OUTCOME="timeout"
elif grep -q "rate_limit\|rate limit\|429" "$LOG_DIR/claude-${issue_num}.log" 2>/dev/null; then
OUTCOME="rate_limited"
else
OUTCOME="failed"
fi
METRICS_FILE="$LOG_DIR/claude-metrics.jsonl"
python3 -c "
import json, datetime
print(json.dumps({
'ts': datetime.datetime.utcnow().isoformat() + 'Z',
'worker': $worker_id,
'issue': $issue_num,
'repo': '${repo_owner}/${repo_name}',
'title': '''${issue_title}'''[:80],
'outcome': '$OUTCOME',
'exit_code': $exit_code,
'duration_s': $CYCLE_DURATION,
'files_changed': ${FILES_CHANGED:-0},
'lines_added': ${LINES_ADDED:-0},
'lines_removed': ${LINES_REMOVED:-0},
'salvaged': ${DIRTY:-0},
'pr': '${pr_num:-}',
'merged': $( [ "$OUTCOME" = "success" ] && [ -n "${pr_num:-}" ] && echo 'true' || echo 'false' )
}))
" >> "$METRICS_FILE" 2>/dev/null
# Cleanup
cleanup_workdir "$worktree"
unlock_issue "$issue_key"
update_active "$worker_id" "" "" "done"
sleep "$COOLDOWN"
done
}
# === MAIN ===
log "=== Claude Loop Started — ${NUM_WORKERS} workers (max ${MAX_WORKERS}) ==="
log "Worktrees: ${WORKTREE_BASE}"
# Clean stale locks
rm -rf "$LOCK_DIR"/*.lock 2>/dev/null
# PID tracking via files (bash 3.2 compatible)
PID_DIR="$LOG_DIR/claude-pids"
mkdir -p "$PID_DIR"
rm -f "$PID_DIR"/*.pid 2>/dev/null
launch_worker() {
local wid="$1"
run_worker "$wid" &
echo $! > "$PID_DIR/${wid}.pid"
log "Launched worker $wid (PID $!)"
}
# Initial launch
for i in $(seq 1 "$NUM_WORKERS"); do
launch_worker "$i"
sleep 3
done
# === DYNAMIC SCALER ===
# Every 90 seconds: check health, scale up if no rate limits, scale down if hitting limits
CURRENT_WORKERS="$NUM_WORKERS"
while true; do
sleep 90
# Reap dead workers and relaunch
for pidfile in "$PID_DIR"/*.pid; do
[ -f "$pidfile" ] || continue
wid=$(basename "$pidfile" .pid)
wpid=$(cat "$pidfile")
if ! kill -0 "$wpid" 2>/dev/null; then
log "SCALER: Worker $wid died — relaunching"
launch_worker "$wid"
sleep 2
fi
done
recent_rate_limits=$(tail -100 "$LOG_DIR/claude-loop.log" 2>/dev/null | grep -c "RATE LIMITED" || true)
recent_successes=$(tail -100 "$LOG_DIR/claude-loop.log" 2>/dev/null | grep -c "SUCCESS" || true)
if [ "$recent_rate_limits" -gt 0 ]; then
if [ "$CURRENT_WORKERS" -gt 2 ]; then
drop_to=$(( CURRENT_WORKERS / 2 ))
[ "$drop_to" -lt 2 ] && drop_to=2
log "SCALER: Rate limited — scaling ${CURRENT_WORKERS} → ${drop_to} workers"
for wid in $(seq $((drop_to + 1)) "$CURRENT_WORKERS"); do
if [ -f "$PID_DIR/${wid}.pid" ]; then
kill "$(cat "$PID_DIR/${wid}.pid")" 2>/dev/null || true
rm -f "$PID_DIR/${wid}.pid"
update_active "$wid" "" "" "done"
fi
done
CURRENT_WORKERS=$drop_to
fi
elif [ "$recent_successes" -ge 2 ] && [ "$CURRENT_WORKERS" -lt "$MAX_WORKERS" ]; then
new_count=$(( CURRENT_WORKERS + 2 ))
[ "$new_count" -gt "$MAX_WORKERS" ] && new_count=$MAX_WORKERS
log "SCALER: Healthy — scaling ${CURRENT_WORKERS} → ${new_count} workers"
for wid in $(seq $((CURRENT_WORKERS + 1)) "$new_count"); do
launch_worker "$wid"
sleep 2
done
CURRENT_WORKERS=$new_count
fi
done
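The loop serializes issue claims through `lock_issue`/`unlock_issue`, which exploit the atomicity of `mkdir`: exactly one process can create a given directory, so no two workers ever claim the same issue. A standalone sketch of the pattern (the lock root here is a temp directory for illustration, not the loop's real `$LOCK_DIR`):

```shell
#!/usr/bin/env bash
# mkdir is atomic: it either creates the directory (lock acquired) or
# fails because it already exists (another worker holds the lock).
LOCK_ROOT="$(mktemp -d)"

lock()   { mkdir "$LOCK_ROOT/$1.lock" 2>/dev/null; }
unlock() { rm -rf "$LOCK_ROOT/$1.lock"; }

lock "issue-42" && echo "worker A acquired"    # succeeds
lock "issue-42" || echo "worker B blocked"     # same key: fails
unlock "issue-42"
lock "issue-42" && echo "worker B acquired"    # free again: succeeds
unlock "issue-42"
```

Unlike flock-based schemes, this works on any POSIX filesystem with no extra tooling, at the cost of needing explicit stale-lock cleanup (which the loop does at startup by clearing `$LOCK_DIR`).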
bin/claudemax-watchdog.sh Executable file
@@ -0,0 +1,76 @@
#!/usr/bin/env bash
# ── Claudemax Watchdog ─────────────────────────────────────────────────
# Ensures claude-loop.sh stays alive in the timmy-loop tmux session.
# Run via cron every 5 minutes. Zero LLM cost — pure bash.
#
# Also replenishes the backlog when issues run low by filing
# template issues from a seed list.
# ───────────────────────────────────────────────────────────────────────
set -uo pipefail
export PATH="/opt/homebrew/bin:$HOME/.local/bin:$HOME/.hermes/bin:/usr/local/bin:$PATH"
SESSION="timmy-loop"
LOOP_PANE="1.1"
LOG="$HOME/.hermes/logs/claudemax-watchdog.log"
GITEA_URL="http://143.198.27.163:3000"
GITEA_TOKEN=$(cat "$HOME/.hermes/gitea_token_vps" 2>/dev/null)
REPO_API="$GITEA_URL/api/v1/repos/rockachopa/Timmy-time-dashboard"
MIN_OPEN_ISSUES=10
log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] CLAUDEMAX: $*" >> "$LOG"; }
# ── 1. Is the tmux session alive? ──────────────────────────────────────
if ! tmux has-session -t "$SESSION" 2>/dev/null; then
log "Session $SESSION not found. Starting dashboard..."
bash "$HOME/.hermes/bin/start-dashboard.sh"
sleep 3
fi
# ── 2. Is claude-loop running in the loop pane? ───────────────────────
PANE_CMD=$(tmux list-panes -t "$SESSION:${LOOP_PANE%%.*}" -F '#{pane_index}:#{pane_current_command}' 2>/dev/null \
| grep "^${LOOP_PANE##*.}:" | cut -d: -f2)
CLAUDE_RUNNING=$(pgrep -f "claude-loop.sh" 2>/dev/null | head -1)
if [ -z "$CLAUDE_RUNNING" ]; then
log "claude-loop not running. Restarting in pane $LOOP_PANE..."
# Clear any dead shell
tmux send-keys -t "$SESSION:$LOOP_PANE" C-c 2>/dev/null
sleep 1
tmux send-keys -t "$SESSION:$LOOP_PANE" "bash ~/.hermes/bin/claude-loop.sh 2" Enter
log "Restarted claude-loop.sh with 2 workers"
else
log "claude-loop alive (PID $CLAUDE_RUNNING)"
fi
# ── 3. Backlog depth check ─────────────────────────────────────────────
# limit must be >= MIN_OPEN_ISSUES or the count can never reach the threshold
OPEN_COUNT=$(curl -s --max-time 10 -H "Authorization: token $GITEA_TOKEN" \
  "$REPO_API/issues?state=open&type=issues&limit=50" 2>/dev/null \
  | python3 -c "import sys,json; print(len(json.loads(sys.stdin.read())))" 2>/dev/null || echo 0)
log "Open issues: $OPEN_COUNT (minimum: $MIN_OPEN_ISSUES)"
if [ "$OPEN_COUNT" -lt "$MIN_OPEN_ISSUES" ]; then
log "Backlog running low! Filing replenishment issues..."
# Source the backlog generator
bash "$HOME/.hermes/bin/claudemax-replenish.sh" 2>&1 | while read -r line; do log "$line"; done
fi
# ── 4. Is gemini-loop running? ────────────────────────────────────────
GEMINI_RUNNING=$(pgrep -f "gemini-loop.sh" 2>/dev/null | head -1)
if [ -z "$GEMINI_RUNNING" ]; then
  log "gemini-loop not running. Restarting..."
  tmux send-keys -t "ops:1.2" C-c 2>/dev/null
  sleep 1
  tmux send-keys -t "ops:1.2" "bash ~/.hermes/bin/gemini-loop.sh 1" Enter
  log "Restarted gemini-loop.sh with 1 worker"
else
  log "gemini-loop alive (PID $GEMINI_RUNNING)"
fi
# ── 5. Auto-deploy Matrix if new commits ──────────────────────────────
bash "$HOME/.hermes/bin/autodeploy-matrix.sh" 2>&1 | while read -r line; do log "$line"; done
log "Watchdog complete."
bin/gemini-loop.sh Executable file
@@ -0,0 +1,507 @@
#!/usr/bin/env bash
# gemini-loop.sh — Parallel Gemini Code agent dispatch loop
# Runs N workers concurrently against the Gitea backlog.
# Dynamic scaling: starts at N, scales up to MAX, drops on rate limits.
#
# Usage: gemini-loop.sh [NUM_WORKERS] (default: 2)
set -euo pipefail
# Read the API key from disk; never commit credentials to source control.
export GEMINI_API_KEY="$(cat "$HOME/.hermes/gemini_api_key")"
# === CONFIG ===
NUM_WORKERS="${1:-2}"
MAX_WORKERS=5
WORKTREE_BASE="$HOME/worktrees"
GITEA_URL="http://143.198.27.163:3000"
GITEA_TOKEN=$(cat "$HOME/.hermes/gemini_token")
GEMINI_TIMEOUT=600 # 10 min per issue
COOLDOWN=15 # seconds between issues — stagger clones
RATE_LIMIT_SLEEP=30
MAX_RATE_SLEEP=120
LOG_DIR="$HOME/.hermes/logs"
SKIP_FILE="$LOG_DIR/gemini-skip-list.json"
LOCK_DIR="$LOG_DIR/gemini-locks"
ACTIVE_FILE="$LOG_DIR/gemini-active.json"
mkdir -p "$LOG_DIR" "$WORKTREE_BASE" "$LOCK_DIR"
[ -f "$SKIP_FILE" ] || echo '{}' > "$SKIP_FILE"
echo '{}' > "$ACTIVE_FILE"
# === SHARED FUNCTIONS ===
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" >> "$LOG_DIR/gemini-loop.log"
}
lock_issue() {
local issue_key="$1"
local lockfile="$LOCK_DIR/$issue_key.lock"
if mkdir "$lockfile" 2>/dev/null; then
echo $$ > "$lockfile/pid"
return 0
fi
return 1
}
unlock_issue() {
rm -rf "$LOCK_DIR/$1.lock" 2>/dev/null
}
mark_skip() {
local issue_num="$1" reason="$2" skip_hours="${3:-1}"
python3 -c "
import json, time, fcntl
with open('$SKIP_FILE', 'r+') as f:
fcntl.flock(f, fcntl.LOCK_EX)
try: skips = json.load(f)
except: skips = {}
skips[str($issue_num)] = {
'until': time.time() + ($skip_hours * 3600),
'reason': '$reason',
'failures': skips.get(str($issue_num), {}).get('failures', 0) + 1
}
if skips[str($issue_num)]['failures'] >= 3:
skips[str($issue_num)]['until'] = time.time() + (6 * 3600)
f.seek(0)
f.truncate()
json.dump(skips, f, indent=2)
" 2>/dev/null
log "SKIP: #${issue_num} — ${reason}"
}
update_active() {
local worker="$1" issue="$2" repo="$3" status="$4"
python3 -c "
import json, fcntl
with open('$ACTIVE_FILE', 'r+') as f:
fcntl.flock(f, fcntl.LOCK_EX)
try: active = json.load(f)
except: active = {}
if '$status' == 'done':
active.pop('$worker', None)
else:
active['$worker'] = {'issue': '$issue', 'repo': '$repo', 'status': '$status'}
f.seek(0)
f.truncate()
json.dump(active, f, indent=2)
" 2>/dev/null
}
cleanup_workdir() {
local wt="$1"
rm -rf "$wt" 2>/dev/null || true
}
get_next_issue() {
python3 -c "
import json, sys, time, urllib.request, os
token = '${GITEA_TOKEN}'
base = '${GITEA_URL}'
repos = [
'Timmy_Foundation/the-nexus',
'Timmy_Foundation/autolora',
]
try:
with open('${SKIP_FILE}') as f: skips = json.load(f)
except: skips = {}
try:
with open('${ACTIVE_FILE}') as f:
active = json.load(f)
active_issues = {v['issue'] for v in active.values()}
except:
active_issues = set()
all_issues = []
for repo in repos:
url = f'{base}/api/v1/repos/{repo}/issues?state=open&type=issues&limit=50&sort=created'
req = urllib.request.Request(url, headers={'Authorization': f'token {token}'})
try:
resp = urllib.request.urlopen(req, timeout=10)
issues = json.loads(resp.read())
for i in issues:
i['_repo'] = repo
all_issues.extend(issues)
except:
continue
def priority(i):
t = i['title'].lower()
if '[urgent]' in t or 'urgent:' in t: return 0
if '[p0]' in t: return 1
if '[p1]' in t: return 2
if '[bug]' in t: return 3
if 'lhf:' in t or 'lhf ' in t: return 4
if '[p2]' in t: return 5
return 6
all_issues.sort(key=priority)
for i in all_issues:
assignees = [a['login'] for a in (i.get('assignees') or [])]
# Take issues assigned to gemini OR unassigned (self-assign)
if assignees and 'gemini' not in assignees:
continue
title = i['title'].lower()
if '[philosophy]' in title: continue
if '[epic]' in title or 'epic:' in title: continue
if '[showcase]' in title: continue
num_str = str(i['number'])
if num_str in active_issues: continue
entry = skips.get(num_str, {})
if entry and entry.get('until', 0) > time.time(): continue
lock = '${LOCK_DIR}/' + i['_repo'].replace('/', '-') + '-' + num_str + '.lock'
if os.path.isdir(lock): continue
repo = i['_repo']
owner, name = repo.split('/')
# Self-assign if unassigned
if not assignees:
try:
data = json.dumps({'assignees': ['gemini']}).encode()
req2 = urllib.request.Request(
f'{base}/api/v1/repos/{repo}/issues/{i["number"]}',
data=data, method='PATCH',
headers={'Authorization': f'token {token}', 'Content-Type': 'application/json'})
urllib.request.urlopen(req2, timeout=5)
except: pass
print(json.dumps({
'number': i['number'],
'title': i['title'],
'repo_owner': owner,
'repo_name': name,
'repo': repo,
}))
sys.exit(0)
print('null')
" 2>/dev/null
}
build_prompt() {
local issue_num="$1" issue_title="$2" worktree="$3" repo_owner="$4" repo_name="$5"
cat <<PROMPT
You are Gemini, an autonomous code agent on the ${repo_name} project.
YOUR ISSUE: #${issue_num} — "${issue_title}"
GITEA API: ${GITEA_URL}/api/v1
GITEA TOKEN: ${GITEA_TOKEN}
REPO: ${repo_owner}/${repo_name}
WORKING DIRECTORY: ${worktree}
== YOUR POWERS ==
You can do ANYTHING a developer can do.
1. READ the issue and any comments for context:
curl -s -H "Authorization: token ${GITEA_TOKEN}" "${GITEA_URL}/api/v1/repos/${repo_owner}/${repo_name}/issues/${issue_num}"
curl -s -H "Authorization: token ${GITEA_TOKEN}" "${GITEA_URL}/api/v1/repos/${repo_owner}/${repo_name}/issues/${issue_num}/comments"
2. DO THE WORK. Code, test, fix, refactor — whatever the issue needs.
- Check for tox.ini / Makefile / package.json for test/lint commands
- Run tests if the project has them
- Follow existing code conventions
3. COMMIT with conventional commits: fix: / feat: / refactor: / test: / chore:
Include "Fixes #${issue_num}" or "Refs #${issue_num}" in the message.
4. PUSH to your branch (gemini/issue-${issue_num}) and CREATE A PR:
git push origin gemini/issue-${issue_num}
curl -s -X POST "${GITEA_URL}/api/v1/repos/${repo_owner}/${repo_name}/pulls" \\
-H "Authorization: token ${GITEA_TOKEN}" \\
-H "Content-Type: application/json" \\
-d '{"title": "[gemini] <description> (#${issue_num})", "body": "Fixes #${issue_num}\n\n<describe what you did>", "head": "gemini/issue-${issue_num}", "base": "main"}'
5. COMMENT on the issue when done:
curl -s -X POST "${GITEA_URL}/api/v1/repos/${repo_owner}/${repo_name}/issues/${issue_num}/comments" \\
-H "Authorization: token ${GITEA_TOKEN}" \\
-H "Content-Type: application/json" \\
-d '{"body": "PR created. <summary of changes>"}'
== RULES ==
- Read CLAUDE.md or project README first for conventions
- If the project has tox, use tox. If npm, use npm. Follow the project.
- Never use --no-verify on git commands.
- If tests fail after 2 attempts, STOP and comment on the issue explaining why.
- Be thorough but focused. Fix the issue, don't refactor the world.
== CRITICAL: ALWAYS COMMIT AND PUSH ==
- NEVER exit without committing your work. Even partial progress MUST be committed.
- Before you finish, ALWAYS: git add -A && git commit && git push origin gemini/issue-${issue_num}
- ALWAYS create a PR before exiting. No exceptions.
- If a branch already exists with prior work, check it out and CONTINUE from where it left off.
- Check: git ls-remote origin gemini/issue-${issue_num} — if it exists, pull it first.
- Your work is WASTED if it's not pushed. Push early, push often.
PROMPT
}
# === WORKER FUNCTION ===
run_worker() {
local worker_id="$1"
local consecutive_failures=0
log "WORKER-${worker_id}: Started"
while true; do
if [ "$consecutive_failures" -ge 5 ]; then
local backoff=$((RATE_LIMIT_SLEEP * (consecutive_failures / 5)))
[ "$backoff" -gt "$MAX_RATE_SLEEP" ] && backoff=$MAX_RATE_SLEEP
log "WORKER-${worker_id}: BACKOFF ${backoff}s (${consecutive_failures} failures)"
sleep "$backoff"
consecutive_failures=0
fi
issue_json=$(get_next_issue)
if [ "$issue_json" = "null" ] || [ -z "$issue_json" ]; then
update_active "$worker_id" "" "" "idle"
sleep 10
continue
fi
issue_num=$(echo "$issue_json" | python3 -c "import sys,json; print(json.load(sys.stdin)['number'])")
issue_title=$(echo "$issue_json" | python3 -c "import sys,json; print(json.load(sys.stdin)['title'])")
repo_owner=$(echo "$issue_json" | python3 -c "import sys,json; print(json.load(sys.stdin)['repo_owner'])")
repo_name=$(echo "$issue_json" | python3 -c "import sys,json; print(json.load(sys.stdin)['repo_name'])")
issue_key="${repo_owner}-${repo_name}-${issue_num}"
branch="gemini/issue-${issue_num}"
worktree="${WORKTREE_BASE}/gemini-w${worker_id}-${issue_num}"
if ! lock_issue "$issue_key"; then
sleep 5
continue
fi
log "WORKER-${worker_id}: === ISSUE #${issue_num}: ${issue_title} (${repo_owner}/${repo_name}) ==="
update_active "$worker_id" "$issue_num" "${repo_owner}/${repo_name}" "working"
# Clone and pick up prior work if it exists
rm -rf "$worktree" 2>/dev/null
CLONE_URL="http://gemini:${GITEA_TOKEN}@143.198.27.163:3000/${repo_owner}/${repo_name}.git"
if git ls-remote --heads "$CLONE_URL" "$branch" 2>/dev/null | grep -q "$branch"; then
log "WORKER-${worker_id}: Found existing branch $branch — continuing prior work"
if ! git clone --depth=50 -b "$branch" "$CLONE_URL" "$worktree" >/dev/null 2>&1; then
log "WORKER-${worker_id}: ERROR cloning branch $branch for #${issue_num}"
unlock_issue "$issue_key"
consecutive_failures=$((consecutive_failures + 1))
sleep "$COOLDOWN"
continue
fi
cd "$worktree"
# Shallow clone is single-branch; create origin/main explicitly so the
# salvage/metrics diffs against origin/main..HEAD work on continued branches.
git fetch origin main:refs/remotes/origin/main >/dev/null 2>&1
else
if ! git clone --depth=1 -b main "$CLONE_URL" "$worktree" >/dev/null 2>&1; then
log "WORKER-${worker_id}: ERROR cloning for #${issue_num}"
unlock_issue "$issue_key"
consecutive_failures=$((consecutive_failures + 1))
sleep "$COOLDOWN"
continue
fi
cd "$worktree"
git checkout -b "$branch" >/dev/null 2>&1
fi
cd "$worktree"
prompt=$(build_prompt "$issue_num" "$issue_title" "$worktree" "$repo_owner" "$repo_name")
log "WORKER-${worker_id}: Launching Gemini Code for #${issue_num}..."
CYCLE_START=$(date +%s)
set +e
cd "$worktree"
gtimeout "$GEMINI_TIMEOUT" gemini \
-p "$prompt" \
--yolo \
</dev/null >> "$LOG_DIR/gemini-${issue_num}.log" 2>&1
exit_code=$?
set -e
CYCLE_END=$(date +%s)
CYCLE_DURATION=$(( CYCLE_END - CYCLE_START ))
# ── SALVAGE: Never waste work. Commit+push whatever exists. ──
cd "$worktree" 2>/dev/null || true
DIRTY=$(git status --porcelain 2>/dev/null | wc -l | tr -d ' ')
if [ "${DIRTY:-0}" -gt 0 ]; then
log "WORKER-${worker_id}: SALVAGING $DIRTY dirty files for #${issue_num}"
git add -A 2>/dev/null
git commit -m "WIP: Gemini Code progress on #${issue_num}
Automated salvage commit — agent session ended (exit $exit_code).
Work in progress, may need continuation." 2>/dev/null || true
fi
UNPUSHED=$(git log --oneline "origin/main..HEAD" 2>/dev/null | wc -l | tr -d ' ')
if [ "${UNPUSHED:-0}" -gt 0 ]; then
git push -u origin "$branch" 2>/dev/null && \
log "WORKER-${worker_id}: Pushed $UNPUSHED commit(s) on $branch" || \
log "WORKER-${worker_id}: Push failed for $branch"
fi
# ── Create PR if needed ──
pr_num=$(curl -sf "${GITEA_URL}/api/v1/repos/${repo_owner}/${repo_name}/pulls?state=open&head=${repo_owner}:${branch}&limit=1" \
-H "Authorization: token ${GITEA_TOKEN}" | python3 -c "
import sys,json
prs = json.load(sys.stdin)
if prs: print(prs[0]['number'])
else: print('')
" 2>/dev/null)
if [ -z "$pr_num" ] && [ "${UNPUSHED:-0}" -gt 0 ]; then
pr_num=$(curl -sf -X POST "${GITEA_URL}/api/v1/repos/${repo_owner}/${repo_name}/pulls" \
-H "Authorization: token ${GITEA_TOKEN}" \
-H "Content-Type: application/json" \
-d "$(python3 -c "
import json
print(json.dumps({
'title': 'Gemini: Issue #${issue_num}',
'head': '${branch}',
'base': 'main',
'body': 'Automated PR for issue #${issue_num}.\nExit code: ${exit_code}'
}))
")" | python3 -c "import sys,json; print(json.load(sys.stdin).get('number',''))" 2>/dev/null)
[ -n "$pr_num" ] && log "WORKER-${worker_id}: Created PR #${pr_num} for issue #${issue_num}"
fi
# ── Merge + close on success ──
if [ "$exit_code" -eq 0 ]; then
log "WORKER-${worker_id}: SUCCESS #${issue_num}"
if [ -n "$pr_num" ]; then
curl -sf -X POST "${GITEA_URL}/api/v1/repos/${repo_owner}/${repo_name}/pulls/${pr_num}/merge" \
-H "Authorization: token ${GITEA_TOKEN}" \
-H "Content-Type: application/json" \
-d '{"Do": "squash"}' >/dev/null 2>&1 || true
curl -sf -X PATCH "${GITEA_URL}/api/v1/repos/${repo_owner}/${repo_name}/issues/${issue_num}" \
-H "Authorization: token ${GITEA_TOKEN}" \
-H "Content-Type: application/json" \
-d '{"state": "closed"}' >/dev/null 2>&1 || true
log "WORKER-${worker_id}: PR #${pr_num} merged, issue #${issue_num} closed"
fi
consecutive_failures=0
elif [ "$exit_code" -eq 124 ]; then
log "WORKER-${worker_id}: TIMEOUT #${issue_num} (work saved in PR)"
consecutive_failures=$((consecutive_failures + 1))
else
if grep -q "rate_limit\|rate limit\|429\|overloaded\|quota" "$LOG_DIR/gemini-${issue_num}.log" 2>/dev/null; then
log "WORKER-${worker_id}: RATE LIMITED on #${issue_num} (work saved)"
consecutive_failures=$((consecutive_failures + 3))
else
log "WORKER-${worker_id}: FAILED #${issue_num} exit ${exit_code} (work saved in PR)"
consecutive_failures=$((consecutive_failures + 1))
fi
fi
# ── METRICS ──
LINES_ADDED=$(cd "$worktree" 2>/dev/null && git diff --stat origin/main..HEAD 2>/dev/null | tail -1 | grep -oE '[0-9]+ insertion' | grep -oE '[0-9]+' || echo 0)
LINES_REMOVED=$(cd "$worktree" 2>/dev/null && git diff --stat origin/main..HEAD 2>/dev/null | tail -1 | grep -oE '[0-9]+ deletion' | grep -oE '[0-9]+' || echo 0)
FILES_CHANGED=$(cd "$worktree" 2>/dev/null && git diff --name-only origin/main..HEAD 2>/dev/null | wc -l | tr -d ' ' || echo 0)
if [ "$exit_code" -eq 0 ]; then OUTCOME="success"
elif [ "$exit_code" -eq 124 ]; then OUTCOME="timeout"
elif grep -q "rate_limit\|429" "$LOG_DIR/gemini-${issue_num}.log" 2>/dev/null; then OUTCOME="rate_limited"
else OUTCOME="failed"; fi
python3 -c "
import json, datetime
print(json.dumps({
'ts': datetime.datetime.utcnow().isoformat() + 'Z',
'agent': 'gemini',
'worker': $worker_id,
'issue': $issue_num,
'repo': '${repo_owner}/${repo_name}',
'outcome': '$OUTCOME',
'exit_code': $exit_code,
'duration_s': $CYCLE_DURATION,
'files_changed': ${FILES_CHANGED:-0},
'lines_added': ${LINES_ADDED:-0},
'lines_removed': ${LINES_REMOVED:-0},
'salvaged': ${DIRTY:-0},
'pr': '${pr_num:-}',
'merged': $( [ "$OUTCOME" = "success" ] && [ -n "${pr_num:-}" ] && echo 'true' || echo 'false' )
}))
" >> "$LOG_DIR/claude-metrics.jsonl" 2>/dev/null  # shared with claude-loop; the 'agent' field distinguishes
cleanup_workdir "$worktree"
unlock_issue "$issue_key"
update_active "$worker_id" "" "" "done"
sleep "$COOLDOWN"
done
}
# === MAIN ===
log "=== Gemini Loop Started — ${NUM_WORKERS} workers (max ${MAX_WORKERS}) ==="
log "Worktrees: ${WORKTREE_BASE}"
rm -rf "$LOCK_DIR"/*.lock 2>/dev/null
# PID tracking via files (bash 3.2 compatible)
PID_DIR="$LOG_DIR/gemini-pids"
mkdir -p "$PID_DIR"
rm -f "$PID_DIR"/*.pid 2>/dev/null
launch_worker() {
local wid="$1"
run_worker "$wid" &
echo $! > "$PID_DIR/${wid}.pid"
log "Launched worker $wid (PID $!)"
}
for i in $(seq 1 "$NUM_WORKERS"); do
launch_worker "$i"
sleep 3
done
# Dynamic scaler — every 90 seconds
CURRENT_WORKERS="$NUM_WORKERS"
while true; do
sleep 90
# Reap dead workers
for pidfile in "$PID_DIR"/*.pid; do
[ -f "$pidfile" ] || continue
wid=$(basename "$pidfile" .pid)
wpid=$(cat "$pidfile")
if ! kill -0 "$wpid" 2>/dev/null; then
log "SCALER: Worker $wid died — relaunching"
launch_worker "$wid"
sleep 2
fi
done
recent_rate_limits=$(tail -100 "$LOG_DIR/gemini-loop.log" 2>/dev/null | grep -c "RATE LIMITED" || true)
recent_successes=$(tail -100 "$LOG_DIR/gemini-loop.log" 2>/dev/null | grep -c "SUCCESS" || true)
if [ "$recent_rate_limits" -gt 0 ]; then
if [ "$CURRENT_WORKERS" -gt 2 ]; then
drop_to=$(( CURRENT_WORKERS / 2 ))
[ "$drop_to" -lt 2 ] && drop_to=2
log "SCALER: Rate limited — scaling ${CURRENT_WORKERS} → ${drop_to}"
for wid in $(seq $((drop_to + 1)) "$CURRENT_WORKERS"); do
if [ -f "$PID_DIR/${wid}.pid" ]; then
kill "$(cat "$PID_DIR/${wid}.pid")" 2>/dev/null || true
rm -f "$PID_DIR/${wid}.pid"
update_active "$wid" "" "" "done"
fi
done
CURRENT_WORKERS=$drop_to
fi
elif [ "$recent_successes" -ge 2 ] && [ "$CURRENT_WORKERS" -lt "$MAX_WORKERS" ]; then
new_count=$(( CURRENT_WORKERS + 2 ))
[ "$new_count" -gt "$MAX_WORKERS" ] && new_count=$MAX_WORKERS
log "SCALER: Healthy — scaling ${CURRENT_WORKERS} → ${new_count}"
for wid in $(seq $((CURRENT_WORKERS + 1)) "$new_count"); do
launch_worker "$wid"
sleep 2
done
CURRENT_WORKERS=$new_count
fi
done
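Both loops append one JSON object per cycle to `claude-metrics.jsonl`, so reporting is a matter of reading the file line by line. A sketch of an outcome tally over such a file (the summarizer itself is illustrative; the field names match what the loops emit above):

```shell
#!/usr/bin/env bash
# Tally agent outcomes from a metrics JSONL file (one JSON object per line).
summarize_metrics() {
  python3 - "$1" <<'PY'
import collections, json, sys
counts = collections.Counter()
with open(sys.argv[1]) as f:
    for line in f:
        if line.strip():
            counts[json.loads(line)["outcome"]] += 1
for outcome, n in sorted(counts.items()):
    print(f"{outcome}: {n}")
PY
}

# Demo on a small inline sample (real files live in ~/.hermes/logs):
sample="$(mktemp)"
cat > "$sample" <<'EOF'
{"issue": 1, "outcome": "success"}
{"issue": 2, "outcome": "timeout"}
{"issue": 3, "outcome": "success"}
EOF
summarize_metrics "$sample"   # prints "success: 2" then "timeout: 1"
rm -f "$sample"
```

JSONL keeps appends atomic enough for concurrent workers (each write is a single short line), and avoids rewriting a growing JSON array on every cycle.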
bin/hermes-startup.sh Executable file
@@ -0,0 +1,94 @@
#!/usr/bin/env bash
# ── Hermes Master Startup ─────────────────────────────────────────────
# Brings up the entire system after a reboot.
# Called by launchd (ai.hermes.startup) or manually.
#
# Boot order:
# 1. Gitea (homebrew launchd — already handles itself)
# 2. Ollama (macOS app — already handles itself via login item)
# 3. Hermes Gateway (launchd — already handles itself)
# 4. Webhook listener (port 7777)
# 5. Timmy-loop tmux session (4-pane dashboard)
# 6. Hermes cron engine (runs inside gateway)
#
# This script ensures 4 and 5 are alive. 1-3 and 6 are handled by
# their own launchd plists / login items.
# ───────────────────────────────────────────────────────────────────────
set -euo pipefail
export PATH="/opt/homebrew/bin:$HOME/.local/bin:$HOME/.hermes/bin:/usr/local/bin:$PATH"
LOG="$HOME/.hermes/logs/startup.log"
mkdir -p "$(dirname "$LOG")"
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG"
}
wait_for_port() {
local port=$1 name=$2 max=$3
local i=0
while ! lsof -ti:"$port" >/dev/null 2>&1; do
sleep 1
i=$((i + 1))
if [ "$i" -ge "$max" ]; then
log "WARN: $name not up on port $port after ${max}s"
return 1
fi
done
log "OK: $name alive on port $port"
return 0
}
# ── Prerequisites ──────────────────────────────────────────────────────
log "=== Hermes Master Startup ==="
# Wait for Gitea (port 3000) — up to 30s
log "Waiting for Gitea..."
wait_for_port 3000 "Gitea" 30
# Wait for Ollama (port 11434) — up to 30s
log "Waiting for Ollama..."
wait_for_port 11434 "Ollama" 30
# ── Webhook Listener (port 7777) ───────────────────────────────────────
if lsof -ti:7777 >/dev/null 2>&1; then
log "OK: Webhook listener already running on port 7777"
else
log "Starting webhook listener..."
tmux has-session -t webhook 2>/dev/null && tmux kill-session -t webhook
tmux new-session -d -s webhook "python3 $HOME/.hermes/bin/gitea-webhook-listener.py"
sleep 2
if lsof -ti:7777 >/dev/null 2>&1; then
log "OK: Webhook listener started on port 7777"
else
log "FAIL: Webhook listener did not start"
fi
fi
# ── Timmy Loop (tmux session) ──────────────────────────────────────────
STOP_FILE="$HOME/Timmy-Time-dashboard/.loop/STOP"
if [ -f "$STOP_FILE" ]; then
log "SKIP: Timmy loop — STOP file present at $STOP_FILE"
elif tmux has-session -t timmy-loop 2>/dev/null; then
# Check if the loop pane is actually alive
PANE0_PID=$(tmux list-panes -t "timmy-loop:0.0" -F '#{pane_pid}' 2>/dev/null || true)
if [ -n "$PANE0_PID" ] && kill -0 "$PANE0_PID" 2>/dev/null; then
log "OK: Timmy loop session alive"
else
log "WARN: Timmy loop session exists but pane dead. Restarting..."
tmux kill-session -t timmy-loop 2>/dev/null
"$HOME/.hermes/bin/timmy-tmux.sh"
log "OK: Timmy loop restarted"
fi
else
log "Starting timmy-loop session..."
"$HOME/.hermes/bin/timmy-tmux.sh"
log "OK: Timmy loop started"
fi
log "=== Startup complete ==="
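`wait_for_port` above is one instance of a generic poll-until-timeout idiom. A sketch of the same helper without the `lsof` dependency (names here are illustrative, not part of the startup script):

```shell
#!/usr/bin/env bash
# Poll a command once per second until it succeeds or MAX attempts elapse.
# Returns 0 on success, 1 on timeout.
wait_until() {
  local max="$1"; shift
  local i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$max" ] && return 1
    sleep 1
  done
  return 0
}

# Demo: wait for a flag file that a background job creates shortly.
flag="$(mktemp -u)"
( sleep 1; touch "$flag" ) &
wait_until 5 test -e "$flag" && echo "came up"
rm -f "$flag"
```

Any predicate works as the command: `test -e`, a `curl -sf` health check, or a `pgrep -f` liveness probe, which is why the startup script can reuse one loop shape for Gitea, Ollama, and the webhook listener.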
bin/hotspot-keepalive.sh Executable file
@@ -0,0 +1,36 @@
#!/usr/bin/env bash
# hotspot-keepalive.sh — Auto-reconnect to Alfred hotspot
# Checks every 30s, reconnects if dropped.
SSID="Alfred"
IFACE="en0"
LOG="$HOME/.hermes/logs/hotspot.log"
CHECK_INTERVAL=30
log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] HOTSPOT: $*" >> "$LOG"; }
log "=== Keepalive started for SSID: $SSID ==="
while true; do
current=$(networksetup -getairportnetwork "$IFACE" 2>/dev/null | sed 's/.*: //')
if [ "$current" = "$SSID" ]; then
# Connected — check we actually have internet
if ! ping -c 1 -W 3 8.8.8.8 >/dev/null 2>&1; then
log "Connected to $SSID but no internet — forcing reconnect"
networksetup -setairportnetwork "$IFACE" "$SSID" 2>/dev/null
fi
else
log "Not on $SSID (current: ${current:-none}) — reconnecting..."
networksetup -setairportnetwork "$IFACE" "$SSID" 2>/dev/null
sleep 5
new=$(networksetup -getairportnetwork "$IFACE" 2>/dev/null | sed 's/.*: //')
if [ "$new" = "$SSID" ]; then
log "Reconnected to $SSID"
else
log "FAILED to reconnect (got: ${new:-none}) — retrying in ${CHECK_INTERVAL}s"
fi
fi
sleep "$CHECK_INTERVAL"
done
bin/nexus-merge-bot.sh Executable file
@@ -0,0 +1,216 @@
#!/usr/bin/env bash
# nexus-merge-bot.sh — Auto-review and auto-merge for the-nexus
# Polls open PRs. For each: clone, validate (HTML/JS/JSON/size), merge if clean.
# Runs as a loop. Squash-only. Linear history.
#
# Pattern: matches Timmy-time-dashboard merge policy.
# Pre-commit hooks + this bot are the gates. If gates pass, auto-merge.
set -uo pipefail
LOG_DIR="$HOME/.hermes/logs"
LOG="$LOG_DIR/nexus-merge-bot.log"
PIDFILE="$LOG_DIR/nexus-merge-bot.pid"
GITEA_URL="http://143.198.27.163:3000"
GITEA_TOKEN=$(cat "$HOME/.hermes/gitea_token_vps" 2>/dev/null)
REPO="Timmy_Foundation/the-nexus"
CHECK_INTERVAL=60 # seconds between PR polls
mkdir -p "$LOG_DIR"
# Single instance guard
if [ -f "$PIDFILE" ]; then
old_pid=$(cat "$PIDFILE")
if kill -0 "$old_pid" 2>/dev/null; then
echo "Merge bot already running (PID $old_pid)" >&2
exit 0
fi
fi
echo $$ > "$PIDFILE"
trap 'rm -f "$PIDFILE"' EXIT
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] MERGE-BOT: $*" >> "$LOG"
}
validate_pr() {
local pr_num="$1"
local work_dir="/tmp/nexus-validate-$$"
rm -rf "$work_dir"
# Get PR head branch
local pr_info
pr_info=$(curl -s --max-time 10 -H "Authorization: token ${GITEA_TOKEN}" \
"${GITEA_URL}/api/v1/repos/${REPO}/pulls/${pr_num}")
local head_ref
head_ref=$(echo "$pr_info" | python3 -c "import sys,json; print(json.loads(sys.stdin.read())['head']['ref'])" 2>/dev/null)
local mergeable
mergeable=$(echo "$pr_info" | python3 -c "import sys,json; print(json.loads(sys.stdin.read()).get('mergeable', False))" 2>/dev/null)
if [ "$mergeable" != "True" ]; then
log "PR #${pr_num}: not mergeable (conflicts), skipping"
echo "CONFLICT"
return 1
fi
# Clone and checkout the PR branch
git clone -q --depth 20 \
"http://Timmy:${GITEA_TOKEN}@143.198.27.163:3000/${REPO}.git" "$work_dir" 2>&1 | tail -5 >> "$LOG"
if [ ! -d "$work_dir/.git" ]; then
log "PR #${pr_num}: clone failed"
echo "CLONE_FAIL"
return 1
fi
cd "$work_dir" || return 1
# Fetch and checkout the PR branch
git fetch origin "$head_ref" 2>/dev/null && git checkout "$head_ref" 2>/dev/null
if [ $? -ne 0 ]; then
  # Fall back to the PR ref; bail out rather than validating the default branch
  if ! git fetch origin "pull/${pr_num}/head:pr-${pr_num}" 2>/dev/null || ! git checkout "pr-${pr_num}" 2>/dev/null; then
    log "PR #${pr_num}: could not check out PR branch"
    cd / && rm -rf "$work_dir"
    echo "CLONE_FAIL"
    return 1
  fi
fi
local FAIL=0
# 1. HTML validation
if [ -f index.html ]; then
python3 -c "
import html.parser
class V(html.parser.HTMLParser):
pass
v = V()
v.feed(open('index.html').read())
" 2>/dev/null || { log "PR #${pr_num}: HTML validation failed"; FAIL=1; }
fi
# 2. JS syntax check (node --check)
for f in $(find . -name '*.js' -not -path './node_modules/*' 2>/dev/null); do
if command -v node >/dev/null 2>&1; then
if ! node --check "$f" 2>/dev/null; then
log "PR #${pr_num}: JS syntax error in $f"
FAIL=1
fi
fi
done
# 3. JSON validation
for f in $(find . -name '*.json' -not -path './node_modules/*' 2>/dev/null); do
if ! python3 -c "import json; json.load(open('$f'))" 2>/dev/null; then
log "PR #${pr_num}: invalid JSON in $f"
FAIL=1
fi
done
# 4. File size budget (500KB per JS file)
for f in $(find . -name '*.js' -not -path './node_modules/*' 2>/dev/null); do
local size
size=$(wc -c < "$f")
if [ "$size" -gt 512000 ]; then
log "PR #${pr_num}: $f exceeds 500KB budget (${size} bytes)"
FAIL=1
fi
done
# Cleanup (step out of the work dir before deleting it)
cd / && rm -rf "$work_dir"
if [ $FAIL -eq 0 ]; then
echo "PASS"
return 0
else
echo "FAIL"
return 1
fi
}
merge_pr() {
local pr_num="$1"
local result
result=$(curl -s --max-time 30 -X POST \
-H "Authorization: token ${GITEA_TOKEN}" \
-H "Content-Type: application/json" \
-d '{"Do":"squash","delete_branch_after_merge":true}' \
"${GITEA_URL}/api/v1/repos/${REPO}/pulls/${pr_num}/merge")
if echo "$result" | grep -q '"sha"'; then
  log "PR #${pr_num}: MERGED (squash)"
  return 0
elif echo "$result" | grep -q '"message"'; then
  local msg
  msg=$(echo "$result" | python3 -c "import sys,json; print(json.loads(sys.stdin.read()).get('message','unknown'))" 2>/dev/null)
  log "PR #${pr_num}: merge failed: $msg"
  return 1
else
  log "PR #${pr_num}: merge failed: unexpected response"
  return 1
fi
}
comment_pr() {
local pr_num="$1"
local body="$2"
curl -s --max-time 10 -X POST \
-H "Authorization: token ${GITEA_TOKEN}" \
-H "Content-Type: application/json" \
-d "{\"body\": \"$body\"}" \
"${GITEA_URL}/api/v1/repos/${REPO}/issues/${pr_num}/comments" >/dev/null
}
log "Starting nexus merge bot (PID $$)"
while true; do
# Get open PRs
prs=$(curl -s --max-time 15 -H "Authorization: token ${GITEA_TOKEN}" \
"${GITEA_URL}/api/v1/repos/${REPO}/pulls?state=open&sort=newest&limit=20")
pr_count=$(echo "$prs" | python3 -c "import sys,json; print(len(json.loads(sys.stdin.buffer.read())))" 2>/dev/null || echo "0")
if [ "$pr_count" = "0" ] || [ -z "$pr_count" ]; then
log "No open PRs. Sleeping ${CHECK_INTERVAL}s"
sleep "$CHECK_INTERVAL"
continue
fi
log "Found ${pr_count} open PRs, validating..."
# Process PRs one at a time, oldest first (sequential merge)
pr_nums=$(echo "$prs" | python3 -c "
import sys, json
prs = json.loads(sys.stdin.buffer.read())
for p in prs:
print(p['number'])
" 2>/dev/null)
for pr_num in $pr_nums; do
log "Validating PR #${pr_num}..."
result=$(validate_pr "$pr_num")
case "$result" in
PASS)
log "PR #${pr_num}: validation passed, merging..."
comment_pr "$pr_num" "🤖 **Merge Bot**: CI validation passed (HTML, JS syntax, JSON, size budget). Auto-merging."
merge_pr "$pr_num"
# Wait a beat for Gitea to process
sleep 5
;;
CONFLICT)
# Auto-close stale conflicting PRs — don't let them pile up
log "PR #${pr_num}: conflicts, closing"
comment_pr "$pr_num" "🤖 **Merge Bot**: Merge conflicts with main. Closing. The issue remains open — next agent cycle will pick it up fresh."
curl -s --max-time 5 -X PATCH \
-H "Authorization: token ${GITEA_TOKEN}" \
-H "Content-Type: application/json" \
-d '{"state":"closed"}' \
"${GITEA_URL}/api/v1/repos/${REPO}/pulls/${pr_num}" >/dev/null 2>&1
;;
FAIL)
comment_pr "$pr_num" "🤖 **Merge Bot**: CI validation failed. Check the merge-bot log for details."
;;
*)
log "PR #${pr_num}: unknown result: $result"
;;
esac
done
log "Cycle complete. Sleeping ${CHECK_INTERVAL}s"
sleep "$CHECK_INTERVAL"
done
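Two of the merge-bot gates above (JSON validity and the 500KB-per-file size budget) can be sketched in isolation. File names here are illustrative temp files, not repo content:

```python
import json
import os
import tempfile

SIZE_BUDGET = 512000  # 500KB, matching the bot's budget

def passes_gates(path: str) -> bool:
    """True if the file is within budget and, for .json files, parses cleanly."""
    if os.path.getsize(path) > SIZE_BUDGET:
        return False
    if path.endswith(".json"):
        try:
            with open(path) as f:
                json.load(f)
        except (ValueError, OSError):
            return False
    return True

with tempfile.TemporaryDirectory() as d:
    good = os.path.join(d, "good.json")
    bad = os.path.join(d, "bad.json")
    open(good, "w").write('{"ok": true}')
    open(bad, "w").write('{not json')
    print(passes_gates(good), passes_gates(bad))  # True False
```

The bot applies the same checks per file after checking out the PR branch; anything failing a gate marks the whole PR as FAIL.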

bin/ops-gitea.sh Executable file

@@ -0,0 +1,70 @@
#!/usr/bin/env bash
# ── Gitea Feed Panel ───────────────────────────────────────────────────
# Shows open PRs, recent merges, and issue queue. Called by watch.
# ───────────────────────────────────────────────────────────────────────
B='\033[1m' ; D='\033[2m' ; R='\033[0m'
G='\033[32m' ; Y='\033[33m' ; RD='\033[31m' ; C='\033[36m' ; M='\033[35m'
TOKEN=$(cat ~/.hermes/gitea_token_vps 2>/dev/null)
API="http://143.198.27.163:3000/api/v1/repos/rockachopa/Timmy-time-dashboard"
echo -e "${B}${C} ◈ GITEA${R} ${D}$(date '+%H:%M:%S')${R}"
echo -e "${D}────────────────────────────────────────${R}"
# Open PRs
echo -e " ${B}Open PRs${R}"
curl -s --max-time 5 -H "Authorization: token $TOKEN" "$API/pulls?state=open&limit=10" 2>/dev/null | python3 -c "
import json,sys
try:
prs = json.loads(sys.stdin.read())
if not prs: print(' (none)')
for p in prs:
print(f' #{p[\"number\"]:3d} {p[\"user\"][\"login\"]:8s} {p[\"title\"][:45]}')
except: print(' (error)')
" 2>/dev/null
echo -e "${D}────────────────────────────────────────${R}"
# Recent merged (last 5)
echo -e " ${B}Recently Merged${R}"
curl -s --max-time 5 -H "Authorization: token $TOKEN" "$API/pulls?state=closed&sort=updated&limit=5" 2>/dev/null | python3 -c "
import json,sys
try:
prs = json.loads(sys.stdin.read())
merged = [p for p in prs if p.get('merged')]
if not merged: print(' (none)')
for p in merged[:5]:
t = p['merged_at'][:16].replace('T',' ')
print(f' ${G}✓${R} #{p[\"number\"]:3d} {p[\"title\"][:35]} ${D}{t}${R}')
except: print(' (error)')
" 2>/dev/null
echo -e "${D}────────────────────────────────────────${R}"
# Issue queue (assigned to kimi)
echo -e " ${B}Kimi Queue${R}"
curl -s --max-time 5 -H "Authorization: token $TOKEN" "$API/issues?state=open&limit=50&type=issues" 2>/dev/null | python3 -c "
import json,sys
try:
all_issues = json.loads(sys.stdin.read())
issues = [i for i in all_issues if 'kimi' in [a.get('login','') for a in (i.get('assignees') or [])]]
if not issues: print(' (empty — assign more!)')
for i in issues[:8]:
print(f' #{i[\"number\"]:3d} {i[\"title\"][:50]}')
if len(issues) > 8: print(f' ... +{len(issues)-8} more')
except: print(' (error)')
" 2>/dev/null
echo -e "${D}────────────────────────────────────────${R}"
# Unassigned issues
UNASSIGNED=$(curl -s --max-time 5 -H "Authorization: token $TOKEN" "$API/issues?state=open&limit=50&type=issues" 2>/dev/null | python3 -c "
import json,sys
try:
issues = json.loads(sys.stdin.read())
print(len([i for i in issues if not i.get('assignees')]))
except: print('?')
" 2>/dev/null)
echo -e " Unassigned issues: ${Y}$UNASSIGNED${R}"
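The panels above repeatedly filter Gitea issues by assignee login. A sketch of that filter on a hand-made payload; the field names follow the Gitea issue schema as used in these scripts, but the sample data is made up:

```python
def assigned_to(issues, login):
    """Issues whose assignees list contains the given login (assignees may be null)."""
    return [
        i for i in issues
        if login in [a.get("login", "") for a in (i.get("assignees") or [])]
    ]

sample = [
    {"number": 1, "title": "Fix clock drift", "assignees": [{"login": "kimi"}]},
    {"number": 2, "title": "Dark mode", "assignees": None},
    {"number": 3, "title": "Refactor API", "assignees": [{"login": "claude"}]},
]
print([i["number"] for i in assigned_to(sample, "kimi")])  # [1]
```

The `or []` guard matters: Gitea returns `null` (not an empty list) for unassigned issues, which is why the scripts write `(i.get('assignees') or [])`.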

bin/ops-helpers.sh Executable file

@@ -0,0 +1,235 @@
#!/usr/bin/env bash
# ── Dashboard Control Helpers ──────────────────────────────────────────
# Source this in the controls pane: source ~/.hermes/bin/ops-helpers.sh
# ───────────────────────────────────────────────────────────────────────
export TOKEN=$(cat ~/.hermes/gitea_token_vps 2>/dev/null)
export GITEA="http://143.198.27.163:3000"
export REPO_API="$GITEA/api/v1/repos/rockachopa/Timmy-time-dashboard"
ops-help() {
echo ""
echo -e "\033[1m\033[35m ◈ CONTROLS\033[0m"
echo -e "\033[2m ──────────────────────────────────────\033[0m"
echo ""
echo -e " \033[1mWake Up\033[0m"
echo " ops-wake-kimi Restart Kimi loop"
echo " ops-wake-claude Restart Claude loop"
echo " ops-wake-gemini Restart Gemini loop"
echo " ops-wake-gateway Restart gateway"
echo " ops-wake-all Restart everything"
echo ""
echo -e " \033[1mManage\033[0m"
echo " ops-merge PR_NUM Squash-merge a PR"
echo " ops-assign ISSUE Assign issue to Kimi"
echo " ops-assign-claude ISSUE [REPO] Assign to Claude"
echo " ops-audit Run efficiency audit now"
echo " ops-prs List open PRs"
echo " ops-queue Show Kimi's queue"
echo " ops-claude-queue Show Claude's queue"
echo " ops-gemini-queue Show Gemini's queue"
echo ""
echo -e " \033[1mEmergency\033[0m"
echo " ops-kill-kimi Stop Kimi loop"
echo " ops-kill-claude Stop Claude loop"
echo " ops-kill-gemini Stop Gemini loop"
echo " ops-kill-zombies Kill stuck git/pytest"
echo ""
echo -e " \033[1mOrchestrator\033[0m"
echo " ops-wake-timmy Start Timmy (Ollama)"
echo " ops-kill-timmy Stop Timmy"
echo ""
echo -e " \033[1mWatchdog\033[0m"
echo " ops-wake-watchdog Start loop watchdog"
echo " ops-kill-watchdog Stop loop watchdog"
echo ""
echo -e " \033[2m Type ops-help to see this again\033[0m"
echo ""
}
ops-wake-kimi() {
pkill -f "kimi-loop.sh" 2>/dev/null
sleep 1
nohup bash ~/.hermes/bin/kimi-loop.sh >> ~/.hermes/logs/kimi-loop.log 2>&1 &
echo " Kimi loop started (PID $!)"
}
ops-wake-gateway() {
hermes gateway start 2>&1
}
ops-wake-claude() {
local workers="${1:-3}"
pkill -f "claude-loop.sh" 2>/dev/null
sleep 1
nohup bash ~/.hermes/bin/claude-loop.sh "$workers" >> ~/.hermes/logs/claude-loop.log 2>&1 &
echo " Claude loop started — $workers workers (PID $!)"
}
ops-wake-gemini() {
pkill -f "gemini-loop.sh" 2>/dev/null
sleep 1
nohup bash ~/.hermes/bin/gemini-loop.sh >> ~/.hermes/logs/gemini-loop.log 2>&1 &
echo " Gemini loop started (PID $!)"
}
ops-wake-all() {
ops-wake-gateway
sleep 1
ops-wake-kimi
sleep 1
ops-wake-claude
sleep 1
ops-wake-gemini
echo " All services started"
}
ops-merge() {
local pr=$1
[ -z "$pr" ] && { echo "Usage: ops-merge PR_NUMBER"; return 1; }
curl -s -X POST -H "Authorization: token $TOKEN" -H "Content-Type: application/json" \
"$REPO_API/pulls/$pr/merge" -d '{"Do":"squash"}' | python3 -c "
import json,sys
d=json.loads(sys.stdin.read())
if 'sha' in d: print(f' ✓ PR #{$pr} merged ({d[\"sha\"][:8]})')
else: print(f' ✗ {d.get(\"message\",\"unknown error\")}')
" 2>/dev/null
}
ops-assign() {
local issue=$1
[ -z "$issue" ] && { echo "Usage: ops-assign ISSUE_NUMBER"; return 1; }
curl -s -X PATCH -H "Authorization: token $TOKEN" -H "Content-Type: application/json" \
"$REPO_API/issues/$issue" -d '{"assignees":["kimi"]}' | python3 -c "
import json,sys; d=json.loads(sys.stdin.read()); print(f' ✓ #{$issue} assigned to kimi')
" 2>/dev/null
}
ops-audit() {
bash ~/.hermes/bin/efficiency-audit.sh
}
ops-prs() {
curl -s -H "Authorization: token $TOKEN" "$REPO_API/pulls?state=open&limit=20" | python3 -c "
import json,sys
prs=json.loads(sys.stdin.read())
for p in prs: print(f' #{p[\"number\"]:4d} {p[\"user\"][\"login\"]:8s} {p[\"title\"][:60]}')
if not prs: print(' (none)')
" 2>/dev/null
}
ops-queue() {
curl -s -H "Authorization: token $TOKEN" "$REPO_API/issues?state=open&limit=50&type=issues" | python3 -c "
import json,sys
all_issues=json.loads(sys.stdin.read())
issues=[i for i in all_issues if 'kimi' in [a.get('login','') for a in (i.get('assignees') or [])]]
for i in issues: print(f' #{i[\"number\"]:4d} {i[\"title\"][:60]}')
if not issues: print(' (empty)')
" 2>/dev/null
}
ops-kill-kimi() {
pkill -f "kimi-loop.sh" 2>/dev/null
pkill -f "kimi.*--print" 2>/dev/null
echo " Kimi stopped"
}
ops-kill-claude() {
pkill -f "claude-loop.sh" 2>/dev/null
pkill -f "claude.*--print.*--dangerously" 2>/dev/null
rm -rf ~/.hermes/logs/claude-locks/*.lock 2>/dev/null
echo '{}' > ~/.hermes/logs/claude-active.json 2>/dev/null
echo " Claude stopped (all workers)"
}
ops-kill-gemini() {
pkill -f "gemini-loop.sh" 2>/dev/null
pkill -f "gemini.*--print" 2>/dev/null
echo " Gemini stopped"
}
ops-assign-claude() {
local issue=$1
local repo="${2:-rockachopa/Timmy-time-dashboard}"
[ -z "$issue" ] && { echo "Usage: ops-assign-claude ISSUE_NUMBER [owner/repo]"; return 1; }
curl -s -X PATCH -H "Authorization: token $TOKEN" -H "Content-Type: application/json" \
"$GITEA/api/v1/repos/$repo/issues/$issue" -d '{"assignees":["claude"]}' | python3 -c "
import json,sys; d=json.loads(sys.stdin.read()); print(f' ✓ #{$issue} assigned to claude')
" 2>/dev/null
}
ops-claude-queue() {
python3 -c "
import json, urllib.request
token = '$(cat ~/.hermes/claude_token 2>/dev/null)'
base = 'http://143.198.27.163:3000'
repos = ['rockachopa/Timmy-time-dashboard','rockachopa/alexanderwhitestone.com','replit/timmy-tower','replit/token-gated-economy','rockachopa/hermes-agent']
for repo in repos:
url = f'{base}/api/v1/repos/{repo}/issues?state=open&limit=50&type=issues'
try:
req = urllib.request.Request(url, headers={'Authorization': f'token {token}'})
resp = urllib.request.urlopen(req, timeout=5)
raw = json.loads(resp.read())
issues = [i for i in raw if 'claude' in [a.get('login','') for a in (i.get('assignees') or [])]]
for i in issues:
print(f' #{i[\"number\"]:4d} {repo.split(\"/\")[1]:20s} {i[\"title\"][:50]}')
except: continue
" 2>/dev/null || echo " (error)"
}
ops-assign-gemini() {
local issue=$1
local repo="${2:-rockachopa/Timmy-time-dashboard}"
[ -z "$issue" ] && { echo "Usage: ops-assign-gemini ISSUE_NUMBER [owner/repo]"; return 1; }
curl -s -X PATCH -H "Authorization: token $TOKEN" -H "Content-Type: application/json" \
"$GITEA/api/v1/repos/$repo/issues/$issue" -d '{"assignees":["gemini"]}' | python3 -c "
import json,sys; d=json.loads(sys.stdin.read()); print(f' ✓ #{$issue} assigned to gemini')
" 2>/dev/null
}
ops-gemini-queue() {
curl -s -H "Authorization: token $TOKEN" "$REPO_API/issues?state=open&limit=50&type=issues" | python3 -c "
import json,sys
all_issues=json.loads(sys.stdin.read())
issues=[i for i in all_issues if 'gemini' in [a.get('login','') for a in (i.get('assignees') or [])]]
for i in issues: print(f' #{i[\"number\"]:4d} {i[\"title\"][:60]}')
if not issues: print(' (empty)')
" 2>/dev/null
}
ops-kill-zombies() {
local killed=0
for pid in $(ps aux | grep "pytest tests/" | grep -v grep | awk '{print $2}'); do
kill "$pid" 2>/dev/null && killed=$((killed+1))
done
for pid in $(ps aux | grep "git.*push\|git-remote-http" | grep -v grep | awk '{print $2}'); do
kill "$pid" 2>/dev/null && killed=$((killed+1))
done
echo " Killed $killed zombie processes"
}
ops-wake-timmy() {
pkill -f "timmy-orchestrator.sh" 2>/dev/null
rm -f ~/.hermes/logs/timmy-orchestrator.pid
sleep 1
nohup bash ~/.hermes/bin/timmy-orchestrator.sh >> ~/.hermes/logs/timmy-orchestrator.log 2>&1 &
echo " Timmy orchestrator started (PID $!)"
}
ops-kill-timmy() {
pkill -f "timmy-orchestrator.sh" 2>/dev/null
rm -f ~/.hermes/logs/timmy-orchestrator.pid
echo " Timmy stopped"
}
ops-wake-watchdog() {
pkill -f "loop-watchdog.sh" 2>/dev/null
sleep 1
nohup bash ~/.hermes/bin/loop-watchdog.sh >> ~/.hermes/logs/watchdog.log 2>&1 &
echo " Watchdog started (PID $!)"
}
ops-kill-watchdog() {
pkill -f "loop-watchdog.sh" 2>/dev/null
echo " Watchdog stopped"
}
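`ops-merge` posts `{"Do":"squash"}` to the Gitea merge endpoint via curl. The same request can be sketched with urllib — constructed only, never sent here; the base URL and token are placeholders:

```python
import json
import urllib.request

def build_merge_request(base, repo, pr, token):
    """Build (but do not send) the squash-merge POST for a Gitea PR."""
    url = f"{base}/api/v1/repos/{repo}/pulls/{pr}/merge"
    data = json.dumps({"Do": "squash"}).encode()
    return urllib.request.Request(
        url,
        data=data,
        method="POST",
        headers={
            "Authorization": f"token {token}",
            "Content-Type": "application/json",
        },
    )

req = build_merge_request("http://gitea.example", "owner/repo", 42, "SECRET")
print(req.get_method(), req.full_url)
```

`Do` is the Gitea field selecting merge style (`squash` here, matching the bot's squash-only, linear-history policy).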

bin/ops-panel.sh Executable file

@@ -0,0 +1,300 @@
#!/usr/bin/env bash
# ── Consolidated Ops Panel ─────────────────────────────────────────────
# Everything in one view. Designed for a half-screen pane (~100x45).
# ───────────────────────────────────────────────────────────────────────
B='\033[1m' ; D='\033[2m' ; R='\033[0m' ; U='\033[4m'
G='\033[32m' ; Y='\033[33m' ; RD='\033[31m' ; C='\033[36m' ; M='\033[35m' ; W='\033[37m'
OK="${G}${R}" ; WARN="${Y}${R}" ; FAIL="${RD}${R}" ; OFF="${D}${R}"
TOKEN=$(cat ~/.hermes/gitea_token_vps 2>/dev/null)
API="http://143.198.27.163:3000/api/v1/repos/rockachopa/Timmy-time-dashboard"
# ── HEADER ─────────────────────────────────────────────────────────────
echo ""
echo -e " ${B}${M}◈ HERMES OPERATIONS${R} ${D}$(date '+%a %b %d %H:%M:%S')${R}"
echo -e " ${D}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${R}"
echo ""
# ── SERVICES ───────────────────────────────────────────────────────────
echo -e " ${B}${U}SERVICES${R}"
echo ""
# Gateway
GW_PID=$(pgrep -f "hermes.*gateway.*run" 2>/dev/null | head -1)
[ -n "$GW_PID" ] && echo -e " ${OK} Gateway ${D}pid $GW_PID${R}" \
|| echo -e " ${FAIL} Gateway ${RD}DOWN — run: hermes gateway start${R}"
# Kimi Code loop
KIMI_PID=$(pgrep -f "kimi-loop.sh" 2>/dev/null | head -1)
[ -n "$KIMI_PID" ] && echo -e " ${OK} Kimi Loop ${D}pid $KIMI_PID${R}" \
|| echo -e " ${FAIL} Kimi Loop ${RD}DOWN — run: ops-wake-kimi${R}"
# Active Kimi Code worker
KIMI_WORK=$(pgrep -f "kimi.*--print" 2>/dev/null | head -1)
if [ -n "$KIMI_WORK" ]; then
echo -e " ${OK} Kimi Code ${D}pid $KIMI_WORK ${G}working${R}"
elif [ -n "$KIMI_PID" ]; then
echo -e " ${WARN} Kimi Code ${Y}between issues${R}"
else
echo -e " ${OFF} Kimi Code ${D}not running${R}"
fi
# Claude Code loop (parallel workers)
CLAUDE_PID=$(pgrep -f "claude-loop.sh" 2>/dev/null | head -1)
CLAUDE_WORKERS=$(pgrep -f "claude.*--print.*--dangerously" 2>/dev/null | wc -l | tr -d ' ')
if [ -n "$CLAUDE_PID" ]; then
echo -e " ${OK} Claude Loop ${D}pid $CLAUDE_PID ${G}${CLAUDE_WORKERS} workers active${R}"
else
echo -e " ${FAIL} Claude Loop ${RD}DOWN — run: ops-wake-claude${R}"
fi
# Gemini Code loop
GEMINI_PID=$(pgrep -f "gemini-loop.sh" 2>/dev/null | head -1)
GEMINI_WORK=$(pgrep -f "gemini.*--print" 2>/dev/null | head -1)
if [ -n "$GEMINI_PID" ]; then
if [ -n "$GEMINI_WORK" ]; then
echo -e " ${OK} Gemini Loop ${D}pid $GEMINI_PID ${G}working${R}"
else
echo -e " ${WARN} Gemini Loop ${D}pid $GEMINI_PID ${Y}between issues${R}"
fi
else
echo -e " ${FAIL} Gemini Loop ${RD}DOWN — run: ops-wake-gemini${R}"
fi
# Timmy Orchestrator
TIMMY_PID=$(pgrep -f "timmy-orchestrator.sh" 2>/dev/null | head -1)
if [ -n "$TIMMY_PID" ]; then
TIMMY_LAST=$(tail -1 "$HOME/.hermes/logs/timmy-orchestrator.log" 2>/dev/null | sed 's/.*TIMMY: //')
echo -e " ${OK} Timmy (Ollama) ${D}pid $TIMMY_PID ${G}${TIMMY_LAST:0:30}${R}"
else
echo -e " ${FAIL} Timmy ${RD}DOWN — run: ops-wake-timmy${R}"
fi
# Gitea VPS
if curl -s --max-time 3 "http://143.198.27.163:3000/api/v1/version" >/dev/null 2>&1; then
echo -e " ${OK} Gitea VPS ${D}143.198.27.163:3000${R}"
else
echo -e " ${FAIL} Gitea VPS ${RD}unreachable${R}"
fi
# Matrix staging
HTTP=$(curl -s --max-time 3 -o /dev/null -w "%{http_code}" "http://143.198.27.163/")
[ "$HTTP" = "200" ] && echo -e " ${OK} Matrix Staging ${D}143.198.27.163${R}" \
|| echo -e " ${FAIL} Matrix Staging ${RD}HTTP $HTTP${R}"
# Dev cycle cron
CRON_LINE=$(hermes cron list 2>&1 | grep -B1 "consolidated-dev-cycle" | head -1 2>/dev/null)
if echo "$CRON_LINE" | grep -q "active"; then
NEXT=$(hermes cron list 2>&1 | grep -A4 "consolidated-dev-cycle" | grep "Next" | awk '{print $NF}' | cut -dT -f2 | cut -d. -f1)
echo -e " ${OK} Dev Cycle ${D}every 30m, next ${NEXT:-?}${R}"
else
echo -e " ${FAIL} Dev Cycle Cron ${RD}MISSING${R}"
fi
echo ""
# ── KIMI STATS ─────────────────────────────────────────────────────────
echo -e " ${B}${U}KIMI${R}"
echo ""
KIMI_LOG="$HOME/.hermes/logs/kimi-loop.log"
if [ -f "$KIMI_LOG" ]; then
COMPLETED=$(grep -c "SUCCESS:" "$KIMI_LOG" 2>/dev/null | tail -1 || echo 0)
FAILED=$(grep -c "FAILED:" "$KIMI_LOG" 2>/dev/null | tail -1 || echo 0)
LAST_ISSUE=$(grep "=== ISSUE" "$KIMI_LOG" | tail -1 | sed 's/.*=== //' | sed 's/ ===//')
LAST_TIME=$(grep "=== ISSUE\|SUCCESS\|FAILED" "$KIMI_LOG" | tail -1 | cut -d']' -f1 | tr -d '[')
RATE=""
if [ "$COMPLETED" -gt 0 ] || [ "$FAILED" -gt 0 ]; then
  TOTAL=$((COMPLETED + FAILED))
  PCT=$((COMPLETED * 100 / TOTAL))
  RATE=" (${PCT}% success)"
fi
echo -e " Completed ${G}${B}$COMPLETED${R} Failed ${RD}$FAILED${R}${D}$RATE${R}"
echo -e " Current ${C}$LAST_ISSUE${R}"
echo -e " Last seen ${D}$LAST_TIME${R}"
fi
echo ""
# ── CLAUDE STATS ──────────────────────────────────────────────────
echo -e " ${B}${U}CLAUDE${R}"
echo ""
CLAUDE_LOG="$HOME/.hermes/logs/claude-loop.log"
if [ -f "$CLAUDE_LOG" ]; then
CL_COMPLETED=$(grep -c "SUCCESS" "$CLAUDE_LOG" 2>/dev/null | tail -1 || echo 0)
CL_FAILED=$(grep -c "FAILED" "$CLAUDE_LOG" 2>/dev/null | tail -1 || echo 0)
CL_RATE_LIM=$(grep -c "RATE LIMITED" "$CLAUDE_LOG" 2>/dev/null | tail -1 || echo 0)
CL_RATE=""
if [ "$CL_COMPLETED" -gt 0 ] || [ "$CL_FAILED" -gt 0 ]; then
CL_TOTAL=$((CL_COMPLETED + CL_FAILED))
[ "$CL_TOTAL" -gt 0 ] && CL_PCT=$((CL_COMPLETED * 100 / CL_TOTAL)) && CL_RATE=" (${CL_PCT}%)"
fi
echo -e " ${G}${B}$CL_COMPLETED${R} done ${RD}$CL_FAILED${R} fail ${Y}$CL_RATE_LIM${R} rate-limited${D}$CL_RATE${R}"
# Show active workers
ACTIVE="$HOME/.hermes/logs/claude-active.json"
if [ -f "$ACTIVE" ]; then
python3 -c "
import json
try:
with open('$ACTIVE') as f: active = json.load(f)
for wid, info in sorted(active.items()):
iss = info.get('issue','')
repo = info.get('repo','').split('/')[-1] if info.get('repo') else ''
st = info.get('status','')
if st == 'working':
print(f' \033[36mW{wid}\033[0m \033[33m#{iss}\033[0m \033[2m{repo}\033[0m')
elif st == 'idle':
print(f' \033[2mW{wid} idle\033[0m')
except: pass
" 2>/dev/null
fi
else
echo -e " ${D}(no log yet — start with ops-wake-claude)${R}"
fi
echo ""
# ── GEMINI STATS ─────────────────────────────────────────────────────
echo -e " ${B}${U}GEMINI${R}"
echo ""
GEMINI_LOG="$HOME/.hermes/logs/gemini-loop.log"
if [ -f "$GEMINI_LOG" ]; then
GM_COMPLETED=$(grep -c "SUCCESS:" "$GEMINI_LOG" 2>/dev/null | tail -1 || echo 0)
GM_FAILED=$(grep -c "FAILED:" "$GEMINI_LOG" 2>/dev/null | tail -1 || echo 0)
GM_RATE=""
if [ "$GM_COMPLETED" -gt 0 ] || [ "$GM_FAILED" -gt 0 ]; then
GM_TOTAL=$((GM_COMPLETED + GM_FAILED))
[ "$GM_TOTAL" -gt 0 ] && GM_PCT=$((GM_COMPLETED * 100 / GM_TOTAL)) && GM_RATE=" (${GM_PCT}%)"
fi
GM_LAST=$(grep "=== ISSUE" "$GEMINI_LOG" | tail -1 | sed 's/.*=== //' | sed 's/ ===//')
echo -e " ${G}${B}$GM_COMPLETED${R} done ${RD}$GM_FAILED${R} fail${D}$GM_RATE${R}"
[ -n "$GM_LAST" ] && echo -e " Current ${C}$GM_LAST${R}"
else
echo -e " ${D}(no log yet — start with ops-wake-gemini)${R}"
fi
echo ""
# ── OPEN PRS ───────────────────────────────────────────────────────────
echo -e " ${B}${U}PULL REQUESTS${R}"
echo ""
curl -s --max-time 5 -H "Authorization: token $TOKEN" "$API/pulls?state=open&limit=8" 2>/dev/null | python3 -c "
import json,sys
try:
prs = json.loads(sys.stdin.read())
if not prs: print(' \033[2m(none open)\033[0m')
for p in prs[:6]:
n = p['number']
t = p['title'][:55]
u = p['user']['login']
print(f' \033[33m#{n:<4d}\033[0m \033[2m{u:8s}\033[0m {t}')
if len(prs) > 6: print(f' \033[2m... +{len(prs)-6} more\033[0m')
except: print(' \033[31m(error fetching)\033[0m')
" 2>/dev/null
echo ""
# ── RECENTLY MERGED ────────────────────────────────────────────────────
echo -e " ${B}${U}RECENTLY MERGED${R}"
echo ""
curl -s --max-time 5 -H "Authorization: token $TOKEN" "$API/pulls?state=closed&sort=updated&limit=5" 2>/dev/null | python3 -c "
import json,sys
try:
prs = json.loads(sys.stdin.read())
merged = [p for p in prs if p.get('merged')][:5]
if not merged: print(' \033[2m(none recent)\033[0m')
for p in merged:
n = p['number']
t = p['title'][:50]
when = p['merged_at'][11:16]
print(f' \033[32m✓ #{n:<4d}\033[0m {t} \033[2m{when}\033[0m')
except: print(' \033[31m(error)\033[0m')
" 2>/dev/null
echo ""
# ── KIMI QUEUE ─────────────────────────────────────────────────────────
echo -e " ${B}${U}KIMI QUEUE${R}"
echo ""
curl -s --max-time 5 -H "Authorization: token $TOKEN" "$API/issues?state=open&limit=50&type=issues" 2>/dev/null | python3 -c "
import json,sys
try:
all_issues = json.loads(sys.stdin.read())
issues = [i for i in all_issues if 'kimi' in [a.get('login','') for a in (i.get('assignees') or [])]]
if not issues: print(' \033[33m⚠ Queue empty — assign more issues to kimi\033[0m')
for i in issues[:6]:
n = i['number']
t = i['title'][:55]
print(f' #{n:<4d} {t}')
if len(issues) > 6: print(f' \033[2m... +{len(issues)-6} more\033[0m')
except: print(' \033[31m(error)\033[0m')
" 2>/dev/null
echo ""
# ── CLAUDE QUEUE ──────────────────────────────────────────────────
echo -e " ${B}${U}CLAUDE QUEUE${R}"
echo ""
# Claude works across multiple repos
python3 -c "
import json, sys, urllib.request
token = '$(cat ~/.hermes/claude_token 2>/dev/null)'
base = 'http://143.198.27.163:3000'
repos = ['rockachopa/Timmy-time-dashboard','rockachopa/alexanderwhitestone.com','replit/timmy-tower','replit/token-gated-economy','rockachopa/hermes-agent']
all_issues = []
for repo in repos:
url = f'{base}/api/v1/repos/{repo}/issues?state=open&limit=50&type=issues'
try:
req = urllib.request.Request(url, headers={'Authorization': f'token {token}'})
resp = urllib.request.urlopen(req, timeout=5)
raw = json.loads(resp.read())
issues = [i for i in raw if 'claude' in [a.get('login','') for a in (i.get('assignees') or [])]]
for i in issues:
i['_repo'] = repo.split('/')[1]
all_issues.extend(issues)
except: continue
if not all_issues:
print(' \033[33m\u26a0 Queue empty \u2014 assign issues to claude\033[0m')
else:
for i in all_issues[:6]:
n = i['number']
t = i['title'][:45]
r = i['_repo'][:12]
print(f' #{n:<4d} \033[2m{r:12s}\033[0m {t}')
if len(all_issues) > 6:
print(f' \033[2m... +{len(all_issues)-6} more\033[0m')
" 2>/dev/null
echo ""
# ── GEMINI QUEUE ─────────────────────────────────────────────────────
echo -e " ${B}${U}GEMINI QUEUE${R}"
echo ""
curl -s --max-time 5 -H "Authorization: token $TOKEN" "$API/issues?state=open&limit=50&type=issues" 2>/dev/null | python3 -c "
import json,sys
try:
all_issues = json.loads(sys.stdin.read())
issues = [i for i in all_issues if 'gemini' in [a.get('login','') for a in (i.get('assignees') or [])]]
if not issues: print(' \033[33m⚠ Queue empty — assign issues to gemini\033[0m')
for i in issues[:6]:
n = i['number']
t = i['title'][:55]
print(f' #{n:<4d} {t}')
if len(issues) > 6: print(f' \033[2m... +{len(issues)-6} more\033[0m')
except: print(' \033[31m(error)\033[0m')
" 2>/dev/null
echo ""
# ── WARNINGS ───────────────────────────────────────────────────────────
HERMES_PROCS=$(ps aux | grep -E "hermes.*python" | grep -v grep | wc -l | tr -d ' ')
STUCK_GIT=$(ps aux | grep "git.*push\|git-remote-http" | grep -v grep | wc -l | tr -d ' ')
ORPHAN_PY=$(ps aux | grep "pytest tests/" | grep -v grep | wc -l | tr -d ' ')
UNASSIGNED=$(curl -s --max-time 3 -H "Authorization: token $TOKEN" "$API/issues?state=open&limit=50&type=issues" 2>/dev/null | python3 -c "import json,sys; issues=json.loads(sys.stdin.read()); print(len([i for i in issues if not i.get('assignees')]))" 2>/dev/null)
WARNS=""
[ "$STUCK_GIT" -gt 0 ] && WARNS+=" ${RD}$STUCK_GIT stuck git processes${R}\n"
[ "$ORPHAN_PY" -gt 0 ] && WARNS+=" ${Y}$ORPHAN_PY orphaned pytest runs${R}\n"
[ "${UNASSIGNED:-0}" -gt 10 ] && WARNS+=" ${Y}$UNASSIGNED unassigned issues — feed the queue${R}\n"
if [ -n "$WARNS" ]; then
echo -e " ${B}${U}WARNINGS${R}"
echo ""
echo -e "$WARNS"
fi
echo -e " ${D}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${R}"
echo -e " ${D}hermes sessions: $HERMES_PROCS unassigned: ${UNASSIGNED:-?} ↻ 20s${R}"
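The panel's per-agent stats use bash integer math (`COMPLETED * 100 / TOTAL`) guarded against a zero total. The same computation as a tiny helper, for clarity:

```python
def success_pct(completed: int, failed: int):
    """Integer success percentage, or None when there is nothing to rate."""
    total = completed + failed
    if total == 0:
        return None
    return completed * 100 // total  # floor division, like $(( )) in bash

print(success_pct(7, 3), success_pct(0, 0))  # 70 None
```

Floor division matches bash's `$(( ))` arithmetic, so 1 success out of 3 reads 33%, not 33.3%.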

bin/timmy-loopstat.sh Executable file

@@ -0,0 +1,210 @@
#!/usr/bin/env bash
# ── LOOPSTAT Panel ──────────────────────
# Strategic view: queue, perf, triage,
# recent cycles. 40-col × 50-row pane.
# ────────────────────────────────────────
REPO="$HOME/Timmy-Time-dashboard"
QUEUE="$REPO/.loop/queue.json"
RETRO="$REPO/.loop/retro/cycles.jsonl"
TRIAGE_R="$REPO/.loop/retro/triage.jsonl"
DEEP_R="$REPO/.loop/retro/deep-triage.jsonl"
SUMMARY="$REPO/.loop/retro/summary.json"
QUARANTINE="$REPO/.loop/quarantine.json"
STATE="$REPO/.loop/state.json"
B='\033[1m' ; D='\033[2m' ; R='\033[0m'
G='\033[32m' ; Y='\033[33m' ; RD='\033[31m'
C='\033[36m' ; M='\033[35m'
W=$(tput cols 2>/dev/null || echo 40)
hr() { printf "${D}"; printf '─%.0s' $(seq 1 "$W"); printf "${R}\n"; }
while true; do
clear
echo -e "${B}${M} ◈ LOOPSTAT${R} ${D}$(date '+%H:%M')${R}"
hr
# ── PERFORMANCE ──────────────────────
python3 -c "
import json, os
f = '$SUMMARY'
if not os.path.exists(f):
print(' \033[2m(no perf data yet)\033[0m')
raise SystemExit
s = json.load(open(f))
rate = s.get('success_rate', 0)
avg = s.get('avg_duration_seconds', 0)
total = s.get('total_cycles', 0)
merged = s.get('total_prs_merged', 0)
added = s.get('total_lines_added', 0)
removed = s.get('total_lines_removed', 0)
rc = '\033[32m' if rate >= .8 else '\033[33m' if rate >= .5 else '\033[31m'
am, asec = divmod(avg, 60)
print(f' {rc}{rate*100:.0f}%\033[0m ok \033[1m{am:.0f}m{asec:02.0f}s\033[0m avg {total} cyc')
print(f' \033[32m{merged}\033[0m PRs \033[32m+{added}\033[0m/\033[31m-{removed}\033[0m lines')
bt = s.get('by_type', {})
parts = []
for t in ['bug','feature','refactor']:
i = bt.get(t, {})
if i.get('count', 0):
sr = i.get('success_rate', 0)
parts.append(f'{t[:3]}:{sr*100:.0f}%')
if parts:
print(f' \033[2m{\" \".join(parts)}\033[0m')
" 2>/dev/null
hr
# ── QUEUE ────────────────────────────
echo -e "${B}${Y} QUEUE${R}"
python3 -c "
import json, os
f = '$QUEUE'
if not os.path.exists(f):
print(' \033[2m(no queue yet)\033[0m')
raise SystemExit
q = json.load(open(f))
if not q:
print(' \033[2m(empty — needs triage)\033[0m')
raise SystemExit
types = {}
for item in q:
t = item.get('type','?')
types[t] = types.get(t, 0) + 1
ts = ' '.join(f'{t[0].upper()}:{n}' for t,n in sorted(types.items()) if t != 'philosophy')
print(f' \033[1m{len(q)}\033[0m ready \033[2m{ts}\033[0m')
print()
for i, item in enumerate(q[:8]):
n = item['issue']
s = item.get('score', 0)
title = item.get('title', '?')
t = item.get('type', '?')
ic = {'bug':'\033[31m●','feature':'\033[32m◆','refactor':'\033[36m○'}.get(t, '\033[2m·')
bar = '█' * s + '░' * (9 - s)
ptr = '\033[1m→' if i == 0 else f'\033[2m{i+1}'
# Truncate title to fit: 40 - 2(pad) - 2(ptr) - 2(ic) - 5(#num) - 1 = 28
tit = title[:24]
print(f' {ptr}\033[0m {ic}\033[0m \033[33m#{n}\033[0m {tit}')
if len(q) > 8:
print(f' \033[2m +{len(q)-8} more\033[0m')
" 2>/dev/null
hr
# ── TRIAGE ───────────────────────────
echo -e "${B}${G} TRIAGE${R}"
python3 -c "
import json, os
from datetime import datetime, timezone
cycle = '?'
if os.path.exists('$STATE'):
try: cycle = json.load(open('$STATE')).get('cycle','?')
except: pass
def ago(ts):
if not ts: return 'never'
try:
dt = datetime.fromisoformat(ts)
if dt.tzinfo is None:
dt = dt.replace(tzinfo=timezone.utc)
m = int((datetime.now(timezone.utc) - dt).total_seconds() / 60)
if m < 60: return f'{m}m ago'
if m < 1440: return f'{m//60}h{m%60}m ago'
return f'{m//1440}d ago'
except: return '?'
# Fast
fast_ago = 'never'
if os.path.exists('$TRIAGE_R'):
lines = open('$TRIAGE_R').read().strip().splitlines()
if lines:
try:
last = json.loads(lines[-1])
fast_ago = ago(last.get('timestamp',''))
except: pass
# Deep
deep_ago = 'never'
timmy = ''
if os.path.exists('$DEEP_R'):
lines = open('$DEEP_R').read().strip().splitlines()
if lines:
try:
last = json.loads(lines[-1])
deep_ago = ago(last.get('timestamp',''))
timmy = last.get('timmy_feedback','')[:60]
except: pass
# Next
try:
c = int(cycle)
nf = 5 - (c % 5)
nd = 20 - (c % 20)
except:
nf = nd = '?'
print(f' Fast {fast_ago:<12s} \033[2mnext:{nf}c\033[0m')
print(f' Deep {deep_ago:<12s} \033[2mnext:{nd}c\033[0m')
if timmy:
# wrap at ~36 chars
print(f' \033[35mTimmy:\033[0m')
t = timmy
while t:
print(f' \033[2m{t[:36]}\033[0m')
t = t[36:]
# Quarantine
if os.path.exists('$QUARANTINE'):
try:
qd = json.load(open('$QUARANTINE'))
if qd:
qs = ','.join(f'#{k}' for k in list(qd.keys())[:4])
print(f' \033[31mQuarantined:{len(qd)}\033[0m {qs}')
except: pass
" 2>/dev/null
hr
# ── RECENT CYCLES ────────────────────
echo -e "${B}${D} CYCLES${R}"
python3 -c "
import json, os
f = '$RETRO'
if not os.path.exists(f):
print(' \033[2m(none yet)\033[0m')
raise SystemExit
lines = open(f).read().strip().splitlines()
recent = []
for l in lines[-12:]:
try: recent.append(json.loads(l))
except: continue
if not recent:
print(' \033[2m(none yet)\033[0m')
raise SystemExit
for e in reversed(recent):
cy = e.get('cycle','?')
ok = e.get('success', False)
iss = e.get('issue','')
dur = e.get('duration', 0)
pr = e.get('pr','')
reason = e.get('reason','')[:18]
ic = '\033[32m✓\033[0m' if ok else '\033[31m✗\033[0m'
ds = f'{dur//60}m' if dur else '-'
ix = f'#{iss}' if iss else ' — '
if ok:
det = f'PR#{pr}' if pr else ''
else:
det = reason
print(f' {ic} {cy:<3} {ix:<5s} {ds:>4s} \033[2m{det}\033[0m')
" 2>/dev/null
hr
echo -e "${D} ↻ 10s${R}"
sleep 10
done

bin/timmy-orchestrator.sh Executable file

@@ -0,0 +1,201 @@
#!/usr/bin/env bash
# timmy-orchestrator.sh — Timmy's orchestration loop
# Uses hermes (local Ollama) to triage, assign, review, and merge.
# Timmy is the brain. Claude/Gemini/Kimi are the hands.
set -uo pipefail
LOG_DIR="$HOME/.hermes/logs"
LOG="$LOG_DIR/timmy-orchestrator.log"
PIDFILE="$LOG_DIR/timmy-orchestrator.pid"
GITEA_URL="http://143.198.27.163:3000"
GITEA_TOKEN=$(cat "$HOME/.hermes/gitea_token_vps" 2>/dev/null) # Timmy token, NOT rockachopa
CYCLE_INTERVAL=300
HERMES_TIMEOUT=180
mkdir -p "$LOG_DIR"
# Single instance guard
if [ -f "$PIDFILE" ]; then
old_pid=$(cat "$PIDFILE")
if kill -0 "$old_pid" 2>/dev/null; then
echo "Timmy already running (PID $old_pid)" >&2
exit 0
fi
fi
echo $$ > "$PIDFILE"
trap 'rm -f "$PIDFILE"' EXIT
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] TIMMY: $*" >> "$LOG"
}
REPOS="Timmy_Foundation/the-nexus Timmy_Foundation/autolora"
gather_state() {
local state_dir="/tmp/timmy-state-$$"
mkdir -p "$state_dir"
> "$state_dir/unassigned.txt"
> "$state_dir/open_prs.txt"
> "$state_dir/agent_status.txt"
for repo in $REPOS; do
local short=$(echo "$repo" | cut -d/ -f2)
# Unassigned issues
curl -sf -H "Authorization: token $GITEA_TOKEN" \
"$GITEA_URL/api/v1/repos/$repo/issues?state=open&type=issues&limit=50" 2>/dev/null | \
python3 -c "
import sys,json
for i in json.load(sys.stdin):
if not i.get('assignees'):
print(f'REPO={\"$repo\"} NUM={i[\"number\"]} TITLE={i[\"title\"]}')" >> "$state_dir/unassigned.txt" 2>/dev/null
# Open PRs
curl -sf -H "Authorization: token $GITEA_TOKEN" \
"$GITEA_URL/api/v1/repos/$repo/pulls?state=open&limit=30" 2>/dev/null | \
python3 -c "
import sys,json
for p in json.load(sys.stdin):
print(f'REPO={\"$repo\"} PR={p[\"number\"]} BY={p[\"user\"][\"login\"]} TITLE={p[\"title\"]}')" >> "$state_dir/open_prs.txt" 2>/dev/null
done
echo "Claude workers: $(pgrep -f 'claude.*--print.*--dangerously' 2>/dev/null | wc -l | tr -d ' ')" >> "$state_dir/agent_status.txt"
echo "Claude loop: $(pgrep -f 'claude-loop.sh' 2>/dev/null | wc -l | tr -d ' ') procs" >> "$state_dir/agent_status.txt"
tail -50 "$LOG_DIR/claude-loop.log" 2>/dev/null | grep -c "SUCCESS" | xargs -I{} echo "Recent successes: {}" >> "$state_dir/agent_status.txt"
tail -50 "$LOG_DIR/claude-loop.log" 2>/dev/null | grep -c "FAILED" | xargs -I{} echo "Recent failures: {}" >> "$state_dir/agent_status.txt"
echo "$state_dir"
}
run_triage() {
local state_dir="$1"
local unassigned_count=$(wc -l < "$state_dir/unassigned.txt" | tr -d ' ')
local pr_count=$(wc -l < "$state_dir/open_prs.txt" | tr -d ' ')
log "Cycle: $unassigned_count unassigned, $pr_count open PRs"
# If nothing to do, skip the LLM call
if [ "$unassigned_count" -eq 0 ] && [ "$pr_count" -eq 0 ]; then
log "Nothing to triage"
return
fi
# Phase 1: Bulk-assign unassigned issues to claude (no LLM needed)
if [ "$unassigned_count" -gt 0 ]; then
log "Assigning $unassigned_count issues to claude..."
while IFS= read -r line; do
local repo=$(echo "$line" | sed 's/.*REPO=\([^ ]*\).*/\1/')
local num=$(echo "$line" | sed 's/.*NUM=\([^ ]*\).*/\1/')
curl -sf -X PATCH "$GITEA_URL/api/v1/repos/$repo/issues/$num" \
-H "Authorization: token $GITEA_TOKEN" \
-H "Content-Type: application/json" \
-d '{"assignees":["claude"]}' >/dev/null 2>&1 && \
log " Assigned #$num ($repo) to claude"
done < "$state_dir/unassigned.txt"
fi
# Phase 2: PR review via Timmy (LLM)
if [ "$pr_count" -gt 0 ]; then
run_pr_review "$state_dir"
fi
}
run_pr_review() {
local state_dir="$1"
local prompt_file="/tmp/timmy-prompt-$$.txt"
# Build a review prompt listing all open PRs
cat > "$prompt_file" <<'HEADER'
You are Timmy, the orchestrator. Review these open PRs from AI agents.
For each PR, you will see the diff. Your job:
- MERGE if changes look reasonable (most agent PRs are good, merge aggressively)
- COMMENT if there is a clear problem
- CLOSE if it is a duplicate or garbage
Use these exact curl patterns (replace REPO, NUM):
Merge: curl -sf -X POST "GITEA/api/v1/repos/REPO/pulls/NUM/merge" -H "Authorization: token TOKEN" -H "Content-Type: application/json" -d '{"Do":"squash"}'
Comment: curl -sf -X POST "GITEA/api/v1/repos/REPO/pulls/NUM/comments" -H "Authorization: token TOKEN" -H "Content-Type: application/json" -d '{"body":"feedback"}'
Close: curl -sf -X PATCH "GITEA/api/v1/repos/REPO/pulls/NUM" -H "Authorization: token TOKEN" -H "Content-Type: application/json" -d '{"state":"closed"}'
HEADER
# Replace placeholders (avoids BSD-only `sed -i ''`, so this works on macOS and Linux)
local tmp="$prompt_file.tmp"
sed "s|GITEA|$GITEA_URL|g; s|TOKEN|$GITEA_TOKEN|g" "$prompt_file" > "$tmp" && mv "$tmp" "$prompt_file"
# Add each PR with its diff (up to 10 PRs per cycle)
local count=0
while IFS= read -r line && [ "$count" -lt 10 ]; do
local repo=$(echo "$line" | sed 's/.*REPO=\([^ ]*\).*/\1/')
local pr_num=$(echo "$line" | sed 's/.*PR=\([^ ]*\).*/\1/')
local by=$(echo "$line" | sed 's/.*BY=\([^ ]*\).*/\1/')
local title=$(echo "$line" | sed 's/.*TITLE=//')
[ -z "$pr_num" ] && continue
local diff
diff=$(curl -sf -H "Authorization: token $GITEA_TOKEN" \
-H "Accept: application/diff" \
"$GITEA_URL/api/v1/repos/$repo/pulls/$pr_num" 2>/dev/null | head -150)
[ -z "$diff" ] && continue
echo "" >> "$prompt_file"
echo "=== PR #$pr_num in $repo by $by ===" >> "$prompt_file"
echo "Title: $title" >> "$prompt_file"
echo "Diff (first 150 lines):" >> "$prompt_file"
echo "$diff" >> "$prompt_file"
echo "=== END PR #$pr_num ===" >> "$prompt_file"
count=$((count + 1))
done < "$state_dir/open_prs.txt"
if [ "$count" -eq 0 ]; then
rm -f "$prompt_file"
return
fi
echo "" >> "$prompt_file"
echo "Review each PR above. Execute curl commands for your decisions. Be brief." >> "$prompt_file"
local prompt_text
prompt_text=$(cat "$prompt_file")
rm -f "$prompt_file"
log "Reviewing $count PRs..."
local result
result=$(timeout "$HERMES_TIMEOUT" hermes chat -q "$prompt_text" -Q --yolo 2>&1)
local exit_code=$?
if [ "$exit_code" -eq 0 ]; then
log "PR review complete"
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $result" >> "$LOG_DIR/timmy-reviews.log"
else
log "PR review failed (exit $exit_code)"
fi
}
# === MAIN LOOP ===
log "=== Timmy Orchestrator Started (PID $$) ==="
log "Model: qwen3:30b via Ollama | Cycle: ${CYCLE_INTERVAL}s"
WORKFORCE_CYCLE=0
while true; do
state_dir=$(gather_state)
run_triage "$state_dir"
rm -rf "$state_dir"
# Run workforce manager every 3rd cycle (~15 min)
WORKFORCE_CYCLE=$((WORKFORCE_CYCLE + 1))
if [ $((WORKFORCE_CYCLE % 3)) -eq 0 ]; then
log "Running workforce manager..."
python3 "$HOME/.hermes/bin/workforce-manager.py" all >> "$LOG_DIR/workforce-manager.log" 2>&1
log "Workforce manager complete"
fi
log "Sleeping ${CYCLE_INTERVAL}s"
sleep "$CYCLE_INTERVAL"
done
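The `REPO=… NUM=… TITLE=…` state-line format written by `gather_state` and re-parsed with `sed` in `run_triage` can be exercised offline. A minimal Python sketch of the same parsing, with the field layout assumed from the script above (note that, like the greedy `sed` patterns, it mis-parses a title that itself contains ` NUM=`):

```python
import re

def parse_state_line(line):
    """Split a 'REPO=<repo> NUM=<n> TITLE=<title>' state line into its fields."""
    m = re.match(r"REPO=(\S+) NUM=(\S+) TITLE=(.*)", line)
    return m.groups() if m else None

# Example line in the shape gather_state emits:
repo, num, title = parse_state_line(
    "REPO=Timmy_Foundation/the-nexus NUM=42 TITLE=Fix flaky retry test")
```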

bin/timmy-status.sh Executable file

@@ -0,0 +1,284 @@
#!/usr/bin/env bash
# ── Timmy Loop Status Panel ────────────────────────────────────────────
# Compact, info-dense sidebar for the tmux development loop.
# Refreshes every 8s. Designed for ~40-col wide pane.
# ───────────────────────────────────────────────────────────────────────
STATE="$HOME/Timmy-Time-dashboard/.loop/state.json"
REPO="$HOME/Timmy-Time-dashboard"
TOKEN=$(cat ~/.hermes/gitea_token 2>/dev/null)
API="http://143.198.27.163:3000/api/v1/repos/rockachopa/Timmy-time-dashboard"
# ── Colors ──
B='\033[1m' # bold
D='\033[2m' # dim
R='\033[0m' # reset
G='\033[32m' # green
Y='\033[33m' # yellow
RD='\033[31m' # red
C='\033[36m' # cyan
M='\033[35m' # magenta
W='\033[37m' # white
BG='\033[42;30m' # green bg
BY='\033[43;30m' # yellow bg
BR='\033[41;37m' # red bg
# How wide is our pane?
COLS=$(tput cols 2>/dev/null || echo 40)
hr() { printf "${D}"; printf '─%.0s' $(seq 1 "$COLS"); printf "${R}\n"; }
while true; do
clear
# ── Header ──
echo -e "${B}${C} ⚙ TIMMY DEV LOOP${R} ${D}$(date '+%H:%M:%S')${R}"
hr
# ── Loop State ──
if [ -f "$STATE" ]; then
eval "$(python3 -c "
import json, sys
with open('$STATE') as f: s = json.load(f)
print(f'CYCLE={s.get(\"cycle\",\"?\")}')" 2>/dev/null)"
STATUS=$(python3 -c "import json; print(json.load(open('$STATE'))['status'])" 2>/dev/null || echo "?")
LAST_OK=$(python3 -c "
import json
from datetime import datetime, timezone
s = json.load(open('$STATE'))
t = s.get('last_completed','')
if t:
dt = datetime.fromisoformat(t.replace('Z','+00:00'))
delta = datetime.now(timezone.utc) - dt
mins = int(delta.total_seconds() / 60)
if mins < 60: print(f'{mins}m ago')
else: print(f'{mins//60}h {mins%60}m ago')
else: print('never')
" 2>/dev/null || echo "?")
CLOSED=$(python3 -c "import json; print(len(json.load(open('$STATE')).get('issues_closed',[])))" 2>/dev/null || echo 0)
CREATED=$(python3 -c "import json; print(len(json.load(open('$STATE')).get('issues_created',[])))" 2>/dev/null || echo 0)
ERRS=$(python3 -c "import json; print(len(json.load(open('$STATE')).get('errors',[])))" 2>/dev/null || echo 0)
LAST_ISSUE=$(python3 -c "import json; print(json.load(open('$STATE')).get('last_issue','—'))" 2>/dev/null || echo "—")
LAST_PR=$(python3 -c "import json; print(json.load(open('$STATE')).get('last_pr','—'))" 2>/dev/null || echo "—")
TESTS=$(python3 -c "
import json
s = json.load(open('$STATE'))
t = s.get('test_results',{})
if t:
print(f\"{t.get('passed',0)} pass, {t.get('failed',0)} fail, {t.get('coverage','?')} cov\")
else:
print('no data')
" 2>/dev/null || echo "no data")
# Status badge
case "$STATUS" in
working) BADGE="${BY} WORKING ${R}" ;;
idle) BADGE="${BG} IDLE ${R}" ;;
error) BADGE="${BR} ERROR ${R}" ;;
*) BADGE="${D} $STATUS ${R}" ;;
esac
echo -e " ${B}Status${R} $BADGE ${D}cycle${R} ${B}$CYCLE${R}"
echo -e " ${B}Last OK${R} ${G}$LAST_OK${R} ${D}issue${R} #$LAST_ISSUE ${D}PR${R} #$LAST_PR"
echo -e " ${G}${R} $CLOSED closed ${C}+${R} $CREATED created ${RD}${R} $ERRS errs"
echo -e " ${D}Tests:${R} $TESTS"
else
echo -e " ${RD}No state file${R}"
fi
hr
# ── Ollama Status ──
echo -e " ${B}${M}◆ OLLAMA${R}"
OLLAMA_PS=$(curl -s http://localhost:11434/api/ps 2>/dev/null)
if [ -n "$OLLAMA_PS" ] && echo "$OLLAMA_PS" | python3 -c "import sys,json; json.load(sys.stdin)" &>/dev/null; then
echo "$OLLAMA_PS" | python3 -c "
import json, sys
data = json.load(sys.stdin)
models = data.get('models', [])
if not models:
    print(' \033[2m(no models loaded)\033[0m')
for m in models:
    name = m.get('name','?')
    vram = m.get('size_vram', 0) / 1e9
    print(f' \033[32m●\033[0m {name} \033[2m{vram:.1f}GB VRAM\033[0m')
" 2>/dev/null
else
echo -e " ${RD}● offline${R}"
fi
# ── Timmy Health ──
TIMMY_HEALTH=$(curl -s --max-time 2 http://localhost:8000/health 2>/dev/null)
if [ -n "$TIMMY_HEALTH" ]; then
echo "$TIMMY_HEALTH" | python3 -c "
import json, sys
h = json.load(sys.stdin)
status = h.get('status','?')
ollama = h.get('services',{}).get('ollama','?')
model = h.get('llm_model','?')
agent_st = list(h.get('agents',{}).values())[0].get('status','?') if h.get('agents') else '?'
up = int(h.get('uptime_seconds',0))
hrs, rem = divmod(up, 3600)
mins = rem // 60
print(f' \033[1m\033[35m◆ TIMMY DASHBOARD\033[0m')
print(f' \033[32m●\033[0m {status} model={model}')
print(f' \033[2magent={agent_st} ollama={ollama} up={hrs}h{mins}m\033[0m')
" 2>/dev/null
else
echo -e " ${B}${M}◆ TIMMY DASHBOARD${R}"
echo -e " ${RD}● unreachable${R}"
fi
hr
# ── Open Issues ──
echo -e " ${B}${Y}▶ OPEN ISSUES${R}"
if [ -n "$TOKEN" ]; then
curl -s "${API}/issues?state=open&limit=10&sort=created&direction=desc" \
-H "Authorization: token $TOKEN" 2>/dev/null | \
python3 -c "
import json, sys
try:
issues = json.load(sys.stdin)
if not issues:
print(' \033[2m(none)\033[0m')
for i in issues[:10]:
num = i['number']
title = i['title'][:36]
labels = ','.join(l['name'][:8] for l in i.get('labels',[]))
lbl = f' \033[2m[{labels}]\033[0m' if labels else ''
print(f' \033[33m#{num:<4d}\033[0m {title}{lbl}')
if len(issues) > 10:
print(f' \033[2m... +{len(issues)-10} more\033[0m')
except: print(' \033[2m(fetch failed)\033[0m')
" 2>/dev/null
else
echo -e " ${RD}(no token)${R}"
fi
# ── Open PRs ──
echo -e " ${B}${G}▶ OPEN PRs${R}"
if [ -n "$TOKEN" ]; then
curl -s "${API}/pulls?state=open&limit=5" \
-H "Authorization: token $TOKEN" 2>/dev/null | \
python3 -c "
import json, sys
try:
prs = json.load(sys.stdin)
if not prs:
print(' \033[2m(none)\033[0m')
for p in prs[:5]:
num = p['number']
title = p['title'][:36]
print(f' \033[32mPR #{num:<4d}\033[0m {title}')
except: print(' \033[2m(fetch failed)\033[0m')
" 2>/dev/null
else
echo -e " ${RD}(no token)${R}"
fi
hr
# ── Git Log ──
echo -e " ${B}${D}▶ RECENT COMMITS${R}"
cd "$REPO" 2>/dev/null && git log --oneline --no-decorate -6 2>/dev/null | while IFS= read -r line; do
HASH=$(echo "$line" | cut -c1-7)
MSG=$(echo "$line" | cut -c9- | cut -c1-32)
echo -e " ${C}${HASH}${R} ${D}${MSG}${R}"
done
hr
# ── Claims ──
CLAIMS_FILE="$REPO/.loop/claims.json"
if [ -f "$CLAIMS_FILE" ]; then
CLAIMS=$(python3 -c "
import json
with open('$CLAIMS_FILE') as f: c = json.load(f)
active = [(k,v) for k,v in c.items() if v.get('status') == 'active']
if active:
for k,v in active:
print(f' \033[33m⚡\033[0m #{k} claimed by {v.get(\"agent\",\"?\")[:12]}')
else:
print(' \033[2m(none active)\033[0m')
" 2>/dev/null)
if [ -n "$CLAIMS" ]; then
echo -e " ${B}${Y}▶ CLAIMED${R}"
echo "$CLAIMS"
fi
fi
# ── System ──
echo -e " ${B}${D}▶ SYSTEM${R}"
# Disk
DISK=$(df -h / 2>/dev/null | tail -1 | awk '{print $4 " free / " $2}')
echo -e " ${D}Disk:${R} $DISK"
# Memory (macOS)
if command -v memory_pressure &>/dev/null; then
MEM_PRESS=$(memory_pressure 2>/dev/null | grep "System-wide" | head -1 | sed 's/.*: //')
echo -e " ${D}Mem:${R} $MEM_PRESS"
elif [ -f /proc/meminfo ]; then
MEM=$(awk '/MemAvailable/{printf "%.1fGB free", $2/1048576}' /proc/meminfo 2>/dev/null)
echo -e " ${D}Mem:${R} $MEM"
fi
# CPU load
LOAD=$(uptime | sed 's/.*average[s]*: *//' | cut -d',' -f1 | xargs)
echo -e " ${D}Load:${R} $LOAD"
hr
# ── Notes from last cycle ──
if [ -f "$STATE" ]; then
NOTES=$(python3 -c "
import json
s = json.load(open('$STATE'))
n = s.get('notes','')
if n:
lines = n[:150]
if len(n) > 150: lines += '...'
print(lines)
" 2>/dev/null)
if [ -n "$NOTES" ]; then
echo -e " ${B}${D}▶ LAST CYCLE NOTE${R}"
echo -e " ${D}${NOTES}${R}"
hr
fi
# Timmy observations
TIMMY_OBS=$(python3 -c "
import json
s = json.load(open('$STATE'))
obs = s.get('timmy_observations','')
if obs:
lines = obs[:120]
if len(obs) > 120: lines += '...'
print(lines)
" 2>/dev/null)
if [ -n "$TIMMY_OBS" ]; then
echo -e " ${B}${M}▶ TIMMY SAYS${R}"
echo -e " ${D}${TIMMY_OBS}${R}"
hr
fi
fi
# ── Watchdog: restart loop if it died ──────────────────────────────
LOOP_LOCK="/tmp/timmy-loop.lock"
if [ -f "$LOOP_LOCK" ]; then
LOOP_PID=$(cat "$LOOP_LOCK" 2>/dev/null)
if ! kill -0 "$LOOP_PID" 2>/dev/null; then
echo -e " ${BR} ⚠ LOOP DIED — RESTARTING ${R}"
rm -f "$LOOP_LOCK"
tmux send-keys -t "dev:2.1" "bash ~/.hermes/bin/timmy-loop.sh" Enter 2>/dev/null
fi
else
# No lock file at all — loop never started or was killed
if ! pgrep -f "timmy-loop.sh" >/dev/null 2>&1; then
echo -e " ${BR} ⚠ LOOP NOT RUNNING — STARTING ${R}"
tmux send-keys -t "dev:2.1" "bash ~/.hermes/bin/timmy-loop.sh" Enter 2>/dev/null
fi
fi
echo -e " ${D}↻ 8s${R}"
sleep 8
done
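The watchdog's `kill -0` liveness probe has a direct Python equivalent via `os.kill` with signal 0; a small standalone sketch (not part of the script above) of the same check:

```python
import os

def pid_alive(pid):
    """True if a process with this PID exists; signal 0 probes without sending anything."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False          # no such process
    except PermissionError:
        return True           # process exists but belongs to another user
    return True

# The current process is, by definition, alive:
print(pid_alive(os.getpid()))
```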

bin/workforce-manager.py Executable file

@@ -0,0 +1,405 @@
#!/usr/bin/env python3
"""
workforce-manager.py — Autonomous agent workforce management
Three capabilities:
1. AUTO-ASSIGN: Match unassigned issues to the right agent by difficulty
2. QUALITY SCORE: Track merge rate per agent, demote poor performers
3. CREDIT MONITOR: Alert when agent quotas are likely exhausted
Runs as a periodic script called by the orchestrator or cron.
ACCEPTANCE CRITERIA:
Auto-assign:
- Scans all repos for unassigned issues
- Scores issue difficulty (0-10) based on: labels, title keywords, file count
- Maps difficulty to agent tier: hard(8-10)→perplexity, medium(4-7)→gemini/manus, easy(0-3)→kimi
- Assigns via Gitea API, adds appropriate labels
- Never assigns EPICs (those need human decision)
- Never reassigns already-assigned issues
- Respects agent capacity (max concurrent issues per agent)
Quality scoring:
- Pulls all closed PRs from last 7 days per agent
- Calculates: merge rate, avg time to merge, rejection count
- Agents below 40% merge rate get demoted one tier
- Agents above 80% merge rate get promoted one tier
- Writes scorecard to ~/.hermes/logs/agent-scorecards.json
Credit monitoring:
- Tracks daily PR count per agent
- Manus: alert if >250 credits used (300/day limit)
- Loop agents: alert if error rate spikes (likely rate limited)
- Writes alerts to ~/.hermes/logs/workforce-alerts.json
"""
import json
import os
import sys
import time
import urllib.request
from datetime import datetime, timedelta, timezone
from collections import defaultdict
# === CONFIG ===
GITEA_URL = "http://143.198.27.163:3000"
TOKEN_FILE = os.path.expanduser("~/.hermes/gitea_token_vps")
LOG_DIR = os.path.expanduser("~/.hermes/logs")
SCORECARD_FILE = os.path.join(LOG_DIR, "agent-scorecards.json")
ALERTS_FILE = os.path.join(LOG_DIR, "workforce-alerts.json")
REPOS = [
"Timmy_Foundation/the-nexus",
"Timmy_Foundation/autolora",
]
# Agent tiers: which agents handle which difficulty
AGENT_TIERS = {
"heavy": ["perplexity"], # 8-10 difficulty
"medium": ["gemini", "manus"], # 4-7 difficulty
"grunt": ["kimi"], # 0-3 difficulty
}
# Max concurrent issues per agent
MAX_CONCURRENT = {
"perplexity": 2, # one-shot, manual
"manus": 2, # one-shot, 300 credits/day
"gemini": 5, # 3-worker loop
"kimi": 3, # 1-worker loop
"claude": 10, # 10-worker loop, managed by its own loop
}
# Credit limits (daily)
CREDIT_LIMITS = {
"manus": 300,
}
# Keywords that indicate difficulty
HARD_KEYWORDS = [
"sovereignty", "nostr", "nip-", "economic", "architecture",
"protocol", "edge intelligence", "memory graph", "identity",
"cryptograph", "zero-knowledge", "consensus", "p2p",
"distributed", "rlhf", "grpo", "training pipeline",
]
MEDIUM_KEYWORDS = [
"feature", "integration", "api", "websocket", "three.js",
"portal", "dashboard", "visualization", "agent", "deploy",
"docker", "ssl", "infrastructure", "mcp", "inference",
]
EASY_KEYWORDS = [
"refactor", "test", "docstring", "typo", "format", "lint",
"rename", "cleanup", "dead code", "move", "extract",
"add unit test", "fix import", "update readme",
]
def api(method, path, data=None):
"""Make a Gitea API call."""
with open(TOKEN_FILE) as f:
token = f.read().strip()
url = f"{GITEA_URL}/api/v1{path}"
headers = {
"Authorization": f"token {token}",
"Content-Type": "application/json",
}
if data:
req = urllib.request.Request(url, json.dumps(data).encode(), headers, method=method)
else:
req = urllib.request.Request(url, headers=headers, method=method)
try:
resp = urllib.request.urlopen(req, timeout=15)
return json.loads(resp.read())
except Exception as e:
return {"error": str(e)}
def score_difficulty(issue):
"""Score an issue 0-10 based on title, labels, and signals."""
title = issue["title"].lower()
labels = [l["name"].lower() for l in issue.get("labels", [])]
score = 5 # default medium
# EPICs are always 10 (but we skip them for auto-assign)
if "[epic]" in title or "epic:" in title:
return 10
# Label-based scoring
if "p0-critical" in labels:
score += 2
if "p1-important" in labels:
score += 1
if "p2-backlog" in labels:
score -= 1
if "needs-design" in labels:
score += 2
if "sovereignty" in labels or "nostr" in labels:
score += 2
if "infrastructure" in labels:
score += 1
# Keyword-based scoring
for kw in HARD_KEYWORDS:
if kw in title:
score += 2
break
for kw in EASY_KEYWORDS:
if kw in title:
score -= 2
break
return max(0, min(10, score))
def get_agent_for_difficulty(score, current_loads):
"""Pick the best agent for a given difficulty score."""
if score >= 8:
tier = "heavy"
elif score >= 4:
tier = "medium"
else:
tier = "grunt"
candidates = AGENT_TIERS[tier]
# Pick the agent with the most capacity
best = None
best_capacity = -1
for agent in candidates:
max_c = MAX_CONCURRENT.get(agent, 3)
current = current_loads.get(agent, 0)
capacity = max_c - current
if capacity > best_capacity:
best_capacity = capacity
best = agent
if best_capacity <= 0:
# All agents in tier are full, try next tier down
fallback_order = ["medium", "grunt"] if tier == "heavy" else ["grunt"]
for fb_tier in fallback_order:
for agent in AGENT_TIERS[fb_tier]:
max_c = MAX_CONCURRENT.get(agent, 3)
current = current_loads.get(agent, 0)
if max_c - current > 0:
return agent
return None
return best
def auto_assign():
"""Scan repos for unassigned issues and assign to appropriate agents."""
print("=== AUTO-ASSIGN ===")
# Get current agent loads (open issues per agent)
current_loads = defaultdict(int)
all_unassigned = []
for repo in REPOS:
issues = api("GET", f"/repos/{repo}/issues?state=open&type=issues&limit=50")
if isinstance(issues, dict) and "error" in issues:
print(f" ERROR fetching {repo}: {issues['error']}")
continue
for issue in issues:
assignees = [a["login"] for a in (issue.get("assignees") or [])]
if assignees:
for a in assignees:
current_loads[a] += 1
else:
issue["_repo"] = repo
all_unassigned.append(issue)
print(f" Agent loads: {dict(current_loads)}")
print(f" Unassigned issues: {len(all_unassigned)}")
assigned_count = 0
for issue in all_unassigned:
title = issue["title"].lower()
# Skip EPICs — those need human decision
if "[epic]" in title or "epic:" in title:
print(f" SKIP #{issue['number']} (EPIC): {issue['title'][:60]}")
continue
# Skip META/audit/showcase
if "[meta]" in title or "[audit]" in title or "[showcase]" in title:
print(f" SKIP #{issue['number']} (meta): {issue['title'][:60]}")
continue
score = score_difficulty(issue)
agent = get_agent_for_difficulty(score, current_loads)
if agent is None:
print(f" SKIP #{issue['number']} (all agents full): {issue['title'][:60]}")
continue
# Assign
repo = issue["_repo"]
result = api("PATCH", f"/repos/{repo}/issues/{issue['number']}", {
"assignees": [agent]
})
if "error" not in result:
current_loads[agent] += 1
assigned_count += 1
tier = "HEAVY" if score >= 8 else "MEDIUM" if score >= 4 else "GRUNT"
print(f" ASSIGN #{issue['number']} -> {agent} (score={score} {tier}): {issue['title'][:50]}")
else:
print(f" ERROR assigning #{issue['number']}: {result['error']}")
print(f" Assigned {assigned_count} issues this cycle.")
return assigned_count
def quality_score():
"""Calculate merge rate and quality metrics per agent over last 7 days."""
print("\n=== QUALITY SCORING ===")
since = (datetime.now(timezone.utc) - timedelta(days=7)).strftime("%Y-%m-%dT%H:%M:%SZ")
agent_stats = defaultdict(lambda: {"merged": 0, "closed_unmerged": 0, "open": 0, "total": 0})
for repo in REPOS:
# Merged PRs
merged = api("GET", f"/repos/{repo}/pulls?state=closed&sort=updated&limit=50")
if isinstance(merged, dict) and "error" in merged:
continue
for pr in merged:
if pr.get("updated_at", "") < since:
continue
agent = pr["user"]["login"]
agent_stats[agent]["total"] += 1
if pr.get("merged"):
agent_stats[agent]["merged"] += 1
else:
agent_stats[agent]["closed_unmerged"] += 1
# Open PRs
open_prs = api("GET", f"/repos/{repo}/pulls?state=open&limit=50")
if isinstance(open_prs, dict) and "error" in open_prs:
continue
for pr in open_prs:
agent = pr["user"]["login"]
agent_stats[agent]["open"] += 1
agent_stats[agent]["total"] += 1
scorecards = {}
for agent, stats in sorted(agent_stats.items()):
total = stats["total"]
if total == 0:
continue
merge_rate = stats["merged"] / max(total, 1) * 100
# Determine tier adjustment
if merge_rate >= 80:
recommendation = "PROMOTE — high merge rate"
elif merge_rate < 40 and total >= 3:
recommendation = "DEMOTE — low merge rate"
else:
recommendation = "HOLD — acceptable"
scorecards[agent] = {
"merged": stats["merged"],
"closed_unmerged": stats["closed_unmerged"],
"open": stats["open"],
"total": total,
"merge_rate": round(merge_rate, 1),
"recommendation": recommendation,
"updated": datetime.now(timezone.utc).isoformat(),
}
print(f" {agent:15s} merged={stats['merged']:3d} rejected={stats['closed_unmerged']:3d} open={stats['open']:3d} rate={merge_rate:5.1f}% {recommendation}")
# Save scorecards
with open(SCORECARD_FILE, "w") as f:
json.dump(scorecards, f, indent=2)
print(f" Scorecards saved to {SCORECARD_FILE}")
return scorecards
def credit_monitor():
"""Track daily usage per agent and alert on approaching limits."""
print("\n=== CREDIT MONITORING ===")
today = datetime.now(timezone.utc).strftime("%Y-%m-%d")
daily_counts = defaultdict(int)
for repo in REPOS:
# Count PRs created today per agent
prs = api("GET", f"/repos/{repo}/pulls?state=all&sort=created&limit=50")
if isinstance(prs, dict) and "error" in prs:
continue
for pr in prs:
created = pr.get("created_at", "")[:10]
if created == today:
agent = pr["user"]["login"]
daily_counts[agent] += 1
alerts = []
for agent, count in sorted(daily_counts.items()):
limit = CREDIT_LIMITS.get(agent)
if limit:
pct = count / limit * 100
status = f"{count}/{limit} ({pct:.0f}%)"
if pct >= 80:
alert = f"WARNING: {agent} at {status} daily credits"
alerts.append({"agent": agent, "type": "credit_limit", "message": alert, "time": datetime.now(timezone.utc).isoformat()})
print(f" ⚠️ {alert}")
else:
print(f" {agent:15s} {status}")
else:
print(f" {agent:15s} {count} PRs today (no credit limit)")
# Check loop health via log files
loop_logs = {
"claude": "claude-loop.log",
"gemini": "gemini-loop.log",
"kimi": "kimi-loop.log",
}
for agent, logfile in loop_logs.items():
logpath = os.path.join(LOG_DIR, logfile)
if not os.path.exists(logpath):
continue
# Count errors in last 50 lines
try:
with open(logpath) as f:
lines = f.readlines()[-50:]
errors = sum(1 for l in lines if "FAIL" in l or "ERROR" in l or "rate limit" in l.lower())
if errors >= 10:
alert = f"WARNING: {agent} loop has {errors} errors in last 50 log lines (possible rate limit)"
alerts.append({"agent": agent, "type": "error_spike", "message": alert, "time": datetime.now(timezone.utc).isoformat()})
print(f" ⚠️ {alert}")
except:
pass
# Save alerts
existing = []
if os.path.exists(ALERTS_FILE):
try:
with open(ALERTS_FILE) as f:
existing = json.load(f)
except:
pass
existing.extend(alerts)
# Keep last 100 alerts
existing = existing[-100:]
with open(ALERTS_FILE, "w") as f:
json.dump(existing, f, indent=2)
if not alerts:
print(" No alerts. All systems nominal.")
return alerts
def main():
os.makedirs(LOG_DIR, exist_ok=True)
mode = sys.argv[1] if len(sys.argv) > 1 else "all"
if mode in ("all", "assign"):
auto_assign()
if mode in ("all", "score"):
quality_score()
if mode in ("all", "credits"):
credit_monitor()
if __name__ == "__main__":
main()
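The tier boundaries used by `score_difficulty` and `get_agent_for_difficulty` (8-10 heavy, 4-7 medium, 0-3 grunt) can be sanity-checked with a stripped-down re-implementation of the same arithmetic; the keyword lists here are abbreviated from the script above:

```python
# Abbreviated keyword lists (subset of HARD_KEYWORDS / EASY_KEYWORDS above)
HARD = ["nostr", "architecture", "consensus", "p2p"]
EASY = ["refactor", "typo", "cleanup", "dead code"]

def score(title, labels):
    """Mirror of score_difficulty: start at 5, shift by labels/keywords, clamp to 0-10."""
    title = title.lower()
    labels = [l.lower() for l in labels]
    if "[epic]" in title or "epic:" in title:
        return 10
    s = 5
    if "p0-critical" in labels: s += 2
    if "p2-backlog" in labels: s -= 1
    if any(k in title for k in HARD): s += 2   # +2 at most once, like the break above
    if any(k in title for k in EASY): s -= 2
    return max(0, min(10, s))

def tier(s):
    return "heavy" if s >= 8 else "medium" if s >= 4 else "grunt"

# "Remove dead code" + p2-backlog: 5 - 1 - 2 = 2 -> grunt
# "Nostr consensus" + p0-critical: 5 + 2 + 2 = 9 -> heavy
```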

deploy.sh Executable file

@@ -0,0 +1,97 @@
#!/usr/bin/env bash
# deploy.sh — Apply timmy-config as sidecar overlay onto ~/.hermes/
# This is the canonical way to deploy Timmy's configuration.
# Hermes-agent is the engine. timmy-config is the driver's seat.
#
# Usage: ./deploy.sh [--restart-loops]
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
HERMES_HOME="$HOME/.hermes"
TIMMY_HOME="$HOME/.timmy"
log() { echo "[deploy] $*"; }
# === Sanity checks ===
if [ ! -f "$SCRIPT_DIR/SOUL.md" ]; then
echo "ERROR: Run from timmy-config root" >&2
exit 1
fi
# === Create directories ===
mkdir -p "$HERMES_HOME/bin"
mkdir -p "$HERMES_HOME/skins"
mkdir -p "$HERMES_HOME/playbooks"
mkdir -p "$HERMES_HOME/memories"
mkdir -p "$TIMMY_HOME"
# === Deploy SOUL ===
cp "$SCRIPT_DIR/SOUL.md" "$TIMMY_HOME/SOUL.md"
log "SOUL.md -> $TIMMY_HOME/"
# === Deploy config ===
cp "$SCRIPT_DIR/config.yaml" "$HERMES_HOME/config.yaml"
log "config.yaml -> $HERMES_HOME/"
# === Deploy channel directory ===
if [ -f "$SCRIPT_DIR/channel_directory.json" ]; then
cp "$SCRIPT_DIR/channel_directory.json" "$HERMES_HOME/channel_directory.json"
log "channel_directory.json -> $HERMES_HOME/"
fi
# === Deploy memories ===
for f in "$SCRIPT_DIR"/memories/*; do
[ -f "$f" ] && cp "$f" "$HERMES_HOME/memories/"
done
log "memories/ -> $HERMES_HOME/memories/"
# === Deploy skins ===
for f in "$SCRIPT_DIR"/skins/*; do
[ -f "$f" ] && cp "$f" "$HERMES_HOME/skins/"
done
log "skins/ -> $HERMES_HOME/skins/"
# === Deploy playbooks ===
for f in "$SCRIPT_DIR"/playbooks/*; do
[ -f "$f" ] && cp "$f" "$HERMES_HOME/playbooks/"
done
log "playbooks/ -> $HERMES_HOME/playbooks/"
# === Deploy cron ===
if [ -d "$SCRIPT_DIR/cron" ]; then
mkdir -p "$HERMES_HOME/cron"
for f in "$SCRIPT_DIR"/cron/*; do
[ -f "$f" ] && cp "$f" "$HERMES_HOME/cron/"
done
log "cron/ -> $HERMES_HOME/cron/"
fi
# === Deploy bin (operational scripts) ===
for f in "$SCRIPT_DIR"/bin/*; do
[ -f "$f" ] && cp "$f" "$HERMES_HOME/bin/"
done
chmod +x "$HERMES_HOME/bin/"*.sh "$HERMES_HOME/bin/"*.py 2>/dev/null || true
log "bin/ -> $HERMES_HOME/bin/"
# === Restart loops if requested ===
if [ "${1:-}" = "--restart-loops" ]; then
log "Killing existing loops..."
pkill -f 'claude-loop.sh' 2>/dev/null || true
pkill -f 'gemini-loop.sh' 2>/dev/null || true
pkill -f 'timmy-orchestrator.sh' 2>/dev/null || true
sleep 2
log "Clearing stale locks..."
rm -rf "$HERMES_HOME/logs/claude-locks/"* 2>/dev/null || true
rm -rf "$HERMES_HOME/logs/gemini-locks/"* 2>/dev/null || true
log "Relaunching loops..."
mkdir -p "$HERMES_HOME/logs"
nohup bash "$HERMES_HOME/bin/timmy-orchestrator.sh" >> "$HERMES_HOME/logs/timmy-orchestrator.log" 2>&1 &
nohup bash "$HERMES_HOME/bin/claude-loop.sh" 2 >> "$HERMES_HOME/logs/claude-loop.log" 2>&1 &
nohup bash "$HERMES_HOME/bin/gemini-loop.sh" 1 >> "$HERMES_HOME/logs/gemini-loop.log" 2>&1 &
sleep 1
log "Loops relaunched."
fi
log "Deploy complete. timmy-config applied to $HERMES_HOME/"