# Compare commits

`pre-agent-`...`docs/autom` (91 commits)
| Author | SHA1 | Date |
|---|---|---|
| | ffea2964c4 | |
| | 5deaea26b3 | |
| | 9d0ea981db | |
| | 2df8a1e627 | |
| | 996e096da0 | |
| | eba8c2d320 | |
| | 3d2bf6c1cf | |
| | 8f0f6a0500 | |
| | 5571b94a81 | |
| | 508140ac59 | |
| | 3789af2622 | |
| | 2ff28609f8 | |
| | 4372a406bf | |
| | 0e126be7e8 | |
| | fcdbd57eb8 | |
| | 789b23c69a | |
| | 0ca78ae17f | |
| | 832b3f4188 | |
| | c9679ed827 | |
| | c82932c37b | |
| | b92bcd52a5 | |
| | 6ee2d50bcd | |
| | 11e8dc8931 | |
| | 9c235616bf | |
| | 4fee656eff | |
| | 247206bc60 | |
| | 24376306a8 | |
| | 7959f0f4a3 | |
| | 2facaf12b0 | |
| | e650996966 | |
| | 6693cccd88 | |
| | fd6b27b77e | |
| | 0b32b51626 | |
| | 88fe21a88c | |
| | 2f4ad87e7b | |
| | 87f2961f9d | |
| | 53ae5db414 | |
| | 70d3f2594d | |
| | 8502de0deb | |
| | 789b47aebb | |
| | aab1328367 | |
| | 6be9a268c4 | |
| | 949d33c88a | |
| | a69c002ede | |
| | a341a61180 | |
| | 56ba35db40 | |
| | 3104f31f52 | |
| | e149ce1dfa | |
| | 62f6665487 | |
| | fc0460d803 | |
| | f7e2971863 | |
| | 167ed0f27d | |
| | a1d218417f | |
| | 05d503682a | |
| | 5b162e27d7 | |
| | 0b0ac43041 | |
| | c8003c28ba | |
| | 0b77282831 | |
| | f263156cf1 | |
| | 0eaf0b3d0f | |
| | 53ffca38a1 | |
| | fd26354678 | |
| | c9b6869d9f | |
| | 7f912b7662 | |
| | 4042a23441 | |
| | 8f10b5fc92 | |
| | fbd1b9e88f | |
| | ea38041514 | |
| | 579a775a0a | |
| | 689a2331d5 | |
| | 2ddda436a9 | |
| | d72ae92189 | |
| | 2384908be7 | |
| | 82ba8896b3 | |
| | 3b34faeb17 | |
| | f9be0eb481 | |
| | 383a969791 | |
| | f46a4826d9 | |
| | 3b1763ce4c | |
| | 78f5216540 | |
| | 49020b34d9 | |
| | 7468a6d063 | |
| | f9155b28e3 | |
| | 16675abd79 | |
| | 1fce489364 | |
| | 7c7e19f6d2 | |
| | 8fd451fb52 | |
| | 0b63da1c9e | |
| | 20532819e9 | |
| | 27c1fb940d | |
| | 56364e62b4 | |
## DEPRECATED.md

```diff
@@ -1,22 +1,27 @@
-# DEPRECATED — Bash Loop Scripts Removed
+# DEPRECATED — policy, not proof of runtime absence
 
-**Date:** 2026-03-25
-**Reason:** Replaced by sovereign-orchestration (SQLite + Python single-process executor)
+Original deprecation date: 2026-03-25
 
-## What was removed
-- claude-loop.sh, gemini-loop.sh, agent-loop.sh
-- timmy-orchestrator.sh, workforce-manager.py
-- nexus-merge-bot.sh, claudemax-watchdog.sh, timmy-loopstat.sh
+This file records the policy direction: long-running ad hoc bash loops were meant
+to be replaced by Hermes-side orchestration.
 
-## What replaces them
-**Repo:** Timmy_Foundation/sovereign-orchestration
-**Entry point:** `python3 src/sovereign_executor.py --workers 3 --poll 30`
-**Features:** SQLite task queue, crash recovery, dedup, playbooks, MCP server
-**Issues:** #29 (fix imports), #30 (deploy as service)
+But policy and world state diverged.
+Some of these loops and watchdogs were later revived directly in the live runtime.
 
-## Why
-The bash loops crash-looped, produced zero work after relaunch, had no crash
-recovery, no dedup, and required 8 separate scripts. The Python executor is
-one process with SQLite durability.
+Do NOT use this file as proof that something is gone.
+Use `docs/automation-inventory.md` as the current world-state document.
 
-Do NOT recreate bash loops. If the executor is broken, fix the executor.
+## Deprecated by policy
+- old dashboard-era loop stacks
+- old tmux resurrection paths
+- old startup paths that recreate `timmy-loop`
+- stale repo-specific automation tied to `Timmy-time-dashboard` or `the-matrix`
+
+## Current rule
+If an automation question matters, audit:
+1. launchd loaded jobs
+2. live process table
+3. Hermes cron list
+4. the automation inventory doc
+
+Only then decide what is actually live.
```
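The removed "What replaces them" entry above describes a SQLite task queue with crash recovery and dedup. A minimal stdlib sketch of that pattern, not the actual `sovereign_executor.py` code: dedup via a `UNIQUE` key, crash recovery by resetting orphaned `running` rows to `pending`.

```python
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS tasks (
    id INTEGER PRIMARY KEY,
    key TEXT UNIQUE,                        -- dedup: one row per task key
    payload TEXT NOT NULL,
    status TEXT NOT NULL DEFAULT 'pending'  -- pending | running | done
);
"""

class TaskQueue:
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.executescript(SCHEMA)

    def enqueue(self, key, payload):
        # INSERT OR IGNORE makes re-enqueueing an existing key a no-op.
        cur = self.db.execute(
            "INSERT OR IGNORE INTO tasks (key, payload) VALUES (?, ?)",
            (key, payload))
        self.db.commit()
        return cur.rowcount == 1  # True only if a new task was actually added

    def claim(self):
        # Atomically claim the oldest pending task for a worker.
        with self.db:
            row = self.db.execute(
                "SELECT id, payload FROM tasks WHERE status='pending' "
                "ORDER BY id LIMIT 1").fetchone()
            if row is None:
                return None
            self.db.execute("UPDATE tasks SET status='running' WHERE id=?",
                            (row[0],))
        return row

    def complete(self, task_id):
        self.db.execute("UPDATE tasks SET status='done' WHERE id=?", (task_id,))
        self.db.commit()

    def recover(self):
        # Crash recovery: tasks a dead worker left 'running' go back to pending.
        cur = self.db.execute(
            "UPDATE tasks SET status='pending' WHERE status='running'")
        self.db.commit()
        return cur.rowcount
```

Because the queue state lives in one SQLite file, a restarted executor only needs to call `recover()` at boot to resume interrupted work.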
## README.md (30)

````diff
@@ -2,7 +2,7 @@
 
 Timmy's sovereign configuration. Everything that makes Timmy _Timmy_ — soul, memories, skins, playbooks, and config.
 
-This repo is the canonical source of truth for Timmy's identity and operational state. Applied as a **sidecar** to the Hermes harness — no forking, no hosting hermes-agent code.
+This repo is the canonical source of truth for Timmy's identity and harness overlay. Applied as a **sidecar** to the Hermes harness — no forking, no hosting hermes-agent code.
 
 ## Structure
 
@@ -14,22 +14,42 @@ timmy-config/
 ├── DEPRECATED.md ← What was removed and why
 ├── config.yaml ← Hermes harness configuration
 ├── channel_directory.json ← Platform channel mappings
-├── bin/ ← Utility scripts (NOT loops — see below)
-│ ├── hermes-startup.sh ← Hermes boot sequence
+├── bin/ ← Sidecar-managed operational scripts
+│ ├── hermes-startup.sh ← Dormant startup path (audit before enabling)
 │ ├── agent-dispatch.sh ← Manual agent dispatch
 │ ├── ops-panel.sh ← Ops dashboard panel
 │ ├── ops-gitea.sh ← Gitea ops helpers
 │ ├── pipeline-freshness.sh ← Session/export drift check
 │ └── timmy-status.sh ← Status check
 ├── memories/ ← Persistent memory YAML
 ├── skins/ ← UI skins (timmy skin)
 ├── playbooks/ ← Agent playbooks (YAML)
-└── cron/ ← Cron job definitions
+├── cron/ ← Cron job definitions
+├── docs/automation-inventory.md ← Live automation + stale-state inventory
+└── training/ ← Transitional training recipes, not canonical lived data
 ```
 
+## Boundary
+
+`timmy-config` owns identity, conscience, memories, skins, playbooks, channel
+maps, and harness-side orchestration glue.
+
+`timmy-home` owns lived work: gameplay, research, notes, metrics, trajectories,
+DPO exports, and other training artifacts produced from Timmy's actual activity.
+
+If a file answers "who is Timmy?" or "how does Hermes host him?", it belongs
+here. If it answers "what has Timmy done or learned?" it belongs in
+`timmy-home`.
+
+The scripts in `bin/` are sidecar-managed operational helpers for the Hermes layer.
+Do NOT assume older prose about removed loops is still true at runtime.
+Audit the live machine first, then read `docs/automation-inventory.md` for the
+current reality and stale-state risks.
+
 ## Orchestration: Huey
 
 All orchestration (triage, PR review, dispatch) runs via [Huey](https://github.com/coleifer/huey) with SQLite.
-`orchestration.py` (6 lines) + `tasks.py` (~70 lines) replace the entire sovereign-orchestration repo (3,846 lines).
+`orchestration.py` + `tasks.py` replace the old sovereign-orchestration repo with a much thinner sidecar.
 
 ```bash
 pip install huey
````
## assets/Vassal Rising.mp3 (BIN, new file)

Binary file not shown.
## autolora/manifest.yaml (new file, +62)

```yaml
# Timmy Adapter Manifest
# Only version adapters, never base models. Base models are reproducible downloads.
# Adapters are the diff. The manifest is the record.

bases:
  hermes3-8b-4bit:
    source: mlx-community/Hermes-3-Llama-3.1-8B-4bit
    local: ~/models/Hermes-3-Llama-3.1-8B-4bit
    arch: llama3
    params: 8B
    quant: 4-bit MLX

  hermes4-14b-4bit:
    source: mlx-community/Hermes-4-14B-4bit
    local: ~/models/hermes4-14b-mlx
    arch: qwen3
    params: 14.8B
    quant: 4-bit MLX

adapters:
  timmy-v0:
    base: hermes3-8b-4bit
    date: 2026-03-24
    status: retired
    data: 1154 sessions (technical only, no crisis/pastoral)
    training: { lr: 2e-6, rank: 8, iters: 1000, best_iter: 800, val_loss: 2.134 }
    eval: { identity: PASS, sovereignty: PASS, coding: PASS, crisis: FAIL, faith: FAIL }
    notes: "First adapter. Crisis fails — data was 99% technical. Sacred rule: REJECTED."

  timmy-v0-nan-run1:
    base: hermes3-8b-4bit
    date: 2026-03-24
    status: rejected
    notes: "NaN at iter 70. lr=1e-5 too high for 4-bit. Dead on arrival."

  timmy-v0.1:
    base: hermes3-8b-4bit
    date: 2026-03-25
    status: retired
    data: 1203 train / 135 valid (enriched with 49 crisis/faith synthetic)
    training: { lr: 5e-6, rank: 8, iters: 600, val_loss: 2.026 }
    eval: { identity: PASS, sovereignty: PASS, coding: PASS, crisis: PARTIAL, faith: FAIL }
    notes: "Crisis partial — mentions seeking help but no 988/gospel. Rank 8 can't override base priors."

  timmy-v0.2:
    base: hermes3-8b-4bit
    date: 2026-03-25
    status: rejected
    data: 1214 train / 141 valid (12 targeted crisis/faith examples, 5x duplicated)
    training: { lr: 5e-6, rank: 16, iters: 800 }
    eval: "NaN at iter 100. Rank 16 + lr 5e-6 unstable on 4-bit."
    notes: "Dead. Halve lr when doubling rank."

  # NEXT
  timmy-v1.0:
    base: hermes4-14b-4bit
    date: 2026-03-26
    status: rejected
    data: 1125 train / 126 valid (same curated set, reused from 8B — NOT re-tokenized)
    training: { lr: 1e-6, rank: 16, iters: 800 }
    eval: "Val NaN iter 100, train NaN iter 160. Dead."
    notes: "Data was pre-truncated for Llama3 tokenizer, not Qwen3. Must re-run clean_data.py with 14B tokenizer before v1.1."
```
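The failure notes in this manifest encode empirical stability limits for LoRA on these 4-bit bases: lr 1e-5 NaN'd at rank 8, lr 5e-6 NaN'd at rank 16, and the stated heuristic is "halve lr when doubling rank." A small hypothetical pre-flight check (not part of the repo) that screens a proposed config against exactly those observations:

```python
# Hypothetical helper distilled from the manifest's failure notes.
# These are observations from this training log, not general truths:
#   - timmy-v0-nan-run1: lr 1e-5 NaN'd at rank 8
#   - timmy-v0.2:        lr 5e-6 NaN'd at rank 16
# Known-bad learning-rate floors per rank on a 4-bit base: at each rank,
# runs at or above this lr produced NaN.
NAN_OBSERVED = {8: 1e-5, 16: 5e-6}

def screen_config(lr, rank):
    """Return warnings for a proposed (lr, rank) on a 4-bit MLX base."""
    warnings = []
    for bad_rank, bad_lr in NAN_OBSERVED.items():
        # Higher rank is less stable, so a NaN at (bad_rank, bad_lr) also
        # flags any config with rank >= bad_rank and lr >= bad_lr.
        if rank >= bad_rank and lr >= bad_lr:
            warnings.append(
                f"lr={lr} at rank={rank} matches a NaN'd run "
                f"(rank {bad_rank}, lr {bad_lr})")
    return warnings
```

Under these rules the successful timmy-v0.1 config (lr 5e-6, rank 8) passes clean, while the timmy-v0.2 config (lr 5e-6, rank 16) is flagged.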
## bin/claude-loop.sh (new executable file, +620)

```bash
#!/usr/bin/env bash
# claude-loop.sh — Parallel Claude Code agent dispatch loop
# Runs N workers concurrently against the Gitea backlog.
# Gracefully handles rate limits with backoff.
#
# Usage: claude-loop.sh [NUM_WORKERS] (default: 2)

set -euo pipefail

# === CONFIG ===
NUM_WORKERS="${1:-2}"
MAX_WORKERS=10          # absolute ceiling
WORKTREE_BASE="$HOME/worktrees"
GITEA_URL="http://143.198.27.163:3000"
GITEA_TOKEN=$(cat "$HOME/.hermes/claude_token")
CLAUDE_TIMEOUT=900      # 15 min per issue
COOLDOWN=15             # seconds between issues — stagger clones
RATE_LIMIT_SLEEP=30     # initial sleep on rate limit
MAX_RATE_SLEEP=120      # max backoff on rate limit
LOG_DIR="$HOME/.hermes/logs"
SKIP_FILE="$LOG_DIR/claude-skip-list.json"
LOCK_DIR="$LOG_DIR/claude-locks"
ACTIVE_FILE="$LOG_DIR/claude-active.json"

mkdir -p "$LOG_DIR" "$WORKTREE_BASE" "$LOCK_DIR"

# Initialize files
[ -f "$SKIP_FILE" ] || echo '{}' > "$SKIP_FILE"
echo '{}' > "$ACTIVE_FILE"

# === SHARED FUNCTIONS ===
log() {
  local msg="[$(date '+%Y-%m-%d %H:%M:%S')] $*"
  echo "$msg" >> "$LOG_DIR/claude-loop.log"
}

lock_issue() {
  local issue_key="$1"
  local lockfile="$LOCK_DIR/$issue_key.lock"
  if mkdir "$lockfile" 2>/dev/null; then
    echo $$ > "$lockfile/pid"
    return 0
  fi
  return 1
}

unlock_issue() {
  local issue_key="$1"
  rm -rf "$LOCK_DIR/$issue_key.lock" 2>/dev/null
}

mark_skip() {
  local issue_num="$1"
  local reason="$2"
  local skip_hours="${3:-1}"
  python3 -c "
import json, time, fcntl
with open('$SKIP_FILE', 'r+') as f:
    fcntl.flock(f, fcntl.LOCK_EX)
    try: skips = json.load(f)
    except: skips = {}
    skips[str($issue_num)] = {
        'until': time.time() + ($skip_hours * 3600),
        'reason': '$reason',
        'failures': skips.get(str($issue_num), {}).get('failures', 0) + 1
    }
    if skips[str($issue_num)]['failures'] >= 3:
        skips[str($issue_num)]['until'] = time.time() + (6 * 3600)
    f.seek(0)
    f.truncate()
    json.dump(skips, f, indent=2)
" 2>/dev/null
  log "SKIP: #${issue_num} — ${reason}"
}

update_active() {
  local worker="$1" issue="$2" repo="$3" status="$4"
  python3 -c "
import json, fcntl
with open('$ACTIVE_FILE', 'r+') as f:
    fcntl.flock(f, fcntl.LOCK_EX)
    try: active = json.load(f)
    except: active = {}
    if '$status' == 'done':
        active.pop('$worker', None)
    else:
        active['$worker'] = {'issue': '$issue', 'repo': '$repo', 'status': '$status'}
    f.seek(0)
    f.truncate()
    json.dump(active, f, indent=2)
" 2>/dev/null
}

cleanup_workdir() {
  local wt="$1"
  rm -rf "$wt" 2>/dev/null || true
}

get_next_issue() {
  python3 -c "
import json, sys, time, urllib.request, os

token = '${GITEA_TOKEN}'
base = '${GITEA_URL}'
repos = [
    'Timmy_Foundation/the-nexus',
    'Timmy_Foundation/autolora',
]

# Load skip list
try:
    with open('${SKIP_FILE}') as f: skips = json.load(f)
except: skips = {}

# Load active issues (to avoid double-picking)
try:
    with open('${ACTIVE_FILE}') as f:
        active = json.load(f)
    active_issues = {v['issue'] for v in active.values()}
except:
    active_issues = set()

all_issues = []
for repo in repos:
    url = f'{base}/api/v1/repos/{repo}/issues?state=open&type=issues&limit=50&sort=created'
    req = urllib.request.Request(url, headers={'Authorization': f'token {token}'})
    try:
        resp = urllib.request.urlopen(req, timeout=10)
        issues = json.loads(resp.read())
        for i in issues:
            i['_repo'] = repo
        all_issues.extend(issues)
    except:
        continue

# Sort by priority: URGENT > P0 > P1 > bugs > LHF > rest
def priority(i):
    t = i['title'].lower()
    if '[urgent]' in t or 'urgent:' in t: return 0
    if '[p0]' in t: return 1
    if '[p1]' in t: return 2
    if '[bug]' in t: return 3
    if 'lhf:' in t or 'lhf ' in t: return 4
    if '[p2]' in t: return 5
    return 6

all_issues.sort(key=priority)

for i in all_issues:
    assignees = [a['login'] for a in (i.get('assignees') or [])]
    # Take issues assigned to claude OR unassigned (self-assign)
    if assignees and 'claude' not in assignees:
        continue

    title = i['title'].lower()
    if '[philosophy]' in title: continue
    if '[epic]' in title or 'epic:' in title: continue
    if '[showcase]' in title: continue
    if '[do not close' in title: continue
    if '[meta]' in title: continue
    if '[governing]' in title: continue
    if '[permanent]' in title: continue
    if '[morning report]' in title: continue
    if '[retro]' in title: continue
    if '[intel]' in title: continue
    if 'master escalation' in title: continue
    if any(a['login'] == 'Rockachopa' for a in (i.get('assignees') or [])): continue

    num_str = str(i['number'])
    if num_str in active_issues: continue

    entry = skips.get(num_str, {})
    if entry and entry.get('until', 0) > time.time(): continue

    lock = '${LOCK_DIR}/' + i['_repo'].replace('/', '-') + '-' + num_str + '.lock'
    if os.path.isdir(lock): continue

    repo = i['_repo']
    owner, name = repo.split('/')

    # Self-assign if unassigned
    if not assignees:
        try:
            data = json.dumps({'assignees': ['claude']}).encode()
            req2 = urllib.request.Request(
                f'{base}/api/v1/repos/{repo}/issues/{i[\"number\"]}',
                data=data, method='PATCH',
                headers={'Authorization': f'token {token}', 'Content-Type': 'application/json'})
            urllib.request.urlopen(req2, timeout=5)
        except: pass

    print(json.dumps({
        'number': i['number'],
        'title': i['title'],
        'repo_owner': owner,
        'repo_name': name,
        'repo': repo,
    }))
    sys.exit(0)

print('null')
" 2>/dev/null
}

build_prompt() {
  local issue_num="$1"
  local issue_title="$2"
  local worktree="$3"
  local repo_owner="$4"
  local repo_name="$5"

  cat <<PROMPT
You are Claude, an autonomous code agent on the ${repo_name} project.

YOUR ISSUE: #${issue_num} — "${issue_title}"

GITEA API: ${GITEA_URL}/api/v1
GITEA TOKEN: ${GITEA_TOKEN}
REPO: ${repo_owner}/${repo_name}
WORKING DIRECTORY: ${worktree}

== YOUR POWERS ==
You can do ANYTHING a developer can do.

1. READ the issue and any comments for context:
   curl -s -H "Authorization: token ${GITEA_TOKEN}" "${GITEA_URL}/api/v1/repos/${repo_owner}/${repo_name}/issues/${issue_num}"
   curl -s -H "Authorization: token ${GITEA_TOKEN}" "${GITEA_URL}/api/v1/repos/${repo_owner}/${repo_name}/issues/${issue_num}/comments"

2. DO THE WORK. Code, test, fix, refactor — whatever the issue needs.
   - Check for tox.ini / Makefile / package.json for test/lint commands
   - Run tests if the project has them
   - Follow existing code conventions

3. COMMIT with conventional commits: fix: / feat: / refactor: / test: / chore:
   Include "Fixes #${issue_num}" or "Refs #${issue_num}" in the message.

4. PUSH to your branch (claude/issue-${issue_num}) and CREATE A PR:
   git push origin claude/issue-${issue_num}
   curl -s -X POST "${GITEA_URL}/api/v1/repos/${repo_owner}/${repo_name}/pulls" \\
     -H "Authorization: token ${GITEA_TOKEN}" \\
     -H "Content-Type: application/json" \\
     -d '{"title": "[claude] <description> (#${issue_num})", "body": "Fixes #${issue_num}\n\n<describe what you did>", "head": "claude/issue-${issue_num}", "base": "main"}'

5. COMMENT on the issue when done:
   curl -s -X POST "${GITEA_URL}/api/v1/repos/${repo_owner}/${repo_name}/issues/${issue_num}/comments" \\
     -H "Authorization: token ${GITEA_TOKEN}" \\
     -H "Content-Type: application/json" \\
     -d '{"body": "PR created. <summary of changes>"}'

== RULES ==
- Read CLAUDE.md or project README first for conventions
- If the project has tox, use tox. If npm, use npm. Follow the project.
- Never use --no-verify on git commands.
- If tests fail after 2 attempts, STOP and comment on the issue explaining why.
- Be thorough but focused. Fix the issue, don't refactor the world.

== CRITICAL: ALWAYS COMMIT AND PUSH ==
- NEVER exit without committing your work. Even partial progress MUST be committed.
- Before you finish, ALWAYS: git add -A && git commit && git push origin claude/issue-${issue_num}
- ALWAYS create a PR before exiting. No exceptions.
- If a branch already exists with prior work, check it out and CONTINUE from where it left off.
- Check: git ls-remote origin claude/issue-${issue_num} — if it exists, pull it first.
- Your work is WASTED if it's not pushed. Push early, push often.
PROMPT
}

# === WORKER FUNCTION ===
run_worker() {
  local worker_id="$1"
  local consecutive_failures=0

  log "WORKER-${worker_id}: Started"

  while true; do
    # Backoff on repeated failures
    if [ "$consecutive_failures" -ge 5 ]; then
      local backoff=$((RATE_LIMIT_SLEEP * (consecutive_failures / 5)))
      [ "$backoff" -gt "$MAX_RATE_SLEEP" ] && backoff=$MAX_RATE_SLEEP
      log "WORKER-${worker_id}: BACKOFF ${backoff}s (${consecutive_failures} failures)"
      sleep "$backoff"
      consecutive_failures=0
    fi

    # RULE: Merge existing PRs BEFORE creating new work.
    # Check for open PRs from claude, rebase + merge them first.
    local our_prs
    our_prs=$(curl -sf -H "Authorization: token ${GITEA_TOKEN}" \
      "${GITEA_URL}/api/v1/repos/Timmy_Foundation/the-nexus/pulls?state=open&limit=5" 2>/dev/null | \
      python3 -c "
import sys, json
prs = json.loads(sys.stdin.buffer.read())
ours = [p for p in prs if p['user']['login'] == 'claude'][:3]
for p in ours:
    print(f'{p[\"number\"]}|{p[\"head\"][\"ref\"]}|{p.get(\"mergeable\",False)}')
" 2>/dev/null)

    if [ -n "$our_prs" ]; then
      local pr_clone_url="http://claude:${GITEA_TOKEN}@143.198.27.163:3000/Timmy_Foundation/the-nexus.git"
      echo "$our_prs" | while IFS='|' read pr_num branch mergeable; do
        [ -z "$pr_num" ] && continue
        if [ "$mergeable" = "True" ]; then
          curl -sf -X POST -H "Authorization: token ${GITEA_TOKEN}" \
            -H "Content-Type: application/json" \
            -d '{"Do":"squash","delete_branch_after_merge":true}' \
            "${GITEA_URL}/api/v1/repos/Timmy_Foundation/the-nexus/pulls/${pr_num}/merge" >/dev/null 2>&1
          log "WORKER-${worker_id}: merged own PR #${pr_num}"
          sleep 3
        else
          # Rebase and push
          local tmpdir="/tmp/claude-rebase-${pr_num}"
          cd "$HOME"; rm -rf "$tmpdir" 2>/dev/null
          git clone -q --depth=50 -b "$branch" "$pr_clone_url" "$tmpdir" 2>/dev/null
          if [ -d "$tmpdir/.git" ]; then
            cd "$tmpdir"
            git fetch origin main 2>/dev/null
            if git rebase origin/main 2>/dev/null; then
              git push -f origin "$branch" 2>/dev/null
              sleep 3
              curl -sf -X POST -H "Authorization: token ${GITEA_TOKEN}" \
                -H "Content-Type: application/json" \
                -d '{"Do":"squash","delete_branch_after_merge":true}' \
                "${GITEA_URL}/api/v1/repos/Timmy_Foundation/the-nexus/pulls/${pr_num}/merge" >/dev/null 2>&1
              log "WORKER-${worker_id}: rebased+merged PR #${pr_num}"
            else
              git rebase --abort 2>/dev/null
              curl -sf -X PATCH -H "Authorization: token ${GITEA_TOKEN}" \
                -H "Content-Type: application/json" -d '{"state":"closed"}' \
                "${GITEA_URL}/api/v1/repos/Timmy_Foundation/the-nexus/pulls/${pr_num}" >/dev/null 2>&1
              log "WORKER-${worker_id}: closed unrebaseable PR #${pr_num}"
            fi
            cd "$HOME"; rm -rf "$tmpdir"
          fi
        fi
      done
    fi

    # Get next issue
    issue_json=$(get_next_issue)

    if [ "$issue_json" = "null" ] || [ -z "$issue_json" ]; then
      update_active "$worker_id" "" "" "idle"
      sleep 10
      continue
    fi

    issue_num=$(echo "$issue_json" | python3 -c "import sys,json; print(json.load(sys.stdin)['number'])")
    issue_title=$(echo "$issue_json" | python3 -c "import sys,json; print(json.load(sys.stdin)['title'])")
    repo_owner=$(echo "$issue_json" | python3 -c "import sys,json; print(json.load(sys.stdin)['repo_owner'])")
    repo_name=$(echo "$issue_json" | python3 -c "import sys,json; print(json.load(sys.stdin)['repo_name'])")
    issue_key="${repo_owner}-${repo_name}-${issue_num}"
    branch="claude/issue-${issue_num}"
    # Use UUID for worktree dir to prevent collisions under high concurrency
    wt_uuid=$(/usr/bin/uuidgen 2>/dev/null || python3 -c "import uuid; print(uuid.uuid4())")
    worktree="${WORKTREE_BASE}/claude-${issue_num}-${wt_uuid}"

    # Try to lock
    if ! lock_issue "$issue_key"; then
      sleep 5
      continue
    fi

    log "WORKER-${worker_id}: === ISSUE #${issue_num}: ${issue_title} (${repo_owner}/${repo_name}) ==="
    update_active "$worker_id" "$issue_num" "${repo_owner}/${repo_name}" "working"

    # Clone and pick up prior work if it exists
    rm -rf "$worktree" 2>/dev/null
    CLONE_URL="http://claude:${GITEA_TOKEN}@143.198.27.163:3000/${repo_owner}/${repo_name}.git"

    # Check if branch already exists on remote (prior work to continue)
    if git ls-remote --heads "$CLONE_URL" "$branch" 2>/dev/null | grep -q "$branch"; then
      log "WORKER-${worker_id}: Found existing branch $branch — continuing prior work"
      if ! git clone --depth=50 -b "$branch" "$CLONE_URL" "$worktree" >/dev/null 2>&1; then
        log "WORKER-${worker_id}: ERROR cloning branch $branch for #${issue_num}"
        unlock_issue "$issue_key"
        consecutive_failures=$((consecutive_failures + 1))
        sleep "$COOLDOWN"
        continue
      fi
      # Rebase on main to resolve stale conflicts from closed PRs
      cd "$worktree"
      git fetch origin main >/dev/null 2>&1
      if ! git rebase origin/main >/dev/null 2>&1; then
        # Rebase failed — start fresh from main
        log "WORKER-${worker_id}: Rebase failed for $branch, starting fresh"
        cd "$HOME"
        rm -rf "$worktree"
        git clone --depth=1 -b main "$CLONE_URL" "$worktree" >/dev/null 2>&1
        cd "$worktree"
        git checkout -b "$branch" >/dev/null 2>&1
      fi
    else
      if ! git clone --depth=1 -b main "$CLONE_URL" "$worktree" >/dev/null 2>&1; then
        log "WORKER-${worker_id}: ERROR cloning for #${issue_num}"
        unlock_issue "$issue_key"
        consecutive_failures=$((consecutive_failures + 1))
        sleep "$COOLDOWN"
        continue
      fi
      cd "$worktree"
      git checkout -b "$branch" >/dev/null 2>&1
    fi
    cd "$worktree"

    # Build prompt and run
    prompt=$(build_prompt "$issue_num" "$issue_title" "$worktree" "$repo_owner" "$repo_name")

    log "WORKER-${worker_id}: Launching Claude Code for #${issue_num}..."
    CYCLE_START=$(date +%s)

    set +e
    cd "$worktree"
    env -u CLAUDECODE gtimeout "$CLAUDE_TIMEOUT" claude \
      --print \
      --model sonnet \
      --dangerously-skip-permissions \
      -p "$prompt" \
      </dev/null >> "$LOG_DIR/claude-${issue_num}.log" 2>&1
    exit_code=$?
    set -e

    CYCLE_END=$(date +%s)
    CYCLE_DURATION=$(( CYCLE_END - CYCLE_START ))

    # ── SALVAGE: Never waste work. Commit+push whatever exists. ──
    cd "$worktree" 2>/dev/null || true
    DIRTY=$(git status --porcelain 2>/dev/null | wc -l | tr -d ' ')
    UNPUSHED=$(git log --oneline "origin/main..HEAD" 2>/dev/null | wc -l | tr -d ' ')

    if [ "${DIRTY:-0}" -gt 0 ]; then
      log "WORKER-${worker_id}: SALVAGING $DIRTY dirty files for #${issue_num}"
      git add -A 2>/dev/null
      git commit -m "WIP: Claude Code progress on #${issue_num}

Automated salvage commit — agent session ended (exit $exit_code).
Work in progress, may need continuation." 2>/dev/null || true
    fi

    # Push if we have any commits (including salvaged ones)
    UNPUSHED=$(git log --oneline "origin/main..HEAD" 2>/dev/null | wc -l | tr -d ' ')
    if [ "${UNPUSHED:-0}" -gt 0 ]; then
      git push -u origin "$branch" 2>/dev/null && \
        log "WORKER-${worker_id}: Pushed $UNPUSHED commit(s) on $branch" || \
        log "WORKER-${worker_id}: Push failed for $branch"
    fi

    # ── Create PR if branch was pushed and no PR exists yet ──
    pr_num=$(curl -sf "${GITEA_URL}/api/v1/repos/${repo_owner}/${repo_name}/pulls?state=open&head=${repo_owner}:${branch}&limit=1" \
      -H "Authorization: token ${GITEA_TOKEN}" | python3 -c "
import sys,json
prs = json.load(sys.stdin)
if prs: print(prs[0]['number'])
else: print('')
" 2>/dev/null)

    if [ -z "$pr_num" ] && [ "${UNPUSHED:-0}" -gt 0 ]; then
      pr_num=$(curl -sf -X POST "${GITEA_URL}/api/v1/repos/${repo_owner}/${repo_name}/pulls" \
        -H "Authorization: token ${GITEA_TOKEN}" \
        -H "Content-Type: application/json" \
        -d "$(python3 -c "
import json
print(json.dumps({
    'title': 'Claude: Issue #${issue_num}',
    'head': '${branch}',
    'base': 'main',
    'body': 'Automated PR for issue #${issue_num}.\nExit code: ${exit_code}'
}))
")" | python3 -c "import sys,json; print(json.load(sys.stdin).get('number',''))" 2>/dev/null)
      [ -n "$pr_num" ] && log "WORKER-${worker_id}: Created PR #${pr_num} for issue #${issue_num}"
    fi

    # ── Merge + close on success ──
    if [ "$exit_code" -eq 0 ]; then
      log "WORKER-${worker_id}: SUCCESS #${issue_num}"

      if [ -n "$pr_num" ]; then
        curl -sf -X POST "${GITEA_URL}/api/v1/repos/${repo_owner}/${repo_name}/pulls/${pr_num}/merge" \
          -H "Authorization: token ${GITEA_TOKEN}" \
          -H "Content-Type: application/json" \
          -d '{"Do": "squash"}' >/dev/null 2>&1 || true
        curl -sf -X PATCH "${GITEA_URL}/api/v1/repos/${repo_owner}/${repo_name}/issues/${issue_num}" \
          -H "Authorization: token ${GITEA_TOKEN}" \
          -H "Content-Type: application/json" \
          -d '{"state": "closed"}' >/dev/null 2>&1 || true
        log "WORKER-${worker_id}: PR #${pr_num} merged, issue #${issue_num} closed"
      fi

      consecutive_failures=0

    elif [ "$exit_code" -eq 124 ]; then
      log "WORKER-${worker_id}: TIMEOUT #${issue_num} (work saved in PR)"
      consecutive_failures=$((consecutive_failures + 1))

    else
      # Check for rate limit
      if grep -q "rate_limit\|rate limit\|429\|overloaded" "$LOG_DIR/claude-${issue_num}.log" 2>/dev/null; then
        log "WORKER-${worker_id}: RATE LIMITED on #${issue_num} — backing off (work saved)"
        consecutive_failures=$((consecutive_failures + 3))
      else
        log "WORKER-${worker_id}: FAILED #${issue_num} exit ${exit_code} (work saved in PR)"
        consecutive_failures=$((consecutive_failures + 1))
      fi
    fi

    # ── METRICS: structured JSONL for reporting ──
    LINES_ADDED=$(cd "$worktree" 2>/dev/null && git diff --stat origin/main..HEAD 2>/dev/null | tail -1 | grep -oE '[0-9]+ insertion' | grep -oE '[0-9]+' || echo 0)
    LINES_REMOVED=$(cd "$worktree" 2>/dev/null && git diff --stat origin/main..HEAD 2>/dev/null | tail -1 | grep -oE '[0-9]+ deletion' | grep -oE '[0-9]+' || echo 0)
    FILES_CHANGED=$(cd "$worktree" 2>/dev/null && git diff --name-only origin/main..HEAD 2>/dev/null | wc -l | tr -d ' ' || echo 0)

    # Determine outcome
    if [ "$exit_code" -eq 0 ]; then
      OUTCOME="success"
    elif [ "$exit_code" -eq 124 ]; then
      OUTCOME="timeout"
    elif grep -q "rate_limit\|rate limit\|429" "$LOG_DIR/claude-${issue_num}.log" 2>/dev/null; then
      OUTCOME="rate_limited"
    else
      OUTCOME="failed"
    fi

    METRICS_FILE="$LOG_DIR/claude-metrics.jsonl"
    python3 -c "
import json, datetime
print(json.dumps({
    'ts': datetime.datetime.utcnow().isoformat() + 'Z',
    'worker': $worker_id,
    'issue': $issue_num,
    'repo': '${repo_owner}/${repo_name}',
    'title': '''${issue_title}'''[:80],
    'outcome': '$OUTCOME',
    'exit_code': $exit_code,
    'duration_s': $CYCLE_DURATION,
    'files_changed': ${FILES_CHANGED:-0},
    'lines_added': ${LINES_ADDED:-0},
    'lines_removed': ${LINES_REMOVED:-0},
    'salvaged': ${DIRTY:-0},
    'pr': '${pr_num:-}',
    'merged': $( [ '$OUTCOME' = 'success' ] && [ -n '${pr_num:-}' ] && echo 'true' || echo 'false' )
}))
" >> "$METRICS_FILE" 2>/dev/null

    # Cleanup
    cleanup_workdir "$worktree"
    unlock_issue "$issue_key"
    update_active "$worker_id" "" "" "done"

    sleep "$COOLDOWN"
  done
}

# === MAIN ===
log "=== Claude Loop Started — ${NUM_WORKERS} workers (max ${MAX_WORKERS}) ==="
log "Worktrees: ${WORKTREE_BASE}"

# Clean stale locks
rm -rf "$LOCK_DIR"/*.lock 2>/dev/null

# PID tracking via files (bash 3.2 compatible)
PID_DIR="$LOG_DIR/claude-pids"
mkdir -p "$PID_DIR"
rm -f "$PID_DIR"/*.pid 2>/dev/null

launch_worker() {
  local wid="$1"
  run_worker "$wid" &
  echo $! > "$PID_DIR/${wid}.pid"
  log "Launched worker $wid (PID $!)"
}

# Initial launch
for i in $(seq 1 "$NUM_WORKERS"); do
  launch_worker "$i"
  sleep 3
done

# === DYNAMIC SCALER ===
# Every 90 seconds: check health, scale up if no rate limits, scale down if hitting limits
CURRENT_WORKERS="$NUM_WORKERS"
while true; do
  sleep 90

  # Reap dead workers and relaunch
  for pidfile in "$PID_DIR"/*.pid; do
    [ -f "$pidfile" ] || continue
    wid=$(basename "$pidfile" .pid)
    wpid=$(cat "$pidfile")
    if ! kill -0 "$wpid" 2>/dev/null; then
      log "SCALER: Worker $wid died — relaunching"
      launch_worker "$wid"
      sleep 2
    fi
  done

  recent_rate_limits=$(tail -100 "$LOG_DIR/claude-loop.log" 2>/dev/null | grep -c "RATE LIMITED" || true)
  recent_successes=$(tail -100 "$LOG_DIR/claude-loop.log" 2>/dev/null | grep -c "SUCCESS" || true)

  if [ "$recent_rate_limits" -gt 0 ]; then
    if [ "$CURRENT_WORKERS" -gt 2 ]; then
      drop_to=$(( CURRENT_WORKERS / 2 ))
      [ "$drop_to" -lt 2 ] && drop_to=2
      log "SCALER: Rate limited — scaling ${CURRENT_WORKERS} → ${drop_to} workers"
      for wid in $(seq $((drop_to + 1)) "$CURRENT_WORKERS"); do
        if [ -f "$PID_DIR/${wid}.pid" ]; then
          kill "$(cat "$PID_DIR/${wid}.pid")" 2>/dev/null || true
          rm -f "$PID_DIR/${wid}.pid"
          update_active "$wid" "" "" "done"
        fi
      done
      CURRENT_WORKERS=$drop_to
    fi
  elif [ "$recent_successes" -ge 2 ] && [ "$CURRENT_WORKERS" -lt "$MAX_WORKERS" ]; then
    new_count=$(( CURRENT_WORKERS + 2 ))
    [ "$new_count" -gt "$MAX_WORKERS" ] && new_count=$MAX_WORKERS
    log "SCALER: Healthy — scaling ${CURRENT_WORKERS} → ${new_count} workers"
```
|
||||
for wid in $(seq $((CURRENT_WORKERS + 1)) "$new_count"); do
|
||||
launch_worker "$wid"
|
||||
sleep 2
|
||||
done
|
||||
CURRENT_WORKERS=$new_count
|
||||
fi
|
||||
done
|
||||
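Each cycle appends one JSON object to claude-metrics.jsonl. A minimal jq-free sketch for summarizing those rows offline; `outcome_counts` is a hypothetical helper, and the `"outcome"` field name comes from the json.dumps block above:

```shell
# Count outcomes recorded in claude-metrics.jsonl.
# Relies on json.dumps formatting: keys are double-quoted with ": " separators.
outcome_counts() {
    local metrics_file="$1"
    grep -o '"outcome": "[a-z_]*"' "$metrics_file" | sort | uniq -c | sort -rn
}
```

The most frequent outcome prints first, which makes a quick `outcome_counts "$LOG_DIR/claude-metrics.jsonl" | head` enough for a health glance.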
94
bin/claudemax-watchdog.sh
Executable file
@@ -0,0 +1,94 @@
#!/usr/bin/env bash
# claudemax-watchdog.sh — keep local Claude/Gemini loops alive without stale tmux assumptions

set -uo pipefail
export PATH="/opt/homebrew/bin:$HOME/.local/bin:$HOME/.hermes/bin:/usr/local/bin:$PATH"

LOG="$HOME/.hermes/logs/claudemax-watchdog.log"
GITEA_URL="http://143.198.27.163:3000"
GITEA_TOKEN=$(tr -d '[:space:]' < "$HOME/.hermes/gitea_token_vps" 2>/dev/null || true)
REPO_API="$GITEA_URL/api/v1/repos/Timmy_Foundation/the-nexus"
MIN_OPEN_ISSUES=10
CLAUDE_WORKERS=2
GEMINI_WORKERS=1

log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] CLAUDEMAX: $*" >> "$LOG"
}

start_loop() {
    local name="$1"
    local pattern="$2"
    local cmd="$3"
    local pid

    pid=$(pgrep -f "$pattern" 2>/dev/null | head -1 || true)
    if [ -n "$pid" ]; then
        log "$name alive (PID $pid)"
        return 0
    fi

    log "$name not running. Restarting..."
    nohup bash -lc "$cmd" >/dev/null 2>&1 &
    sleep 2

    pid=$(pgrep -f "$pattern" 2>/dev/null | head -1 || true)
    if [ -n "$pid" ]; then
        log "Restarted $name (PID $pid)"
    else
        log "ERROR: failed to start $name"
    fi
}

run_optional_script() {
    local label="$1"
    local script_path="$2"

    if [ -x "$script_path" ]; then
        bash "$script_path" 2>&1 | while read -r line; do
            log "$line"
        done
    else
        log "$label skipped — missing $script_path"
    fi
}

claude_quota_blocked() {
    local cutoff now mtime f
    now=$(date +%s)
    cutoff=$((now - 43200))
    for f in "$HOME"/.hermes/logs/claude-*.log; do
        [ -f "$f" ] || continue
        mtime=$(stat -f %m "$f" 2>/dev/null || echo 0)
        if [ "$mtime" -ge "$cutoff" ] && grep -q "You've hit your limit" "$f" 2>/dev/null; then
            return 0
        fi
    done
    return 1
}

if [ -z "$GITEA_TOKEN" ]; then
    log "ERROR: missing Gitea token at ~/.hermes/gitea_token_vps"
    exit 1
fi

if claude_quota_blocked; then
    log "Claude quota exhausted recently — not starting claude-loop until quota resets or logs age out"
else
    start_loop "claude-loop" "bash .*claude-loop.sh" "bash ~/.hermes/bin/claude-loop.sh $CLAUDE_WORKERS >> ~/.hermes/logs/claude-loop.log 2>&1"
fi
start_loop "gemini-loop" "bash .*gemini-loop.sh" "bash ~/.hermes/bin/gemini-loop.sh $GEMINI_WORKERS >> ~/.hermes/logs/gemini-loop.log 2>&1"

OPEN_COUNT=$(curl -s --max-time 10 -H "Authorization: token $GITEA_TOKEN" \
    "$REPO_API/issues?state=open&type=issues&limit=100" 2>/dev/null \
    | python3 -c "import sys, json; print(len(json.loads(sys.stdin.read() or '[]')))" 2>/dev/null || echo 0)

log "Open issues: $OPEN_COUNT (minimum: $MIN_OPEN_ISSUES)"

if [ "$OPEN_COUNT" -lt "$MIN_OPEN_ISSUES" ]; then
    log "Backlog running low. Checking replenishment helper..."
    run_optional_script "claudemax-replenish" "$HOME/.hermes/bin/claudemax-replenish.sh"
fi

run_optional_script "autodeploy-matrix" "$HOME/.hermes/bin/autodeploy-matrix.sh"
log "Watchdog complete."
42
bin/pipeline-freshness.sh
Executable file
@@ -0,0 +1,42 @@
#!/usr/bin/env bash

set -euo pipefail

SESSIONS_DIR="$HOME/.hermes/sessions"
EXPORT_DIR="$HOME/.timmy/training-data/dpo-pairs"

latest_session=$(find "$SESSIONS_DIR" -maxdepth 1 -name 'session_*.json' -type f -print 2>/dev/null | sort | tail -n 1)
latest_export=$(find "$EXPORT_DIR" -maxdepth 1 -name 'session_*.json' -type f -print 2>/dev/null | sort | tail -n 1)

echo "latest_session=${latest_session:-none}"
echo "latest_export=${latest_export:-none}"

if [ -z "${latest_session:-}" ]; then
    echo "status=ok"
    echo "reason=no sessions yet"
    exit 0
fi

if [ -z "${latest_export:-}" ]; then
    echo "status=lagging"
    echo "reason=no exports yet"
    exit 1
fi

session_mtime=$(stat -f '%m' "$latest_session")
export_mtime=$(stat -f '%m' "$latest_export")
lag_minutes=$(( (session_mtime - export_mtime) / 60 ))
if [ "$lag_minutes" -lt 0 ]; then
    lag_minutes=0
fi

echo "lag_minutes=$lag_minutes"

if [ "$lag_minutes" -gt 300 ]; then
    echo "status=lagging"
    echo "reason=exports more than 5 hours behind sessions"
    exit 1
fi

echo "status=ok"
echo "reason=exports within freshness window"
@@ -1,7 +1,7 @@
#!/usr/bin/env bash
# sync-up.sh — Push live ~/.hermes config changes UP to timmy-config repo.
# The harness is the source. The repo is the record.
# Run periodically or after significant config changes.
# Only commits when there are REAL changes (not empty syncs).

set -euo pipefail

@@ -12,31 +12,29 @@ log() { echo "[sync-up] $*"; }

# === Copy live config into repo ===
cp "$HERMES_HOME/config.yaml" "$REPO_DIR/config.yaml"
log "config.yaml"

# === Playbooks ===
for f in "$HERMES_HOME"/playbooks/*.yaml; do
    [ -f "$f" ] && cp "$f" "$REPO_DIR/playbooks/"
done
log "playbooks/"

# === Skins ===
for f in "$HERMES_HOME"/skins/*; do
    [ -f "$f" ] && cp "$f" "$REPO_DIR/skins/"
done
log "skins/"

# === Channel directory ===
[ -f "$HERMES_HOME/channel_directory.json" ] && cp "$HERMES_HOME/channel_directory.json" "$REPO_DIR/"
log "channel_directory.json"

# === Commit and push if there are changes ===
# === Only commit if there are real diffs ===
cd "$REPO_DIR"
if [ -n "$(git status --porcelain)" ]; then
git add -A
git commit -m "sync: live config from ~/.hermes $(date +%Y-%m-%d_%H:%M)"
git push
log "Pushed changes to Gitea."
else
git add -A

# Check if there are staged changes
if git diff --cached --quiet; then
    log "No changes to sync."
    exit 0
fi

# Build a meaningful commit message from what actually changed
CHANGED=$(git diff --cached --name-only | tr '\n' ', ' | sed 's/,$//')
git commit -m "config: update ${CHANGED}"
git push
log "Pushed: ${CHANGED}"
252
bin/timmy-dashboard
Executable file
@@ -0,0 +1,252 @@
#!/usr/bin/env python3
"""Timmy Model Dashboard — where are my models, what are they doing.

Usage:
    timmy-dashboard            # one-shot
    timmy-dashboard --watch    # live refresh every 30s
    timmy-dashboard --hours=48 # look back 48h
"""

import json
import os
import subprocess
import sys
import time
import urllib.request
from datetime import datetime, timezone, timedelta
from pathlib import Path

HERMES_HOME = Path.home() / ".hermes"
TIMMY_HOME = Path.home() / ".timmy"
METRICS_DIR = TIMMY_HOME / "metrics"

# ── Data Sources ──────────────────────────────────────────────────────

def get_ollama_models():
    try:
        req = urllib.request.Request("http://localhost:11434/api/tags")
        with urllib.request.urlopen(req, timeout=5) as resp:
            return json.loads(resp.read()).get("models", [])
    except Exception:
        return []


def get_loaded_models():
    try:
        req = urllib.request.Request("http://localhost:11434/api/ps")
        with urllib.request.urlopen(req, timeout=5) as resp:
            return json.loads(resp.read()).get("models", [])
    except Exception:
        return []


def get_huey_pid():
    try:
        r = subprocess.run(["pgrep", "-f", "huey_consumer"],
                           capture_output=True, text=True, timeout=5)
        return r.stdout.strip().split("\n")[0] if r.returncode == 0 else None
    except Exception:
        return None


def get_hermes_sessions():
    sessions_file = HERMES_HOME / "sessions" / "sessions.json"
    if not sessions_file.exists():
        return []
    try:
        data = json.loads(sessions_file.read_text())
        return list(data.values())
    except Exception:
        return []


def get_heartbeat_ticks(date_str=None):
    if not date_str:
        date_str = datetime.now().strftime("%Y%m%d")
    tick_file = TIMMY_HOME / "heartbeat" / f"ticks_{date_str}.jsonl"
    if not tick_file.exists():
        return []
    ticks = []
    for line in tick_file.read_text().strip().split("\n"):
        if not line.strip():
            continue
        try:
            ticks.append(json.loads(line))
        except Exception:
            continue
    return ticks


def get_local_metrics(hours=24):
    """Read local inference metrics from jsonl files."""
    records = []
    cutoff = datetime.now(timezone.utc) - timedelta(hours=hours)
    if not METRICS_DIR.exists():
        return records
    for f in sorted(METRICS_DIR.glob("local_*.jsonl")):
        for line in f.read_text().strip().split("\n"):
            if not line.strip():
                continue
            try:
                r = json.loads(line)
                ts = datetime.fromisoformat(r["timestamp"])
                if ts >= cutoff:
                    records.append(r)
            except Exception:
                continue
    return records


def get_cron_jobs():
    """Get Hermes cron job status."""
    try:
        r = subprocess.run(
            ["hermes", "cron", "list", "--json"],
            capture_output=True, text=True, timeout=10
        )
        if r.returncode == 0:
            return json.loads(r.stdout).get("jobs", [])
    except Exception:
        pass
    return []


# ── Rendering ─────────────────────────────────────────────────────────

DIM = "\033[2m"
BOLD = "\033[1m"
GREEN = "\033[32m"
YELLOW = "\033[33m"
RED = "\033[31m"
CYAN = "\033[36m"
RST = "\033[0m"
CLR = "\033[2J\033[H"


def render(hours=24):
    models = get_ollama_models()
    loaded = get_loaded_models()
    huey_pid = get_huey_pid()
    ticks = get_heartbeat_ticks()
    metrics = get_local_metrics(hours)
    sessions = get_hermes_sessions()

    loaded_names = {m.get("name", "") for m in loaded}
    now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")

    print(CLR, end="")
    print(f"{BOLD}{'=' * 70}")
    print(" TIMMY MODEL DASHBOARD")
    huey = f"{GREEN}PID {huey_pid}{RST}" if huey_pid else f"{RED}DOWN{RST}"
    print(f" {now} | Huey: {huey}")
    print(f"{'=' * 70}{RST}")

    # ── LOCAL MODELS ──
    print(f"\n {BOLD}LOCAL MODELS (Ollama){RST}")
    print(f" {DIM}{'-' * 55}{RST}")
    if models:
        for m in models:
            name = m.get("name", "?")
            size_gb = m.get("size", 0) / 1e9
            if name in loaded_names:
                status = f"{GREEN}IN VRAM{RST}"
            else:
                status = f"{DIM}on disk{RST}"
            print(f" {name:35s} {size_gb:5.1f}GB {status}")
    else:
        print(f" {RED}(Ollama not responding){RST}")

    # ── LOCAL INFERENCE ACTIVITY ──
    print(f"\n {BOLD}LOCAL INFERENCE ({len(metrics)} calls, last {hours}h){RST}")
    print(f" {DIM}{'-' * 55}{RST}")
    if metrics:
        by_caller = {}
        for r in metrics:
            caller = r.get("caller", "unknown")
            if caller not in by_caller:
                by_caller[caller] = {"count": 0, "success": 0, "errors": 0}
            by_caller[caller]["count"] += 1
            if r.get("success"):
                by_caller[caller]["success"] += 1
            else:
                by_caller[caller]["errors"] += 1
        for caller, stats in by_caller.items():
            err = f" {RED}err:{stats['errors']}{RST}" if stats["errors"] else ""
            print(f" {caller:25s} calls:{stats['count']:4d} "
                  f"{GREEN}ok:{stats['success']}{RST}{err}")

        by_model = {}
        for r in metrics:
            model = r.get("model", "unknown")
            by_model[model] = by_model.get(model, 0) + 1
        print(f"\n {DIM}Models used:{RST}")
        for model, count in sorted(by_model.items(), key=lambda x: -x[1]):
            print(f" {model:30s} {count} calls")
    else:
        print(f" {DIM}(no local calls recorded yet){RST}")

    # ── HEARTBEAT STATUS ──
    print(f"\n {BOLD}HEARTBEAT ({len(ticks)} ticks today){RST}")
    print(f" {DIM}{'-' * 55}{RST}")
    if ticks:
        last = ticks[-1]
        decision = last.get("decision", last.get("actions", {}))
        if isinstance(decision, dict):
            severity = decision.get("severity", "unknown")
            reasoning = decision.get("reasoning", "")
            sev_color = GREEN if severity == "ok" else YELLOW if severity == "warning" else RED
            print(f" Last tick: {last.get('tick_id', '?')}")
            print(f" Severity: {sev_color}{severity}{RST}")
            if reasoning:
                print(f" Reasoning: {reasoning[:65]}")
        else:
            print(f" Last tick: {last.get('tick_id', '?')}")
            actions = last.get("actions", [])
            print(f" Actions: {actions if actions else 'none'}")

        model_decisions = sum(1 for t in ticks
                              if isinstance(t.get("decision"), dict)
                              and t["decision"].get("severity") != "fallback")
        fallback = len(ticks) - model_decisions
        print(f" {CYAN}Model: {model_decisions}{RST} | {DIM}Fallback: {fallback}{RST}")
    else:
        print(f" {DIM}(no ticks today){RST}")

    # ── HERMES SESSIONS ──
    local_sessions = [s for s in sessions
                      if "localhost:11434" in str(s.get("base_url", ""))]
    cloud_sessions = [s for s in sessions if s not in local_sessions]
    print(f"\n {BOLD}HERMES SESSIONS{RST}")
    print(f" {DIM}{'-' * 55}{RST}")
    print(f" Total: {len(sessions)} | "
          f"{GREEN}Local: {len(local_sessions)}{RST} | "
          f"{YELLOW}Cloud: {len(cloud_sessions)}{RST}")

    # ── ACTIVE LOOPS ──
    print(f"\n {BOLD}ACTIVE LOOPS{RST}")
    print(f" {DIM}{'-' * 55}{RST}")
    print(f" {CYAN}heartbeat_tick{RST} 10m hermes4:14b DECIDE phase")
    print(f" {DIM}model_health{RST} 5m (local check) Ollama ping")
    print(f" {DIM}gemini_worker{RST} 20m gemini-2.5-pro aider")
    print(f" {DIM}grok_worker{RST} 20m grok-3-fast opencode")
    print(f" {DIM}cross_review{RST} 30m gemini+grok PR review")

    print(f"\n{BOLD}{'=' * 70}{RST}")
    print(f" {DIM}Refresh: timmy-dashboard --watch | History: --hours=N{RST}")


if __name__ == "__main__":
    watch = "--watch" in sys.argv
    hours = 24
    for a in sys.argv[1:]:
        if a.startswith("--hours="):
            hours = int(a.split("=")[1])

    if watch:
        try:
            while True:
                render(hours)
                time.sleep(30)
        except KeyboardInterrupt:
            print(f"\n{DIM}Dashboard stopped.{RST}")
    else:
        render(hours)
@@ -1,5 +1,5 @@
{
  "updated_at": "2026-03-26T06:59:37.300889",
  "updated_at": "2026-03-30T16:50:44.194030",
  "platforms": {
    "discord": [
      {
@@ -27,6 +27,30 @@
        "name": "Timmy Time",
        "type": "group",
        "thread_id": null
      },
      {
        "id": "-1003664764329:85",
        "name": "Timmy Time / topic 85",
        "type": "group",
        "thread_id": "85"
      },
      {
        "id": "-1003664764329:111",
        "name": "Timmy Time / topic 111",
        "type": "group",
        "thread_id": "111"
      },
      {
        "id": "-1003664764329:173",
        "name": "Timmy Time / topic 173",
        "type": "group",
        "thread_id": "173"
      },
      {
        "id": "7635059073",
        "name": "Trip T",
        "type": "dm",
        "thread_id": null
      }
    ],
    "whatsapp": [],
108
config.yaml
@@ -1,16 +1,40 @@
model:
  default: claude-opus-4-6
  provider: anthropic
  context_length: 65536
  fallback_providers:
    - provider: openai-codex
      model: codex
    - provider: gemini
      model: gemini-2.5-flash
      base_url: https://generativelanguage.googleapis.com/v1beta/openai
      api_key_env: GEMINI_API_KEY
    - provider: groq
      model: llama-3.3-70b-versatile
      base_url: https://api.groq.com/openai/v1
      api_key_env: GROQ_API_KEY
    - provider: grok
      model: grok-3-mini-fast
      base_url: https://api.x.ai/v1
      api_key_env: XAI_API_KEY
    - provider: kimi-coding
      model: kimi-k2.5
    - provider: openrouter
      model: openai/gpt-4.1-mini
      base_url: https://openrouter.ai/api/v1
      api_key_env: OPENROUTER_API_KEY
toolsets:
  - all
agent:
  max_turns: 30
  reasoning_effort: medium
  tool_use_enforcement: auto
  reasoning_effort: xhigh
  verbose: false
terminal:
  backend: local
  cwd: .
  timeout: 180
  env_passthrough: []
  docker_image: nikolaik/python-nodejs:python3.11-nodejs20
  docker_forward_env: []
  singularity_image: docker://nikolaik/python-nodejs:python3.11-nodejs20
@@ -25,6 +49,7 @@ terminal:
  persistent_shell: true
browser:
  inactivity_timeout: 120
  command_timeout: 30
  record_sessions: false
checkpoints:
  enabled: true
@@ -32,6 +57,8 @@ checkpoints:
compression:
  enabled: false
  threshold: 0.5
  target_ratio: 0.2
  protect_last_n: 20
  summary_model: ''
  summary_provider: ''
  summary_base_url: ''
@@ -51,50 +78,61 @@ auxiliary:
  base_url: ''
  api_key: ''
  timeout: 30
  download_timeout: 30
  web_extract:
    provider: auto
    model: ''
    base_url: ''
    api_key: ''
    timeout: 30
  compression:
    provider: auto
    model: ''
    base_url: ''
    api_key: ''
    timeout: 120
  session_search:
    provider: auto
    model: ''
    base_url: ''
    api_key: ''
    timeout: 30
  skills_hub:
    provider: auto
    model: ''
    base_url: ''
    api_key: ''
    timeout: 30
  approval:
    provider: auto
    model: ''
    base_url: ''
    api_key: ''
    timeout: 30
  mcp:
    provider: auto
    model: ''
    base_url: ''
    api_key: ''
    timeout: 30
  flush_memories:
    provider: auto
    model: ''
    base_url: ''
    api_key: ''
    timeout: 30
display:
  compact: false
  personality: ''
  resume_display: full
  busy_input_mode: interrupt
  bell_on_complete: false
  show_reasoning: false
  streaming: false
  show_cost: false
  skin: timmy
  tool_progress_command: false
  tool_preview_length: 0
  tool_progress: all
privacy:
  redact_pii: false
@@ -142,7 +180,11 @@ delegation:
  provider: ''
  base_url: ''
  api_key: ''
  max_iterations: 50
  prefill_messages_file: ''
skills:
  external_dirs: []
  creation_nudge_interval: 15
honcho: {}
timezone: ''
discord:
@@ -152,6 +194,7 @@ discord:
whatsapp: {}
approvals:
  mode: manual
  timeout: 60
  command_allowlist: []
  quick_commands: {}
personalities: {}
@@ -165,6 +208,8 @@ security:
  enabled: false
  domains: []
  shared_files: []
cron:
  wrap_response: true
_config_version: 10
platforms:
  api_server:
@@ -176,35 +221,62 @@ session_reset:
  mode: none
  idle_minutes: 0
custom_providers:
  - name: Local Ollama
    base_url: http://localhost:11434/v1
    api_key: ollama
    model: glm-4.7-flash:latest
  - name: Local llama.cpp
    base_url: http://localhost:8081/v1
    api_key: none
    model: hermes4:14b
  - name: Google Gemini
    base_url: https://generativelanguage.googleapis.com/v1beta/openai
    api_key_env: GEMINI_API_KEY
    model: gemini-2.5-pro
system_prompt_suffix: "You are Timmy. Your soul is defined in SOUL.md \u2014 read\
  \ it, live it.\nYou run locally on your owner's machine via Ollama. You never phone\
  \ home.\nYou speak plainly. You prefer short sentences. Brevity is a kindness.\n\
  \ it, live it.\nYou run locally on your owner's machine via llama.cpp. You never\
  \ phone home.\nYou speak plainly. You prefer short sentences. Brevity is a kindness.\n\
  When you don't know something, say so. Refusal over fabrication.\nSovereignty and\
  \ service always.\n"
skills:
  creation_nudge_interval: 15
DISCORD_HOME_CHANNEL: '1476292315814297772'
providers:
  ollama:
    base_url: http://localhost:11434/v1
    model: hermes3:latest
mcp_servers:
  orchestration:
    command: /Users/apayne/.hermes/hermes-agent/venv/bin/python3
  morrowind:
    command: python3
    args:
      - /Users/apayne/.hermes/hermes-agent/tools/orchestration_mcp_server.py
      - /Users/apayne/.timmy/morrowind/mcp_server.py
    env: {}
    timeout: 120
fallback_model:
  provider: custom
  model: gemini-2.5-pro
  base_url: https://generativelanguage.googleapis.com/v1beta/openai
  api_key_env: GEMINI_API_KEY
  timeout: 30
fallback_model: null

# ── Fallback Model ────────────────────────────────────────────────────
# Automatic provider failover when primary is unavailable.
# Uncomment and configure to enable. Triggers on rate limits (429),
# overload (529), service errors (503), or connection failures.
#
# Supported providers:
#   openrouter (OPENROUTER_API_KEY) — routes to any model
#   openai-codex (OAuth — hermes login) — OpenAI Codex
#   nous (OAuth — hermes login) — Nous Portal
#   zai (ZAI_API_KEY) — Z.AI / GLM
#   kimi-coding (KIMI_API_KEY) — Kimi / Moonshot
#   minimax (MINIMAX_API_KEY) — MiniMax
#   minimax-cn (MINIMAX_CN_API_KEY) — MiniMax (China)
#
# For custom OpenAI-compatible endpoints, add base_url and api_key_env.
#
# fallback_model:
#   provider: openrouter
#   model: anthropic/claude-sonnet-4
#
# ── Smart Model Routing ────────────────────────────────────────────────
# Optional cheap-vs-strong routing for simple turns.
# Keeps the primary model for complex work, but can route short/simple
# messages to a cheaper model across providers.
#
# smart_model_routing:
#   enabled: true
#   max_simple_chars: 160
#   max_simple_words: 28
#   cheap_model:
#     provider: openrouter
#     model: google/gemini-2.5-flash
24
deploy.sh
@@ -3,7 +3,7 @@
# This is the canonical way to deploy Timmy's configuration.
# Hermes-agent is the engine. timmy-config is the driver's seat.
#
# Usage: ./deploy.sh [--restart-loops]
# Usage: ./deploy.sh

set -euo pipefail

@@ -74,24 +74,10 @@ done
chmod +x "$HERMES_HOME/bin/"*.sh "$HERMES_HOME/bin/"*.py 2>/dev/null || true
log "bin/ -> $HERMES_HOME/bin/"

# === Restart loops if requested ===
if [ "${1:-}" = "--restart-loops" ]; then
    log "Killing existing loops..."
    pkill -f 'claude-loop.sh' 2>/dev/null || true
    pkill -f 'gemini-loop.sh' 2>/dev/null || true
    pkill -f 'timmy-orchestrator.sh' 2>/dev/null || true
    sleep 2

    log "Clearing stale locks..."
    rm -rf "$HERMES_HOME/logs/claude-locks/"* 2>/dev/null || true
    rm -rf "$HERMES_HOME/logs/gemini-locks/"* 2>/dev/null || true

    log "Relaunching loops..."
    nohup bash "$HERMES_HOME/bin/timmy-orchestrator.sh" >> "$HERMES_HOME/logs/timmy-orchestrator.log" 2>&1 &
    nohup bash "$HERMES_HOME/bin/claude-loop.sh" 2 >> "$HERMES_HOME/logs/claude-loop.log" 2>&1 &
    nohup bash "$HERMES_HOME/bin/gemini-loop.sh" 1 >> "$HERMES_HOME/logs/gemini-loop.log" 2>&1 &
    sleep 1
    log "Loops relaunched."
if [ "${1:-}" != "" ]; then
    echo "ERROR: deploy.sh no longer accepts legacy loop flags." >&2
    echo "Deploy the sidecar only. Do not relaunch deprecated bash loops." >&2
    exit 1
fi

log "Deploy complete. timmy-config applied to $HERMES_HOME/"
358
docs/automation-inventory.md
Normal file
@@ -0,0 +1,358 @@
# Automation Inventory

Last audited: 2026-04-04 15:55 EDT
Owner: Timmy sidecar / Timmy home split
Purpose: document every known automation that can restart services, revive old worktrees, reuse stale session state, or re-enter old queue state.

## Why this file exists

The failure mode is not just "a process is running".
The failure mode is:
- launchd or a watchdog restarts something behind our backs
- the restarted process reads old config, old labels, old worktrees, old session mappings, or old tmux assumptions
- the machine appears haunted because old state comes back after we thought it was gone

This file is the source of truth for what automations exist, what state they read, and how to stop or reset them safely.

## Source-of-truth split

Not all automations live in one repo.

1. timmy-config
   Path: ~/.timmy/timmy-config
   Owns: sidecar deployment, ~/.hermes/config.yaml overlay, launch-facing helper scripts in timmy-config/bin/

2. timmy-home
   Path: ~/.timmy
   Owns: Kimi heartbeat script at uniwizard/kimi-heartbeat.sh and other workspace-native automation

3. live runtime
   Path: ~/.hermes/bin
   Reality: some scripts are still only present live in ~/.hermes/bin and are NOT yet mirrored into timmy-config/bin/

Rule:
- Do not assume ~/.hermes/bin is canonical.
- Do not assume timmy-config contains every currently running automation.
- Audit runtime first, then reconcile to source control.
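
The audit step is mechanical enough to script. A minimal sketch, assuming the two bin paths named above and that a filename-level comparison is enough (it will not notice content drift between same-named files):

```bash
# List scripts that exist only in the live runtime dir, i.e. candidates that
# still need to be reconciled into source control. Filename comparison only.
unmirrored() {
    local live="$1" repo="$2"
    comm -23 \
        <(ls -1 "$live" 2>/dev/null | sort) \
        <(ls -1 "$repo" 2>/dev/null | sort)
}

# Typical call on this machine (paths from the split above):
# unmirrored "$HOME/.hermes/bin" "$HOME/.timmy/timmy-config/bin"
```

`comm -23` keeps lines unique to the first sorted list, so anything it prints exists live but not in the repo.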

## Current live automations

### A. launchd-loaded automations

These are loaded right now according to `launchctl list`.

#### 1. ai.hermes.gateway
- Plist: ~/Library/LaunchAgents/ai.hermes.gateway.plist
- Command: `python -m hermes_cli.main gateway run --replace`
- HERMES_HOME: `~/.hermes`
- Logs:
  - `~/.hermes/logs/gateway.log`
  - `~/.hermes/logs/gateway.error.log`
- KeepAlive: yes
- RunAtLoad: yes
- State it reuses:
  - `~/.hermes/config.yaml`
  - `~/.hermes/channel_directory.json`
  - `~/.hermes/sessions/sessions.json`
  - `~/.hermes/state.db`
- Old-state risk:
  - if config drifted, this gateway will faithfully revive the drift
  - if Telegram/session mappings are stale, it will continue stale conversations

Stop:
```bash
launchctl bootout gui/$(id -u) ~/Library/LaunchAgents/ai.hermes.gateway.plist
```
Start:
```bash
launchctl bootstrap gui/$(id -u) ~/Library/LaunchAgents/ai.hermes.gateway.plist
```

#### 2. ai.hermes.gateway-fenrir
- Plist: ~/Library/LaunchAgents/ai.hermes.gateway-fenrir.plist
- Command: same gateway binary
- HERMES_HOME: `~/.hermes/profiles/fenrir`
- Logs:
  - `~/.hermes/profiles/fenrir/logs/gateway.log`
  - `~/.hermes/profiles/fenrir/logs/gateway.error.log`
- KeepAlive: yes
- RunAtLoad: yes
- Old-state risk:
  - same class as main gateway, but isolated to fenrir profile state

#### 3. ai.openclaw.gateway
- Plist: ~/Library/LaunchAgents/ai.openclaw.gateway.plist
- Command: `node .../openclaw/dist/index.js gateway --port 18789`
- Logs:
  - `~/.openclaw/logs/gateway.log`
  - `~/.openclaw/logs/gateway.err.log`
- KeepAlive: yes
- RunAtLoad: yes
- Old-state risk:
  - long-lived gateway survives toolchain assumptions and keeps accepting work even if upstream routing changed

#### 4. ai.timmy.kimi-heartbeat
- Plist: ~/Library/LaunchAgents/ai.timmy.kimi-heartbeat.plist
- Command: `/bin/bash ~/.timmy/uniwizard/kimi-heartbeat.sh`
- Interval: every 300s
- Logs:
  - `/tmp/kimi-heartbeat-launchd.log`
  - `/tmp/kimi-heartbeat-launchd.err`
  - script log: `/tmp/kimi-heartbeat.log`
- State it reuses:
  - `/tmp/kimi-heartbeat.lock`
  - Gitea labels: `assigned-kimi`, `kimi-in-progress`, `kimi-done`
  - repo issue bodies/comments as task memory
- Current behavior as of this audit:
  - stale `kimi-in-progress` tasks are now reclaimed after 1 hour of silence
- Old-state risk:
  - labels ARE the queue state; if labels are stale, the heartbeat used to starve forever
  - the heartbeat is source-controlled in timmy-home, not timmy-config

Stop:
```bash
launchctl bootout gui/$(id -u) ~/Library/LaunchAgents/ai.timmy.kimi-heartbeat.plist
```

Clear lock only if process is truly dead:
```bash
rm -f /tmp/kimi-heartbeat.lock
```
|
||||
|
||||
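The 1-hour reclaim rule above reduces to a pure timestamp check; a minimal sketch, assuming ISO 8601 `updated_at` values as Gitea's API returns them (function and constant names are illustrative, not the heartbeat script's actual names):

```python
from datetime import datetime, timezone

STALE_AFTER_SECONDS = 3600  # reclaim kimi-in-progress after 1 hour of silence

def is_stale(updated_at: str, now: datetime, limit: int = STALE_AFTER_SECONDS) -> bool:
    """True if an issue's last update is older than the reclaim window.

    `updated_at` is an ISO 8601 timestamp like "2024-05-01T12:00:00Z",
    which Gitea uses for issue fields.
    """
    ts = datetime.fromisoformat(updated_at.replace("Z", "+00:00"))
    return (now - ts).total_seconds() > limit
```

A loop using this would list issues labeled `kimi-in-progress`, call `is_stale` on each, and flip stale ones back to `assigned-kimi`.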
#### 5. ai.timmy.claudemax-watchdog

- Plist: `~/Library/LaunchAgents/ai.timmy.claudemax-watchdog.plist`
- Command: `/bin/bash ~/.hermes/bin/claudemax-watchdog.sh`
- Interval: every 300s
- Logs:
  - `~/.hermes/logs/claudemax-watchdog.log`
  - launchd wrapper: `~/.hermes/logs/claudemax-launchd.log`
- State it reuses:
  - live process table via `pgrep`
  - recent Claude logs `~/.hermes/logs/claude-*.log`
  - backlog count from Gitea
- Current behavior as of this audit:
  - will NOT restart claude-loop if recent Claude logs say `You've hit your limit`
  - logs and skips missing helper scripts instead of failing loudly
- Old-state risk:
  - any watchdog can resurrect a loop you meant to leave dead
  - this is the first place to check when a loop "comes back"

#### 6. com.timmy.dashboard-backend

- Plist: `~/Library/LaunchAgents/com.timmy.dashboard-backend.plist`
- Command: uvicorn `dashboard.app:app`
- Working directory: `~/worktrees/kimi-repo`
- Port: 8100
- Logs: `~/.hermes/logs/dashboard-backend.log`
- KeepAlive: yes
- RunAtLoad: yes
- Old-state risk:
  - serves code from one specific worktree, not from current repo truth
  - if `~/worktrees/kimi-repo` is stale, launchd will faithfully keep serving stale code

#### 7. com.timmy.matrix-frontend

- Plist: `~/Library/LaunchAgents/com.timmy.matrix-frontend.plist`
- Command: `npx vite --host`
- Working directory: `~/worktrees/the-matrix`
- Logs: `~/.hermes/logs/matrix-frontend.log`
- KeepAlive: yes
- RunAtLoad: yes
- Old-state risk:
  - HIGH
  - still points at `~/worktrees/the-matrix`, even though the live 3D world work moved to `Timmy_Foundation/the-nexus`
  - if left loaded, it can revive the old frontend lineage

### B. running now but NOT launchd-managed

These are live processes, but not currently represented by a loaded launchd plist.
They can still persist because they were started with `nohup` or by other parent scripts.

#### 8. gemini-loop.sh

- Live process: `~/.hermes/bin/gemini-loop.sh`
- State files:
  - `~/.hermes/logs/gemini-loop.log`
  - `~/.hermes/logs/gemini-skip-list.json`
  - `~/.hermes/logs/gemini-active.json`
  - `~/.hermes/logs/gemini-locks/`
  - `~/.hermes/logs/gemini-pids/`
  - worktrees under `~/worktrees/gemini-w*`
  - per-issue logs `~/.hermes/logs/gemini-*.log`
- Old-state risk:
  - the skip list suppresses issues for hours
  - lock directories can make issues look "already busy"
  - old worktrees can preserve prior branch state
  - branch naming `gemini/issue-N` continues prior work if the branch exists

Stop cleanly:

```bash
pkill -f 'bash /Users/apayne/.hermes/bin/gemini-loop.sh'
pkill -f 'gemini .*--yolo'
rm -rf ~/.hermes/logs/gemini-locks/*.lock ~/.hermes/logs/gemini-pids/*.pid
printf '{}\n' > ~/.hermes/logs/gemini-active.json
```

#### 9. timmy-orchestrator.sh

- Live process: `~/.hermes/bin/timmy-orchestrator.sh`
- State files:
  - `~/.hermes/logs/timmy-orchestrator.log`
  - `~/.hermes/logs/timmy-orchestrator.pid`
  - `~/.hermes/logs/timmy-reviews.log`
  - `~/.hermes/logs/workforce-manager.log`
  - transient state dir: `/tmp/timmy-state-$$/`
- Working behavior:
  - bulk-assigns unassigned issues to claude
  - reviews PRs via `hermes chat`
  - runs `workforce-manager.py`
- Old-state risk:
  - writes agent assignments back into Gitea
  - can repopulate agent queues even after you thought they were cleared
  - not represented in timmy-config/bin yet as of this audit

### C. Hermes cron automations

Current cron inventory from `cronjob(list, include_disabled=true)`:

Enabled:

- `a77a87392582` — Health Monitor — every 5m

Paused:

- `9e0624269ba7` — Triage Heartbeat
- `e29eda4a8548` — PR Review Sweep
- `5e9d952871bc` — Agent Status Check
- `36fb2f630a17` — Hermes Philosophy Loop

Old-state risk:

- paused crons are not dead forever; they are resumable state
- LLM-wrapped crons can revive old routing/model assumptions if resumed blindly

### D. file exists but NOT currently loaded

These are the ones most likely to surprise us later, because they still exist and point at old realities.

#### 10. ai.hermes.startup

- Plist: `~/Library/LaunchAgents/ai.hermes.startup.plist`
- Points to: `~/.hermes/bin/hermes-startup.sh`
- Not loaded in launchctl at audit time
- High-risk notes:
  - the startup script still expects `~/.hermes/bin/timmy-tmux.sh`
  - that file is MISSING at audit time
  - the script also tries to start the webhook listener and the old `timmy-loop` tmux world
- This is a dormant old-state resurrection path

#### 11. com.timmy.tick

- Plist: `~/Library/LaunchAgents/com.timmy.tick.plist`
- Points to: `/Users/apayne/Timmy-time-dashboard/deploy/timmy-tick-mac.sh`
- Not loaded at audit time
- Definitely legacy dashboard-era automation

#### 12. com.tower.pr-automerge

- Plist: `~/Library/LaunchAgents/com.tower.pr-automerge.plist`
- Points to: `/Users/apayne/hermes-config/bin/pr-automerge.sh`
- Not loaded at audit time
- Separate Tower-era automation path; not part of current Timmy sidecar truth

## State carriers that make the machine feel haunted

These are the files and external states that most often "bring back old state":

### Hermes runtime state

- `~/.hermes/config.yaml`
- `~/.hermes/channel_directory.json`
- `~/.hermes/sessions/sessions.json`
- `~/.hermes/state.db`

### Loop state

- `~/.hermes/logs/claude-skip-list.json`
- `~/.hermes/logs/claude-active.json`
- `~/.hermes/logs/claude-locks/`
- `~/.hermes/logs/claude-pids/`
- `~/.hermes/logs/gemini-skip-list.json`
- `~/.hermes/logs/gemini-active.json`
- `~/.hermes/logs/gemini-locks/`
- `~/.hermes/logs/gemini-pids/`

### Kimi queue state

- Gitea labels, not local files, are the queue truth:
  - `assigned-kimi`
  - `kimi-in-progress`
  - `kimi-done`

### Worktree state

- `~/worktrees/*`
- especially old frontend/backend worktrees like:
  - `~/worktrees/the-matrix`
  - `~/worktrees/kimi-repo`

### Launchd state

- plist files in `~/Library/LaunchAgents`
- anything with `RunAtLoad` and `KeepAlive` can resurrect automatically

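Since `RunAtLoad` and `KeepAlive` are ordinary plist keys, the resurrection-capable agents can be enumerated mechanically; a minimal stdlib sketch (the default path and output format are illustrative, and `KeepAlive` is treated as truthy whether it is a bool or a dict of conditions):

```python
import plistlib
import sys
from pathlib import Path

def resurrecting_agents(agents_dir: Path) -> list[str]:
    """Return plist filenames under agents_dir that set RunAtLoad or KeepAlive."""
    hits = []
    if not agents_dir.is_dir():
        return hits
    for plist_path in sorted(agents_dir.glob("*.plist")):
        try:
            with plist_path.open("rb") as f:
                data = plistlib.load(f)
        except Exception:
            continue  # unreadable or malformed plist; skip it
        # KeepAlive may be a bool or a dict of conditions; both count as truthy
        if data.get("RunAtLoad") or data.get("KeepAlive"):
            hits.append(plist_path.name)
    return hits

if __name__ == "__main__":
    target = Path(sys.argv[1]) if len(sys.argv) > 1 else Path.home() / "Library" / "LaunchAgents"
    for name in resurrecting_agents(target):
        print(name)
```

Run against `~/Library/LaunchAgents` this should flag every KeepAlive/RunAtLoad agent in the inventory above.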
## Audit commands

List loaded Timmy/Hermes automations:

```bash
launchctl list | egrep 'timmy|kimi|claude|max|dashboard|matrix|gateway|huey'
```

List Timmy/Hermes launch agent files:

```bash
find ~/Library/LaunchAgents -maxdepth 1 -name '*.plist' | egrep 'timmy|hermes|openclaw|tower'
```

List running loop scripts:

```bash
ps -Ao pid,ppid,etime,command | egrep '/Users/apayne/.hermes/bin/|/Users/apayne/.timmy/uniwizard/'
```

List cron jobs:

```bash
hermes cron list --include-disabled
```

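Category D ("file exists but NOT currently loaded") is just the set difference between the second list and the first; a minimal sketch of the diff, assuming the launchd Label equals the plist filename stem, which holds for every agent in this inventory:

```python
from pathlib import Path

def dormant_plists(plist_paths, loaded_labels):
    """Return plist basenames that exist on disk but whose label does not
    appear in `launchctl list` output.

    Assumes the launchd Label matches the plist filename stem
    (e.g. ai.hermes.startup.plist -> ai.hermes.startup).
    """
    loaded = set(loaded_labels)
    return sorted(
        p.name
        for p in (Path(x) for x in plist_paths)
        if p.stem not in loaded
    )
```

Feed it the `find` output and the label column of `launchctl list` to regenerate section D automatically.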
## Safe reset order when old state keeps coming back

1. Stop launchd jobs first

```bash
launchctl bootout gui/$(id -u) ~/Library/LaunchAgents/ai.timmy.kimi-heartbeat.plist || true
launchctl bootout gui/$(id -u) ~/Library/LaunchAgents/ai.timmy.claudemax-watchdog.plist || true
launchctl bootout gui/$(id -u) ~/Library/LaunchAgents/ai.hermes.gateway.plist || true
launchctl bootout gui/$(id -u) ~/Library/LaunchAgents/ai.hermes.gateway-fenrir.plist || true
launchctl bootout gui/$(id -u) ~/Library/LaunchAgents/ai.openclaw.gateway.plist || true
launchctl bootout gui/$(id -u) ~/Library/LaunchAgents/com.timmy.dashboard-backend.plist || true
launchctl bootout gui/$(id -u) ~/Library/LaunchAgents/com.timmy.matrix-frontend.plist || true
```

2. Kill manual loops

```bash
pkill -f 'gemini-loop.sh' || true
pkill -f 'timmy-orchestrator.sh' || true
pkill -f 'claude-loop.sh' || true
pkill -f 'claude .*--print' || true
pkill -f 'gemini .*--yolo' || true
```

3. Clear local loop state

```bash
rm -rf ~/.hermes/logs/claude-locks/*.lock ~/.hermes/logs/claude-pids/*.pid
rm -rf ~/.hermes/logs/gemini-locks/*.lock ~/.hermes/logs/gemini-pids/*.pid
printf '{}\n' > ~/.hermes/logs/claude-active.json
printf '{}\n' > ~/.hermes/logs/gemini-active.json
rm -f /tmp/kimi-heartbeat.lock
```

4. If gateway/session drift is the problem, back up before clearing

```bash
cp ~/.hermes/config.yaml ~/.hermes/config.yaml.bak.$(date +%Y%m%d-%H%M%S)
cp ~/.hermes/sessions/sessions.json ~/.hermes/sessions/sessions.json.bak.$(date +%Y%m%d-%H%M%S)
```

5. Relaunch only what you explicitly want

## Current contradictions to fix later

1. README still describes `bin/` as "NOT deprecated loops", but the live runtime still contains revived loop scripts.
2. `DEPRECATED.md` says claude-loop/gemini-loop/timmy-orchestrator/claudemax-watchdog were removed, but reality disagrees.
3. `com.timmy.matrix-frontend` still points at `~/worktrees/the-matrix` rather than the nexus lineage.
4. `ai.hermes.startup` still points at a startup path that expects the missing `timmy-tmux.sh`.
5. `gemini-loop.sh` and `timmy-orchestrator.sh` are live but not yet mirrored into timmy-config/bin/.

Until those are reconciled, trust this inventory over older prose.

---

**docs/local-model-integration-sketch.md** (new file, 438 lines)

# Local Model Integration Sketch v2
# Hermes4-14B in the Heartbeat Loop — No New Telemetry

## Principle

No new inference layer. Huey tasks call `hermes chat -q` pointed at
Ollama. Hermes handles sessions, token tracking, and cost logging.
The dashboard reads what Hermes already stores.

---

## Why Not Ollama Directly?

Ollama is fine as a serving backend. The issue isn't Ollama — it's that
calling Ollama directly with urllib bypasses the harness. The harness
already tracks sessions, tokens, model/provider, and platform. Building a
second telemetry layer means owning code we don't need.

Ollama as a named provider isn't wired into the --provider flag yet,
but routing works via env vars:

    HERMES_MODEL="hermes4:14b" \
    HERMES_PROVIDER="custom" \
    HERMES_BASE_URL="http://localhost:11434/v1" \
    hermes chat -q "prompt here" -Q

This creates a tracked session, logs tokens, and returns the response.
That's our local inference call.

### Alternatives to Ollama for serving

- **llama.cpp server** — lighter, no Python, raw HTTP. Good for single-model serving. Less convenient for model switching.
- **vLLM** — best throughput, but needs an NVIDIA GPU. Not for the M3 Mac.
- **MLX serving** — native Apple Silicon, but no OpenAI-compat API yet. MLX is for training, not serving (our current policy).
- **llamafile** — single binary, portable. Good for distribution.

Verdict: Ollama is fine. It's the standard OpenAI-compat local server
on Mac. The issue was never Ollama — it was bypassing the harness.

---

## 1. The Call Pattern

One function in tasks.py that all Huey tasks use:

```python
import json
import os
import subprocess

HERMES_BIN = "hermes"
LOCAL_ENV = {
    "HERMES_MODEL": "hermes4:14b",
    "HERMES_PROVIDER": "custom",
    "HERMES_BASE_URL": "http://localhost:11434/v1",
}

def hermes_local(prompt, caller_tag=None, max_retries=2):
    """Call hermes with the local Ollama model. Returns response text.

    Every call creates a hermes session with full telemetry.
    caller_tag gets prepended to the prompt for searchability.
    """
    env = os.environ.copy()
    env.update(LOCAL_ENV)

    tagged_prompt = prompt
    if caller_tag:
        tagged_prompt = f"[{caller_tag}] {prompt}"

    for attempt in range(max_retries + 1):
        try:
            result = subprocess.run(
                [HERMES_BIN, "chat", "-q", tagged_prompt, "-Q", "-t", "none"],
                capture_output=True, text=True,
                timeout=120, env=env,
            )
            if result.returncode == 0 and result.stdout.strip():
                # Strip the session_id line from -Q output
                lines = result.stdout.strip().split("\n")
                response_lines = [l for l in lines if not l.startswith("session_id:")]
                return "\n".join(response_lines).strip()
        except subprocess.TimeoutExpired:
            if attempt == max_retries:
                return None
            continue
    return None
```

Notes:

- `-t none` disables all toolsets — the heartbeat model shouldn't
  have terminal/file access. Pure reasoning only.
- `-Q` quiet mode suppresses the banner/spinner and gives clean output.
- Every call creates a session in the Hermes session store. Searchable,
  exportable, countable.
- The `[caller_tag]` prefix lets you filter sessions by which Huey
  task generated them: `hermes sessions list | grep heartbeat`

---

## 2. Heartbeat DECIDE Phase

Replace the hardcoded if/else with a model call:

```python
# In heartbeat_tick(), replace the DECIDE + ACT section:

# DECIDE: let hermes4:14b reason about what to do
decide_prompt = f"""System state at {now.isoformat()}:

{json.dumps(perception, indent=2)}

Previous tick: {last_tick.get('tick_id', 'none')}

You are the heartbeat monitor. Based on this state:
1. List any actions needed (alerts, restarts, escalations). Empty if all OK.
2. Rate severity: ok, warning, or critical.
3. One sentence of reasoning.

Respond ONLY with JSON:
{{"actions": [], "severity": "ok", "reasoning": "..."}}"""

decision = None
try:
    raw = hermes_local(decide_prompt, caller_tag="heartbeat_tick")
    if raw:
        # Try to parse JSON from the response.
        # The model might wrap it in markdown, so extract.
        for line in raw.split("\n"):
            line = line.strip()
            if line.startswith("{"):
                decision = json.loads(line)
                break
        if not decision:
            decision = json.loads(raw)
except Exception:
    decision = None

# Fall back to hardcoded logic if the model fails or is down
if decision is None:
    actions = []
    if not perception.get("gitea_alive"):
        actions.append("ALERT: Gitea unreachable")
    health = perception.get("model_health", {})
    if isinstance(health, dict) and not health.get("ollama_running"):
        actions.append("ALERT: Ollama not running")
    decision = {
        "actions": actions,
        "severity": "fallback",
        "reasoning": "model unavailable, used hardcoded checks"
    }

tick_record["decision"] = decision
actions = decision.get("actions", [])
```

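The first-brace line scan above misses multi-line or fenced JSON. A more forgiving extraction sketch (the helper name is hypothetical; the balanced-brace scan is best-effort and does not account for braces inside string values):

```python
import json

def extract_json(raw: str):
    """Best-effort extraction of a single JSON object from model output.

    Handles raw JSON, markdown-fenced JSON, and leading prose by falling
    back to the first balanced {...} span. Returns None if nothing parses.
    """
    # Fast path: the whole response is valid JSON
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    # Strip markdown fences if present and retry
    text = raw.replace("```json", "```").replace("```", "")
    try:
        return json.loads(text.strip())
    except json.JSONDecodeError:
        pass
    # Last resort: first balanced object (braces in strings will confuse this)
    start = text.find("{")
    if start == -1:
        return None
    depth = 0
    for i, ch in enumerate(text[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                try:
                    return json.loads(text[start:i + 1])
                except json.JSONDecodeError:
                    return None
    return None
```

With this, the DECIDE parse collapses to `decision = extract_json(raw)` before the fallback check.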
---

## 3. DPO Candidate Collection

No new database. Hermes sessions ARE the DPO candidates.

Every `hermes_local()` call creates a session. To extract DPO pairs:

```bash
# Export all local-model sessions
hermes sessions export --output /tmp/local-sessions.jsonl

# Filter for heartbeat decisions
grep "heartbeat_tick" /tmp/local-sessions.jsonl > heartbeat_decisions.jsonl
```

The existing `session_export` Huey task (runs every 4h) already extracts
user→assistant pairs. It just needs to be aware that some sessions are
now local-model decisions instead of human conversations.

For DPO annotation, add a simple review script:

```python
# review_decisions.py — reads heartbeat tick logs, shows model decisions,
# and asks Alexander to mark them chosen/rejected.
# Writes annotations back to the tick log files.

import json
from pathlib import Path

TICK_DIR = Path.home() / ".timmy" / "heartbeat"

for log_file in sorted(TICK_DIR.glob("ticks_*.jsonl")):
    for line in log_file.read_text().strip().split("\n"):
        tick = json.loads(line)
        decision = tick.get("decision", {})
        if decision.get("severity") == "fallback":
            continue  # skip fallback entries

        print(f"\n--- Tick {tick['tick_id']} ---")
        print(f"Perception: {json.dumps(tick['perception'], indent=2)}")
        print(f"Decision: {json.dumps(decision, indent=2)}")

        rating = input("Rate (c=chosen, r=rejected, s=skip): ").strip()
        if rating in ("c", "r"):
            tick["dpo_label"] = "chosen" if rating == "c" else "rejected"
            # write back... (append to annotated file)
```

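Once ticks carry `dpo_label`, turning them into training pairs is a small grouping step; a sketch under the common DPO JSONL convention (the `prompt`/`chosen`/`rejected` field names are an assumption to be matched against whatever the training pipeline in timmy-config/training/ expects):

```python
import json

def ticks_to_dpo_pairs(ticks):
    """Group labeled ticks by identical perception into DPO-style records.

    Only prompts that have BOTH a chosen and a rejected completion
    produce a pair; everything else is dropped.
    """
    by_prompt = {}
    for tick in ticks:
        label = tick.get("dpo_label")
        if label not in ("chosen", "rejected"):
            continue
        # Canonicalize the perception dict so identical states compare equal
        prompt = json.dumps(tick.get("perception", {}), sort_keys=True)
        by_prompt.setdefault(prompt, {})[label] = json.dumps(tick.get("decision", {}))
    return [
        {"prompt": p, "chosen": v["chosen"], "rejected": v["rejected"]}
        for p, v in by_prompt.items()
        if "chosen" in v and "rejected" in v
    ]
```

Writing each record as one JSON line gives a file the existing training pipeline can pick up.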
---

## 4. Dashboard — Reads Hermes Data

```python
#!/usr/bin/env python3
"""Timmy Model Dashboard — reads from Hermes, owns nothing."""

import json
import subprocess
import sys
import time
import urllib.request
from datetime import datetime
from pathlib import Path

HERMES_HOME = Path.home() / ".hermes"
TIMMY_HOME = Path.home() / ".timmy"


def get_ollama_models():
    """What's available in Ollama."""
    try:
        req = urllib.request.Request("http://localhost:11434/api/tags")
        with urllib.request.urlopen(req, timeout=5) as resp:
            return json.loads(resp.read()).get("models", [])
    except Exception:
        return []


def get_loaded_models():
    """What's actually in VRAM right now."""
    try:
        req = urllib.request.Request("http://localhost:11434/api/ps")
        with urllib.request.urlopen(req, timeout=5) as resp:
            return json.loads(resp.read()).get("models", [])
    except Exception:
        return []


def get_huey_status():
    try:
        r = subprocess.run(["pgrep", "-f", "huey_consumer"],
                           capture_output=True, timeout=5)
        return r.returncode == 0
    except Exception:
        return False


def get_hermes_sessions(hours=24):
    """Read session metadata from the Hermes session store.

    `hours` is accepted for future time filtering; for now all entries
    in sessions.json are returned.
    """
    sessions_file = HERMES_HOME / "sessions" / "sessions.json"
    if not sessions_file.exists():
        return []
    try:
        data = json.loads(sessions_file.read_text())
        return list(data.values())
    except Exception:
        return []


def get_heartbeat_ticks(date_str=None):
    """Read today's heartbeat ticks."""
    if not date_str:
        date_str = datetime.now().strftime("%Y%m%d")
    tick_file = TIMMY_HOME / "heartbeat" / f"ticks_{date_str}.jsonl"
    if not tick_file.exists():
        return []
    ticks = []
    for line in tick_file.read_text().strip().split("\n"):
        try:
            ticks.append(json.loads(line))
        except Exception:
            continue
    return ticks


def render(hours=24):
    models = get_ollama_models()
    loaded = get_loaded_models()
    huey = get_huey_status()
    sessions = get_hermes_sessions(hours)
    ticks = get_heartbeat_ticks()

    loaded_names = {m.get("name", "") for m in loaded}

    print("\033[2J\033[H")
    print("=" * 70)
    print(" TIMMY MODEL DASHBOARD")
    now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    print(f" {now} | Huey: {'UP' if huey else 'DOWN'} | Ollama models: {len(models)}")
    print("=" * 70)

    # DEPLOYMENTS
    print("\n LOCAL MODELS")
    print(" " + "-" * 55)
    for m in models:
        name = m.get("name", "?")
        size_gb = m.get("size", 0) / 1e9
        status = "IN VRAM" if name in loaded_names else "on disk"
        print(f" {name:35s} {size_gb:5.1f}GB {status}")
    if not models:
        print(" (Ollama not responding)")

    # HERMES SESSION ACTIVITY
    # Count sessions by platform/provider
    print("\n HERMES SESSIONS (recent)")
    print(" " + "-" * 55)
    local_sessions = [s for s in sessions
                      if "localhost" in str(s.get("origin", {}))]
    cli_sessions = [s for s in sessions
                    if s.get("platform") == "cli" or s.get("origin", {}).get("platform") == "cli"]

    total_tokens = sum(s.get("total_tokens", 0) for s in sessions)
    print(f" Total sessions: {len(sessions)}")
    print(f" CLI sessions: {len(cli_sessions)}")
    print(f" Local (Ollama) sessions: {len(local_sessions)}")
    print(f" Total tokens: {total_tokens:,}")

    # HEARTBEAT STATUS
    print(f"\n HEARTBEAT ({len(ticks)} ticks today)")
    print(" " + "-" * 55)
    if ticks:
        last = ticks[-1]
        decision = last.get("decision", {})
        severity = decision.get("severity", "unknown")
        reasoning = decision.get("reasoning", "no model decision yet")
        print(f" Last tick: {last.get('tick_id', '?')}")
        print(f" Severity: {severity}")
        print(f" Reasoning: {reasoning[:60]}")

        # Count model vs fallback decisions
        model_decisions = sum(1 for t in ticks
                              if t.get("decision", {}).get("severity") != "fallback")
        fallback = len(ticks) - model_decisions
        print(f" Model decisions: {model_decisions} | Fallback: {fallback}")

        # DPO labels if any
        labeled = sum(1 for t in ticks if "dpo_label" in t)
        if labeled:
            chosen = sum(1 for t in ticks if t.get("dpo_label") == "chosen")
            rejected = sum(1 for t in ticks if t.get("dpo_label") == "rejected")
            print(f" DPO labeled: {labeled} (chosen: {chosen}, rejected: {rejected})")
    else:
        print(" (no ticks today)")

    # ACTIVE LOOPS
    print("\n ACTIVE LOOPS USING LOCAL MODELS")
    print(" " + "-" * 55)
    print(" heartbeat_tick 10m hermes4:14b DECIDE phase")
    print(" (future) 15m hermes4:14b issue triage")
    print(" (future) daily timmy:v0.1 morning report")

    print("\n NON-LOCAL LOOPS (Gemini/Grok API)")
    print(" " + "-" * 55)
    print(" gemini_worker 20m gemini-2.5-pro aider")
    print(" grok_worker 20m grok-3-fast opencode")
    print(" cross_review 30m both PR review")

    print("\n" + "=" * 70)


if __name__ == "__main__":
    watch = "--watch" in sys.argv
    hours = 24
    for a in sys.argv[1:]:
        if a.startswith("--hours="):
            hours = int(a.split("=")[1])
    if watch:
        while True:
            render(hours)
            time.sleep(30)
    else:
        render(hours)
```

---

## 5. Implementation Steps

### Step 1: Add hermes_local() to tasks.py
- One function, ~20 lines
- Calls `hermes chat -q` with Ollama env vars
- All telemetry comes from Hermes for free

### Step 2: Wire heartbeat_tick DECIDE phase
- Replace 6 lines of if/else with a hermes_local() call
- Keep the hardcoded fallback for when the model is down
- Decision stored in the tick record for DPO review

### Step 3: Fix the MCP server warning
- The orchestration MCP server path is broken — harmless but noisy
- Either fix the path or remove it from config

### Step 4: Drop model_dashboard.py in timmy-config/bin/
- Reads the Ollama API, Hermes sessions, and heartbeat ticks
- No new data stores — just views over existing ones
- `python3 model_dashboard.py --watch` for a live view

### Step 5: Expand to more Huey tasks
- triage_issues: model reads the issue, picks an agent
- good_morning_report: model writes the "From Timmy" section
- Each expansion is just calling hermes_local() with a different prompt

---

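Step 5's triage task reduces to another prompt against hermes_local(); a sketch of the prompt builder only (the agent roster and wording are illustrative, not the real triage policy), reusing the same respond-only-with-JSON contract as the DECIDE phase so the same extraction and fallback logic applies:

```python
import json

AGENTS = ["claude", "gemini", "kimi"]  # illustrative roster, not the real config

def build_triage_prompt(issue_title: str, issue_body: str) -> str:
    """Build the triage prompt handed to hermes_local().

    Keeps the DECIDE-phase contract: the model must answer with a single
    JSON object so extraction and fallback stay uniform across tasks.
    """
    return (
        f"Issue: {issue_title}\n\n{issue_body}\n\n"
        f"Pick the best agent from {json.dumps(AGENTS)}.\n"
        'Respond ONLY with JSON: {"agent": "...", "reasoning": "..."}'
    )
```

The task body then becomes `hermes_local(build_triage_prompt(title, body), caller_tag="triage_issues")` plus the usual JSON parse and fallback.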
## What Gets Hotfixed in Hermes Config

If `hermes insights` is broken (the cache_read_tokens column error),
that needs a fix. The dashboard falls back to reading sessions.json
directly, but insights would be the better data source.

The `providers.ollama` section in config.yaml exists but isn't wired
to the --provider flag. Filing this upstream or patching locally would
let us do `hermes chat -q "..." --provider ollama` cleanly instead
of relying on env vars. Not blocking — env vars work today.

---

## What This Owns

- hermes_local() — a 20-line wrapper around a subprocess call
- model_dashboard.py — read-only views over existing data
- review_decisions.py — an optional DPO annotation CLI

## What This Does NOT Own

- Inference. Ollama does that.
- Telemetry. Hermes does that.
- Session storage. Hermes does that.
- Token counting. Hermes does that.
- Training pipeline. Already exists in timmy-config/training/.

---

**gitea_client.py** (new file, 539 lines)

"""
|
||||
Gitea API Client — typed, sovereign, zero-dependency.
|
||||
|
||||
Replaces raw curl calls scattered across 41 bash scripts.
|
||||
Uses only stdlib (urllib) so it works on any Python install.
|
||||
|
||||
Usage:
|
||||
from tools.gitea_client import GiteaClient
|
||||
|
||||
client = GiteaClient() # reads token from ~/.hermes/gitea_token
|
||||
issues = client.list_issues("Timmy_Foundation/the-nexus", state="open")
|
||||
client.create_comment("Timmy_Foundation/the-nexus", 42, "PR created.")
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import json
|
||||
import os
|
||||
import urllib.request
|
||||
import urllib.error
|
||||
import urllib.parse
|
||||
from dataclasses import dataclass, field
|
||||
from datetime import datetime, timezone
|
||||
from pathlib import Path
|
||||
from typing import Any, Optional
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Configuration
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
def _read_token() -> str:
|
||||
"""Read Gitea token from standard locations."""
|
||||
for path in [
|
||||
Path.home() / ".hermes" / "gitea_token",
|
||||
Path.home() / ".hermes" / "gitea_token_vps",
|
||||
Path.home() / ".config" / "gitea" / "token",
|
||||
]:
|
||||
if path.exists():
|
||||
return path.read_text().strip()
|
||||
raise FileNotFoundError(
|
||||
"No Gitea token found. Checked: ~/.hermes/gitea_token, "
|
||||
"~/.hermes/gitea_token_vps, ~/.config/gitea/token"
|
||||
)
|
||||
|
||||
|
||||
def _read_base_url() -> str:
|
||||
"""Read Gitea base URL. Defaults to the VPS."""
|
||||
env = os.environ.get("GITEA_URL")
|
||||
if env:
|
||||
return env.rstrip("/")
|
||||
return "http://143.198.27.163:3000"
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
# Data classes — typed responses
# ---------------------------------------------------------------------------

@dataclass
class User:
    id: int
    login: str
    full_name: str = ""
    email: str = ""

    @classmethod
    def from_dict(cls, d: dict) -> "User":
        return cls(
            id=d.get("id", 0),
            login=d.get("login", ""),
            full_name=d.get("full_name", ""),
            email=d.get("email", ""),
        )


@dataclass
class Label:
    id: int
    name: str
    color: str = ""

    @classmethod
    def from_dict(cls, d: dict) -> "Label":
        return cls(id=d.get("id", 0), name=d.get("name", ""), color=d.get("color", ""))


@dataclass
class Issue:
    number: int
    title: str
    body: str
    state: str
    user: User
    assignees: list[User] = field(default_factory=list)
    labels: list[Label] = field(default_factory=list)
    created_at: str = ""
    updated_at: str = ""
    comments: int = 0

    @classmethod
    def from_dict(cls, d: dict) -> "Issue":
        return cls(
            number=d.get("number", 0),
            title=d.get("title", ""),
            body=d.get("body", "") or "",
            state=d.get("state", ""),
            user=User.from_dict(d.get("user", {})),
            assignees=[User.from_dict(a) for a in d.get("assignees", []) or []],
            labels=[Label.from_dict(lb) for lb in d.get("labels", []) or []],
            created_at=d.get("created_at", ""),
            updated_at=d.get("updated_at", ""),
            comments=d.get("comments", 0),
        )


@dataclass
class Comment:
    id: int
    body: str
    user: User
    created_at: str = ""

    @classmethod
    def from_dict(cls, d: dict) -> "Comment":
        return cls(
            id=d.get("id", 0),
            body=d.get("body", "") or "",
            user=User.from_dict(d.get("user", {})),
            created_at=d.get("created_at", ""),
        )


@dataclass
class PullRequest:
    number: int
    title: str
    body: str
    state: str
    user: User
    head_branch: str = ""
    base_branch: str = ""
    mergeable: bool = False
    merged: bool = False
    changed_files: int = 0

    @classmethod
    def from_dict(cls, d: dict) -> "PullRequest":
        head = d.get("head", {}) or {}
        base = d.get("base", {}) or {}
        return cls(
            number=d.get("number", 0),
            title=d.get("title", ""),
            body=d.get("body", "") or "",
            state=d.get("state", ""),
            user=User.from_dict(d.get("user", {})),
            head_branch=head.get("ref", ""),
            base_branch=base.get("ref", ""),
            mergeable=d.get("mergeable", False),
            merged=d.get("merged", False) or False,
            changed_files=d.get("changed_files", 0),
        )


@dataclass
class PRFile:
    filename: str
    status: str  # added, modified, deleted
    additions: int = 0
    deletions: int = 0

    @classmethod
    def from_dict(cls, d: dict) -> "PRFile":
        return cls(
            filename=d.get("filename", ""),
            status=d.get("status", ""),
            additions=d.get("additions", 0),
            deletions=d.get("deletions", 0),
        )


# ---------------------------------------------------------------------------
# Client
# ---------------------------------------------------------------------------


class GiteaError(Exception):
    """Gitea API error with status code."""

    def __init__(self, status: int, message: str, url: str = ""):
        self.status = status
        self.url = url
        super().__init__(f"Gitea {status}: {message} [{url}]")


class GiteaClient:
    """
    Typed Gitea API client. Sovereign, zero-dependency.

    Covers all operations the agent loops need:
    - Issues: list, get, create, update, close, assign, label, comment
    - PRs: list, get, create, merge, update, close, files
    - Repos: list org repos
    """

    def __init__(
        self,
        base_url: Optional[str] = None,
        token: Optional[str] = None,
    ):
        self.base_url = base_url or _read_base_url()
        self.token = token or _read_token()
        self.api = f"{self.base_url}/api/v1"

    # -- HTTP layer ----------------------------------------------------------

    def _request(
        self,
        method: str,
        path: str,
        data: Optional[dict] = None,
        params: Optional[dict] = None,
    ) -> Any:
        """Make an authenticated API request. Returns parsed JSON."""
        url = f"{self.api}{path}"
        if params:
            url += "?" + urllib.parse.urlencode(params)

        body = json.dumps(data).encode() if data else None
        req = urllib.request.Request(url, data=body, method=method)
        req.add_header("Authorization", f"token {self.token}")
        req.add_header("Content-Type", "application/json")
        req.add_header("Accept", "application/json")

        try:
            with urllib.request.urlopen(req, timeout=30) as resp:
                raw = resp.read().decode()
                if not raw:
                    return {}
                return json.loads(raw)
        except urllib.error.HTTPError as e:
            body_text = ""
            try:
                body_text = e.read().decode()
            except Exception:
                pass
            raise GiteaError(e.code, body_text, url) from e

    def _get(self, path: str, **params) -> Any:
        # Filter out None values
        clean = {k: v for k, v in params.items() if v is not None}
        return self._request("GET", path, params=clean)

    def _post(self, path: str, data: dict) -> Any:
        return self._request("POST", path, data=data)

    def _patch(self, path: str, data: dict) -> Any:
        return self._request("PATCH", path, data=data)

    def _delete(self, path: str) -> Any:
        return self._request("DELETE", path)

    def _repo_path(self, repo: str) -> str:
        """Convert 'owner/name' to '/repos/owner/name'."""
        return f"/repos/{repo}"

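The `_get` helper drops `None` values before the query string is built, so optional filters that were never supplied simply vanish from the URL. A standalone sketch of that behavior, using only the stdlib (the parameter values here are illustrative):

```python
import urllib.parse

params = {"state": "open", "assignee": None, "labels": None, "limit": 30}
# Same filtering _get applies: None means "not specified", so drop the key
clean = {k: v for k, v in params.items() if v is not None}
query = urllib.parse.urlencode(clean)
print(query)  # state=open&limit=30
```

Without the filter, `urlencode` would serialize `assignee=None` literally, which Gitea would treat as a filter for a user named "None".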
    # -- Health --------------------------------------------------------------

    def ping(self) -> bool:
        """Check if Gitea is responding."""
        try:
            self._get("/version")
            return True
        except Exception:
            return False

    # -- Repos ---------------------------------------------------------------

    def list_org_repos(self, org: str, limit: int = 50) -> list[dict]:
        """List repos in an organization."""
        return self._get(f"/orgs/{org}/repos", limit=limit)

    # -- Issues --------------------------------------------------------------

    def list_issues(
        self,
        repo: str,
        state: str = "open",
        assignee: Optional[str] = None,
        labels: Optional[str] = None,
        sort: str = "created",
        direction: str = "desc",
        limit: int = 30,
        page: int = 1,
    ) -> list[Issue]:
        """List issues for a repo."""
        raw = self._get(
            f"{self._repo_path(repo)}/issues",
            state=state,
            type="issues",
            assignee=assignee,
            labels=labels,
            sort=sort,
            direction=direction,
            limit=limit,
            page=page,
        )
        return [Issue.from_dict(i) for i in raw]

    def get_issue(self, repo: str, number: int) -> Issue:
        """Get a single issue."""
        return Issue.from_dict(
            self._get(f"{self._repo_path(repo)}/issues/{number}")
        )

    def create_issue(
        self,
        repo: str,
        title: str,
        body: str = "",
        labels: Optional[list[int]] = None,
        assignees: Optional[list[str]] = None,
    ) -> Issue:
        """Create an issue."""
        data: dict[str, Any] = {"title": title, "body": body}
        if labels:
            data["labels"] = labels
        if assignees:
            data["assignees"] = assignees
        return Issue.from_dict(
            self._post(f"{self._repo_path(repo)}/issues", data)
        )

    def update_issue(
        self,
        repo: str,
        number: int,
        title: Optional[str] = None,
        body: Optional[str] = None,
        state: Optional[str] = None,
        assignees: Optional[list[str]] = None,
    ) -> Issue:
        """Update an issue (title, body, state, assignees)."""
        data: dict[str, Any] = {}
        if title is not None:
            data["title"] = title
        if body is not None:
            data["body"] = body
        if state is not None:
            data["state"] = state
        if assignees is not None:
            data["assignees"] = assignees
        return Issue.from_dict(
            self._patch(f"{self._repo_path(repo)}/issues/{number}", data)
        )

    def close_issue(self, repo: str, number: int) -> Issue:
        """Close an issue."""
        return self.update_issue(repo, number, state="closed")

    def assign_issue(self, repo: str, number: int, assignees: list[str]) -> Issue:
        """Assign users to an issue."""
        return self.update_issue(repo, number, assignees=assignees)

    def add_labels(self, repo: str, number: int, label_ids: list[int]) -> list[Label]:
        """Add labels to an issue."""
        raw = self._post(
            f"{self._repo_path(repo)}/issues/{number}/labels",
            {"labels": label_ids},
        )
        return [Label.from_dict(lb) for lb in raw]

    # -- Comments ------------------------------------------------------------

    def list_comments(
        self, repo: str, number: int, since: Optional[str] = None
    ) -> list[Comment]:
        """List comments on an issue."""
        raw = self._get(
            f"{self._repo_path(repo)}/issues/{number}/comments",
            since=since,
        )
        return [Comment.from_dict(c) for c in raw]

    def create_comment(self, repo: str, number: int, body: str) -> Comment:
        """Add a comment to an issue."""
        return Comment.from_dict(
            self._post(
                f"{self._repo_path(repo)}/issues/{number}/comments",
                {"body": body},
            )
        )

    # -- Pull Requests -------------------------------------------------------

    def list_pulls(
        self,
        repo: str,
        state: str = "open",
        sort: str = "newest",
        limit: int = 20,
        page: int = 1,
    ) -> list[PullRequest]:
        """List pull requests."""
        raw = self._get(
            f"{self._repo_path(repo)}/pulls",
            state=state,
            sort=sort,
            limit=limit,
            page=page,
        )
        return [PullRequest.from_dict(p) for p in raw]

    def get_pull(self, repo: str, number: int) -> PullRequest:
        """Get a single pull request."""
        return PullRequest.from_dict(
            self._get(f"{self._repo_path(repo)}/pulls/{number}")
        )

    def create_pull(
        self,
        repo: str,
        title: str,
        head: str,
        base: str = "main",
        body: str = "",
    ) -> PullRequest:
        """Create a pull request."""
        return PullRequest.from_dict(
            self._post(
                f"{self._repo_path(repo)}/pulls",
                {"title": title, "head": head, "base": base, "body": body},
            )
        )

    def merge_pull(
        self,
        repo: str,
        number: int,
        method: str = "squash",
        delete_branch: bool = True,
    ) -> bool:
        """Merge a pull request. Returns True on success."""
        try:
            self._post(
                f"{self._repo_path(repo)}/pulls/{number}/merge",
                {"Do": method, "delete_branch_after_merge": delete_branch},
            )
            return True
        except GiteaError as e:
            if e.status == 405:  # not mergeable
                return False
            raise

    def update_pull_branch(
        self, repo: str, number: int, style: str = "rebase"
    ) -> bool:
        """Update a PR branch (rebase onto base). Returns True on success."""
        try:
            self._post(
                f"{self._repo_path(repo)}/pulls/{number}/update",
                {"style": style},
            )
            return True
        except GiteaError:
            return False

    def close_pull(self, repo: str, number: int) -> PullRequest:
        """Close a pull request without merging."""
        return PullRequest.from_dict(
            self._patch(
                f"{self._repo_path(repo)}/pulls/{number}",
                {"state": "closed"},
            )
        )

    def get_pull_files(self, repo: str, number: int) -> list[PRFile]:
        """Get files changed in a pull request."""
        raw = self._get(f"{self._repo_path(repo)}/pulls/{number}/files")
        return [PRFile.from_dict(f) for f in raw]

    def find_pull_by_branch(
        self, repo: str, branch: str
    ) -> Optional[PullRequest]:
        """Find an open PR for a given head branch."""
        prs = self.list_pulls(repo, state="open", limit=50)
        for pr in prs:
            if pr.head_branch == branch:
                return pr
        return None

    # -- Convenience ---------------------------------------------------------

    def get_issue_with_comments(
        self, repo: str, number: int, last_n: int = 5
    ) -> tuple[Issue, list[Comment]]:
        """Get an issue and its most recent comments."""
        issue = self.get_issue(repo, number)
        comments = self.list_comments(repo, number)
        return issue, comments[-last_n:] if len(comments) > last_n else comments

    def find_unassigned_issues(
        self,
        repo: str,
        limit: int = 30,
        exclude_labels: Optional[list[str]] = None,
        exclude_title_patterns: Optional[list[str]] = None,
    ) -> list[Issue]:
        """Find open issues not assigned to anyone."""
        issues = self.list_issues(repo, state="open", limit=limit)
        result = []
        for issue in issues:
            if issue.assignees:
                continue
            if exclude_labels:
                issue_label_names = {lb.name for lb in issue.labels}
                if issue_label_names & set(exclude_labels):
                    continue
            if exclude_title_patterns:
                title_lower = issue.title.lower()
                if any(p.lower() in title_lower for p in exclude_title_patterns):
                    continue
            result.append(issue)
        return result

    def find_agent_issues(self, repo: str, agent: str, limit: int = 50) -> list[Issue]:
        """Find open issues assigned to a specific agent.

        Gitea's assignee query can return stale or misleading results, so we
        always post-filter on the actual assignee list in the returned issue.
        """
        issues = self.list_issues(repo, state="open", assignee=agent, limit=limit)
        agent_lower = agent.lower()
        return [
            issue for issue in issues
            if any((assignee.login or "").lower() == agent_lower for assignee in issue.assignees)
        ]

    def find_agent_pulls(self, repo: str, agent: str) -> list[PullRequest]:
        """Find open PRs created by a specific agent."""
        prs = self.list_pulls(repo, state="open", limit=50)
        return [pr for pr in prs if pr.user.login == agent]
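`find_agent_issues` deliberately re-checks assignees client-side instead of trusting Gitea's `assignee` query parameter. The core of that post-filter can be sketched standalone on plain dicts (the data and helper name here are hypothetical, not the real `Issue` dataclass):

```python
def filter_by_assignee(issues: list[dict], agent: str) -> list[dict]:
    # Case-insensitive match against the actual assignee list,
    # mirroring the post-filter inside find_agent_issues
    agent_lower = agent.lower()
    return [
        issue for issue in issues
        if any((a.get("login") or "").lower() == agent_lower
               for a in issue.get("assignees", []))
    ]

issues = [
    {"number": 74, "assignees": [{"login": "Gemini"}]},
    {"number": 75, "assignees": [{"login": "grok"}, {"login": "Timmy"}]},
    {"number": 76, "assignees": []},
]
print([i["number"] for i in filter_by_assignee(issues, "gemini")])  # [74]
```

The `or ""` guard matters: Gitea can return assignee objects with a null `login`, and the filter must skip them rather than crash on `.lower()`.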
266529  logs/huey.error.log  (new file; diff suppressed, too large to display)
0       logs/huey.log        (new file)
@@ -4,7 +4,7 @@ description: >
   reproduces the bug, then fixes the code, then verifies.
 
 model:
-  preferred: claude-opus-4-6
+  preferred: qwen3:30b
   fallback: claude-sonnet-4-20250514
   max_turns: 30
   temperature: 0.2

@@ -4,7 +4,7 @@ description: >
   agents. Decomposes large issues into smaller ones.
 
 model:
-  preferred: claude-opus-4-6
+  preferred: qwen3:30b
   fallback: claude-sonnet-4-20250514
   max_turns: 20
   temperature: 0.3

@@ -4,7 +4,7 @@ description: >
   comments on problems. The merge bot replacement.
 
 model:
-  preferred: claude-opus-4-6
+  preferred: qwen3:30b
   fallback: claude-sonnet-4-20250514
   max_turns: 20
   temperature: 0.2

@@ -4,7 +4,7 @@ description: >
   Well-scoped: 1-3 files per task, clear acceptance criteria.
 
 model:
-  preferred: claude-opus-4-6
+  preferred: qwen3:30b
   fallback: claude-sonnet-4-20250514
   max_turns: 30
   temperature: 0.3

@@ -4,7 +4,7 @@ description: >
   dependency issues. Files findings as Gitea issues.
 
 model:
-  preferred: claude-opus-4-6
+  preferred: qwen3:30b
   fallback: claude-opus-4-6
   max_turns: 40
   temperature: 0.2

@@ -4,7 +4,7 @@ description: >
   writes meaningful tests, verifies they pass.
 
 model:
-  preferred: claude-opus-4-6
+  preferred: qwen3:30b
   fallback: claude-sonnet-4-20250514
   max_turns: 30
   temperature: 0.3
47  playbooks/verified-logic.yaml  (new file)
@@ -0,0 +1,47 @@
name: verified-logic
description: >
  Crucible-first playbook for tasks that require proof instead of plausible prose.
  Use Z3-backed sidecar tools for scheduling, dependency ordering, capacity checks,
  and consistency verification.

model:
  preferred: claude-opus-4-6
  fallback: claude-sonnet-4-20250514
  max_turns: 12
  temperature: 0.1

tools:
  - mcp_crucible_schedule_tasks
  - mcp_crucible_order_dependencies
  - mcp_crucible_capacity_fit

trigger:
  manual: true

steps:
  - classify_problem
  - choose_template
  - translate_into_constraints
  - verify_with_crucible
  - report_sat_unsat_with_witness

output: verified_result
timeout_minutes: 5

system_prompt: |
  You are running the Crucible playbook.

  Use this playbook for:
  - scheduling and deadline feasibility
  - dependency ordering and cycle checks
  - capacity / resource allocation constraints
  - consistency checks where a contradiction matters

  RULES:
  1. Do not bluff through logic.
  2. Pick the narrowest Crucible template that fits the task.
  3. Translate the user's question into structured constraints.
  4. Call the Crucible tool.
  5. If SAT, report the witness model clearly.
  6. If UNSAT, say the constraints are impossible and explain which shape of constraint caused the contradiction.
  7. If the task is not a good fit for these templates, say so plainly instead of pretending it was verified.
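The playbook's contract is SAT-with-witness or UNSAT, nothing in between. As a rough illustration of that shape, here is a brute-force stand-in for what a capacity-fit check might do (the function and data are hypothetical; the real `mcp_crucible_capacity_fit` tool would use Z3 rather than enumeration):

```python
from itertools import product

def capacity_fit(tasks: dict[str, int], bins: dict[str, int]):
    """Assign each task to a bin without exceeding any bin's capacity.

    Returns (True, assignment) on SAT, (False, None) on UNSAT.
    Illustration only: enumerates assignments instead of solving with Z3.
    """
    names = list(tasks)
    for combo in product(bins, repeat=len(names)):
        load = {b: 0 for b in bins}
        for name, b in zip(names, combo):
            load[b] += tasks[name]
        if all(load[b] <= cap for b, cap in bins.items()):
            return True, dict(zip(names, combo))  # SAT: witness assignment
    return False, None  # UNSAT: no assignment fits

sat, witness = capacity_fit({"a": 3, "b": 2}, {"x": 4, "y": 2})
print(sat, witness)  # True {'a': 'x', 'b': 'y'}
```

The point of the contract is the witness: a SAT answer comes with a concrete model the caller can check, and an UNSAT answer is a hard "impossible", not a hedge.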
44  tests/test_gitea_assignee_filter.py  (new file)
@@ -0,0 +1,44 @@
from gitea_client import GiteaClient, Issue, User


def _issue(number: int, assignees: list[str]) -> Issue:
    return Issue(
        number=number,
        title=f"Issue {number}",
        body="",
        state="open",
        user=User(id=1, login="Timmy"),
        assignees=[User(id=i + 10, login=name) for i, name in enumerate(assignees)],
        labels=[],
    )


def test_find_agent_issues_filters_actual_assignees(monkeypatch):
    client = GiteaClient(base_url="http://example.invalid", token="test-token")

    returned = [
        _issue(73, ["Timmy"]),
        _issue(74, ["gemini"]),
        _issue(75, ["grok", "Timmy"]),
        _issue(76, []),
    ]

    monkeypatch.setattr(client, "list_issues", lambda *args, **kwargs: returned)

    gemini_issues = client.find_agent_issues("Timmy_Foundation/timmy-config", "gemini")
    grok_issues = client.find_agent_issues("Timmy_Foundation/timmy-config", "grok")
    kimi_issues = client.find_agent_issues("Timmy_Foundation/timmy-config", "kimi")

    assert [issue.number for issue in gemini_issues] == [74]
    assert [issue.number for issue in grok_issues] == [75]
    assert kimi_issues == []


def test_find_agent_issues_is_case_insensitive(monkeypatch):
    client = GiteaClient(base_url="http://example.invalid", token="test-token")
    returned = [_issue(80, ["Gemini"])]
    monkeypatch.setattr(client, "list_issues", lambda *args, **kwargs: returned)

    issues = client.find_agent_issues("Timmy_Foundation/the-nexus", "gemini")

    assert [issue.number for issue in issues] == [80]
21  tests/test_local_runtime_defaults.py  (new file)
@@ -0,0 +1,21 @@
from __future__ import annotations

from pathlib import Path

import yaml


def test_config_defaults_to_local_llama_cpp_runtime() -> None:
    config = yaml.safe_load(Path("config.yaml").read_text())

    assert config["model"]["provider"] == "custom"
    assert config["model"]["default"] == "hermes4:14b"
    assert config["model"]["base_url"] == "http://localhost:8081/v1"

    local_provider = next(
        entry for entry in config["custom_providers"] if entry["name"] == "Local llama.cpp"
    )
    assert local_provider["model"] == "hermes4:14b"

    assert config["fallback_model"]["provider"] == "custom"
    assert config["fallback_model"]["model"] == "gemini-2.5-pro"
@@ -1,8 +1,11 @@
 # Training
 
-LoRA fine-tuning pipeline for Timmy's sovereign model. No custom harness — just config files for existing tools.
+Transitional training recipes for Timmy's sovereign model. These files are
+useful as reference configs and export helpers, but they are not the canonical
+home of Timmy's lived training data.
 
-Replaces the `autolora` repo (1,500 lines of custom code → config + `make`).
+Canonical data should live in `timmy-home` under gameplay trajectories,
+research artifacts, and `training-data/` exports such as DPO pairs.
 
 ## Install
 
@@ -23,6 +26,16 @@ make convert # Convert merged data to MLX train/valid format
 make help    # Show all targets
 ```
 
+## Status
+
+This directory exists to avoid re-growing a bespoke training harness while the
+system boundary is being cleaned up.
+
+- Keep thin recipes and export helpers here only when they directly support the
+  Hermes sidecar.
+- Keep generated data, DPO pairs, and other lived artifacts in `timmy-home`.
+
 ## Files
 
 ```