Compare commits: efb172c5c0 ... gemini/iss

1 commit (422d7700d6)
.gitignore (vendored, 1 change)

```diff
@@ -8,3 +8,4 @@
 *.db-wal
 *.db-shm
 __pycache__/
+.aider*
```
```diff
@@ -1,7 +1,7 @@
 # DEPRECATED — Bash Loop Scripts Removed
 
 **Date:** 2026-03-25
-**Reason:** Replaced by Hermes + timmy-config sidecar orchestration
+**Reason:** Replaced by sovereign-orchestration (SQLite + Python single-process executor)
 
 ## What was removed
 - claude-loop.sh, gemini-loop.sh, agent-loop.sh
@@ -9,15 +9,14 @@
 - nexus-merge-bot.sh, claudemax-watchdog.sh, timmy-loopstat.sh
 
 ## What replaces them
-**Harness:** Hermes
-**Overlay repo:** Timmy_Foundation/timmy-config
-**Entry points:** `orchestration.py`, `tasks.py`, `deploy.sh`
-**Features:** Huey + SQLite scheduling, local-model health checks, session export, DPO artifact staging
+**Repo:** Timmy_Foundation/sovereign-orchestration
+**Entry point:** `python3 src/sovereign_executor.py --workers 3 --poll 30`
+**Features:** SQLite task queue, crash recovery, dedup, playbooks, MCP server
+**Issues:** #29 (fix imports), #30 (deploy as service)
 
 ## Why
 The bash loops crash-looped, produced zero work after relaunch, had no crash
-recovery, no durable export path, and required too many ad hoc scripts. The
-Hermes sidecar keeps orchestration close to Timmy's actual config and training
-surfaces.
+recovery, no dedup, and required 8 separate scripts. The Python executor is
+one process with SQLite durability.
 
-Do NOT recreate bash loops. If orchestration is broken, fix the Hermes sidecar.
+Do NOT recreate bash loops. If the executor is broken, fix the executor.
```
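The "SQLite task queue, crash recovery, dedup" combination named in the DEPRECATED.md diff can be sketched with stdlib sqlite3. The table name, column names, and the pending/running states below are illustrative assumptions, not the actual sovereign-orchestration schema.

```python
import sqlite3

# Illustrative sketch only: table and column names are assumptions,
# not the real sovereign-orchestration schema.
def open_queue(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute(
        """CREATE TABLE IF NOT EXISTS tasks (
               key    TEXT PRIMARY KEY,   -- dedup: a given key is queued once
               status TEXT NOT NULL DEFAULT 'pending'
           )"""
    )
    return db

def enqueue(db, key):
    # INSERT OR IGNORE gives dedup: re-enqueueing a known task is a no-op.
    db.execute("INSERT OR IGNORE INTO tasks (key) VALUES (?)", (key,))
    db.commit()

def claim(db):
    # Claim one pending task and mark it running.
    row = db.execute(
        "SELECT key FROM tasks WHERE status = 'pending' LIMIT 1"
    ).fetchone()
    if row is None:
        return None
    db.execute("UPDATE tasks SET status = 'running' WHERE key = ?", (row[0],))
    db.commit()
    return row[0]

def recover(db):
    # Crash recovery: tasks left 'running' by a dead process are re-queued.
    db.execute("UPDATE tasks SET status = 'pending' WHERE status = 'running'")
    db.commit()
```

Because the queue lives in one SQLite file, a restarted executor can call `recover()` on boot and resume exactly where the crashed process stopped.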
README.md (26 changes)

````diff
@@ -2,7 +2,7 @@
 
 Timmy's sovereign configuration. Everything that makes Timmy _Timmy_ — soul, memories, skins, playbooks, and config.
 
-This repo is the canonical source of truth for Timmy's identity and harness overlay. Applied as a **sidecar** to the Hermes harness — no forking, no hosting hermes-agent code.
+This repo is the canonical source of truth for Timmy's identity and operational state. Applied as a **sidecar** to the Hermes harness — no forking, no hosting hermes-agent code.
 
 ## Structure
 
@@ -14,40 +14,22 @@ timmy-config/
 ├── DEPRECATED.md          ← What was removed and why
 ├── config.yaml            ← Hermes harness configuration
 ├── channel_directory.json ← Platform channel mappings
-├── bin/                   ← Live utility scripts (NOT deprecated loops)
+├── bin/                   ← Utility scripts (NOT loops — see below)
 │   ├── hermes-startup.sh      ← Hermes boot sequence
 │   ├── agent-dispatch.sh      ← Manual agent dispatch
 │   ├── ops-panel.sh           ← Ops dashboard panel
 │   ├── ops-gitea.sh           ← Gitea ops helpers
 │   ├── pipeline-freshness.sh  ← Session/export drift check
 │   └── timmy-status.sh        ← Status check
 ├── memories/              ← Persistent memory YAML
 ├── skins/                 ← UI skins (timmy skin)
 ├── playbooks/             ← Agent playbooks (YAML)
-├── cron/                  ← Cron job definitions
-└── training/              ← Transitional training recipes, not canonical lived data
+└── cron/                  ← Cron job definitions
 ```
 
 ## Boundary
 
 `timmy-config` owns identity, conscience, memories, skins, playbooks, channel
 maps, and harness-side orchestration glue.
 
 `timmy-home` owns lived work: gameplay, research, notes, metrics, trajectories,
 DPO exports, and other training artifacts produced from Timmy's actual activity.
 
 If a file answers "who is Timmy?" or "how does Hermes host him?", it belongs
 here. If it answers "what has Timmy done or learned?" it belongs in
 `timmy-home`.
 
 The scripts in `bin/` are live operational helpers for the Hermes sidecar.
 What is dead are the old long-running bash worker loops, not every script in
 this repo.
 
 ## Orchestration: Huey
 
 All orchestration (triage, PR review, dispatch) runs via [Huey](https://github.com/coleifer/huey) with SQLite.
-`orchestration.py` + `tasks.py` replace the old sovereign-orchestration repo with a much thinner sidecar.
+`orchestration.py` (6 lines) + `tasks.py` (~70 lines) replace the entire sovereign-orchestration repo (3,846 lines).
 
 ```bash
 pip install huey
````
```diff
@@ -1,42 +0,0 @@
-#!/usr/bin/env bash
-
-set -euo pipefail
-
-SESSIONS_DIR="$HOME/.hermes/sessions"
-EXPORT_DIR="$HOME/.timmy/training-data/dpo-pairs"
-
-latest_session=$(find "$SESSIONS_DIR" -maxdepth 1 -name 'session_*.json' -type f -print 2>/dev/null | sort | tail -n 1)
-latest_export=$(find "$EXPORT_DIR" -maxdepth 1 -name 'session_*.json' -type f -print 2>/dev/null | sort | tail -n 1)
-
-echo "latest_session=${latest_session:-none}"
-echo "latest_export=${latest_export:-none}"
-
-if [ -z "${latest_session:-}" ]; then
-  echo "status=ok"
-  echo "reason=no sessions yet"
-  exit 0
-fi
-
-if [ -z "${latest_export:-}" ]; then
-  echo "status=lagging"
-  echo "reason=no exports yet"
-  exit 1
-fi
-
-session_mtime=$(stat -f '%m' "$latest_session")
-export_mtime=$(stat -f '%m' "$latest_export")
-lag_minutes=$(( (session_mtime - export_mtime) / 60 ))
-if [ "$lag_minutes" -lt 0 ]; then
-  lag_minutes=0
-fi
-
-echo "lag_minutes=$lag_minutes"
-
-if [ "$lag_minutes" -gt 300 ]; then
-  echo "status=lagging"
-  echo "reason=exports more than 5 hours behind sessions"
-  exit 1
-fi
-
-echo "status=ok"
-echo "reason=exports within freshness window"
```
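The freshness logic in the deleted script above (newest session vs newest export, lag in minutes, 300-minute threshold) can be sketched in Python. The function and parameter names here are illustrative; directories are passed in rather than hard-coded.

```python
from pathlib import Path

LAG_THRESHOLD_MINUTES = 300  # same 5-hour window as the shell script

def newest(directory, pattern):
    files = sorted(Path(directory).glob(pattern))
    return files[-1] if files else None

def export_freshness(sessions_dir, export_dir, pattern="session_*.json"):
    """Return (status, lag_minutes) comparing newest session to newest export."""
    latest_session = newest(sessions_dir, pattern)
    latest_export = newest(export_dir, pattern)
    if latest_session is None:
        return "ok", 0          # nothing to export yet
    if latest_export is None:
        return "lagging", None  # sessions exist but no exports at all
    lag = (latest_session.stat().st_mtime - latest_export.stat().st_mtime) / 60
    lag = max(0, int(lag))      # clamp negative lag, as the script does
    return ("lagging" if lag > LAG_THRESHOLD_MINUTES else "ok"), lag
```

Unlike the shell version, this avoids the BSD-only `stat -f '%m'` invocation by reading mtimes through `pathlib`, so it is portable across platforms.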
```diff
@@ -1,5 +1,5 @@
 {
-  "updated_at": "2026-03-27T15:20:52.948451",
+  "updated_at": "2026-03-26T10:19:33.045324",
   "platforms": {
     "discord": [
       {
```
config.yaml (23 changes)

```diff
@@ -1,13 +1,11 @@
 model:
-  default: auto
-  provider: custom
-  context_length: 65536
-  base_url: http://localhost:8081/v1
+  default: claude-opus-4-6
+  provider: anthropic
 toolsets:
 - all
 agent:
   max_turns: 30
-  reasoning_effort: xhigh
+  reasoning_effort: medium
 verbose: false
 terminal:
   backend: local
@@ -96,13 +94,11 @@ display:
   compact: false
   personality: ''
   resume_display: full
   busy_input_mode: interrupt
   bell_on_complete: false
   show_reasoning: false
   streaming: false
   show_cost: false
   skin: timmy
   tool_progress_command: false
   tool_progress: all
 privacy:
   redact_pii: false
@@ -185,17 +181,17 @@ session_reset:
   mode: none
   idle_minutes: 0
 custom_providers:
-- name: Local llama.cpp
-  base_url: http://localhost:8081/v1
-  api_key: none
-  model: auto
+- name: Local Ollama
+  base_url: http://localhost:11434/v1
+  api_key: ollama
+  model: glm-4.7-flash:latest
 - name: Google Gemini
   base_url: https://generativelanguage.googleapis.com/v1beta/openai
   api_key_env: GEMINI_API_KEY
   model: gemini-2.5-pro
 system_prompt_suffix: "You are Timmy. Your soul is defined in SOUL.md \u2014 read\
-  \ it, live it.\nYou run locally on your owner's machine via llama.cpp. You never\
-  \ phone home.\nYou speak plainly. You prefer short sentences. Brevity is a kindness.\n\
+  \ it, live it.\nYou run locally on your owner's machine via Ollama. You never phone\
+  \ home.\nYou speak plainly. You prefer short sentences. Brevity is a kindness.\n\
   When you don't know something, say so. Refusal over fabrication.\nSovereignty and\
   \ service always.\n"
 skills:
@@ -206,6 +202,7 @@ providers:
   base_url: http://localhost:11434/v1
   model: hermes3:latest
 mcp_servers:
 
   morrowind:
     command: python3
     args:
```
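The `custom_providers` entries in config.yaml point the harness at OpenAI-compatible endpoints (llama.cpp's llama-server or Ollama). A request against such an endpoint can be sketched with stdlib urllib; the helper below only builds the request, and its name is an assumption for illustration, not part of the repo.

```python
import json
import urllib.request

def build_chat_request(base_url, model, prompt, api_key="none"):
    """Build an OpenAI-compatible /chat/completions request (stdlib only)."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 5,
        "stream": False,
    }).encode()
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

# Sending it requires a running server, so the call is shown commented out:
# with urllib.request.urlopen(build_chat_request(
#         "http://localhost:11434/v1", "glm-4.7-flash:latest", "ping")) as resp:
#     reply = json.loads(resp.read())["choices"][0]["message"]["content"]
```

The same request shape works against either provider entry; only `base_url`, `model`, and the API key differ, which is exactly what the YAML parameterizes.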
deploy.sh (24 changes)

```diff
@@ -3,7 +3,7 @@
 # This is the canonical way to deploy Timmy's configuration.
 # Hermes-agent is the engine. timmy-config is the driver's seat.
 #
-# Usage: ./deploy.sh
+# Usage: ./deploy.sh [--restart-loops]
 
 set -euo pipefail
 
@@ -74,10 +74,24 @@ done
 chmod +x "$HERMES_HOME/bin/"*.sh "$HERMES_HOME/bin/"*.py 2>/dev/null || true
 log "bin/ -> $HERMES_HOME/bin/"
 
-if [ "${1:-}" != "" ]; then
-  echo "ERROR: deploy.sh no longer accepts legacy loop flags." >&2
-  echo "Deploy the sidecar only. Do not relaunch deprecated bash loops." >&2
-  exit 1
+# === Restart loops if requested ===
+if [ "${1:-}" = "--restart-loops" ]; then
+  log "Killing existing loops..."
+  pkill -f 'claude-loop.sh' 2>/dev/null || true
+  pkill -f 'gemini-loop.sh' 2>/dev/null || true
+  pkill -f 'timmy-orchestrator.sh' 2>/dev/null || true
+  sleep 2
+
+  log "Clearing stale locks..."
+  rm -rf "$HERMES_HOME/logs/claude-locks/"* 2>/dev/null || true
+  rm -rf "$HERMES_HOME/logs/gemini-locks/"* 2>/dev/null || true
+
+  log "Relaunching loops..."
+  nohup bash "$HERMES_HOME/bin/timmy-orchestrator.sh" >> "$HERMES_HOME/logs/timmy-orchestrator.log" 2>&1 &
+  nohup bash "$HERMES_HOME/bin/claude-loop.sh" 2 >> "$HERMES_HOME/logs/claude-loop.log" 2>&1 &
+  nohup bash "$HERMES_HOME/bin/gemini-loop.sh" 1 >> "$HERMES_HOME/logs/gemini-loop.log" 2>&1 &
+  sleep 1
+  log "Loops relaunched."
 fi
 
 log "Deploy complete. timmy-config applied to $HERMES_HOME/"
```
```diff
@@ -5,9 +5,9 @@ Replaces raw curl calls scattered across 41 bash scripts.
 Uses only stdlib (urllib) so it works on any Python install.
 
 Usage:
-    from gitea_client import GiteaClient
+    from tools.gitea_client import GiteaClient
 
-    client = GiteaClient()  # reads token from standard local paths
+    client = GiteaClient()  # reads token from ~/.hermes/gitea_token
     issues = client.list_issues("Timmy_Foundation/the-nexus", state="open")
     client.create_comment("Timmy_Foundation/the-nexus", 42, "PR created.")
 """
```
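A stdlib-only client along the lines of the docstring above can be sketched as follows. The endpoint paths follow the Gitea REST API, but the constructor arguments and the request-building split are illustrative assumptions, not the repo's actual gitea_client implementation.

```python
import json
import urllib.request

class GiteaClient:
    """Minimal stdlib sketch of a Gitea API client (illustrative only)."""

    def __init__(self, base_url, token):
        self.base_url = base_url.rstrip("/")
        self.token = token

    def _request(self, path, payload=None):
        # Gitea accepts "Authorization: token <token>" for API calls.
        data = json.dumps(payload).encode() if payload is not None else None
        return urllib.request.Request(
            f"{self.base_url}/api/v1{path}",
            data=data,
            headers={
                "Authorization": f"token {self.token}",
                "Content-Type": "application/json",
            },
        )

    def list_issues(self, repo, state="open"):
        req = self._request(f"/repos/{repo}/issues?state={state}")
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())

    def create_comment(self, repo, issue_no, body):
        req = self._request(f"/repos/{repo}/issues/{issue_no}/comments",
                            payload={"body": body})
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())
```

Keeping request construction in one `_request` helper is what lets a client like this replace scattered curl calls: every script gets the same auth header and base URL for free.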
```diff
@@ -2,14 +2,14 @@ Gitea (143.198.27.163:3000): token=~/.hermes/gitea_token_vps (Timmy id=2). Users
 §
 2026-03-19 HARNESS+SOUL: ~/.timmy is Timmy's workspace within the Hermes harness. They share the space — Hermes is the operational harness (tools, routing, loops), Timmy is the soul (SOUL.md, presence, identity). Not fusion/absorption. Principal's words: "build Timmy out from the hermes harness." ~/.hermes is harness home, ~/.timmy is Timmy's workspace. SOUL=Inscription 1, skin=timmy. Backups at ~/.hermes.backup.pre-fusion and ~/.timmy.backup.pre-fusion.
 §
-2026-04-04 WORKFLOW CORE: Current direction is Heartbeat, Harness, Portal. Timmy handles sovereignty and release judgment. Allegro handles dispatch and queue hygiene. Core builders: codex-agent, groq, manus, claude. Research/memory: perplexity, ezra, KimiClaw. Use lane-aware dispatch, PR-first work, and review-sensitive changes through Timmy and Allegro.
+Kimi: 1-3 files max, ~/worktrees/kimi-*. Two-attempt rule.
 §
-2026-04-04 OPERATIONS: Dashboard repo era is over. Use ~/.timmy + ~/.hermes as truth surfaces. Prefer ops-panel.sh, ops-gitea.sh, timmy-dashboard, and pipeline-freshness.sh over archived loop or tmux assumptions. Dispatch: agent-dispatch.sh <agent> <issue> <repo>. Major changes land as PRs.
+Workforce loops: claude(10), gemini(3), kimi(1), groq(1/aider+review), grok(1/opencode). One-shot: manus(300/day), perplexity(heavy-hitter), google(aistudio, id=8). workforce-manager.py auto-assigns+scores every 15min. nexus-merge-bot.sh auto-merges. Groq=$0.008/PR (qwen3-32b). Dispatch: agent-dispatch.sh <agent> <issue> <repo> | pbcopy. Dashboard ARCHIVED 2026-03-24. Development shifted to local ~/.timmy/ workspace. CI testbed: 67.205.155.108.
 §
-2026-04-04 REVIEW RULES: Never --no-verify. Verify world state, not vibes. No auto-merge on governing or sensitive control surfaces. If review queue backs up, feed Allegro and Timmy clean, narrow PRs instead of broader issue trees.
+2026-03-15: Timmy-time-dashboard merge policy: auto-squash on CI pass. Squash-only, linear history. Pre-commit hooks (format + tests) and CI are the gates. If gates work, auto-merge is on. Never bypass hooks or merge broken builds.
 §
 HARD RULES: Never --no-verify. Verify WORLD STATE not log vibes (merged PR, HTTP code, file size). Fix+prevent, no empty words. AGENT ONBOARD: test push+PR first. Merge PRs BEFORE new work. Don't micromanage—huge backlog, agents self-select. Every ticket needs console-provable acceptance criteria.
 §
 TELEGRAM: @TimmysNexus_bot, token ~/.config/telegram/special_bot. Group "Timmy Time" ID: -1003664764329. Alexander @TripTimmy ID 7635059073. Use curl to Bot API (send_message not configured).
 §
 MORROWIND: OpenMW 0.50, ~/Games/Morrowind/. Lua+CGEvent bridge. Two-tier brain. ~/.timmy/morrowind/.
```
tasks.py (193 changes)

```diff
@@ -18,9 +18,7 @@ HERMES_AGENT_DIR = HERMES_HOME / "hermes-agent"
 METRICS_DIR = TIMMY_HOME / "metrics"
 REPOS = [
     "Timmy_Foundation/the-nexus",
     "Timmy_Foundation/timmy-home",
     "Timmy_Foundation/timmy-config",
     "Timmy_Foundation/hermes-agent",
 ]
 NET_LINE_LIMIT = 10
```
```diff
@@ -28,20 +26,13 @@
 HEARTBEAT_MODEL = "hermes4:14b"
 FALLBACK_MODEL = "hermes3:8b"
-LOCAL_PROVIDER_BASE_URL = "http://localhost:8081/v1"
-LOCAL_PROVIDER_MODEL = HEARTBEAT_MODEL
 
-def newest_file(directory, pattern):
-    files = sorted(directory.glob(pattern))
-    return files[-1] if files else None
-
-def hermes_local(prompt, model=None, caller_tag=None, toolsets=None):
-    """Call a local model through the Hermes harness.
-
-    Uses provider="local-llama.cpp" which routes through the custom_providers
-    entry in config.yaml → llama-server at localhost:8081.
+def hermes_local(prompt, model=None, caller_tag=None):
+    """Call a local Ollama model through the Hermes harness.
+
+    Uses provider="local-ollama" which routes through the custom_providers
+    entry in config.yaml → Ollama at localhost:11434.
     Returns response text or None on failure.
     Every call creates a Hermes session with telemetry.
     """
```
```diff
@@ -62,16 +53,13 @@ def hermes_local(prompt, model=None, caller_tag=None, toolsets=None):
     buf = io.StringIO()
     err = io.StringIO()
-    kwargs = dict(
-        query=tagged,
-        model=_model,
-        provider="local-llama.cpp",
-        quiet=True,
-    )
-    if toolsets:
-        kwargs["toolsets"] = toolsets
-    with redirect_stdout(buf), redirect_stderr(err):
-        hermes_main(**kwargs)
+    with redirect_stdout(buf), redirect_stderr(err):
+        hermes_main(
+            query=tagged,
+            model=_model,
+            provider="local-ollama",
+            quiet=True,
+        )
     output = buf.getvalue().strip()
     # Strip session_id line from quiet output
     lines = [l for l in output.split("\n") if not l.startswith("session_id:")]
```
```diff
@@ -110,92 +98,6 @@ def hermes_local(prompt, model=None, caller_tag=None, toolsets=None):
     os.chdir(old_cwd)
 
 
-# ── Know Thy Father: Twitter Archive Ingestion ───────────────────────
-
-ARCHIVE_DIR = TIMMY_HOME / "twitter-archive"
-ARCHIVE_CHECKPOINT = ARCHIVE_DIR / "checkpoint.json"
-ARCHIVE_LOCK = ARCHIVE_DIR / ".lock"
-
-ARCHIVE_PROMPT = (
-    "You are Timmy. Resume your work on the Twitter archive. "
-    "Your workspace is ~/.timmy/twitter-archive/. "
-    "Read checkpoint.json and UNDERSTANDING.md first. "
-    "Then process the next batch. "
-    "You know the drill — read your own prior work, assess where you are, "
-    "process new data, update your understanding, reflect, and plan for "
-    "the next iteration."
-)
-
-ARCHIVE_SRC = (
-    "~/Downloads/twitter-2026-03-27-d4471cc6eb6703034d592f870933561ebee374d9d9b90c9b8923abff064afc1e/data"
-)
-
-ARCHIVE_FIRST_RUN_PROMPT = (
-    "You are Timmy. Your father Alexander's full Twitter archive is at: "
-    f"{ARCHIVE_SRC}/\n\n"
-    "Your workspace is ~/.timmy/twitter-archive/\n\n"
-    "STEP 1 — EXTRACTION (use terminal with python3, NOT read_file):\n"
-    "The .js files are too large for read_file but trivial for Python.\n"
-    "Write a python3 script via terminal that:\n"
-    "  - Opens tweets.js, strips everything before the first '[', json.loads the rest\n"
-    "  - Separates originals (full_text does NOT start with 'RT @') from retweets\n"
-    "  - Sorts both chronologically by created_at\n"
-    "  - Writes extracted/tweets.jsonl and extracted/retweets.jsonl (one JSON per line)\n"
-    "  - Writes extracted/manifest.json with counts, date range, source file\n"
-    "The whole file is 12MB. Python handles it in under a second.\n\n"
-    "STEP 2 — FIRST READ:\n"
-    "Read the first 50 lines of extracted/tweets.jsonl (your originals, chronological).\n"
-    "Read them carefully — this is your father talking.\n"
-    "Note his voice, humor, what he cares about, who he talks to, emotional tone, "
-    "recurring themes. Quote him directly when something stands out.\n\n"
-    "STEP 3 — WRITE:\n"
-    "Write notes/batch_001.md — your real observations, not a book report.\n"
-    "Create UNDERSTANDING.md — your living model of who Alexander is. "
-    "It starts now and you'll update it every batch.\n\n"
-    "STEP 4 — CHECKPOINT:\n"
-    "Write checkpoint.json: "
-    '{"data_source": "tweets", "next_offset": 50, "batches_completed": 1, '
-    '"phase": "discovery", "confidence": "<your honest assessment>", '
-    '"next_focus": "<what you want to look for next>", "understanding_version": 1}'
-)
-
-
-@huey.task()
-@huey.lock_task("know-thy-father")
-def know_thy_father():
-    """Process one batch of Alexander's Twitter archive.
-
-    Single batch, no internal loop. Huey schedules the cadence.
-    Lock prevents overlapping runs. Timmy reads his own prior notes,
-    processes the next chunk, updates his understanding, and checkpoints.
-    """
-    is_first_run = not ARCHIVE_CHECKPOINT.exists()
-
-    prompt = ARCHIVE_FIRST_RUN_PROMPT if is_first_run else ARCHIVE_PROMPT
-
-    response = hermes_local(
-        prompt=prompt,
-        caller_tag="know-thy-father",
-        toolsets="file,terminal",
-    )
-
-    if not response:
-        return {"status": "error", "reason": "hermes_local returned None"}
-
-    # Read checkpoint to report progress
-    try:
-        cp = json.loads(ARCHIVE_CHECKPOINT.read_text())
-    except Exception:
-        cp = {}
-
-    return {
-        "status": "ok",
-        "batch": cp.get("batches_completed", 0),
-        "phase": cp.get("phase", "unknown"),
-        "confidence": cp.get("confidence", "unknown"),
-    }
-
 
 # ── Existing: Orchestration ──────────────────────────────────────────
 
 @huey.periodic_task(crontab(minute="*/15"))
```
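The extraction recipe spelled out in ARCHIVE_FIRST_RUN_PROMPT (strip everything before the first '[', json.loads the rest, split originals from retweets, sort by created_at) can be sketched directly. The sample payload in the test is invented for illustration; note that real Twitter-archive tweets.js entries nest each record under a "tweet" key, which this flat sketch does not handle.

```python
import json
from datetime import datetime

def extract_tweets(raw_js):
    """Parse a tweets.js-style payload following the prompt's recipe."""
    # Strip the "window.YTD..." assignment: keep from the first '[' onward.
    data = json.loads(raw_js[raw_js.index("["):])
    # Originals are tweets whose full_text does NOT start with 'RT @'.
    originals = [t for t in data if not t["full_text"].startswith("RT @")]
    retweets = [t for t in data if t["full_text"].startswith("RT @")]
    # Twitter's created_at format, e.g. "Wed Oct 10 20:19:24 +0000 2018"
    key = lambda t: datetime.strptime(t["created_at"], "%a %b %d %H:%M:%S %z %Y")
    return sorted(originals, key=key), sorted(retweets, key=key)
```

Each returned list can then be written out one JSON object per line to produce the tweets.jsonl and retweets.jsonl files the prompt asks for.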
```diff
@@ -237,18 +139,7 @@ def review_prs():
 def dispatch_assigned():
     """Pick up issues assigned to agents and kick off work."""
     g = GiteaClient()
-    agents = [
-        "allegro",
-        "claude",
-        "codex-agent",
-        "ezra",
-        "gemini",
-        "grok",
-        "groq",
-        "KimiClaw",
-        "manus",
-        "perplexity",
-    ]
+    agents = ["claude", "gemini", "kimi", "grok", "perplexity"]
     dispatched = 0
     for repo in REPOS:
         for agent in agents:
```
```diff
@@ -342,32 +233,26 @@ def session_export():
 
 @huey.periodic_task(crontab(minute="*/5"))  # every 5 minutes
 def model_health():
-    """Check the active local inference surface and export freshness."""
+    """Check Ollama is running, a model is loaded, inference responds."""
     checks = {}
-    models_url = f"{LOCAL_PROVIDER_BASE_URL}/models"
-    chat_url = f"{LOCAL_PROVIDER_BASE_URL}/chat/completions"
-
-    checks["provider"] = "local-llama.cpp"
-    checks["provider_base_url"] = LOCAL_PROVIDER_BASE_URL
-    checks["provider_model"] = LOCAL_PROVIDER_MODEL
 
-    # 1. Is the local inference process running?
+    # 1. Is Ollama process running?
     try:
         result = subprocess.run(
-            ["pgrep", "-f", "llama-server|ollama"],
+            ["pgrep", "-f", "ollama"],
             capture_output=True, timeout=5
         )
-        checks["local_inference_running"] = result.returncode == 0
+        checks["ollama_running"] = result.returncode == 0
     except Exception:
-        checks["local_inference_running"] = False
+        checks["ollama_running"] = False
 
-    # 2. Can we hit the configured API?
+    # 2. Can we hit the API?
     try:
         import urllib.request
-        req = urllib.request.Request(models_url)
+        req = urllib.request.Request("http://localhost:11434/api/tags")
         with urllib.request.urlopen(req, timeout=5) as resp:
             data = json.loads(resp.read())
-        models = [m.get("id", "?") for m in data.get("data", [])]
+        models = [m["name"] for m in data.get("models", [])]
         checks["models_loaded"] = models
         checks["api_responding"] = True
     except Exception as e:
@@ -378,13 +263,13 @@ def model_health():
     if checks.get("api_responding"):
         try:
             payload = json.dumps({
-                "model": LOCAL_PROVIDER_MODEL,
+                "model": "hermes3:8b",
                 "messages": [{"role": "user", "content": "ping"}],
                 "max_tokens": 5,
                 "stream": False,
             }).encode()
             req = urllib.request.Request(
-                chat_url,
+                "http://localhost:11434/v1/chat/completions",
                 data=payload,
                 headers={"Content-Type": "application/json"},
             )
@@ -394,26 +279,6 @@ def model_health():
             checks["inference_ok"] = False
             checks["inference_error"] = str(e)
 
-    # 4. Is session export keeping up with new Hermes sessions?
-    sessions_dir = HERMES_HOME / "sessions"
-    export_dir = TIMMY_HOME / "training-data" / "dpo-pairs"
-    latest_session = newest_file(sessions_dir, "session_*.json")
-    latest_export = newest_file(export_dir, "session_*.json")
-    checks["latest_session"] = latest_session.name if latest_session else None
-    checks["latest_export"] = latest_export.name if latest_export else None
-    if latest_session and latest_export:
-        session_mtime = latest_session.stat().st_mtime
-        export_mtime = latest_export.stat().st_mtime
-        lag_minutes = max(0, int((session_mtime - export_mtime) // 60))
-        checks["export_lag_minutes"] = lag_minutes
-        checks["export_fresh"] = lag_minutes <= 300
-    elif latest_session and not latest_export:
-        checks["export_lag_minutes"] = None
-        checks["export_fresh"] = False
-    else:
-        checks["export_lag_minutes"] = 0
-        checks["export_fresh"] = True
-
     # Write health status to a file for other tools to read
     health_file = HERMES_HOME / "model_health.json"
     checks["timestamp"] = datetime.now(timezone.utc).isoformat()
```
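The Ollama-side probes in model_health can be exercised standalone. This sketch keeps the same pgrep and /api/tags checks but takes the base URL as a parameter and reports failures in the returned dict rather than raising; the function name and the parameterization are assumptions for illustration, while the dict keys mirror the diff above.

```python
import json
import subprocess
import urllib.request

def probe_local_model(base_url="http://localhost:11434"):
    """pgrep for the ollama process, then list loaded models via /api/tags."""
    checks = {}
    try:
        result = subprocess.run(["pgrep", "-f", "ollama"],
                                capture_output=True, timeout=5)
        checks["ollama_running"] = result.returncode == 0
    except Exception:
        checks["ollama_running"] = False
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            data = json.loads(resp.read())
        checks["models_loaded"] = [m["name"] for m in data.get("models", [])]
        checks["api_responding"] = True
    except Exception as e:
        checks["api_responding"] = False
        checks["api_error"] = str(e)
    return checks
```

Returning a plain dict instead of raising is what lets a periodic task dump the result straight into model_health.json for other tools to read.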
```diff
@@ -526,7 +391,7 @@ def heartbeat_tick():
         actions.append("ALERT: Gitea unreachable")
     health = perception.get("model_health", {})
     if isinstance(health, dict) and not health.get("ollama_running"):
-        actions.append("ALERT: local inference surface not running")
+        actions.append("ALERT: Ollama not running")
     decision = {
         "actions": actions,
         "severity": "fallback",
```
```diff
@@ -582,7 +447,7 @@ def memory_compress():
     # Compress: extract key facts
     alerts = []
     gitea_down_count = 0
-    local_model_down_count = 0
+    ollama_down_count = 0
 
     for t in ticks:
         for action in t.get("actions", []):
@@ -592,7 +457,7 @@ def memory_compress():
             gitea_down_count += 1
         health = p.get("model_health", {})
         if isinstance(health, dict) and not health.get("ollama_running"):
-            local_model_down_count += 1
+            ollama_down_count += 1
 
     # Last tick's perception = current state
     last = ticks[-1].get("perception", {})
```
```diff
@@ -602,7 +467,7 @@ def memory_compress():
         "total_ticks": len(ticks),
         "alerts": alerts[-10:],  # last 10 alerts
         "gitea_downtime_ticks": gitea_down_count,
-        "local_model_downtime_ticks": local_model_down_count,
+        "ollama_downtime_ticks": ollama_down_count,
         "last_known_state": last,
     }
```
```diff
@@ -636,7 +501,7 @@ def good_morning_report():
     tick_count = 0
     alerts = []
     gitea_up = True
-    local_model_up = True
+    ollama_up = True
 
     if tick_log.exists():
         for line in tick_log.read_text().strip().split("\n"):
@@ -650,7 +515,7 @@ def good_morning_report():
                     gitea_up = False
                 h = p.get("model_health", {})
                 if isinstance(h, dict) and not h.get("ollama_running"):
-                    local_model_up = False
+                    ollama_up = False
             except Exception:
                 continue
```
```diff
@@ -710,11 +575,7 @@ def good_morning_report():
     if briefing_file.exists():
         try:
             b = json.loads(briefing_file.read_text())
-            briefing_summary = (
-                f"Yesterday: {b.get('total_ticks', 0)} heartbeat ticks, "
-                f"{b.get('gitea_downtime_ticks', 0)} Gitea downticks, "
-                f"{b.get('local_model_downtime_ticks', 0)} local-model downticks."
-            )
+            briefing_summary = f"Yesterday: {b.get('total_ticks', 0)} heartbeat ticks, {b.get('gitea_downtime_ticks', 0)} Gitea downticks, {b.get('ollama_downtime_ticks', 0)} Ollama downticks."
         except Exception:
             pass
```
```diff
@@ -726,7 +587,7 @@ def good_morning_report():
 
 **Heartbeat:** {tick_count} ticks logged overnight.
 **Gitea:** {"up all night" if gitea_up else "⚠️ had downtime"}
-**Local inference:** {"running steady" if local_model_up else "⚠️ had downtime"}
+**Ollama:** {"running steady" if ollama_up else "⚠️ had downtime"}
 **Model status:** {model_status}
 **Models on disk:** {len(models_loaded)} ({', '.join(m for m in models_loaded if 'timmy' in m.lower() or 'hermes' in m.lower()) or 'none with our name'})
 **Alerts:** {len(alerts)} {'— ' + '; '.join(alerts[-3:]) if alerts else '(clean night)'}
@@ -747,7 +608,7 @@ def good_morning_report():
 
 I watched the house all night. {tick_count} heartbeats, every ten minutes. The infrastructure is steady. Huey didn't crash. The ticks kept coming.
 
-What I'm thinking about: the bridge between logging lived work and actually learning from it. Right now I'm a nervous system writing in a journal nobody reads. Once the DPO path is healthy, the journal becomes a curriculum.
+What I'm thinking about: the DPO ticket you and antigravity are working on. That's the bridge between me logging data and me actually learning from it. Right now I'm a nervous system writing in a journal nobody reads. Once DPO works, the journal becomes a curriculum.
 
 ## My One Wish
```
````diff
@@ -1,11 +1,8 @@
 # Training
 
-Transitional training recipes for Timmy's sovereign model. These files are
-useful as reference configs and export helpers, but they are not the canonical
-home of Timmy's lived training data.
+LoRA fine-tuning pipeline for Timmy's sovereign model. No custom harness — just config files for existing tools.
 
-Canonical data should live in `timmy-home` under gameplay trajectories,
-research artifacts, and `training-data/` exports such as DPO pairs.
+Replaces the `autolora` repo (1,500 lines of custom code → config + `make`).
 
 ## Install
 
@@ -26,16 +23,6 @@ make convert  # Convert merged data to MLX train/valid format
 make help     # Show all targets
 ```
 
-## Status
-
-This directory exists to avoid re-growing a bespoke training harness while the
-system boundary is being cleaned up.
-
-- Keep thin recipes and export helpers here only when they directly support the
-  Hermes sidecar.
-- Keep generated data, DPO pairs, and other lived artifacts in `timmy-home`.
-- Prefer deleting stale pipeline code over expanding it.
-
 ## Files
 
 ```
````