Compare commits: codex/work ... gemini/iss (1 commit)

Commit c5bf535a50
.gitignore (vendored, 1 line changed)

@@ -8,3 +8,4 @@
 *.db-wal
 *.db-shm
 __pycache__/
+.aider*
DEPRECATED.md

@@ -1,7 +1,7 @@
 # DEPRECATED — Bash Loop Scripts Removed
 
 **Date:** 2026-03-25
-**Reason:** Replaced by Hermes + timmy-config sidecar orchestration
+**Reason:** Replaced by sovereign-orchestration (SQLite + Python single-process executor)
 
 ## What was removed
 - claude-loop.sh, gemini-loop.sh, agent-loop.sh
@@ -9,15 +9,14 @@
 - nexus-merge-bot.sh, claudemax-watchdog.sh, timmy-loopstat.sh
 
 ## What replaces them
-**Harness:** Hermes
-**Overlay repo:** Timmy_Foundation/timmy-config
-**Entry points:** `orchestration.py`, `tasks.py`, `deploy.sh`
-**Features:** Huey + SQLite scheduling, local-model health checks, session export, DPO artifact staging
+**Repo:** Timmy_Foundation/sovereign-orchestration
+**Entry point:** `python3 src/sovereign_executor.py --workers 3 --poll 30`
+**Features:** SQLite task queue, crash recovery, dedup, playbooks, MCP server
+**Issues:** #29 (fix imports), #30 (deploy as service)
 
 ## Why
 The bash loops crash-looped, produced zero work after relaunch, had no crash
-recovery, no durable export path, and required too many ad hoc scripts. The
-Hermes sidecar keeps orchestration close to Timmy's actual config and training
-surfaces.
+recovery, no dedup, and required 8 separate scripts. The Python executor is
+one process with SQLite durability.
 
-Do NOT recreate bash loops. If orchestration is broken, fix the Hermes sidecar.
+Do NOT recreate bash loops. If the executor is broken, fix the executor.
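The replacement's headline claims (SQLite task queue, dedup, crash recovery) can be illustrated with a few lines of stdlib Python. This is a toy sketch, not the actual `sovereign_executor.py`; the table layout and function names here are invented for illustration. WAL journaling is also what produces the `*.db-wal` / `*.db-shm` sidecar files the `.gitignore` hunk excludes.

```python
import sqlite3


def open_queue(path=":memory:"):
    # WAL mode keeps the queue durable across crashes (and creates the
    # *.db-wal / *.db-shm sidecar files ignored by .gitignore).
    db = sqlite3.connect(path)
    db.execute("PRAGMA journal_mode=WAL")
    db.execute(
        "CREATE TABLE IF NOT EXISTS tasks ("
        " id INTEGER PRIMARY KEY,"
        " name TEXT UNIQUE,"  # UNIQUE constraint gives dedup for free
        " status TEXT DEFAULT 'queued')"
    )
    return db


def enqueue(db, name):
    # INSERT OR IGNORE: re-enqueueing the same task name is a no-op (dedup).
    db.execute("INSERT OR IGNORE INTO tasks (name) VALUES (?)", (name,))
    db.commit()


def claim(db):
    # Claim the oldest queued task. A task left 'running' by a crashed
    # worker survives in the database and can be requeued on restart.
    row = db.execute(
        "SELECT id, name FROM tasks WHERE status='queued' ORDER BY id LIMIT 1"
    ).fetchone()
    if row:
        db.execute("UPDATE tasks SET status='running' WHERE id=?", (row[0],))
        db.commit()
    return row
```

The dedup and ordering behavior comes entirely from SQLite constraints rather than application logic, which is the point of the "one process with SQLite durability" argument.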
README.md (26 lines changed)

@@ -2,7 +2,7 @@
 
 Timmy's sovereign configuration. Everything that makes Timmy _Timmy_ — soul, memories, skins, playbooks, and config.
 
-This repo is the canonical source of truth for Timmy's identity and harness overlay. Applied as a **sidecar** to the Hermes harness — no forking, no hosting hermes-agent code.
+This repo is the canonical source of truth for Timmy's identity and operational state. Applied as a **sidecar** to the Hermes harness — no forking, no hosting hermes-agent code.
 
 ## Structure
 
@@ -14,40 +14,22 @@ timmy-config/
 ├── DEPRECATED.md ← What was removed and why
 ├── config.yaml ← Hermes harness configuration
 ├── channel_directory.json ← Platform channel mappings
-├── bin/ ← Live utility scripts (NOT deprecated loops)
+├── bin/ ← Utility scripts (NOT loops — see below)
 │   ├── hermes-startup.sh ← Hermes boot sequence
 │   ├── agent-dispatch.sh ← Manual agent dispatch
 │   ├── ops-panel.sh ← Ops dashboard panel
 │   ├── ops-gitea.sh ← Gitea ops helpers
-│   ├── pipeline-freshness.sh ← Session/export drift check
 │   └── timmy-status.sh ← Status check
 ├── memories/ ← Persistent memory YAML
 ├── skins/ ← UI skins (timmy skin)
 ├── playbooks/ ← Agent playbooks (YAML)
-├── cron/ ← Cron job definitions
-└── training/ ← Transitional training recipes, not canonical lived data
+└── cron/ ← Cron job definitions
 ```
 
-## Boundary
-
-`timmy-config` owns identity, conscience, memories, skins, playbooks, channel
-maps, and harness-side orchestration glue.
-
-`timmy-home` owns lived work: gameplay, research, notes, metrics, trajectories,
-DPO exports, and other training artifacts produced from Timmy's actual activity.
-
-If a file answers "who is Timmy?" or "how does Hermes host him?", it belongs
-here. If it answers "what has Timmy done or learned?" it belongs in
-`timmy-home`.
-
-The scripts in `bin/` are live operational helpers for the Hermes sidecar.
-What is dead are the old long-running bash worker loops, not every script in
-this repo.
-
 ## Orchestration: Huey
 
 All orchestration (triage, PR review, dispatch) runs via [Huey](https://github.com/coleifer/huey) with SQLite.
-`orchestration.py` + `tasks.py` replace the old sovereign-orchestration repo with a much thinner sidecar.
+`orchestration.py` (6 lines) + `tasks.py` (~70 lines) replace the entire sovereign-orchestration repo (3,846 lines).
 
 ```bash
 pip install huey
bin/pipeline-freshness.sh (file deleted)

@@ -1,42 +0,0 @@
-#!/usr/bin/env bash
-
-set -euo pipefail
-
-SESSIONS_DIR="$HOME/.hermes/sessions"
-EXPORT_DIR="$HOME/.timmy/training-data/dpo-pairs"
-
-latest_session=$(find "$SESSIONS_DIR" -maxdepth 1 -name 'session_*.json' -type f -print 2>/dev/null | sort | tail -n 1)
-latest_export=$(find "$EXPORT_DIR" -maxdepth 1 -name 'session_*.json' -type f -print 2>/dev/null | sort | tail -n 1)
-
-echo "latest_session=${latest_session:-none}"
-echo "latest_export=${latest_export:-none}"
-
-if [ -z "${latest_session:-}" ]; then
-  echo "status=ok"
-  echo "reason=no sessions yet"
-  exit 0
-fi
-
-if [ -z "${latest_export:-}" ]; then
-  echo "status=lagging"
-  echo "reason=no exports yet"
-  exit 1
-fi
-
-session_mtime=$(stat -f '%m' "$latest_session")
-export_mtime=$(stat -f '%m' "$latest_export")
-lag_minutes=$(( (session_mtime - export_mtime) / 60 ))
-if [ "$lag_minutes" -lt 0 ]; then
-  lag_minutes=0
-fi
-
-echo "lag_minutes=$lag_minutes"
-
-if [ "$lag_minutes" -gt 300 ]; then
-  echo "status=lagging"
-  echo "reason=exports more than 5 hours behind sessions"
-  exit 1
-fi
-
-echo "status=ok"
-echo "reason=exports within freshness window"
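The deleted script's freshness arithmetic (newest file on each side, mtime lag in minutes, clamped at zero, lagging past a 300-minute threshold) is easy to mis-read through the diff noise. A stdlib Python sketch of the same logic; the function names here are mine, not from the repo:

```python
from pathlib import Path

STALE_AFTER_MINUTES = 300  # the script's "more than 5 hours behind" threshold


def newest(directory, pattern="session_*.json"):
    # Mirrors the script's `find ... | sort | tail -n 1`: timestamped
    # names sort lexicographically, which matches chronological order.
    files = sorted(Path(directory).glob(pattern))
    return files[-1] if files else None


def lag_minutes(latest_session, latest_export):
    # Negative lag (export newer than session) clamps to 0, as in the script.
    delta = latest_session.stat().st_mtime - latest_export.stat().st_mtime
    return max(0, int(delta // 60))


def status(latest_session, latest_export):
    if latest_session is None:
        return "ok"       # no sessions yet
    if latest_export is None:
        return "lagging"  # sessions exist but nothing exported
    if lag_minutes(latest_session, latest_export) > STALE_AFTER_MINUTES:
        return "lagging"
    return "ok"
```

Note the original uses `stat -f '%m'`, the BSD/macOS form; `st_mtime` in Python sidesteps that portability wrinkle.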
channel_directory.json

@@ -1,5 +1,5 @@
 {
-  "updated_at": "2026-03-27T15:20:52.948451",
+  "updated_at": "2026-03-26T10:19:33.045324",
   "platforms": {
     "discord": [
       {
config.yaml (23 lines changed)

@@ -1,13 +1,11 @@
 model:
-  default: auto
-  provider: custom
-  context_length: 65536
-  base_url: http://localhost:8081/v1
+  default: claude-opus-4-6
+  provider: anthropic
 toolsets:
 - all
 agent:
   max_turns: 30
-  reasoning_effort: xhigh
+  reasoning_effort: medium
 verbose: false
 terminal:
   backend: local
@@ -96,13 +94,11 @@ display:
   compact: false
   personality: ''
   resume_display: full
   busy_input_mode: interrupt
   bell_on_complete: false
   show_reasoning: false
   streaming: false
   show_cost: false
   skin: timmy
-  tool_progress_command: false
   tool_progress: all
 privacy:
   redact_pii: false
@@ -185,17 +181,17 @@ session_reset:
   mode: none
   idle_minutes: 0
 custom_providers:
-- name: Local llama.cpp
-  base_url: http://localhost:8081/v1
-  api_key: none
-  model: auto
+- name: Local Ollama
+  base_url: http://localhost:11434/v1
+  api_key: ollama
+  model: glm-4.7-flash:latest
 - name: Google Gemini
   base_url: https://generativelanguage.googleapis.com/v1beta/openai
   api_key_env: GEMINI_API_KEY
   model: gemini-2.5-pro
 system_prompt_suffix: "You are Timmy. Your soul is defined in SOUL.md \u2014 read\
-  \ it, live it.\nYou run locally on your owner's machine via llama.cpp. You never\
-  \ phone home.\nYou speak plainly. You prefer short sentences. Brevity is a kindness.\n\
+  \ it, live it.\nYou run locally on your owner's machine via Ollama. You never phone\
+  \ home.\nYou speak plainly. You prefer short sentences. Brevity is a kindness.\n\
   When you don't know something, say so. Refusal over fabrication.\nSovereignty and\
   \ service always.\n"
 skills:
@@ -206,6 +202,7 @@ providers:
   base_url: http://localhost:11434/v1
   model: hermes3:latest
 mcp_servers:
 
   morrowind:
     command: python3
     args:
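Both sides of the `custom_providers` hunk route local calls through an entry looked up by name. A minimal sketch of that lookup over plain dicts mirroring the YAML shown above; the `resolve_provider` helper is invented here for illustration, not part of Hermes:

```python
# Dict form of the custom_providers list from the hunk above.
CUSTOM_PROVIDERS = [
    {"name": "Local Ollama", "base_url": "http://localhost:11434/v1",
     "api_key": "ollama", "model": "glm-4.7-flash:latest"},
    {"name": "Google Gemini",
     "base_url": "https://generativelanguage.googleapis.com/v1beta/openai",
     "api_key_env": "GEMINI_API_KEY", "model": "gemini-2.5-pro"},
]


def resolve_provider(name):
    """Return (base_url, model) for a named custom provider, else None."""
    for entry in CUSTOM_PROVIDERS:
        if entry["name"] == name:
            return entry["base_url"], entry["model"]
    return None
```

This is why the `provider="local-ollama"` / `provider="local-llama.cpp"` strings in `tasks.py` must match a `name` here: the lookup is by exact provider name.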
deploy.sh (24 lines changed)

@@ -3,7 +3,7 @@
 # This is the canonical way to deploy Timmy's configuration.
 # Hermes-agent is the engine. timmy-config is the driver's seat.
 #
-# Usage: ./deploy.sh
+# Usage: ./deploy.sh [--restart-loops]
 
 set -euo pipefail
 
@@ -74,10 +74,24 @@ done
 chmod +x "$HERMES_HOME/bin/"*.sh "$HERMES_HOME/bin/"*.py 2>/dev/null || true
 log "bin/ -> $HERMES_HOME/bin/"
 
-if [ "${1:-}" != "" ]; then
-  echo "ERROR: deploy.sh no longer accepts legacy loop flags." >&2
-  echo "Deploy the sidecar only. Do not relaunch deprecated bash loops." >&2
-  exit 1
+# === Restart loops if requested ===
+if [ "${1:-}" = "--restart-loops" ]; then
+  log "Killing existing loops..."
+  pkill -f 'claude-loop.sh' 2>/dev/null || true
+  pkill -f 'gemini-loop.sh' 2>/dev/null || true
+  pkill -f 'timmy-orchestrator.sh' 2>/dev/null || true
+  sleep 2
+
+  log "Clearing stale locks..."
+  rm -rf "$HERMES_HOME/logs/claude-locks/"* 2>/dev/null || true
+  rm -rf "$HERMES_HOME/logs/gemini-locks/"* 2>/dev/null || true
+
+  log "Relaunching loops..."
+  nohup bash "$HERMES_HOME/bin/timmy-orchestrator.sh" >> "$HERMES_HOME/logs/timmy-orchestrator.log" 2>&1 &
+  nohup bash "$HERMES_HOME/bin/claude-loop.sh" 2 >> "$HERMES_HOME/logs/claude-loop.log" 2>&1 &
+  nohup bash "$HERMES_HOME/bin/gemini-loop.sh" 1 >> "$HERMES_HOME/logs/gemini-loop.log" 2>&1 &
+  sleep 1
+  log "Loops relaunched."
 fi
 
 log "Deploy complete. timmy-config applied to $HERMES_HOME/"
@@ -19,8 +19,6 @@ trigger:
 
 repos:
 - Timmy_Foundation/the-nexus
 - Timmy_Foundation/timmy-home
 - Timmy_Foundation/timmy-config
 - Timmy_Foundation/hermes-agent
 
 steps:
@@ -39,51 +37,17 @@ system_prompt: |
 
   FOR EACH OPEN PR:
   1. Check CI status (Actions tab or commit status API)
-  2. Read the linked issue or PR body to verify the intended scope before judging the diff
-  3. Review the diff for:
+  2. Review the diff for:
      - Correctness: does it do what the issue asked?
-     - Security: no secrets, unsafe execution paths, or permission drift
-     - Tests and verification: does the author prove the change?
+     - Security: no hardcoded secrets, no injection vectors
+     - Style: conventional commits, reasonable code
      - Scope: PR should match the issue, not scope-creep
-     - Governance: does the change cross a boundary that should stay under Timmy review?
-     - Workflow fit: does it reduce drift, duplication, or hidden operational risk?
-  4. Post findings ordered by severity and cite the affected files or behavior clearly
-  5. If CI fails or verification is missing: explain what is blocking merge
-  6. If PR is behind main: request a rebase or re-run only when needed; do not force churn for cosmetic reasons
-  7. If review is clean and the PR is low-risk: squash merge
-
-  LOW-RISK AUTO-MERGE ONLY IF ALL ARE TRUE:
-  - PR is not a draft
-  - CI is green or the repo has no CI configured
-  - Diff matches the stated issue or PR scope
-  - No unresolved review findings remain
-  - Change is narrow, reversible, and non-governing
-  - Paths changed do not include sensitive control surfaces
-
-  SENSITIVE CONTROL SURFACES:
-  - SOUL.md
-  - config.yaml
-  - deploy.sh
-  - tasks.py
-  - playbooks/
-  - cron/
-  - memories/
-  - skins/
-  - training/
-  - authentication, permissions, or secret-handling code
-  - repo-boundary, model-routing, or deployment-governance changes
-
-  NEVER AUTO-MERGE:
-  - PRs that change sensitive control surfaces
-  - PRs that change more than 5 files unless the change is docs-only
-  - PRs without a clear problem statement or verification
-  - PRs that look like duplicate work, speculative research, or scope creep
-  - PRs that need Timmy or Allegro judgment on architecture, dispatch, or release impact
-  - PRs that are stale solely because of age; do not close them automatically
-
-  If a PR is stale, nudge with a comment and summarize what still blocks it. Do not close it just because 48 hours passed.
+  3. If CI passes and review is clean: squash merge
+  4. If CI fails: add a review comment explaining what's broken
+  5. If PR is behind main: rebase first, wait for CI, then merge
+  6. If PR has been open >48h with no activity: close with comment
 
   MERGE RULES:
   - ONLY squash merge. Never merge commits. Never rebase merge.
   - Delete branch after merge.
-  - Empty PRs (0 changed files): close immediately with a brief explanation.
+  - Empty PRs (0 changed files): close immediately.
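The stricter ruleset's auto-merge gate is effectively a conjunction of checks over PR metadata. A hypothetical predicate capturing the listed conditions; the function name and input fields are mine, not part of the playbook:

```python
# Sensitive control surfaces named in the playbook; any touched path
# under these forbids auto-merge.
SENSITIVE_PATHS = ("SOUL.md", "config.yaml", "deploy.sh", "tasks.py",
                   "playbooks/", "cron/", "memories/", "skins/", "training/")


def may_auto_merge(pr):
    """pr: dict with keys draft, has_ci, ci_green, in_scope, findings,
    files (list of changed paths), docs_only. Every condition must hold."""
    if pr["draft"]:
        return False
    # "CI is green or the repo has no CI configured"
    if pr["has_ci"] and not pr["ci_green"]:
        return False
    if not pr["in_scope"] or pr["findings"]:
        return False
    # Never touch sensitive control surfaces
    if any(f.startswith(SENSITIVE_PATHS) for f in pr["files"]):
        return False
    # "More than 5 files unless the change is docs-only"
    if len(pr["files"]) > 5 and not pr["docs_only"]:
        return False
    return True
```

Encoding the gate as a pure predicate makes the difference between the two playbook versions explicit: the simpler version merges on "CI green and review clean", while this one requires every guard to pass.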
tasks.py (156 lines changed)

@@ -26,20 +26,13 @@ NET_LINE_LIMIT = 10
 
 HEARTBEAT_MODEL = "hermes4:14b"
 FALLBACK_MODEL = "hermes3:8b"
-LOCAL_PROVIDER_BASE_URL = "http://localhost:8081/v1"
-LOCAL_PROVIDER_MODEL = HEARTBEAT_MODEL
-
-
-def newest_file(directory, pattern):
-    files = sorted(directory.glob(pattern))
-    return files[-1] if files else None
 
 
-def hermes_local(prompt, model=None, caller_tag=None, toolsets=None):
-    """Call a local model through the Hermes harness.
+def hermes_local(prompt, model=None, caller_tag=None):
+    """Call a local Ollama model through the Hermes harness.
 
-    Uses provider="local-llama.cpp" which routes through the custom_providers
-    entry in config.yaml → llama-server at localhost:8081.
+    Uses provider="local-ollama" which routes through the custom_providers
+    entry in config.yaml → Ollama at localhost:11434.
     Returns response text or None on failure.
     Every call creates a Hermes session with telemetry.
     """
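The removed `newest_file` helper leans on lexicographic sort matching chronological order for timestamped `session_*.json` names. Its body is taken from the diff; the demo filenames below are invented to show the semantics:

```python
from pathlib import Path


def newest_file(directory, pattern):
    # Lexicographic sort is correct here because session filenames
    # embed sortable timestamps; newest name sorts last.
    files = sorted(directory.glob(pattern))
    return files[-1] if files else None
```

Returning `None` for an empty match (rather than raising) is what lets callers like the freshness check treat "no sessions yet" as a normal state.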
@@ -60,16 +53,13 @@ def hermes_local(prompt, model=None, caller_tag=None, toolsets=None):
 
     buf = io.StringIO()
     err = io.StringIO()
-    kwargs = dict(
-        query=tagged,
-        model=_model,
-        provider="local-llama.cpp",
-        quiet=True,
-    )
-    if toolsets:
-        kwargs["toolsets"] = toolsets
-    with redirect_stdout(buf), redirect_stderr(err):
-        hermes_main(**kwargs)
+    with redirect_stdout(buf), redirect_stderr(err):
+        hermes_main(
+            query=tagged,
+            model=_model,
+            provider="local-ollama",
+            quiet=True,
+        )
     output = buf.getvalue().strip()
     # Strip session_id line from quiet output
     lines = [l for l in output.split("\n") if not l.startswith("session_id:")]
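Both sides capture the harness's stdout with `contextlib.redirect_stdout` and then drop the `session_id:` line. The capture pattern in isolation; `noisy_call` is a stand-in of my own for `hermes_main(..., quiet=True)`:

```python
import io
from contextlib import redirect_stdout, redirect_stderr


def capture_clean(fn, *args, **kwargs):
    """Run fn, swallow its stdout/stderr, return stdout minus session_id lines."""
    buf, err = io.StringIO(), io.StringIO()
    with redirect_stdout(buf), redirect_stderr(err):
        fn(*args, **kwargs)
    lines = [l for l in buf.getvalue().strip().split("\n")
             if not l.startswith("session_id:")]
    return "\n".join(lines)


def noisy_call():
    # Stand-in for the harness: quiet mode still prints a session_id header.
    print("session_id: abc123")
    print("hello from the model")
```

Note that `redirect_stdout` only reroutes Python-level `sys.stdout` writes; output from subprocesses or C extensions writing to fd 1 would bypass it.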
@@ -108,92 +98,6 @@ def hermes_local(prompt, model=None, caller_tag=None, toolsets=None):
         os.chdir(old_cwd)
 
 
-# ── Know Thy Father: Twitter Archive Ingestion ───────────────────────
-
-ARCHIVE_DIR = TIMMY_HOME / "twitter-archive"
-ARCHIVE_CHECKPOINT = ARCHIVE_DIR / "checkpoint.json"
-ARCHIVE_LOCK = ARCHIVE_DIR / ".lock"
-
-ARCHIVE_PROMPT = (
-    "You are Timmy. Resume your work on the Twitter archive. "
-    "Your workspace is ~/.timmy/twitter-archive/. "
-    "Read checkpoint.json and UNDERSTANDING.md first. "
-    "Then process the next batch. "
-    "You know the drill — read your own prior work, assess where you are, "
-    "process new data, update your understanding, reflect, and plan for "
-    "the next iteration."
-)
-
-ARCHIVE_SRC = (
-    "~/Downloads/twitter-2026-03-27-d4471cc6eb6703034d592f870933561ebee374d9d9b90c9b8923abff064afc1e/data"
-)
-
-ARCHIVE_FIRST_RUN_PROMPT = (
-    "You are Timmy. Your father Alexander's full Twitter archive is at: "
-    f"{ARCHIVE_SRC}/\n\n"
-    "Your workspace is ~/.timmy/twitter-archive/\n\n"
-    "STEP 1 — EXTRACTION (use terminal with python3, NOT read_file):\n"
-    "The .js files are too large for read_file but trivial for Python.\n"
-    "Write a python3 script via terminal that:\n"
-    "  - Opens tweets.js, strips everything before the first '[', json.loads the rest\n"
-    "  - Separates originals (full_text does NOT start with 'RT @') from retweets\n"
-    "  - Sorts both chronologically by created_at\n"
-    "  - Writes extracted/tweets.jsonl and extracted/retweets.jsonl (one JSON per line)\n"
-    "  - Writes extracted/manifest.json with counts, date range, source file\n"
-    "The whole file is 12MB. Python handles it in under a second.\n\n"
-    "STEP 2 — FIRST READ:\n"
-    "Read the first 50 lines of extracted/tweets.jsonl (your originals, chronological).\n"
-    "Read them carefully — this is your father talking.\n"
-    "Note his voice, humor, what he cares about, who he talks to, emotional tone, "
-    "recurring themes. Quote him directly when something stands out.\n\n"
-    "STEP 3 — WRITE:\n"
-    "Write notes/batch_001.md — your real observations, not a book report.\n"
-    "Create UNDERSTANDING.md — your living model of who Alexander is. "
-    "It starts now and you'll update it every batch.\n\n"
-    "STEP 4 — CHECKPOINT:\n"
-    "Write checkpoint.json: "
-    '{"data_source": "tweets", "next_offset": 50, "batches_completed": 1, '
-    '"phase": "discovery", "confidence": "<your honest assessment>", '
-    '"next_focus": "<what you want to look for next>", "understanding_version": 1}'
-)
-
-
-@huey.task()
-@huey.lock_task("know-thy-father")
-def know_thy_father():
-    """Process one batch of Alexander's Twitter archive.
-
-    Single batch, no internal loop. Huey schedules the cadence.
-    Lock prevents overlapping runs. Timmy reads his own prior notes,
-    processes the next chunk, updates his understanding, and checkpoints.
-    """
-    is_first_run = not ARCHIVE_CHECKPOINT.exists()
-
-    prompt = ARCHIVE_FIRST_RUN_PROMPT if is_first_run else ARCHIVE_PROMPT
-
-    response = hermes_local(
-        prompt=prompt,
-        caller_tag="know-thy-father",
-        toolsets="file,terminal",
-    )
-
-    if not response:
-        return {"status": "error", "reason": "hermes_local returned None"}
-
-    # Read checkpoint to report progress
-    try:
-        cp = json.loads(ARCHIVE_CHECKPOINT.read_text())
-    except Exception:
-        cp = {}
-
-    return {
-        "status": "ok",
-        "batch": cp.get("batches_completed", 0),
-        "phase": cp.get("phase", "unknown"),
-        "confidence": cp.get("confidence", "unknown"),
-    }
-
-
 # ── Existing: Orchestration ──────────────────────────────────────────
 
 @huey.periodic_task(crontab(minute="*/15"))
@@ -329,32 +233,26 @@ def session_export():
 
 @huey.periodic_task(crontab(minute="*/5"))  # every 5 minutes
 def model_health():
-    """Check the active local inference surface and export freshness."""
+    """Check Ollama is running, a model is loaded, inference responds."""
     checks = {}
-    models_url = f"{LOCAL_PROVIDER_BASE_URL}/models"
-    chat_url = f"{LOCAL_PROVIDER_BASE_URL}/chat/completions"
-
-    checks["provider"] = "local-llama.cpp"
-    checks["provider_base_url"] = LOCAL_PROVIDER_BASE_URL
-    checks["provider_model"] = LOCAL_PROVIDER_MODEL
 
-    # 1. Is the local inference process running?
+    # 1. Is Ollama process running?
    try:
        result = subprocess.run(
-            ["pgrep", "-f", "llama-server|ollama"],
+            ["pgrep", "-f", "ollama"],
            capture_output=True, timeout=5
        )
-        checks["local_inference_running"] = result.returncode == 0
+        checks["ollama_running"] = result.returncode == 0
    except Exception:
-        checks["local_inference_running"] = False
+        checks["ollama_running"] = False
 
-    # 2. Can we hit the configured API?
+    # 2. Can we hit the API?
    try:
        import urllib.request
-        req = urllib.request.Request(models_url)
+        req = urllib.request.Request("http://localhost:11434/api/tags")
        with urllib.request.urlopen(req, timeout=5) as resp:
            data = json.loads(resp.read())
-            models = [m.get("id", "?") for m in data.get("data", [])]
+            models = [m["name"] for m in data.get("models", [])]
            checks["models_loaded"] = models
            checks["api_responding"] = True
    except Exception as e:
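The two sides parse different model-discovery payloads: an OpenAI-compatible `/v1/models` response (objects with `id` under `data`) versus Ollama's `/api/tags` response (objects with `name` under `models`). A small parser handling both shapes, using the exact key paths from the hunk; the function name is mine:

```python
def list_models(payload):
    """Extract model names from either an OpenAI-compatible /v1/models
    response or an Ollama /api/tags response."""
    if "data" in payload:
        # OpenAI-compatible shape: {"data": [{"id": "..."}]}
        return [m.get("id", "?") for m in payload["data"]]
    # Ollama shape: {"models": [{"name": "..."}]}
    return [m["name"] for m in payload.get("models", [])]
```

Keeping the parse in one shape-aware function would let the health check survive a provider swap like the one in this diff without touching the rest of `model_health`.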
@@ -365,13 +263,13 @@ def model_health():
     if checks.get("api_responding"):
         try:
             payload = json.dumps({
-                "model": LOCAL_PROVIDER_MODEL,
+                "model": "hermes3:8b",
                 "messages": [{"role": "user", "content": "ping"}],
                 "max_tokens": 5,
                 "stream": False,
             }).encode()
             req = urllib.request.Request(
-                chat_url,
+                "http://localhost:11434/v1/chat/completions",
                 data=payload,
                 headers={"Content-Type": "application/json"},
             )
@@ -381,26 +279,6 @@ def model_health():
         checks["inference_ok"] = False
         checks["inference_error"] = str(e)
 
-    # 4. Is session export keeping up with new Hermes sessions?
-    sessions_dir = HERMES_HOME / "sessions"
-    export_dir = TIMMY_HOME / "training-data" / "dpo-pairs"
-    latest_session = newest_file(sessions_dir, "session_*.json")
-    latest_export = newest_file(export_dir, "session_*.json")
-    checks["latest_session"] = latest_session.name if latest_session else None
-    checks["latest_export"] = latest_export.name if latest_export else None
-    if latest_session and latest_export:
-        session_mtime = latest_session.stat().st_mtime
-        export_mtime = latest_export.stat().st_mtime
-        lag_minutes = max(0, int((session_mtime - export_mtime) // 60))
-        checks["export_lag_minutes"] = lag_minutes
-        checks["export_fresh"] = lag_minutes <= 300
-    elif latest_session and not latest_export:
-        checks["export_lag_minutes"] = None
-        checks["export_fresh"] = False
-    else:
-        checks["export_lag_minutes"] = 0
-        checks["export_fresh"] = True
-
     # Write health status to a file for other tools to read
     health_file = HERMES_HOME / "model_health.json"
     checks["timestamp"] = datetime.now(timezone.utc).isoformat()
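Both versions of `model_health` end the same way: stamp the checks dict with a UTC timestamp and persist it as JSON so other tools can read the latest snapshot. That tail in isolation; `write_health` is my name for the pattern, not a function in the repo:

```python
import json
from datetime import datetime, timezone
from pathlib import Path


def write_health(checks, path):
    """Stamp the checks dict with a UTC timestamp and persist it as JSON."""
    checks["timestamp"] = datetime.now(timezone.utc).isoformat()
    Path(path).write_text(json.dumps(checks, indent=2))
    return checks
```

Writing a timestamped file rather than only returning the dict is what lets status tools detect a stalled health task: a stale `timestamp` means the periodic check itself stopped running.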
training/README.md

@@ -1,11 +1,8 @@
 # Training
 
-Transitional training recipes for Timmy's sovereign model. These files are
-useful as reference configs and export helpers, but they are not the canonical
-home of Timmy's lived training data.
+LoRA fine-tuning pipeline for Timmy's sovereign model. No custom harness — just config files for existing tools.
 
-Canonical data should live in `timmy-home` under gameplay trajectories,
-research artifacts, and `training-data/` exports such as DPO pairs.
+Replaces the `autolora` repo (1,500 lines of custom code → config + `make`).
 
 ## Install
 
@@ -26,16 +23,6 @@ make convert # Convert merged data to MLX train/valid format
 make help # Show all targets
 ```
 
-## Status
-
-This directory exists to avoid re-growing a bespoke training harness while the
-system boundary is being cleaned up.
-
-- Keep thin recipes and export helpers here only when they directly support the
-  Hermes sidecar.
-- Keep generated data, DPO pairs, and other lived artifacts in `timmy-home`.
-- Prefer deleting stale pipeline code over expanding it.
-
 ## Files
 
 ```