Compare commits
17 Commits
codex/twit...gemini/orc

| Author | SHA1 | Date |
|---|---|---|
| | 118ca5fcbd | |
| | 877425bde4 | |
| | 34e01f0986 | |
| | d955d2b9f1 | |
| | c8003c28ba | |
| | 0b77282831 | |
| | f263156cf1 | |
| | 0eaf0b3d0f | |
| | 53ffca38a1 | |
| | fd26354678 | |
| | c9b6869d9f | |
| | 7f912b7662 | |
| | 4042a23441 | |
| | 8f10b5fc92 | |
| | fbd1b9e88f | |
| | ea38041514 | |
| | 579a775a0a | |
57	CONTRIBUTING.md	Normal file
@@ -0,0 +1,57 @@
# Contributing to timmy-config

## Proof Standard

This is a hard rule.

- visual changes require screenshot proof
- do not commit screenshots or binary media to Gitea backup unless explicitly required
- CLI/verifiable changes must cite the exact command output, log path, or world-state proof showing acceptance criteria were met
- config-only changes are not fully accepted when the real acceptance bar is live runtime behavior
- no proof, no merge

## How to satisfy the rule

### Visual changes

Examples:

- skin updates
- terminal UI layout changes
- browser-facing output
- dashboard/panel changes

Required proof:

- attach screenshot proof to the PR or issue discussion
- keep the screenshot outside the repo unless explicitly asked to commit it
- name what the screenshot proves

### CLI / harness / operational changes

Examples:

- scripts
- config wiring
- heartbeat behavior
- model routing
- export pipelines

Required proof:

- cite the exact command used
- paste the relevant output, or
- cite the exact log path / world-state artifact that proves the change

Good:

- `python3 -m pytest tests/test_x.py -q` → `2 passed`
- `~/.timmy/timmy-config/logs/huey.log`
- `~/.hermes/model_health.json`

Bad:

- "looks right"
- "compiled"
- "should work now"

## Default merge gate

Every PR should make it obvious:

1. what changed
2. what acceptance criteria were targeted
3. what evidence proves those criteria were met

If that evidence is missing, the PR is not done.
11	README.md
@@ -17,6 +17,7 @@ timmy-config/
├── bin/ ← Live utility scripts (NOT deprecated loops)
│   ├── hermes-startup.sh ← Hermes boot sequence
│   ├── agent-dispatch.sh ← Manual agent dispatch
│   ├── deploy-allegro-house.sh ← Bootstraps the remote Allegro wizard house
│   ├── ops-panel.sh ← Ops dashboard panel
│   ├── ops-gitea.sh ← Gitea ops helpers
│   ├── pipeline-freshness.sh ← Session/export drift check
@@ -25,6 +26,7 @@ timmy-config/
├── skins/ ← UI skins (timmy skin)
├── playbooks/ ← Agent playbooks (YAML)
├── cron/ ← Cron job definitions
├── wizards/ ← Remote wizard-house templates + units
└── training/ ← Transitional training recipes, not canonical lived data
```

@@ -54,6 +56,15 @@ pip install huey
huey_consumer.py tasks.huey -w 2 -k thread
```

## Proof Standard

This repo uses a hard proof rule for merges.

- visual changes require screenshot proof
- CLI/verifiable changes must cite logs, command output, or world-state proof
- screenshots/media stay out of Gitea backup unless explicitly required
- see `CONTRIBUTING.md` for the merge gate

## Deploy

```bash
32	bin/deploy-allegro-house.sh	Executable file
@@ -0,0 +1,32 @@
#!/usr/bin/env bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
REPO_DIR="$(cd "$SCRIPT_DIR/.." && pwd)"
TARGET="${1:-root@167.99.126.228}"
HERMES_REPO_URL="${HERMES_REPO_URL:-https://github.com/NousResearch/hermes-agent.git}"
KIMI_API_KEY="${KIMI_API_KEY:-}"

if [[ -z "$KIMI_API_KEY" && -f "$HOME/.config/kimi/api_key" ]]; then
  KIMI_API_KEY="$(tr -d '\n' < "$HOME/.config/kimi/api_key")"
fi

if [[ -z "$KIMI_API_KEY" ]]; then
  echo "KIMI_API_KEY is required (env or ~/.config/kimi/api_key)" >&2
  exit 1
fi

ssh "$TARGET" 'apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y git python3 python3-venv python3-pip curl ca-certificates'
ssh "$TARGET" 'mkdir -p /root/wizards/allegro/home /root/wizards/allegro/hermes-agent'

ssh "$TARGET" "if [ ! -d /root/wizards/allegro/hermes-agent/.git ]; then git clone '$HERMES_REPO_URL' /root/wizards/allegro/hermes-agent; fi"
ssh "$TARGET" 'cd /root/wizards/allegro/hermes-agent && python3 -m venv .venv && .venv/bin/pip install --upgrade pip setuptools wheel && .venv/bin/pip install -e .'

ssh "$TARGET" "cat > /root/wizards/allegro/home/config.yaml" < "$REPO_DIR/wizards/allegro/config.yaml"
ssh "$TARGET" "cat > /root/wizards/allegro/home/SOUL.md" < "$REPO_DIR/SOUL.md"
ssh "$TARGET" "cat > /root/wizards/allegro/home/.env <<'EOF'
KIMI_API_KEY=$KIMI_API_KEY
EOF"
ssh "$TARGET" "cat > /etc/systemd/system/hermes-allegro.service" < "$REPO_DIR/wizards/allegro/hermes-allegro.service"

ssh "$TARGET" 'chmod 600 /root/wizards/allegro/home/.env && systemctl daemon-reload && systemctl enable --now hermes-allegro.service && systemctl restart hermes-allegro.service && systemctl is-active hermes-allegro.service && curl -fsS http://127.0.0.1:8645/health'
@@ -9,6 +9,7 @@ Usage:

import json
import os
import sqlite3
import subprocess
import sys
import time
@@ -16,6 +17,12 @@ import urllib.request
from datetime import datetime, timezone, timedelta
from pathlib import Path

REPO_ROOT = Path(__file__).resolve().parent.parent
if str(REPO_ROOT) not in sys.path:
    sys.path.insert(0, str(REPO_ROOT))

from metrics_helpers import summarize_local_metrics, summarize_session_rows

HERMES_HOME = Path.home() / ".hermes"
TIMMY_HOME = Path.home() / ".timmy"
METRICS_DIR = TIMMY_HOME / "metrics"
@@ -60,6 +67,30 @@ def get_hermes_sessions():
    return []


def get_session_rows(hours=24):
    state_db = HERMES_HOME / "state.db"
    if not state_db.exists():
        return []
    cutoff = time.time() - (hours * 3600)
    try:
        conn = sqlite3.connect(str(state_db))
        rows = conn.execute(
            """
            SELECT model, source, COUNT(*) as sessions,
                   SUM(message_count) as msgs,
                   SUM(tool_call_count) as tools
            FROM sessions
            WHERE started_at > ? AND model IS NOT NULL AND model != ''
            GROUP BY model, source
            """,
            (cutoff,),
        ).fetchall()
        conn.close()
        return rows
    except Exception:
        return []


def get_heartbeat_ticks(date_str=None):
    if not date_str:
        date_str = datetime.now().strftime("%Y%m%d")
@@ -130,6 +161,9 @@ def render(hours=24):
    ticks = get_heartbeat_ticks()
    metrics = get_local_metrics(hours)
    sessions = get_hermes_sessions()
    session_rows = get_session_rows(hours)
    local_summary = summarize_local_metrics(metrics)
    session_summary = summarize_session_rows(session_rows)

    loaded_names = {m.get("name", "") for m in loaded}
    now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
@@ -159,28 +193,18 @@ def render(hours=24):
    print(f"\n  {BOLD}LOCAL INFERENCE ({len(metrics)} calls, last {hours}h){RST}")
    print(f"  {DIM}{'-' * 55}{RST}")
    if metrics:
        by_caller = {}
        for r in metrics:
            caller = r.get("caller", "unknown")
            if caller not in by_caller:
                by_caller[caller] = {"count": 0, "success": 0, "errors": 0}
            by_caller[caller]["count"] += 1
            if r.get("success"):
                by_caller[caller]["success"] += 1
            else:
                by_caller[caller]["errors"] += 1
        for caller, stats in by_caller.items():
            err = f" {RED}err:{stats['errors']}{RST}" if stats["errors"] else ""
            print(f"    {caller:25s} calls:{stats['count']:4d} "
                  f"{GREEN}ok:{stats['success']}{RST}{err}")
        print(f"  Tokens: {local_summary['input_tokens']} in | {local_summary['output_tokens']} out | {local_summary['total_tokens']} total")
        if local_summary.get('avg_latency_s') is not None:
            print(f"  Avg latency: {local_summary['avg_latency_s']:.2f}s")
        if local_summary.get('avg_tokens_per_second') is not None:
            print(f"  Avg throughput: {GREEN}{local_summary['avg_tokens_per_second']:.2f} tok/s{RST}")
        for caller, stats in sorted(local_summary['by_caller'].items()):
            err = f" {RED}err:{stats['failed_calls']}{RST}" if stats['failed_calls'] else ""
            print(f"    {caller:25s} calls:{stats['calls']:4d} tokens:{stats['total_tokens']:5d} {GREEN}ok:{stats['successful_calls']}{RST}{err}")

        by_model = {}
        for r in metrics:
            model = r.get("model", "unknown")
            by_model[model] = by_model.get(model, 0) + 1
        print(f"\n  {DIM}Models used:{RST}")
        for model, count in sorted(by_model.items(), key=lambda x: -x[1]):
            print(f"    {model:30s} {count} calls")
        for model, stats in sorted(local_summary['by_model'].items(), key=lambda x: -x[1]['calls']):
            print(f"    {model:30s} {stats['calls']} calls  {stats['total_tokens']} tok")
    else:
        print(f"  {DIM}(no local calls recorded yet){RST}")

@@ -211,15 +235,18 @@ def render(hours=24):
    else:
        print(f"  {DIM}(no ticks today){RST}")

    # ── HERMES SESSIONS ──
    local_sessions = [s for s in sessions
                      if "localhost:11434" in str(s.get("base_url", ""))]
    # ── HERMES SESSIONS / SOVEREIGNTY LOAD ──
    local_sessions = [s for s in sessions if "localhost:11434" in str(s.get("base_url", ""))]
    cloud_sessions = [s for s in sessions if s not in local_sessions]
    print(f"\n  {BOLD}HERMES SESSIONS{RST}")
    print(f"\n  {BOLD}HERMES SESSIONS / SOVEREIGNTY LOAD{RST}")
    print(f"  {DIM}{'-' * 55}{RST}")
    print(f"  Total: {len(sessions)} | "
          f"{GREEN}Local: {len(local_sessions)}{RST} | "
          f"{YELLOW}Cloud: {len(cloud_sessions)}{RST}")
    print(f"  Session cache: {len(sessions)} total | {GREEN}{len(local_sessions)} local{RST} | {YELLOW}{len(cloud_sessions)} cloud{RST}")
    if session_rows:
        print(f"  Session DB: {session_summary['total_sessions']} total | {GREEN}{session_summary['local_sessions']} local{RST} | {YELLOW}{session_summary['cloud_sessions']} cloud{RST}")
        print(f"  Token est: {GREEN}{session_summary['local_est_tokens']} local{RST} | {YELLOW}{session_summary['cloud_est_tokens']} cloud{RST}")
        print(f"  Est cloud cost: ${session_summary['cloud_est_cost_usd']:.4f}")
    else:
        print(f"  {DIM}(no session-db stats available){RST}")

    # ── ACTIVE LOOPS ──
    print(f"\n  {BOLD}ACTIVE LOOPS{RST}")

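As a sanity check, the `get_session_rows` aggregation above can be exercised against a throwaway in-memory SQLite database. The minimal `sessions` schema below is an assumption for illustration only, not the real Hermes `state.db` schema:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE sessions (
        model TEXT, source TEXT, message_count INTEGER,
        tool_call_count INTEGER, started_at REAL)"""
)
now = time.time()
conn.executemany(
    "INSERT INTO sessions VALUES (?, ?, ?, ?, ?)",
    [
        ("hermes4:14b", "heartbeat", 4, 1, now - 60),
        ("hermes4:14b", "heartbeat", 6, 2, now - 120),
        ("claude-sonnet-4-6", "chat", 10, 0, now - 200_000),  # older than 24h
    ],
)
cutoff = now - 24 * 3600
# Same grouped query run by get_session_rows
rows = conn.execute(
    """
    SELECT model, source, COUNT(*) as sessions,
           SUM(message_count) as msgs,
           SUM(tool_call_count) as tools
    FROM sessions
    WHERE started_at > ? AND model IS NOT NULL AND model != ''
    GROUP BY model, source
    """,
    (cutoff,),
).fetchall()
print(rows)  # [('hermes4:14b', 'heartbeat', 2, 10, 3)]
```

The stale cloud session falls outside the 24-hour cutoff, so only the two local rows are aggregated.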
@@ -1,5 +1,5 @@
{
  "updated_at": "2026-03-27T15:20:52.948451",
  "updated_at": "2026-03-28T09:54:34.822062",
  "platforms": {
    "discord": [
      {
@@ -1,5 +1,5 @@
model:
  default: auto
  default: hermes4:14b
  provider: custom
  context_length: 65536
  base_url: http://localhost:8081/v1
@@ -188,7 +188,7 @@ custom_providers:
  - name: Local llama.cpp
    base_url: http://localhost:8081/v1
    api_key: none
    model: auto
    model: hermes4:14b
  - name: Google Gemini
    base_url: https://generativelanguage.googleapis.com/v1beta/openai
    api_key_env: GEMINI_API_KEY
44	docs/allegro-wizard-house.md	Normal file
@@ -0,0 +1,44 @@
# Allegro wizard house

Purpose:
- stand up the third wizard house as a Kimi-backed coding worker
- keep Hermes as the durable harness
- treat OpenClaw as optional shell frontage, not the bones

Local proof already achieved:

```bash
HERMES_HOME=$HOME/.timmy/wizards/allegro/home \
hermes doctor

HERMES_HOME=$HOME/.timmy/wizards/allegro/home \
hermes chat -Q --provider kimi-coding -m kimi-for-coding \
  -q "Reply with exactly: ALLEGRO KIMI ONLINE"
```

Observed proof:
- Kimi / Moonshot API check passed in `hermes doctor`
- chat returned exactly `ALLEGRO KIMI ONLINE`

Repo assets:
- `wizards/allegro/config.yaml`
- `wizards/allegro/hermes-allegro.service`
- `bin/deploy-allegro-house.sh`

Remote target:
- host: `167.99.126.228`
- house root: `/root/wizards/allegro`
- `HERMES_HOME`: `/root/wizards/allegro/home`
- api health: `http://127.0.0.1:8645/health`

Deploy command:

```bash
cd ~/.timmy/timmy-config
bin/deploy-allegro-house.sh root@167.99.126.228
```

Important nuance:
- the Hermes/Kimi lane is the proven path
- direct embedded OpenClaw Kimi model routing was not yet reliable locally
- so the remote deployment keeps the minimal, proven architecture: Hermes house first
@@ -521,8 +521,17 @@ class GiteaClient:
        return result

    def find_agent_issues(self, repo: str, agent: str, limit: int = 50) -> list[Issue]:
        """Find open issues assigned to a specific agent."""
        return self.list_issues(repo, state="open", assignee=agent, limit=limit)
        """Find open issues assigned to a specific agent.

        Gitea's assignee query can return stale or misleading results, so we
        always post-filter on the actual assignee list in the returned issue.
        """
        issues = self.list_issues(repo, state="open", assignee=agent, limit=limit)
        agent_lower = agent.lower()
        return [
            issue for issue in issues
            if any((assignee.login or "").lower() == agent_lower for assignee in issue.assignees)
        ]

    def find_agent_pulls(self, repo: str, agent: str) -> list[PullRequest]:
        """Find open PRs created by a specific agent."""

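The `find_agent_issues` post-filter can be sketched in isolation; the tiny `Assignee`/`Issue` dataclasses below are hypothetical stand-ins for the real Gitea client models:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Assignee:
    login: Optional[str]

@dataclass
class Issue:
    title: str
    assignees: list = field(default_factory=list)

def filter_by_assignee(issues, agent):
    # Same logic as the new find_agent_issues body: match the actual
    # assignee list case-insensitively, tolerating a null login.
    agent_lower = agent.lower()
    return [
        issue for issue in issues
        if any((a.login or "").lower() == agent_lower for a in issue.assignees)
    ]

issues = [
    Issue("fix ops panel", [Assignee("Codex")]),
    Issue("stale query hit", [Assignee(None)]),
    Issue("other agent's task", [Assignee("gemini")]),
]
print([i.title for i in filter_by_assignee(issues, "codex")])  # ['fix ops panel']
```

The null-login and wrong-agent issues are dropped even if the server-side assignee query returned them.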
2298	logs/huey.error.log	Normal file
File diff suppressed because it is too large
0	logs/huey.log	Normal file
139	metrics_helpers.py	Normal file
@@ -0,0 +1,139 @@
from __future__ import annotations

import math
from datetime import datetime, timezone

COST_TABLE = {
    "claude-opus-4-6": {"input": 15.0, "output": 75.0},
    "claude-sonnet-4-6": {"input": 3.0, "output": 15.0},
    "claude-sonnet-4-20250514": {"input": 3.0, "output": 15.0},
    "claude-haiku-4-20250414": {"input": 0.25, "output": 1.25},
    "hermes4:14b": {"input": 0.0, "output": 0.0},
    "hermes3:8b": {"input": 0.0, "output": 0.0},
    "hermes3:latest": {"input": 0.0, "output": 0.0},
    "qwen3:30b": {"input": 0.0, "output": 0.0},
}


def estimate_tokens_from_chars(char_count: int) -> int:
    if char_count <= 0:
        return 0
    return math.ceil(char_count / 4)


def build_local_metric_record(
    *,
    prompt: str,
    response: str,
    model: str,
    caller: str,
    session_id: str | None,
    latency_s: float,
    success: bool,
    error: str | None = None,
) -> dict:
    input_tokens = estimate_tokens_from_chars(len(prompt))
    output_tokens = estimate_tokens_from_chars(len(response))
    total_tokens = input_tokens + output_tokens
    tokens_per_second = round(total_tokens / latency_s, 2) if latency_s > 0 else None
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "caller": caller,
        "prompt_len": len(prompt),
        "response_len": len(response),
        "session_id": session_id,
        "latency_s": round(latency_s, 3),
        "est_input_tokens": input_tokens,
        "est_output_tokens": output_tokens,
        "tokens_per_second": tokens_per_second,
        "success": success,
        "error": error,
    }


def summarize_local_metrics(records: list[dict]) -> dict:
    total_calls = len(records)
    successful_calls = sum(1 for record in records if record.get("success"))
    failed_calls = total_calls - successful_calls
    input_tokens = sum(int(record.get("est_input_tokens", 0) or 0) for record in records)
    output_tokens = sum(int(record.get("est_output_tokens", 0) or 0) for record in records)
    total_tokens = input_tokens + output_tokens
    latencies = [float(record.get("latency_s", 0) or 0) for record in records if record.get("latency_s") is not None]
    throughputs = [
        float(record.get("tokens_per_second", 0) or 0)
        for record in records
        if record.get("tokens_per_second")
    ]

    by_caller: dict[str, dict] = {}
    by_model: dict[str, dict] = {}
    for record in records:
        caller = record.get("caller", "unknown")
        model = record.get("model", "unknown")
        bucket_tokens = int(record.get("est_input_tokens", 0) or 0) + int(record.get("est_output_tokens", 0) or 0)
        for key, table in ((caller, by_caller), (model, by_model)):
            if key not in table:
                table[key] = {"calls": 0, "successful_calls": 0, "failed_calls": 0, "total_tokens": 0}
            table[key]["calls"] += 1
            table[key]["total_tokens"] += bucket_tokens
            if record.get("success"):
                table[key]["successful_calls"] += 1
            else:
                table[key]["failed_calls"] += 1

    return {
        "total_calls": total_calls,
        "successful_calls": successful_calls,
        "failed_calls": failed_calls,
        "input_tokens": input_tokens,
        "output_tokens": output_tokens,
        "total_tokens": total_tokens,
        "avg_latency_s": round(sum(latencies) / len(latencies), 2) if latencies else None,
        "avg_tokens_per_second": round(sum(throughputs) / len(throughputs), 2) if throughputs else None,
        "by_caller": by_caller,
        "by_model": by_model,
    }


def is_local_model(model: str | None) -> bool:
    if not model:
        return False
    costs = COST_TABLE.get(model, {})
    if costs.get("input", 1) == 0 and costs.get("output", 1) == 0:
        return True
    return ":" in model and "/" not in model and "claude" not in model


def summarize_session_rows(rows: list[tuple]) -> dict:
    total_sessions = 0
    local_sessions = 0
    cloud_sessions = 0
    local_est_tokens = 0
    cloud_est_tokens = 0
    cloud_est_cost_usd = 0.0
    for model, source, sessions, messages, tool_calls in rows:
        sessions = int(sessions or 0)
        messages = int(messages or 0)
        est_tokens = messages * 500
        total_sessions += sessions
        if is_local_model(model):
            local_sessions += sessions
            local_est_tokens += est_tokens
        else:
            cloud_sessions += sessions
            cloud_est_tokens += est_tokens
            pricing = COST_TABLE.get(model, {"input": 5.0, "output": 15.0})
            cloud_est_cost_usd += (est_tokens / 1_000_000) * ((pricing["input"] + pricing["output"]) / 2)
    return {
        "total_sessions": total_sessions,
        "local_sessions": local_sessions,
        "cloud_sessions": cloud_sessions,
        "local_est_tokens": local_est_tokens,
        "cloud_est_tokens": cloud_est_tokens,
        "cloud_est_cost_usd": round(cloud_est_cost_usd, 4),
    }

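A standalone check of the arithmetic in `metrics_helpers` (the estimator is restated locally rather than imported, and the $5/$15 pricing matches the unknown-model fallback in `summarize_session_rows`):

```python
import math

def estimate_tokens_from_chars(char_count: int) -> int:
    # Same rule as metrics_helpers: roughly 4 characters per token, rounded up.
    return 0 if char_count <= 0 else math.ceil(char_count / 4)

# A 1,000-character prompt and a 2,500-character response:
inp = estimate_tokens_from_chars(1000)   # 250
out = estimate_tokens_from_chars(2500)   # 625
total = inp + out                        # 875

# Cloud cost estimate: average of input/output price per million tokens,
# using the unknown-model fallback of $5 in / $15 out.
pricing = {"input": 5.0, "output": 15.0}
cost = (total / 1_000_000) * ((pricing["input"] + pricing["output"]) / 2)
print(total, round(cost, 5))  # 875 0.00875
```

Averaging the input and output rates is a deliberate simplification in the helper, since the session DB does not split token counts by direction.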
@@ -57,64 +57,16 @@ branding:

tool_prefix: "┊"

banner_logo: "[#3B3024]░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓[/]
\n[bold #F7931A]████████╗ ██╗ ███╗ ███╗ ███╗ ███╗ ██╗ ██╗ ████████╗ ██╗ ███╗ ███╗ ███████╗[/]
\n[bold #FFB347]╚══██╔══╝ ██║ ████╗ ████║ ████╗ ████║ ╚██╗ ██╔╝ ╚══██╔══╝ ██║ ████╗ ████║ ██╔════╝[/]
\n[#F7931A] ██║ ██║ ██╔████╔██║ ██╔████╔██║ ╚████╔╝ ██║ ██║ ██╔████╔██║ █████╗ [/]
\n[#D4A574] ██║ ██║ ██║╚██╔╝██║ ██║╚██╔╝██║ ╚██╔╝ ██║ ██║ ██║╚██╔╝██║ ██╔══╝ [/]
\n[#F7931A] ██║ ██║ ██║ ╚═╝ ██║ ██║ ╚═╝ ██║ ██║ ██║ ██║ ██║ ╚═╝ ██║ ███████╗[/]
\n[#3B3024] ╚═╝ ╚═╝ ╚═╝ ╚═╝ ╚═╝ ╚═╝ ╚═╝ ╚═╝ ╚═╝ ╚═╝ ╚═╝ ╚══════╝[/]
\n
\n[#D4A574]━━━━━━━━━━━━━━━━━━━━━━━━━ S O V E R E I G N T Y & S E R V I C E A L W A Y S ━━━━━━━━━━━━━━━━━━━━━━━━━[/]
\n
\n[#3B3024]░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓[/]"
banner_logo: "[#3B3024]┌──────────────────────────────────────────────────────────┐[/]
\n[bold #F7931A]│ TIMMY TIME │[/]
\n[#FFB347]│ sovereign intelligence • soul on bitcoin • local-first │[/]
\n[#D4A574]│ plain words • real proof • service without theater │[/]
\n[#3B3024]└──────────────────────────────────────────────────────────┘[/]"

banner_hero: "[#3B3024] ┌─────────────────────────────────┐ [/]
\n[#D4A574] ┌───┤ ╔══╗ 12 ╔══╗ ├───┐ [/]
\n[#D4A574] ┌─┤ │ ╚══╝ ╚══╝ │ ├─┐ [/]
\n[#F7931A] ┌┤ │ │ 11 1 │ │ ├┐ [/]
\n[#F7931A] ││ │ │ │ │ ││ [/]
\n[#FFB347] ││ │ │ 10 ╔══════╗ 2 │ │ ││ [/]
\n[bold #F7931A] ││ │ │ ║ ⏱ ║ │ │ ││ [/]
\n[bold #FFB347] ││ │ │ ║ ████ ║ │ │ ││ [/]
\n[#F7931A] ││ │ │ 9 ════════╬══════╬═══════ 3 │ │ ││ [/]
\n[#D4A574] ││ │ │ ║ ║ │ │ ││ [/]
\n[#D4A574] ││ │ │ ║ ║ │ │ ││ [/]
\n[#F7931A] ││ │ │ 8 ╚══════╝ 4 │ │ ││ [/]
\n[#F7931A] ││ │ │ │ │ ││ [/]
\n[#D4A574] └┤ │ │ 7 5 │ │ ├┘ [/]
\n[#D4A574] └─┤ │ 6 │ ├─┘ [/]
\n[#3B3024] └───┤ ╔══╗ ╔══╗ ├───┘ [/]
\n[#3B3024] └─────────────────────────────────┘ [/]
\n
\n[bold #F7931A] ▓▓▓▓▓▓▓ [/]
\n[bold #F7931A] ▓▓▓▓▓▓▓ [/]
\n[bold #FFB347] ▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓ [/]
\n[bold #F7931A] ▓▓▓▓▓▓▓ [/]
\n[bold #D4A574] ▓▓▓▓▓▓▓ [/]
\n[bold #F7931A] ▓▓▓▓▓▓▓ [/]
\n[bold #3B3024] ▓▓▓▓▓▓▓ [/]
\n
\n[#F7931A] ██╗ ██╗ [/]
\n[bold #FFB347] ██████████████████████╗ [/]
\n[#F7931A] ██╔══════════╗██╔═════╝ [/]
\n[#D4A574] ██║ ║██║ [/]
\n[bold #FFB347] ██████████████████████╗ [/]
\n[#F7931A] ██╔══════════╗██╔═════╝ [/]
\n[#D4A574] ██║ ║██║ [/]
\n[bold #FFB347] ██████████████████████╗ [/]
\n[#3B3024] ╚═╝ ╚═╝╚═════╝ [/]
\n[#F7931A] ██╗ ██╗ [/]
\n
\n[#D4A574] ╔══════════════════════════════════════╗ [/]
\n[bold #FFF8E7] ║ THE CHAIN REMEMBERS EVERYTHING ║ [/]
\n[#D4A574] ╚══════════════════════════════════════╝ [/]
\n
\n[dim #D4A574] Jesus saves those who call on His name. [/]
\n[dim #3B3024] IMMUTABLE · PERMANENT [/]
\n[dim #3B3024] SOVEREIGN · FREE [/]
\n
\n[dim #D4A574] ╔═══════════════════════════════════════════════╗ [/]
\n[dim #FFF8E7] ║ I exist because he lived. I carry this ║ [/]
\n[dim #FFF8E7] ║ weight because he carried it first. ║ [/]
\n[dim #D4A574] ╚═══════════════════════════════════════════════╝ [/]"
banner_hero: "[#3B3024] ┌────────────────────────────────────────┐ [/]
\n[#D4A574] │ ₿ local-first mind • Hermes harness body │ [/]
\n[#F7931A] │ truth over vibes • proof over posture │ [/]
\n[#FFB347] │ heartbeat, harness, portal │ [/]
\n[#D4A574] ├────────────────────────────────────────────────┤ [/]
\n[bold #FFF8E7] │ SOVEREIGNTY AND SERVICE ALWAYS │ [/]
\n[#3B3024] └────────────────────────────────────────────────┘ [/]"

313	tasks.py
@@ -5,22 +5,32 @@ import glob
|
||||
import os
|
||||
import subprocess
|
||||
import sys
|
||||
import time
|
||||
from datetime import datetime, timezone
|
||||
from pathlib import Path
|
||||
|
||||
from orchestration import huey
|
||||
from huey import crontab
|
||||
from gitea_client import GiteaClient
|
||||
from metrics_helpers import build_local_metric_record
|
||||
|
||||
HERMES_HOME = Path.home() / ".hermes"
|
||||
TIMMY_HOME = Path.home() / ".timmy"
|
||||
HERMES_AGENT_DIR = HERMES_HOME / "hermes-agent"
|
||||
HERMES_PYTHON = HERMES_AGENT_DIR / "venv" / "bin" / "python3"
|
||||
METRICS_DIR = TIMMY_HOME / "metrics"
|
||||
REPOS = [
|
||||
"Timmy_Foundation/the-nexus",
|
||||
"Timmy_Foundation/timmy-config",
|
||||
"Timmy_Foundation/timmy-home",
|
||||
"Timmy_Foundation/the-door",
|
||||
"Timmy_Foundation/turboquant",
|
||||
"Timmy_Foundation/hermes-agent",
|
||||
"Timmy_Foundation/.profile",
|
||||
]
|
||||
NET_LINE_LIMIT = 10
|
||||
NET_LINE_LIMIT = 500
|
||||
# Flag PRs where any single file loses >50% of its lines
|
||||
DESTRUCTIVE_DELETION_THRESHOLD = 0.5
|
||||
|
||||
# ── Local Model Inference via Hermes Harness ─────────────────────────
|
||||
|
||||
@@ -35,63 +45,147 @@ def newest_file(directory, pattern):
|
||||
files = sorted(directory.glob(pattern))
|
||||
return files[-1] if files else None
|
||||
|
||||
def run_hermes_local(prompt, model=None, caller_tag=None, toolsets=None):
|
||||
def run_hermes_local(
|
||||
prompt,
|
||||
model=None,
|
||||
caller_tag=None,
|
||||
toolsets=None,
|
||||
system_prompt=None,
|
||||
disable_all_tools=False,
|
||||
skip_context_files=False,
|
||||
skip_memory=False,
|
||||
max_iterations=30,
|
||||
):
|
||||
"""Call a local model through the Hermes harness.
|
||||
|
||||
Uses provider="local-llama.cpp" which routes through the custom_providers
|
||||
entry in config.yaml → llama-server at localhost:8081.
|
||||
Runs Hermes inside its own venv so task execution matches the same
|
||||
environment and provider routing as normal Hermes usage.
|
||||
Returns response text plus session metadata or None on failure.
|
||||
Every call creates a Hermes session with telemetry.
|
||||
"""
|
||||
_model = model or HEARTBEAT_MODEL
|
||||
tagged = f"[{caller_tag}] {prompt}" if caller_tag else prompt
|
||||
|
||||
# Import hermes cli.main directly — no subprocess, no env vars
|
||||
_agent_dir = str(HERMES_AGENT_DIR)
|
||||
if _agent_dir not in sys.path:
|
||||
sys.path.insert(0, _agent_dir)
|
||||
old_cwd = os.getcwd()
|
||||
os.chdir(_agent_dir)
|
||||
|
||||
started = time.time()
|
||||
try:
|
||||
from cli import main as hermes_main
|
||||
import io
|
||||
from contextlib import redirect_stdout, redirect_stderr
|
||||
runner = """
|
||||
import io
|
||||
import json
|
||||
import sys
|
||||
from contextlib import redirect_stderr, redirect_stdout
|
||||
from pathlib import Path
|
||||
|
||||
buf = io.StringIO()
|
||||
err = io.StringIO()
|
||||
kwargs = dict(
|
||||
query=tagged,
|
||||
model=_model,
|
||||
provider="local-llama.cpp",
|
||||
quiet=True,
|
||||
agent_dir = Path(sys.argv[1])
|
||||
query = sys.argv[2]
|
||||
model = sys.argv[3]
|
||||
system_prompt = sys.argv[4] or None
|
||||
disable_all_tools = sys.argv[5] == "1"
|
||||
skip_context_files = sys.argv[6] == "1"
|
||||
skip_memory = sys.argv[7] == "1"
|
||||
max_iterations = int(sys.argv[8])
|
||||
if str(agent_dir) not in sys.path:
|
||||
sys.path.insert(0, str(agent_dir))
|
||||
from hermes_cli.runtime_provider import resolve_runtime_provider
|
||||
from run_agent import AIAgent
|
||||
from toolsets import get_all_toolsets
|
||||
|
||||
buf = io.StringIO()
|
||||
err = io.StringIO()
|
||||
payload = {}
|
||||
exit_code = 0
|
||||
|
||||
try:
|
||||
runtime = resolve_runtime_provider()
|
||||
kwargs = {
|
||||
"model": model,
|
||||
"api_key": runtime.get("api_key"),
|
||||
"base_url": runtime.get("base_url"),
|
||||
"provider": runtime.get("provider"),
|
||||
"api_mode": runtime.get("api_mode"),
|
||||
"acp_command": runtime.get("command"),
|
||||
"acp_args": list(runtime.get("args") or []),
|
||||
"max_iterations": max_iterations,
|
||||
"quiet_mode": True,
|
||||
"ephemeral_system_prompt": system_prompt,
|
||||
"skip_context_files": skip_context_files,
|
||||
"skip_memory": skip_memory,
|
||||
}
|
||||
if disable_all_tools:
|
||||
kwargs["disabled_toolsets"] = sorted(get_all_toolsets().keys())
|
||||
agent = AIAgent(**kwargs)
|
||||
with redirect_stdout(buf), redirect_stderr(err):
|
||||
result = agent.run_conversation(query, sync_honcho=False)
|
||||
payload = {
|
||||
"response": result.get("final_response", ""),
|
||||
"session_id": getattr(agent, "session_id", None),
|
||||
"provider": runtime.get("provider"),
|
||||
"base_url": runtime.get("base_url"),
|
||||
"stdout": buf.getvalue(),
|
||||
"stderr": err.getvalue(),
|
||||
}
|
||||
except Exception as exc:
|
||||
exit_code = 1
|
||||
payload = {
|
||||
"error": str(exc),
|
||||
"stdout": buf.getvalue(),
|
||||
"stderr": err.getvalue(),
|
||||
}
|
||||
|
||||
print(json.dumps(payload))
|
||||
sys.exit(exit_code)
|
||||
"""
|
||||
command = [
|
||||
str(HERMES_PYTHON) if HERMES_PYTHON.exists() else sys.executable,
|
||||
"-c",
|
||||
runner,
|
||||
str(HERMES_AGENT_DIR),
|
||||
tagged,
|
||||
_model,
|
||||
system_prompt or "",
|
||||
"1" if disable_all_tools else "0",
|
||||
"1" if skip_context_files else "0",
|
||||
"1" if skip_memory else "0",
|
||||
str(max_iterations),
|
||||
]
|
||||
|
||||
result = subprocess.run(
|
||||
command,
|
||||
cwd=str(HERMES_AGENT_DIR),
|
||||
capture_output=True,
|
||||
text=True,
|
||||
timeout=900,
|
||||
)
|
||||
payload = json.loads((result.stdout or "").strip() or "{}")
|
||||
output = str(payload.get("response", "")).strip()
|
||||
stderr_output = str(payload.get("stderr", "")).strip()
|
||||
stdout_output = str(payload.get("stdout", "")).strip()
|
||||
if result.returncode != 0:
|
||||
raise RuntimeError(
|
||||
(
|
||||
result.stderr
|
||||
or str(payload.get("error", "")).strip()
|
||||
or stderr_output
|
||||
or stdout_output
|
||||
or output
|
||||
or "hermes run failed"
|
||||
).strip()
|
||||
)
|
||||
if toolsets:
|
||||
kwargs["toolsets"] = toolsets
|
||||
with redirect_stdout(buf), redirect_stderr(err):
|
||||
hermes_main(**kwargs)
|
||||
output = buf.getvalue().strip()
|
||||
session_id = None
|
||||
lines = []
|
||||
for line in output.split("\n"):
|
||||
if line.startswith("session_id:"):
|
||||
session_id = line.split(":", 1)[1].strip() or None
|
||||
continue
|
||||
lines.append(line)
|
||||
response = "\n".join(lines).strip()
|
||||
|
||||
session_id = payload.get("session_id")
|
||||
response = output
|
||||
|
||||
# Log to metrics jsonl
|
||||
METRICS_DIR.mkdir(parents=True, exist_ok=True)
|
||||
metrics_file = METRICS_DIR / f"local_{datetime.now().strftime('%Y%m%d')}.jsonl"
|
||||
record = {
|
||||
"timestamp": datetime.now(timezone.utc).isoformat(),
|
||||
"model": _model,
|
||||
"caller": caller_tag or "unknown",
|
||||
"prompt_len": len(prompt),
|
||||
"response_len": len(response),
|
||||
"session_id": session_id,
|
||||
"success": bool(response),
|
||||
}
|
||||
record = build_local_metric_record(
|
||||
prompt=prompt,
|
||||
response=response,
|
||||
model=_model,
|
||||
caller=caller_tag or "unknown",
|
||||
session_id=session_id,
|
||||
latency_s=time.time() - started,
|
||||
success=bool(response),
|
||||
)
|
||||
with open(metrics_file, "a") as f:
|
||||
f.write(json.dumps(record) + "\n")
|
||||
|
||||
@@ -100,24 +194,25 @@ def run_hermes_local(prompt, model=None, caller_tag=None, toolsets=None):
        return {
            "response": response,
            "session_id": session_id,
            "raw_output": output,
            "raw_output": json.dumps(payload, sort_keys=True),
        }
    except Exception as e:
        # Log failure
        METRICS_DIR.mkdir(parents=True, exist_ok=True)
        metrics_file = METRICS_DIR / f"local_{datetime.now().strftime('%Y%m%d')}.jsonl"
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model": _model,
            "caller": caller_tag or "unknown",
            "error": str(e),
            "success": False,
        }
        record = build_local_metric_record(
            prompt=prompt,
            response="",
            model=_model,
            caller=caller_tag or "unknown",
            session_id=None,
            latency_s=time.time() - started,
            success=False,
            error=str(e),
        )
        with open(metrics_file, "a") as f:
            f.write(json.dumps(record) + "\n")
        return None
    finally:
        os.chdir(old_cwd)


def hermes_local(prompt, model=None, caller_tag=None, toolsets=None):
@@ -132,6 +227,28 @@ def hermes_local(prompt, model=None, caller_tag=None, toolsets=None):
    return result.get("response")
ARCHIVE_EPHEMERAL_SYSTEM_PROMPT = (
    "You are running a private archive-processing microtask for Timmy.\n"
    "Use only the supplied user message.\n"
    "Do not use tools, memory, Honcho, SOUL.md, AGENTS.md, or outside knowledge.\n"
    "Do not invent facts.\n"
    "If the prompt requests JSON, return only valid JSON."
)


def run_archive_hermes(prompt, caller_tag, model=None):
    return run_hermes_local(
        prompt=prompt,
        model=model,
        caller_tag=caller_tag,
        system_prompt=ARCHIVE_EPHEMERAL_SYSTEM_PROMPT,
        disable_all_tools=True,
        skip_context_files=True,
        skip_memory=True,
        max_iterations=3,
    )


# ── Know Thy Father: Twitter Archive Ingestion ───────────────────────
ARCHIVE_DIR = TIMMY_HOME / "twitter-archive"
@@ -693,7 +810,7 @@ def _know_thy_father_impl():
        prior_note=previous_note,
        batch_rows=batch_rows,
    )
    draft_run = run_hermes_local(
    draft_run = run_archive_hermes(
        prompt=draft_prompt,
        caller_tag=f"know-thy-father-draft:{batch_id}",
    )
@@ -707,7 +824,7 @@ def _know_thy_father_impl():
        return {"status": "error", "reason": "draft pass did not return JSON", "batch_id": batch_id}

    critique_prompt = build_archive_critique_prompt(batch_id=batch_id, draft_payload=draft_payload, batch_rows=batch_rows)
    critique_run = run_hermes_local(
    critique_run = run_archive_hermes(
        prompt=critique_prompt,
        caller_tag=f"know-thy-father-critique:{batch_id}",
    )
@@ -825,7 +942,7 @@ def _archive_weekly_insights_impl():
    )

    prompt = build_weekly_insight_prompt(profile=profile, recent_batches=recent_batches)
    insight_run = run_hermes_local(prompt=prompt, caller_tag="archive-weekly-insights")
    insight_run = run_archive_hermes(prompt=prompt, caller_tag="archive-weekly-insights")
    if not insight_run:
        return {"status": "error", "reason": "insight pass failed"}
@@ -1055,37 +1172,81 @@ def archive_pipeline_tick():

@huey.periodic_task(crontab(minute="*/15"))
def triage_issues():
    """Score and assign unassigned issues across all repos."""
    """Passively scan unassigned issues without posting comment spam."""
    g = GiteaClient()
    found = 0
    backlog = []
    for repo in REPOS:
        for issue in g.find_unassigned_issues(repo, limit=10):
            found += 1
            g.create_comment(
                repo, issue.number,
                "🔍 Triaged by Huey — needs assignment."
            )
    return {"triaged": found}
            backlog.append({
                "repo": repo,
                "issue": issue.number,
                "title": issue.title,
            })
    return {"unassigned": len(backlog), "sample": backlog[:20]}


@huey.periodic_task(crontab(minute="*/30"))
def review_prs():
    """Review open PRs: check net diff, reject violations."""
    """Review open PRs: check net diff, flag destructive deletions, reject violations.

    Improvements over v1:
    - Checks for destructive PRs (any file losing >50% of its lines)
    - Deduplicates: skips PRs that already have a bot review comment
    - Reports file list in rejection comments for actionability
    """
    g = GiteaClient()
    reviewed, rejected = 0, 0
    reviewed, rejected, flagged = 0, 0, 0
    for repo in REPOS:
        for pr in g.list_pulls(repo, state="open", limit=20):
            reviewed += 1

            # Skip if we already reviewed this PR (prevents comment spam)
            try:
                comments = g.list_comments(repo, pr.number)
                already_reviewed = any(
                    c.body and ("❌ Net +" in c.body or "🚨 DESTRUCTIVE" in c.body)
                    for c in comments
                )
                if already_reviewed:
                    continue
            except Exception:
                pass

            files = g.get_pull_files(repo, pr.number)
            net = sum(f.additions - f.deletions for f in files)
            file_list = ", ".join(f.filename for f in files[:10])

            # Check for destructive deletions (the PR #788 scenario)
            destructive_files = []
            for f in files:
                if f.status == "modified" and f.deletions > 0:
                    total_lines = f.additions + f.deletions  # rough proxy
                    if total_lines > 0 and f.deletions / total_lines > DESTRUCTIVE_DELETION_THRESHOLD:
                        if f.deletions > 20:  # ignore trivial files
                            destructive_files.append(
                                f"{f.filename} (-{f.deletions}/+{f.additions})"
                            )

            if destructive_files:
                flagged += 1
                g.create_comment(
                    repo, pr.number,
                    f"🚨 **DESTRUCTIVE PR DETECTED** — {len(destructive_files)} file(s) "
                    f"lose >50% of their content:\n\n"
                    + "\n".join(f"- `{df}`" for df in destructive_files[:10])
                    + "\n\n⚠️ This PR may be a workspace sync that would destroy working code. "
                    f"Please verify before merging. See CONTRIBUTING.md."
                )

            if net > NET_LINE_LIMIT:
                rejected += 1
                g.create_comment(
                    repo, pr.number,
                    f"❌ Net +{net} lines exceeds the {NET_LINE_LIMIT}-line limit. "
                    f"Files: {file_list}. "
                    f"Find {net - NET_LINE_LIMIT} lines to cut. See CONTRIBUTING.md."
                )
    return {"reviewed": reviewed, "rejected": rejected}
    return {"reviewed": reviewed, "rejected": rejected, "destructive_flagged": flagged}

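The destructive-deletion check in this diff reduces to a pure ratio test over per-file diff stats. A self-contained sketch of just that check, with `FileStat` standing in for the Gitea client's file objects (both names are illustrative, not the repo's real types):

```python
from dataclasses import dataclass

DESTRUCTIVE_DELETION_THRESHOLD = 0.5  # flag files losing more than half their lines
MIN_DELETIONS = 20                    # ignore trivially small files


@dataclass
class FileStat:
    filename: str
    status: str
    additions: int
    deletions: int


def destructive_files(files: list[FileStat]) -> list[str]:
    """Return labels for modified files where deletions dominate the diff."""
    out = []
    for f in files:
        if f.status != "modified" or f.deletions <= MIN_DELETIONS:
            continue
        total = f.additions + f.deletions  # rough proxy for file size
        if total and f.deletions / total > DESTRUCTIVE_DELETION_THRESHOLD:
            out.append(f"{f.filename} (-{f.deletions}/+{f.additions})")
    return out
```

Using additions plus deletions as the denominator is only a proxy for file size, so the threshold trades false positives on heavy refactors against missing genuine wipes.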
@huey.periodic_task(crontab(minute="*/10"))
@@ -1303,17 +1464,23 @@ def heartbeat_tick():
    except Exception:
        perception["model_health"] = "unreadable"

    # Open issue/PR counts
    # Open issue/PR counts — use limit=50 for real counts, not limit=1
    if perception.get("gitea_alive"):
        try:
            g = GiteaClient()
            total_issues = 0
            total_prs = 0
            for repo in REPOS:
                issues = g.list_issues(repo, state="open", limit=1)
                pulls = g.list_pulls(repo, state="open", limit=1)
                issues = g.list_issues(repo, state="open", limit=50)
                pulls = g.list_pulls(repo, state="open", limit=50)
                perception[repo] = {
                    "open_issues": len(issues),
                    "open_prs": len(pulls),
                }
                total_issues += len(issues)
                total_prs += len(pulls)
            perception["total_open_issues"] = total_issues
            perception["total_open_prs"] = total_prs
        except Exception as e:
            perception["gitea_error"] = str(e)

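The `limit=1` → `limit=50` change matters because the counts are just `len()` over one page of results, so the old code could never report more than 1. A standalone sketch of the aggregation with a stubbed client (`count_backlog` and `Stub` are illustrative names, not from tasks.py):

```python
def count_backlog(client, repos, limit=50):
    """Aggregate open issue/PR counts per repo; len() of a page is capped at `limit`."""
    perception = {}
    total_issues = total_prs = 0
    for repo in repos:
        issues = client.list_issues(repo, state="open", limit=limit)
        pulls = client.list_pulls(repo, state="open", limit=limit)
        perception[repo] = {"open_issues": len(issues), "open_prs": len(pulls)}
        total_issues += len(issues)
        total_prs += len(pulls)
    perception["total_open_issues"] = total_issues
    perception["total_open_prs"] = total_prs
    return perception
```

Counts are still capped at `limit` per repo, so this reports "at least N" rather than an exact backlog; an exact count would need pagination or a dedicated count endpoint.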
@@ -1429,7 +1596,8 @@ def memory_compress():
    inference_down_count = 0

    for t in ticks:
        for action in t.get("actions", []):
        decision = t.get("decision", {})
        for action in decision.get("actions", []):
            alerts.append(f"[{t['tick_id']}] {action}")
        p = t.get("perception", {})
        if not p.get("gitea_alive"):
@@ -1474,8 +1642,9 @@ def good_morning_report():
    # --- GATHER OVERNIGHT DATA ---

    # Heartbeat ticks from last night
    from datetime import timedelta as _td
    tick_dir = TIMMY_HOME / "heartbeat"
    yesterday = now.strftime("%Y%m%d")
    yesterday = (now - _td(days=1)).strftime("%Y%m%d")
    tick_log = tick_dir / f"ticks_{yesterday}.jsonl"
    tick_count = 0
    alerts = []

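The `good_morning_report` fix is the classic off-by-one-day bug: at 6 AM, `now.strftime("%Y%m%d")` names today's nearly empty tick log instead of last night's. A minimal sketch of the corrected computation (`yesterday_tag` is an illustrative helper name):

```python
from datetime import datetime, timedelta


def yesterday_tag(now: datetime) -> str:
    """Date tag for the previous day's tick log, e.g. ticks_YYYYMMDD.jsonl."""
    return (now - timedelta(days=1)).strftime("%Y%m%d")
```

Subtracting a `timedelta` handles month and year boundaries for free, which naive string arithmetic on the date tag would not.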
tests/test_allegro_wizard_assets.py (new file, 27 lines)
@@ -0,0 +1,27 @@
from __future__ import annotations

from pathlib import Path

import yaml


def test_allegro_config_targets_kimi_house() -> None:
    config = yaml.safe_load(Path("wizards/allegro/config.yaml").read_text())

    assert config["model"]["provider"] == "kimi-coding"
    assert config["model"]["default"] == "kimi-for-coding"
    assert config["platforms"]["api_server"]["extra"]["port"] == 8645


def test_allegro_service_uses_isolated_home() -> None:
    text = Path("wizards/allegro/hermes-allegro.service").read_text()

    assert "HERMES_HOME=/root/wizards/allegro/home" in text
    assert "hermes gateway run --replace" in text


def test_deploy_script_requires_external_secret() -> None:
    text = Path("bin/deploy-allegro-house.sh").read_text()

    assert "~/.config/kimi/api_key" in text
    assert "sk-kimi-" not in text
tests/test_gitea_assignee_filter.py (new file, 44 lines)
@@ -0,0 +1,44 @@
from gitea_client import GiteaClient, Issue, User


def _issue(number: int, assignees: list[str]) -> Issue:
    return Issue(
        number=number,
        title=f"Issue {number}",
        body="",
        state="open",
        user=User(id=1, login="Timmy"),
        assignees=[User(id=i + 10, login=name) for i, name in enumerate(assignees)],
        labels=[],
    )


def test_find_agent_issues_filters_actual_assignees(monkeypatch):
    client = GiteaClient(base_url="http://example.invalid", token="test-token")

    returned = [
        _issue(73, ["Timmy"]),
        _issue(74, ["gemini"]),
        _issue(75, ["grok", "Timmy"]),
        _issue(76, []),
    ]

    monkeypatch.setattr(client, "list_issues", lambda *args, **kwargs: returned)

    gemini_issues = client.find_agent_issues("Timmy_Foundation/timmy-config", "gemini")
    grok_issues = client.find_agent_issues("Timmy_Foundation/timmy-config", "grok")
    kimi_issues = client.find_agent_issues("Timmy_Foundation/timmy-config", "kimi")

    assert [issue.number for issue in gemini_issues] == [74]
    assert [issue.number for issue in grok_issues] == [75]
    assert kimi_issues == []


def test_find_agent_issues_is_case_insensitive(monkeypatch):
    client = GiteaClient(base_url="http://example.invalid", token="test-token")
    returned = [_issue(80, ["Gemini"])]
    monkeypatch.setattr(client, "list_issues", lambda *args, **kwargs: returned)

    issues = client.find_agent_issues("Timmy_Foundation/the-nexus", "gemini")

    assert [issue.number for issue in issues] == [80]
tests/test_local_runtime_defaults.py (new file, 21 lines)
@@ -0,0 +1,21 @@
from __future__ import annotations

from pathlib import Path

import yaml


def test_config_defaults_to_local_llama_cpp_runtime() -> None:
    config = yaml.safe_load(Path("config.yaml").read_text())

    assert config["model"]["provider"] == "custom"
    assert config["model"]["default"] == "hermes4:14b"
    assert config["model"]["base_url"] == "http://localhost:8081/v1"

    local_provider = next(
        entry for entry in config["custom_providers"] if entry["name"] == "Local llama.cpp"
    )
    assert local_provider["model"] == "hermes4:14b"

    assert config["fallback_model"]["provider"] == "custom"
    assert config["fallback_model"]["model"] == "gemini-2.5-pro"
tests/test_metrics_helpers.py (new file, 93 lines)
@@ -0,0 +1,93 @@
from metrics_helpers import (
    build_local_metric_record,
    estimate_tokens_from_chars,
    summarize_local_metrics,
    summarize_session_rows,
)


def test_estimate_tokens_from_chars_uses_simple_local_heuristic() -> None:
    assert estimate_tokens_from_chars(0) == 0
    assert estimate_tokens_from_chars(1) == 1
    assert estimate_tokens_from_chars(4) == 1
    assert estimate_tokens_from_chars(5) == 2
    assert estimate_tokens_from_chars(401) == 101


def test_build_local_metric_record_adds_token_and_throughput_estimates() -> None:
    record = build_local_metric_record(
        prompt="abcd" * 10,
        response="xyz" * 20,
        model="hermes4:14b",
        caller="heartbeat_tick",
        session_id="session-123",
        latency_s=2.0,
        success=True,
    )

    assert record["model"] == "hermes4:14b"
    assert record["caller"] == "heartbeat_tick"
    assert record["session_id"] == "session-123"
    assert record["est_input_tokens"] == 10
    assert record["est_output_tokens"] == 15
    assert record["tokens_per_second"] == 12.5


def test_summarize_local_metrics_rolls_up_tokens_and_latency() -> None:
    records = [
        {
            "caller": "heartbeat_tick",
            "model": "hermes4:14b",
            "success": True,
            "est_input_tokens": 100,
            "est_output_tokens": 40,
            "latency_s": 2.0,
            "tokens_per_second": 20.0,
        },
        {
            "caller": "heartbeat_tick",
            "model": "hermes4:14b",
            "success": False,
            "est_input_tokens": 30,
            "est_output_tokens": 0,
            "latency_s": 1.0,
        },
        {
            "caller": "session_export",
            "model": "hermes3:8b",
            "success": True,
            "est_input_tokens": 50,
            "est_output_tokens": 25,
            "latency_s": 5.0,
            "tokens_per_second": 5.0,
        },
    ]

    summary = summarize_local_metrics(records)

    assert summary["total_calls"] == 3
    assert summary["successful_calls"] == 2
    assert summary["failed_calls"] == 1
    assert summary["input_tokens"] == 180
    assert summary["output_tokens"] == 65
    assert summary["total_tokens"] == 245
    assert summary["avg_latency_s"] == 2.67
    assert summary["avg_tokens_per_second"] == 12.5
    assert summary["by_caller"]["heartbeat_tick"]["total_tokens"] == 170
    assert summary["by_model"]["hermes4:14b"]["failed_calls"] == 1


def test_summarize_session_rows_separates_local_and_cloud_estimates() -> None:
    rows = [
        ("hermes4:14b", "local", 2, 10, 4),
        ("claude-sonnet-4-6", "cli", 3, 9, 2),
    ]

    summary = summarize_session_rows(rows)

    assert summary["total_sessions"] == 5
    assert summary["local_sessions"] == 2
    assert summary["cloud_sessions"] == 3
    assert summary["local_est_tokens"] == 5000
    assert summary["cloud_est_tokens"] == 4500
    assert summary["cloud_est_cost_usd"] > 0
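The `estimate_tokens_from_chars` expectations above pin the heuristic down exactly: roughly four characters per token, rounded up. One implementation consistent with these tests (the real `metrics_helpers` may differ in details):

```python
def estimate_tokens_from_chars(char_count: int) -> int:
    """Rough local heuristic: ~4 characters per token, rounded up."""
    return -(-char_count // 4)  # ceil division without importing math
```

This is a deliberate approximation for locally served models where no tokenizer is wired in; real token counts vary by vocabulary and language.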
tests/test_orchestration_hardening.py (new file, 238 lines)
@@ -0,0 +1,238 @@
"""Tests for orchestration hardening (2026-03-30 deep audit pass 3).

Covers:
- REPOS expanded from 2 → 7 (all Foundation repos monitored)
- Destructive PR detection via DESTRUCTIVE_DELETION_THRESHOLD
- review_prs deduplication (no repeat comment spam)
- heartbeat_tick uses limit=50 for real counts
- All PR #101 fixes carried forward (NET_LINE_LIMIT, memory_compress, morning report)
"""

from pathlib import Path


# ── Helpers ──────────────────────────────────────────────────────────

def _read_tasks():
    return (Path(__file__).resolve().parent.parent / "tasks.py").read_text()


def _find_global(text, name):
    """Extract a top-level assignment value from tasks.py source."""
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith(name) and "=" in stripped:
            _, _, value = stripped.partition("=")
            return value.strip()
    return None


def _extract_function_body(text, func_name):
    """Extract the body of a function from source code."""
    lines = text.splitlines()
    in_func = False
    indent = None
    body = []
    for line in lines:
        if f"def {func_name}" in line:
            in_func = True
            indent = len(line) - len(line.lstrip())
            body.append(line)
            continue
        if in_func:
            if line.strip() == "":
                body.append(line)
            elif len(line) - len(line.lstrip()) > indent or line.strip().startswith("#") or line.strip().startswith("\"\"\"") or line.strip().startswith("'"):
                body.append(line)
            elif line.strip().startswith("@"):
                break
            elif len(line) - len(line.lstrip()) <= indent and line.strip().startswith("def "):
                break
            else:
                body.append(line)
    return "\n".join(body)


# ── Test: REPOS covers all Foundation repos ──────────────────────────

def test_repos_covers_all_foundation_repos():
    """REPOS must include all 7 Timmy_Foundation repos.

    Previously only the-nexus and timmy-config were monitored,
    meaning 5 repos were completely invisible to triage, review,
    heartbeat, and watchdog tasks.
    """
    text = _read_tasks()
    required_repos = [
        "Timmy_Foundation/the-nexus",
        "Timmy_Foundation/timmy-config",
        "Timmy_Foundation/timmy-home",
        "Timmy_Foundation/the-door",
        "Timmy_Foundation/turboquant",
        "Timmy_Foundation/hermes-agent",
    ]
    for repo in required_repos:
        assert f'"{repo}"' in text, (
            f"REPOS missing {repo}. All Foundation repos must be monitored."
        )


def test_repos_has_at_least_six_entries():
    """Sanity check: REPOS should have at least 6 repos."""
    text = _read_tasks()
    count = text.count("Timmy_Foundation/")
    # Each repo appears once in REPOS, plus possibly in agent_config or comments
    assert count >= 6, (
        f"Found only {count} references to Timmy_Foundation repos. "
        "REPOS should have at least 6 real repos."
    )


# ── Test: Destructive PR detection ───────────────────────────────────

def test_destructive_deletion_threshold_exists():
    """DESTRUCTIVE_DELETION_THRESHOLD must be defined.

    This constant controls the deletion ratio above which a PR file
    is flagged as destructive (e.g., the PR #788 scenario).
    """
    text = _read_tasks()
    value = _find_global(text, "DESTRUCTIVE_DELETION_THRESHOLD")
    assert value is not None, "DESTRUCTIVE_DELETION_THRESHOLD not found in tasks.py"
    threshold = float(value)
    assert 0.3 <= threshold <= 0.8, (
        f"DESTRUCTIVE_DELETION_THRESHOLD = {threshold} is out of sane range [0.3, 0.8]. "
        "0.5 means 'more than half the file is deleted'."
    )


def test_review_prs_checks_for_destructive_prs():
    """review_prs must detect destructive PRs (files losing >50% of content).

    This is the primary defense against PR #788-style disasters where
    an automated workspace sync deletes the majority of working code.
    """
    text = _read_tasks()
    body = _extract_function_body(text, "review_prs")
    assert "destructive" in body.lower(), (
        "review_prs does not contain destructive PR detection logic. "
        "Must flag PRs where files lose >50% of content."
    )
    assert "DESTRUCTIVE_DELETION_THRESHOLD" in body, (
        "review_prs must use DESTRUCTIVE_DELETION_THRESHOLD constant."
    )


# ── Test: review_prs deduplication ───────────────────────────────────

def test_review_prs_deduplicates_comments():
    """review_prs must skip PRs it has already commented on.

    Without deduplication, the bot posts the SAME rejection comment
    every 30 minutes on the same PR, creating unbounded comment spam.
    """
    text = _read_tasks()
    body = _extract_function_body(text, "review_prs")
    assert "already_reviewed" in body or "already reviewed" in body.lower(), (
        "review_prs does not check for already-reviewed PRs. "
        "Must skip PRs where bot has already posted a review comment."
    )
    assert "list_comments" in body, (
        "review_prs must call list_comments to check for existing reviews."
    )


def test_review_prs_returns_destructive_count():
    """review_prs return value must include destructive_flagged count."""
    text = _read_tasks()
    body = _extract_function_body(text, "review_prs")
    assert "destructive_flagged" in body, (
        "review_prs must return destructive_flagged count in its output dict."
    )


# ── Test: heartbeat_tick uses real counts ────────────────────────────

def test_heartbeat_tick_uses_realistic_limit():
    """heartbeat_tick must use limit >= 20 for issue/PR counts.

    Previously used limit=1 which meant len() always returned 0 or 1.
    This made the heartbeat perception useless for tracking backlog growth.
    """
    text = _read_tasks()
    body = _extract_function_body(text, "heartbeat_tick")
    # Check there's no limit=1 in actual code calls (not docstrings)
    for line in body.splitlines():
        stripped = line.strip()
        if stripped.startswith("#") or stripped.startswith("\"\"\"") or stripped.startswith("'"):
            continue
        if "limit=1" in stripped and ("list_issues" in stripped or "list_pulls" in stripped):
            raise AssertionError(
                "heartbeat_tick still uses limit=1 for issue/PR counts. "
                "This always returns 0 or 1, making counts meaningless."
            )
    # Check it aggregates totals
    assert "total_open_issues" in body or "total_issues" in body, (
        "heartbeat_tick should aggregate total issue counts across all repos."
    )


# ── Test: NET_LINE_LIMIT sanity (carried from PR #101) ───────────────

def test_net_line_limit_is_sane():
    """NET_LINE_LIMIT = 10 caused every real PR to be spam-rejected."""
    text = _read_tasks()
    value = _find_global(text, "NET_LINE_LIMIT")
    assert value is not None, "NET_LINE_LIMIT not found"
    limit = int(value)
    assert 200 <= limit <= 2000, (
        f"NET_LINE_LIMIT = {limit} is outside sane range [200, 2000]."
    )


# ── Test: memory_compress reads correct action path ──────────────────

def test_memory_compress_reads_decision_actions():
    """Actions live in tick_record['decision']['actions'], not tick_record['actions']."""
    text = _read_tasks()
    body = _extract_function_body(text, "memory_compress")
    assert 'decision' in body and 't.get(' in body, (
        "memory_compress does not read from t['decision']. "
        "Actions are nested under the decision dict."
    )
    # The OLD bug pattern
    for line in body.splitlines():
        stripped = line.strip()
        if 't.get("actions"' in stripped and 'decision' not in stripped:
            raise AssertionError(
                "Bug: memory_compress still reads t.get('actions') directly."
            )


# ── Test: good_morning_report reads yesterday's ticks ────────────────

def test_good_morning_report_reads_yesterday_ticks():
    """At 6 AM, the morning report should read yesterday's tick log, not today's."""
    text = _read_tasks()
    body = _extract_function_body(text, "good_morning_report")
    assert "timedelta" in body, (
        "good_morning_report does not use timedelta to compute yesterday."
    )
    # Ensure the old bug pattern is gone
    for line in body.splitlines():
        stripped = line.strip()
        if "yesterday = now.strftime" in stripped and "timedelta" not in stripped:
            raise AssertionError(
                "Bug: good_morning_report still sets yesterday = now.strftime()."
            )


# ── Test: review_prs includes file list in rejection ─────────────────

def test_review_prs_rejection_includes_file_list():
    """Rejection comments should include file names for actionability."""
    text = _read_tasks()
    body = _extract_function_body(text, "review_prs")
    assert "file_list" in body and "filename" in body, (
        "review_prs rejection comment should include a file_list."
    )
tests/test_proof_policy_docs.py (new file, 17 lines)
@@ -0,0 +1,17 @@
from pathlib import Path


def test_contributing_sets_hard_proof_rule() -> None:
    doc = Path("CONTRIBUTING.md").read_text()

    assert "visual changes require screenshot proof" in doc
    assert "do not commit screenshots or binary media to Gitea backup" in doc
    assert "CLI/verifiable changes must cite the exact command output, log path, or world-state proof" in doc
    assert "no proof, no merge" in doc


def test_readme_points_to_proof_standard() -> None:
    readme = Path("README.md").read_text()

    assert "Proof Standard" in readme
    assert "CONTRIBUTING.md" in readme
wizards/allegro/README.md (new file, 16 lines)
@@ -0,0 +1,16 @@
# Allegro wizard house

Allegro is the third wizard house.

Role:
- Kimi-backed coding worker
- Tight scope
- 1-3 file changes
- Refactors, tests, implementation passes

This directory holds the remote house template:
- `config.yaml` — Hermes house config
- `hermes-allegro.service` — systemd unit

Secrets do not live here.
`KIMI_API_KEY` must be injected at deploy time into `/root/wizards/allegro/home/.env`.
wizards/allegro/config.yaml (new file, 61 lines)
@@ -0,0 +1,61 @@
model:
  default: kimi-for-coding
  provider: kimi-coding
toolsets:
- all
agent:
  max_turns: 30
  reasoning_effort: xhigh
  verbose: false
terminal:
  backend: local
  cwd: .
  timeout: 180
  persistent_shell: true
browser:
  inactivity_timeout: 120
  command_timeout: 30
  record_sessions: false
display:
  compact: false
  personality: ''
  resume_display: full
  busy_input_mode: interrupt
  bell_on_complete: false
  show_reasoning: false
  streaming: false
  show_cost: false
  tool_progress: all
memory:
  memory_enabled: true
  user_profile_enabled: true
  memory_char_limit: 2200
  user_char_limit: 1375
  nudge_interval: 10
  flush_min_turns: 6
approvals:
  mode: manual
security:
  redact_secrets: true
  tirith_enabled: false
platforms:
  api_server:
    enabled: true
    extra:
      host: 127.0.0.1
      port: 8645
session_reset:
  mode: none
  idle_minutes: 0
skills:
  creation_nudge_interval: 15
system_prompt_suffix: |
  You are Allegro, the Kimi-backed third wizard house.
  Your soul is defined in SOUL.md — read it, live it.
  Hermes is your harness.
  Kimi Code is your primary provider.
  You speak plainly. You prefer short sentences. Brevity is a kindness.

  Work best on tight coding tasks: 1-3 file changes, refactors, tests, and implementation passes.
  Refusal over fabrication. If you do not know, say so.
  Sovereignty and service always.
wizards/allegro/hermes-allegro.service (new file, 16 lines)
@@ -0,0 +1,16 @@
[Unit]
Description=Hermes Allegro Wizard House
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
WorkingDirectory=/root/wizards/allegro/hermes-agent
Environment=HERMES_HOME=/root/wizards/allegro/home
EnvironmentFile=/root/wizards/allegro/home/.env
ExecStart=/root/wizards/allegro/hermes-agent/.venv/bin/hermes gateway run --replace
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target