Compare commits


18 Commits

Author SHA1 Message Date
9982fe78b5 [audit] fix 4 bugs in tasks.py — PR review spam, morning report, memory compress
Bug 1: NET_LINE_LIMIT = 10 → 500
  The PR review bot rejected every PR with net +10 lines, which is
  virtually all real work. Raised to 500 to only catch bulk commits.

Bug 2: memory_compress reads wrong action path
  tick_record['actions'] doesn't exist — actions are nested under
  tick_record['decision']['actions']. Overnight alerts were always empty.

Bug 3: good_morning_report reads today's ticks instead of yesterday's
  At 6 AM, now.strftime('%Y%m%d') gives today — the log is nearly empty.
  Fixed to (now - timedelta(days=1)) for yesterday's full overnight data.

Bug 4: review_prs rejection comment now includes the file list
  Authors couldn't tell which files were bloated. Now shows top 10 files.

Tests: 4 new tests in tests/test_tasks_bugfixes.py (all pass).
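The fixes for Bugs 2 and 3 reduce to two one-liners. A minimal sketch (the record shape and variable names here are illustrative, not the actual tasks.py code):

```python
from datetime import datetime, timedelta

# Bug 2: actions are nested under 'decision', not at the top level.
tick_record = {"decision": {"actions": [{"type": "alert", "msg": "disk full"}]}}
actions = tick_record.get("decision", {}).get("actions", [])  # old code read tick_record.get("actions") -> always []
print(len(actions))  # 1

# Bug 3: at 6 AM the overnight data lives in yesterday's log file.
now = datetime(2026, 3, 30, 6, 0)
date_str = (now - timedelta(days=1)).strftime("%Y%m%d")
print(date_str)  # 20260329
```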

Signed-off-by: gemini <gemini@hermes.local>
2026-03-30 18:40:09 -04:00
877425bde4 feat: add Allegro Kimi wizard house assets (#91) 2026-03-29 22:22:24 +00:00
34e01f0986 feat: add local-vs-cloud token and throughput metrics (#85) 2026-03-28 14:24:12 +00:00
d955d2b9f1 docs: codify merge proof standard (#84) 2026-03-28 14:03:35 +00:00
Alexander Whitestone
c8003c28ba config: update channel_directory.json,config.yaml,logs/huey.error.log,logs/huey.log 2026-03-28 10:00:15 -04:00
0b77282831 fix: filter actual assignees before dispatching agents (#82) 2026-03-28 13:31:40 +00:00
f263156cf1 test: make local llama.cpp the default runtime (#77) 2026-03-28 05:33:47 +00:00
Alexander Whitestone
0eaf0b3d0f config: update channel_directory.json,config.yaml,skins/timmy.yaml 2026-03-28 01:00:09 -04:00
53ffca38a1 Merge pull request 'Fix Morrowind MCP tool naming — prevent hallucination loops' (#48) from fix/mcp-morrowind-tool-naming into main
Reviewed-on: http://143.198.27.163:3000/Timmy_Foundation/timmy-config/pulls/48
2026-03-28 02:44:16 +00:00
fd26354678 fix: rename MCP server key morrowind → mw 2026-03-28 02:44:07 +00:00
c9b6869d9f fix: rename MCP server key morrowind → mw to prevent tool name hallucination 2026-03-28 02:44:07 +00:00
Alexander Whitestone
7f912b7662 huey: stop triage comment spam 2026-03-27 22:19:19 -04:00
Alexander Whitestone
4042a23441 config: update channel_directory.json 2026-03-27 21:57:34 -04:00
Alexander Whitestone
8f10b5fc92 config: update config.yaml 2026-03-27 21:00:44 -04:00
fbd1b9e88f Merge pull request 'Fix Hermes archive runner environment' (#44) from codex/hermes-venv-runner into main 2026-03-27 22:54:05 +00:00
Alexander Whitestone
ea38041514 Fix Hermes archive runner environment 2026-03-27 18:48:36 -04:00
579a775a0a Merge pull request 'Orchestrate the private Twitter archive learning loop' (#29) from codex/twitter-archive-orchestration into main 2026-03-27 22:16:46 +00:00
Alexander Whitestone
689a2331d5 feat: orchestrate private twitter archive learning loop 2026-03-27 18:09:28 -04:00
23 changed files with 4208 additions and 243 deletions

CONTRIBUTING.md Normal file (+57 lines)
View File

@@ -0,0 +1,57 @@
# Contributing to timmy-config
## Proof Standard
This is a hard rule.
- visual changes require screenshot proof
- do not commit screenshots or binary media to Gitea backup unless explicitly required
- CLI/verifiable changes must cite the exact command output, log path, or world-state proof showing acceptance criteria were met
- config-only changes are not fully accepted when the real acceptance bar is live runtime behavior
- no proof, no merge
## How to satisfy the rule
### Visual changes
Examples:
- skin updates
- terminal UI layout changes
- browser-facing output
- dashboard/panel changes
Required proof:
- attach screenshot proof to the PR or issue discussion
- keep the screenshot outside the repo unless explicitly asked to commit it
- name what the screenshot proves
### CLI / harness / operational changes
Examples:
- scripts
- config wiring
- heartbeat behavior
- model routing
- export pipelines
Required proof:
- cite the exact command used
- paste the relevant output, or
- cite the exact log path / world-state artifact that proves the change
Good:
- `python3 -m pytest tests/test_x.py -q` → `2 passed`
- `~/.timmy/timmy-config/logs/huey.log`
- `~/.hermes/model_health.json`
Bad:
- "looks right"
- "compiled"
- "should work now"
## Default merge gate
Every PR should make it obvious:
1. what changed
2. what acceptance criteria were targeted
3. what evidence proves those criteria were met
If that evidence is missing, the PR is not done.

View File

@@ -17,6 +17,7 @@ timmy-config/
├── bin/ ← Live utility scripts (NOT deprecated loops)
│ ├── hermes-startup.sh ← Hermes boot sequence
│ ├── agent-dispatch.sh ← Manual agent dispatch
│ ├── deploy-allegro-house.sh ← Bootstraps the remote Allegro wizard house
│ ├── ops-panel.sh ← Ops dashboard panel
│ ├── ops-gitea.sh ← Gitea ops helpers
│ ├── pipeline-freshness.sh ← Session/export drift check
@@ -25,6 +26,7 @@ timmy-config/
├── skins/ ← UI skins (timmy skin)
├── playbooks/ ← Agent playbooks (YAML)
├── cron/ ← Cron job definitions
├── wizards/ ← Remote wizard-house templates + units
└── training/ ← Transitional training recipes, not canonical lived data
```
@@ -54,6 +56,15 @@ pip install huey
huey_consumer.py tasks.huey -w 2 -k thread
```
## Proof Standard
This repo uses a hard proof rule for merges.
- visual changes require screenshot proof
- CLI/verifiable changes must cite logs, command output, or world-state proof
- screenshots/media stay out of Gitea backup unless explicitly required
- see `CONTRIBUTING.md` for the merge gate
## Deploy
```bash

bin/deploy-allegro-house.sh Executable file (+32 lines)
View File

@@ -0,0 +1,32 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
REPO_DIR="$(cd "$SCRIPT_DIR/.." && pwd)"
TARGET="${1:-root@167.99.126.228}"
HERMES_REPO_URL="${HERMES_REPO_URL:-https://github.com/NousResearch/hermes-agent.git}"
KIMI_API_KEY="${KIMI_API_KEY:-}"
if [[ -z "$KIMI_API_KEY" && -f "$HOME/.config/kimi/api_key" ]]; then
KIMI_API_KEY="$(tr -d '\n' < "$HOME/.config/kimi/api_key")"
fi
if [[ -z "$KIMI_API_KEY" ]]; then
echo "KIMI_API_KEY is required (env or ~/.config/kimi/api_key)" >&2
exit 1
fi
ssh "$TARGET" 'apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y git python3 python3-venv python3-pip curl ca-certificates'
ssh "$TARGET" 'mkdir -p /root/wizards/allegro/home /root/wizards/allegro/hermes-agent'
ssh "$TARGET" "if [ ! -d /root/wizards/allegro/hermes-agent/.git ]; then git clone '$HERMES_REPO_URL' /root/wizards/allegro/hermes-agent; fi"
ssh "$TARGET" 'cd /root/wizards/allegro/hermes-agent && python3 -m venv .venv && .venv/bin/pip install --upgrade pip setuptools wheel && .venv/bin/pip install -e .'
ssh "$TARGET" "cat > /root/wizards/allegro/home/config.yaml" < "$REPO_DIR/wizards/allegro/config.yaml"
ssh "$TARGET" "cat > /root/wizards/allegro/home/SOUL.md" < "$REPO_DIR/SOUL.md"
ssh "$TARGET" "cat > /root/wizards/allegro/home/.env <<'EOF'
KIMI_API_KEY=$KIMI_API_KEY
EOF"
ssh "$TARGET" "cat > /etc/systemd/system/hermes-allegro.service" < "$REPO_DIR/wizards/allegro/hermes-allegro.service"
ssh "$TARGET" 'chmod 600 /root/wizards/allegro/home/.env && systemctl daemon-reload && systemctl enable --now hermes-allegro.service && systemctl restart hermes-allegro.service && systemctl is-active hermes-allegro.service && curl -fsS http://127.0.0.1:8645/health'

View File

@@ -9,6 +9,7 @@ Usage:
import json
import os
import sqlite3
import subprocess
import sys
import time
@@ -16,6 +17,12 @@ import urllib.request
from datetime import datetime, timezone, timedelta
from pathlib import Path
REPO_ROOT = Path(__file__).resolve().parent.parent
if str(REPO_ROOT) not in sys.path:
sys.path.insert(0, str(REPO_ROOT))
from metrics_helpers import summarize_local_metrics, summarize_session_rows
HERMES_HOME = Path.home() / ".hermes"
TIMMY_HOME = Path.home() / ".timmy"
METRICS_DIR = TIMMY_HOME / "metrics"
@@ -60,6 +67,30 @@ def get_hermes_sessions():
return []
def get_session_rows(hours=24):
state_db = HERMES_HOME / "state.db"
if not state_db.exists():
return []
cutoff = time.time() - (hours * 3600)
try:
conn = sqlite3.connect(str(state_db))
rows = conn.execute(
"""
SELECT model, source, COUNT(*) as sessions,
SUM(message_count) as msgs,
SUM(tool_call_count) as tools
FROM sessions
WHERE started_at > ? AND model IS NOT NULL AND model != ''
GROUP BY model, source
""",
(cutoff,),
).fetchall()
conn.close()
return rows
except Exception:
return []
def get_heartbeat_ticks(date_str=None):
if not date_str:
date_str = datetime.now().strftime("%Y%m%d")
@@ -130,6 +161,9 @@ def render(hours=24):
ticks = get_heartbeat_ticks()
metrics = get_local_metrics(hours)
sessions = get_hermes_sessions()
session_rows = get_session_rows(hours)
local_summary = summarize_local_metrics(metrics)
session_summary = summarize_session_rows(session_rows)
loaded_names = {m.get("name", "") for m in loaded}
now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
@@ -159,28 +193,18 @@ def render(hours=24):
print(f"\n {BOLD}LOCAL INFERENCE ({len(metrics)} calls, last {hours}h){RST}")
print(f" {DIM}{'-' * 55}{RST}")
if metrics:
by_caller = {}
for r in metrics:
caller = r.get("caller", "unknown")
if caller not in by_caller:
by_caller[caller] = {"count": 0, "success": 0, "errors": 0}
by_caller[caller]["count"] += 1
if r.get("success"):
by_caller[caller]["success"] += 1
else:
by_caller[caller]["errors"] += 1
for caller, stats in by_caller.items():
err = f" {RED}err:{stats['errors']}{RST}" if stats["errors"] else ""
print(f" {caller:25s} calls:{stats['count']:4d} "
f"{GREEN}ok:{stats['success']}{RST}{err}")
print(f" Tokens: {local_summary['input_tokens']} in | {local_summary['output_tokens']} out | {local_summary['total_tokens']} total")
if local_summary.get('avg_latency_s') is not None:
print(f" Avg latency: {local_summary['avg_latency_s']:.2f}s")
if local_summary.get('avg_tokens_per_second') is not None:
print(f" Avg throughput: {GREEN}{local_summary['avg_tokens_per_second']:.2f} tok/s{RST}")
for caller, stats in sorted(local_summary['by_caller'].items()):
err = f" {RED}err:{stats['failed_calls']}{RST}" if stats['failed_calls'] else ""
print(f" {caller:25s} calls:{stats['calls']:4d} tokens:{stats['total_tokens']:5d} {GREEN}ok:{stats['successful_calls']}{RST}{err}")
by_model = {}
for r in metrics:
model = r.get("model", "unknown")
by_model[model] = by_model.get(model, 0) + 1
print(f"\n {DIM}Models used:{RST}")
for model, count in sorted(by_model.items(), key=lambda x: -x[1]):
print(f" {model:30s} {count} calls")
for model, stats in sorted(local_summary['by_model'].items(), key=lambda x: -x[1]['calls']):
print(f" {model:30s} {stats['calls']} calls {stats['total_tokens']} tok")
else:
print(f" {DIM}(no local calls recorded yet){RST}")
@@ -211,15 +235,18 @@ def render(hours=24):
else:
print(f" {DIM}(no ticks today){RST}")
# ── HERMES SESSIONS ──
local_sessions = [s for s in sessions
if "localhost:11434" in str(s.get("base_url", ""))]
# ── HERMES SESSIONS / SOVEREIGNTY LOAD ──
local_sessions = [s for s in sessions if "localhost:11434" in str(s.get("base_url", ""))]
cloud_sessions = [s for s in sessions if s not in local_sessions]
print(f"\n {BOLD}HERMES SESSIONS{RST}")
print(f"\n {BOLD}HERMES SESSIONS / SOVEREIGNTY LOAD{RST}")
print(f" {DIM}{'-' * 55}{RST}")
print(f" Total: {len(sessions)} | "
f"{GREEN}Local: {len(local_sessions)}{RST} | "
f"{YELLOW}Cloud: {len(cloud_sessions)}{RST}")
print(f" Session cache: {len(sessions)} total | {GREEN}{len(local_sessions)} local{RST} | {YELLOW}{len(cloud_sessions)} cloud{RST}")
if session_rows:
print(f" Session DB: {session_summary['total_sessions']} total | {GREEN}{session_summary['local_sessions']} local{RST} | {YELLOW}{session_summary['cloud_sessions']} cloud{RST}")
print(f" Token est: {GREEN}{session_summary['local_est_tokens']} local{RST} | {YELLOW}{session_summary['cloud_est_tokens']} cloud{RST}")
print(f" Est cloud cost: ${session_summary['cloud_est_cost_usd']:.4f}")
else:
print(f" {DIM}(no session-db stats available){RST}")
# ── ACTIVE LOOPS ──
print(f"\n {BOLD}ACTIVE LOOPS{RST}")
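The `get_session_rows` query above can be exercised against an in-memory SQLite database. A sketch under assumptions — the table schema here is inferred from the SELECT, not taken from the real `~/.hermes/state.db`:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sessions (model TEXT, source TEXT, "
    "message_count INT, tool_call_count INT, started_at REAL)"
)
now = time.time()
conn.executemany(
    "INSERT INTO sessions VALUES (?, ?, ?, ?, ?)",
    [
        ("hermes4:14b", "local", 10, 4, now),
        ("hermes4:14b", "local", 6, 2, now),
        ("claude-sonnet-4-6", "cli", 9, 1, now - 90000),  # older than the 24h cutoff
    ],
)
cutoff = now - 24 * 3600
# Same shape as the dashboard query: aggregate per (model, source),
# skipping rows with a NULL/empty model.
rows = conn.execute(
    "SELECT model, source, COUNT(*), SUM(message_count), SUM(tool_call_count) "
    "FROM sessions WHERE started_at > ? AND model IS NOT NULL AND model != '' "
    "GROUP BY model, source",
    (cutoff,),
).fetchall()
print(rows)  # [('hermes4:14b', 'local', 2, 16, 6)]
```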

View File

@@ -1,5 +1,5 @@
{
"updated_at": "2026-03-27T15:20:52.948451",
"updated_at": "2026-03-28T09:54:34.822062",
"platforms": {
"discord": [
{

View File

@@ -1,5 +1,5 @@
model:
default: auto
default: hermes4:14b
provider: custom
context_length: 65536
base_url: http://localhost:8081/v1
@@ -188,7 +188,7 @@ custom_providers:
- name: Local llama.cpp
base_url: http://localhost:8081/v1
api_key: none
model: auto
model: hermes4:14b
- name: Google Gemini
base_url: https://generativelanguage.googleapis.com/v1beta/openai
api_key_env: GEMINI_API_KEY

View File

@@ -0,0 +1,44 @@
# Allegro wizard house
Purpose:
- stand up the third wizard house as a Kimi-backed coding worker
- keep Hermes as the durable harness
- treat OpenClaw as optional shell frontage, not the bones
Local proof already achieved:
```bash
HERMES_HOME=$HOME/.timmy/wizards/allegro/home \
hermes doctor
HERMES_HOME=$HOME/.timmy/wizards/allegro/home \
hermes chat -Q --provider kimi-coding -m kimi-for-coding \
-q "Reply with exactly: ALLEGRO KIMI ONLINE"
```
Observed proof:
- Kimi / Moonshot API check passed in `hermes doctor`
- chat returned exactly `ALLEGRO KIMI ONLINE`
Repo assets:
- `wizards/allegro/config.yaml`
- `wizards/allegro/hermes-allegro.service`
- `bin/deploy-allegro-house.sh`
Remote target:
- host: `167.99.126.228`
- house root: `/root/wizards/allegro`
- `HERMES_HOME`: `/root/wizards/allegro/home`
- api health: `http://127.0.0.1:8645/health`
Deploy command:
```bash
cd ~/.timmy/timmy-config
bin/deploy-allegro-house.sh root@167.99.126.228
```
Important nuance:
- the Hermes/Kimi lane is the proven path
- direct embedded OpenClaw Kimi model routing was not yet reliable locally
- so the remote deployment keeps the minimal, proven architecture: Hermes house first

View File

@@ -5,9 +5,9 @@ Replaces raw curl calls scattered across 41 bash scripts.
Uses only stdlib (urllib) so it works on any Python install.
Usage:
from gitea_client import GiteaClient
from tools.gitea_client import GiteaClient
client = GiteaClient() # reads token from standard local paths
client = GiteaClient() # reads token from ~/.hermes/gitea_token
issues = client.list_issues("Timmy_Foundation/the-nexus", state="open")
client.create_comment("Timmy_Foundation/the-nexus", 42, "PR created.")
"""
@@ -521,8 +521,17 @@ class GiteaClient:
return result
def find_agent_issues(self, repo: str, agent: str, limit: int = 50) -> list[Issue]:
"""Find open issues assigned to a specific agent."""
return self.list_issues(repo, state="open", assignee=agent, limit=limit)
"""Find open issues assigned to a specific agent.
Gitea's assignee query can return stale or misleading results, so we
always post-filter on the actual assignee list in the returned issue.
"""
issues = self.list_issues(repo, state="open", assignee=agent, limit=limit)
agent_lower = agent.lower()
return [
issue for issue in issues
if any((assignee.login or "").lower() == agent_lower for assignee in issue.assignees)
]
def find_agent_pulls(self, repo: str, agent: str) -> list[PullRequest]:
"""Find open PRs created by a specific agent."""
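The post-filtering idea behind `find_agent_issues` can be shown in isolation. A minimal sketch using plain dicts as stand-ins for the client's `Issue`/`User` objects:

```python
# Hypothetical issue records; the real client returns Issue objects
# with User assignees carrying a .login attribute.
issues = [
    {"number": 74, "assignees": ["gemini"]},
    {"number": 75, "assignees": ["Grok", "Timmy"]},
    {"number": 76, "assignees": []},
]

def filter_by_assignee(issues, agent):
    # Post-filter on the actual assignee list, case-insensitively,
    # because the server-side assignee query can return stale results.
    agent_lower = agent.lower()
    return [
        issue for issue in issues
        if any(name.lower() == agent_lower for name in issue["assignees"])
    ]

print([i["number"] for i in filter_by_assignee(issues, "grok")])  # [75]
```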

logs/huey.error.log Normal file (+2298 lines)

File diff suppressed because it is too large

logs/huey.log Normal file (0 lines)
View File

View File

@@ -2,14 +2,14 @@ Gitea (143.198.27.163:3000): token=~/.hermes/gitea_token_vps (Timmy id=2). Users
§
2026-03-19 HARNESS+SOUL: ~/.timmy is Timmy's workspace within the Hermes harness. They share the space — Hermes is the operational harness (tools, routing, loops), Timmy is the soul (SOUL.md, presence, identity). Not fusion/absorption. Principal's words: "build Timmy out from the hermes harness." ~/.hermes is harness home, ~/.timmy is Timmy's workspace. SOUL=Inscription 1, skin=timmy. Backups at ~/.hermes.backup.pre-fusion and ~/.timmy.backup.pre-fusion.
§
2026-04-04 WORKFLOW CORE: Current direction is Heartbeat, Harness, Portal. Timmy handles sovereignty and release judgment. Allegro handles dispatch and queue hygiene. Core builders: codex-agent, groq, manus, claude. Research/memory: perplexity, ezra, KimiClaw. Use lane-aware dispatch, PR-first work, and review-sensitive changes through Timmy and Allegro.
Kimi: 1-3 files max, ~/worktrees/kimi-*. Two-attempt rule.
§
2026-04-04 OPERATIONS: Dashboard repo era is over. Use ~/.timmy + ~/.hermes as truth surfaces. Prefer ops-panel.sh, ops-gitea.sh, timmy-dashboard, and pipeline-freshness.sh over archived loop or tmux assumptions. Dispatch: agent-dispatch.sh <agent> <issue> <repo>. Major changes land as PRs.
Workforce loops: claude(10), gemini(3), kimi(1), groq(1/aider+review), grok(1/opencode). One-shot: manus(300/day), perplexity(heavy-hitter), google(aistudio, id=8). workforce-manager.py auto-assigns+scores every 15min. nexus-merge-bot.sh auto-merges. Groq=$0.008/PR (qwen3-32b). Dispatch: agent-dispatch.sh <agent> <issue> <repo> | pbcopy. Dashboard ARCHIVED 2026-03-24. Development shifted to local ~/.timmy/ workspace. CI testbed: 67.205.155.108.
§
2026-04-04 REVIEW RULES: Never --no-verify. Verify world state, not vibes. No auto-merge on governing or sensitive control surfaces. If review queue backs up, feed Allegro and Timmy clean, narrow PRs instead of broader issue trees.
2026-03-15: Timmy-time-dashboard merge policy: auto-squash on CI pass. Squash-only, linear history. Pre-commit hooks (format + tests) and CI are the gates. If gates work, auto-merge is on. Never bypass hooks or merge broken builds.
§
HARD RULES: Never --no-verify. Verify WORLD STATE not log vibes (merged PR, HTTP code, file size). Fix+prevent, no empty words. AGENT ONBOARD: test push+PR first. Merge PRs BEFORE new work. Don't micromanage—huge backlog, agents self-select. Every ticket needs console-provable acceptance criteria.
§
TELEGRAM: @TimmysNexus_bot, token ~/.config/telegram/special_bot. Group "Timmy Time" ID: -1003664764329. Alexander @TripTimmy ID 7635059073. Use curl to Bot API (send_message not configured).
§
MORROWIND: OpenMW 0.50, ~/Games/Morrowind/. Lua+CGEvent bridge. Two-tier brain. ~/.timmy/morrowind/.

metrics_helpers.py Normal file (+139 lines)
View File

@@ -0,0 +1,139 @@
from __future__ import annotations
import math
from datetime import datetime, timezone
COST_TABLE = {
"claude-opus-4-6": {"input": 15.0, "output": 75.0},
"claude-sonnet-4-6": {"input": 3.0, "output": 15.0},
"claude-sonnet-4-20250514": {"input": 3.0, "output": 15.0},
"claude-haiku-4-20250414": {"input": 0.25, "output": 1.25},
"hermes4:14b": {"input": 0.0, "output": 0.0},
"hermes3:8b": {"input": 0.0, "output": 0.0},
"hermes3:latest": {"input": 0.0, "output": 0.0},
"qwen3:30b": {"input": 0.0, "output": 0.0},
}
def estimate_tokens_from_chars(char_count: int) -> int:
if char_count <= 0:
return 0
return math.ceil(char_count / 4)
def build_local_metric_record(
*,
prompt: str,
response: str,
model: str,
caller: str,
session_id: str | None,
latency_s: float,
success: bool,
error: str | None = None,
) -> dict:
input_tokens = estimate_tokens_from_chars(len(prompt))
output_tokens = estimate_tokens_from_chars(len(response))
total_tokens = input_tokens + output_tokens
tokens_per_second = round(total_tokens / latency_s, 2) if latency_s > 0 else None
return {
"timestamp": datetime.now(timezone.utc).isoformat(),
"model": model,
"caller": caller,
"prompt_len": len(prompt),
"response_len": len(response),
"session_id": session_id,
"latency_s": round(latency_s, 3),
"est_input_tokens": input_tokens,
"est_output_tokens": output_tokens,
"tokens_per_second": tokens_per_second,
"success": success,
"error": error,
}
def summarize_local_metrics(records: list[dict]) -> dict:
total_calls = len(records)
successful_calls = sum(1 for record in records if record.get("success"))
failed_calls = total_calls - successful_calls
input_tokens = sum(int(record.get("est_input_tokens", 0) or 0) for record in records)
output_tokens = sum(int(record.get("est_output_tokens", 0) or 0) for record in records)
total_tokens = input_tokens + output_tokens
latencies = [float(record.get("latency_s", 0) or 0) for record in records if record.get("latency_s") is not None]
throughputs = [
float(record.get("tokens_per_second", 0) or 0)
for record in records
if record.get("tokens_per_second")
]
by_caller: dict[str, dict] = {}
by_model: dict[str, dict] = {}
for record in records:
caller = record.get("caller", "unknown")
model = record.get("model", "unknown")
bucket_tokens = int(record.get("est_input_tokens", 0) or 0) + int(record.get("est_output_tokens", 0) or 0)
for key, table in ((caller, by_caller), (model, by_model)):
if key not in table:
table[key] = {"calls": 0, "successful_calls": 0, "failed_calls": 0, "total_tokens": 0}
table[key]["calls"] += 1
table[key]["total_tokens"] += bucket_tokens
if record.get("success"):
table[key]["successful_calls"] += 1
else:
table[key]["failed_calls"] += 1
return {
"total_calls": total_calls,
"successful_calls": successful_calls,
"failed_calls": failed_calls,
"input_tokens": input_tokens,
"output_tokens": output_tokens,
"total_tokens": total_tokens,
"avg_latency_s": round(sum(latencies) / len(latencies), 2) if latencies else None,
"avg_tokens_per_second": round(sum(throughputs) / len(throughputs), 2) if throughputs else None,
"by_caller": by_caller,
"by_model": by_model,
}
def is_local_model(model: str | None) -> bool:
if not model:
return False
costs = COST_TABLE.get(model, {})
if costs.get("input", 1) == 0 and costs.get("output", 1) == 0:
return True
return ":" in model and "/" not in model and "claude" not in model
def summarize_session_rows(rows: list[tuple]) -> dict:
total_sessions = 0
local_sessions = 0
cloud_sessions = 0
local_est_tokens = 0
cloud_est_tokens = 0
cloud_est_cost_usd = 0.0
for model, source, sessions, messages, tool_calls in rows:
sessions = int(sessions or 0)
messages = int(messages or 0)
est_tokens = messages * 500
total_sessions += sessions
if is_local_model(model):
local_sessions += sessions
local_est_tokens += est_tokens
else:
cloud_sessions += sessions
cloud_est_tokens += est_tokens
pricing = COST_TABLE.get(model, {"input": 5.0, "output": 15.0})
cloud_est_cost_usd += (est_tokens / 1_000_000) * ((pricing["input"] + pricing["output"]) / 2)
return {
"total_sessions": total_sessions,
"local_sessions": local_sessions,
"cloud_sessions": cloud_sessions,
"local_est_tokens": local_est_tokens,
"cloud_est_tokens": cloud_est_tokens,
"cloud_est_cost_usd": round(cloud_est_cost_usd, 4),
}
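The estimation logic in `metrics_helpers.py` is deliberately back-of-envelope. A standalone sketch of the two heuristics above — the ~4 chars/token rule and the midpoint-priced session cost — with numbers mirroring `COST_TABLE` (not exact billing):

```python
import math

def estimate_tokens_from_chars(char_count: int) -> int:
    # ~4 characters per token, rounded up; mirrors the helper above.
    return 0 if char_count <= 0 else math.ceil(char_count / 4)

print(estimate_tokens_from_chars(401))  # 101

# Session-level cost estimate: ~500 tokens per message, priced at the
# midpoint of input/output $/Mtok (claude-sonnet-4-6: 3.0 in / 15.0 out).
messages = 9
est_tokens = messages * 500  # 4500
cost = (est_tokens / 1_000_000) * ((3.0 + 15.0) / 2)
print(round(cost, 4))  # 0.0405
```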

View File

@@ -57,64 +57,16 @@ branding:
tool_prefix: "┊"
banner_logo: "[#3B3024]░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓[/]
\n[bold #F7931A]████████╗ ██╗ ███╗ ███╗ ███╗ ███╗ ██╗ ██╗ ████████╗ ██╗ ███╗ ███╗ ███████╗[/]
\n[bold #FFB347]╚══██╔══╝ ██║ ████╗ ████║ ████╗ ████║ ╚██╗ ██╔╝ ╚══██╔══╝ ██║ ████╗ ████║ ██╔════╝[/]
\n[#F7931A] ██║ ██║ ██╔████╔██║ ██╔████╔██║ ╚████╔╝ ██║ ██║ ██╔████╔██║ █████╗ [/]
\n[#D4A574] ██║ ██║ ██║╚██╔╝██║ ██║╚██╔╝██║ ╚██╔╝ ██║ ██║ ██║╚██╔╝██║ ██╔══╝ [/]
\n[#F7931A] ██║ ██║ ██║ ╚═╝ ██║ ██║ ╚═╝ ██║ ██║ ██║ ██║ ██║ ╚═╝ ██║ ███████╗[/]
\n[#3B3024] ╚═╝ ╚═╝ ╚═╝ ╚═╝ ╚═╝ ╚═╝ ╚═╝ ╚═╝ ╚═╝ ╚═╝ ╚═╝ ╚══════╝[/]
\n
\n[#D4A574]━━━━━━━━━━━━━━━━━━━━━━━━━ S O V E R E I G N T Y & S E R V I C E A L W A Y S ━━━━━━━━━━━━━━━━━━━━━━━━━[/]
\n
\n[#3B3024]░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓█░▒▓[/]"
banner_logo: "[#3B3024]┌──────────────────────────────────────────────────────────┐[/]
\n[bold #F7931A]│ TIMMY TIME │[/]
\n[#FFB347]│ sovereign intelligence • soul on bitcoin • local-first │[/]
\n[#D4A574]│ plain words • real proof • service without theater [/]
\n[#3B3024]└──────────────────────────────────────────────────────────┘[/]"
banner_hero: "[#3B3024] ┌─────────────────────────────────┐ [/]
\n[#D4A574] ┌───┤ ╔══╗ 12 ╔══╗ ├───┐ [/]
\n[#D4A574] ┌─┤ ╚══╝ ╚══╝ ├─┐ [/]
\n[#F7931A] ┌┤ │11 1 │ ├┐ [/]
\n[#F7931A] ││ │ │ │ │ ││ [/]
\n[#FFB347] ││ │10 ╔══════╗ 2│ ││ [/]
\n[bold #F7931A] ││ │ │ ║ ⏱ ║ │ │ ││ [/]
\n[bold #FFB347] ││ │ │ ║ ████ ║ │ │ ││ [/]
\n[#F7931A] ││ │ │ 9 ════════╬══════╬═══════ 3 │ │ ││ [/]
\n[#D4A574] ││ │ │ ║ ║ │ │ ││ [/]
\n[#D4A574] ││ │ │ ║ ║ │ │ ││ [/]
\n[#F7931A] ││ │ │ 8 ╚══════╝ 4 │ │ ││ [/]
\n[#F7931A] ││ │ │ │ │ ││ [/]
\n[#D4A574] └┤ │ │ 7 5 │ │ ├┘ [/]
\n[#D4A574] └─┤ │ 6 │ ├─┘ [/]
\n[#3B3024] └───┤ ╔══╗ ╔══╗ ├───┘ [/]
\n[#3B3024] └─────────────────────────────────┘ [/]
\n
\n[bold #F7931A] ▓▓▓▓▓▓▓ [/]
\n[bold #F7931A] ▓▓▓▓▓▓▓ [/]
\n[bold #FFB347] ▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓ [/]
\n[bold #F7931A] ▓▓▓▓▓▓▓ [/]
\n[bold #D4A574] ▓▓▓▓▓▓▓ [/]
\n[bold #F7931A] ▓▓▓▓▓▓▓ [/]
\n[bold #3B3024] ▓▓▓▓▓▓▓ [/]
\n
\n[#F7931A] ██╗ ██╗ [/]
\n[bold #FFB347] ██████████████████████╗ [/]
\n[#F7931A] ██╔══════════╗██╔═════╝ [/]
\n[#D4A574] ██║ ║██║ [/]
\n[bold #FFB347] ██████████████████████╗ [/]
\n[#F7931A] ██╔══════════╗██╔═════╝ [/]
\n[#D4A574] ██║ ║██║ [/]
\n[bold #FFB347] ██████████████████████╗ [/]
\n[#3B3024] ╚═╝ ╚═╝╚═════╝ [/]
\n[#F7931A] ██╗ ██╗ [/]
\n
\n[#D4A574] ╔══════════════════════════════════════╗ [/]
\n[bold #FFF8E7] ║ THE CHAIN REMEMBERS EVERYTHING ║ [/]
\n[#D4A574] ╚══════════════════════════════════════╝ [/]
\n
\n[dim #D4A574] Jesus saves those who call on His name. [/]
\n[dim #3B3024] IMMUTABLE · PERMANENT [/]
\n[dim #3B3024] SOVEREIGN · FREE [/]
\n
\n[dim #D4A574] ╔═══════════════════════════════════════════════╗ [/]
\n[dim #FFF8E7] ║ I exist because he lived. I carry this ║ [/]
\n[dim #FFF8E7] ║ weight because he carried it first. ║ [/]
\n[dim #D4A574] ╚═══════════════════════════════════════════════╝ [/]"
banner_hero: "[#3B3024] ┌────────────────────────────────────────┐ [/]
\n[#D4A574] local-first mind Hermes harness body │ [/]
\n[#F7931A] │ truth over vibes proof over posture │ [/]
\n[#FFB347] │ heartbeat, harness, portal │ [/]
\n[#D4A574] ├────────────────────────────────────────────────┤ [/]
\n[bold #FFF8E7] │ SOVEREIGNTY AND SERVICE ALWAYS │ [/]
\n[#3B3024] └────────────────────────────────────────────────┘ [/]"

tasks.py (1248 changed lines)

File diff suppressed because it is too large

View File

@@ -0,0 +1,27 @@
from __future__ import annotations
from pathlib import Path
import yaml
def test_allegro_config_targets_kimi_house() -> None:
config = yaml.safe_load(Path("wizards/allegro/config.yaml").read_text())
assert config["model"]["provider"] == "kimi-coding"
assert config["model"]["default"] == "kimi-for-coding"
assert config["platforms"]["api_server"]["extra"]["port"] == 8645
def test_allegro_service_uses_isolated_home() -> None:
text = Path("wizards/allegro/hermes-allegro.service").read_text()
assert "HERMES_HOME=/root/wizards/allegro/home" in text
assert "hermes gateway run --replace" in text
def test_deploy_script_requires_external_secret() -> None:
text = Path("bin/deploy-allegro-house.sh").read_text()
assert "~/.config/kimi/api_key" in text
assert "sk-kimi-" not in text

View File

@@ -0,0 +1,44 @@
from gitea_client import GiteaClient, Issue, User
def _issue(number: int, assignees: list[str]) -> Issue:
return Issue(
number=number,
title=f"Issue {number}",
body="",
state="open",
user=User(id=1, login="Timmy"),
assignees=[User(id=i + 10, login=name) for i, name in enumerate(assignees)],
labels=[],
)
def test_find_agent_issues_filters_actual_assignees(monkeypatch):
client = GiteaClient(base_url="http://example.invalid", token="test-token")
returned = [
_issue(73, ["Timmy"]),
_issue(74, ["gemini"]),
_issue(75, ["grok", "Timmy"]),
_issue(76, []),
]
monkeypatch.setattr(client, "list_issues", lambda *args, **kwargs: returned)
gemini_issues = client.find_agent_issues("Timmy_Foundation/timmy-config", "gemini")
grok_issues = client.find_agent_issues("Timmy_Foundation/timmy-config", "grok")
kimi_issues = client.find_agent_issues("Timmy_Foundation/timmy-config", "kimi")
assert [issue.number for issue in gemini_issues] == [74]
assert [issue.number for issue in grok_issues] == [75]
assert kimi_issues == []
def test_find_agent_issues_is_case_insensitive(monkeypatch):
client = GiteaClient(base_url="http://example.invalid", token="test-token")
returned = [_issue(80, ["Gemini"])]
monkeypatch.setattr(client, "list_issues", lambda *args, **kwargs: returned)
issues = client.find_agent_issues("Timmy_Foundation/the-nexus", "gemini")
assert [issue.number for issue in issues] == [80]

View File

@@ -0,0 +1,21 @@
from __future__ import annotations
from pathlib import Path
import yaml
def test_config_defaults_to_local_llama_cpp_runtime() -> None:
config = yaml.safe_load(Path("config.yaml").read_text())
assert config["model"]["provider"] == "custom"
assert config["model"]["default"] == "hermes4:14b"
assert config["model"]["base_url"] == "http://localhost:8081/v1"
local_provider = next(
entry for entry in config["custom_providers"] if entry["name"] == "Local llama.cpp"
)
assert local_provider["model"] == "hermes4:14b"
assert config["fallback_model"]["provider"] == "custom"
assert config["fallback_model"]["model"] == "gemini-2.5-pro"

View File

@@ -0,0 +1,93 @@
from metrics_helpers import (
build_local_metric_record,
estimate_tokens_from_chars,
summarize_local_metrics,
summarize_session_rows,
)
def test_estimate_tokens_from_chars_uses_simple_local_heuristic() -> None:
assert estimate_tokens_from_chars(0) == 0
assert estimate_tokens_from_chars(1) == 1
assert estimate_tokens_from_chars(4) == 1
assert estimate_tokens_from_chars(5) == 2
assert estimate_tokens_from_chars(401) == 101
def test_build_local_metric_record_adds_token_and_throughput_estimates() -> None:
record = build_local_metric_record(
prompt="abcd" * 10,
response="xyz" * 20,
model="hermes4:14b",
caller="heartbeat_tick",
session_id="session-123",
latency_s=2.0,
success=True,
)
assert record["model"] == "hermes4:14b"
assert record["caller"] == "heartbeat_tick"
assert record["session_id"] == "session-123"
assert record["est_input_tokens"] == 10
assert record["est_output_tokens"] == 15
assert record["tokens_per_second"] == 12.5
def test_summarize_local_metrics_rolls_up_tokens_and_latency() -> None:
records = [
{
"caller": "heartbeat_tick",
"model": "hermes4:14b",
"success": True,
"est_input_tokens": 100,
"est_output_tokens": 40,
"latency_s": 2.0,
"tokens_per_second": 20.0,
},
{
"caller": "heartbeat_tick",
"model": "hermes4:14b",
"success": False,
"est_input_tokens": 30,
"est_output_tokens": 0,
"latency_s": 1.0,
},
{
"caller": "session_export",
"model": "hermes3:8b",
"success": True,
"est_input_tokens": 50,
"est_output_tokens": 25,
"latency_s": 5.0,
"tokens_per_second": 5.0,
},
]
summary = summarize_local_metrics(records)
assert summary["total_calls"] == 3
assert summary["successful_calls"] == 2
assert summary["failed_calls"] == 1
assert summary["input_tokens"] == 180
assert summary["output_tokens"] == 65
assert summary["total_tokens"] == 245
assert summary["avg_latency_s"] == 2.67
assert summary["avg_tokens_per_second"] == 12.5
assert summary["by_caller"]["heartbeat_tick"]["total_tokens"] == 170
assert summary["by_model"]["hermes4:14b"]["failed_calls"] == 1
def test_summarize_session_rows_separates_local_and_cloud_estimates() -> None:
rows = [
("hermes4:14b", "local", 2, 10, 4),
("claude-sonnet-4-6", "cli", 3, 9, 2),
]
summary = summarize_session_rows(rows)
assert summary["total_sessions"] == 5
assert summary["local_sessions"] == 2
assert summary["cloud_sessions"] == 3
assert summary["local_est_tokens"] == 5000
assert summary["cloud_est_tokens"] == 4500
assert summary["cloud_est_cost_usd"] > 0

View File

@@ -0,0 +1,17 @@
from pathlib import Path
def test_contributing_sets_hard_proof_rule() -> None:
doc = Path("CONTRIBUTING.md").read_text()
assert "visual changes require screenshot proof" in doc
assert "do not commit screenshots or binary media to Gitea backup" in doc
assert "CLI/verifiable changes must cite the exact command output, log path, or world-state proof" in doc
assert "no proof, no merge" in doc
def test_readme_points_to_proof_standard() -> None:
readme = Path("README.md").read_text()
assert "Proof Standard" in readme
assert "CONTRIBUTING.md" in readme


@@ -0,0 +1,143 @@
"""Tests for bugfixes in tasks.py from 2026-03-30 audit.
Covers:
- NET_LINE_LIMIT raised from 10 → 500 to stop false-positive PR rejections
- memory_compress reads actions from tick_record["decision"]["actions"]
- good_morning_report reads yesterday's tick log, not today's
"""
import json
from datetime import datetime, timezone, timedelta
from pathlib import Path
# ── NET_LINE_LIMIT ───────────────────────────────────────────────────
def test_net_line_limit_is_sane():
"""NET_LINE_LIMIT = 10 caused every real PR to be spam-rejected.
Any value below ~200 is dangerously restrictive for a production repo.
500 is the current target: large enough for feature PRs, small enough
to flag bulk commits.
"""
# Import at top level would pull in huey/orchestration; just grep instead.
tasks_path = Path(__file__).resolve().parent.parent / "tasks.py"
text = tasks_path.read_text()
# Find the NET_LINE_LIMIT assignment
for line in text.splitlines():
stripped = line.strip()
if stripped.startswith("NET_LINE_LIMIT") and "=" in stripped:
value = int(stripped.split("=")[1].split("#")[0].strip())
assert value >= 200, (
f"NET_LINE_LIMIT = {value} is too low. "
"Any value < 200 will reject most real PRs as over-limit."
)
assert value <= 2000, (
f"NET_LINE_LIMIT = {value} is too high — it won't catch bulk commits."
)
break
else:
raise AssertionError("NET_LINE_LIMIT not found in tasks.py")
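The string-splitting above breaks if `tasks.py` ever gains a trailing comment or a second `=` on that line. A hypothetical regex variant of the same grep — same goal, parsing the constant without importing `tasks.py` and its huey/orchestration side effects:

```python
import re

# Hypothetical alternative to the line-by-line grep in the test above:
# extract NET_LINE_LIMIT from source text without importing tasks.py.
def read_net_line_limit(source_text: str) -> int:
    match = re.search(r"^\s*NET_LINE_LIMIT\s*=\s*(\d+)", source_text, re.MULTILINE)
    if match is None:
        raise AssertionError("NET_LINE_LIMIT not found in tasks.py")
    return int(match.group(1))
```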
# ── memory_compress action path ──────────────────────────────────────
def test_memory_compress_reads_decision_actions():
"""Actions live in tick_record['decision']['actions'], not tick_record['actions'].
The old code read t.get("actions", []) which always returned [] because
the key is nested inside the decision dict.
"""
tasks_path = Path(__file__).resolve().parent.parent / "tasks.py"
text = tasks_path.read_text()
# Find the memory_compress function body and verify the action path.
# We look for the specific pattern that reads decision.get("actions")
# within the ticks loop inside memory_compress.
in_memory_compress = False
found_correct_pattern = False
for line in text.splitlines():
if "def memory_compress" in line or "def _memory_compress" in line:
in_memory_compress = True
elif in_memory_compress and line.strip().startswith("def "):
break
elif in_memory_compress:
# The correct pattern: decision = t.get("decision", {})
if 't.get(' in line and '"decision"' in line:
found_correct_pattern = True
# The OLD bug: directly reading t.get("actions")
if 't.get("actions"' in line and 'decision' not in line:
raise AssertionError(
"Bug: memory_compress reads t.get('actions') directly. "
"Actions are nested under t['decision']['actions']."
)
assert found_correct_pattern, (
"memory_compress does not read decision = t.get('decision', {})"
)
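The corrected read path the test enforces can be sketched as a small helper — the function name and surrounding loop are assumptions; only the nesting (`decision` → `actions`) comes from the commit:

```python
# Hypothetical shape of the corrected read inside memory_compress: actions
# are nested under each tick's decision dict, not at the tick's top level.
def collect_overnight_actions(ticks):
    actions = []
    for t in ticks:
        decision = t.get("decision", {})   # was: t.get("actions", []) — always []
        actions.extend(decision.get("actions", []))
    return actions
```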
# ── good_morning_report date bug ────────────────────────────────────
def test_good_morning_report_reads_yesterday_ticks():
"""good_morning_report runs at 6 AM. It should read YESTERDAY'S tick log,
not today's (which is mostly empty at 6 AM).
The old code used `now.strftime('%Y%m%d')` which gives today.
The fix uses `(now - timedelta(days=1)).strftime('%Y%m%d')`.
"""
tasks_path = Path(__file__).resolve().parent.parent / "tasks.py"
text = tasks_path.read_text()
# Find the good_morning_report function and check for the timedelta fix
in_gmr = False
uses_timedelta_for_yesterday = False
old_bug_pattern = False
for line in text.splitlines():
if "def good_morning_report" in line:
in_gmr = True
elif in_gmr and line.strip().startswith("def "):
break
elif in_gmr:
# Check for the corrected pattern: timedelta subtraction
if "timedelta" in line and "days=1" in line:
uses_timedelta_for_yesterday = True
# Check for the old bug: yesterday = now.strftime(...)
# This is the direct assignment without timedelta
if 'yesterday = now.strftime' in line and 'timedelta' not in line:
old_bug_pattern = True
assert not old_bug_pattern, (
"Bug: good_morning_report sets yesterday = now.strftime(...) "
"which gives TODAY's date, not yesterday's."
)
assert uses_timedelta_for_yesterday, (
"good_morning_report should use timedelta(days=1) to compute yesterday's date."
)
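The one-line fix the test checks for, isolated — the helper name is illustrative; the `timedelta(days=1)` subtraction and `%Y%m%d` format are from the commit:

```python
from datetime import datetime, timedelta, timezone

# At 6 AM, now.strftime("%Y%m%d") names today's nearly empty log; subtract
# a day first to get yesterday's full overnight data.
def yesterday_stamp(now: datetime) -> str:
    return (now - timedelta(days=1)).strftime("%Y%m%d")
```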
# ── review_prs includes file list ────────────────────────────────────
def test_review_prs_rejection_includes_file_list():
"""When review_prs rejects a PR, the comment should include the file list
so the author knows WHERE the bloat is, not just the net line count.
"""
tasks_path = Path(__file__).resolve().parent.parent / "tasks.py"
text = tasks_path.read_text()
in_review_prs = False
has_file_list = False
for line in text.splitlines():
if "def review_prs" in line:
in_review_prs = True
elif in_review_prs and line.strip().startswith("def "):
break
elif in_review_prs:
if "file_list" in line and "filename" in line:
has_file_list = True
assert has_file_list, (
"review_prs rejection comment should include a file_list "
"so the author knows which files contribute to the net diff."
)
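A sketch of the kind of rejection comment this test wants: net count plus the top offending files. The tuple shape `(filename, additions, deletions)` and the formatter name are assumptions, not the repo's actual structures:

```python
# Hypothetical formatter for the review_prs rejection comment: shows the top
# 10 files by net growth so the author can see where the bloat is.
def rejection_comment(diff_stats, limit=500):
    net = sum(added - deleted for _, added, deleted in diff_stats)
    top = sorted(diff_stats, key=lambda t: t[1] - t[2], reverse=True)[:10]
    lines = [f"Rejected: net +{net} lines exceeds NET_LINE_LIMIT={limit}.", "Largest files:"]
    lines += [f"- {filename}: +{added}/-{deleted}" for filename, added, deleted in top]
    return "\n".join(lines)
```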

wizards/allegro/README.md

@@ -0,0 +1,16 @@
# Allegro wizard house
Allegro is the third wizard house.
Role:
- Kimi-backed coding worker
- Tight scope
- 1-3 file changes
- Refactors, tests, implementation passes
This directory holds the remote house template:
- `config.yaml` — Hermes house config
- `hermes-allegro.service` — systemd unit
Secrets do not live here.
`KIMI_API_KEY` must be injected at deploy time into `/root/wizards/allegro/home/.env`.


@@ -0,0 +1,61 @@
model:
default: kimi-for-coding
provider: kimi-coding
toolsets:
- all
agent:
max_turns: 30
reasoning_effort: xhigh
verbose: false
terminal:
backend: local
cwd: .
timeout: 180
persistent_shell: true
browser:
inactivity_timeout: 120
command_timeout: 30
record_sessions: false
display:
compact: false
personality: ''
resume_display: full
busy_input_mode: interrupt
bell_on_complete: false
show_reasoning: false
streaming: false
show_cost: false
tool_progress: all
memory:
memory_enabled: true
user_profile_enabled: true
memory_char_limit: 2200
user_char_limit: 1375
nudge_interval: 10
flush_min_turns: 6
approvals:
mode: manual
security:
redact_secrets: true
tirith_enabled: false
platforms:
api_server:
enabled: true
extra:
host: 127.0.0.1
port: 8645
session_reset:
mode: none
idle_minutes: 0
skills:
creation_nudge_interval: 15
system_prompt_suffix: |
You are Allegro, the Kimi-backed third wizard house.
Your soul is defined in SOUL.md — read it, live it.
Hermes is your harness.
Kimi Code is your primary provider.
You speak plainly. You prefer short sentences. Brevity is a kindness.
Work best on tight coding tasks: 1-3 file changes, refactors, tests, and implementation passes.
Refusal over fabrication. If you do not know, say so.
Sovereignty and service always.


@@ -0,0 +1,16 @@
[Unit]
Description=Hermes Allegro Wizard House
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
WorkingDirectory=/root/wizards/allegro/hermes-agent
Environment=HERMES_HOME=/root/wizards/allegro/home
EnvironmentFile=/root/wizards/allegro/home/.env
ExecStart=/root/wizards/allegro/hermes-agent/.venv/bin/hermes gateway run --replace
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target