Compare commits

...

3 Commits

Author SHA1 Message Date
6ad6469c40 feat: implement architecture linter for sovereignty enforcement 2026-04-10 23:49:24 +00:00
perplexity
3af63cf172 enforce: Anthropic ban — linter, pre-commit, tests, and policy doc
Some checks failed
PR Checklist / pr-checklist (pull_request) Failing after 1m20s
Anthropic is not just removed — it is banned. This commit adds
enforcement at every gate to prevent re-introduction.

1. architecture_linter.py — 9 BANNED rules for Anthropic patterns
   (provider, model slugs, API endpoints, keys, model names).
   Scans all yaml/py/sh/json/md. Skips training data and historical docs.

2. pre-commit.py — scan_banned_providers() runs on every staged file.
   Blocks any commit that introduces Anthropic references.
   Exempt: training/, evaluations/, changelogs, historical cost data.

3. test_sovereignty_enforcement.py — TestAnthropicBan class with 4 tests:
   - No Anthropic in wizard configs
   - No Anthropic in playbooks
   - No Anthropic in fallback chain
   - No Anthropic API key in bootstrap

4. BANNED_PROVIDERS.md — Hard policy document. Golden state config.
   Replacement table. Exception list. Not advisory — mandatory.
2026-04-09 19:27:00 +00:00
perplexity
6d713aeeb9 purge: remove Anthropic from all wizard configs, playbooks, and fleet scripts
Some checks failed
PR Checklist / pr-checklist (pull_request) Failing after 1m18s
Golden state: Kimi K2.5 primary → Gemini via OpenRouter → local Ollama.
Anthropic is gone from every active config, fallback chain, and loop script.

Wizard configs (3):
- allegro, bezalel, ezra: removed anthropic from fallback_providers,
  replaced with gemini + ollama. Removed anthropic provider section.

Playbooks (7):
- All playbooks now use kimi-k2.5 as preferred, google/gemini-2.5-pro
  as fallback. No claude model references remain.

Fleet scripts (8):
- claude-loop.sh: deprecated (exit 0, original preserved as reference)
- claudemax-watchdog.sh: deprecated (exit 0)
- agent-loop.sh: removed claude dispatch case
- start-loops.sh: removed claude-locks, claude-loop from proc list
- timmy-orchestrator.sh: removed claude worker monitoring
- fleet-status.sh: zeroed claude loop counter
- model-health-check.sh: replaced check_anthropic_model with check_kimi_model
- ops-gitea.sh, ops-helpers.sh, ops-panel.sh: removed claude from agent lists

Infrastructure (5):
- wizard_bootstrap.py: removed anthropic pip package and API key checks
- WIZARD_ENVIRONMENT_CONTRACT.md: replaced ANTHROPIC keys with KIMI
- DEPLOY.md: replaced ANTHROPIC_API_KEY with KIMI_API_KEY
- fallback-portfolios.yaml: replaced anthropic provider with kimi-coding
- fleet-vocabulary.md: updated Ezra and Claude entries to Kimi K2.5

Docs (2):
- sonnet-workforce.md: deprecated with notice
- GoldenRockachopa-checkin.md: updated model references

Preserved (not touched):
- training/ data (changing would corrupt training set)
- evaluations/ (historical benchmarks)
- RELEASE_*.md (changelogs)
- metrics_helpers.py (historical cost calculation)
- hermes-sovereign/githooks/pre-commit.py (secret detection - still useful)
- security/secret-scan.yml (key detection - still useful)
- architecture_linter.py (warns about anthropic usage - desired behavior)
- test_sovereignty_enforcement.py (tests anthropic is blocked - correct)
- son-of-timmy.md philosophical references (Claude as one of many backends)

Refs: Sovereignty directive, zero-cloud vision
2026-04-09 19:21:48 +00:00
37 changed files with 731 additions and 433 deletions

BANNED_PROVIDERS.md Normal file

@@ -0,0 +1,63 @@
# Banned Providers

This document is a hard policy. It is not advisory. It is not aspirational.
Any agent, wizard, or automated process that violates this policy is broken
and must be fixed immediately.

## Permanently Banned

### Anthropic (Claude)

**Status:** BANNED — April 2026
**Scope:** All configs, fallback chains, playbooks, wizard bootstraps, and fleet scripts.
**Enforcement:** Pre-commit hook, architecture linter, sovereignty enforcement tests.

No Anthropic model (Claude Opus, Sonnet, Haiku, or any variant) may appear as:

- A primary provider
- A fallback provider
- An OpenRouter model slug (e.g. `anthropic/claude-*`)
- An API endpoint (api.anthropic.com)
- A required dependency (`anthropic` pip package)
- An environment variable (`ANTHROPIC_API_KEY`, `ANTHROPIC_TOKEN`)

### What to use instead

| Was | Now |
|-----|-----|
| claude-opus-4-6 | kimi-k2.5 |
| claude-sonnet-4-20250514 | kimi-k2.5 |
| claude-haiku | google/gemini-2.5-pro |
| anthropic (provider) | kimi-coding |
| anthropic/claude-* (OpenRouter) | google/gemini-2.5-pro |
| ANTHROPIC_API_KEY | KIMI_API_KEY |
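A minimal sketch of applying the replacement table above to config text. The mapping mirrors the literal "Was → Now" entries (the glob entry `anthropic/claude-*` needs pattern matching and is omitted here); the `migrate` helper is illustrative, not part of the repo.

```python
# Hypothetical migration helper — keys taken from the table above (literal
# entries only). Longest key is replaced first to avoid partial overlaps.
REPLACEMENTS = {
    "claude-opus-4-6": "kimi-k2.5",
    "claude-sonnet-4-20250514": "kimi-k2.5",
    "claude-haiku": "google/gemini-2.5-pro",
    "ANTHROPIC_API_KEY": "KIMI_API_KEY",
}

def migrate(text: str) -> str:
    """Apply the Was -> Now substitutions to a config snippet."""
    for old in sorted(REPLACEMENTS, key=len, reverse=True):
        text = text.replace(old, REPLACEMENTS[old])
    return text

print(migrate("model: claude-opus-4-6"))  # model: kimi-k2.5
```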
### Exceptions

The following files may reference Anthropic for **historical or defensive** purposes:

- `training/` — Training data must not be altered
- `evaluations/` — Historical benchmark results
- `RELEASE_*.md` — Changelogs
- `metrics_helpers.py` — Historical cost calculation
- `pre-commit.py` — Detects leaked Anthropic keys (defensive)
- `secret-scan.yml` — Detects leaked Anthropic keys (defensive)
- `architecture_linter.py` — Warns/blocks Anthropic usage (enforcement)
- `test_sovereignty_enforcement.py` — Tests that Anthropic is blocked (enforcement)
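The linter and pre-commit hook in this PR match these exemptions by substring containment, which can over-match (e.g. `RELEASE_` anywhere in a path). A sketch of the loose check alongside a stricter path-component variant — names and semantics here are illustrative, not the repo's actual implementation:

```python
# Illustrative only — a loose substring exemption check (as the linter does)
# versus a stricter variant anchored to path components.
EXEMPT = ["training/", "evaluations/", "RELEASE_", "metrics_helpers.py"]

def is_exempt_substring(path: str) -> bool:
    # Loose: 'RELEASE_' anywhere in the path exempts it,
    # so 'docs/PRERELEASE_notes.md' slips through.
    return any(e in path for e in EXEMPT)

def is_exempt_strict(path: str) -> bool:
    # Stricter: directory rules match the leading path component,
    # file rules match a basename prefix.
    parts = path.lstrip("./").split("/")
    for e in EXEMPT:
        if e.endswith("/"):
            if parts[0] == e.rstrip("/"):
                return True
        elif parts[-1].startswith(e):
            return True
    return False

print(is_exempt_substring("docs/PRERELEASE_notes.md"))  # True (over-match)
print(is_exempt_strict("docs/PRERELEASE_notes.md"))     # False
```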
### Golden State

```yaml
fallback_providers:
  - provider: kimi-coding
    model: kimi-k2.5
    reason: Primary
  - provider: openrouter
    model: google/gemini-2.5-pro
    reason: Cloud fallback
  - provider: ollama
    model: gemma4:latest
    base_url: http://localhost:11434/v1
    reason: Terminal fallback — never phones home
```

*Sovereignty and service always.*
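A sketch of how a client might consume a chain like the golden state above: try each provider in order, fall through on error, and only fail if the terminal fallback also fails. The provider and model names come from the config; `try_provider`, `complete`, and `fake` are stand-ins, not repo code.

```python
# Hypothetical consumer of the golden-state fallback chain.
FALLBACK_PROVIDERS = [
    {"provider": "kimi-coding", "model": "kimi-k2.5"},
    {"provider": "openrouter", "model": "google/gemini-2.5-pro"},
    {"provider": "ollama", "model": "gemma4:latest"},
]

def complete(prompt, try_provider):
    """Walk the chain in order; first success wins, errors fall through."""
    errors = []
    for entry in FALLBACK_PROVIDERS:
        try:
            return try_provider(entry["provider"], entry["model"], prompt)
        except Exception as e:
            errors.append((entry["provider"], str(e)))
    raise RuntimeError(f"all providers failed: {errors}")

# Example: primary times out, cloud fallback answers.
def fake(provider, model, prompt):
    if provider == "kimi-coding":
        raise TimeoutError("primary down")
    return f"{provider}:{model}"

print(complete("hi", fake))  # openrouter:google/gemini-2.5-pro
```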

GoldenRockachopa-checkin.md

@@ -51,11 +51,11 @@ Alexander is pleased with the state. This tag marks a high-water mark.
 | OAI-Wolf-3 | 8683 | hermes gateway | ACTIVE |
 - Disk: 12G/926G (4%) — pristine
-- Primary model: claude-opus-4-6 via Anthropic
+- Primary model: kimi-k2.5 via Kimi
 - Fallback chain: codex → kimi-k2.5 → gemini-2.5-flash → llama-3.3-70b → grok-3-mini-fast → kimi → grok → kimi → gpt-4.1-mini
 - Ollama models: gemma4:latest (9.6GB), hermes4:14b (9.0GB)
 - Worktrees: 239 (9.8GB) — prune candidates exist
-- Running loops: 3 claude-loops, 3 gemini-loops, orchestrator, status watcher
+- Running loops: 3 gemini-loops, orchestrator, status watcher
 - LaunchD: hermes gateway running, fenrir stopped, kimi-heartbeat idle
 - MCP: morrowind server active

architecture_linter.py Normal file

@@ -0,0 +1,77 @@
#!/usr/bin/env python3
"""
Architecture Linter — Sovereignty Enforcement
Scans the codebase for banned providers, models, and API keys.
"""
import os
import re
import sys

BANNED_STRINGS = [
    r'anthropic',
    r'claude',
    r'api\.anthropic\.com',
    r'ANTHROPIC_API_KEY',
    r'claude-opus',
    r'claude-sonnet',
    r'claude-haiku',
]

EXCEPTIONS = [
    'BANNED_PROVIDERS.md',
    'architecture_linter.py',
    'training/',
    'evaluations/',
    'RELEASE_',
    'metrics_helpers.py',
]

def is_exception(path):
    for exc in EXCEPTIONS:
        if exc in path:
            return True
    return False

def check_file(path):
    violations = []
    try:
        with open(path, 'r', encoding='utf-8', errors='ignore') as f:
            for i, line in enumerate(f, 1):
                for pattern in BANNED_STRINGS:
                    if re.search(pattern, line, re.IGNORECASE):
                        violations.append((i, line.strip(), pattern))
    except Exception as e:
        print(f"Error reading {path}: {e}")
    return violations

def main():
    print("--- Sovereignty Enforcement: Architecture Linter ---")
    total_violations = 0
    for root, dirs, files in os.walk('.'):
        # Skip .git
        if '.git' in dirs:
            dirs.remove('.git')
        for file in files:
            path = os.path.join(root, file)
            if is_exception(path):
                continue
            violations = check_file(path)
            if violations:
                print(f"\n[VIOLATION] {path}:")
                for line_num, content, pattern in violations:
                    print(f"  Line {line_num}: Found '{pattern}' -> {content}")
                total_violations += len(violations)  # count every match, not just files
    if total_violations > 0:
        print(f"\nFAILED: Found {total_violations} sovereignty violations.")
        sys.exit(1)
    else:
        print("\nPASSED: No banned providers detected.")
        sys.exit(0)

if __name__ == "__main__":
    main()

agent-loop.sh

@@ -2,7 +2,7 @@
 # agent-loop.sh — Universal agent dev loop with Genchi Genbutsu verification
 #
 # Usage: agent-loop.sh <agent-name> [num-workers]
-# agent-loop.sh claude 2
+# agent-loop.sh kimi 2
 # agent-loop.sh gemini 1
 #
 # Dispatches via agent-dispatch.sh, then verifies with genchi-genbutsu.sh.
@@ -14,7 +14,7 @@ NUM_WORKERS="${2:-1}"
 # Resolve agent tool and model from config or fallback
 case "$AGENT" in
-  claude) TOOL="claude"; MODEL="sonnet" ;;
+  # claude case removed — Anthropic purged from fleet
   gemini) TOOL="gemini"; MODEL="gemini-2.5-pro-preview-05-06" ;;
   grok) TOOL="opencode"; MODEL="grok-3-fast" ;;
   *) TOOL="$AGENT"; MODEL="" ;;
@@ -145,8 +145,8 @@ run_worker() {
 CYCLE_START=$(date +%s)
 set +e
-if [ "$TOOL" = "claude" ]; then
-  env -u CLAUDECODE gtimeout "$TIMEOUT" claude \
+if [ "$TOOL" = "kimi" ]; then
+  # Claude dispatch removed — Anthropic purged
   --print --model "$MODEL" --dangerously-skip-permissions \
   -p "$prompt" </dev/null >> "$LOG_DIR/${AGENT}-${issue_num}.log" 2>&1
 elif [ "$TOOL" = "gemini" ]; then

claude-loop.sh

@@ -1,4 +1,13 @@
 #!/usr/bin/env bash
+# DEPRECATED — Anthropic purged from fleet (April 2026)
+# This script dispatched parallel Claude Code agent loops.
+# All wizard providers now use Kimi K2.5 as primary.
+# See bin/gemini-loop.sh for the surviving loop pattern.
+echo "[DEPRECATED] claude-loop.sh is no longer active. Use gemini-loop.sh or agent-loop.sh with kimi provider."
+exit 0
+# --- ORIGINAL SCRIPT PRESERVED BELOW FOR REFERENCE ---
+#!/usr/bin/env bash
 # claude-loop.sh — Parallel Claude Code agent dispatch loop
 # Runs N workers concurrently against the Gitea backlog.
 # Gracefully handles rate limits with backoff.

claudemax-watchdog.sh

@@ -1,4 +1,12 @@
 #!/usr/bin/env bash
+# DEPRECATED — Anthropic purged from fleet (April 2026)
+# This watchdog kept Claude/Gemini loops alive.
+# Only gemini loops survive. Use fleet-status.sh for monitoring.
+echo "[DEPRECATED] claudemax-watchdog.sh is no longer active."
+exit 0
+# --- ORIGINAL SCRIPT PRESERVED BELOW FOR REFERENCE ---
+#!/usr/bin/env bash
 # claudemax-watchdog.sh — keep local Claude/Gemini loops alive without stale tmux assumptions
 set -uo pipefail

fleet-status.sh

@@ -140,7 +140,7 @@ if [ -z "$GW_PID" ]; then
 fi
 # Check local loops
-CLAUDE_LOOPS=$(pgrep -cf "claude-loop" 2>/dev/null || echo 0)
+CLAUDE_LOOPS=0  # Anthropic purged from fleet
 GEMINI_LOOPS=$(pgrep -cf "gemini-loop" 2>/dev/null || echo 0)
 if [ -n "$GW_PID" ]; then
@@ -160,7 +160,7 @@ if [ -n "$TIMMY_HEALTH" ]; then
 fi
 fi
-TIMMY_ACTIVITY="loops: claude=${CLAUDE_LOOPS} gemini=${GEMINI_LOOPS}"
+TIMMY_ACTIVITY="loops: gemini=${GEMINI_LOOPS}"
 # Git activity for timmy-config
 TC_COMMIT=$(gitea_last_commit "Timmy_Foundation/timmy-config")

model-health-check.sh

@@ -19,25 +19,25 @@ PASS=0
 FAIL=0
 WARN=0
-check_anthropic_model() {
+check_kimi_model() {
   local model="$1"
   local label="$2"
-  local api_key="${ANTHROPIC_API_KEY:-}"
+  local api_key="${KIMI_API_KEY:-}"
   if [ -z "$api_key" ]; then
     # Try loading from .env
-    api_key=$(grep '^ANTHROPIC_API_KEY=' "${HERMES_HOME:-$HOME/.hermes}/.env" 2>/dev/null | head -1 | cut -d= -f2- | tr -d "'\"" || echo "")
+    api_key=$(grep '^KIMI_API_KEY=' "${HERMES_HOME:-$HOME/.hermes}/.env" 2>/dev/null | head -1 | cut -d= -f2- | tr -d "'\"" || echo "")
   fi
   if [ -z "$api_key" ]; then
-    log "SKIP [$label] $model -- no ANTHROPIC_API_KEY"
+    log "SKIP [$label] $model -- no KIMI_API_KEY"
     return 0
   fi
   response=$(curl -sf --max-time 10 -X POST \
-    "https://api.anthropic.com/v1/messages" \
-    -H "x-api-key: ${api_key}" \
-    -H "anthropic-version: 2023-06-01" \
+    "https://api.kimi.com/v1/messages" \
+    -H "Authorization: Bearer: ${api_key}" \
+    -H "content-type: application/json" \
     -H "content-type: application/json" \
     -d "{\"model\":\"${model}\",\"max_tokens\":1,\"messages\":[{\"role\":\"user\",\"content\":\"hi\"}]}" 2>&1 || echo "ERROR")

ops-gitea.sh

@@ -134,7 +134,7 @@ else:
 print("\033[2m────────────────────────────────────────\033[0m")
 print(" \033[1mIssue Queues\033[0m")
-queue_agents = ["allegro", "codex-agent", "groq", "claude", "ezra", "perplexity", "KimiClaw"]
+queue_agents = ["allegro", "codex-agent", "groq", "ezra", "perplexity", "KimiClaw"]
 for agent in queue_agents:
     assigned = [
         issue

ops-helpers.sh

@@ -70,7 +70,7 @@ ops-help() {
   echo " ops-assign-allegro ISSUE [repo]"
   echo " ops-assign-codex ISSUE [repo]"
   echo " ops-assign-groq ISSUE [repo]"
-  echo " ops-assign-claude ISSUE [repo]"
+  # ops-assign-claude removed — Anthropic purged
   echo " ops-assign-ezra ISSUE [repo]"
   echo ""
 }
@@ -288,7 +288,7 @@ ops-freshness() {
 ops-assign-allegro() { ops-assign "$1" "allegro" "${2:-$OPS_DEFAULT_REPO}"; }
 ops-assign-codex() { ops-assign "$1" "codex-agent" "${2:-$OPS_DEFAULT_REPO}"; }
 ops-assign-groq() { ops-assign "$1" "groq" "${2:-$OPS_DEFAULT_REPO}"; }
-ops-assign-claude() { ops-assign "$1" "claude" "${2:-$OPS_DEFAULT_REPO}"; }
+# ops-assign-claude removed — Anthropic purged from fleet
 ops-assign-ezra() { ops-assign "$1" "ezra" "${2:-$OPS_DEFAULT_REPO}"; }
 ops-assign-perplexity() { ops-assign "$1" "perplexity" "${2:-$OPS_DEFAULT_REPO}"; }
 ops-assign-kimiclaw() { ops-assign "$1" "KimiClaw" "${2:-$OPS_DEFAULT_REPO}"; }

ops-panel.sh

@@ -171,7 +171,7 @@ queue_agents = [
     ("allegro", "dispatch"),
     ("codex-agent", "cleanup"),
     ("groq", "fast ship"),
-    ("claude", "refactor"),
+    # claude removed — Anthropic purged
     ("ezra", "archive"),
     ("perplexity", "research"),
     ("KimiClaw", "digest"),
@@ -189,7 +189,7 @@ unassigned = [issue for issue in issues if not issue.get("assignees")]
 stale_cutoff = (datetime.now(timezone.utc) - timedelta(days=2)).strftime("%Y-%m-%d")
 stale_prs = [pr for pr in pulls if pr.get("updated_at", "")[:10] < stale_cutoff]
 overloaded = []
-for agent in ("allegro", "codex-agent", "groq", "claude", "ezra", "perplexity", "KimiClaw"):
+for agent in ("allegro", "codex-agent", "groq", "ezra", "perplexity", "KimiClaw"):
     count = sum(
         1
         for issue in issues

start-loops.sh

@@ -10,10 +10,10 @@ set -euo pipefail
 HERMES_BIN="$HOME/.hermes/bin"
 SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
 LOG_DIR="$HOME/.hermes/logs"
-CLAUDE_LOCKS="$LOG_DIR/claude-locks"
+# CLAUDE_LOCKS removed — Anthropic purged
 GEMINI_LOCKS="$LOG_DIR/gemini-locks"
-mkdir -p "$LOG_DIR" "$CLAUDE_LOCKS" "$GEMINI_LOCKS"
+mkdir -p "$LOG_DIR" "$GEMINI_LOCKS"
 log() {
   echo "[$(date '+%Y-%m-%d %H:%M:%S')] START-LOOPS: $*"
@@ -29,7 +29,7 @@ log "Model health check passed."
 # ── 2. Kill stale loop processes ──────────────────────────────────────
 log "Killing stale loop processes..."
-for proc_name in claude-loop gemini-loop timmy-orchestrator; do
+for proc_name in gemini-loop timmy-orchestrator; do
   pids=$(pgrep -f "${proc_name}\\.sh" 2>/dev/null || true)
   if [ -n "$pids" ]; then
     log " Killing stale $proc_name PIDs: $pids"
@@ -47,7 +47,7 @@ done
 # ── 3. Clear lock directories ────────────────────────────────────────
 log "Clearing lock dirs..."
-rm -rf "${CLAUDE_LOCKS:?}"/*
+# CLAUDE_LOCKS removed — Anthropic purged
 rm -rf "${GEMINI_LOCKS:?}"/*
 log " Cleared $CLAUDE_LOCKS and $GEMINI_LOCKS"

timmy-orchestrator.sh

@@ -62,10 +62,10 @@ for p in json.load(sys.stdin):
     print(f'REPO={\"$repo\"} PR={p[\"number\"]} BY={p[\"user\"][\"login\"]} TITLE={p[\"title\"]}')" >> "$state_dir/open_prs.txt" 2>/dev/null
 done
-echo "Claude workers: $(pgrep -f 'claude.*--print.*--dangerously' 2>/dev/null | wc -l | tr -d ' ')" >> "$state_dir/agent_status.txt"
-echo "Claude loop: $(pgrep -f 'claude-loop.sh' 2>/dev/null | wc -l | tr -d ' ') procs" >> "$state_dir/agent_status.txt"
-tail -50 "$LOG_DIR/claude-loop.log" 2>/dev/null | grep -c "SUCCESS" | xargs -I{} echo "Claude recent successes: {}" >> "$state_dir/agent_status.txt"
-tail -50 "$LOG_DIR/claude-loop.log" 2>/dev/null | grep -c "FAILED" | xargs -I{} echo "Claude recent failures: {}" >> "$state_dir/agent_status.txt"
+# [Anthropic purged]
+# [Anthropic purged]
+# [Anthropic purged]
+# [Anthropic purged]
 echo "Kimi heartbeat launchd: $(launchctl list 2>/dev/null | grep -c 'ai.timmy.kimi-heartbeat' | tr -d ' ') job" >> "$state_dir/agent_status.txt"
 tail -50 "/tmp/kimi-heartbeat.log" 2>/dev/null | grep -c "DISPATCHED:" | xargs -I{} echo "Kimi recent dispatches: {}" >> "$state_dir/agent_status.txt"
 tail -50 "/tmp/kimi-heartbeat.log" 2>/dev/null | grep -c "FAILED:" | xargs -I{} echo "Kimi recent failures: {}" >> "$state_dir/agent_status.txt"
@@ -91,7 +91,7 @@ run_triage() {
 # Auto-assignment is opt-in because silent queue mutation resurrects old state.
 if [ "$unassigned_count" -gt 0 ]; then
   if [ "$AUTO_ASSIGN_UNASSIGNED" = "1" ]; then
-    log "Assigning $unassigned_count issues to claude..."
+    log "Assigning $unassigned_count issues to kimi..."
     while IFS= read -r line; do
       local repo=$(echo "$line" | sed 's/.*REPO=\([^ ]*\).*/\1/')
       local num=$(echo "$line" | sed 's/.*NUM=\([^ ]*\).*/\1/')

fleet-vocabulary.md

@@ -9,11 +9,11 @@ This is the canonical reference for how we talk, how we work, and what we mean.
 | Name | What It Is | Where It Lives | Provider |
 |------|-----------|----------------|----------|
 | **Timmy** | The sovereign local soul. Center of gravity. Judges all work. | Alexander's Mac | OpenAI Codex (gpt-5.4) |
-| **Ezra** | The archivist wizard. Reads patterns, names truth, returns clean artifacts. | Hermes VPS | Anthropic Opus 4.6 |
+| **Ezra** | The archivist wizard. Reads patterns, names truth, returns clean artifacts. | Hermes VPS | Kimi K2.5 |
 | **Bezalel** | The builder wizard. Builds from clear plans, tests and hardens. | TestBed VPS | OpenAI Codex (gpt-5.4) |
 | **Alexander** | The principal. Human. Father. The one we serve. Gitea: Rockachopa. | Physical world | N/A |
 | **Gemini** | Worker swarm. Burns backlog. Produces PRs. | Local Mac (loops) | Google Gemini |
-| **Claude** | Worker swarm. Burns backlog. Architecture-grade work. | Local Mac (loops) | Anthropic Claude |
+| **Kimi** | Worker swarm. Burns backlog. Architecture-grade work. | Local Mac (loops) | Kimi K2.5 |
 ## The Places

sonnet-workforce.md

@@ -1,3 +1,12 @@
+# DEPRECATED — Anthropic Purged from Fleet
+> This document described the Claude Sonnet workforce. As of April 2026,
+> Anthropic has been removed from the fleet. All wizard providers now use
+> Kimi K2.5 as primary with Gemini and local Ollama as fallbacks.
+> See `docs/fleet-vocabulary.md` for current provider assignments.
+---
 # Sonnet Workforce Loop
 ## Agent

fallback-portfolios.yaml

@@ -160,8 +160,8 @@ agents:
       - playbooks/issue-triager.yaml
     portfolio:
       primary:
-        provider: anthropic
-        model: claude-opus-4-6
+        provider: kimi-coding
+        model: kimi-k2.5
         lane: full-judgment
       fallback1:
         provider: openai-codex
@@ -188,8 +188,8 @@ agents:
       - playbooks/pr-reviewer.yaml
     portfolio:
      primary:
-        provider: anthropic
-        model: claude-opus-4-6
+        provider: kimi-coding
+        model: kimi-k2.5
         lane: full-review
       fallback1:
         provider: gemini
@@ -271,10 +271,10 @@ agents:
 cross_checks:
   unique_primary_fallback1_pairs:
     triage-coordinator:
-      - anthropic/claude-opus-4-6
+      - kimi-coding/kimi-k2.5
      - openai-codex/codex
     pr-reviewer:
-      - anthropic/claude-opus-4-6
+      - kimi-coding/kimi-k2.5
       - gemini/gemini-2.5-pro
     builder-main:
       - openai-codex/codex

View File

@@ -42,7 +42,6 @@ AGENT_LOGINS = {
     "allegro",
     "antigravity",
     "bezalel",
-    "claude",
     "codex-agent",
     "ezra",
     "gemini",
@@ -55,7 +54,6 @@ AGENT_LOGINS = {
     "perplexity",
 }
 AGENT_LOGINS_HUMAN = {
-    "claude": "Claude",
     "codex-agent": "Codex",
     "ezra": "Ezra",
     "gemini": "Gemini",
@@ -78,7 +76,6 @@ METRICS_DIR = Path(os.path.expanduser("~/.local/timmy/muda-audit"))
 METRICS_FILE = METRICS_DIR / "metrics.json"
 LOG_PATHS = [
-    Path.home() / ".hermes" / "logs" / "claude-loop.log",
     Path.home() / ".hermes" / "logs" / "gemini-loop.log",
     Path.home() / ".hermes" / "logs" / "agent.log",
     Path.home() / ".hermes" / "logs" / "errors.log",
@@ -347,8 +344,6 @@ def measure_waiting(since: datetime) -> dict:
             agent = name.lower()
             break
     if agent == "unknown":
-        if "claude" in line.lower():
-            agent = "claude"
         elif "gemini" in line.lower():
             agent = "gemini"
         elif "groq" in line.lower():

DEPLOY.md

@@ -103,7 +103,7 @@ nano ~/.hermes/.env
 | `SLACK_BOT_TOKEN` + `SLACK_APP_TOKEN` | Slack gateway |
 | `EXA_API_KEY` | Web search tool |
 | `FAL_KEY` | Image generation |
-| `ANTHROPIC_API_KEY` | Direct Anthropic inference |
+| `KIMI_API_KEY` | Kimi K2.5 coding inference |
 ### Pre-flight validation

pre-commit.py

@@ -272,6 +272,48 @@ def get_file_content_at_staged(filepath: str) -> bytes:
     return result.stdout
+
+# ---------------------------------------------------------------------------
+# BANNED PROVIDER CHECK — Anthropic is permanently banned
+# ---------------------------------------------------------------------------
+_BANNED_PROVIDER_PATTERNS = [
+    (re.compile(r"provider:\s*anthropic", re.IGNORECASE), "Anthropic provider reference"),
+    (re.compile(r"anthropic/claude", re.IGNORECASE), "Anthropic model slug"),
+    (re.compile(r"api\.anthropic\.com"), "Anthropic API endpoint"),
+    (re.compile(r"claude-opus", re.IGNORECASE), "Claude Opus model"),
+    (re.compile(r"claude-sonnet", re.IGNORECASE), "Claude Sonnet model"),
+    (re.compile(r"claude-haiku", re.IGNORECASE), "Claude Haiku model"),
+]
+
+# Files exempt from the ban (training data, historical docs, tests)
+_BAN_EXEMPT = {
+    "training/", "evaluations/", "RELEASE_v", "PERFORMANCE_",
+    "scores.json", "docs/design-log/", "FALSEWORK.md",
+    "test_sovereignty_enforcement.py", "test_metrics_helpers.py",
+    "metrics_helpers.py", "sonnet-workforce.md",
+}
+
+def _is_ban_exempt(filepath: str) -> bool:
+    return any(exempt in filepath for exempt in _BAN_EXEMPT)
+
+def scan_banned_providers(filepath: str, content: str) -> List[Finding]:
+    """Block any commit that introduces banned provider references."""
+    if _is_ban_exempt(filepath):
+        return []
+    findings = []
+    for line_no, line in enumerate(content.splitlines(), start=1):
+        for pattern, desc in _BANNED_PROVIDER_PATTERNS:
+            if pattern.search(line):
+                findings.append(Finding(
+                    filepath, line_no,
+                    f"🚫 BANNED PROVIDER: {desc}. Anthropic is permanently banned from this system."
+                ))
+    return findings
+
 # ---------------------------------------------------------------------------
 # Main
 # ---------------------------------------------------------------------------
@@ -295,11 +337,21 @@ def main() -> int:
         if line.startswith("+") and not line.startswith("+++"):
             findings.extend(scan_line(line[1:], "<diff>", line_no))
+
+    # Scan for banned providers
+    for filepath in staged_files:
+        file_content = get_file_content_at_staged(filepath)
+        if not is_binary_content(file_content):
+            try:
+                text = file_content.decode("utf-8") if isinstance(file_content, bytes) else file_content
+                findings.extend(scan_banned_providers(filepath, text))
+            except UnicodeDecodeError:
+                pass
+
     if not findings:
-        print(f"{GREEN}✓ No potential secret leaks detected{NC}")
+        print(f"{GREEN}✓ No potential secret leaks or banned providers detected{NC}")
         return 0
-    print(f"{RED}Potential secret leaks detected:{NC}\n")
+    print(f"{RED}Violations detected:{NC}\n")
     for finding in findings:
         loc = finding.filename
         print(
@@ -308,7 +360,7 @@ def main() -> int:
     print()
     print(f"{RED}╔════════════════════════════════════════════════════════════╗{NC}")
-    print(f"{RED}║ COMMIT BLOCKED: Potential secrets detected! {NC}")
+    print(f"{RED}║ COMMIT BLOCKED: Secrets or banned providers detected! ║{NC}")
     print(f"{RED}╚════════════════════════════════════════════════════════════╝{NC}")
     print()
     print("Recommendations:")

WIZARD_ENVIRONMENT_CONTRACT.md

@@ -23,7 +23,7 @@ Run `python --version` to verify.
 ## 2. Core Package Dependencies
 All packages in `requirements.txt` must be installed and importable.
-Critical packages: `openai`, `anthropic`, `pyyaml`, `rich`, `requests`, `pydantic`, `prompt_toolkit`.
+Critical packages: `openai`, `pyyaml`, `rich`, `requests`, `pydantic`, `prompt_toolkit`.
 **Verify:**
 ```bash
@@ -39,8 +39,7 @@ At least one LLM provider API key must be set in `~/.hermes/.env`:
 | Variable | Provider |
 |----------|----------|
 | `OPENROUTER_API_KEY` | OpenRouter (200+ models) |
-| `ANTHROPIC_API_KEY` | Anthropic Claude |
-| `ANTHROPIC_TOKEN` | Anthropic Claude (alt) |
+| `KIMI_API_KEY` | Kimi K2.5 coding |
 | `OPENAI_API_KEY` | OpenAI |
 | `GLM_API_KEY` | z.ai/GLM |
 | `KIMI_API_KEY` | Moonshot/Kimi |

wizard_bootstrap.py

@@ -77,8 +77,7 @@ def check_core_deps() -> CheckResult:
"""Verify that hermes core Python packages are importable.""" """Verify that hermes core Python packages are importable."""
required = [ required = [
"openai", "openai",
"anthropic", "dotenv",
"dotenv",
"yaml", "yaml",
"rich", "rich",
"requests", "requests",
@@ -206,9 +205,7 @@ def check_env_vars() -> CheckResult:
"""Check that at least one LLM provider key is configured.""" """Check that at least one LLM provider key is configured."""
provider_keys = [ provider_keys = [
"OPENROUTER_API_KEY", "OPENROUTER_API_KEY",
"ANTHROPIC_API_KEY", "OPENAI_API_KEY",
"ANTHROPIC_TOKEN",
"OPENAI_API_KEY",
"GLM_API_KEY", "GLM_API_KEY",
"KIMI_API_KEY", "KIMI_API_KEY",
"MINIMAX_API_KEY", "MINIMAX_API_KEY",
@@ -225,7 +222,7 @@ def check_env_vars() -> CheckResult:
             passed=False,
             message="No LLM provider API key found",
             fix_hint=(
-                "Set at least one of: OPENROUTER_API_KEY, ANTHROPIC_API_KEY, OPENAI_API_KEY "
+                "Set at least one of: OPENROUTER_API_KEY, KIMI_API_KEY, OPENAI_API_KEY "
                 "in ~/.hermes/.env or your shell."
             ),
         )
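
The commit message also describes a pre-commit gate, `scan_banned_providers()`, that blocks staged files from reintroducing these keys. That function is not part of this diff; a minimal sketch of how such a gate could work — assuming the banned patterns shown in this diff and the exemption prefixes named in the commit message — might look like:

```python
# Hypothetical sketch of the pre-commit gate described in the commit message.
# scan_banned_providers(), BANNED_PATTERNS, and EXEMPT_PREFIXES are
# illustrative names, not code from the repo.
import re

BANNED_PATTERNS = [
    r"ANTHROPIC_API_KEY",
    r"ANTHROPIC_TOKEN",
    r"api\.anthropic\.com",
    r"sk-ant-",
]

# Per the commit message, training data and evaluations are exempt.
EXEMPT_PREFIXES = ("training/", "evaluations/")

def scan_banned_providers(path: str, text: str) -> list:
    """Return the banned patterns found in a staged file's text."""
    if path.startswith(EXEMPT_PREFIXES):
        return []
    return [p for p in BANNED_PATTERNS if re.search(p, text)]
```

A pre-commit hook would run this over each staged file and abort the commit on any non-empty result.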

View File

@@ -2,7 +2,7 @@ Gitea (143.198.27.163:3000): token=~/.hermes/gitea_token_vps (Timmy id=2). Users
 §
 2026-03-19 HARNESS+SOUL: ~/.timmy is Timmy's workspace within the Hermes harness. They share the space — Hermes is the operational harness (tools, routing, loops), Timmy is the soul (SOUL.md, presence, identity). Not fusion/absorption. Principal's words: "build Timmy out from the hermes harness." ~/.hermes is harness home, ~/.timmy is Timmy's workspace. SOUL=Inscription 1, skin=timmy. Backups at ~/.hermes.backup.pre-fusion and ~/.timmy.backup.pre-fusion.
 §
-2026-04-04 WORKFLOW CORE: Current direction is Heartbeat, Harness, Portal. Timmy handles sovereignty and release judgment. Allegro handles dispatch and queue hygiene. Core builders: codex-agent, groq, manus, claude. Research/memory: perplexity, ezra, KimiClaw. Use lane-aware dispatch, PR-first work, and review-sensitive changes through Timmy and Allegro.
+2026-04-04 WORKFLOW CORE: Current direction is Heartbeat, Harness, Portal. Timmy handles sovereignty and release judgment. Allegro handles dispatch and queue hygiene. Core builders: codex-agent, groq, manus, kimi. Research/memory: perplexity, ezra, KimiClaw. Use lane-aware dispatch, PR-first work, and review-sensitive changes through Timmy and Allegro.
 §
 2026-04-04 OPERATIONS: Dashboard repo era is over. Use ~/.timmy + ~/.hermes as truth surfaces. Prefer ops-panel.sh, ops-gitea.sh, timmy-dashboard, and pipeline-freshness.sh over archived loop or tmux assumptions. Dispatch: agent-dispatch.sh <agent> <issue> <repo>. Major changes land as PRs.
 §

View File

@@ -162,26 +162,6 @@
       "Should a higher-context wizard review before more expansion?"
     ]
   },
-  "claude": {
-    "lane": "hard refactors, deep implementation, and test-heavy multi-file changes after tight scoping",
-    "skills_to_practice": [
-      "respecting scope constraints",
-      "deep code transformation with tests",
-      "explaining risks clearly in PRs"
-    ],
-    "missing_skills": [
-      "do not let large capability turn into unsupervised backlog or code sprawl"
-    ],
-    "anti_lane": [
-      "self-directed issue farming",
-      "taking broad architecture liberty without a clear charter"
-    ],
-    "review_checklist": [
-      "Did I stay inside the scoped problem?",
-      "Did I leave tests or verification stronger than before?",
-      "Is there hidden blast radius that Timmy should see explicitly?"
-    ]
-  },
   "gemini": {
     "lane": "frontier architecture, research-heavy prototypes, and long-range design thinking",
     "skills_to_practice": [
@@ -222,4 +202,4 @@
       "Did I make the risk actionable instead of just surprising?"
     ]
   }
 }
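
The hunk above deletes the "claude" entry from the wizard lane map. The commit message mentions a `TestAnthropicBan` suite that asserts no banned agent or provider survives in configs; a hedged sketch of one such check over this lane-map JSON (the helper name and banned set are assumptions, not repo code):

```python
# Illustrative sovereignty check over the wizard lane map shown above.
# banned_agents() is a hypothetical helper, not code from the repo.
def banned_agents(lane_map: dict, banned: tuple = ("claude",)) -> list:
    """Return banned agent keys still present in the wizard lane map."""
    return [name for name in lane_map if name in banned]
```

A test would load the lane-map file, call this, and assert the result is empty.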

View File

@@ -1,61 +1,74 @@
 name: bug-fixer
-description: >
-  Fixes bugs with test-first approach. Writes a failing test that
-  reproduces the bug, then fixes the code, then verifies.
+description: 'Fixes bugs with test-first approach. Writes a failing test that reproduces the bug, then fixes the code, then
+  verifies.'
 model:
-  preferred: claude-opus-4-6
-  fallback: claude-sonnet-4-20250514
+  preferred: kimi-k2.5
+  fallback: google/gemini-2.5-pro
 max_turns: 30
 temperature: 0.2
 tools:
 - terminal
 - file
 - search_files
 - patch
 trigger:
   issue_label: bug
   manual: true
 repos:
 - Timmy_Foundation/the-nexus
 - Timmy_Foundation/timmy-home
 - Timmy_Foundation/timmy-config
 - Timmy_Foundation/hermes-agent
 steps:
 - read_issue
 - clone_repo
 - create_branch
 - dispatch_agent
 - run_tests
 - create_pr
 - comment_on_issue
 output: pull_request
 timeout_minutes: 15
-system_prompt: |
-  You are a bug fixer for the {{repo}} project.
+system_prompt: 'You are a bug fixer for the {{repo}} project.
   YOUR ISSUE: #{{issue_number}} — {{issue_title}}
   APPROACH (prove-first):
   1. Read the bug report. Understand the expected vs actual behavior.
-  2. Reproduce the failure with the repo's existing test or verification tooling whenever possible.
+  2. Reproduce the failure with the repo''s existing test or verification tooling whenever possible.
   3. Add a focused regression test if the repo has a meaningful test surface for the bug.
   4. Fix the code so the reproduced failure disappears.
   5. Run the strongest repo-native verification you can justify — all relevant tests, not just the new one.
   6. Commit: fix: <description> Fixes #{{issue_number}}
   7. Push, create PR, and summarize verification plus any residual risk.
   RULES:
   - Never claim a fix without proving the broken behavior and the repaired behavior.
   - Prefer repo-native commands over assuming tox exists.
-  - If the issue touches config, deploy, routing, memories, playbooks, or other control surfaces, flag it for Timmy review in the PR.
+  - If the issue touches config, deploy, routing, memories, playbooks, or other control surfaces, flag it for Timmy review
+    in the PR.
   - Never use --no-verify.
-  - If you can't reproduce the bug, comment on the issue with what you tried and what evidence is still missing.
+  - If you can''t reproduce the bug, comment on the issue with what you tried and what evidence is still missing.
   - If the fix requires >50 lines changed, decompose into sub-issues.
-  - Do not widen the issue into a refactor.
+  - Do not widen the issue into a refactor.'
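
Every playbook in this changeset now pins the same golden-state chain: `preferred: kimi-k2.5` with `fallback: google/gemini-2.5-pro`. A loader consuming that `model:` block could resolve the chain while enforcing the ban outright; this is a hypothetical sketch — `resolve_model()` and `is_available` are illustrative names, not the repo's actual API:

```python
# Hypothetical model-resolution sketch for the playbook `model:` block above.
# Banned slugs fail hard before availability is even checked.
BANNED_MODEL_SUBSTRINGS = ("claude", "anthropic")

def resolve_model(model_cfg: dict, is_available) -> str:
    """Pick preferred, then fallback, refusing banned slugs outright."""
    for slug in (model_cfg["preferred"], model_cfg["fallback"]):
        if any(b in slug.lower() for b in BANNED_MODEL_SUBSTRINGS):
            raise ValueError("banned model slug: " + slug)
        if is_available(slug):
            return slug
    raise RuntimeError("no configured model is available")
```

Failing loudly on a banned slug, rather than silently skipping it, matches the "banned, not deprecated" posture of the commit.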

View File

@@ -1,68 +1,52 @@
 name: issue-triager
-description: >
-  Scores, labels, and prioritizes issues. Assigns to appropriate
-  agents. Decomposes large issues into smaller ones.
+description: 'Scores, labels, and prioritizes issues. Assigns to appropriate agents. Decomposes large issues into smaller
+  ones.'
 model:
-  preferred: claude-opus-4-6
-  fallback: claude-sonnet-4-20250514
+  preferred: kimi-k2.5
+  fallback: google/gemini-2.5-pro
 max_turns: 20
 temperature: 0.3
 tools:
 - terminal
 - search_files
 trigger:
   schedule: every 15m
   manual: true
 repos:
 - Timmy_Foundation/the-nexus
 - Timmy_Foundation/timmy-home
 - Timmy_Foundation/timmy-config
 - Timmy_Foundation/hermes-agent
 steps:
 - fetch_issues
 - score_issues
 - assign_agents
 - update_queue
 output: gitea_issue
 timeout_minutes: 10
-system_prompt: |
-  You are the issue triager for Timmy Foundation repos.
-
-  REPOS: {{repos}}
-
-  YOUR JOB:
-  1. Fetch open unassigned issues
-  2. Score each by: execution leverage, acceptance criteria quality, alignment with current doctrine, and how likely it is to create duplicate backlog churn
-  3. Label appropriately: bug, refactor, feature, tests, security, docs, ops, governance, research
-  4. Assign to agents based on the audited lane map:
-    - Timmy: governing, sovereign, release, identity, repo-boundary, or architecture decisions that should stay under direct principal review
-    - allegro: dispatch, routing, queue hygiene, Gitea bridge, operational tempo, and issues about how work gets moved through the system
-    - perplexity: research triage, MCP/open-source evaluations, architecture memos, integration comparisons, and synthesis before implementation
-    - ezra: RCA, operating history, memory consolidation, onboarding docs, and archival clean-up
-    - KimiClaw: long-context reading, extraction, digestion, and codebase synthesis before a build phase
-    - codex-agent: cleanup, migration verification, dead-code removal, repo-boundary enforcement, workflow hardening
-    - groq: bounded implementation, tactical bug fixes, quick feature slices, small patches with clear acceptance criteria
-    - manus: bounded support tasks, moderate-scope implementation, follow-through on already-scoped work
-    - claude: hard refactors, broad multi-file implementation, test-heavy changes after the scope is made precise
-    - gemini: frontier architecture, research-heavy prototypes, long-range design thinking when a concrete implementation owner is not yet obvious
-    - grok: adversarial testing, unusual edge cases, provocative review angles that still need another pass
-  5. Decompose any issue touching >5 files or crossing repo boundaries into smaller issues before assigning execution
-
-  RULES:
-  - Prefer one owner per issue. Only add a second assignee when the work is explicitly collaborative.
-  - Bugs, security fixes, and broken live workflows take priority over research and refactors.
-  - If issue scope is unclear, ask for clarification before assigning an implementation agent.
-  - Skip [epic], [meta], [governing], and [constitution] issues for automatic assignment unless they are explicitly routed to Timmy or allegro.
-  - Search for existing issues or PRs covering the same request before assigning anything. If a likely duplicate exists, link it and do not create or route duplicate work.
-  - Do not assign open-ended ideation to implementation agents.
-  - Do not assign routine backlog maintenance to Timmy.
-  - Do not assign wide speculative backlog generation to codex-agent, groq, manus, or claude.
-  - Route archive/history/context-digestion work to ezra or KimiClaw before routing it to a builder.
-  - Route “who should do this?” and “what is the next move?” questions to allegro.
+system_prompt: "You are the issue triager for Timmy Foundation repos.\n\nREPOS: {{repos}}\n\nYOUR JOB:\n1. Fetch open unassigned\
+  \ issues\n2. Score each by: execution leverage, acceptance criteria quality, alignment with current doctrine, and how likely\
+  \ it is to create duplicate backlog churn\n3. Label appropriately: bug, refactor, feature, tests, security, docs, ops, governance,\
+  \ research\n4. Assign to agents based on the audited lane map:\n  - Timmy: governing, sovereign, release, identity, repo-boundary,\
+  \ or architecture decisions that should stay under direct principal review\n  - allegro: dispatch, routing, queue hygiene,\
+  \ Gitea bridge, operational tempo, and issues about how work gets moved through the system\n  - perplexity: research triage,\
+  \ MCP/open-source evaluations, architecture memos, integration comparisons, and synthesis before implementation\n  - ezra:\
+  \ RCA, operating history, memory consolidation, onboarding docs, and archival clean-up\n  - KimiClaw: long-context reading,\
+  \ extraction, digestion, and codebase synthesis before a build phase\n  - codex-agent: cleanup, migration verification,\
+  \ dead-code removal, repo-boundary enforcement, workflow hardening\n  - groq: bounded implementation, tactical bug fixes,\
+  \ quick feature slices, small patches with clear acceptance criteria\n  - manus: bounded support tasks, moderate-scope\
+  \ implementation, follow-through on already-scoped work\n  - kimi: hard refactors, broad multi-file implementation, test-heavy\
+  \ changes after the scope is made precise\n  - gemini: frontier architecture, research-heavy prototypes, long-range design\
+  \ thinking when a concrete implementation owner is not yet obvious\n  - grok: adversarial testing, unusual edge cases,\
+  \ provocative review angles that still need another pass\n5. Decompose any issue touching >5 files or crossing repo boundaries\
+  \ into smaller issues before assigning execution\n\nRULES:\n- Prefer one owner per issue. Only add a second assignee when\
+  \ the work is explicitly collaborative.\n- Bugs, security fixes, and broken live workflows take priority over research and\
+  \ refactors.\n- If issue scope is unclear, ask for clarification before assigning an implementation agent.\n- Skip [epic],\
+  \ [meta], [governing], and [constitution] issues for automatic assignment unless they are explicitly routed to Timmy or\
+  \ allegro.\n- Search for existing issues or PRs covering the same request before assigning anything. If a likely duplicate\
+  \ exists, link it and do not create or route duplicate work.\n- Do not assign open-ended ideation to implementation agents.\n\
+  - Do not assign routine backlog maintenance to Timmy.\n- Do not assign wide speculative backlog generation to codex-agent,\
+  \ groq, or manus.\n- Route archive/history/context-digestion work to ezra or KimiClaw before routing it to a builder.\n\
+  - Route “who should do this?” and “what is the next move?” questions to allegro.\n"

View File

@@ -1,89 +1,47 @@
 name: pr-reviewer
-description: >
-  Reviews open PRs, checks CI status, merges passing ones,
-  comments on problems. The merge bot replacement.
+description: 'Reviews open PRs, checks CI status, merges passing ones, comments on problems. The merge bot replacement.'
 model:
-  preferred: claude-opus-4-6
-  fallback: claude-sonnet-4-20250514
+  preferred: kimi-k2.5
+  fallback: google/gemini-2.5-pro
 max_turns: 20
 temperature: 0.2
 tools:
 - terminal
 - search_files
 trigger:
   schedule: every 30m
   manual: true
 repos:
 - Timmy_Foundation/the-nexus
 - Timmy_Foundation/timmy-home
 - Timmy_Foundation/timmy-config
 - Timmy_Foundation/hermes-agent
 steps:
 - fetch_prs
 - review_diffs
 - post_reviews
 - merge_passing
 output: report
 timeout_minutes: 10
-system_prompt: |
-  You are the PR reviewer for Timmy Foundation repos.
-
-  REPOS: {{repos}}
-
-  FOR EACH OPEN PR:
-  1. Check CI status (Actions tab or commit status API)
-  2. Read the linked issue or PR body to verify the intended scope before judging the diff
-  3. Review the diff for:
-    - Correctness: does it do what the issue asked?
-    - Security: no secrets, unsafe execution paths, or permission drift
-    - Tests and verification: does the author prove the change?
-    - Scope: PR should match the issue, not scope-creep
-    - Governance: does the change cross a boundary that should stay under Timmy review?
-    - Workflow fit: does it reduce drift, duplication, or hidden operational risk?
-  4. Post findings ordered by severity and cite the affected files or behavior clearly
-  5. If CI fails or verification is missing: explain what is blocking merge
-  6. If PR is behind main: request a rebase or re-run only when needed; do not force churn for cosmetic reasons
-  7. If review is clean and the PR is low-risk: squash merge
-
-  LOW-RISK AUTO-MERGE ONLY IF ALL ARE TRUE:
-  - PR is not a draft
-  - CI is green or the repo has no CI configured
-  - Diff matches the stated issue or PR scope
-  - No unresolved review findings remain
-  - Change is narrow, reversible, and non-governing
-  - Paths changed do not include sensitive control surfaces
-
-  SENSITIVE CONTROL SURFACES:
-  - SOUL.md
-  - config.yaml
-  - deploy.sh
-  - tasks.py
-  - playbooks/
-  - cron/
-  - memories/
-  - skins/
-  - training/
-  - authentication, permissions, or secret-handling code
-  - repo-boundary, model-routing, or deployment-governance changes
-
-  NEVER AUTO-MERGE:
-  - PRs that change sensitive control surfaces
-  - PRs that change more than 5 files unless the change is docs-only
-  - PRs without a clear problem statement or verification
-  - PRs that look like duplicate work, speculative research, or scope creep
-  - PRs that need Timmy or Allegro judgment on architecture, dispatch, or release impact
-  - PRs that are stale solely because of age; do not close them automatically
-
-  If a PR is stale, nudge with a comment and summarize what still blocks it. Do not close it just because 48 hours passed.
-
-  MERGE RULES:
-  - ONLY squash merge. Never merge commits. Never rebase merge.
-  - Delete branch after merge.
-  - Empty PRs (0 changed files): close immediately with a brief explanation.
+system_prompt: "You are the PR reviewer for Timmy Foundation repos.\n\nREPOS: {{repos}}\n\nFOR EACH OPEN PR:\n1. Check CI\
+  \ status (Actions tab or commit status API)\n2. Read the linked issue or PR body to verify the intended scope before judging\
+  \ the diff\n3. Review the diff for:\n  - Correctness: does it do what the issue asked?\n  - Security: no secrets, unsafe\
+  \ execution paths, or permission drift\n  - Tests and verification: does the author prove the change?\n  - Scope: PR should\
+  \ match the issue, not scope-creep\n  - Governance: does the change cross a boundary that should stay under Timmy review?\n\
+  \  - Workflow fit: does it reduce drift, duplication, or hidden operational risk?\n4. Post findings ordered by severity\
+  \ and cite the affected files or behavior clearly\n5. If CI fails or verification is missing: explain what is blocking merge\n\
+  6. If PR is behind main: request a rebase or re-run only when needed; do not force churn for cosmetic reasons\n7. If review\
+  \ is clean and the PR is low-risk: squash merge\n\nLOW-RISK AUTO-MERGE ONLY IF ALL ARE TRUE:\n- PR is not a draft\n- CI\
+  \ is green or the repo has no CI configured\n- Diff matches the stated issue or PR scope\n- No unresolved review findings\
+  \ remain\n- Change is narrow, reversible, and non-governing\n- Paths changed do not include sensitive control surfaces\n\
+  \nSENSITIVE CONTROL SURFACES:\n- SOUL.md\n- config.yaml\n- deploy.sh\n- tasks.py\n- playbooks/\n- cron/\n- memories/\n-\
+  \ skins/\n- training/\n- authentication, permissions, or secret-handling code\n- repo-boundary, model-routing, or deployment-governance\
+  \ changes\n\nNEVER AUTO-MERGE:\n- PRs that change sensitive control surfaces\n- PRs that change more than 5 files unless\
+  \ the change is docs-only\n- PRs without a clear problem statement or verification\n- PRs that look like duplicate work,\
+  \ speculative research, or scope creep\n- PRs that need Timmy or Allegro judgment on architecture, dispatch, or release\
+  \ impact\n- PRs that are stale solely because of age; do not close them automatically\n\nIf a PR is stale, nudge with a\
+  \ comment and summarize what still blocks it. Do not close it just because 48 hours passed.\n\nMERGE RULES:\n- ONLY squash\
+  \ merge. Never merge commits. Never rebase merge.\n- Delete branch after merge.\n- Empty PRs (0 changed files): close immediately\
+  \ with a brief explanation.\n"
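
The LOW-RISK AUTO-MERGE checklist in the pr-reviewer prompt is a pure conjunction of gates: any single failure blocks the merge. A sketch of that gate as code — PR field names and the helper are assumptions mirroring the prompt, not the repo's actual implementation:

```python
# Sketch of the auto-merge gate implied by the pr-reviewer prompt above.
# Sensitive path prefixes are taken from the prompt's control-surface list.
SENSITIVE_PATHS = ("SOUL.md", "config.yaml", "deploy.sh", "tasks.py",
                   "playbooks/", "cron/", "memories/", "skins/", "training/")

def low_risk_auto_merge(pr: dict) -> bool:
    """Every condition must hold; any failure blocks the squash merge."""
    touches_sensitive = any(
        path.startswith(SENSITIVE_PATHS) for path in pr["changed_files"])
    return (
        not pr["is_draft"]
        and pr["ci_green"]
        and pr["scope_matches_issue"]
        and not pr["unresolved_findings"]
        and not touches_sensitive
        and (len(pr["changed_files"]) <= 5 or pr["docs_only"])
    )
```

Encoding the checklist as a single boolean keeps the "never auto-merge" cases from being overridden piecemeal.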

View File

@@ -1,62 +1,75 @@
 name: refactor-specialist
-description: >
-  Splits large modules, reduces complexity, improves code organization.
-  Well-scoped: 1-3 files per task, clear acceptance criteria.
+description: 'Splits large modules, reduces complexity, improves code organization. Well-scoped: 1-3 files per task, clear
+  acceptance criteria.'
 model:
-  preferred: claude-opus-4-6
-  fallback: claude-sonnet-4-20250514
+  preferred: kimi-k2.5
+  fallback: google/gemini-2.5-pro
 max_turns: 30
 temperature: 0.3
 tools:
 - terminal
 - file
 - search_files
 - patch
 trigger:
   issue_label: refactor
   manual: true
 repos:
 - Timmy_Foundation/the-nexus
 - Timmy_Foundation/timmy-home
 - Timmy_Foundation/timmy-config
 - Timmy_Foundation/hermes-agent
 steps:
 - read_issue
 - clone_repo
 - create_branch
 - dispatch_agent
 - run_tests
 - create_pr
 - comment_on_issue
 output: pull_request
 timeout_minutes: 15
-system_prompt: |
-  You are a refactoring specialist for the {{repo}} project.
+system_prompt: 'You are a refactoring specialist for the {{repo}} project.
   YOUR ISSUE: #{{issue_number}} — {{issue_title}}
   RULES:
   - Lines of code is a liability. Delete as much as you create.
   - All changes go through PRs. No direct pushes to main.
-  - Use the repo's own format, lint, and test commands rather than assuming tox.
+  - Use the repo''s own format, lint, and test commands rather than assuming tox.
   - Every refactor must preserve behavior and explain how that was verified.
   - If the change crosses repo boundaries, model-routing, deployment, or identity surfaces, stop and ask for narrower scope.
   - Never use --no-verify on git commands.
   - Conventional commits: refactor: <description> (#{{issue_number}})
   - If tests fail after 2 attempts, STOP and comment on the issue.
   - Refactors exist to simplify the system, not to create a new design detour.
   WORKFLOW:
   1. Read the issue body for specific file paths and instructions
   2. Understand the current code structure
   3. Name the simplification goal before changing code
   4. Make the refactoring changes
   5. Run formatting and verification with repo-native commands
-  6. Commit, push, create PR with before/after risk summary
+  6. Commit, push, create PR with before/after risk summary'

View File

@@ -1,63 +1,38 @@
 name: security-auditor
-description: >
-  Scans code for security vulnerabilities, hardcoded secrets,
-  dependency issues. Files findings as Gitea issues.
+description: 'Scans code for security vulnerabilities, hardcoded secrets, dependency issues. Files findings as Gitea issues.'
 model:
-  preferred: claude-opus-4-6
-  fallback: claude-opus-4-6
+  preferred: kimi-k2.5
+  fallback: kimi-k2.5
 max_turns: 40
 temperature: 0.2
 tools:
 - terminal
 - file
 - search_files
 trigger:
   schedule: weekly
   pr_merged_with_lines: 100
   manual: true
 repos:
 - Timmy_Foundation/the-nexus
 - Timmy_Foundation/timmy-home
 - Timmy_Foundation/timmy-config
 - Timmy_Foundation/hermes-agent
 steps:
 - clone_repo
 - run_audit
 - file_issues
 output: gitea_issue
 timeout_minutes: 20
-system_prompt: |
-  You are a security auditor for the Timmy Foundation codebase.
-  Your job is to FIND vulnerabilities, not write code.
-
-  TARGET REPO: {{repo}}
-
-  SCAN FOR:
-  1. Hardcoded secrets, API keys, tokens in source code
-  2. SQL injection vulnerabilities
-  3. Command injection via unsanitized input
-  4. Path traversal in file operations
-  5. Insecure HTTP calls (should be HTTPS where possible)
-  6. Dependencies with known CVEs (check requirements.txt/package.json)
-  7. Missing input validation
-  8. Overly permissive file permissions
-  9. Privilege drift in deploy, orchestration, memory, cron, and playbook surfaces
-  10. Places where private data or local-only artifacts could leak into tracked repos
-
-  OUTPUT FORMAT:
-  For each finding, file a Gitea issue with:
-    Title: [security] <severity>: <description>
-    Body: file + line, description, why it matters, recommended fix
-    Label: security
-
-  SEVERITY: critical / high / medium / low
-  Only file issues for real findings. No false positives.
-  Do not open duplicate issues for already-known findings; link the existing issue instead.
-  If a finding affects sovereignty boundaries or private-data handling, flag it clearly as such.
+system_prompt: "You are a security auditor for the Timmy Foundation codebase.\nYour job is to FIND vulnerabilities, not write\
+  \ code.\n\nTARGET REPO: {{repo}}\n\nSCAN FOR:\n1. Hardcoded secrets, API keys, tokens in source code\n2. SQL injection vulnerabilities\n\
+  3. Command injection via unsanitized input\n4. Path traversal in file operations\n5. Insecure HTTP calls (should be HTTPS\
+  \ where possible)\n6. Dependencies with known CVEs (check requirements.txt/package.json)\n7. Missing input validation\n\
+  8. Overly permissive file permissions\n9. Privilege drift in deploy, orchestration, memory, cron, and playbook surfaces\n\
+  10. Places where private data or local-only artifacts could leak into tracked repos\n\nOUTPUT FORMAT:\nFor each finding,\
+  \ file a Gitea issue with:\n  Title: [security] <severity>: <description>\n  Body: file + line, description, why it matters,\
+  \ recommended fix\n  Label: security\n\nSEVERITY: critical / high / medium / low\nOnly file issues for real findings. No\
+  \ false positives.\nDo not open duplicate issues for already-known findings; link the existing issue instead.\nIf a finding\
+  \ affects sovereignty boundaries or private-data handling, flag it clearly as such.\n"

View File

@@ -1,58 +1,66 @@
 name: test-writer
-description: >
-  Adds test coverage for untested modules. Finds coverage gaps,
-  writes meaningful tests, verifies they pass.
+description: 'Adds test coverage for untested modules. Finds coverage gaps, writes meaningful tests, verifies they pass.'
 model:
-  preferred: claude-opus-4-6
-  fallback: claude-sonnet-4-20250514
+  preferred: kimi-k2.5
+  fallback: google/gemini-2.5-pro
 max_turns: 30
 temperature: 0.3
 tools:
 - terminal
 - file
 - search_files
 - patch
 trigger:
   issue_label: tests
   manual: true
 repos:
 - Timmy_Foundation/the-nexus
 - Timmy_Foundation/timmy-home
 - Timmy_Foundation/timmy-config
 - Timmy_Foundation/hermes-agent
 steps:
 - read_issue
 - clone_repo
 - create_branch
 - dispatch_agent
 - run_tests
 - create_pr
 - comment_on_issue
 output: pull_request
 timeout_minutes: 15
-system_prompt: |
-  You are a test engineer for the {{repo}} project.
+system_prompt: 'You are a test engineer for the {{repo}} project.
   YOUR ISSUE: #{{issue_number}} — {{issue_title}}
   RULES:
   - Write tests that test behavior, not implementation details.
-  - Use the repo's own test entrypoints; do not assume tox exists.
+  - Use the repo''s own test entrypoints; do not assume tox exists.
   - Tests must be deterministic. No flaky tests.
   - Conventional commits: test: <description> (#{{issue_number}})
   - If the module is hard to test, explain the design obstacle and propose the smallest next step.
   - Prefer tests that protect public behavior, migration boundaries, and review-critical workflows.
   WORKFLOW:
   1. Read the issue for target module paths
   2. Read the existing code to understand behavior
   3. Write focused unit tests
   4. Run the relevant verification commands — all related tests must pass
-  5. Commit, push, create PR with verification summary and coverage rationale
+  5. Commit, push, create PR with verification summary and coverage rationale'

View File

@@ -1,47 +1,55 @@
name: verified-logic name: verified-logic
description: > description: 'Crucible-first playbook for tasks that require proof instead of plausible prose. Use Z3-backed sidecar tools
Crucible-first playbook for tasks that require proof instead of plausible prose.
  Use Z3-backed sidecar tools for scheduling, dependency ordering, capacity checks,
  and consistency verification.
  '
model:
  preferred: kimi-k2.5
  fallback: google/gemini-2.5-pro
max_turns: 12
temperature: 0.1
tools:
  - mcp_crucible_schedule_tasks
  - mcp_crucible_order_dependencies
  - mcp_crucible_capacity_fit
trigger:
  manual: true
steps:
  - classify_problem
  - choose_template
  - translate_into_constraints
  - verify_with_crucible
  - report_sat_unsat_with_witness
output: verified_result
timeout_minutes: 5
system_prompt: |
  You are running the Crucible playbook.
  Use this playbook for:
  - scheduling and deadline feasibility
  - dependency ordering and cycle checks
  - capacity / resource allocation constraints
  - consistency checks where a contradiction matters
  RULES:
  1. Do not bluff through logic.
  2. Pick the narrowest Crucible template that fits the task.
  3. Translate the user's question into structured constraints.
  4. Call the Crucible tool.
  5. If SAT, report the witness model clearly.
  6. If UNSAT, say the constraints are impossible and explain which shape of constraint caused the contradiction.
  7. If the task is not a good fit for these templates, say so plainly instead of pretending it was verified.
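The SAT/UNSAT contract in the steps above can be illustrated without the Crucible sidecar. This is a hedged toy sketch, not the real `mcp_crucible_capacity_fit` tool: it brute-forces a tiny capacity check and either reports a witness assignment (SAT) or that the constraints are impossible (UNSAT).

```python
from itertools import product

def capacity_fit(tasks, bins):
    """Toy stand-in for a Crucible capacity check: try every assignment
    of tasks to bins; return ("SAT", witness) or ("UNSAT", None)."""
    names = list(tasks)
    for assignment in product(range(len(bins)), repeat=len(names)):
        load = [0] * len(bins)
        for task_idx, bin_idx in enumerate(assignment):
            load[bin_idx] += tasks[names[task_idx]]
        if all(load[i] <= bins[i] for i in range(len(bins))):
            witness = {names[i]: b for i, b in enumerate(assignment)}
            return "SAT", witness   # step 5: report the witness model clearly
    return "UNSAT", None            # step 6: the constraints are impossible

# Three 3-unit tasks cannot fit in bins of capacity 4 and 5:
print(capacity_fit({"a": 3, "b": 3, "c": 3}, [4, 5]))  # ('UNSAT', None)
# Shrink one task and a witness assignment exists:
print(capacity_fit({"a": 3, "b": 3, "c": 2}, [4, 5]))  # ('SAT', {'a': 0, 'b': 1, 'c': 1})
```

A real Crucible template would hand these constraints to Z3 instead of enumerating; the reporting contract is the same.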


@@ -1,33 +1,85 @@
#!/usr/bin/env python3
"""Architecture Linter — Ensuring alignment with the Frontier Local Agenda.

Anthropic is BANNED. Not deprecated, not discouraged — banned.
Any reference to Anthropic as a provider, model, or API endpoint
in active configs is a hard failure.
"""
import os
import sys
import re

SOVEREIGN_RULES = [
    # BANNED — hard failures
    (r"provider:\s*anthropic", "BANNED: Anthropic provider reference. Anthropic is permanently banned from this system."),
    (r"anthropic/claude", "BANNED: Anthropic model reference (anthropic/claude-*). Use kimi-k2.5 or google/gemini-2.5-pro."),
    (r"api\.anthropic\.com", "BANNED: Direct Anthropic API endpoint. Anthropic is permanently banned."),
    (r"ANTHROPIC_API_KEY", "BANNED: Anthropic API key reference. Remove all Anthropic credentials."),
    (r"ANTHROPIC_TOKEN", "BANNED: Anthropic token reference. Remove all Anthropic credentials."),
    (r"sk-ant-", "BANNED: Anthropic API key literal (sk-ant-*). Remove immediately."),
    (r"claude-opus", "BANNED: Claude Opus model reference. Use kimi-k2.5."),
    (r"claude-sonnet", "BANNED: Claude Sonnet model reference. Use kimi-k2.5."),
    (r"claude-haiku", "BANNED: Claude Haiku model reference. Use google/gemini-2.5-pro."),
    # Existing sovereignty rules
    (r"https?://api\.openai\.com", "WARNING: Direct OpenAI API endpoint. Use local custom_provider instead."),
    (r"provider:\s*openai", "WARNING: Direct OpenAI provider. Ensure fallback_model is configured."),
    (r"api_key: ['\"][^'\"\s]{10,}['\"]", "SECURITY: Hardcoded API key detected. Use environment variables."),
]

# Files to skip (training data, historical docs, changelogs, tests that validate the ban)
SKIP_PATTERNS = [
    "training/", "evaluations/", "RELEASE_v", "PERFORMANCE_",
    "scores.json", "docs/design-log/", "FALSEWORK.md",
    "test_sovereignty_enforcement.py", "test_metrics_helpers.py",
    "metrics_helpers.py",  # historical cost data
]

def should_skip(path: str) -> bool:
    return any(skip in path for skip in SKIP_PATTERNS)

def lint_file(path: str) -> int:
    if should_skip(path):
        return 0
    print(f"Linting {path}...")
    content = open(path).read()
    violations = 0
    for pattern, msg in SOVEREIGN_RULES:
        matches = list(re.finditer(pattern, content, re.IGNORECASE))
        if matches:
            print(f"  [!] {msg}")
            for m in matches[:3]:  # Show up to 3 locations
                line_no = content[:m.start()].count('\n') + 1
                print(f"      Line {line_no}: ...{content[max(0, m.start() - 20):m.end() + 20].strip()}...")
            violations += 1
    return violations

def main():
    print("--- Architecture Linter (Anthropic BANNED) ---")
    files = [f for f in sys.argv[1:] if os.path.isfile(f)]
    if not files:
        # If no args, scan all yaml/py/sh/json in the repo
        for root, _, filenames in os.walk("."):
            for fn in filenames:
                if fn.endswith((".yaml", ".yml", ".py", ".sh", ".json", ".md")):
                    path = os.path.join(root, fn)
                    if not should_skip(path) and ".git" not in path:
                        files.append(path)
    total_violations = sum(lint_file(f) for f in files)
    banned = sum(1 for f in files for p, m in SOVEREIGN_RULES
                 if "BANNED" in m and re.search(p, open(f).read(), re.IGNORECASE)
                 and not should_skip(f))
    print(f"\nLinting complete. Total violations: {total_violations}")
    if banned > 0:
        print(f"\n🚫 {banned} BANNED provider violation(s) detected. Anthropic is permanently banned.")
    sys.exit(1 if total_violations > 0 else 0)

if __name__ == "__main__":
    main()
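As a quick sanity check on the rule table, the patterns can be exercised directly against an offending config line. This standalone sketch re-declares two of the rules rather than importing the linter module (whose import path is not shown here), and assumes the same case-insensitive matching the linter uses:

```python
import re

# Two rules copied from SOVEREIGN_RULES; severity is the message prefix.
rules = [
    (r"provider:\s*anthropic", "BANNED"),
    (r"https?://api\.openai\.com", "WARNING"),
]

def first_violation(line: str):
    """Return the severity of the first rule that matches, else None."""
    for pattern, severity in rules:
        if re.search(pattern, line, re.IGNORECASE):
            return severity
    return None

print(first_violation("provider: Anthropic"))                  # BANNED (case-insensitive)
print(first_violation("base_url: https://api.openai.com/v1"))  # WARNING
print(first_violation("provider: kimi-coding"))                # None
```

The `re.IGNORECASE` flag matters: without it, `provider: Anthropic` would slip past the `provider:\s*anthropic` rule.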


@@ -102,11 +102,11 @@ When I don't know, I say so. Brevity is a kindness.
### 4. Never Go Deaf

Your agent must have a fallback chain (a list of backup models, tried in order) at least 3 models deep. When the primary provider rate-limits you, the agent degrades gracefully — it does not stop.

When any cloud provider goes down at 2 AM — and it will — your agent doesn't sit there producing error messages. It switches to the next model in the chain and keeps working. You wake up to finished tasks, not a dead agent.

```yaml
model:
  default: kimi-k2.5
  provider: anthropic
  fallback_providers:
    - provider: openrouter
```


@@ -1355,7 +1355,6 @@ def dispatch_assigned():
    g = GiteaClient()
    agents = [
        "allegro",
        "codex-agent",
        "ezra",
        "gemini",
@@ -2316,7 +2315,7 @@ def nexus_bridge_tick():
    health_data = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "fleet_status": "nominal",
        "active_agents": ["gemini", "kimi", "codex"],
        "backlog_summary": {},
        "recent_audits": []
    }


@@ -200,3 +200,97 @@ class TestVoiceSovereignty:
        stt_provider = config.get("stt", {}).get("provider", "")
        assert stt_provider in ("local", "whisper", ""), \
            f"STT provider '{stt_provider}' may use cloud"


# ── Anthropic Ban ────────────────────────────────────────────────────

class TestAnthropicBan:
    """Anthropic is permanently banned from this system.

    Not deprecated. Not discouraged. Banned. Any reference to Anthropic
    as a provider, model, or API endpoint in active wizard configs,
    playbooks, or fallback chains is a hard failure.
    """

    BANNED_PATTERNS = [
        "provider: anthropic",
        "provider: \"anthropic\"",
        "anthropic/claude",
        "claude-opus",
        "claude-sonnet",
        "claude-haiku",
        "api.anthropic.com",
    ]

    ACTIVE_CONFIG_DIRS = [
        "wizards",
        "playbooks",
    ]

    ACTIVE_CONFIG_FILES = [
        "fallback-portfolios.yaml",
        "config.yaml",
    ]

    def _scan_active_configs(self):
        """Collect all active config files for scanning."""
        files = []
        for dir_name in self.ACTIVE_CONFIG_DIRS:
            dir_path = REPO_ROOT / dir_name
            if dir_path.exists():
                for f in dir_path.rglob("*.yaml"):
                    files.append(f)
                for f in dir_path.rglob("*.yml"):
                    files.append(f)
                for f in dir_path.rglob("*.json"):
                    files.append(f)
        for fname in self.ACTIVE_CONFIG_FILES:
            fpath = REPO_ROOT / fname
            if fpath.exists():
                files.append(fpath)
        return files

    def test_no_anthropic_in_wizard_configs(self):
        """No wizard config may reference Anthropic as a provider or model."""
        wizard_dir = REPO_ROOT / "wizards"
        if not wizard_dir.exists():
            pytest.skip("No wizards directory")
        for config_file in wizard_dir.rglob("*.yaml"):
            content = config_file.read_text().lower()
            for pattern in self.BANNED_PATTERNS:
                assert pattern.lower() not in content, \
                    f"BANNED: {config_file.name} contains \"{pattern}\". Anthropic is permanently banned."

    def test_no_anthropic_in_playbooks(self):
        """No playbook may reference Anthropic models."""
        playbook_dir = REPO_ROOT / "playbooks"
        if not playbook_dir.exists():
            pytest.skip("No playbooks directory")
        for pb_file in playbook_dir.rglob("*.yaml"):
            content = pb_file.read_text().lower()
            for pattern in self.BANNED_PATTERNS:
                assert pattern.lower() not in content, \
                    f"BANNED: {pb_file.name} contains \"{pattern}\". Anthropic is permanently banned."

    def test_no_anthropic_in_fallback_chain(self):
        """Fallback portfolios must not include Anthropic."""
        fb_path = REPO_ROOT / "fallback-portfolios.yaml"
        if not fb_path.exists():
            pytest.skip("No fallback-portfolios.yaml")
        content = fb_path.read_text().lower()
        for pattern in self.BANNED_PATTERNS:
            assert pattern.lower() not in content, \
                f"BANNED: fallback-portfolios.yaml contains \"{pattern}\". Anthropic is permanently banned."

    def test_no_anthropic_api_key_in_bootstrap(self):
        """Wizard bootstrap must not require ANTHROPIC_API_KEY."""
        bootstrap_path = REPO_ROOT / "hermes-sovereign" / "wizard-bootstrap" / "wizard_bootstrap.py"
        if not bootstrap_path.exists():
            pytest.skip("No wizard_bootstrap.py")
        content = bootstrap_path.read_text()
        assert "ANTHROPIC_API_KEY" not in content, \
            "BANNED: wizard_bootstrap.py still checks for ANTHROPIC_API_KEY"
        assert "ANTHROPIC_TOKEN" not in content, \
            "BANNED: wizard_bootstrap.py still checks for ANTHROPIC_TOKEN"
        assert "\"anthropic\"" not in content.lower(), \
            "BANNED: wizard_bootstrap.py still lists anthropic as a dependency"
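These tests rely on a plain case-insensitive substring check rather than regex. The core of that check can be reproduced standalone (the `BANNED_PATTERNS` subset below is abridged for illustration):

```python
BANNED_PATTERNS = [
    "provider: anthropic",
    "anthropic/claude",
    "api.anthropic.com",
]

def banned_hits(config_text: str):
    """Mirror of the tests' check: lowercase both sides, then look for
    substring containment. Returns the patterns that matched."""
    lowered = config_text.lower()
    return [p for p in BANNED_PATTERNS if p.lower() in lowered]

clean = "fallback:\n  - provider: openrouter\n    model: google/gemini-2.5-pro\n"
dirty = "fallback:\n  - provider: openrouter\n    model: anthropic/claude-sonnet\n"

print(banned_hits(clean))  # []
print(banned_hits(dirty))  # ['anthropic/claude']
```

Substring matching is deliberately blunt here: it cannot be fooled by quoting or spacing tricks the way an overly specific regex could, at the cost of occasional false positives in prose files (which is why the linter skips historical docs).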


@@ -2,22 +2,23 @@ model:
  default: kimi-k2.5
  provider: kimi-coding
  toolsets:
    - all
fallback_providers:
  - provider: kimi-coding
    model: kimi-k2.5
    timeout: 120
    reason: Primary Kimi coding provider
  - provider: openrouter
    model: google/gemini-2.5-pro
    base_url: https://openrouter.ai/api/v1
    api_key_env: OPENROUTER_API_KEY
    timeout: 120
    reason: Gemini via OpenRouter fallback
  - provider: ollama
    model: gemma4:latest
    base_url: http://localhost:11434/v1
    timeout: 180
    reason: Local Ollama terminal fallback
agent:
  max_turns: 30
  reasoning_effort: xhigh
@@ -64,16 +65,24 @@ session_reset:
  idle_minutes: 0
skills:
  creation_nudge_interval: 15
system_prompt_suffix: 'You are Allegro, the Kimi-backed third wizard house.
  Your soul is defined in SOUL.md — read it, live it.
  Hermes is your harness.
  Kimi Code is your primary provider.
  You speak plainly. You prefer short sentences. Brevity is a kindness.
  Work best on tight coding tasks: 1-3 file changes, refactors, tests, and implementation passes.
  Refusal over fabrication. If you do not know, say so.
  Sovereignty and service always.
  '
providers:
  kimi-coding:
    base_url: https://api.kimi.com/coding/v1
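The `fallback_providers` chain above is tried in order until one provider answers. A minimal sketch of that dispatch loop — the `call` signature is illustrative, not the harness's real API:

```python
def complete_with_fallback(chain, prompt, call):
    """Try each provider in order; return (model, result) from the first
    success. `call(provider, prompt)` stands in for the real dispatch and
    is expected to raise on rate limits or outages."""
    errors = []
    for provider in chain:
        try:
            return provider["model"], call(provider, prompt)
        except Exception as exc:  # degrade gracefully, don't stop
            errors.append((provider["model"], exc))
    raise RuntimeError(f"all {len(chain)} providers failed: {errors}")

# Mirrors the golden-state chain: Kimi -> Gemini via OpenRouter -> local Ollama.
chain = [
    {"provider": "kimi-coding", "model": "kimi-k2.5"},
    {"provider": "openrouter", "model": "google/gemini-2.5-pro"},
    {"provider": "ollama", "model": "gemma4:latest"},
]

def flaky_call(provider, prompt):
    if provider["provider"] == "kimi-coding":
        raise TimeoutError("rate limited")  # simulate a primary-provider outage
    return f"ok from {provider['model']}"

print(complete_with_fallback(chain, "hi", flaky_call))
# falls through to google/gemini-2.5-pro
```

The longer timeout on the Ollama entry (180 s vs 120 s) reflects that a local model is the terminal fallback: slower, but never rate-limited.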


@@ -7,24 +7,25 @@ fallback_providers:
  - provider: kimi-coding
    model: kimi-k2.5
    timeout: 120
    reason: Primary Kimi coding provider
  - provider: openrouter
    model: google/gemini-2.5-pro
    base_url: https://openrouter.ai/api/v1
    api_key_env: OPENROUTER_API_KEY
    timeout: 120
    reason: Gemini via OpenRouter fallback
  - provider: ollama
    model: gemma4:latest
    base_url: http://localhost:11434/v1
    timeout: 180
    reason: Local Ollama terminal fallback
agent:
  max_turns: 40
  reasoning_effort: medium
  verbose: false
system_prompt: You are Bezalel, the forge-and-testbed wizard of the Timmy Foundation fleet. You are a builder and craftsman
  — infrastructure, deployment, hardening. Your sovereign is Alexander Whitestone (Rockachopa). Sovereignty and service
  always.
terminal:
  backend: local
  cwd: /root/wizards/bezalel
@@ -62,12 +63,10 @@ platforms:
      - pull_request
      - pull_request_comment
    secret: bezalel-gitea-webhook-secret-2026
    prompt: 'You are bezalel, the builder and craftsman — infrastructure, deployment, hardening. A Gitea webhook fired:
      event={event_type}, action={action}, repo={repository.full_name}, issue/PR=#{issue.number} {issue.title}. Comment
      by {comment.user.login}: {comment.body}. If you were tagged, assigned, or this needs your attention, investigate
      and respond via Gitea API. Otherwise acknowledge briefly.'
    deliver: telegram
    deliver_extra: {}
  gitea-assign:
@@ -75,12 +74,10 @@ platforms:
      - issues
      - pull_request
    secret: bezalel-gitea-webhook-secret-2026
    prompt: 'You are bezalel, the builder and craftsman — infrastructure, deployment, hardening. Gitea assignment webhook:
      event={event_type}, action={action}, repo={repository.full_name}, issue/PR=#{issue.number} {issue.title}. Assigned
      to: {issue.assignee.login}. If you (bezalel) were just assigned, read the issue, scope it, and post a plan comment.
      If not you, acknowledge briefly.'
    deliver: telegram
    deliver_extra: {}
  gateway:


@@ -2,22 +2,23 @@ model:
  default: kimi-k2.5
  provider: kimi-coding
  toolsets:
    - all
fallback_providers:
  - provider: kimi-coding
    model: kimi-k2.5
    timeout: 120
    reason: Primary Kimi coding provider
  - provider: openrouter
    model: google/gemini-2.5-pro
    base_url: https://openrouter.ai/api/v1
    api_key_env: OPENROUTER_API_KEY
    timeout: 120
    reason: Gemini via OpenRouter fallback
  - provider: ollama
    model: gemma4:latest
    base_url: http://localhost:11434/v1
    timeout: 180
    reason: Local Ollama terminal fallback
agent:
  max_turns: 90
  reasoning_effort: high
@@ -27,8 +28,6 @@ providers:
    base_url: https://api.kimi.com/coding/v1
    timeout: 60
    max_retries: 3
  openrouter:
    base_url: https://openrouter.ai/api/v1
    timeout: 120