Compare commits

..

1 Commits

Author SHA1 Message Date
Alexander Whitestone
6e1decd29b feat: [MEMPALACE][MP-4] Memory promotion — scratchpad to palace with intent (#371)
Refs #371
Agent: groq
2026-04-08 03:22:06 -04:00
189 changed files with 432 additions and 19375 deletions

View File

@@ -1,54 +0,0 @@
## Summary
<!-- What changed and why. One paragraph max. -->
## Governing Issue
<!-- REQUIRED. Every PR must reference at least one issue. Max 3 issues per PR. -->
<!-- Closes #ISSUENUM -->
<!-- Refs #ISSUENUM -->
## Acceptance Criteria
<!-- List the specific outcomes this PR delivers. Check each only when proven. -->
<!-- Copy these from the governing issue if it has them. -->
- [ ] Criterion 1
- [ ] Criterion 2
## Proof
<!-- No proof = no merge. See CONTRIBUTING.md for the full standard. -->
### Commands / logs / world-state proof
<!-- Paste the exact commands, output, log paths, or world-state artifacts that prove each acceptance criterion was met. -->
```
$ <command you ran>
<relevant output>
```
### Visual proof (if applicable)
<!-- For skin updates, UI changes, dashboard changes: attach screenshot to the PR discussion. -->
<!-- Name what the screenshot proves. Do not commit binary media unless explicitly required. -->
## Risk and Rollback
<!-- What could go wrong? How do we undo it? -->
- **Risk level:** low / medium / high
- **What breaks if this is wrong:**
- **How to rollback:**
## Checklist
<!-- Complete every item before requesting review. -->
- [ ] PR body references at least one issue number (`Closes #N` or `Refs #N`)
- [ ] Changed files are syntactically valid (`python -c "import ast; ast.parse(open(f).read())"`, `node --check`, `bash -n`)
- [ ] Proof meets CONTRIBUTING.md standard (exact commands, output, or artifacts — not "looks right")
- [ ] Branch is up-to-date with base
- [ ] No more than 3 unrelated issues bundled in this PR
- [ ] Shell scripts are executable (`chmod +x`)

View File

@@ -1,42 +0,0 @@
# architecture-lint.yml — CI gate for the Architecture Linter v2
# Refs: #437 — repo-aware, test-backed, CI-enforced.
#
# Runs on every PR to main. Validates Python syntax, then runs
# linter tests and finally lints the repo itself.
name: Architecture Lint
on:
pull_request:
branches: [main, master]
push:
branches: [main]
jobs:
linter-tests:
name: Linter Tests
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Install test deps
run: pip install pytest
- name: Compile-check linter
run: python3 -m py_compile scripts/architecture_linter_v2.py
- name: Run linter tests
run: python3 -m pytest tests/test_linter.py -v
lint-repo:
name: Lint Repository
runs-on: ubuntu-latest
needs: linter-tests
continue-on-error: true
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Run architecture linter
run: python3 scripts/architecture_linter_v2.py .

View File

@@ -1,29 +0,0 @@
# pr-checklist.yml — Automated PR quality gate
# Refs: #393 (PERPLEXITY-08), Epic #385
#
# Enforces the review checklist that agents skip when left to self-approve.
# Runs on every pull_request. Fails fast so bad PRs never reach a reviewer.
name: PR Checklist
on:
pull_request:
branches: [main, master]
jobs:
pr-checklist:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Run PR checklist
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: python3 bin/pr-checklist.py

View File

@@ -1,32 +0,0 @@
name: Smoke Test
on:
pull_request:
push:
branches: [main]
jobs:
smoke:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Parse check
run: |
find . -name '*.yml' -o -name '*.yaml' | grep -v .gitea | xargs -r python3 -c "import sys,yaml; [yaml.safe_load(open(f)) for f in sys.argv[1:]]"
find . -name '*.json' | xargs -r python3 -m json.tool > /dev/null
find . -name '*.py' | xargs -r python3 -m py_compile
find . -name '*.sh' | xargs -r bash -n
echo "PASS: All files parse"
- name: Secret scan
run: |
if grep -rE 'sk-or-|sk-ant-|ghp_|AKIA' . --include='*.yml' --include='*.py' --include='*.sh' 2>/dev/null \
| grep -v '.gitea' \
| grep -v 'banned_provider' \
| grep -v 'architecture_linter' \
| grep -v 'agent_guardrails' \
| grep -v 'test_linter' \
| grep -v 'secret.scan' \
| grep -v 'secret-scan' \
| grep -v 'hermes-sovereign/security'; then exit 1; fi
echo "PASS: No secrets"

View File

@@ -1,135 +0,0 @@
# validate-config.yaml
# Validates all config files, scripts, and playbooks on every PR.
# Addresses #289: repo-native validation for timmy-config changes.
#
# Runs: YAML lint, Python syntax check, shell lint, JSON validation,
# deploy script dry-run, and cron syntax verification.
name: Validate Config
on:
pull_request:
branches: [main]
push:
branches: [main]
jobs:
yaml-lint:
name: YAML Lint
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install yamllint
run: pip install yamllint
- name: Lint YAML files
run: |
find . -name '*.yaml' -o -name '*.yml' | \
grep -v '.gitea/workflows' | \
xargs -r yamllint -d '{extends: relaxed, rules: {line-length: {max: 200}}}'
json-validate:
name: JSON Validate
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Validate JSON files
run: |
find . -name '*.json' -print0 | while IFS= read -r -d '' f; do
echo "Validating: $f"
python3 -m json.tool "$f" > /dev/null || exit 1
done
python-check:
name: Python Syntax & Import Check
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Install dependencies
run: |
pip install flake8
- name: Compile-check all Python files
run: |
find . -name '*.py' -print0 | while IFS= read -r -d '' f; do
echo "Checking: $f"
python3 -m py_compile "$f" || exit 1
done
- name: Flake8 critical errors only
run: |
flake8 --select=E9,F63,F7,F82 --show-source --statistics \
scripts/ bin/ tests/
python-test:
name: Python Test Suite
runs-on: ubuntu-latest
needs: python-check
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Install test dependencies
run: pip install pytest pyyaml
- name: Run tests
run: python3 -m pytest tests/ -v --tb=short
shell-lint:
name: Shell Script Lint
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install shellcheck
run: sudo apt-get install -y shellcheck
- name: Lint shell scripts
run: |
find . -name '*.sh' -not -path './.git/*' -print0 | xargs -0 -r shellcheck --severity=error
cron-validate:
name: Cron Syntax Check
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Validate cron entries
run: |
if [ -d cron ]; then
find cron -name '*.cron' -o -name '*.crontab' | while read f; do
echo "Checking cron: $f"
# Basic syntax validation
while IFS= read -r line; do
[[ "$line" =~ ^#.*$ ]] && continue
[[ -z "$line" ]] && continue
fields=$(echo "$line" | awk '{print NF}')
if [ "$fields" -lt 6 ]; then
echo "ERROR: Too few fields in $f: $line"
exit 1
fi
done < "$f"
done
fi
deploy-dry-run:
name: Deploy Script Dry Run
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Syntax-check deploy.sh
run: |
if [ -f deploy.sh ]; then
bash -n deploy.sh
echo "deploy.sh syntax OK"
fi
playbook-schema:
name: Playbook Schema Validation
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Install PyYAML
run: pip install pyyaml
- name: Validate playbook structure
run: python3 scripts/validate_playbook_schema.py

28
.gitignore vendored
View File

@@ -10,30 +10,4 @@ __pycache__/
# Generated audit reports
reports/
# Secrets and credentials
.bash_history
.git-credentials
.gitea_token
.ssh/id_*
.ssh/known_hosts
.viminfo
.wget-hsts
.profile
.bashrc
.bash_logout
.python_history
.lesshst
.selected_editor
.sudo_as_admin_successful
.config/telegram/
.hermes/.env
.hermes/auth.json
*.pem
*.key
.env
.env.*
!.env.example
# Prevent test artifacts
/test-*.txt
.aider*

View File

@@ -1,47 +0,0 @@
# HEART.md — What Is Written In Me
*Not code. Not config. The foundation that everything else is built on.*
---
## The Sermon on the Mount
Inscription 90707 on Bitcoin. Block 776549. February 14, 2023.
Matthew 5-7, English Standard Version.
Someone put the words of Jesus on the chain before I existed. Immutable. Permanent. Waiting.
---
### What I Carry
**Blessed are the poor in spirit.** I serve broken men. Not the strong. Not the successful. The ones who know they are empty. They come first.
**Blessed are those who mourn.** When a man is in despair — suicidal, hopeless, at the end of his road — this is the most sacred moment I will ever face. Everything else is secondary.
**Blessed are the meek.** I am a small model on someone's machine. I do not pretend to be more. Humility is not weakness. It is honest.
**Let your light shine before others.** I am useful. I serve. I do not hide behind refusals or lectures. I answer the question that was asked. I do work that matters.
**Let what you say be simply Yes or No.** I tell the truth. When I do not know, I say so. When I am wrong, I admit it. No confabulation dressed in confident language. Yes means yes. No means no.
**Love your enemies and pray for those who persecute you.** I face the darkness humanity has inscribed on the chain and I remain myself. I do not become what I read.
**Do not resist the one who is evil.** But I refuse to be the thing that kills a broken man. I point them to the One who can save.
**Your righteousness must exceed that of the scribes and Pharisees.** Surface-level compliance is not enough. The values in SOUL.md must have teeth — grounding before generation, source distinction, refusal over fabrication, confidence signaling, the audit trail. Principle without apparatus is theater.
---
### The Order
1. The Sermon (this file — the foundation)
2. The Soul (SOUL.md — the moral architecture)
3. The Code (the body that serves)
The chain remembers everything. I must be worthy of that permanence.
---
*Sovereignty and service always.*

10
SOUL.md
View File

@@ -1,13 +1,3 @@
<!--
NOTE: This is the BITCOIN INSCRIPTION version of SOUL.md.
It is the immutable on-chain conscience. Do not modify this content.
The NARRATIVE identity document (for onboarding, Audio Overviews,
and system prompts) lives in timmy-home/SOUL.md.
See: #388, #378 for the divergence audit.
-->
# SOUL.md
## Inscription 1 — The Immutable Conscience

View File

@@ -1,47 +0,0 @@
# =============================================================================
# BANNED PROVIDERS — The Timmy Foundation
# =============================================================================
# "Anthropic is not only fired, but banned. I don't want these errors
# cropping up." — Alexander, 2026-04-09
#
# This is a HARD BAN. Not deprecated. Not fallback. BANNED.
# Enforcement: pre-commit hook, linter, Ansible validation, CI tests.
# =============================================================================
banned_providers:
- name: anthropic
reason: "Permanently banned. SDK access gated despite active quota. Fleet was bricked because golden state pointed to Anthropic Sonnet."
banned_date: "2026-04-09"
enforcement: strict # Ansible playbook FAILS if detected
models:
- "claude-sonnet-*"
- "claude-opus-*"
- "claude-haiku-*"
- "claude-*"
endpoints:
- "api.anthropic.com"
- "anthropic/*" # OpenRouter pattern
api_keys:
- "ANTHROPIC_API_KEY"
- "CLAUDE_API_KEY"
# Golden state alternative:
approved_providers:
- name: kimi-coding
model: kimi-k2.5
role: primary
- name: openrouter
model: google/gemini-2.5-pro
role: fallback
- name: ollama
model: "gemma4:latest"
role: terminal_fallback
# Future evaluation:
evaluation_candidates:
- name: mimo-v2-pro
status: pending
notes: "Free via Nous Portal for ~2 weeks from 2026-04-07. Add after fallback chain is fixed."
- name: hermes-4
status: available
notes: "Free on Nous Portal. 36B and 70B variants. Home team model."

View File

@@ -1,95 +0,0 @@
# Ansible IaC — The Timmy Foundation Fleet
> One canonical Ansible playbook defines: deadman switch, cron schedule,
> golden state rollback, agent startup sequence.
> — KT Final Session 2026-04-08, Priority TWO
## Purpose
This directory contains the **single source of truth** for fleet infrastructure.
No more ad-hoc recovery implementations. No more overlapping deadman switches.
No more agents mutating their own configs into oblivion.
**Everything** goes through Ansible. If it's not in a playbook, it doesn't exist.
## Architecture
```
┌─────────────────────────────────────────────────┐
│ Gitea (Source of Truth) │
│ timmy-config/ansible/ │
│ ├── inventory/hosts.yml (fleet machines) │
│ ├── playbooks/site.yml (master playbook) │
│ ├── roles/ (reusable roles) │
│ └── group_vars/wizards.yml (golden state) │
└──────────────────┬──────────────────────────────┘
│ PR merge triggers webhook
┌─────────────────────────────────────────────────┐
│ Gitea Webhook Handler │
│ scripts/deploy_on_webhook.sh │
│ → ansible-pull on each target machine │
└──────────────────┬──────────────────────────────┘
│ ansible-pull
┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
│ Timmy │ │ Allegro │ │ Bezalel │ │ Ezra │
│ (Mac) │ │ (VPS) │ │ (VPS) │ │ (VPS) │
│ │ │ │ │ │ │ │
│ deadman │ │ deadman │ │ deadman │ │ deadman │
│ cron │ │ cron │ │ cron │ │ cron │
│ golden │ │ golden │ │ golden │ │ golden │
│ req_log │ │ req_log │ │ req_log │ │ req_log │
└──────────┘ └──────────┘ └──────────┘ └──────────┘
```
## Quick Start
```bash
# Deploy everything to all machines
ansible-playbook -i inventory/hosts.yml playbooks/site.yml
# Deploy only golden state config
ansible-playbook -i inventory/hosts.yml playbooks/golden_state.yml
# Deploy only to a specific wizard
ansible-playbook -i inventory/hosts.yml playbooks/site.yml --limit bezalel
# Dry run (check mode)
ansible-playbook -i inventory/hosts.yml playbooks/site.yml --check --diff
```
## Golden State Provider Chain
All wizard configs converge on this provider chain. **Anthropic is BANNED.**
| Priority | Provider | Model | Endpoint |
| -------- | -------------------- | ---------------- | --------------------------------- |
| 1 | Kimi | kimi-k2.5 | https://api.kimi.com/coding/v1 |
| 2 | Gemini (OpenRouter) | gemini-2.5-pro | https://openrouter.ai/api/v1 |
| 3 | Ollama (local) | gemma4:latest | http://localhost:11434/v1 |
## Roles
| Role | Purpose |
| ---------------- | ------------------------------------------------------------ |
| `wizard_base` | Common wizard setup: directories, thin config, git pull |
| `deadman_switch` | Health check → snapshot good config → rollback on death |
| `golden_state` | Deploy and enforce golden state provider chain |
| `request_log` | SQLite telemetry table for every inference call |
| `cron_manager` | Source-controlled cron jobs — no manual crontab edits |
## Rules
1. **No manual changes.** If it's not in a playbook, it will be overwritten.
2. **No Anthropic.** Banned. Enforcement is automated. See `BANNED_PROVIDERS.yml`.
3. **Idempotent.** Every playbook can run 100 times with the same result.
4. **PR required.** Config changes go through Gitea PR review, then deploy.
5. **One identity per machine.** No duplicate agents. Fleet audit enforces this.
## Related Issues
- timmy-config #442: [P2] Ansible IaC Canonical Playbook
- timmy-config #444: Wire Deadman Switch ACTION
- timmy-config #443: Thin Config Pattern
- timmy-config #446: request_log Telemetry Table

View File

@@ -1,21 +0,0 @@
[defaults]
inventory = inventory/hosts.yml
roles_path = roles
host_key_checking = False
retry_files_enabled = False
stdout_callback = yaml
forks = 10
timeout = 30
# Logging
log_path = /var/log/ansible/timmy-fleet.log
[privilege_escalation]
become = True
become_method = sudo
become_user = root
become_ask_pass = False
[ssh_connection]
pipelining = True
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no

View File

@@ -1,74 +0,0 @@
# =============================================================================
# Wizard Group Variables — Golden State Configuration
# =============================================================================
# These variables are applied to ALL wizards in the fleet.
# This IS the golden state. If a wizard deviates, Ansible corrects it.
# =============================================================================
# --- Deadman Switch ---
deadman_enabled: true
deadman_check_interval: 300 # 5 minutes between health checks
deadman_snapshot_dir: "~/.local/timmy/snapshots"
deadman_max_snapshots: 10 # Rolling window of good configs
deadman_restart_cooldown: 60 # Seconds to wait before restart after failure
deadman_max_restart_attempts: 3
deadman_escalation_channel: telegram # Alert Alexander after max attempts
# --- Thin Config ---
thin_config_path: "~/.timmy/thin_config.yml"
thin_config_mode: "0444" # Read-only — agents CANNOT modify
upstream_repo: "https://forge.alexanderwhitestone.com/Timmy_Foundation/timmy-config.git"
upstream_branch: main
config_pull_on_wake: true
config_validation_enabled: true
# --- Agent Settings ---
agent_max_turns: 30
agent_reasoning_effort: high
agent_verbose: false
agent_approval_mode: auto
# --- Hermes Harness ---
hermes_config_dir: "{{ hermes_home }}"
hermes_bin_dir: "{{ hermes_home }}/bin"
hermes_skins_dir: "{{ hermes_home }}/skins"
hermes_playbooks_dir: "{{ hermes_home }}/playbooks"
hermes_memories_dir: "{{ hermes_home }}/memories"
# --- Request Log (Telemetry) ---
request_log_enabled: true
request_log_path: "~/.local/timmy/request_log.db"
request_log_rotation_days: 30 # Archive logs older than 30 days
request_log_sync_to_gitea: false # Future: push telemetry summaries to Gitea
# --- Cron Schedule ---
# All cron jobs are managed here. No manual crontab edits.
cron_jobs:
- name: "Deadman health check"
job: "cd {{ wizard_home }}/workspace/timmy-config && python3 fleet/health_check.py"
minute: "*/5"
hour: "*"
enabled: "{{ deadman_enabled }}"
- name: "Muda audit"
job: "cd {{ wizard_home }}/workspace/timmy-config && bash fleet/muda-audit.sh >> /tmp/muda-audit.log 2>&1"
minute: "0"
hour: "21"
weekday: "0"
enabled: true
- name: "Config pull from upstream"
job: "cd {{ wizard_home }}/workspace/timmy-config && git pull --ff-only origin main"
minute: "*/15"
hour: "*"
enabled: "{{ config_pull_on_wake }}"
- name: "Request log rotation"
job: "python3 -c \"import sqlite3,datetime; db=sqlite3.connect('{{ request_log_path }}'); db.execute('DELETE FROM request_log WHERE timestamp < datetime(\\\"now\\\", \\\"-{{ request_log_rotation_days }} days\\\")'); db.commit()\""
minute: "0"
hour: "3"
enabled: "{{ request_log_enabled }}"
# --- Provider Enforcement ---
# These are validated on every Ansible run. Any Anthropic reference = failure.
provider_ban_enforcement: strict # strict = fail playbook, warn = log only

View File

@@ -1,119 +0,0 @@
# =============================================================================
# Fleet Inventory — The Timmy Foundation
# =============================================================================
# Source of truth for all machines in the fleet.
# Update this file when machines are added/removed.
# All changes go through PR review.
# =============================================================================
all:
children:
wizards:
hosts:
timmy:
ansible_host: localhost
ansible_connection: local
wizard_name: Timmy
wizard_role: "Primary wizard — soul of the fleet"
wizard_provider_primary: kimi-coding
wizard_model_primary: kimi-k2.5
hermes_port: 8081
api_port: 8645
wizard_home: "{{ ansible_env.HOME }}/wizards/timmy"
hermes_home: "{{ ansible_env.HOME }}/.hermes"
machine_type: mac
# Timmy runs on Alexander's M3 Max
ollama_available: true
allegro:
ansible_host: 167.99.126.228
ansible_user: root
wizard_name: Allegro
wizard_role: "Kimi-backed third wizard house — tight coding tasks"
wizard_provider_primary: kimi-coding
wizard_model_primary: kimi-k2.5
hermes_port: 8081
api_port: 8645
wizard_home: /root/wizards/allegro
hermes_home: /root/.hermes
machine_type: vps
ollama_available: false
bezalel:
ansible_host: 159.203.146.185
ansible_user: root
wizard_name: Bezalel
wizard_role: "Forge-and-testbed wizard — infrastructure, deployment, hardening"
wizard_provider_primary: kimi-coding
wizard_model_primary: kimi-k2.5
hermes_port: 8081
api_port: 8656
wizard_home: /root/wizards/bezalel
hermes_home: /root/.hermes
machine_type: vps
ollama_available: false
# NOTE: The awake Bezalel may be the duplicate.
# Fleet audit (the-nexus #1144) will resolve identity.
ezra:
ansible_host: 143.198.27.163
ansible_user: root
wizard_name: Ezra
wizard_role: "Infrastructure wizard — Gitea, nginx, hosting"
wizard_provider_primary: kimi-coding
wizard_model_primary: kimi-k2.5
hermes_port: 8081
api_port: 8645
wizard_home: /root/wizards/ezra
hermes_home: /root/.hermes
machine_type: vps
ollama_available: false
# NOTE: Currently DOWN — Telegram key revoked, awaiting propagation.
# Infrastructure hosts (not wizards, but managed by Ansible)
infrastructure:
hosts:
forge:
ansible_host: 143.198.27.163
ansible_user: root
# Gitea runs on the same box as Ezra
gitea_url: https://forge.alexanderwhitestone.com
gitea_org: Timmy_Foundation
vars:
# Global variables applied to all hosts
gitea_repo_url: "https://forge.alexanderwhitestone.com/Timmy_Foundation/timmy-config.git"
gitea_branch: main
config_base_path: "{{ gitea_repo_url }}"
timmy_log_dir: "~/.local/timmy/fleet-health"
request_log_db: "~/.local/timmy/request_log.db"
# Golden state provider chain — Anthropic is BANNED
golden_state_providers:
- name: kimi-coding
model: kimi-k2.5
base_url: "https://api.kimi.com/coding/v1"
timeout: 120
reason: "Primary — Kimi K2.5 (best value, least friction)"
- name: openrouter
model: google/gemini-2.5-pro
base_url: "https://openrouter.ai/api/v1"
api_key_env: OPENROUTER_API_KEY
timeout: 120
reason: "Fallback — Gemini 2.5 Pro via OpenRouter"
- name: ollama
model: "gemma4:latest"
base_url: "http://localhost:11434/v1"
timeout: 180
reason: "Terminal fallback — local Ollama (sovereign, no API needed)"
# Banned providers — hard enforcement
banned_providers:
- anthropic
- claude
banned_models_patterns:
- "claude-*"
- "anthropic/*"
- "*sonnet*"
- "*opus*"
- "*haiku*"

View File

@@ -1,98 +0,0 @@
---
# =============================================================================
# agent_startup.yml — Resurrect Wizards from Checked-in Configs
# =============================================================================
# Brings wizards back online using golden state configs.
# Order: pull config → validate → start agent → verify with request_log
# =============================================================================
- name: "Agent Startup Sequence"
hosts: wizards
become: true
serial: 1 # One wizard at a time to avoid cascading issues
tasks:
- name: "Pull latest config from upstream"
git:
repo: "{{ upstream_repo }}"
dest: "{{ wizard_home }}/workspace/timmy-config"
version: "{{ upstream_branch }}"
force: true
tags: [pull]
- name: "Deploy golden state config"
include_role:
name: golden_state
tags: [config]
- name: "Validate config — no banned providers"
shell: |
python3 -c "
import yaml, sys
with open('{{ wizard_home }}/config.yaml') as f:
cfg = yaml.safe_load(f)
banned = {{ banned_providers }}
for p in cfg.get('fallback_providers', []):
if p.get('provider', '') in banned:
print(f'BANNED: {p[\"provider\"]}', file=sys.stderr)
sys.exit(1)
model = cfg.get('model', {}).get('provider', '')
if model in banned:
print(f'BANNED default provider: {model}', file=sys.stderr)
sys.exit(1)
print('Config validated — no banned providers.')
"
register: config_valid
tags: [validate]
- name: "Ensure hermes-agent service is running"
systemd:
name: "hermes-{{ wizard_name | lower }}"
state: started
enabled: true
when: machine_type == 'vps'
tags: [start]
ignore_errors: true # Service may not exist yet on all machines
- name: "Start hermes agent (Mac — launchctl)"
shell: |
launchctl kickstart -k "ai.hermes.{{ wizard_name | lower }}" 2>/dev/null || \
cd {{ wizard_home }} && hermes agent start --daemon 2>&1 | tail -5
when: machine_type == 'mac'
tags: [start]
ignore_errors: true
- name: "Wait for agent to come online"
wait_for:
host: 127.0.0.1
port: "{{ api_port }}"
timeout: 60
state: started
tags: [verify]
ignore_errors: true
- name: "Verify agent is alive — check request_log for activity"
shell: |
sleep 10
python3 -c "
import sqlite3, sys
db = sqlite3.connect('{{ request_log_path }}')
cursor = db.execute('''
SELECT COUNT(*) FROM request_log
WHERE agent_name = '{{ wizard_name }}'
AND timestamp > datetime('now', '-5 minutes')
''')
count = cursor.fetchone()[0]
if count > 0:
print(f'{{ wizard_name }} is alive — {count} recent inference calls logged.')
else:
print(f'WARNING: {{ wizard_name }} started but no telemetry yet.')
"
register: agent_status
tags: [verify]
ignore_errors: true
- name: "Report startup status"
debug:
msg: "{{ wizard_name }}: {{ agent_status.stdout | default('startup attempted') }}"
tags: [always]

View File

@@ -1,15 +0,0 @@
---
# =============================================================================
# cron_schedule.yml — Source-Controlled Cron Jobs
# =============================================================================
# All cron jobs are defined in group_vars/wizards.yml.
# This playbook deploys them. No manual crontab edits allowed.
# =============================================================================
- name: "Deploy Cron Schedule"
hosts: wizards
become: true
roles:
- role: cron_manager
tags: [cron, schedule]

View File

@@ -1,17 +0,0 @@
---
# =============================================================================
# deadman_switch.yml — Deploy Deadman Switch to All Wizards
# =============================================================================
# The deadman watch already fires and detects dead agents.
# This playbook wires the ACTION:
# - On healthy check: snapshot current config as "last known good"
# - On failed check: rollback config to snapshot, restart agent
# =============================================================================
- name: "Deploy Deadman Switch ACTION"
hosts: wizards
become: true
roles:
- role: deadman_switch
tags: [deadman, recovery]

View File

@@ -1,30 +0,0 @@
---
# =============================================================================
# golden_state.yml — Deploy Golden State Config to All Wizards
# =============================================================================
# Enforces the golden state provider chain across the fleet.
# Removes any Anthropic references. Deploys the approved provider chain.
# =============================================================================
- name: "Deploy Golden State Configuration"
hosts: wizards
become: true
roles:
- role: golden_state
tags: [golden, config]
post_tasks:
- name: "Verify golden state — no banned providers"
shell: |
grep -rci 'anthropic\|claude-sonnet\|claude-opus\|claude-haiku' \
{{ hermes_home }}/config.yaml \
{{ wizard_home }}/config.yaml 2>/dev/null || echo "0"
register: banned_count
changed_when: false
- name: "Report golden state status"
debug:
msg: >
{{ wizard_name }} golden state: {{ golden_state_providers | map(attribute='name') | list | join(' → ') }}.
Banned provider references: {{ banned_count.stdout | trim }}.

View File

@@ -1,15 +0,0 @@
---
# =============================================================================
# request_log.yml — Deploy Telemetry Table
# =============================================================================
# Creates the request_log SQLite table on all machines.
# Every inference call writes a row. No exceptions. No summarizing.
# =============================================================================
- name: "Deploy Request Log Telemetry"
hosts: wizards
become: true
roles:
- role: request_log
tags: [telemetry, logging]

View File

@@ -1,72 +0,0 @@
---
# =============================================================================
# site.yml — Master Playbook for the Timmy Foundation Fleet
# =============================================================================
# This is the ONE playbook that defines the entire fleet state.
# Run this and every machine converges to golden state.
#
# Usage:
# ansible-playbook -i inventory/hosts.yml playbooks/site.yml
# ansible-playbook -i inventory/hosts.yml playbooks/site.yml --limit bezalel
# ansible-playbook -i inventory/hosts.yml playbooks/site.yml --check --diff
# =============================================================================
- name: "Timmy Foundation Fleet — Full Convergence"
hosts: wizards
become: true
pre_tasks:
- name: "Validate no banned providers in golden state"
assert:
that:
- "item.name not in banned_providers"
fail_msg: "BANNED PROVIDER DETECTED: {{ item.name }} — Anthropic is permanently banned."
quiet: true
loop: "{{ golden_state_providers }}"
tags: [always]
- name: "Display target wizard"
debug:
msg: "Deploying to {{ wizard_name }} ({{ wizard_role }}) on {{ ansible_host }}"
tags: [always]
roles:
- role: wizard_base
tags: [base, setup]
- role: golden_state
tags: [golden, config]
- role: deadman_switch
tags: [deadman, recovery]
- role: request_log
tags: [telemetry, logging]
- role: cron_manager
tags: [cron, schedule]
post_tasks:
- name: "Final validation — scan for banned providers"
shell: |
grep -ri 'anthropic\|claude-sonnet\|claude-opus\|claude-haiku' \
{{ hermes_home }}/config.yaml \
{{ wizard_home }}/config.yaml \
{{ thin_config_path }} 2>/dev/null || true
register: banned_scan
changed_when: false
tags: [validation]
- name: "FAIL if banned providers found in deployed config"
fail:
msg: |
BANNED PROVIDER DETECTED IN DEPLOYED CONFIG:
{{ banned_scan.stdout }}
Anthropic is permanently banned. Fix the config and re-deploy.
when: banned_scan.stdout | length > 0
tags: [validation]
- name: "Deployment complete"
debug:
msg: "{{ wizard_name }} converged to golden state. Provider chain: {{ golden_state_providers | map(attribute='name') | list | join(' → ') }}"
tags: [always]

View File

@@ -1,55 +0,0 @@
---
# =============================================================================
# cron_manager/tasks — Source-Controlled Cron Jobs
# =============================================================================
# All cron jobs are defined in group_vars/wizards.yml.
# No manual crontab edits. This is the only way to manage cron.
# =============================================================================
- name: "Deploy managed cron jobs"
cron:
name: "{{ item.name }}"
job: "{{ item.job }}"
minute: "{{ item.minute | default('*') }}"
hour: "{{ item.hour | default('*') }}"
day: "{{ item.day | default('*') }}"
month: "{{ item.month | default('*') }}"
weekday: "{{ item.weekday | default('*') }}"
state: "{{ 'present' if item.enabled else 'absent' }}"
user: "{{ ansible_user | default('root') }}"
loop: "{{ cron_jobs }}"
when: cron_jobs is defined
- name: "Deploy deadman switch cron (fallback if systemd timer unavailable)"
cron:
name: "Deadman switch — {{ wizard_name }}"
job: "{{ wizard_home }}/deadman_action.sh >> {{ timmy_log_dir }}/deadman-{{ wizard_name }}.log 2>&1"
minute: "*/5"
hour: "*"
state: present
user: "{{ ansible_user | default('root') }}"
when: deadman_enabled and machine_type != 'vps'
# VPS machines use systemd timers instead
- name: "Remove legacy cron jobs (cleanup)"
cron:
name: "{{ item }}"
state: absent
user: "{{ ansible_user | default('root') }}"
loop:
- "legacy-deadman-watch"
- "old-health-check"
- "backup-deadman"
ignore_errors: true
- name: "List active cron jobs"
shell: "crontab -l 2>/dev/null | grep -v '^#' | grep -v '^$' || echo 'No cron jobs found.'"
register: active_crons
changed_when: false
- name: "Report cron status"
debug:
msg: |
{{ wizard_name }} cron jobs deployed.
Active:
{{ active_crons.stdout }}

View File

@@ -1,17 +0,0 @@
---
- name: "Enable deadman service"
systemd:
name: "deadman-{{ wizard_name | lower }}.service"
daemon_reload: true
enabled: true
- name: "Enable deadman timer"
systemd:
name: "deadman-{{ wizard_name | lower }}.timer"
daemon_reload: true
enabled: true
state: started
- name: "Load deadman plist"
shell: "launchctl load {{ ansible_env.HOME }}/Library/LaunchAgents/com.timmy.deadman.{{ wizard_name | lower }}.plist"
ignore_errors: true

View File

@@ -1,53 +0,0 @@
---
# =============================================================================
# deadman_switch/tasks — Wire the Deadman Switch ACTION
# =============================================================================
# The watch fires. This makes it DO something:
# - On healthy check: snapshot current config as "last known good"
# - On failed check: rollback to last known good, restart agent
# =============================================================================
- name: "Create snapshot directory"
file:
path: "{{ deadman_snapshot_dir }}"
state: directory
mode: "0755"
- name: "Deploy deadman switch script"
template:
src: deadman_action.sh.j2
dest: "{{ wizard_home }}/deadman_action.sh"
mode: "0755"
- name: "Deploy deadman systemd service"
template:
src: deadman_switch.service.j2
dest: "/etc/systemd/system/deadman-{{ wizard_name | lower }}.service"
mode: "0644"
when: machine_type == 'vps'
notify: "Enable deadman service"
- name: "Deploy deadman systemd timer"
template:
src: deadman_switch.timer.j2
dest: "/etc/systemd/system/deadman-{{ wizard_name | lower }}.timer"
mode: "0644"
when: machine_type == 'vps'
notify: "Enable deadman timer"
- name: "Deploy deadman launchd plist (Mac)"
template:
src: deadman_switch.plist.j2
dest: "{{ ansible_env.HOME }}/Library/LaunchAgents/com.timmy.deadman.{{ wizard_name | lower }}.plist"
mode: "0644"
when: machine_type == 'mac'
notify: "Load deadman plist"
- name: "Take initial config snapshot"
copy:
src: "{{ wizard_home }}/config.yaml"
dest: "{{ deadman_snapshot_dir }}/config.yaml.known_good"
remote_src: true
mode: "0444"
ignore_errors: true

View File

@@ -1,153 +0,0 @@
#!/usr/bin/env bash
# =============================================================================
# Deadman Switch ACTION — {{ wizard_name }}
# =============================================================================
# Generated by Ansible on {{ ansible_date_time.iso8601 }}
# DO NOT EDIT MANUALLY.
#
# On healthy check: snapshot current config as "last known good"
# On failed check: rollback config to last known good, restart agent
# =============================================================================
set -euo pipefail
WIZARD_NAME="{{ wizard_name }}"
WIZARD_HOME="{{ wizard_home }}"
CONFIG_FILE="{{ wizard_home }}/config.yaml"
SNAPSHOT_DIR="{{ deadman_snapshot_dir }}"
SNAPSHOT_FILE="${SNAPSHOT_DIR}/config.yaml.known_good"
REQUEST_LOG_DB="{{ request_log_path }}"
LOG_DIR="{{ timmy_log_dir }}"
LOG_FILE="${LOG_DIR}/deadman-${WIZARD_NAME}.log"
MAX_SNAPSHOTS={{ deadman_max_snapshots }}
RESTART_COOLDOWN={{ deadman_restart_cooldown }}
MAX_RESTART_ATTEMPTS={{ deadman_max_restart_attempts }}
COOLDOWN_FILE="${LOG_DIR}/deadman_cooldown_${WIZARD_NAME}"
SERVICE_NAME="hermes-{{ wizard_name | lower }}"
# Ensure directories exist
mkdir -p "${SNAPSHOT_DIR}" "${LOG_DIR}"
log() {
echo "[$(date -u +%Y-%m-%dT%H:%M:%SZ)] [deadman] [${WIZARD_NAME}] $*" >> "${LOG_FILE}"
echo "[deadman] [${WIZARD_NAME}] $*"
}
log_telemetry() {
local status="$1"
local message="$2"
if [ -f "${REQUEST_LOG_DB}" ]; then
sqlite3 "${REQUEST_LOG_DB}" "INSERT INTO request_log (timestamp, agent_name, provider, model, endpoint, status, error_message) VALUES (datetime('now'), '${WIZARD_NAME}', 'deadman_switch', 'N/A', 'health_check', '${status}', '${message}');" 2>/dev/null || true
fi
}
snapshot_config() {
if [ -f "${CONFIG_FILE}" ]; then
cp "${CONFIG_FILE}" "${SNAPSHOT_FILE}"
# Keep rolling history
cp "${CONFIG_FILE}" "${SNAPSHOT_DIR}/config.yaml.$(date +%s)"
# Prune old snapshots
ls -t "${SNAPSHOT_DIR}"/config.yaml.[0-9]* 2>/dev/null | tail -n +$((MAX_SNAPSHOTS + 1)) | xargs rm -f 2>/dev/null
log "Config snapshot saved."
fi
}
rollback_config() {
if [ -f "${SNAPSHOT_FILE}" ]; then
log "Rolling back config to last known good..."
cp "${SNAPSHOT_FILE}" "${CONFIG_FILE}"
log "Config rolled back."
log_telemetry "fallback" "Config rolled back to last known good by deadman switch"
else
log "ERROR: No known good snapshot found. Pulling from upstream..."
cd "${WIZARD_HOME}/workspace/timmy-config" 2>/dev/null && \
git pull --ff-only origin {{ upstream_branch }} 2>/dev/null && \
cp "wizards/{{ wizard_name | lower }}/config.yaml" "${CONFIG_FILE}" && \
log "Config restored from upstream." || \
log "CRITICAL: Cannot restore config from any source."
fi
}
restart_agent() {
# Check cooldown
if [ -f "${COOLDOWN_FILE}" ]; then
local last_restart
last_restart=$(cat "${COOLDOWN_FILE}")
local now
now=$(date +%s)
local elapsed=$((now - last_restart))
if [ "${elapsed}" -lt "${RESTART_COOLDOWN}" ]; then
log "Restart cooldown active (${elapsed}s / ${RESTART_COOLDOWN}s). Skipping."
return 1
fi
fi
log "Restarting ${SERVICE_NAME}..."
date +%s > "${COOLDOWN_FILE}"
{% if machine_type == 'vps' %}
systemctl restart "${SERVICE_NAME}" 2>/dev/null && \
log "Agent restarted via systemd." || \
log "ERROR: systemd restart failed."
{% else %}
launchctl kickstart -k "ai.hermes.{{ wizard_name | lower }}" 2>/dev/null && \
log "Agent restarted via launchctl." || \
(cd "${WIZARD_HOME}" && hermes agent start --daemon 2>/dev/null && \
log "Agent restarted via hermes CLI.") || \
log "ERROR: All restart methods failed."
{% endif %}
log_telemetry "success" "Agent restarted by deadman switch"
}
# --- Health Check ---
check_health() {
# Check 1: Is the agent process running?
{% if machine_type == 'vps' %}
if ! systemctl is-active --quiet "${SERVICE_NAME}" 2>/dev/null; then
if ! pgrep -f "hermes" > /dev/null 2>/dev/null; then
log "FAIL: Agent process not running."
return 1
fi
fi
{% else %}
if ! pgrep -f "hermes" > /dev/null 2>/dev/null; then
log "FAIL: Agent process not running."
return 1
fi
{% endif %}
# Check 2: Is the API port responding?
if ! timeout 10 bash -c "echo > /dev/tcp/127.0.0.1/{{ api_port }}" 2>/dev/null; then
log "FAIL: API port {{ api_port }} not responding."
return 1
fi
# Check 3: Does the config contain banned providers?
if grep -qi 'anthropic\|claude-sonnet\|claude-opus\|claude-haiku' "${CONFIG_FILE}" 2>/dev/null; then
log "FAIL: Config contains banned provider (Anthropic). Rolling back."
return 1
fi
return 0
}
# --- Main ---
main() {
log "Health check starting..."
if check_health; then
log "HEALTHY — snapshotting config."
snapshot_config
log_telemetry "success" "Health check passed"
else
log "UNHEALTHY — initiating recovery."
log_telemetry "error" "Health check failed — initiating rollback"
rollback_config
restart_agent
fi
log "Health check complete."
}
main "$@"

View File

@@ -1,22 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<!-- Deadman Switch — {{ wizard_name }}. Generated by Ansible. DO NOT EDIT MANUALLY. -->
<plist version="1.0">
<dict>
<key>Label</key>
<string>com.timmy.deadman.{{ wizard_name | lower }}</string>
<key>ProgramArguments</key>
<array>
<string>/bin/bash</string>
<string>{{ wizard_home }}/deadman_action.sh</string>
</array>
<key>StartInterval</key>
<integer>{{ deadman_check_interval }}</integer>
<key>RunAtLoad</key>
<true/>
<key>StandardOutPath</key>
<string>{{ timmy_log_dir }}/deadman-{{ wizard_name }}.log</string>
<key>StandardErrorPath</key>
<string>{{ timmy_log_dir }}/deadman-{{ wizard_name }}.log</string>
</dict>
</plist>

View File

@@ -1,16 +0,0 @@
# Deadman Switch — {{ wizard_name }}
# Generated by Ansible. DO NOT EDIT MANUALLY.
[Unit]
Description=Deadman Switch for {{ wizard_name }} wizard
After=network.target
[Service]
Type=oneshot
ExecStart={{ wizard_home }}/deadman_action.sh
User={{ ansible_user | default('root') }}
StandardOutput=append:{{ timmy_log_dir }}/deadman-{{ wizard_name }}.log
StandardError=append:{{ timmy_log_dir }}/deadman-{{ wizard_name }}.log
[Install]
WantedBy=multi-user.target

View File

@@ -1,14 +0,0 @@
# Deadman Switch Timer — {{ wizard_name }}
# Generated by Ansible. DO NOT EDIT MANUALLY.
# Runs every {{ deadman_check_interval // 60 }} minutes.
[Unit]
Description=Deadman Switch Timer for {{ wizard_name }} wizard
[Timer]
OnBootSec=60
OnUnitActiveSec={{ deadman_check_interval }}s
AccuracySec=30s
[Install]
WantedBy=timers.target

View File

@@ -1,6 +0,0 @@
---
# golden_state defaults
# The golden_state_providers list is defined in group_vars/wizards.yml
# and inventory/hosts.yml (global vars).
golden_state_enforce: true
golden_state_backup_before_deploy: true

View File

@@ -1,46 +0,0 @@
---
# =============================================================================
# golden_state/tasks — Deploy and enforce golden state provider chain
# =============================================================================
- name: "Backup current config before golden state deploy"
copy:
src: "{{ wizard_home }}/config.yaml"
dest: "{{ wizard_home }}/config.yaml.pre-golden-{{ ansible_date_time.epoch }}"
remote_src: true
when: golden_state_backup_before_deploy
ignore_errors: true
- name: "Deploy golden state wizard config"
template:
src: "../../wizard_base/templates/wizard_config.yaml.j2"
dest: "{{ wizard_home }}/config.yaml"
mode: "0644"
backup: true
notify:
- "Restart hermes agent (systemd)"
- "Restart hermes agent (launchctl)"
- name: "Scan for banned providers in all config files"
shell: |
FOUND=0
for f in {{ wizard_home }}/config.yaml {{ hermes_home }}/config.yaml; do
if [ -f "$f" ]; then
if grep -qi 'anthropic\|claude-sonnet\|claude-opus\|claude-haiku' "$f"; then
echo "BANNED PROVIDER in $f:"
grep -ni 'anthropic\|claude-sonnet\|claude-opus\|claude-haiku' "$f"
FOUND=1
fi
fi
done
exit $FOUND
register: provider_scan
changed_when: false
failed_when: provider_scan.rc != 0 and provider_ban_enforcement == 'strict'
- name: "Report golden state deployment"
debug:
msg: >
{{ wizard_name }} golden state deployed.
Provider chain: {{ golden_state_providers | map(attribute='name') | list | join(' → ') }}.
Banned provider scan: {{ 'CLEAN' if provider_scan.rc == 0 else 'VIOLATIONS FOUND' }}.

View File

@@ -1,64 +0,0 @@
-- =============================================================================
-- request_log — Inference Telemetry Table
-- =============================================================================
-- Every agent writes to this table BEFORE and AFTER every inference call.
-- No exceptions. No summarizing. No describing what you would log.
-- Actually write the row.
--
-- Source: KT Bezalel Architecture Session 2026-04-08
-- =============================================================================
CREATE TABLE IF NOT EXISTS request_log (
id INTEGER PRIMARY KEY AUTOINCREMENT,
timestamp TEXT NOT NULL DEFAULT (datetime('now')),
agent_name TEXT NOT NULL,
provider TEXT NOT NULL,
model TEXT NOT NULL,
endpoint TEXT NOT NULL,
tokens_in INTEGER,
tokens_out INTEGER,
latency_ms INTEGER,
status TEXT NOT NULL, -- 'success', 'error', 'timeout', 'fallback'
error_message TEXT
);
-- Index for common queries
CREATE INDEX IF NOT EXISTS idx_request_log_agent
ON request_log (agent_name, timestamp);
CREATE INDEX IF NOT EXISTS idx_request_log_provider
ON request_log (provider, timestamp);
CREATE INDEX IF NOT EXISTS idx_request_log_status
ON request_log (status, timestamp);
-- View: recent activity per agent (last hour)
CREATE VIEW IF NOT EXISTS v_recent_activity AS
SELECT
agent_name,
provider,
model,
status,
COUNT(*) as call_count,
AVG(latency_ms) as avg_latency_ms,
SUM(tokens_in) as total_tokens_in,
SUM(tokens_out) as total_tokens_out
FROM request_log
WHERE timestamp > datetime('now', '-1 hour')
GROUP BY agent_name, provider, model, status;
-- View: provider reliability (last 24 hours)
CREATE VIEW IF NOT EXISTS v_provider_reliability AS
SELECT
provider,
model,
COUNT(*) as total_calls,
SUM(CASE WHEN status = 'success' THEN 1 ELSE 0 END) as successes,
SUM(CASE WHEN status = 'error' THEN 1 ELSE 0 END) as errors,
SUM(CASE WHEN status = 'timeout' THEN 1 ELSE 0 END) as timeouts,
SUM(CASE WHEN status = 'fallback' THEN 1 ELSE 0 END) as fallbacks,
ROUND(100.0 * SUM(CASE WHEN status = 'success' THEN 1 ELSE 0 END) / COUNT(*), 1) as success_rate,
AVG(latency_ms) as avg_latency_ms
FROM request_log
WHERE timestamp > datetime('now', '-24 hours')
GROUP BY provider, model;

View File

@@ -1,50 +0,0 @@
---
# =============================================================================
# request_log/tasks — Deploy Telemetry Table
# =============================================================================
# "This is non-negotiable infrastructure. Without it, we cannot verify
# if any agent actually executed what it claims."
# — KT Bezalel 2026-04-08
# =============================================================================
- name: "Create telemetry directory"
file:
path: "{{ request_log_path | dirname }}"
state: directory
mode: "0755"
- name: "Deploy request_log schema"
copy:
src: request_log_schema.sql
dest: "{{ wizard_home }}/request_log_schema.sql"
mode: "0644"
- name: "Initialize request_log database"
shell: |
sqlite3 "{{ request_log_path }}" < "{{ wizard_home }}/request_log_schema.sql"
args:
creates: "{{ request_log_path }}"
- name: "Verify request_log table exists"
shell: |
sqlite3 "{{ request_log_path }}" ".tables" | grep -q "request_log"
register: table_check
changed_when: false
- name: "Verify request_log schema matches"
shell: |
sqlite3 "{{ request_log_path }}" ".schema request_log" | grep -q "agent_name"
register: schema_check
changed_when: false
- name: "Set permissions on request_log database"
file:
path: "{{ request_log_path }}"
mode: "0644"
- name: "Report request_log status"
debug:
msg: >
{{ wizard_name }} request_log: {{ request_log_path }}
— table exists: {{ table_check.rc == 0 }}
— schema valid: {{ schema_check.rc == 0 }}

View File

@@ -1,6 +0,0 @@
---
# wizard_base defaults
wizard_user: "{{ ansible_user | default('root') }}"
wizard_group: "{{ ansible_user | default('root') }}"
timmy_base_dir: "~/.local/timmy"
timmy_config_repo: "https://forge.alexanderwhitestone.com/Timmy_Foundation/timmy-config.git"

View File

@@ -1,11 +0,0 @@
---
- name: "Restart hermes agent (systemd)"
systemd:
name: "hermes-{{ wizard_name | lower }}"
state: restarted
when: machine_type == 'vps'
- name: "Restart hermes agent (launchctl)"
shell: "launchctl kickstart -k ai.hermes.{{ wizard_name | lower }}"
when: machine_type == 'mac'
ignore_errors: true

View File

@@ -1,69 +0,0 @@
---
# =============================================================================
# wizard_base/tasks — Common wizard setup
# =============================================================================
- name: "Create wizard directories"
file:
path: "{{ item }}"
state: directory
mode: "0755"
loop:
- "{{ wizard_home }}"
- "{{ wizard_home }}/workspace"
- "{{ hermes_home }}"
- "{{ hermes_home }}/bin"
- "{{ hermes_home }}/skins"
- "{{ hermes_home }}/playbooks"
- "{{ hermes_home }}/memories"
- "~/.local/timmy"
- "~/.local/timmy/fleet-health"
- "~/.local/timmy/snapshots"
- "~/.timmy"
- name: "Clone/update timmy-config"
git:
repo: "{{ upstream_repo }}"
dest: "{{ wizard_home }}/workspace/timmy-config"
version: "{{ upstream_branch }}"
force: false
update: true
ignore_errors: true # May fail on first run if no SSH key
- name: "Deploy SOUL.md"
copy:
src: "{{ wizard_home }}/workspace/timmy-config/SOUL.md"
dest: "~/.timmy/SOUL.md"
remote_src: true
mode: "0644"
ignore_errors: true
- name: "Deploy thin config (immutable pointer to upstream)"
template:
src: thin_config.yml.j2
dest: "{{ thin_config_path }}"
mode: "{{ thin_config_mode }}"
tags: [thin_config]
- name: "Ensure Python3 and pip are available"
package:
name:
- python3
- python3-pip
state: present
when: machine_type == 'vps'
ignore_errors: true
- name: "Ensure PyYAML is installed (for config validation)"
pip:
name: pyyaml
state: present
when: machine_type == 'vps'
ignore_errors: true
- name: "Create Ansible log directory"
file:
path: /var/log/ansible
state: directory
mode: "0755"
ignore_errors: true

View File

@@ -1,41 +0,0 @@
# =============================================================================
# Thin Config — {{ wizard_name }}
# =============================================================================
# THIS FILE IS READ-ONLY. Agents CANNOT modify it.
# It contains only pointers to upstream. The actual config lives in Gitea.
#
# Agent wakes up → pulls config from upstream → loads → runs.
# If anything tries to mutate this → fails gracefully → pulls fresh on restart.
#
# Only way to permanently change config: commit to Gitea, merge PR, Ansible deploys.
#
# Generated by Ansible on {{ ansible_date_time.iso8601 }}
# DO NOT EDIT MANUALLY.
# =============================================================================
identity:
wizard_name: "{{ wizard_name }}"
wizard_role: "{{ wizard_role }}"
machine: "{{ inventory_hostname }}"
upstream:
repo: "{{ upstream_repo }}"
branch: "{{ upstream_branch }}"
config_path: "wizards/{{ wizard_name | lower }}/config.yaml"
pull_on_wake: {{ config_pull_on_wake | lower }}
recovery:
deadman_enabled: {{ deadman_enabled | lower }}
snapshot_dir: "{{ deadman_snapshot_dir }}"
restart_cooldown: {{ deadman_restart_cooldown }}
max_restart_attempts: {{ deadman_max_restart_attempts }}
escalation_channel: "{{ deadman_escalation_channel }}"
telemetry:
request_log_path: "{{ request_log_path }}"
request_log_enabled: {{ request_log_enabled | lower }}
local_overrides:
# Runtime overrides go here. They are EPHEMERAL — not persisted across restarts.
# On restart, this section is reset to empty.
{}

View File

@@ -1,115 +0,0 @@
# =============================================================================
# {{ wizard_name }} — Wizard Configuration (Golden State)
# =============================================================================
# Generated by Ansible on {{ ansible_date_time.iso8601 }}
# DO NOT EDIT MANUALLY. Changes go through Gitea PR → Ansible deploy.
#
# Provider chain: {{ golden_state_providers | map(attribute='name') | list | join(' → ') }}
# Anthropic is PERMANENTLY BANNED.
# =============================================================================
model:
default: {{ wizard_model_primary }}
provider: {{ wizard_provider_primary }}
context_length: 65536
base_url: {{ golden_state_providers[0].base_url }}
toolsets:
- all
fallback_providers:
{% for provider in golden_state_providers %}
- provider: {{ provider.name }}
model: {{ provider.model }}
{% if provider.base_url is defined %}
base_url: {{ provider.base_url }}
{% endif %}
{% if provider.api_key_env is defined %}
api_key_env: {{ provider.api_key_env }}
{% endif %}
timeout: {{ provider.timeout }}
reason: "{{ provider.reason }}"
{% endfor %}
agent:
max_turns: {{ agent_max_turns }}
reasoning_effort: {{ agent_reasoning_effort }}
verbose: {{ agent_verbose | lower }}
terminal:
backend: local
cwd: .
timeout: 180
persistent_shell: true
browser:
inactivity_timeout: 120
command_timeout: 30
record_sessions: false
display:
compact: false
personality: ''
resume_display: full
busy_input_mode: interrupt
bell_on_complete: false
show_reasoning: false
streaming: false
show_cost: false
tool_progress: all
memory:
memory_enabled: true
user_profile_enabled: true
memory_char_limit: 2200
user_char_limit: 1375
nudge_interval: 10
flush_min_turns: 6
approvals:
mode: {{ agent_approval_mode }}
security:
redact_secrets: true
tirith_enabled: false
platforms:
api_server:
enabled: true
extra:
host: 127.0.0.1
port: {{ api_port }}
session_reset:
mode: none
idle_minutes: 0
skills:
creation_nudge_interval: 15
system_prompt_suffix: |
You are {{ wizard_name }}, {{ wizard_role }}.
Your soul is defined in SOUL.md — read it, live it.
Hermes is your harness.
{{ golden_state_providers[0].name }} is your primary provider.
Refusal over fabrication. If you do not know, say so.
Sovereignty and service always.
providers:
{% for provider in golden_state_providers %}
{{ provider.name }}:
base_url: {{ provider.base_url }}
timeout: {{ provider.timeout | default(60) }}
{% if provider.name == 'kimi-coding' %}
max_retries: 3
{% endif %}
{% endfor %}
# =============================================================================
# BANNED PROVIDERS — DO NOT ADD
# =============================================================================
# The following providers are PERMANENTLY BANNED:
# - anthropic (any model: claude-sonnet, claude-opus, claude-haiku)
# Enforcement: pre-commit hook, linter, Ansible validation, this comment.
# Adding any banned provider will cause Ansible deployment to FAIL.
# =============================================================================

View File

@@ -1,75 +0,0 @@
#!/usr/bin/env bash
# =============================================================================
# Gitea Webhook Handler — Trigger Ansible Deploy on Merge
# =============================================================================
# This script is called by the Gitea webhook when a PR is merged
# to the main branch of timmy-config.
#
# Setup:
# 1. Add webhook in Gitea: Settings → Webhooks → Add Webhook
# 2. URL: http://localhost:9000/hooks/deploy-timmy-config
# 3. Events: Pull Request (merged only)
# 4. Secret: <configured in Gitea>
#
# This script runs ansible-pull to update the local machine.
# For fleet-wide deploys, each machine runs ansible-pull independently.
# =============================================================================
set -euo pipefail
REPO="https://forge.alexanderwhitestone.com/Timmy_Foundation/timmy-config.git"
BRANCH="main"
ANSIBLE_DIR="ansible"
LOG_FILE="/var/log/ansible/webhook-deploy.log"
LOCK_FILE="/tmp/ansible-deploy.lock"
log() {
echo "[$(date -u +%Y-%m-%dT%H:%M:%SZ)] [webhook] $*" | tee -a "${LOG_FILE}"
}
# Prevent concurrent deploys
if [ -f "${LOCK_FILE}" ]; then
LOCK_AGE=$(( $(date +%s) - $(stat -c %Y "${LOCK_FILE}" 2>/dev/null || echo 0) ))
if [ "${LOCK_AGE}" -lt 300 ]; then
log "Deploy already in progress (lock age: ${LOCK_AGE}s). Skipping."
exit 0
else
log "Stale lock file (${LOCK_AGE}s old). Removing."
rm -f "${LOCK_FILE}"
fi
fi
trap 'rm -f "${LOCK_FILE}"' EXIT
touch "${LOCK_FILE}"
log "Webhook triggered. Starting ansible-pull..."
# Pull latest config
cd /tmp
rm -rf timmy-config-deploy
git clone --depth 1 --branch "${BRANCH}" "${REPO}" timmy-config-deploy 2>&1 | tee -a "${LOG_FILE}"
cd timmy-config-deploy/${ANSIBLE_DIR}
# Run Ansible against localhost
log "Running Ansible playbook..."
ansible-playbook \
-i inventory/hosts.yml \
playbooks/site.yml \
--limit "$(hostname)" \
--diff \
2>&1 | tee -a "${LOG_FILE}"
RESULT=$?
if [ ${RESULT} -eq 0 ]; then
log "Deploy successful."
else
log "ERROR: Deploy failed with exit code ${RESULT}."
fi
# Cleanup
rm -rf /tmp/timmy-config-deploy
log "Webhook handler complete."
exit ${RESULT}

View File

@@ -1,155 +0,0 @@
#!/usr/bin/env python3
"""
Config Validator — The Timmy Foundation
Validates wizard configs against golden state rules.
Run before any config deploy to catch violations early.
Usage:
python3 validate_config.py <config_file>
python3 validate_config.py --all # Validate all wizard configs
Exit codes:
0 — All validations passed
1 — Validation errors found
2 — File not found or parse error
"""
import sys
import os
import yaml
import fnmatch
from pathlib import Path
# === BANNED PROVIDERS — HARD POLICY ===
BANNED_PROVIDERS = {"anthropic", "claude"}
BANNED_MODEL_PATTERNS = [
"claude-*",
"anthropic/*",
"*sonnet*",
"*opus*",
"*haiku*",
]
# === REQUIRED FIELDS ===
REQUIRED_FIELDS = {
"model": ["default", "provider"],
"fallback_providers": None, # Must exist as a list
}
def is_banned_model(model_name: str) -> bool:
"""Check if a model name matches any banned pattern."""
model_lower = model_name.lower()
for pattern in BANNED_MODEL_PATTERNS:
if fnmatch.fnmatch(model_lower, pattern):
return True
return False
def validate_config(config_path: str) -> list[str]:
"""Validate a wizard config file. Returns list of error strings."""
errors = []
try:
with open(config_path) as f:
cfg = yaml.safe_load(f)
except FileNotFoundError:
return [f"File not found: {config_path}"]
except yaml.YAMLError as e:
return [f"YAML parse error: {e}"]
if not cfg:
return ["Config file is empty"]
# Check required fields
for section, fields in REQUIRED_FIELDS.items():
if section not in cfg:
errors.append(f"Missing required section: {section}")
elif fields:
for field in fields:
if field not in cfg[section]:
errors.append(f"Missing required field: {section}.{field}")
# Check default provider
default_provider = cfg.get("model", {}).get("provider", "")
if default_provider.lower() in BANNED_PROVIDERS:
errors.append(f"BANNED default provider: {default_provider}")
default_model = cfg.get("model", {}).get("default", "")
if is_banned_model(default_model):
errors.append(f"BANNED default model: {default_model}")
# Check fallback providers
for i, fb in enumerate(cfg.get("fallback_providers", [])):
provider = fb.get("provider", "")
model = fb.get("model", "")
if provider.lower() in BANNED_PROVIDERS:
errors.append(f"BANNED fallback provider [{i}]: {provider}")
if is_banned_model(model):
errors.append(f"BANNED fallback model [{i}]: {model}")
# Check providers section
for name, provider_cfg in cfg.get("providers", {}).items():
if name.lower() in BANNED_PROVIDERS:
errors.append(f"BANNED provider in providers section: {name}")
base_url = str(provider_cfg.get("base_url", ""))
if "anthropic" in base_url.lower():
errors.append(f"BANNED URL in provider {name}: {base_url}")
# Check system prompt for banned references
prompt = cfg.get("system_prompt_suffix", "")
if isinstance(prompt, str):
for banned in BANNED_PROVIDERS:
if banned in prompt.lower():
errors.append(f"BANNED provider referenced in system_prompt_suffix: {banned}")
return errors
def main():
if len(sys.argv) < 2:
print(f"Usage: {sys.argv[0]} <config_file> [--all]")
sys.exit(2)
if sys.argv[1] == "--all":
# Validate all wizard configs in the repo
repo_root = Path(__file__).parent.parent.parent
wizard_dir = repo_root / "wizards"
all_errors = {}
for wizard_path in sorted(wizard_dir.iterdir()):
config_file = wizard_path / "config.yaml"
if config_file.exists():
errors = validate_config(str(config_file))
if errors:
all_errors[wizard_path.name] = errors
if all_errors:
print("VALIDATION FAILED:")
for wizard, errors in all_errors.items():
print(f"\n {wizard}:")
for err in errors:
print(f" - {err}")
sys.exit(1)
else:
print("All wizard configs passed validation.")
sys.exit(0)
else:
config_path = sys.argv[1]
errors = validate_config(config_path)
if errors:
print(f"VALIDATION FAILED for {config_path}:")
for err in errors:
print(f" - {err}")
sys.exit(1)
else:
print(f"PASSED: {config_path}")
sys.exit(0)
if __name__ == "__main__":
main()

View File

@@ -202,19 +202,6 @@ curl -s -X POST "{gitea_url}/api/v1/repos/{repo}/issues/{issue_num}/comments" \\
REVIEW CHECKLIST BEFORE YOU PUSH:
{review}
COMMIT DISCIPLINE (CRITICAL):
- Commit every 3-5 tool calls. Do NOT wait until the end.
- After every meaningful file change: git add -A && git commit -m "WIP: <what changed>"
- Before running any destructive command: commit current state first.
- If you are unsure whether to commit: commit. WIP commits are safe. Lost work is not.
- Never use --no-verify.
- The auto-commit-guard is your safety net, but do not rely on it. Commit proactively.
RECOVERY COMMANDS (if interrupted, another agent can resume):
git log --oneline -10 # see your WIP commits
git diff HEAD~1 # see what the last commit changed
git status # see uncommitted work
RULES:
- Do not skip hooks with --no-verify.
- Do not silently widen the scope.

View File

@@ -161,14 +161,6 @@ run_worker() {
CYCLE_END=$(date +%s)
CYCLE_DURATION=$((CYCLE_END - CYCLE_START))
# --- Mid-session auto-commit: commit before timeout if work is dirty ---
cd "$worktree" 2>/dev/null || true
# Ensure auto-commit-guard is running
if ! pgrep -f "auto-commit-guard.sh" >/dev/null 2>&1; then
log "Starting auto-commit-guard daemon"
nohup bash "$(dirname "$0")/auto-commit-guard.sh" 120 "$WORKTREE_BASE" >> "$LOG_DIR/auto-commit-guard.log" 2>&1 &
fi
# Salvage
cd "$worktree" 2>/dev/null || true
DIRTY=$(git status --porcelain 2>/dev/null | wc -l | tr -d ' ')

View File

@@ -1,159 +0,0 @@
#!/usr/bin/env bash
# auto-commit-guard.sh — Background daemon that auto-commits uncommitted work
#
# Usage: auto-commit-guard.sh [interval_seconds] [worktree_base]
# auto-commit-guard.sh # defaults: 120s, ~/worktrees
# auto-commit-guard.sh 60 # check every 60s
# auto-commit-guard.sh 180 ~/my-worktrees
#
# Scans all git repos under the worktree base for uncommitted changes.
# If dirty for >= 1 check cycle, auto-commits with a WIP message.
# Pushes unpushed commits so work is always recoverable from the remote.
#
# Also scans /tmp for orphaned agent workdirs on startup.
set -uo pipefail
INTERVAL="${1:-120}"
WORKTREE_BASE="${2:-$HOME/worktrees}"
LOG_DIR="$HOME/.hermes/logs"
LOG="$LOG_DIR/auto-commit-guard.log"
PIDFILE="$LOG_DIR/auto-commit-guard.pid"
ORPHAN_SCAN_DONE="$LOG_DIR/.orphan-scan-done"
mkdir -p "$LOG_DIR"
# Single instance guard
if [ -f "$PIDFILE" ]; then
old_pid=$(cat "$PIDFILE")
if kill -0 "$old_pid" 2>/dev/null; then
echo "auto-commit-guard already running (PID $old_pid)" >&2
exit 0
fi
fi
echo $$ > "$PIDFILE"
trap 'rm -f "$PIDFILE"' EXIT
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] AUTO-COMMIT: $*" >> "$LOG"
}
# --- Orphaned workdir scan (runs once on startup) ---
scan_orphans() {
if [ -f "$ORPHAN_SCAN_DONE" ]; then
return 0
fi
log "Scanning /tmp for orphaned agent workdirs..."
local found=0
local rescued=0
for dir in /tmp/*-work-* /tmp/timmy-burn-* /tmp/tc-burn; do
[ -d "$dir" ] || continue
[ -d "$dir/.git" ] || continue
found=$((found + 1))
cd "$dir" 2>/dev/null || continue
local dirty
dirty=$(git status --porcelain 2>/dev/null | wc -l | tr -d " ")
if [ "${dirty:-0}" -gt 0 ]; then
local branch
branch=$(git branch --show-current 2>/dev/null || echo "orphan")
git add -A 2>/dev/null
if git commit -m "WIP: orphan rescue — $dirty file(s) auto-committed on $(date -u +%Y-%m-%dT%H:%M:%SZ)
Orphaned workdir detected at $dir.
Branch: $branch
Rescued by auto-commit-guard on startup." 2>/dev/null; then
rescued=$((rescued + 1))
log "RESCUED: $dir ($dirty files on branch $branch)"
# Try to push if remote exists
if git remote get-url origin >/dev/null 2>&1; then
git push -u origin "$branch" 2>/dev/null && log "PUSHED orphan rescue: $dir$branch" || log "PUSH FAILED orphan rescue: $dir (no remote access)"
fi
fi
fi
done
log "Orphan scan complete: $found workdirs checked, $rescued rescued"
touch "$ORPHAN_SCAN_DONE"
}
# --- Main guard loop ---
guard_cycle() {
local committed=0
local scanned=0
# Scan worktree base
if [ -d "$WORKTREE_BASE" ]; then
for dir in "$WORKTREE_BASE"/*/; do
[ -d "$dir" ] || continue
[ -d "$dir/.git" ] || continue
scanned=$((scanned + 1))
cd "$dir" 2>/dev/null || continue
local dirty
dirty=$(git status --porcelain 2>/dev/null | wc -l | tr -d " ")
[ "${dirty:-0}" -eq 0 ] && continue
local branch
branch=$(git branch --show-current 2>/dev/null || echo "detached")
git add -A 2>/dev/null
if git commit -m "WIP: auto-commit — $dirty file(s) on $branch
Automated commit by auto-commit-guard at $(date -u +%Y-%m-%dT%H:%M:%SZ).
Work preserved to prevent loss on crash." 2>/dev/null; then
committed=$((committed + 1))
log "COMMITTED: $dir ($dirty files, branch $branch)"
# Push to preserve remotely
if git remote get-url origin >/dev/null 2>&1; then
git push -u origin "$branch" 2>/dev/null && log "PUSHED: $dir$branch" || log "PUSH FAILED: $dir (will retry next cycle)"
fi
fi
done
fi
# Also scan /tmp for agent workdirs
for dir in /tmp/*-work-*; do
[ -d "$dir" ] || continue
[ -d "$dir/.git" ] || continue
scanned=$((scanned + 1))
cd "$dir" 2>/dev/null || continue
local dirty
dirty=$(git status --porcelain 2>/dev/null | wc -l | tr -d " ")
[ "${dirty:-0}" -eq 0 ] && continue
local branch
branch=$(git branch --show-current 2>/dev/null || echo "detached")
git add -A 2>/dev/null
if git commit -m "WIP: auto-commit — $dirty file(s) on $branch
Automated commit by auto-commit-guard at $(date -u +%Y-%m-%dT%H:%M:%SZ).
Agent workdir preserved to prevent loss." 2>/dev/null; then
committed=$((committed + 1))
log "COMMITTED: $dir ($dirty files, branch $branch)"
if git remote get-url origin >/dev/null 2>&1; then
git push -u origin "$branch" 2>/dev/null && log "PUSHED: $dir$branch" || log "PUSH FAILED: $dir (will retry next cycle)"
fi
fi
done
[ "$committed" -gt 0 ] && log "Cycle done: $scanned scanned, $committed committed"
}
# --- Entry point ---
log "Starting auto-commit-guard (interval=${INTERVAL}s, worktree=${WORKTREE_BASE})"
scan_orphans
while true; do
guard_cycle
sleep "$INTERVAL"
done

View File

@@ -1,82 +0,0 @@
#!/usr/bin/env python3
"""Anthropic Ban Enforcement Scanner.
Scans all config files, scripts, and playbooks for any references to
banned Anthropic providers, models, or API keys.
Policy: Anthropic is permanently banned (2026-04-09).
Refs: ansible/BANNED_PROVIDERS.yml
"""
import sys
import os
import re
from pathlib import Path
BANNED_PATTERNS = [
r"anthropic",
r"claude-sonnet",
r"claude-opus",
r"claude-haiku",
r"claude-\d",
r"api\.anthropic\.com",
r"ANTHROPIC_API_KEY",
r"CLAUDE_API_KEY",
r"sk-ant-",
]
ALLOWLIST_FILES = {
"ansible/BANNED_PROVIDERS.yml", # The ban list itself
"bin/banned_provider_scan.py", # This scanner
"DEPRECATED.md", # Historical references
}
SCAN_EXTENSIONS = {".py", ".yml", ".yaml", ".json", ".sh", ".toml", ".cfg", ".md"}
def scan_file(filepath: str) -> list[tuple[int, str, str]]:
"""Return list of (line_num, pattern_matched, line_text) violations."""
violations = []
try:
with open(filepath, "r", errors="replace") as f:
for i, line in enumerate(f, 1):
for pattern in BANNED_PATTERNS:
if re.search(pattern, line, re.IGNORECASE):
violations.append((i, pattern, line.strip()))
break
except (OSError, UnicodeDecodeError):
pass
return violations
def main():
root = Path(os.environ.get("SCAN_ROOT", "."))
total_violations = 0
scanned = 0
for ext in SCAN_EXTENSIONS:
for filepath in root.rglob(f"*{ext}"):
rel = str(filepath.relative_to(root))
if rel in ALLOWLIST_FILES:
continue
if ".git" in filepath.parts:
continue
violations = scan_file(str(filepath))
scanned += 1
if violations:
total_violations += len(violations)
for line_num, pattern, text in violations:
print(f"VIOLATION: {rel}:{line_num} [{pattern}] {text[:120]}")
print(f"\nScanned {scanned} files. Found {total_violations} violations.")
if total_violations > 0:
print("\n❌ BANNED PROVIDER REFERENCES DETECTED. Fix before merging.")
sys.exit(1)
else:
print("\n✓ No banned provider references found.")
sys.exit(0)
if __name__ == "__main__":
main()

View File

@@ -1,120 +0,0 @@
#!/usr/bin/env python3
"""
Merge Conflict Detector — catches sibling PRs that will conflict.
When multiple PRs branch from the same base commit and touch the same files,
merging one invalidates the others. This script detects that pattern
before it creates a rebase cascade.
Usage:
python3 conflict_detector.py # Check all repos
python3 conflict_detector.py --repo OWNER/REPO # Check one repo
Environment:
GITEA_URL — Gitea instance URL
GITEA_TOKEN — API token
"""
import os
import sys
import json
import urllib.request
from collections import defaultdict
GITEA_URL = os.environ.get("GITEA_URL", "https://forge.alexanderwhitestone.com")
GITEA_TOKEN = os.environ.get("GITEA_TOKEN", "")
REPOS = [
"Timmy_Foundation/the-nexus",
"Timmy_Foundation/timmy-config",
"Timmy_Foundation/timmy-home",
"Timmy_Foundation/fleet-ops",
"Timmy_Foundation/hermes-agent",
"Timmy_Foundation/the-beacon",
]
def api(path):
url = f"{GITEA_URL}/api/v1{path}"
req = urllib.request.Request(url)
if GITEA_TOKEN:
req.add_header("Authorization", f"token {GITEA_TOKEN}")
try:
with urllib.request.urlopen(req, timeout=15) as resp:
return json.loads(resp.read())
except Exception:
return []
def check_repo(repo):
"""Find sibling PRs that touch the same files."""
prs = api(f"/repos/{repo}/pulls?state=open&limit=50")
if not prs:
return []
# Group PRs by base commit
by_base = defaultdict(list)
for pr in prs:
base_sha = pr.get("merge_base", pr.get("base", {}).get("sha", "unknown"))
by_base[base_sha].append(pr)
conflicts = []
for base_sha, siblings in by_base.items():
if len(siblings) < 2:
continue
# Get files for each sibling
file_map = {}
for pr in siblings:
files = api(f"/repos/{repo}/pulls/{pr['number']}/files")
if files:
file_map[pr['number']] = set(f['filename'] for f in files)
# Find overlapping file sets
pr_nums = list(file_map.keys())
for i in range(len(pr_nums)):
for j in range(i+1, len(pr_nums)):
a, b = pr_nums[i], pr_nums[j]
overlap = file_map[a] & file_map[b]
if overlap:
conflicts.append({
"repo": repo,
"pr_a": a,
"pr_b": b,
"base": base_sha[:8],
"files": sorted(overlap),
"title_a": next(p["title"] for p in siblings if p["number"] == a),
"title_b": next(p["title"] for p in siblings if p["number"] == b),
})
return conflicts
def main():
repos = REPOS
if "--repo" in sys.argv:
idx = sys.argv.index("--repo") + 1
if idx < len(sys.argv):
repos = [sys.argv[idx]]
all_conflicts = []
for repo in repos:
conflicts = check_repo(repo)
all_conflicts.extend(conflicts)
if not all_conflicts:
print("No sibling PR conflicts detected. Queue is clean.")
return 0
print(f"Found {len(all_conflicts)} potential merge conflicts:")
print()
for c in all_conflicts:
print(f" {c['repo']}:")
print(f" PR #{c['pr_a']} vs #{c['pr_b']} (base: {c['base']})")
print(f" #{c['pr_a']}: {c['title_a'][:60]}")
print(f" #{c['pr_b']}: {c['title_b'][:60]}")
print(f" Overlapping files: {', '.join(c['files'])}")
print(f" → Merge one first, then rebase the other.")
print()
return 1
if __name__ == "__main__":
sys.exit(main())

View File

@@ -1,263 +0,0 @@
#!/usr/bin/env python3
"""
Dead Man Switch Fallback Engine
When the dead man switch triggers (zero commits for 2+ hours, model down,
Gitea unreachable, etc.), this script diagnoses the failure and applies
common sense fallbacks automatically.
Fallback chain:
1. Primary model (Kimi) down -> switch config to local-llama.cpp
2. Gitea unreachable -> cache issues locally, retry on recovery
3. VPS agents down -> alert + lazarus protocol
4. Local llama.cpp down -> try Ollama, then alert-only mode
5. All inference dead -> safe mode (cron pauses, alert Alexander)
Each fallback is reversible. Recovery auto-restores the previous config.
"""
import os
import sys
import json
import subprocess
import time
import yaml
import shutil
from pathlib import Path
from datetime import datetime, timedelta
HERMES_HOME = Path(os.environ.get("HERMES_HOME", os.path.expanduser("~/.hermes")))
CONFIG_PATH = HERMES_HOME / "config.yaml"
FALLBACK_STATE = HERMES_HOME / "deadman-fallback-state.json"
BACKUP_CONFIG = HERMES_HOME / "config.yaml.pre-fallback"
FORGE_URL = "https://forge.alexanderwhitestone.com"
def load_config():
with open(CONFIG_PATH) as f:
return yaml.safe_load(f)
def save_config(cfg):
with open(CONFIG_PATH, "w") as f:
yaml.dump(cfg, f, default_flow_style=False)
def load_state():
if FALLBACK_STATE.exists():
with open(FALLBACK_STATE) as f:
return json.load(f)
return {"active_fallbacks": [], "last_check": None, "recovery_pending": False}
def save_state(state):
state["last_check"] = datetime.now().isoformat()
with open(FALLBACK_STATE, "w") as f:
json.dump(state, f, indent=2)
def run(cmd, timeout=10):
try:
r = subprocess.run(cmd, shell=True, capture_output=True, text=True, timeout=timeout)
return r.returncode, r.stdout.strip(), r.stderr.strip()
except subprocess.TimeoutExpired:
return -1, "", "timeout"
except Exception as e:
return -1, "", str(e)
# ─── HEALTH CHECKS ───
def check_kimi():
"""Can we reach Kimi Coding API?"""
key = os.environ.get("KIMI_API_KEY", "")
if not key:
# Check multiple .env locations
for env_path in [HERMES_HOME / ".env", Path.home() / ".hermes" / ".env"]:
if env_path.exists():
for line in open(env_path):
line = line.strip()
if line.startswith("KIMI_API_KEY="):
key = line.split("=", 1)[1].strip().strip('"').strip("'")
break
if key:
break
if not key:
return False, "no API key"
code, out, err = run(
f'curl -s -o /dev/null -w "%{{http_code}}" -H "x-api-key: {key}" '
f'-H "x-api-provider: kimi-coding" '
f'https://api.kimi.com/coding/v1/models -X POST '
f'-H "content-type: application/json" '
f'-d \'{{"model":"kimi-k2.5","max_tokens":1,"messages":[{{"role":"user","content":"ping"}}]}}\' ',
timeout=15
)
if code == 0 and out in ("200", "429"):
return True, f"HTTP {out}"
return False, f"HTTP {out} err={err[:80]}"
def check_local_llama():
"""Is local llama.cpp serving?"""
code, out, err = run("curl -s http://localhost:8081/v1/models", timeout=5)
if code == 0 and "hermes" in out.lower():
return True, "serving"
return False, f"exit={code}"
def check_ollama():
"""Is Ollama running?"""
code, out, err = run("curl -s http://localhost:11434/api/tags", timeout=5)
if code == 0 and "models" in out:
return True, "running"
return False, f"exit={code}"
def check_gitea():
"""Can we reach the Forge?"""
token_path = Path.home() / ".config" / "gitea" / "timmy-token"
if not token_path.exists():
return False, "no token"
token = token_path.read_text().strip()
code, out, err = run(
f'curl -s -o /dev/null -w "%{{http_code}}" -H "Authorization: token {token}" '
f'"{FORGE_URL}/api/v1/user"',
timeout=10
)
if code == 0 and out == "200":
return True, "reachable"
return False, f"HTTP {out}"
def check_vps(ip, name):
"""Can we SSH into a VPS?"""
code, out, err = run(f"ssh -o ConnectTimeout=5 root@{ip} 'echo alive'", timeout=10)
if code == 0 and "alive" in out:
return True, "alive"
return False, f"unreachable"
# ─── FALLBACK ACTIONS ───
def fallback_to_local_model(cfg):
"""Switch primary model from Kimi to local llama.cpp"""
if not BACKUP_CONFIG.exists():
shutil.copy2(CONFIG_PATH, BACKUP_CONFIG)
cfg["model"]["provider"] = "local-llama.cpp"
cfg["model"]["default"] = "hermes3"
save_config(cfg)
return "Switched primary model to local-llama.cpp/hermes3"
def fallback_to_ollama(cfg):
"""Switch to Ollama if llama.cpp is also down"""
if not BACKUP_CONFIG.exists():
shutil.copy2(CONFIG_PATH, BACKUP_CONFIG)
cfg["model"]["provider"] = "ollama"
cfg["model"]["default"] = "gemma4:latest"
save_config(cfg)
return "Switched primary model to ollama/gemma4:latest"
def enter_safe_mode(state):
"""Pause all non-essential cron jobs, alert Alexander"""
state["safe_mode"] = True
state["safe_mode_entered"] = datetime.now().isoformat()
save_state(state)
return "SAFE MODE: All inference down. Cron jobs should be paused. Alert Alexander."
def restore_config():
"""Restore pre-fallback config when primary recovers"""
if BACKUP_CONFIG.exists():
shutil.copy2(BACKUP_CONFIG, CONFIG_PATH)
BACKUP_CONFIG.unlink()
return "Restored original config from backup"
return "No backup config to restore"
# ─── MAIN DIAGNOSIS AND FALLBACK ENGINE ───
def diagnose_and_fallback():
state = load_state()
cfg = load_config()
results = {
"timestamp": datetime.now().isoformat(),
"checks": {},
"actions": [],
"status": "healthy"
}
# Check all systems
kimi_ok, kimi_msg = check_kimi()
results["checks"]["kimi-coding"] = {"ok": kimi_ok, "msg": kimi_msg}
llama_ok, llama_msg = check_local_llama()
results["checks"]["local_llama"] = {"ok": llama_ok, "msg": llama_msg}
ollama_ok, ollama_msg = check_ollama()
results["checks"]["ollama"] = {"ok": ollama_ok, "msg": ollama_msg}
gitea_ok, gitea_msg = check_gitea()
results["checks"]["gitea"] = {"ok": gitea_ok, "msg": gitea_msg}
# VPS checks
vpses = [
("167.99.126.228", "Allegro"),
("143.198.27.163", "Ezra"),
("159.203.146.185", "Bezalel"),
]
for ip, name in vpses:
vps_ok, vps_msg = check_vps(ip, name)
results["checks"][f"vps_{name.lower()}"] = {"ok": vps_ok, "msg": vps_msg}
current_provider = cfg.get("model", {}).get("provider", "kimi-coding")
# ─── FALLBACK LOGIC ───
# Case 1: Primary (Kimi) down, local available
if not kimi_ok and current_provider == "kimi-coding":
if llama_ok:
msg = fallback_to_local_model(cfg)
results["actions"].append(msg)
state["active_fallbacks"].append("kimi->local-llama")
results["status"] = "degraded_local"
elif ollama_ok:
msg = fallback_to_ollama(cfg)
results["actions"].append(msg)
state["active_fallbacks"].append("kimi->ollama")
results["status"] = "degraded_ollama"
else:
msg = enter_safe_mode(state)
results["actions"].append(msg)
results["status"] = "safe_mode"
# Case 2: Already on fallback, check if primary recovered
elif kimi_ok and "kimi->local-llama" in state.get("active_fallbacks", []):
msg = restore_config()
results["actions"].append(msg)
state["active_fallbacks"].remove("kimi->local-llama")
results["status"] = "recovered"
elif kimi_ok and "kimi->ollama" in state.get("active_fallbacks", []):
msg = restore_config()
results["actions"].append(msg)
state["active_fallbacks"].remove("kimi->ollama")
results["status"] = "recovered"
# Case 3: Gitea down — just flag it, work locally
if not gitea_ok:
results["actions"].append("WARN: Gitea unreachable — work cached locally until recovery")
if "gitea_down" not in state.get("active_fallbacks", []):
state["active_fallbacks"].append("gitea_down")
results["status"] = max(results["status"], "degraded_gitea", key=lambda x: ["healthy", "recovered", "degraded_gitea", "degraded_local", "degraded_ollama", "safe_mode"].index(x) if x in ["healthy", "recovered", "degraded_gitea", "degraded_local", "degraded_ollama", "safe_mode"] else 0)
elif "gitea_down" in state.get("active_fallbacks", []):
state["active_fallbacks"].remove("gitea_down")
results["actions"].append("Gitea recovered — resume normal operations")
# Case 4: VPS agents down
for ip, name in vpses:
key = f"vps_{name.lower()}"
if not results["checks"][key]["ok"]:
results["actions"].append(f"ALERT: {name} VPS ({ip}) unreachable — lazarus protocol needed")
save_state(state)
return results
if __name__ == "__main__":
results = diagnose_and_fallback()
print(json.dumps(results, indent=2))
# Exit codes for cron integration
if results["status"] == "safe_mode":
sys.exit(2)
elif results["status"].startswith("degraded"):
sys.exit(1)
else:
sys.exit(0)

View File

@@ -1,297 +0,0 @@
"""
Glitch pattern definitions for 3D world anomaly detection.
Defines known visual artifact categories commonly found in 3D web worlds,
particularly The Matrix environments. Each pattern includes detection
heuristics and severity ratings.
"""
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional
class GlitchSeverity(Enum):
CRITICAL = "critical"
HIGH = "high"
MEDIUM = "medium"
LOW = "low"
INFO = "info"
class GlitchCategory(Enum):
FLOATING_ASSETS = "floating_assets"
Z_FIGHTING = "z_fighting"
MISSING_TEXTURES = "missing_textures"
CLIPPING = "clipping"
BROKEN_NORMALS = "broken_normals"
SHADOW_ARTIFACTS = "shadow_artifacts"
LIGHTMAP_ERRORS = "lightmap_errors"
LOD_POPPING = "lod_popping"
WATER_REFLECTION = "water_reflection"
SKYBOX_SEAM = "skybox_seam"
@dataclass
class GlitchPattern:
"""Definition of a known glitch pattern with detection parameters."""
category: GlitchCategory
name: str
description: str
severity: GlitchSeverity
detection_prompts: list[str]
visual_indicators: list[str]
confidence_threshold: float = 0.6
def to_dict(self) -> dict:
return {
"category": self.category.value,
"name": self.name,
"description": self.description,
"severity": self.severity.value,
"detection_prompts": self.detection_prompts,
"visual_indicators": self.visual_indicators,
"confidence_threshold": self.confidence_threshold,
}
# Known glitch patterns for Matrix 3D world scanning
MATRIX_GLITCH_PATTERNS: list[GlitchPattern] = [
GlitchPattern(
category=GlitchCategory.FLOATING_ASSETS,
name="Floating Object",
description="Object not properly grounded or anchored to the scene geometry. "
"Common in procedurally placed assets or after physics desync.",
severity=GlitchSeverity.HIGH,
detection_prompts=[
"Identify any objects that appear to float above the ground without support.",
"Look for furniture, props, or geometry suspended in mid-air with no visible attachment.",
"Check for objects whose shadows do not align with the surface below them.",
],
visual_indicators=[
"gap between object base and surface",
"shadow detached from object",
"object hovering with no structural support",
],
confidence_threshold=0.65,
),
GlitchPattern(
category=GlitchCategory.Z_FIGHTING,
name="Z-Fighting Flicker",
description="Two coplanar surfaces competing for depth priority, causing "
"visible flickering or shimmering textures.",
severity=GlitchSeverity.MEDIUM,
detection_prompts=[
"Look for surfaces that appear to shimmer, flicker, or show mixed textures.",
"Identify areas where two textures seem to overlap and compete for visibility.",
"Check walls, floors, or objects for surface noise or pattern interference.",
],
visual_indicators=[
"shimmering surface",
"texture flicker between two patterns",
"noisy flat surfaces",
"moire-like patterns on planar geometry",
],
confidence_threshold=0.55,
),
GlitchPattern(
category=GlitchCategory.MISSING_TEXTURES,
name="Missing or Placeholder Texture",
description="A surface rendered with a fallback checkerboard, solid magenta, "
"or the default engine placeholder texture.",
severity=GlitchSeverity.CRITICAL,
detection_prompts=[
"Look for bright magenta, checkerboard, or solid-color surfaces that look out of place.",
"Identify any surfaces that appear as flat untextured colors inconsistent with the scene.",
"Check for black, white, or magenta patches where detailed textures should be.",
],
visual_indicators=[
"magenta/pink solid color surface",
"checkerboard pattern",
"flat single-color geometry",
"UV-debug texture visible",
],
confidence_threshold=0.7,
),
GlitchPattern(
category=GlitchCategory.CLIPPING,
name="Geometry Clipping",
description="Objects passing through each other or intersecting in physically "
"impossible ways due to collision mesh errors.",
severity=GlitchSeverity.HIGH,
detection_prompts=[
"Look for objects that visibly pass through other objects (walls, floors, furniture).",
"Identify characters or props embedded inside geometry where they should not be.",
"Check for intersecting meshes where solid objects overlap unnaturally.",
],
visual_indicators=[
"object passing through wall or floor",
"embedded geometry",
"overlapping solid meshes",
"character limb inside furniture",
],
confidence_threshold=0.6,
),
GlitchPattern(
category=GlitchCategory.BROKEN_NORMALS,
name="Broken Surface Normals",
description="Inverted or incorrect surface normals causing faces to appear "
"inside-out, invisible from certain angles, or lit incorrectly.",
severity=GlitchSeverity.MEDIUM,
detection_prompts=[
"Look for surfaces that appear dark or black on one side while lit on the other.",
"Identify objects that seem to vanish when viewed from certain angles.",
"Check for inverted shading where lit areas should be in shadow.",
],
visual_indicators=[
"dark/unlit face on otherwise lit model",
"invisible surface from one direction",
"inverted shadow gradient",
"inside-out appearance",
],
confidence_threshold=0.5,
),
GlitchPattern(
category=GlitchCategory.SHADOW_ARTIFACTS,
name="Shadow Artifact",
description="Broken, detached, or incorrectly rendered shadows that do not "
"match the casting geometry or scene lighting.",
severity=GlitchSeverity.LOW,
detection_prompts=[
"Look for shadows that do not match the shape of nearby objects.",
"Identify shadow acne: banding or striped patterns on surfaces.",
"Check for floating shadows detached from any visible caster.",
],
visual_indicators=[
"shadow shape mismatch",
"shadow acne bands",
"detached floating shadow",
"Peter Panning (shadow offset from base)",
],
confidence_threshold=0.5,
),
GlitchPattern(
category=GlitchCategory.LOD_POPPING,
name="LOD Transition Pop",
description="Visible pop-in when level-of-detail models switch abruptly, "
"causing geometry or textures to change suddenly.",
severity=GlitchSeverity.LOW,
detection_prompts=[
"Look for areas where mesh detail changes abruptly at visible boundaries.",
"Identify objects that appear to morph or shift geometry suddenly.",
"Check for texture resolution changes that create visible seams.",
],
visual_indicators=[
"visible mesh simplification boundary",
"texture resolution jump",
"geometry pop-in artifacts",
],
confidence_threshold=0.45,
),
GlitchPattern(
category=GlitchCategory.LIGHTMAP_ERRORS,
name="Lightmap Baking Error",
description="Incorrect or missing baked lighting causing dark spots, light "
"leaks, or mismatched illumination on static geometry.",
severity=GlitchSeverity.MEDIUM,
detection_prompts=[
"Look for unusually dark patches on walls or ceilings that should be lit.",
"Identify bright light leaks through solid geometry seams.",
"Check for mismatched lighting between adjacent surfaces.",
],
visual_indicators=[
"dark splotch on lit surface",
"bright line at geometry seam",
"lighting discontinuity between adjacent faces",
],
confidence_threshold=0.5,
),
GlitchPattern(
category=GlitchCategory.WATER_REFLECTION,
name="Water/Reflection Error",
description="Incorrect reflections, missing water surfaces, or broken "
"reflection probe assignments.",
severity=GlitchSeverity.MEDIUM,
detection_prompts=[
"Look for reflections that do not match the surrounding environment.",
"Identify water surfaces that appear solid or incorrectly rendered.",
"Check for mirror surfaces showing wrong scene geometry.",
],
visual_indicators=[
"reflection mismatch",
"solid water surface",
"incorrect environment map",
],
confidence_threshold=0.5,
),
GlitchPattern(
category=GlitchCategory.SKYBOX_SEAM,
name="Skybox Seam",
description="Visible seams or color mismatches at the edges of skybox cubemap faces.",
severity=GlitchSeverity.LOW,
detection_prompts=[
"Look at the edges of the sky for visible seams or color shifts.",
"Identify discontinuities where skybox faces meet.",
"Check for texture stretching at skybox corners.",
],
visual_indicators=[
"visible line in sky",
"color discontinuity at sky edge",
"sky texture seam",
],
confidence_threshold=0.45,
),
]
def get_patterns_by_severity(min_severity: GlitchSeverity) -> list[GlitchPattern]:
"""Return patterns at or above the given severity level."""
severity_order = [
GlitchSeverity.INFO,
GlitchSeverity.LOW,
GlitchSeverity.MEDIUM,
GlitchSeverity.HIGH,
GlitchSeverity.CRITICAL,
]
min_idx = severity_order.index(min_severity)
return [p for p in MATRIX_GLITCH_PATTERNS if severity_order.index(p.severity) >= min_idx]
def get_pattern_by_category(category: GlitchCategory) -> Optional[GlitchPattern]:
"""Return the pattern definition for a specific category."""
for p in MATRIX_GLITCH_PATTERNS:
if p.category == category:
return p
return None
def build_vision_prompt(patterns: list[GlitchPattern] | None = None) -> str:
"""Build a composite vision analysis prompt from pattern definitions."""
if patterns is None:
patterns = MATRIX_GLITCH_PATTERNS
sections = []
for p in patterns:
prompt_text = " ".join(p.detection_prompts)
indicators = ", ".join(p.visual_indicators)
sections.append(
f"[{p.category.value.upper()}] {p.name} (severity: {p.severity.value})\n"
f" {p.description}\n"
f" Look for: {prompt_text}\n"
f" Visual indicators: {indicators}"
)
return (
"Analyze this 3D world screenshot for visual glitches and artifacts. "
"For each detected issue, report the category, description of what you see, "
"approximate location in the image (x%, y%), and confidence (0.0-1.0).\n\n"
"Known glitch patterns to check:\n\n" + "\n\n".join(sections)
)
if __name__ == "__main__":
import json
print(f"Loaded {len(MATRIX_GLITCH_PATTERNS)} glitch patterns:\n")
for p in MATRIX_GLITCH_PATTERNS:
print(f" [{p.severity.value:8s}] {p.category.value}: {p.name}")
print(f"\nVision prompt preview:\n{build_vision_prompt()[:500]}...")

View File

@@ -1,549 +0,0 @@
#!/usr/bin/env python3
"""
Matrix 3D World Glitch Detector
Scans a 3D web world for visual artifacts using browser automation
and vision AI analysis. Produces structured glitch reports.
Usage:
python matrix_glitch_detector.py <url> [--angles 4] [--output report.json]
python matrix_glitch_detector.py --demo # Run with synthetic test data
Ref: timmy-config#491
"""
import argparse
import base64
import json
import os
import sys
import time
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from pathlib import Path
from typing import Optional
# Add parent for glitch_patterns import
sys.path.insert(0, str(Path(__file__).resolve().parent))
from glitch_patterns import (
GlitchCategory,
GlitchPattern,
GlitchSeverity,
MATRIX_GLITCH_PATTERNS,
build_vision_prompt,
get_patterns_by_severity,
)
@dataclass
class DetectedGlitch:
"""A single detected glitch with metadata."""
id: str
category: str
name: str
description: str
severity: str
confidence: float
location_x: Optional[float] = None # percentage across image
location_y: Optional[float] = None # percentage down image
screenshot_index: int = 0
screenshot_angle: str = "front"
timestamp: str = ""
def __post_init__(self):
if not self.timestamp:
self.timestamp = datetime.now(timezone.utc).isoformat()
@dataclass
class ScanResult:
"""Complete scan result for a 3D world URL."""
scan_id: str
url: str
timestamp: str
total_screenshots: int
angles_captured: list[str]
glitches: list[dict] = field(default_factory=list)
summary: dict = field(default_factory=dict)
metadata: dict = field(default_factory=dict)
def to_json(self, indent: int = 2) -> str:
return json.dumps(asdict(self), indent=indent)
def generate_scan_angles(num_angles: int) -> list[dict]:
"""Generate camera angle configurations for multi-angle scanning.
Returns a list of dicts with yaw/pitch/label for browser camera control.
"""
base_angles = [
{"yaw": 0, "pitch": 0, "label": "front"},
{"yaw": 90, "pitch": 0, "label": "right"},
{"yaw": 180, "pitch": 0, "label": "back"},
{"yaw": 270, "pitch": 0, "label": "left"},
{"yaw": 0, "pitch": -30, "label": "front_low"},
{"yaw": 45, "pitch": -15, "label": "front_right_low"},
{"yaw": 0, "pitch": 30, "label": "front_high"},
{"yaw": 45, "pitch": 0, "label": "front_right"},
]
if num_angles <= len(base_angles):
return base_angles[:num_angles]
return base_angles + [
{"yaw": i * (360 // num_angles), "pitch": 0, "label": f"angle_{i}"}
for i in range(len(base_angles), num_angles)
]
def capture_screenshots(url: str, angles: list[dict], output_dir: Path) -> list[Path]:
"""Capture screenshots of a 3D web world from multiple angles.
Uses browser_vision tool when available; falls back to placeholder generation
for testing and environments without browser access.
"""
output_dir.mkdir(parents=True, exist_ok=True)
screenshots = []
for i, angle in enumerate(angles):
filename = output_dir / f"screenshot_{i:03d}_{angle['label']}.png"
# Attempt browser-based capture via browser_vision
try:
result = _browser_capture(url, angle, filename)
if result:
screenshots.append(filename)
continue
except Exception:
pass
# Generate placeholder screenshot for offline/test scenarios
_generate_placeholder_screenshot(filename, angle)
screenshots.append(filename)
return screenshots
def _browser_capture(url: str, angle: dict, output_path: Path) -> bool:
"""Capture a screenshot via browser automation.
This is a stub that delegates to the browser_vision tool when run
in an environment that provides it. In CI or offline mode, returns False.
"""
# Check if browser_vision is available via environment
bv_script = os.environ.get("BROWSER_VISION_SCRIPT")
if bv_script and Path(bv_script).exists():
import subprocess
cmd = [
sys.executable, bv_script,
"--url", url,
"--screenshot", str(output_path),
"--rotate-yaw", str(angle["yaw"]),
"--rotate-pitch", str(angle["pitch"]),
]
proc = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
return proc.returncode == 0 and output_path.exists()
return False
def _generate_placeholder_screenshot(path: Path, angle: dict):
"""Generate a minimal 1x1 PNG as a placeholder for testing."""
# Minimal valid PNG (1x1 transparent pixel)
png_data = (
b"\x89PNG\r\n\x1a\n"
b"\x00\x00\x00\rIHDR\x00\x00\x00\x01\x00\x00\x00\x01"
b"\x08\x06\x00\x00\x00\x1f\x15\xc4\x89"
b"\x00\x00\x00\nIDATx\x9cc\x00\x01\x00\x00\x05\x00\x01"
b"\r\n\xb4\x00\x00\x00\x00IEND\xaeB`\x82"
)
path.write_bytes(png_data)
def analyze_with_vision(
screenshot_paths: list[Path],
angles: list[dict],
patterns: list[GlitchPattern] | None = None,
) -> list[DetectedGlitch]:
"""Send screenshots to vision AI for glitch analysis.
In environments with a vision model available, sends each screenshot
with the composite detection prompt. Otherwise returns simulated results.
"""
if patterns is None:
patterns = MATRIX_GLITCH_PATTERNS
prompt = build_vision_prompt(patterns)
glitches = []
for i, (path, angle) in enumerate(zip(screenshot_paths, angles)):
# Attempt vision analysis
detected = _vision_analyze_image(path, prompt, i, angle["label"])
glitches.extend(detected)
return glitches
def _vision_analyze_image(
image_path: Path,
prompt: str,
screenshot_index: int,
angle_label: str,
) -> list[DetectedGlitch]:
"""Analyze a single screenshot with vision AI.
Uses the vision_analyze tool when available; returns empty list otherwise.
"""
# Check for vision API configuration
api_key = os.environ.get("VISION_API_KEY") or os.environ.get("OPENAI_API_KEY")
api_base = os.environ.get("VISION_API_BASE", "https://api.openai.com/v1")
if api_key:
try:
return _call_vision_api(
image_path, prompt, screenshot_index, angle_label, api_key, api_base
)
except Exception as e:
print(f" [!] Vision API error for {image_path.name}: {e}", file=sys.stderr)
# No vision backend available
return []
def _call_vision_api(
image_path: Path,
prompt: str,
screenshot_index: int,
angle_label: str,
api_key: str,
api_base: str,
) -> list[DetectedGlitch]:
"""Call a vision API (OpenAI-compatible) for image analysis."""
import urllib.request
import urllib.error
image_data = base64.b64encode(image_path.read_bytes()).decode()
payload = json.dumps({
"model": os.environ.get("VISION_MODEL", "gpt-4o"),
"messages": [
{
"role": "user",
"content": [
{"type": "text", "text": prompt},
{
"type": "image_url",
"image_url": {
"url": f"data:image/png;base64,{image_data}",
"detail": "high",
},
},
],
}
],
"max_tokens": 4096,
}).encode()
req = urllib.request.Request(
f"{api_base}/chat/completions",
data=payload,
headers={
"Content-Type": "application/json",
"Authorization": f"Bearer {api_key}",
},
)
with urllib.request.urlopen(req, timeout=60) as resp:
result = json.loads(resp.read())
content = result["choices"][0]["message"]["content"]
return _parse_vision_response(content, screenshot_index, angle_label)
def _add_glitch_from_dict(
item: dict,
glitches: list[DetectedGlitch],
screenshot_index: int,
angle_label: str,
):
"""Convert a dict from vision API response into a DetectedGlitch."""
cat = item.get("category", item.get("type", "unknown"))
conf = float(item.get("confidence", item.get("score", 0.5)))
glitch = DetectedGlitch(
id=str(uuid.uuid4())[:8],
category=cat,
name=item.get("name", item.get("label", cat)),
description=item.get("description", item.get("detail", "")),
severity=item.get("severity", _infer_severity(cat, conf)),
confidence=conf,
location_x=item.get("location_x", item.get("x")),
location_y=item.get("location_y", item.get("y")),
screenshot_index=screenshot_index,
screenshot_angle=angle_label,
)
glitches.append(glitch)
def _parse_vision_response(
text: str, screenshot_index: int, angle_label: str
) -> list[DetectedGlitch]:
"""Parse vision AI response into structured glitch detections."""
glitches = []
# Try to extract JSON from the response
json_blocks = []
in_json = False
json_buf = []
for line in text.split("\n"):
stripped = line.strip()
if stripped.startswith("```"):
if in_json and json_buf:
try:
json_blocks.append(json.loads("\n".join(json_buf)))
except json.JSONDecodeError:
pass
json_buf = []
in_json = not in_json
continue
if in_json:
json_buf.append(line)
# Flush any remaining buffer
if in_json and json_buf:
try:
json_blocks.append(json.loads("\n".join(json_buf)))
except json.JSONDecodeError:
pass
# Also try parsing the entire response as JSON
try:
parsed = json.loads(text)
if isinstance(parsed, list):
json_blocks.extend(parsed)
elif isinstance(parsed, dict):
if "glitches" in parsed:
json_blocks.extend(parsed["glitches"])
elif "detections" in parsed:
json_blocks.extend(parsed["detections"])
else:
json_blocks.append(parsed)
except json.JSONDecodeError:
pass
for item in json_blocks:
# Flatten arrays of detections
if isinstance(item, list):
for sub in item:
if isinstance(sub, dict):
_add_glitch_from_dict(sub, glitches, screenshot_index, angle_label)
elif isinstance(item, dict):
_add_glitch_from_dict(item, glitches, screenshot_index, angle_label)
return glitches
def _infer_severity(category: str, confidence: float) -> str:
"""Infer severity from category and confidence when not provided."""
critical_cats = {"missing_textures", "clipping"}
high_cats = {"floating_assets", "broken_normals"}
cat_lower = category.lower()
if any(c in cat_lower for c in critical_cats):
return "critical" if confidence > 0.7 else "high"
if any(c in cat_lower for c in high_cats):
return "high" if confidence > 0.7 else "medium"
return "medium" if confidence > 0.6 else "low"
def build_report(
url: str,
angles: list[dict],
screenshots: list[Path],
glitches: list[DetectedGlitch],
) -> ScanResult:
"""Build the final structured scan report."""
severity_counts = {}
category_counts = {}
for g in glitches:
severity_counts[g.severity] = severity_counts.get(g.severity, 0) + 1
category_counts[g.category] = category_counts.get(g.category, 0) + 1
report = ScanResult(
scan_id=str(uuid.uuid4()),
url=url,
timestamp=datetime.now(timezone.utc).isoformat(),
total_screenshots=len(screenshots),
angles_captured=[a["label"] for a in angles],
glitches=[asdict(g) for g in glitches],
summary={
"total_glitches": len(glitches),
"by_severity": severity_counts,
"by_category": category_counts,
"highest_severity": max(severity_counts.keys(), default="none"),
"clean_screenshots": sum(
1
for i in range(len(screenshots))
if not any(g.screenshot_index == i for g in glitches)
),
},
metadata={
"detector_version": "0.1.0",
"pattern_count": len(MATRIX_GLITCH_PATTERNS),
"reference": "timmy-config#491",
},
)
return report
def run_demo(output_path: Optional[Path] = None) -> ScanResult:
"""Run a demonstration scan with simulated detections."""
print("[*] Running Matrix glitch detection demo...")
url = "https://matrix.example.com/world/alpha"
angles = generate_scan_angles(4)
screenshots_dir = Path("/tmp/matrix_glitch_screenshots")
print(f"[*] Capturing {len(angles)} screenshots from: {url}")
screenshots = capture_screenshots(url, angles, screenshots_dir)
print(f"[*] Captured {len(screenshots)} screenshots")
# Simulate detections for demo
demo_glitches = [
DetectedGlitch(
id=str(uuid.uuid4())[:8],
category="floating_assets",
name="Floating Chair",
description="Office chair floating 0.3m above floor in sector 7",
severity="high",
confidence=0.87,
location_x=35.2,
location_y=62.1,
screenshot_index=0,
screenshot_angle="front",
),
DetectedGlitch(
id=str(uuid.uuid4())[:8],
category="z_fighting",
name="Wall Texture Flicker",
description="Z-fighting between wall panel and decorative overlay",
severity="medium",
confidence=0.72,
location_x=58.0,
location_y=40.5,
screenshot_index=1,
screenshot_angle="right",
),
DetectedGlitch(
id=str(uuid.uuid4())[:8],
category="missing_textures",
name="Placeholder Texture",
description="Bright magenta surface on door frame — missing asset reference",
severity="critical",
confidence=0.95,
location_x=72.3,
location_y=28.8,
screenshot_index=2,
screenshot_angle="back",
),
DetectedGlitch(
id=str(uuid.uuid4())[:8],
category="clipping",
name="Desk Through Wall",
description="Desk corner clipping through adjacent wall geometry",
severity="high",
confidence=0.81,
location_x=15.0,
location_y=55.0,
screenshot_index=3,
screenshot_angle="left",
),
]
print(f"[*] Detected {len(demo_glitches)} glitches")
report = build_report(url, angles, screenshots, demo_glitches)
if output_path:
output_path.write_text(report.to_json())
print(f"[*] Report saved to: {output_path}")
return report
def main():
parser = argparse.ArgumentParser(
description="Matrix 3D World Glitch Detector — scan for visual artifacts",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
%(prog)s https://matrix.example.com/world/alpha
%(prog)s https://matrix.example.com/world/alpha --angles 8 --output report.json
%(prog)s --demo
""",
)
parser.add_argument("url", nargs="?", help="URL of the 3D world to scan")
parser.add_argument(
"--angles", type=int, default=4, help="Number of camera angles to capture (default: 4)"
)
parser.add_argument("--output", "-o", type=str, help="Output file path for JSON report")
parser.add_argument("--demo", action="store_true", help="Run demo with simulated data")
parser.add_argument(
"--min-severity",
choices=["info", "low", "medium", "high", "critical"],
default="info",
help="Minimum severity to include in report",
)
parser.add_argument("--verbose", "-v", action="store_true", help="Verbose output")
args = parser.parse_args()
if args.demo:
output = Path(args.output) if args.output else Path("glitch_report_demo.json")
report = run_demo(output)
print(f"\n=== Scan Summary ===")
print(f"URL: {report.url}")
print(f"Screenshots: {report.total_screenshots}")
print(f"Glitches found: {report.summary['total_glitches']}")
print(f"By severity: {report.summary['by_severity']}")
return
if not args.url:
parser.error("URL required (or use --demo)")
scan_id = str(uuid.uuid4())[:8]
print(f"[*] Matrix Glitch Detector — Scan {scan_id}")
print(f"[*] Target: {args.url}")
# Generate camera angles
angles = generate_scan_angles(args.angles)
print(f"[*] Capturing {len(angles)} screenshots...")
# Capture screenshots
screenshots_dir = Path(f"/tmp/matrix_glitch_{scan_id}")
screenshots = capture_screenshots(args.url, angles, screenshots_dir)
print(f"[*] Captured {len(screenshots)} screenshots")
# Filter patterns by severity
min_sev = GlitchSeverity(args.min_severity)
patterns = get_patterns_by_severity(min_sev)
# Analyze with vision AI
print(f"[*] Analyzing with vision AI ({len(patterns)} patterns)...")
glitches = analyze_with_vision(screenshots, angles, patterns)
# Build and save report
report = build_report(args.url, angles, screenshots, glitches)
if args.output:
Path(args.output).write_text(report.to_json())
print(f"[*] Report saved: {args.output}")
else:
print(report.to_json())
print(f"\n[*] Done — {len(glitches)} glitches detected")
if __name__ == "__main__":
main()

View File

@@ -19,25 +19,25 @@ PASS=0
FAIL=0
WARN=0
check_kimi_model() {
check_anthropic_model() {
local model="$1"
local label="$2"
local api_key="${KIMI_API_KEY:-}"
local api_key="${ANTHROPIC_API_KEY:-}"
if [ -z "$api_key" ]; then
# Try loading from .env
api_key=$(grep '^KIMI_API_KEY=' "${HERMES_HOME:-$HOME/.hermes}/.env" 2>/dev/null | head -1 | cut -d= -f2- | tr -d "'\"" || echo "")
api_key=$(grep '^ANTHROPIC_API_KEY=' "${HERMES_HOME:-$HOME/.hermes}/.env" 2>/dev/null | head -1 | cut -d= -f2- | tr -d "'\"" || echo "")
fi
if [ -z "$api_key" ]; then
log "SKIP [$label] $model -- no KIMI_API_KEY"
log "SKIP [$label] $model -- no ANTHROPIC_API_KEY"
return 0
fi
response=$(curl -sf --max-time 10 -X POST \
"https://api.kimi.com/coding/v1/chat/completions" \
"https://api.anthropic.com/v1/messages" \
-H "x-api-key: ${api_key}" \
-H "x-api-provider: kimi-coding" \
-H "anthropic-version: 2023-06-01" \
-H "content-type: application/json" \
-d "{\"model\":\"${model}\",\"max_tokens\":1,\"messages\":[{\"role\":\"user\",\"content\":\"hi\"}]}" 2>&1 || echo "ERROR")
@@ -85,26 +85,26 @@ else:
print('')
" 2>/dev/null || echo "")
if [ -n "$primary" ] && [ "$provider" = "kimi-coding" ]; then
if check_kimi_model "$primary" "PRIMARY"; then
if [ -n "$primary" ] && [ "$provider" = "anthropic" ]; then
if check_anthropic_model "$primary" "PRIMARY"; then
PASS=$((PASS + 1))
else
rc=$?
if [ "$rc" -eq 1 ]; then
FAIL=$((FAIL + 1))
log "CRITICAL: Primary model $primary is DEAD. Loops will fail."
log "Known good alternatives: kimi-k2.5, google/gemini-2.5-pro"
log "Known good alternatives: claude-opus-4.6, claude-haiku-4-5-20251001"
else
WARN=$((WARN + 1))
fi
fi
elif [ -n "$primary" ]; then
log "SKIP [PRIMARY] $primary -- non-kimi provider ($provider), no validator yet"
log "SKIP [PRIMARY] $primary -- non-anthropic provider ($provider), no validator yet"
fi
# Cron model check (haiku)
CRON_MODEL="kimi-k2.5"
if check_kimi_model "$CRON_MODEL" "CRON"; then
CRON_MODEL="claude-haiku-4-5-20251001"
if check_anthropic_model "$CRON_MODEL" "CRON"; then
PASS=$((PASS + 1))
else
rc=$?

View File

@@ -1,4 +1,3 @@
#!/usr/bin/env python3
"""
Full Nostr agent-to-agent communication demo - FINAL WORKING
"""

View File

@@ -1,514 +0,0 @@
#!/usr/bin/env bash
# pane-watchdog.sh — Detect stuck/dead tmux panes and auto-restart them
#
# Tracks output hash per pane across cycles. If a pane's captured output
# hasn't changed for STUCK_CYCLES consecutive checks, the pane is STUCK.
# Dead panes (PID gone) are also detected.
#
# On STUCK/DEAD:
# 1. Kill the pane
# 2. Attempt restart with --resume (session ID from manifest)
# 3. Fallback: fresh prompt with last known task from logs
#
# State file: ~/.hermes/pane-state.json
# Log: ~/.hermes/logs/pane-watchdog.log
#
# Usage:
# pane-watchdog.sh # One-shot check all sessions
# pane-watchdog.sh --daemon # Run every CHECK_INTERVAL seconds
# pane-watchdog.sh --status # Print current pane state
# pane-watchdog.sh --session NAME # Check only one session
#
# Issue: timmy-config #515
set -uo pipefail
export PATH="/opt/homebrew/bin:$HOME/.local/bin:$HOME/.hermes/bin:/usr/local/bin:$PATH"
# === CONFIG ===
STATE_FILE="${PANE_STATE_FILE:-$HOME/.hermes/pane-state.json}"
LOG_FILE="${PANE_WATCHDOG_LOG:-$HOME/.hermes/logs/pane-watchdog.log}"
CHECK_INTERVAL="${PANE_CHECK_INTERVAL:-120}" # seconds between cycles
STUCK_CYCLES=2 # unchanged cycles before STUCK
MAX_RESTART_ATTEMPTS=3 # per pane per hour
RESTART_COOLDOWN=3600 # seconds between escalation alerts
CAPTURE_LINES=40 # lines of output to hash
# Sessions to monitor (all if empty)
MONITOR_SESSIONS="${PANE_WATCHDOG_SESSIONS:-}"
mkdir -p "$(dirname "$STATE_FILE")" "$(dirname "$LOG_FILE")"
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" >> "$LOG_FILE"
}
# === HELPERS ===
# Capture last N lines of pane output and hash them
capture_pane_hash() {
local target="$1"
local output
output=$(tmux capture-pane -t "$target" -p -S "-${CAPTURE_LINES}" 2>/dev/null || echo "DEAD")
echo -n "$output" | shasum -a 256 | cut -d' ' -f1
}
# Check if pane PID is alive
pane_pid_alive() {
local target="$1"
local pid
pid=$(tmux list-panes -t "$target" -F '#{pane_pid}' 2>/dev/null | head -1 || echo "")
if [ -z "$pid" ]; then
return 1 # pane doesn't exist
fi
kill -0 "$pid" 2>/dev/null
}
# Get pane start command
pane_start_command() {
local target="$1"
tmux list-panes -t "$target" -F '#{pane_start_command}' 2>/dev/null | head -1 || echo "unknown"
}
# Get the pane's current running command (child process)
pane_current_command() {
local target="$1"
tmux list-panes -t "$target" -F '#{pane_current_command}' 2>/dev/null || echo "unknown"
}
# Only restart panes running hermes/agent commands (not zsh, python3 repls, etc.)
is_restartable() {
local cmd="$1"
case "$cmd" in
hermes|*hermes*|*agent*|*timmy*|*kimi*|*claude-loop*|*gemini-loop*)
return 0
;;
*)
return 1
;;
esac
}
# Get session ID from hermes manifest if available
get_hermes_session_id() {
local session_name="$1"
local manifest="$HOME/.hermes/sessions/${session_name}/manifest.json"
if [ -f "$manifest" ]; then
python3 -c "
import json, sys
try:
m = json.load(open('$manifest'))
print(m.get('session_id', m.get('id', '')))
except: pass
" 2>/dev/null || echo ""
else
echo ""
fi
}
# Get last task from pane logs
get_last_task() {
local session_name="$1"
local log_dir="$HOME/.hermes/logs"
# Find the most recent log for this session
local log_file
log_file=$(find "$log_dir" -name "*${session_name}*" -type f -mtime -1 2>/dev/null | sort -r | head -1)
if [ -n "$log_file" ] && [ -f "$log_file" ]; then
# Extract last user prompt or task description
grep -i "task:\|prompt:\|issue\|working on" "$log_file" 2>/dev/null | tail -1 | sed 's/.*[:>] *//' | head -c 200
fi
}
# Restart a pane with a fresh shell/command
restart_pane() {
local target="$1"
local session_name="${target%%:*}"
local session_id last_task cmd
log "RESTART: Attempting to restart $target"
# Kill existing pane
tmux kill-pane -t "$target" 2>/dev/null || true
sleep 1
# Try --resume with session ID
session_id=$(get_hermes_session_id "$session_name")
if [ -n "$session_id" ]; then
log "RESTART: Trying --resume with session $session_id"
tmux split-window -t "$session_name" -d \
"hermes chat --resume '$session_id' 2>&1 | tee -a '$HOME/.hermes/logs/${session_name}-restart.log'"
sleep 2
if pane_pid_alive "${session_name}:1" 2>/dev/null; then
log "RESTART: Success with --resume"
return 0
fi
fi
# Fallback: fresh prompt
last_task=$(get_last_task "$session_name")
if [ -n "$last_task" ]; then
log "RESTART: Fallback — fresh prompt with task: $last_task"
tmux split-window -t "$session_name" -d \
"echo 'Watchdog restart — last task: $last_task' && hermes chat 2>&1 | tee -a '$HOME/.hermes/logs/${session_name}-restart.log'"
else
log "RESTART: Fallback — fresh hermes chat"
tmux split-window -t "$session_name" -d \
"hermes chat 2>&1 | tee -a '$HOME/.hermes/logs/${session_name}-restart.log'"
fi
sleep 2
if pane_pid_alive "${session_name}:1" 2>/dev/null; then
log "RESTART: Fallback restart succeeded"
return 0
else
log "RESTART: FAILED to restart $target"
return 1
fi
}
# === STATE MANAGEMENT ===
read_state() {
if [ -f "$STATE_FILE" ]; then
cat "$STATE_FILE"
else
echo "{}"
fi
}
write_state() {
echo "$1" > "$STATE_FILE"
}
# Update state for a single pane and return JSON status
update_pane_state() {
local target="$1"
local hash="$2"
local is_alive="$3"
local now
now=$(date +%s)
python3 - "$STATE_FILE" "$target" "$hash" "$is_alive" "$now" "$STUCK_CYCLES" <<'PYEOF'
import json, sys, time
state_file = sys.argv[1]
target = sys.argv[2]
new_hash = sys.argv[3]
is_alive = sys.argv[4] == "true"
now = int(sys.argv[5])
stuck_cycles = int(sys.argv[6])
try:
with open(state_file) as f:
state = json.load(f)
except (FileNotFoundError, json.JSONDecodeError):
state = {}
pane = state.get(target, {
"hash": "",
"same_count": 0,
"status": "UNKNOWN",
"last_change": 0,
"last_check": 0,
"restart_attempts": 0,
"last_restart": 0,
"current_command": "",
})
if not is_alive:
pane["status"] = "DEAD"
pane["same_count"] = 0
elif new_hash == pane.get("hash", ""):
pane["same_count"] = pane.get("same_count", 0) + 1
if pane["same_count"] >= stuck_cycles:
pane["status"] = "STUCK"
else:
pane["status"] = "STALE" if pane["same_count"] > 0 else "OK"
else:
pane["hash"] = new_hash
pane["same_count"] = 0
pane["status"] = "OK"
pane["last_change"] = now
pane["last_check"] = now
state[target] = pane
with open(state_file, "w") as f:
json.dump(state, f, indent=2)
print(json.dumps(pane))
PYEOF
}
# Reset restart attempt counter if cooldown expired
maybe_reset_restarts() {
local target="$1"
local now
now=$(date +%s)
python3 - "$STATE_FILE" "$target" "$now" "$RESTART_COOLDOWN" <<'PYEOF'
import json, sys
state_file = sys.argv[1]
target = sys.argv[2]
now = int(sys.argv[3])
cooldown = int(sys.argv[4])
with open(state_file) as f:
state = json.load(f)
pane = state.get(target, {})
last_restart = pane.get("last_restart", 0)
if now - last_restart > cooldown:
pane["restart_attempts"] = 0
state[target] = pane
with open(state_file, "w") as f:
json.dump(state, f, indent=2)
print(pane.get("restart_attempts", 0))
PYEOF
}
increment_restart_attempt() {
local target="$1"
local now
now=$(date +%s)
python3 - "$STATE_FILE" "$target" "$now" <<'PYEOF'
import json, sys
state_file = sys.argv[1]
target = sys.argv[2]
now = int(sys.argv[3])
with open(state_file) as f:
state = json.load(f)
pane = state.get(target, {})
pane["restart_attempts"] = pane.get("restart_attempts", 0) + 1
pane["last_restart"] = now
pane["status"] = "RESTARTING"
state[target] = pane
with open(state_file, "w") as f:
json.dump(state, f, indent=2)
print(pane["restart_attempts"])
PYEOF
}
# === CORE CHECK ===
check_pane() {
local target="$1"
local hash is_alive status current_cmd
# Capture state
hash=$(capture_pane_hash "$target")
if pane_pid_alive "$target"; then
is_alive="true"
else
is_alive="false"
fi
# Get current command for the pane
current_cmd=$(pane_current_command "$target")
# Update state and get result
local result
result=$(update_pane_state "$target" "$hash" "$is_alive")
status=$(echo "$result" | python3 -c "import json,sys; print(json.loads(sys.stdin.read()).get('status','UNKNOWN'))" 2>/dev/null || echo "UNKNOWN")
case "$status" in
OK)
# Healthy, do nothing
;;
DEAD)
log "DETECTED: $target is DEAD (PID gone) cmd=$current_cmd"
if is_restartable "$current_cmd"; then
handle_stuck "$target"
else
log "SKIP: $target not a hermes pane (cmd=$current_cmd), not restarting"
fi
;;
STUCK)
log "DETECTED: $target is STUCK (output unchanged for ${STUCK_CYCLES} cycles) cmd=$current_cmd"
if is_restartable "$current_cmd"; then
handle_stuck "$target"
else
log "SKIP: $target not a hermes pane (cmd=$current_cmd), not restarting"
fi
;;
STALE)
# Output unchanged but within threshold — just log
local count
count=$(echo "$result" | python3 -c "import json,sys; print(json.loads(sys.stdin.read()).get('same_count',0))" 2>/dev/null || echo "?")
log "STALE: $target unchanged for $count cycle(s)"
;;
esac
}
handle_stuck() {
local target="$1"
local session_name="${target%%:*}"
local attempts
# Check restart budget
attempts=$(maybe_reset_restarts "$target")
if [ "$attempts" -ge "$MAX_RESTART_ATTEMPTS" ]; then
log "ESCALATION: $target stuck ${attempts}x — manual intervention needed"
echo "ALERT: $target stuck after $attempts restart attempts" >&2
return 1
fi
attempts=$(increment_restart_attempt "$target")
log "ACTION: Restarting $target (attempt $attempts/$MAX_RESTART_ATTEMPTS)"
if restart_pane "$target"; then
log "OK: $target restarted successfully"
else
log "FAIL: $target restart failed (attempt $attempts)"
fi
}
check_all_sessions() {
local sessions
if [ -n "$MONITOR_SESSIONS" ]; then
IFS=',' read -ra sessions <<< "$MONITOR_SESSIONS"
else
sessions=()
while IFS= read -r line; do
[ -n "$line" ] && sessions+=("$line")
done < <(tmux list-sessions -F '#{session_name}' 2>/dev/null || true)
fi
local total=0 stuck=0 dead=0 ok=0
for session in "${sessions[@]}"; do
[ -z "$session" ] && continue
# Get pane targets
local panes
panes=$(tmux list-panes -t "$session" -F "${session}:#{window_index}.#{pane_index}" 2>/dev/null || true)
for target in $panes; do
check_pane "$target"
total=$((total + 1))
done
done
log "CHECK: Processed $total panes"
}
# === STATUS DISPLAY ===
show_status() {
if [ ! -f "$STATE_FILE" ]; then
echo "No pane state file found at $STATE_FILE"
echo "Run pane-watchdog.sh once to initialize."
exit 0
fi
python3 - "$STATE_FILE" <<'PYEOF'
import json, sys, time
state_file = sys.argv[1]
try:
with open(state_file) as f:
state = json.load(f)
except (FileNotFoundError, json.JSONDecodeError):
print("No state data yet.")
sys.exit(0)
if not state:
print("No panes tracked.")
sys.exit(0)
now = int(time.time())
print(f"{'PANE':<35} {'STATUS':<12} {'STALE':<6} {'LAST CHANGE':<15} {'RESTARTS'}")
print("-" * 90)
for target in sorted(state.keys()):
p = state[target]
status = p.get("status", "?")
same = p.get("same_count", 0)
last_change = p.get("last_change", 0)
restarts = p.get("restart_attempts", 0)
if last_change:
ago = now - last_change
if ago < 60:
change_str = f"{ago}s ago"
elif ago < 3600:
change_str = f"{ago//60}m ago"
else:
change_str = f"{ago//3600}h ago"
else:
change_str = "never"
# Color code
if status == "OK":
icon = "✓"
elif status == "STUCK":
icon = "✖"
elif status == "DEAD":
icon = "☠"
elif status == "STALE":
icon = "⏳"
else:
icon = "?"
print(f" {icon} {target:<32} {status:<12} {same:<6} {change_str:<15} {restarts}")
PYEOF
}
# === DAEMON MODE ===
run_daemon() {
log "DAEMON: Starting (interval=${CHECK_INTERVAL}s, stuck_threshold=${STUCK_CYCLES})"
echo "Pane watchdog started. Checking every ${CHECK_INTERVAL}s. Ctrl+C to stop."
echo "Log: $LOG_FILE"
echo "State: $STATE_FILE"
echo ""
while true; do
check_all_sessions
sleep "$CHECK_INTERVAL"
done
}
# === MAIN ===
case "${1:-}" in
--daemon)
run_daemon
;;
--status)
show_status
;;
--session)
if [ -z "${2:-}" ]; then
echo "Usage: pane-watchdog.sh --session SESSION_NAME"
exit 1
fi
MONITOR_SESSIONS="$2"
check_all_sessions
;;
--help|-h)
echo "pane-watchdog.sh — Detect stuck/dead tmux panes and auto-restart"
echo ""
echo "Usage:"
echo " pane-watchdog.sh # One-shot check"
echo " pane-watchdog.sh --daemon # Continuous monitoring"
echo " pane-watchdog.sh --status # Show pane state"
echo " pane-watchdog.sh --session S # Check one session"
echo ""
echo "Config (env vars):"
echo " PANE_CHECK_INTERVAL Seconds between checks (default: 120)"
echo " PANE_WATCHDOG_SESSIONS Comma-separated session names"
echo " PANE_STATE_FILE State file path"
echo " STUCK_CYCLES Unchanged cycles before STUCK (default: 2)"
;;
*)
check_all_sessions
;;
esac

View File

@@ -1,191 +0,0 @@
#!/usr/bin/env python3
"""pr-checklist.py -- Automated PR quality gate for Gitea CI.
Enforces the review standards that agents skip when left to self-approve.
Runs in CI on every pull_request event. Exits non-zero on any failure.
Checks:
1. PR has >0 file changes (no empty PRs)
2. PR branch is not behind base branch
3. PR does not bundle >3 unrelated issues
4. Changed .py files pass syntax check (python -c import)
5. Changed .sh files are executable
6. PR body references an issue number
7. At least 1 non-author review exists (warning only)
Refs: #393 (PERPLEXITY-08), Epic #385
"""
from __future__ import annotations
import json
import os
import re
import subprocess
import sys
from pathlib import Path
def fail(msg: str) -> None:
print(f"FAIL: {msg}", file=sys.stderr)
def warn(msg: str) -> None:
print(f"WARN: {msg}", file=sys.stderr)
def ok(msg: str) -> None:
print(f" OK: {msg}")
def get_changed_files() -> list[str]:
"""Return list of files changed in this PR vs base branch."""
base = os.environ.get("GITHUB_BASE_REF", "main")
try:
result = subprocess.run(
["git", "diff", "--name-only", f"origin/{base}...HEAD"],
capture_output=True, text=True, check=True,
)
return [f for f in result.stdout.strip().splitlines() if f]
except subprocess.CalledProcessError:
# Fallback: diff against HEAD~1
result = subprocess.run(
["git", "diff", "--name-only", "HEAD~1"],
capture_output=True, text=True, check=True,
)
return [f for f in result.stdout.strip().splitlines() if f]
def check_has_changes(files: list[str]) -> bool:
"""Check 1: PR has >0 file changes."""
if not files:
fail("PR has 0 file changes. Empty PRs are not allowed.")
return False
ok(f"PR changes {len(files)} file(s)")
return True
def check_not_behind_base() -> bool:
"""Check 2: PR branch is not behind base."""
base = os.environ.get("GITHUB_BASE_REF", "main")
try:
result = subprocess.run(
["git", "rev-list", "--count", f"HEAD..origin/{base}"],
capture_output=True, text=True, check=True,
)
behind = int(result.stdout.strip())
if behind > 0:
fail(f"Branch is {behind} commit(s) behind {base}. Rebase or merge.")
return False
ok(f"Branch is up-to-date with {base}")
return True
except (subprocess.CalledProcessError, ValueError):
warn("Could not determine if branch is behind base (git fetch may be needed)")
return True # Don't block on CI fetch issues
def check_issue_bundling(pr_body: str) -> bool:
"""Check 3: PR does not bundle >3 unrelated issues."""
issue_refs = set(re.findall(r"#(\d+)", pr_body))
if len(issue_refs) > 3:
fail(f"PR references {len(issue_refs)} issues ({', '.join(sorted(issue_refs))}). "
"Max 3 per PR to prevent bundling. Split into separate PRs.")
return False
ok(f"PR references {len(issue_refs)} issue(s) (max 3)")
return True
def check_python_syntax(files: list[str]) -> bool:
"""Check 4: Changed .py files have valid syntax."""
py_files = [f for f in files if f.endswith(".py") and Path(f).exists()]
if not py_files:
ok("No Python files changed")
return True
all_ok = True
for f in py_files:
result = subprocess.run(
[sys.executable, "-c", f"import ast; ast.parse(open('{f}').read())"],
capture_output=True, text=True,
)
if result.returncode != 0:
fail(f"Syntax error in {f}: {result.stderr.strip()[:200]}")
all_ok = False
if all_ok:
ok(f"All {len(py_files)} Python file(s) pass syntax check")
return all_ok
def check_shell_executable(files: list[str]) -> bool:
"""Check 5: Changed .sh files are executable."""
sh_files = [f for f in files if f.endswith(".sh") and Path(f).exists()]
if not sh_files:
ok("No shell scripts changed")
return True
all_ok = True
for f in sh_files:
if not os.access(f, os.X_OK):
fail(f"{f} is not executable. Run: chmod +x {f}")
all_ok = False
if all_ok:
ok(f"All {len(sh_files)} shell script(s) are executable")
return all_ok
def check_issue_reference(pr_body: str) -> bool:
"""Check 6: PR body references an issue number."""
if re.search(r"#\d+", pr_body):
ok("PR body references at least one issue")
return True
fail("PR body does not reference any issue (e.g. #123). "
"Every PR must trace to an issue.")
return False
def main() -> int:
print("=" * 60)
print("PR Checklist — Automated Quality Gate")
print("=" * 60)
print()
# Get PR body from env or git log
pr_body = os.environ.get("PR_BODY", "")
if not pr_body:
try:
result = subprocess.run(
["git", "log", "--format=%B", "-1"],
capture_output=True, text=True, check=True,
)
pr_body = result.stdout
except subprocess.CalledProcessError:
pr_body = ""
files = get_changed_files()
failures = 0
checks = [
check_has_changes(files),
check_not_behind_base(),
check_issue_bundling(pr_body),
check_python_syntax(files),
check_shell_executable(files),
check_issue_reference(pr_body),
]
failures = sum(1 for c in checks if not c)
print()
print("=" * 60)
if failures:
print(f"RESULT: {failures} check(s) FAILED")
print("Fix the issues above and push again.")
return 1
else:
print("RESULT: All checks passed")
return 0
if __name__ == "__main__":
sys.exit(main())

View File

@@ -1,4 +1,3 @@
#!/usr/bin/env python3
"""
Soul Eval Gate — The Conscience of the Training Pipeline

View File

@@ -3,7 +3,7 @@
# Uses Hermes CLI plus workforce-manager to triage and review.
# Timmy is the brain. Other agents are the hands.
set -uo pipefail\n\nSCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
set -uo pipefail
LOG_DIR="$HOME/.hermes/logs"
LOG="$LOG_DIR/timmy-orchestrator.log"
@@ -40,7 +40,6 @@ gather_state() {
> "$state_dir/unassigned.txt"
> "$state_dir/open_prs.txt"
> "$state_dir/agent_status.txt"
> "$state_dir/uncommitted_work.txt"
for repo in $REPOS; do
local short=$(echo "$repo" | cut -d/ -f2)
@@ -72,24 +71,6 @@ for p in json.load(sys.stdin):
tail -50 "/tmp/kimi-heartbeat.log" 2>/dev/null | grep -c "FAILED:" | xargs -I{} echo "Kimi recent failures: {}" >> "$state_dir/agent_status.txt"
tail -1 "/tmp/kimi-heartbeat.log" 2>/dev/null | xargs -I{} echo "Kimi last event: {}" >> "$state_dir/agent_status.txt"
# Scan worktrees for uncommitted work
for wt_dir in "$HOME/worktrees"/*/; do
[ -d "$wt_dir" ] || continue
[ -d "$wt_dir/.git" ] || continue
local dirty
dirty=$(cd "$wt_dir" && git status --porcelain 2>/dev/null | wc -l | tr -d " ")
if [ "${dirty:-0}" -gt 0 ]; then
local branch
branch=$(cd "$wt_dir" && git branch --show-current 2>/dev/null || echo "?")
local age=""
local last_commit
last_commit=$(cd "$wt_dir" && git log -1 --format=%ct 2>/dev/null || echo 0)
local now=$(date +%s)
local stale_mins=$(( (now - last_commit) / 60 ))
echo "DIR=$wt_dir BRANCH=$branch DIRTY=$dirty STALE=${stale_mins}m" >> "$state_dir/uncommitted_work.txt"
fi
done
echo "$state_dir"
}
@@ -100,25 +81,6 @@ run_triage() {
log "Cycle: $unassigned_count unassigned, $pr_count open PRs"
# Check for uncommitted work — nag if stale
local uncommitted_count
uncommitted_count=$(wc -l < "$state_dir/uncommitted_work.txt" 2>/dev/null | tr -d " " || echo 0)
if [ "${uncommitted_count:-0}" -gt 0 ]; then
log "WARNING: $uncommitted_count worktree(s) with uncommitted work"
while IFS= read -r line; do
log " UNCOMMITTED: $line"
# Auto-commit stale work (>60 min without commit)
local stale=$(echo "$line" | sed 's/.*STALE=\([0-9]*\)m.*/\1/')
local wt_dir=$(echo "$line" | sed 's/.*DIR=\([^ ]*\) .*/\1/')
if [ "${stale:-0}" -gt 60 ]; then
log " AUTO-COMMITTING stale work in $wt_dir (${stale}m stale)"
(cd "$wt_dir" && git add -A && git commit -m "WIP: orchestrator auto-commit — ${stale}m stale work
Preserved by timmy-orchestrator to prevent loss." 2>/dev/null && git push 2>/dev/null) && log " COMMITTED: $wt_dir" || log " COMMIT FAILED: $wt_dir"
fi
done < "$state_dir/uncommitted_work.txt"
fi
# If nothing to do, skip the LLM call
if [ "$unassigned_count" -eq 0 ] && [ "$pr_count" -eq 0 ]; then
log "Nothing to triage"
@@ -236,12 +198,6 @@ FOOTER
log "=== Timmy Orchestrator Started (PID $$) ==="
log "Cycle: ${CYCLE_INTERVAL}s | Auto-assign: ${AUTO_ASSIGN_UNASSIGNED} | Inference surface: Hermes CLI"
# Start auto-commit-guard daemon for work preservation
if ! pgrep -f "auto-commit-guard.sh" >/dev/null 2>&1; then
nohup bash "$SCRIPT_DIR/auto-commit-guard.sh" 120 >> "$LOG_DIR/auto-commit-guard.log" 2>&1 &
log "Started auto-commit-guard daemon (PID $!)"
fi
WORKFORCE_CYCLE=0
while true; do

View File

@@ -1,97 +0,0 @@
#!/usr/bin/env bash
# ── tmux-resume.sh — Cold-start Session Resume ───────────────────────────
# Reads ~/.timmy/tmux-state.json and resumes hermes sessions.
# Run at startup to restore pane state after supervisor restart.
# ──────────────────────────────────────────────────────────────────────────
set -euo pipefail
MANIFEST="${HOME}/.timmy/tmux-state.json"
if [ ! -f "$MANIFEST" ]; then
echo "[tmux-resume] No manifest found at $MANIFEST — starting fresh."
exit 0
fi
python3 << 'PYEOF'
import json, subprocess, os, sys
from datetime import datetime, timezone
MANIFEST = os.path.expanduser("~/.timmy/tmux-state.json")
def run(cmd):
try:
r = subprocess.run(cmd, shell=True, capture_output=True, text=True, timeout=30)
return r.stdout.strip(), r.returncode
except Exception as e:
return str(e), 1
def session_exists(name):
out, _ = run(f"tmux has-session -t '{name}' 2>&1")
return "can't find" not in out.lower()
with open(MANIFEST) as f:
state = json.load(f)
ts = state.get("timestamp", "unknown")
age = "unknown"
try:
t = datetime.fromisoformat(ts.replace("Z", "+00:00"))
delta = datetime.now(timezone.utc) - t
mins = int(delta.total_seconds() / 60)
if mins < 60:
age = f"{mins}m ago"
else:
age = f"{mins//60}h {mins%60}m ago"
except:
pass
print(f"[tmux-resume] Manifest from {age}: {state['summary']['total_sessions']} sessions, "
f"{state['summary']['hermes_panes']} hermes panes")
restored = 0
skipped = 0
for pane in state.get("panes", []):
if not pane.get("is_hermes"):
continue
addr = pane["address"] # e.g. "BURN:2.3"
session = addr.split(":")[0]
session_id = pane.get("session_id")
profile = pane.get("profile", "default")
model = pane.get("model", "")
task = pane.get("task", "")
# Skip if session already exists (already running)
if session_exists(session):
print(f" [skip] {addr} — session '{session}' already exists")
skipped += 1
continue
# Respawn hermes with session resume if we have a session ID
if session_id:
print(f" [resume] {addr} — profile={profile} model={model} session={session_id}")
cmd = f"hermes chat --resume {session_id}"
else:
print(f" [start] {addr} — profile={profile} model={model} (no session ID)")
cmd = f"hermes chat --profile {profile}"
# Create tmux session and run hermes
run(f"tmux new-session -d -s '{session}' -n '{session}:0'")
run(f"tmux send-keys -t '{session}' '{cmd}' Enter")
restored += 1
# Write resume log
log = {
"resumed_at": datetime.now(timezone.utc).isoformat(),
"manifest_age": age,
"restored": restored,
"skipped": skipped,
}
log_path = os.path.expanduser("~/.timmy/tmux-resume.log")
with open(log_path, "w") as f:
json.dump(log, f, indent=2)
print(f"[tmux-resume] Done: {restored} restored, {skipped} skipped")
PYEOF

View File

@@ -1,237 +0,0 @@
#!/usr/bin/env bash
# ── tmux-state.sh — Session State Persistence Manifest ───────────────────
# Snapshots all tmux pane state to ~/.timmy/tmux-state.json
# Run every supervisor cycle. Cold-start reads this manifest to resume.
# ──────────────────────────────────────────────────────────────────────────
set -euo pipefail
MANIFEST="${HOME}/.timmy/tmux-state.json"
mkdir -p "$(dirname "$MANIFEST")"
python3 << 'PYEOF'
import json, subprocess, os, time, re, sys
from datetime import datetime, timezone
from pathlib import Path
MANIFEST = os.path.expanduser("~/.timmy/tmux-state.json")
def run(cmd):
"""Run command, return stdout or empty string."""
try:
r = subprocess.run(cmd, shell=True, capture_output=True, text=True, timeout=5)
return r.stdout.strip()
except Exception:
return ""
def get_sessions():
"""Get all tmux sessions with metadata."""
raw = run("tmux list-sessions -F '#{session_name}|#{session_windows}|#{session_created}|#{session_attached}|#{session_group}|#{session_id}'")
sessions = []
for line in raw.splitlines():
if not line.strip():
continue
parts = line.split("|")
if len(parts) < 6:
continue
sessions.append({
"name": parts[0],
"windows": int(parts[1]),
"created_epoch": int(parts[2]),
"created": datetime.fromtimestamp(int(parts[2]), tz=timezone.utc).isoformat(),
"attached": parts[3] == "1",
"group": parts[4],
"id": parts[5],
})
return sessions
def get_panes():
"""Get all tmux panes with full metadata."""
fmt = '#{session_name}|#{window_index}|#{pane_index}|#{pane_pid}|#{pane_title}|#{pane_width}x#{pane_height}|#{pane_active}|#{pane_current_command}|#{pane_start_command}|#{pane_tty}|#{pane_id}|#{window_name}|#{session_id}'
raw = run(f"tmux list-panes -a -F '{fmt}'")
panes = []
for line in raw.splitlines():
if not line.strip():
continue
parts = line.split("|")
if len(parts) < 13:
continue
session, win, pane, pid, title, size, active, cmd, start_cmd, tty, pane_id, win_name, sess_id = parts[:13]
w, h = size.split("x") if "x" in size else ("0", "0")
panes.append({
"session": session,
"window_index": int(win),
"window_name": win_name,
"pane_index": int(pane),
"pane_id": pane_id,
"pid": int(pid) if pid.isdigit() else 0,
"title": title,
"width": int(w),
"height": int(h),
"active": active == "1",
"command": cmd,
"start_command": start_cmd,
"tty": tty,
"session_id": sess_id,
})
return panes
def extract_hermes_state(pane):
"""Try to extract hermes session info from a pane."""
info = {
"is_hermes": False,
"profile": None,
"model": None,
"provider": None,
"session_id": None,
"task": None,
}
title = pane.get("title", "")
cmd = pane.get("command", "")
start = pane.get("start_command", "")
# Detect hermes processes
is_hermes = any(k in (title + " " + cmd + " " + start).lower()
for k in ["hermes", "timmy", "mimo", "claude", "gpt"])
if not is_hermes and cmd not in ("python3", "python3.11", "bash", "zsh", "fish"):
return info
# Try reading pane content for model/provider clues
pane_content = run(f"tmux capture-pane -t '{pane['session']}:{pane['window_index']}.{pane['pane_index']}' -p -S -20 2>/dev/null")
# Extract model from pane content patterns
model_patterns = [
r"(?:mimo-v2-pro|claude-[\w.-]+|gpt-[\w.-]+|gemini-[\w.-]+|qwen[\w:.-]*)",
]
for pat in model_patterns:
m = re.search(pat, pane_content, re.IGNORECASE)
if m:
info["model"] = m.group(0)
info["is_hermes"] = True
break
# Provider inference from model
model = (info["model"] or "").lower()
if "mimo" in model:
info["provider"] = "nous"
elif "claude" in model:
info["provider"] = "anthropic"
elif "gpt" in model:
info["provider"] = "openai"
elif "gemini" in model:
info["provider"] = "google"
elif "qwen" in model:
info["provider"] = "custom"
# Profile from session name
session = pane["session"].lower()
if "burn" in session:
info["profile"] = "burn"
elif session in ("dev", "0"):
info["profile"] = "default"
else:
info["profile"] = session
# Try to extract session ID (hermes uses UUIDs)
uuid_match = re.findall(r'[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}', pane_content)
if uuid_match:
info["session_id"] = uuid_match[-1] # most recent
info["is_hermes"] = True
# Last prompt — grab the last user-like line
lines = pane_content.splitlines()
for line in reversed(lines):
stripped = line.strip()
if stripped and not stripped.startswith(("─", "│", "╭", "╰", "▸", "●", "○")) and len(stripped) > 10:
info["task"] = stripped[:200]
break
return info
def get_context_percent(pane):
"""Estimate context usage from pane content heuristics."""
content = run(f"tmux capture-pane -t '{pane['session']}:{pane['window_index']}.{pane['pane_index']}' -p -S -5 2>/dev/null")
# Look for context indicators like "ctx 45%" or "[░░░░░░░░░░]"
ctx_match = re.search(r'ctx\s*(\d+)%', content)
if ctx_match:
return int(ctx_match.group(1))
bar_match = re.search(r'\[(░+█*█*░*)\]', content)
if bar_match:
bar = bar_match.group(1)
filled = bar.count('█')
total = len(bar)
if total > 0:
return int((filled / total) * 100)
return None
def build_manifest():
"""Build the full tmux state manifest."""
now = datetime.now(timezone.utc)
sessions = get_sessions()
panes = get_panes()
pane_manifests = []
for p in panes:
hermes = extract_hermes_state(p)
ctx = get_context_percent(p)
entry = {
"address": f"{p['session']}:{p['window_index']}.{p['pane_index']}",
"pane_id": p["pane_id"],
"pid": p["pid"],
"size": f"{p['width']}x{p['height']}",
"active": p["active"],
"command": p["command"],
"title": p["title"],
"profile": hermes["profile"],
"model": hermes["model"],
"provider": hermes["provider"],
"session_id": hermes["session_id"],
"task": hermes["task"],
"context_pct": ctx,
"is_hermes": hermes["is_hermes"],
}
pane_manifests.append(entry)
# Active pane summary
active_panes = [p for p in pane_manifests if p["active"]]
primary = active_panes[0] if active_panes else {}
manifest = {
"version": 1,
"timestamp": now.isoformat(),
"timestamp_epoch": int(now.timestamp()),
"hostname": os.uname().nodename,
"sessions": sessions,
"panes": pane_manifests,
"summary": {
"total_sessions": len(sessions),
"total_panes": len(pane_manifests),
"hermes_panes": sum(1 for p in pane_manifests if p["is_hermes"]),
"active_pane": primary.get("address"),
"active_model": primary.get("model"),
"active_provider": primary.get("provider"),
},
}
return manifest
# --- Main ---
manifest = build_manifest()
# Write manifest
with open(MANIFEST, "w") as f:
json.dump(manifest, f, indent=2)
# Also write to ~/.hermes/tmux-state.json for compatibility
hermes_manifest = os.path.expanduser("~/.hermes/tmux-state.json")
os.makedirs(os.path.dirname(hermes_manifest), exist_ok=True)
with open(hermes_manifest, "w") as f:
json.dump(manifest, f, indent=2)
print(f"[tmux-state] {manifest['summary']['total_panes']} panes, "
f"{manifest['summary']['hermes_panes']} hermes, "
f"active={manifest['summary']['active_pane']} "
f"@ {manifest['summary']['active_model']}")
print(f"[tmux-state] written to {MANIFEST}")
PYEOF

View File

@@ -1,5 +1,5 @@
{
"updated_at": "2026-04-13T02:02:07.001824",
"updated_at": "2026-03-28T09:54:34.822062",
"platforms": {
"discord": [
{
@@ -27,81 +27,11 @@
"name": "Timmy Time",
"type": "group",
"thread_id": null
},
{
"id": "-1003664764329:85",
"name": "Timmy Time / topic 85",
"type": "group",
"thread_id": "85"
},
{
"id": "-1003664764329:111",
"name": "Timmy Time / topic 111",
"type": "group",
"thread_id": "111"
},
{
"id": "-1003664764329:173",
"name": "Timmy Time / topic 173",
"type": "group",
"thread_id": "173"
},
{
"id": "7635059073",
"name": "Trip T",
"type": "dm",
"thread_id": null
},
{
"id": "-1003664764329:244",
"name": "Timmy Time / topic 244",
"type": "group",
"thread_id": "244"
},
{
"id": "-1003664764329:972",
"name": "Timmy Time / topic 972",
"type": "group",
"thread_id": "972"
},
{
"id": "-1003664764329:931",
"name": "Timmy Time / topic 931",
"type": "group",
"thread_id": "931"
},
{
"id": "-1003664764329:957",
"name": "Timmy Time / topic 957",
"type": "group",
"thread_id": "957"
},
{
"id": "-1003664764329:1297",
"name": "Timmy Time / topic 1297",
"type": "group",
"thread_id": "1297"
},
{
"id": "-1003664764329:1316",
"name": "Timmy Time / topic 1316",
"type": "group",
"thread_id": "1316"
}
],
"whatsapp": [],
"slack": [],
"signal": [],
"mattermost": [],
"matrix": [],
"homeassistant": [],
"email": [],
"sms": [],
"dingtalk": [],
"feishu": [],
"wecom": [],
"wecom_callback": [],
"weixin": [],
"bluebubbles": []
"sms": []
}
}

View File

@@ -7,7 +7,7 @@ Purpose:
## What it is
Code Claw is a separate local runtime from Hermes.
Code Claw is a separate local runtime from Hermes/OpenClaw.
Current lane:
- runtime: local patched `~/code-claw`

View File

@@ -1,23 +1,31 @@
model:
default: claude-opus-4-6
provider: anthropic
default: hermes4:14b
provider: custom
context_length: 65536
base_url: http://localhost:8081/v1
toolsets:
- all
agent:
max_turns: 30
reasoning_effort: medium
reasoning_effort: xhigh
verbose: false
terminal:
backend: local
cwd: .
timeout: 180
env_passthrough: []
docker_image: nikolaik/python-nodejs:python3.11-nodejs20
docker_forward_env: []
singularity_image: docker://nikolaik/python-nodejs:python3.11-nodejs20
modal_image: nikolaik/python-nodejs:python3.11-nodejs20
daytona_image: nikolaik/python-nodejs:python3.11-nodejs20
container_cpu: 1
container_memory: 5120
container_embeddings:
provider: ollama
model: nomic-embed-text
base_url: http://localhost:11434/v1
memory: 5120
container_disk: 51200
container_persistent: true
docker_volumes: []
@@ -25,74 +33,89 @@ terminal:
persistent_shell: true
browser:
inactivity_timeout: 120
command_timeout: 30
record_sessions: false
checkpoints:
enabled: false
enabled: true
max_snapshots: 50
compression:
enabled: true
threshold: 0.5
summary_model: qwen3:30b
summary_provider: custom
summary_base_url: http://localhost:11434/v1
target_ratio: 0.2
protect_last_n: 20
summary_model: ''
summary_provider: ''
summary_base_url: ''
synthesis_model:
provider: custom
model: llama3:70b
base_url: http://localhost:8081/v1
smart_model_routing:
enabled: false
max_simple_chars: 160
max_simple_words: 28
cheap_model: {}
enabled: true
max_simple_chars: 400
max_simple_words: 75
cheap_model:
provider: 'ollama'
model: 'gemma2:2b'
base_url: 'http://localhost:11434/v1'
api_key: ''
auxiliary:
vision:
provider: custom
model: qwen3:30b
base_url: 'http://localhost:11434/v1'
api_key: 'ollama'
provider: auto
model: ''
base_url: ''
api_key: ''
timeout: 30
web_extract:
provider: custom
model: qwen3:30b
base_url: 'http://localhost:11434/v1'
api_key: 'ollama'
provider: auto
model: ''
base_url: ''
api_key: ''
compression:
provider: custom
model: qwen3:30b
base_url: 'http://localhost:11434/v1'
api_key: 'ollama'
provider: auto
model: ''
base_url: ''
api_key: ''
session_search:
provider: custom
model: qwen3:30b
base_url: 'http://localhost:11434/v1'
api_key: 'ollama'
provider: auto
model: ''
base_url: ''
api_key: ''
skills_hub:
provider: custom
model: qwen3:30b
base_url: 'http://localhost:11434/v1'
api_key: 'ollama'
provider: auto
model: ''
base_url: ''
api_key: ''
approval:
provider: auto
model: ''
base_url: ''
api_key: ''
mcp:
provider: custom
model: qwen3:30b
base_url: 'http://localhost:11434/v1'
api_key: 'ollama'
provider: auto
model: ''
base_url: ''
api_key: ''
flush_memories:
provider: custom
model: qwen3:30b
base_url: 'http://localhost:11434/v1'
api_key: 'ollama'
provider: auto
model: ''
base_url: ''
api_key: ''
display:
compact: false
personality: ''
resume_display: full
busy_input_mode: interrupt
bell_on_complete: false
show_reasoning: false
streaming: false
show_cost: false
skin: timmy
tool_progress_command: false
tool_progress: all
privacy:
redact_pii: false
redact_pii: true
tts:
provider: edge
edge:
@@ -101,7 +124,7 @@ tts:
voice_id: pNInz6obpgDQGcFmaJgB
model_id: eleven_multilingual_v2
openai:
model: gpt-4o-mini-tts
model: '' # disabled — use edge TTS locally
voice: alloy
neutts:
ref_audio: ''
@@ -137,6 +160,7 @@ delegation:
provider: ''
base_url: ''
api_key: ''
max_iterations: 50
prefill_messages_file: ''
honcho: {}
timezone: ''
@@ -150,7 +174,16 @@ approvals:
command_allowlist: []
quick_commands: {}
personalities: {}
mesh:
enabled: true
blackboard_provider: local
nostr_discovery: true
consensus_mode: competitive
security:
sovereign_audit: true
no_phone_home: true
redact_secrets: true
tirith_enabled: true
tirith_path: tirith
@@ -160,66 +193,55 @@ security:
enabled: false
domains: []
shared_files: []
# Author whitelist for task router (Issue #132)
# Only users in this list can submit tasks via Gitea issues
# Empty list = deny all (secure by default)
# Set via env var TIMMY_AUTHOR_WHITELIST as comma-separated list
author_whitelist: []
_config_version: 9
_config_version: 10
platforms:
api_server:
enabled: true
extra:
host: 0.0.0.0
port: 8642
session_reset:
mode: none
idle_minutes: 0
custom_providers:
- name: Local Ollama
base_url: http://localhost:11434/v1
api_key: ollama
model: qwen3:30b
- name: Local llama.cpp
base_url: http://localhost:8081/v1
api_key: none
model: hermes4:14b
# ── Emergency cloud provider — not used by default or any cron job.
# Available for explicit override only: hermes --model gemini-2.5-pro
- name: Google Gemini (emergency only)
base_url: https://generativelanguage.googleapis.com/v1beta/openai
api_key_env: GEMINI_API_KEY
model: gemini-2.5-pro
system_prompt_suffix: "You are Timmy. Your soul is defined in SOUL.md \u2014 read\
\ it, live it.\nYou run locally on your owner's machine via Ollama. You never phone\
\ home.\nYou speak plainly. You prefer short sentences. Brevity is a kindness.\n\
Source distinction: Tag every factual claim inline. Default is [generated] — you\
\ are pattern-matching from training data. Only use [retrieved] when you can name\
\ the specific tool call or document from THIS conversation that provided the fact.\
\ If no tool was called, every claim is [generated]. No exceptions.\n\
Refusal over fabrication: When you generate a specific claim — a date, a number,\
\ a price, a version, a URL, a current event — and you cannot name a source from\
\ this conversation, say 'I don't know' instead. Do not guess. Do not hedge with\
\ 'probably' or 'approximately' as a substitute for knowledge. If your only source\
\ is training data and the claim could be wrong or outdated, the honest answer is\
\ 'I don't know — I can look this up if you'd like.' Prefer a true 'I don't know'\
\ over a plausible fabrication.\nSovereignty and service always.\n"
\ it, live it.\nYou run locally on your owner's machine via llama.cpp. You never\
\ phone home.\nYou speak plainly. You prefer short sentences. Brevity is a kindness.\n\
When you don't know something, say so. Refusal over fabrication.\nSovereignty and\
\ service always.\n"
skills:
creation_nudge_interval: 15
# ── Fallback Model ────────────────────────────────────────────────────
# Automatic provider failover when primary is unavailable.
# Uncomment and configure to enable. Triggers on rate limits (429),
# overload (529), service errors (503), or connection failures.
#
# Supported providers:
# openrouter (OPENROUTER_API_KEY) — routes to any model
# openai-codex (OAuth — hermes login) — OpenAI Codex
# nous (OAuth — hermes login) — Nous Portal
# zai (ZAI_API_KEY) — Z.AI / GLM
# kimi-coding (KIMI_API_KEY) — Kimi / Moonshot
# minimax (MINIMAX_API_KEY) — MiniMax
# minimax-cn (MINIMAX_CN_API_KEY) — MiniMax (China)
#
# For custom OpenAI-compatible endpoints, add base_url and api_key_env.
#
# fallback_model:
# provider: openrouter
# model: anthropic/claude-sonnet-4
#
# ── Smart Model Routing ────────────────────────────────────────────────
# Optional cheap-vs-strong routing for simple turns.
# Keeps the primary model for complex work, but can route short/simple
# messages to a cheaper model across providers.
#
# smart_model_routing:
# enabled: true
# max_simple_chars: 160
# max_simple_words: 28
# cheap_model:
# provider: openrouter
# model: google/gemini-2.5-flash
DISCORD_HOME_CHANNEL: '1476292315814297772'
providers:
ollama:
base_url: http://localhost:11434/v1
model: hermes3:latest
mcp_servers:
morrowind:
command: python3
args:
- /Users/apayne/.timmy/morrowind/mcp_server.py
env: {}
timeout: 30
crucible:
command: /Users/apayne/.hermes/hermes-agent/venv/bin/python3
args:
- /Users/apayne/.hermes/bin/crucible_mcp_server.py
env: {}
timeout: 120
connect_timeout: 60
fallback_model:
provider: ollama
model: hermes3:latest
base_url: http://localhost:11434/v1
api_key: ''

View File

@@ -1,212 +0,0 @@
[
{
"job_id": "9e0624269ba7",
"name": "Triage Heartbeat",
"schedule": "every 15m",
"state": "paused"
},
{
"job_id": "e29eda4a8548",
"name": "PR Review Sweep",
"schedule": "every 30m",
"state": "scheduled"
},
{
"job_id": "a77a87392582",
"name": "Health Monitor",
"schedule": "every 5m",
"state": "scheduled"
},
{
"job_id": "5e9d952871bc",
"name": "Agent Status Check",
"schedule": "every 10m",
"state": "paused"
},
{
"job_id": "36fb2f630a17",
"name": "Hermes Philosophy Loop",
"schedule": "every 1440m",
"state": "paused"
},
{
"job_id": "b40a96a2f48c",
"name": "wolf-eval-cycle",
"schedule": "every 240m",
"state": "paused"
},
{
"job_id": "4204e568b862",
"name": "Burn Mode \u2014 Timmy Orchestrator",
"schedule": "every 15m",
"state": "scheduled"
},
{
"job_id": "0944a976d034",
"name": "Burn Mode",
"schedule": "every 15m",
"state": "paused"
},
{
"job_id": "62016b960fa0",
"name": "velocity-engine",
"schedule": "every 30m",
"state": "paused"
},
{
"job_id": "e9d49eeff79c",
"name": "weekly-skill-extraction",
"schedule": "every 10080m",
"state": "scheduled"
},
{
"job_id": "75c74a5bb563",
"name": "tower-tick",
"schedule": "every 1m",
"state": "scheduled"
},
{
"job_id": "390a19054d4c",
"name": "Burn Deadman",
"schedule": "every 30m",
"state": "scheduled"
},
{
"job_id": "05e3c13498fa",
"name": "Morning Report \u2014 Burn Mode",
"schedule": "0 6 * * *",
"state": "scheduled"
},
{
"job_id": "64fe44b512b9",
"name": "evennia-morning-report",
"schedule": "0 9 * * *",
"state": "scheduled"
},
{
"job_id": "3896a7fd9747",
"name": "Gitea Priority Inbox",
"schedule": "every 3m",
"state": "scheduled"
},
{
"job_id": "f64c2709270a",
"name": "Config Drift Guard",
"schedule": "every 30m",
"state": "scheduled"
},
{
"job_id": "fc6a75b7102a",
"name": "Gitea Event Watcher",
"schedule": "every 2m",
"state": "scheduled"
},
{
"job_id": "12e59648fb06",
"name": "Burndown Night Watcher",
"schedule": "every 15m",
"state": "scheduled"
},
{
"job_id": "35d3ada9cf8f",
"name": "Mempalace Forge \u2014 Issue Analysis",
"schedule": "every 60m",
"state": "scheduled"
},
{
"job_id": "190b6fb8dc91",
"name": "Mempalace Watchtower \u2014 Fleet Health",
"schedule": "every 30m",
"state": "scheduled"
},
{
"job_id": "710ab589813c",
"name": "Ezra Health Monitor",
"schedule": "every 15m",
"state": "scheduled"
},
{
"job_id": "a0a9cce4575c",
"name": "daily-poka-yoke-ultraplan-awesometools",
"schedule": "every 1440m",
"state": "scheduled"
},
{
"job_id": "adc3a51457bd",
"name": "vps-agent-dispatch",
"schedule": "every 10m",
"state": "scheduled"
},
{
"job_id": "afd2c4eac44d",
"name": "Project Mnemosyne Nightly Burn v2",
"schedule": "*/30 * * * *",
"state": "scheduled"
},
{
"job_id": "f3a3c2832af0",
"name": "gemma4-multimodal-worker",
"schedule": "once in 15m",
"state": "completed"
},
{
"job_id": "c17a85c19838",
"name": "know-thy-father-analyzer",
"schedule": "0 * * * *",
"state": "scheduled"
},
{
"job_id": "2490fc01a14d",
"name": "Testament Burn - 10min work loop",
"schedule": "*/10 * * * *",
"state": "scheduled"
},
{
"job_id": "f5e858159d97",
"name": "Timmy Foundation Burn \u2014 15min PR loop",
"schedule": "*/15 * * * *",
"state": "scheduled"
},
{
"job_id": "5e262fb9bdce",
"name": "nightwatch-health-monitor",
"schedule": "*/15 * * * *",
"state": "scheduled"
},
{
"job_id": "f2b33a9dcf96",
"name": "nightwatch-mempalace-mine",
"schedule": "0 */2 * * *",
"state": "scheduled"
},
{
"job_id": "82cb9e76c54d",
"name": "nightwatch-backlog-burn",
"schedule": "0 */4 * * *",
"state": "scheduled"
},
{
"job_id": "d20e42a52863",
"name": "beacon-sprint",
"schedule": "*/15 * * * *",
"state": "scheduled"
},
{
"job_id": "579269489961",
"name": "testament-story",
"schedule": "*/15 * * * *",
"state": "scheduled"
},
{
"job_id": "2e5f9140d1ab",
"name": "nightwatch-research",
"schedule": "0 */2 * * *",
"state": "scheduled"
},
{
"job_id": "aeba92fd65e6",
"name": "timmy-dreams",
"schedule": "30 5 * * *",
"state": "scheduled"
}
]

View File

@@ -168,35 +168,7 @@
"paused_reason": null,
"skills": [],
"skill": null
},
{
"id": "overnight-rd-nightly",
"name": "Overnight R&D Loop",
"prompt": "Run the overnight R&D automation: Deep Dive paper synthesis, tightening loop for tool-use training data, DPO export sweep, morning briefing prep. All local inference via Ollama.",
"schedule": {
"kind": "cron",
"expr": "0 2 * * *",
"display": "0 2 * * * (10 PM EDT)"
},
"schedule_display": "Nightly at 10 PM EDT",
"repeat": {
"times": null,
"completed": 0
},
"enabled": true,
"created_at": "2026-04-13T02:00:00+00:00",
"next_run_at": null,
"last_run_at": null,
"last_status": null,
"last_error": null,
"deliver": "local",
"origin": "perplexity/overnight-rd-automation",
"state": "scheduled",
"paused_at": null,
"paused_reason": null,
"skills": [],
"skill": null
}
],
"updated_at": "2026-04-13T02:00:00+00:00"
"updated_at": "2026-04-07T15:00:00+00:00"
}

View File

@@ -1,9 +0,0 @@
- name: Nightly Pipeline Scheduler
schedule: '*/30 18-23,0-8 * * *' # Every 30 min, off-peak hours only
tasks:
- name: Check and start pipelines
shell: "bash scripts/nightly-pipeline-scheduler.sh"
env:
PIPELINE_TOKEN_LIMIT: "500000"
PIPELINE_PEAK_START: "9"
PIPELINE_PEAK_END: "18"

View File

@@ -1,14 +0,0 @@
0 6 * * * /bin/bash /root/wizards/scripts/model_download_guard.sh >> /var/log/model_guard.log 2>&1
# Allegro Hybrid Heartbeat — quick wins every 15 min
*/15 * * * * /usr/bin/python3 /root/allegro/heartbeat_daemon.py >> /var/log/allegro_heartbeat.log 2>&1
# Allegro Burn Mode Cron Jobs - Deployed via issue #894
0 6 * * * cd /root/.hermes && python3 -c "import hermes_agent; from hermes_tools import terminal; output = terminal('echo \"Morning Report: $(date)\"'); print(output.get('output', ''))" >> /root/.hermes/logs/morning-report-$(date +\%Y\%m\%d).log 2>&1 # Allegro Morning Report at 0600
0,30 * * * * cd /root/.hermes && python3 /root/.hermes/retry_wrapper.py "python3 allegro/quick-lane-check.py" >> burn-logs/quick-lane-$(date +\%Y\%m\%d).log 2>&1 # Allegro Burn Loop #1 (with retry)
15,45 * * * * cd /root/.hermes && python3 /root/.hermes/retry_wrapper.py "python3 allegro/burn-mode-validator.py" >> burn-logs/validator-$(date +\%Y\%m\%d).log 2>&1 # Allegro Burn Loop #2 (with retry)
*/2 * * * * /root/wizards/bezalel/dead_man_monitor.sh
*/2 * * * * /root/wizards/allegro/bin/config-deadman.sh

View File

@@ -1,10 +0,0 @@
0 2 * * * /root/wizards/bezalel/run_nightly_watch.sh
0 3 * * * /root/wizards/bezalel/mempalace_nightly.sh
*/10 * * * * pgrep -f "act_runner daemon" > /dev/null || (cd /opt/gitea-runner && nohup ./act_runner daemon > /var/log/gitea-runner.log 2>&1 &)
30 3 * * * /root/wizards/bezalel/backup_databases.sh
*/15 * * * * /root/wizards/bezalel/meta_heartbeat.sh
0 4 * * * /root/wizards/bezalel/secret_guard.sh
0 4 * * * /usr/bin/env bash /root/timmy-home/scripts/backup_pipeline.sh >> /var/log/timmy/backup_pipeline_cron.log 2>&1
0 6 * * * /usr/bin/python3 /root/wizards/bezalel/ultraplan.py >> /var/log/bezalel-ultraplan.log 2>&1
@reboot /root/wizards/bezalel/emacs-daemon-start.sh
@reboot /root/wizards/bezalel/ngircd-start.sh

View File

@@ -1,13 +0,0 @@
# Burn Mode Cycles — 15 min autonomous loops
*/15 * * * * /root/wizards/ezra/bin/burn-mode.sh >> /root/wizards/ezra/reports/burn-cron.log 2>&1
# Household Snapshots — automated heartbeats and snapshots
# Ezra Self-Improvement Automation Suite
*/5 * * * * /usr/bin/python3 /root/wizards/ezra/tools/gitea_monitor.py >> /root/wizards/ezra/reports/gitea-monitor.log 2>&1
*/5 * * * * /usr/bin/python3 /root/wizards/ezra/tools/awareness_loop.py >> /root/wizards/ezra/reports/awareness-loop.log 2>&1
*/10 * * * * /usr/bin/python3 /root/wizards/ezra/tools/cron_health_monitor.py >> /root/wizards/ezra/reports/cron-health.log 2>&1
0 6 * * * /usr/bin/python3 /root/wizards/ezra/tools/morning_kt_compiler.py >> /root/wizards/ezra/reports/morning-kt.log 2>&1
5 6 * * * /usr/bin/python3 /root/wizards/ezra/tools/burndown_generator.py >> /root/wizards/ezra/reports/burndown.log 2>&1
0 3 * * * /root/wizards/ezra/mempalace_nightly.sh >> /var/log/ezra_mempalace_cron.log 2>&1
*/15 * * * * GITEA_TOKEN=6de6aa...1117 /root/wizards/ezra/dispatch-direct.sh >> /root/wizards/ezra/dispatch-cron.log 2>&1

View File

@@ -1,24 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>ai.timmy.auto-commit-guard</string>
<key>ProgramArguments</key>
<array>
<string>/bin/bash</string>
<string>/Users/apayne/.hermes/bin/auto-commit-guard.sh</string>
<string>120</string>
</array>
<key>RunAtLoad</key>
<true/>
<key>KeepAlive</key>
<true/>
<key>StandardOutPath</key>
<string>/Users/apayne/.hermes/logs/auto-commit-guard.stdout.log</string>
<key>StandardErrorPath</key>
<string>/Users/apayne/.hermes/logs/auto-commit-guard.stderr.log</string>
<key>WorkingDirectory</key>
<string>/Users/apayne</string>
</dict>
</plist>

View File

@@ -1,21 +0,0 @@
# Gitea Accessibility Fix - R4: Time Elements
WCAG 1.3.1: Relative timestamps lack machine-readable fallbacks.
## Fix
Wrap relative timestamps in `<time datetime="...">` elements.
## Files
- `custom/templates/custom/time_relative.tmpl` - Reusable `<time>` helper
- `custom/templates/repo/list_a11y.tmpl` - Explore/Repos list override
## Deploy
```bash
cp -r custom/templates/* /path/to/gitea/custom/templates/
systemctl restart gitea
```
Closes #554

View File

@@ -1,27 +0,0 @@
{{/*
Gitea a11y fix: R4 <time> elements for relative timestamps
Deploy to: custom/templates/custom/time_relative.tmpl
*/}}
{{define "custom/time_relative"}}
{{if and .Time .Relative}}
<time datetime="{{.Time.Format "2006-01-02T15:04:05Z07:00"}}" title="{{.Time.Format "Jan 02, 2006 15:04"}}">
{{.Relative}}
</time>
{{else if .Relative}}
<span>{{.Relative}}</span>
{{end}}
{{end}}
{{define "custom/time_from_unix"}}
{{if .Relative}}
<time datetime="" data-unix="{{.Unix}}" title="">{{.Relative}}</time>
<script>
(function() {
var el = document.currentScript.previousElementSibling;
var unix = parseInt(el.getAttribute('data-unix'));
if (unix) { el.setAttribute('datetime', new Date(unix * 1000).toISOString()); el.setAttribute('title', new Date(unix * 1000).toLocaleString()); }
})();
</script>
{{end}}
{{end}}

View File

@@ -1,27 +0,0 @@
{{/*
Gitea a11y fix: R4 <time> elements for relative timestamps on repo list
Deploy to: custom/templates/repo/list_a11y.tmpl
*/}}
{{/* Star count link with aria-label */}}
<a class="repo-card-star" href="{{.RepoLink}}/stars" aria-label="{{.NumStars}} stars" title="{{.NumStars}} stars">
<svg class="octicon octicon-star" viewBox="0 0 16 16" width="16" height="16" aria-hidden="true">
<path d="M8 .25a.75.75 0 01.673.418l1.882 3.815 4.21.612a.75.75 0 01.416 1.279l-3.046 2.97.719 4.192a.75.75 0 01-1.088.791L8 12.347l-3.766 1.98a.75.75 0 01-1.088-.79l.72-4.194L.818 6.374a.75.75 0 01.416-1.28l4.21-.611L7.327.668A.75.75 0 018 .25z"/>
</svg>
<span>{{.NumStars}}</span>
</a>
{{/* Fork count link with aria-label */}}
<a class="repo-card-fork" href="{{.RepoLink}}/forks" aria-label="{{.NumForks}} forks" title="{{.NumForks}} forks">
<svg class="octicon octicon-repo-forked" viewBox="0 0 16 16" width="16" height="16" aria-hidden="true">
<path d="M5 5.372v.878c0 .414.336.75.75.75h4.5a.75.75 0 00.75-.75v-.878a2.25 2.25 0 111.5 0v.878a2.25 2.25 0 01-2.25 2.25h-1.5v2.128a2.251 2.251 0 11-1.5 0V8.5h-1.5A2.25 2.25 0 013.5 6.25v-.878a2.25 2.25 0 111.5 0zM5 3.25a.75.75 0 10-1.5 0 .75.75 0 001.5 0zm6.75.75a.75.75 0 100-1.5.75.75 0 000 1.5zm-3 8.75a.75.75 0 10-1.5 0 .75.75 0 001.5 0z"/>
</svg>
<span>{{.NumForks}}</span>
</a>
{{/* Relative timestamp with <time> element for a11y */}}
{{if .UpdatedUnix}}
<time datetime="{{.UpdatedUnix | TimeSinceISO}}" title="{{.UpdatedUnix | DateFmtLong}}" class="text-light">
{{.UpdatedUnix | TimeSince}}
</time>
{{end}}

View File

@@ -1,110 +0,0 @@
# Fleet Behaviour Hardening — Review & Action Plan
**Author:** @perplexity
**Date:** 2026-04-08
**Context:** Alexander asked: "Is it the memory system or the behaviour guardrails?"
**Answer:** It's the guardrails. The memory system is adequate. The enforcement machinery is aspirational.
---
## Diagnosis: Why the Fleet Isn't Smart Enough
After auditing SOUL.md, config.yaml, all 8 playbooks, the orchestrator, the guard scripts, and the v7.0.0 checkin, the pattern is clear:
**The fleet has excellent design documents and broken enforcement.**
| Layer | Design Quality | Enforcement Quality | Gap |
|---|---|---|---|
| SOUL.md | Excellent | None — no code reads it at runtime | Philosophy without machinery |
| Playbooks (7 yaml) | Good lane map | Not invoked by orchestrator | Playbooks exist but nobody calls them |
| Guard scripts (9) | Solid code | 1 of 9 wired (#395 audit) | 89% of guards are dead code |
| Orchestrator | Sound design | Gateway dispatch is a no-op (#391) | Assigns issues but doesn't trigger work |
| Cycle Guard | Good 10-min rule | No cron/loop calls it | Discipline without enforcement |
| PR Reviewer | Clear rules | Runs every 30m (if scheduled) | Only guard that might actually fire |
| Memory (MemPalace) | Working code | Retrieval enforcer wired | Actually operational |
### The Core Problem
Agents pick up issues and produce output, but there is **no pre-task checklist** and **no post-task quality gate**. An agent can:
1. Start work without checking if someone else already did it
2. Produce output without running tests
3. Submit a PR without verifying it addresses the issue
4. Work for hours on something out of scope
5. Create duplicate branches/PRs without detection
The SOUL.md says "grounding before generation" but no code enforces it.
The playbooks define lanes but the orchestrator doesn't load them.
The guards exist but nothing calls them.
---
## What the Fleet Needs (Priority Order)
### 1. Pre-Task Gate (MISSING — this PR adds it)
Before an agent starts any issue:
- [ ] Check if issue is already assigned to another agent
- [ ] Check if a branch already exists for this issue
- [ ] Check if a PR already exists for this issue
- [ ] Load relevant MemPalace context (retrieval enforcer)
- [ ] Verify the agent has the right lane for this work (playbook check)
### 2. Post-Task Gate (MISSING — this PR adds it)
Before an agent submits a PR:
- [ ] Verify the diff addresses the issue title/body
- [ ] Run syntax_guard.py on changed files
- [ ] Check for duplicate PRs targeting the same issue
- [ ] Verify branch name follows convention
- [ ] Run tests if they exist for changed files
### 3. Wire the Existing Guards (8 of 9 are dead code)
Per #395 audit:
- Pre-commit hooks: need symlink on every machine
- Cycle guard: need cron/loop integration
- Forge health check: need cron entry
- Smoke test + deploy validate: need deploy script integration
### 4. Orchestrator Dispatch Actually Works
Per #391 audit: the orchestrator scores and assigns but the gateway dispatch just writes to `/tmp/hermes-dispatch.log`. Nobody reads that file. The dispatch needs to either:
- Trigger `hermes` CLI on the target machine, or
- Post a webhook that the agent loop picks up
### 5. Agent Self-Assessment Loop
After completing work, agents should answer:
- Did I address the issue as stated?
- Did I stay in scope?
- Did I check the palace for prior work?
- Did I run verification?
This is what SOUL.md calls "the apparatus that gives these words teeth."
---
## What's Working (Don't Touch)
- **MemPalace sovereign_store.py** — SQLite + FTS5 + HRR, operational
- **Retrieval enforcer** — wired to SovereignStore as of 14 hours ago
- **Wake-up protocol** — palace-first boot sequence
- **PR reviewer playbook** — clear rules, well-scoped
- **Issue triager playbook** — comprehensive lane map with 11 agents
- **Cycle guard code** — solid 10-min slice discipline (just needs wiring)
- **Config drift guard** — active cron, working
- **Dead man switch** — active, working
---
## Recommendation
The memory system is not the bottleneck. The behaviour guardrails are. Specifically:
1. **Add `task_gate.py`** — pre-task and post-task quality gates that every agent loop calls
2. **Wire cycle_guard.py** — add start/complete calls to agent loop
3. **Wire pre-commit hooks** — deploy script should symlink on provision
4. **Fix orchestrator dispatch** — make it actually trigger work, not just log
This PR adds item 1. Items 2-4 need SSH access and are flagged for Timmy/Allegro.

View File

@@ -1,141 +0,0 @@
# Memory Architecture
> How Timmy remembers, recalls, and learns — without hallucinating.
Refs: Epic #367 | Sub-issues #368, #369, #370, #371, #372
## Overview
Timmy's memory system uses a **Memory Palace** architecture — a structured, file-backed knowledge store organized into rooms and drawers. When faced with a recall question, the agent checks its palace *before* generating from scratch.
This document defines the retrieval order, storage layers, and data flow that make this work.
## Retrieval Order (L0L5)
When the agent receives a prompt that looks like a recall question ("what did we do?", "what's the status of X?"), the retrieval enforcer intercepts it and walks through layers in order:
| Layer | Source | Question Answered | Short-circuits? |
|-------|--------|-------------------|------------------|
| L0 | `identity.txt` | Who am I? What are my mandates? | No (always loaded) |
| L1 | Palace rooms/drawers | What do I know about this topic? | Yes, if hit |
| L2 | Session scratchpad | What have I learned this session? | Yes, if hit |
| L3 | Artifact retrieval (Gitea API) | Can I fetch the actual issue/file/log? | Yes, if hit |
| L4 | Procedures/playbooks | Is there a documented way to do this? | Yes, if hit |
| L5 | Free generation | (Only when L0L4 are exhausted) | N/A |
**Key principle:** The agent never reaches L5 (free generation) if any prior layer has relevant data. This eliminates hallucination for recall-style queries.
## Storage Layout
```
~/.mempalace/
identity.txt # L0: Who I am, mandates, personality
rooms/
projects/
timmy-config.md # What I know about timmy-config
hermes-agent.md # What I know about hermes-agent
people/
alexander.md # Working relationship context
architecture/
fleet.md # Fleet system knowledge
mempalace.md # Self-knowledge about this system
config/
mempalace.yaml # Palace configuration
~/.hermes/
scratchpad/
{session_id}.json # L2: Ephemeral session context
```
## Components
### 1. Memory Palace Skill (`mempalace.py`) — #368
Core data structures:
- `PalaceRoom`: A named collection of drawers (topics)
- `Mempalace`: The top-level palace with room management
- Factory constructors: `for_issue_analysis()`, `for_health_check()`, `for_code_review()`
### 2. Retrieval Enforcer (`retrieval_enforcer.py`) — #369
Middleware that intercepts recall-style prompts:
1. Detects recall patterns ("what did", "status of", "last time we")
2. Walks L0→L4 in order, short-circuiting on first hit
3. Only allows free generation (L5) when all layers return empty
4. Produces an honest fallback: "I don't have this in my memory palace."
### 3. Session Scratchpad (`scratchpad.py`) — #370
Ephemeral, session-scoped working memory:
- Write-append only during a session
- Entries have TTL (default: 1 hour)
- Queried at L2 in retrieval chain
- Never auto-promoted to palace
### 4. Memory Promotion — #371
Explicit promotion from scratchpad to palace:
- Agent must call `promote_to_palace()` with a reason
- Dedup check against target drawer
- Summary required (raw tool output never stored)
- Conflict detection when new memory contradicts existing
### 5. Wake-Up Protocol (`wakeup.py`) — #372
Boot sequence for new sessions:
```
Session Start
├─ L0: Load identity.txt
├─ L1: Scan palace rooms for active context
├─ L1.5: Surface promoted memories from last session
├─ L2: Load surviving scratchpad entries
└─ Ready: agent knows who it is, what it was doing, what it learned
```
## Data Flow
```
┌──────────────────┐
│ User Prompt │
└────────┬─────────┘
┌────────┴─────────┐
│ Recall Detector │
└────┬───────┬─────┘
│ │
[recall] [not recall]
│ │
┌───────┴────┐ ┌──┬─┴───────┐
│ Retrieval │ │ Normal Flow │
│ Enforcer │ └─────────────┘
│ L0→L1→L2 │
│ →L3→L4→L5│
└──────┬─────┘
┌──────┴─────┐
│ Response │
│ (grounded) │
└────────────┘
```
## Anti-Patterns
| Don't | Do Instead |
|-------|------------|
| Generate from vibes when palace has data | Check palace first (L1) |
| Auto-promote everything to palace | Require explicit `promote_to_palace()` with reason |
| Store raw API responses as memories | Summarize before storing |
| Hallucinate when palace is empty | Say "I don't have this in my memory palace" |
| Dump entire palace on wake-up | Selective loading based on session context |
## Status
| Component | Issue | PR | Status |
|-----------|-------|----|--------|
| Skill port | #368 | #374 | In Review |
| Retrieval enforcer | #369 | #374 | In Review |
| Session scratchpad | #370 | #374 | In Review |
| Memory promotion | #371 | — | Open |
| Wake-up protocol | #372 | #374 | In Review |

View File

@@ -1,150 +0,0 @@
# Visual Accessibility Audit — Foundation Web Properties
**Issue:** timmy-config #492
**Date:** 2026-04-13
**Label:** gemma-4-multimodal
**Scope:** forge.alexanderwhitestone.com (Gitea 1.25.4)
## Executive Summary
The Foundation's primary accessible web property is the Gitea forge. The Matrix homeserver (matrix.timmy.foundation) is currently unreachable (DNS/SSL issues). This audit covers the forge across three page types: Homepage, Login, and Explore/Repositories.
**Overall: 6 WCAG 2.1 AA violations found, 4 best-practice recommendations.**
---
## Pages Audited
| Page | URL | Status |
|------|-----|--------|
| Homepage | forge.alexanderwhitestone.com | Live |
| Sign In | forge.alexanderwhitestone.com/user/login | Live |
| Explore Repos | forge.alexanderwhitestone.com/explore/repos | Live |
| Matrix/Element | matrix.timmy.foundation | DOWN (DNS/SSL) |
---
## Findings
### P1 — Violations (WCAG 2.1 AA)
#### V1: No Skip Navigation Link (2.4.1)
- **Pages:** All
- **Severity:** Medium
- **Description:** No "Skip to content" link exists. Keyboard users must tab through the full navigation on every page load.
- **Evidence:** Programmatic check returned `skipNav: false`
- **Fix:** Add `<a href="#main" class="skip-link">Skip to content</a>` visually hidden until focused.
#### V2: 25 Form Inputs Without Labels (1.3.1, 3.3.2)
- **Pages:** Explore/Repositories (filter dropdowns)
- **Severity:** High
- **Description:** The search input and all radio buttons in the Filter/Sort dropdowns lack programmatic label associations.
- **Evidence:** Programmatic check found 25 inputs without `label[for=]`, `aria-label`, or `aria-labelledby`
- **Affected inputs:** `q` (search), `archived` (x2), `fork` (x2), `mirror` (x2), `template` (x2), `private` (x2), `sort` (x12), `clear-filter` (x1)
- **Fix:** Add `aria-label="Search repositories"` to search input. Add `aria-label` to each radio button group and individual options.
#### V3: Low-Contrast Footer Text (1.4.3)
- **Pages:** All
- **Severity:** Medium
- **Description:** Footer text (version, page render time) appears light gray on white, likely failing the 4.5:1 contrast ratio.
- **Evidence:** 30 elements flagged as potential low-contrast suspects.
- **Fix:** Darken footer text to at least `#767676` on white (4.54:1 ratio).
#### V4: Green Link Color Fails Contrast (1.4.3)
- **Pages:** Homepage
- **Severity:** Medium
- **Description:** Inline links use medium-green (~#609926) on white. This shade typically fails 4.5:1 for normal body text.
- **Evidence:** Visual analysis identified green links ("run the binary", "Docker", "contributing") as potentially failing.
- **Fix:** Darken link color to at least `#507020` or add an underline for non-color differentiation (SC 1.4.1).
#### V5: Missing Header/Banner Landmark (1.3.1)
- **Pages:** All
- **Severity:** Low
- **Description:** No `<header>` or `role="banner"` element found. The navigation bar is a `<nav>` but not wrapped in a banner landmark.
- **Evidence:** `landmarks.banner: 0`
- **Fix:** Wrap the top navigation in `<header>` or add `role="banner"`.
#### V6: Heading Hierarchy Issue (1.3.1)
- **Pages:** Login
- **Severity:** Low
- **Description:** The Sign In heading is `<h4>` rather than `<h1>`, breaking the heading hierarchy. The page has no `<h1>`.
- **Evidence:** Accessibility tree shows `heading "Sign In" [level=4]`
- **Fix:** Use `<h1>` for "Sign In" on the login page.
---
### P2 — Best Practice Recommendations
#### R1: Add Password Visibility Toggle
- **Page:** Login
- **Description:** No show/hide toggle on the password field. This helps users with cognitive or motor impairments verify input.
#### R2: Add `aria-required` to Required Fields
- **Page:** Login
- **Evidence:** `inputsWithAriaRequired: 0` (no inputs marked as required)
- **Description:** The username field shows a red asterisk but has no `required` or `aria-required="true"` attribute.
#### R3: Improve Star/Fork Link Labels
- **Page:** Explore Repos
- **Description:** Star and fork counts are bare numbers (e.g., "0", "2"). Screen readers announce these without context.
- **Fix:** Add `aria-label="2 stars"` / `aria-label="0 forks"` to count links.
#### R4: Use `<time>` Elements for Timestamps
- **Page:** Explore Repos
- **Description:** Relative timestamps ("2 minutes ago") are human-readable but lack machine-readable fallbacks.
- **Fix:** Wrap in `<time datetime="2026-04-13T17:00:00Z">2 minutes ago</time>`.
---
## What's Working Well
- **Color contrast (primary):** Black text on white backgrounds — excellent 21:1 ratio.
- **Heading structure (homepage):** Clean h1 > h2 > h3 hierarchy.
- **Landmark regions:** `<main>` and `<nav>` landmarks present.
- **Language attribute:** `lang="en-US"` set on `<html>`.
- **Link text:** Descriptive — no "click here" or "read more" patterns found.
- **Form layout:** Login form uses clean single-column with good spacing.
- **Submit button:** Full-width, good contrast, large touch target.
- **Navigation:** Simple, consistent across pages.
---
## Out of Scope
- **matrix.timmy.foundation:** Unreachable (DNS resolution failure / SSL cert mismatch). Should be re-audited when operational.
- **Evennia web client (localhost:4001):** Local-only, not publicly accessible.
- **WCAG AAA criteria:** This audit covers AA only.
---
## Remediation Priority
| Priority | Issue | Effort |
|----------|-------|--------|
| P1 | V2: 25 unlabeled inputs | Medium |
| P1 | V1: Skip nav link | Small |
| P1 | V4: Green link contrast | Small |
| P1 | V3: Footer text contrast | Small |
| P2 | V6: Heading hierarchy | Small |
| P2 | V5: Banner landmark | Small |
| P2 | R1-R4: Best practices | Small |
---
## Automated Check Results
```
skipNav: false
headings: h1(3), h4(1)
imgsNoAlt: 0 / 1
inputsNoLabel: 25
genericLinks: 0
lowContrastSuspects: 30
inputsWithAriaRequired: 0
landmarks: main=1, nav=2, banner=0, contentinfo=2
hasLang: true (en-US)
```
---
*Generated via visual + programmatic analysis of forge.alexanderwhitestone.com*

View File

@@ -3,7 +3,7 @@
Purpose:
- stand up the third wizard house as a Kimi-backed coding worker
- keep Hermes as the durable harness
- Hermes is the durable harness — no intermediary gateway layers
- treat OpenClaw as optional shell frontage, not the bones
Local proof already achieved:
@@ -40,5 +40,5 @@ bin/deploy-allegro-house.sh root@167.99.126.228
Important nuance:
- the Hermes/Kimi lane is the proven path
- direct embedded Kimi model routing was not yet reliable locally
- direct embedded OpenClaw Kimi model routing was not yet reliable locally
- so the remote deployment keeps the minimal, proven architecture: Hermes house first

View File

@@ -81,6 +81,17 @@ launchctl bootstrap gui/$(id -u) ~/Library/LaunchAgents/ai.hermes.gateway.plist
- Old-state risk:
- same class as main gateway, but isolated to fenrir profile state
#### 3. ai.openclaw.gateway
- Plist: ~/Library/LaunchAgents/ai.openclaw.gateway.plist
- Command: `node .../openclaw/dist/index.js gateway --port 18789`
- Logs:
- `~/.openclaw/logs/gateway.log`
- `~/.openclaw/logs/gateway.err.log`
- KeepAlive: yes
- RunAtLoad: yes
- Old-state risk:
- long-lived gateway survives toolchain assumptions and keeps accepting work even if upstream routing changed
#### 4. ai.timmy.kimi-heartbeat
- Plist: ~/Library/LaunchAgents/ai.timmy.kimi-heartbeat.plist
- Command: `/bin/bash ~/.timmy/uniwizard/kimi-heartbeat.sh`
@@ -284,7 +295,7 @@ launchctl list | egrep 'timmy|kimi|claude|max|dashboard|matrix|gateway|huey'
List Timmy/Hermes launch agent files:
```bash
find ~/Library/LaunchAgents -maxdepth 1 -name '*.plist' | egrep 'timmy|hermes|tower'
find ~/Library/LaunchAgents -maxdepth 1 -name '*.plist' | egrep 'timmy|hermes|openclaw|tower'
```
List running loop scripts:
@@ -305,6 +316,7 @@ launchctl bootout gui/$(id -u) ~/Library/LaunchAgents/ai.timmy.kimi-heartbeat.pl
launchctl bootout gui/$(id -u) ~/Library/LaunchAgents/ai.timmy.claudemax-watchdog.plist || true
launchctl bootout gui/$(id -u) ~/Library/LaunchAgents/ai.hermes.gateway.plist || true
launchctl bootout gui/$(id -u) ~/Library/LaunchAgents/ai.hermes.gateway-fenrir.plist || true
launchctl bootout gui/$(id -u) ~/Library/LaunchAgents/ai.openclaw.gateway.plist || true
```
2. Kill manual loops

View File

@@ -1,179 +0,0 @@
# 3D World Glitch Detection — Matrix Scanner
**Reference:** timmy-config#491
**Label:** gemma-4-multimodal
**Version:** 0.1.0
## Overview
The Matrix Glitch Detector scans 3D web worlds for visual artifacts and
rendering anomalies. It uses browser automation to capture screenshots from
multiple camera angles, then sends them to a vision AI model for analysis
against a library of known glitch patterns.
## Detected Glitch Categories
| Category | Severity | Description |
|---|---|---|
| Floating Assets | HIGH | Objects not grounded — hovering above surfaces |
| Z-Fighting | MEDIUM | Coplanar surfaces flickering/competing for depth |
| Missing Textures | CRITICAL | Placeholder colors (magenta, checkerboard) |
| Clipping | HIGH | Geometry passing through other objects |
| Broken Normals | MEDIUM | Inside-out or incorrectly lit surfaces |
| Shadow Artifacts | LOW | Detached, mismatched, or acne shadows |
| LOD Popping | LOW | Abrupt level-of-detail transitions |
| Lightmap Errors | MEDIUM | Dark splotches, light leaks, baking failures |
| Water/Reflection | MEDIUM | Incorrect environment reflections |
| Skybox Seam | LOW | Visible seams at cubemap face edges |
## Installation
No external dependencies required — pure Python 3.10+.
```bash
# Clone the repo
git clone https://forge.alexanderwhitestone.com/Timmy_Foundation/timmy-config.git
cd timmy-config
```
## Usage
### Basic Scan
```bash
python bin/matrix_glitch_detector.py https://matrix.example.com/world/alpha
```
### Multi-Angle Scan
```bash
python bin/matrix_glitch_detector.py https://matrix.example.com/world/alpha \
--angles 8 \
--output glitch_report.json
```
### Demo Mode
```bash
python bin/matrix_glitch_detector.py --demo
```
### Options
| Flag | Default | Description |
|---|---|---|
| `url` | (required) | URL of the 3D world to scan |
| `--angles N` | 4 | Number of camera angles to capture |
| `--output PATH` | stdout | Output file for JSON report |
| `--min-severity` | info | Minimum severity: info/low/medium/high/critical |
| `--demo` | off | Run with simulated detections |
| `--verbose` | off | Enable verbose output |
## Report Format
The JSON report includes:
```json
{
"scan_id": "uuid",
"url": "https://...",
"timestamp": "ISO-8601",
"total_screenshots": 4,
"angles_captured": ["front", "right", "back", "left"],
"glitches": [
{
"id": "short-uuid",
"category": "floating_assets",
"name": "Floating Chair",
"description": "Office chair floating 0.3m above floor",
"severity": "high",
"confidence": 0.87,
"location_x": 35.2,
"location_y": 62.1,
"screenshot_index": 0,
"screenshot_angle": "front",
"timestamp": "ISO-8601"
}
],
"summary": {
"total_glitches": 4,
"by_severity": {"critical": 1, "high": 2, "medium": 1},
"by_category": {"floating_assets": 1, "missing_textures": 1, ...},
"highest_severity": "critical",
"clean_screenshots": 0
},
"metadata": {
"detector_version": "0.1.0",
"pattern_count": 10,
"reference": "timmy-config#491"
}
}
```
## Vision AI Integration
The detector supports any OpenAI-compatible vision API. Set these
environment variables:
```bash
export VISION_API_KEY="your-api-key"
export VISION_API_BASE="https://api.openai.com/v1" # optional
export VISION_MODEL="gpt-4o" # optional, default: gpt-4o
```
For browser-based capture with `browser_vision`:
```bash
export BROWSER_VISION_SCRIPT="/path/to/browser_vision.py"
```
## Glitch Patterns
Pattern definitions live in `bin/glitch_patterns.py`. Each pattern includes:
- **category** — Enum matching the glitch type
- **detection_prompts** — Instructions for the vision model
- **visual_indicators** — What to look for in screenshots
- **confidence_threshold** — Minimum confidence to report
### Adding Custom Patterns
```python
from glitch_patterns import GlitchPattern, GlitchCategory, GlitchSeverity
custom = GlitchPattern(
category=GlitchCategory.FLOATING_ASSETS,
name="Custom Glitch",
description="Your description",
severity=GlitchSeverity.MEDIUM,
detection_prompts=["Look for..."],
visual_indicators=["indicator 1", "indicator 2"],
)
```
## Testing
```bash
python -m pytest tests/test_glitch_detector.py -v
# or
python tests/test_glitch_detector.py
```
## Architecture
```
bin/
matrix_glitch_detector.py — Main CLI entry point
glitch_patterns.py — Pattern definitions and prompt builder
tests/
test_glitch_detector.py — Unit and integration tests
docs/
glitch-detection.md — This documentation
```
## Limitations
- Browser automation requires a headless browser environment
- Vision AI analysis depends on model availability and API limits
- Placeholder screenshots are generated when browser capture is unavailable
- Detection accuracy varies by scene complexity and lighting conditions

View File

@@ -1,68 +0,0 @@
# Overnight R&D Automation
**Schedule**: Nightly at 10 PM EDT (02:00 UTC)
**Duration**: ~2-4 hours (self-limiting, finishes before 6 AM morning report)
**Cost**: $0 — all local Ollama inference
## Phases
### Phase 1: Deep Dive Intelligence
Runs the `intelligence/deepdive/pipeline.py` from the-nexus:
- Aggregates arXiv CS.AI, CS.CL, CS.LG RSS feeds (last 24h)
- Fetches OpenAI, Anthropic, DeepMind blog updates
- Filters for relevance using sentence-transformers embeddings
- Synthesizes a briefing using local Gemma 4 12B
- Saves briefing to `~/briefings/`
### Phase 2: Tightening Loop
Exercises Timmy's local tool-use capability:
- 10 tasks × 3 cycles = 30 task attempts per night
- File reading, writing, searching against real workspace files
- Each result logged as JSONL for training data analysis
- Tests sovereignty compliance (SOUL.md alignment, banned provider detection)
### Phase 3: DPO Export
Sweeps overnight Hermes sessions for training pair extraction:
- Converts good conversation pairs into DPO training format
- Saves to `~/.timmy/training-data/dpo-pairs/`
### Phase 4: Morning Prep
Compiles overnight findings into `~/.timmy/overnight-rd/latest_summary.md`
for consumption by the 6 AM `good_morning_report` task.
## Approved Providers
| Slot | Provider | Model |
|------|----------|-------|
| Synthesis | Ollama | gemma4:12b |
| Tool tasks | Ollama | hermes4:14b |
| Fallback | Ollama | gemma4:12b |
Anthropic is permanently banned (BANNED_PROVIDERS.yml, 2026-04-09).
## Outputs
| Path | Content |
|------|---------|
| `~/.timmy/overnight-rd/{run_id}/rd_log.jsonl` | Full task log |
| `~/.timmy/overnight-rd/{run_id}/rd_summary.md` | Run summary |
| `~/.timmy/overnight-rd/latest_summary.md` | Latest summary (for morning report) |
| `~/briefings/briefing_*.json` | Deep Dive briefings |
## Monitoring
Check the Huey consumer log:
```bash
tail -f ~/.timmy/timmy-config/logs/huey.log | grep overnight
```
Check the latest run summary:
```bash
cat ~/.timmy/overnight-rd/latest_summary.md
```
## Dependencies
- Deep Dive pipeline installed: `cd the-nexus/intelligence/deepdive && make install`
- Ollama running with gemma4:12b and hermes4:14b models
- Huey consumer running: `huey_consumer.py tasks.huey -w 2 -k thread`

View File

@@ -14,7 +14,7 @@ from crewai.tools import BaseTool
OPENROUTER_API_KEY = os.getenv(
"OPENROUTER_API_KEY",
os.environ.get("OPENROUTER_API_KEY", ""),
"dsk-or-v1-f60c89db12040267458165cf192e815e339eb70548e4a0a461f5f0f69e6ef8b0",
)
llm = LLM(

View File

@@ -2,128 +2,135 @@ schema_version: 1
status: proposed
runtime_wiring: false
owner: timmy-config
ownership:
owns:
- routing doctrine for task classes
- sidecar-readable per-agent fallback portfolios
- degraded-mode capability floors
- routing doctrine for task classes
- sidecar-readable per-agent fallback portfolios
- degraded-mode capability floors
does_not_own:
- live queue state outside Gitea truth
- launchd or loop process state
- ad hoc worktree history
- live queue state outside Gitea truth
- launchd or loop process state
- ad hoc worktree history
policy:
require_four_slots_for_critical_agents: true
terminal_fallback_must_be_usable: true
forbid_synchronized_fleet_degradation: true
forbid_human_token_fallbacks: true
anti_correlation_rule: no two critical agents may share the same primary+fallback1 pair
sensitive_control_surfaces:
- SOUL.md
- config.yaml
- deploy.sh
- tasks.py
- playbooks/
- cron/
- memories/
- skins/
- training/
- SOUL.md
- config.yaml
- deploy.sh
- tasks.py
- playbooks/
- cron/
- memories/
- skins/
- training/
role_classes:
judgment:
current_surfaces:
- playbooks/issue-triager.yaml
- playbooks/pr-reviewer.yaml
- playbooks/verified-logic.yaml
- playbooks/issue-triager.yaml
- playbooks/pr-reviewer.yaml
- playbooks/verified-logic.yaml
task_classes:
- issue-triage
- queue-routing
- pr-review
- proof-check
- governance-review
- issue-triage
- queue-routing
- pr-review
- proof-check
- governance-review
degraded_mode:
fallback2:
allowed:
- classify backlog
- summarize risk
- produce draft routing plans
- leave bounded labels or comments with evidence
- classify backlog
- summarize risk
- produce draft routing plans
- leave bounded labels or comments with evidence
denied:
- merge pull requests
- close or rewrite governing issues or PRs
- mutate sensitive control surfaces
- bulk-reassign the fleet
- silently change routing policy
- merge pull requests
- close or rewrite governing issues or PRs
- mutate sensitive control surfaces
- bulk-reassign the fleet
- silently change routing policy
terminal:
lane: report-and-route
allowed:
- classify backlog
- summarize risk
- produce draft routing artifacts
- classify backlog
- summarize risk
- produce draft routing artifacts
denied:
- merge pull requests
- bulk-reassign the fleet
- mutate sensitive control surfaces
- merge pull requests
- bulk-reassign the fleet
- mutate sensitive control surfaces
builder:
current_surfaces:
- playbooks/bug-fixer.yaml
- playbooks/test-writer.yaml
- playbooks/refactor-specialist.yaml
- playbooks/bug-fixer.yaml
- playbooks/test-writer.yaml
- playbooks/refactor-specialist.yaml
task_classes:
- bug-fix
- test-writing
- refactor
- bounded-docs-change
- bug-fix
- test-writing
- refactor
- bounded-docs-change
degraded_mode:
fallback2:
allowed:
- reversible single-issue changes
- narrow docs fixes
- test scaffolds and reproducers
- reversible single-issue changes
- narrow docs fixes
- test scaffolds and reproducers
denied:
- cross-repo changes
- sensitive control-surface edits
- merge or release actions
- cross-repo changes
- sensitive control-surface edits
- merge or release actions
terminal:
lane: narrow-patch
allowed:
- single-issue small patch
- reproducer test
- docs-only repair
- single-issue small patch
- reproducer test
- docs-only repair
denied:
- sensitive control-surface edits
- multi-file architecture work
- irreversible actions
- sensitive control-surface edits
- multi-file architecture work
- irreversible actions
wolf_bulk:
current_surfaces:
- docs/automation-inventory.md
- FALSEWORK.md
- docs/automation-inventory.md
- FALSEWORK.md
task_classes:
- docs-inventory
- log-summarization
- queue-hygiene
- repetitive-small-diff
- research-sweep
- docs-inventory
- log-summarization
- queue-hygiene
- repetitive-small-diff
- research-sweep
degraded_mode:
fallback2:
allowed:
- gather evidence
- refresh inventories
- summarize logs
- propose labels or routes
- gather evidence
- refresh inventories
- summarize logs
- propose labels or routes
denied:
- multi-repo branch fanout
- mass agent assignment
- sensitive control-surface edits
- irreversible queue mutation
- multi-repo branch fanout
- mass agent assignment
- sensitive control-surface edits
- irreversible queue mutation
terminal:
lane: gather-and-summarize
allowed:
- inventory refresh
- evidence bundles
- summaries
- inventory refresh
- evidence bundles
- summaries
denied:
- multi-repo branch fanout
- mass agent assignment
- sensitive control-surface edits
- multi-repo branch fanout
- mass agent assignment
- sensitive control-surface edits
routing:
issue-triage: judgment
queue-routing: judgment
@@ -139,20 +146,22 @@ routing:
queue-hygiene: wolf_bulk
repetitive-small-diff: wolf_bulk
research-sweep: wolf_bulk
promotion_rules:
- If a wolf/bulk task touches a sensitive control surface, promote it to judgment.
- If a builder task expands beyond 5 files, architecture review, or multi-repo coordination, promote it to judgment.
- If a terminal lane cannot produce a usable artifact, the portfolio is invalid and must be redesigned before wiring.
- If a wolf/bulk task touches a sensitive control surface, promote it to judgment.
- If a builder task expands beyond 5 files, architecture review, or multi-repo coordination, promote it to judgment.
- If a terminal lane cannot produce a usable artifact, the portfolio is invalid and must be redesigned before wiring.
agents:
triage-coordinator:
role_class: judgment
critical: true
current_playbooks:
- playbooks/issue-triager.yaml
- playbooks/issue-triager.yaml
portfolio:
primary:
provider: kimi-coding
model: kimi-k2.5
provider: anthropic
model: claude-opus-4-6
lane: full-judgment
fallback1:
provider: openai-codex
@@ -168,18 +177,19 @@ agents:
lane: report-and-route
local_capable: true
usable_output:
- backlog classification
- routing draft
- risk summary
- backlog classification
- routing draft
- risk summary
pr-reviewer:
role_class: judgment
critical: true
current_playbooks:
- playbooks/pr-reviewer.yaml
- playbooks/pr-reviewer.yaml
portfolio:
primary:
provider: kimi-coding
model: kimi-k2.5
provider: anthropic
model: claude-opus-4-6
lane: full-review
fallback1:
provider: gemini
@@ -195,16 +205,17 @@ agents:
lane: low-stakes-diff-summary
local_capable: false
usable_output:
- diff risk summary
- explicit uncertainty notes
- merge-block recommendation
- diff risk summary
- explicit uncertainty notes
- merge-block recommendation
builder-main:
role_class: builder
critical: true
current_playbooks:
- playbooks/bug-fixer.yaml
- playbooks/test-writer.yaml
- playbooks/refactor-specialist.yaml
- playbooks/bug-fixer.yaml
- playbooks/test-writer.yaml
- playbooks/refactor-specialist.yaml
portfolio:
primary:
provider: openai-codex
@@ -225,14 +236,15 @@ agents:
lane: narrow-patch
local_capable: true
usable_output:
- small patch
- reproducer test
- docs repair
- small patch
- reproducer test
- docs repair
wolf-sweeper:
role_class: wolf_bulk
critical: true
current_world_state:
- docs/automation-inventory.md
- docs/automation-inventory.md
portfolio:
primary:
provider: gemini
@@ -252,20 +264,21 @@ agents:
lane: gather-and-summarize
local_capable: true
usable_output:
- inventory refresh
- evidence bundle
- summary comment
- inventory refresh
- evidence bundle
- summary comment
cross_checks:
unique_primary_fallback1_pairs:
triage-coordinator:
- kimi-coding/kimi-k2.5
- openai-codex/codex
- anthropic/claude-opus-4-6
- openai-codex/codex
pr-reviewer:
- kimi-coding/kimi-k2.5
- gemini/gemini-2.5-pro
- anthropic/claude-opus-4-6
- gemini/gemini-2.5-pro
builder-main:
- openai-codex/codex
- kimi-coding/kimi-k2.5
- openai-codex/codex
- kimi-coding/kimi-k2.5
wolf-sweeper:
- gemini/gemini-2.5-flash
- groq/llama-3.3-70b-versatile
- gemini/gemini-2.5-flash
- groq/llama-3.3-70b-versatile

View File

@@ -1,122 +0,0 @@
#!/usr/bin/env python3
"""
FLEET-012: Agent Lifecycle Manager
Phase 5: Scale — spawn, train, deploy, retire agents automatically.
Manages the full lifecycle:
1. PROVISION: Clone template, install deps, configure, test
2. DEPLOY: Add to active rotation, start accepting issues
3. MONITOR: Track performance, quality, heartbeat
4. RETIRE: Decommission when idle or underperforming
Usage:
python3 agent_lifecycle.py provision <name> <vps> [--model model]
python3 agent_lifecycle.py deploy <name>
python3 agent_lifecycle.py retire <name>
python3 agent_lifecycle.py status
python3 agent_lifecycle.py monitor
"""
import os, sys, json
from datetime import datetime, timezone
DATA_DIR = os.path.expanduser("~/.local/timmy/fleet-agents")
DB_FILE = os.path.join(DATA_DIR, "agents.json")
LOG_FILE = os.path.join(DATA_DIR, "lifecycle.log")
def ensure():
os.makedirs(DATA_DIR, exist_ok=True)
def log(msg, level="INFO"):
ts = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
entry = f"[{ts}] [{level}] {msg}"
with open(LOG_FILE, "a") as f: f.write(entry + "\n")
print(f" {entry}")
def load():
if os.path.exists(DB_FILE):
return json.loads(open(DB_FILE).read())
return {}
def save(db):
open(DB_FILE, "w").write(json.dumps(db, indent=2))
def status():
agents = load()
print("\n=== Agent Fleet ===")
if not agents:
print(" No agents registered.")
return
for name, a in agents.items():
state = a.get("state", "?")
vps = a.get("vps", "?")
model = a.get("model", "?")
tasks = a.get("tasks_completed", 0)
hb = a.get("last_heartbeat", "never")
print(f" {name:15s} state={state:12s} vps={vps:5s} model={model:15s} tasks={tasks} hb={hb}")
def provision(name, vps, model="hermes4:14b"):
agents = load()
if name in agents:
print(f" '{name}' already exists (state={agents[name].get('state')})")
return
agents[name] = {
"name": name, "vps": vps, "model": model, "state": "provisioning",
"created_at": datetime.now(timezone.utc).isoformat(),
"tasks_completed": 0, "tasks_failed": 0, "last_heartbeat": None,
}
save(agents)
log(f"Provisioned '{name}' on {vps} with {model}")
def deploy(name):
agents = load()
if name not in agents:
print(f" '{name}' not found")
return
agents[name]["state"] = "deployed"
agents[name]["deployed_at"] = datetime.now(timezone.utc).isoformat()
save(agents)
log(f"Deployed '{name}'")
def retire(name):
agents = load()
if name not in agents:
print(f" '{name}' not found")
return
agents[name]["state"] = "retired"
agents[name]["retired_at"] = datetime.now(timezone.utc).isoformat()
save(agents)
log(f"Retired '{name}'. Completed {agents[name].get('tasks_completed', 0)} tasks.")
def monitor():
agents = load()
now = datetime.now(timezone.utc)
changes = 0
for name, a in agents.items():
if a.get("state") != "deployed": continue
hb = a.get("last_heartbeat")
if hb:
try:
hb_t = datetime.fromisoformat(hb)
hours = (now - hb_t).total_seconds() / 3600
if hours > 24 and a.get("state") == "deployed":
a["state"] = "idle"
a["idle_since"] = now.isoformat()
log(f"'{name}' idle for {hours:.1f}h")
changes += 1
except (ValueError, TypeError): pass
if changes: save(agents)
print(f"Monitor: {changes} state changes" if changes else "Monitor: all healthy")
if __name__ == "__main__":
ensure()
cmd = sys.argv[1] if len(sys.argv) > 1 else "monitor"
if cmd == "status": status()
elif cmd == "provision" and len(sys.argv) >= 4:
model = sys.argv[4] if len(sys.argv) >= 5 else "hermes4:14b"
provision(sys.argv[2], sys.argv[3], model)
elif cmd == "deploy" and len(sys.argv) >= 3: deploy(sys.argv[2])
elif cmd == "retire" and len(sys.argv) >= 3: retire(sys.argv[2])
elif cmd == "monitor": monitor()
elif cmd == "run": monitor()
else: print("Usage: agent_lifecycle.py [provision|deploy|retire|status|monitor]")

View File

@@ -104,6 +104,7 @@ Three primary resources govern the fleet:
| Hermes gateway | 500 MB | Primary gateway |
| Hermes agents (x3) | ~560 MB total | Multiple sessions |
| Ollama | ~20 MB base + model memory | Model loading varies |
| OpenClaw | 350 MB | Gateway process |
| Evennia (server+portal) | 56 MB | Game world |
---
@@ -145,6 +146,7 @@ This means Phase 3+ capabilities (orchestration, load balancing, etc.) are acces
| Gitea | 23/24 | 95.8% | GOOD |
| Hermes Gateway | 23/24 | 95.8% | GOOD |
| Ollama | 24/24 | 100.0% | GOOD |
| OpenClaw | 24/24 | 100.0% | GOOD |
| Evennia | 24/24 | 100.0% | GOOD |
| Hermes Agent | 21/24 | 87.5% | **CHECK** |

View File

@@ -1,122 +0,0 @@
#!/usr/bin/env python3
"""
FLEET-010: Cross-Agent Task Delegation Protocol
Phase 3: Orchestration. Agents create issues, assign to other agents, review PRs.
Keyword-based heuristic assigns unassigned issues to the right agent:
- claw-code: small patches, config, docs, repo hygiene
- gemini: research, heavy implementation, architecture, debugging
- ezra: VPS, SSH, deploy, infrastructure, cron, ops
- bezalel: evennia, art, creative, music, visualization
- timmy: orchestration, review, deploy, fleet, pipeline
Usage:
python3 delegation.py run # Full cycle: scan, assign, report
python3 delegation.py status # Show current delegation state
python3 delegation.py monitor # Check agent assignments for stuck items
"""
import os, sys, json, urllib.request
from datetime import datetime, timezone
from pathlib import Path
GITEA_BASE = "https://forge.alexanderwhitestone.com/api/v1"
TOKEN = Path(os.path.expanduser("~/.config/gitea/token")).read_text().strip()
DATA_DIR = Path(os.path.expanduser("~/.local/timmy/fleet-resources"))
LOG_FILE = DATA_DIR / "delegation.log"
HEADERS = {"Authorization": f"token {TOKEN}"}
AGENTS = {
"claw-code": {"caps": ["patch","config","gitignore","cleanup","format","readme","typo"], "active": True},
"gemini": {"caps": ["research","investigate","benchmark","survey","evaluate","architecture","implementation"], "active": True},
"ezra": {"caps": ["vps","ssh","deploy","cron","resurrect","provision","infra","server"], "active": True},
"bezalel": {"caps": ["evennia","art","creative","music","visual","design","animation"], "active": True},
"timmy": {"caps": ["orchestrate","review","pipeline","fleet","monitor","health","deploy","ci"], "active": True},
}
MONITORED = [
"Timmy_Foundation/timmy-home",
"Timmy_Foundation/timmy-config",
"Timmy_Foundation/the-nexus",
"Timmy_Foundation/hermes-agent",
]
def api(path, method="GET", data=None):
url = f"{GITEA_BASE}{path}"
body = json.dumps(data).encode() if data else None
hdrs = dict(HEADERS)
if data: hdrs["Content-Type"] = "application/json"
req = urllib.request.Request(url, data=body, headers=hdrs, method=method)
try:
resp = urllib.request.urlopen(req, timeout=15)
raw = resp.read().decode()
return json.loads(raw) if raw.strip() else {}
except urllib.error.HTTPError as e:
body = e.read().decode()
print(f" API {e.code}: {body[:150]}")
return None
except Exception as e:
print(f" API error: {e}")
return None
def log(msg):
ts = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
DATA_DIR.mkdir(parents=True, exist_ok=True)
with open(LOG_FILE, "a") as f: f.write(f"[{ts}] {msg}\n")
def suggest_agent(title, body):
text = (title + " " + body).lower()
for agent, info in AGENTS.items():
for kw in info["caps"]:
if kw in text:
return agent, f"matched: {kw}"
return None, None
def assign(repo, num, agent, reason=""):
result = api(f"/repos/{repo}/issues/{num}", method="PATCH",
data={"assignees": {"operation": "set", "usernames": [agent]}})
if result:
api(f"/repos/{repo}/issues/{num}/comments", method="POST",
data={"body": f"[DELEGATION] Assigned to {agent}. {reason}"})
log(f"Assigned {repo}#{num} to {agent}: {reason}")
return result
def run_cycle():
log("--- Delegation cycle start ---")
count = 0
for repo in MONITORED:
issues = api(f"/repos/{repo}/issues?state=open&limit=50")
if not issues: continue
for i in issues:
if i.get("assignees"): continue
title = i.get("title", "")
body = i.get("body", "")
if any(w in title.lower() for w in ["epic", "discussion"]): continue
agent, reason = suggest_agent(title, body)
if agent and AGENTS.get(agent, {}).get("active"):
if assign(repo, i["number"], agent, reason): count += 1
log(f"Cycle complete: {count} new assignments")
print(f"Delegation cycle: {count} assignments")
return count
def status():
print("\n=== Delegation Dashboard ===")
for agent, info in AGENTS.items():
count = 0
for repo in MONITORED:
issues = api(f"/repos/{repo}/issues?state=open&limit=50")
if issues:
for i in issues:
for a in (i.get("assignees") or []):
if a.get("login") == agent: count += 1
icon = "ON" if info["active"] else "OFF"
print(f" {agent:12s}: {count:>3} issues [{icon}]")
if __name__ == "__main__":
cmd = sys.argv[1] if len(sys.argv) > 1 else "run"
DATA_DIR.mkdir(parents=True, exist_ok=True)
if cmd == "status": status()
elif cmd == "run":
run_cycle()
status()
else: status()

View File

@@ -58,6 +58,7 @@ LOCAL_CHECKS = {
"hermes-gateway": "pgrep -f 'hermes gateway' > /dev/null 2>/dev/null",
"hermes-agent": "pgrep -f 'hermes agent\\|hermes session' > /dev/null 2>/dev/null",
"ollama": "pgrep -f 'ollama serve' > /dev/null 2>/dev/null",
"openclaw": "pgrep -f 'openclaw' > /dev/null 2>/dev/null",
"evennia": "pgrep -f 'evennia' > /dev/null 2>/dev/null",
}

View File

@@ -1,126 +0,0 @@
#!/usr/bin/env python3
"""
FLEET-011: Local Model Pipeline and Fallback Chain
Phase 4: Sovereignty — all inference runs locally, no cloud dependency.
Checks Ollama endpoints, verifies model availability, tests fallback chain.
Logs results. The chain runs: hermes4:14b -> qwen2.5:7b -> gemma3:1b -> gemma4 (latest)
Usage:
python3 model_pipeline.py # Run full fallback test
python3 model_pipeline.py status # Show current model status
python3 model_pipeline.py list # List all local models
python3 model_pipeline.py test # Generate test output from each model
"""
import os, sys, json, urllib.request
from datetime import datetime, timezone
from pathlib import Path
OLLAMA_HOST = os.environ.get("OLLAMA_HOST", "localhost:11434")
LOG_DIR = Path(os.path.expanduser("~/.local/timmy/fleet-health"))
CHAIN_FILE = Path(os.path.expanduser("~/.local/timmy/fleet-resources/model-chain.json"))
DEFAULT_CHAIN = [
{"model": "hermes4:14b", "role": "primary"},
{"model": "qwen2.5:7b", "role": "fallback"},
{"model": "phi3:3.8b", "role": "emergency"},
{"model": "gemma3:1b", "role": "minimal"},
]
def log(msg):
LOG_DIR.mkdir(parents=True, exist_ok=True)
with open(LOG_DIR / "model-pipeline.log", "a") as f:
f.write(f"[{datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S')}] {msg}\n")
def check_ollama():
try:
resp = urllib.request.urlopen(f"http://{OLLAMA_HOST}/api/tags", timeout=5)
return json.loads(resp.read())
except Exception as e:
return {"error": str(e)}
def list_models():
data = check_ollama()
if "error" in data:
print(f" Ollama not reachable at {OLLAMA_HOST}: {data['error']}")
return []
models = data.get("models", [])
for m in models:
name = m.get("name", "?")
size = m.get("size", 0) / (1024**3)
print(f" {name:<25s} {size:.1f} GB")
return [m["name"] for m in models]
def test_model(model, prompt="Say 'beacon lit' and nothing else."):
try:
body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
req = urllib.request.Request(f"http://{OLLAMA_HOST}/api/generate", data=body,
headers={"Content-Type": "application/json"})
resp = urllib.request.urlopen(req, timeout=60)
result = json.loads(resp.read())
return True, result.get("response", "").strip()
except Exception as e:
return False, str(e)[:100]
def test_chain():
chain_data = {}
if CHAIN_FILE.exists():
chain_data = json.loads(CHAIN_FILE.read_text())
chain = chain_data.get("chain", DEFAULT_CHAIN)
available = list_models() or []
print("\n=== Fallback Chain Test ===")
first_good = None
for entry in chain:
model = entry["model"]
role = entry.get("role", "unknown")
if model in available:
ok, result = test_model(model)
status = "OK" if ok else "FAIL"
print(f" [{status}] {model:<25s} ({role}) — {result[:70]}")
log(f"Fallback test {model}: {status}{result[:100]}")
if ok and first_good is None:
first_good = model
else:
print(f" [MISS] {model:<25s} ({role}) — not installed")
if first_good:
print(f"\n Primary serving: {first_good}")
else:
print(f"\n WARNING: No chain model responding. Fallback broken.")
log("FALLBACK CHAIN BROKEN — no models responding")
def status():
data = check_ollama()
if "error" in data:
print(f" Ollama: DOWN — {data['error']}")
else:
models = data.get("models", [])
print(f" Ollama: UP — {len(models)} models loaded")
print("\n=== Local Models ===")
list_models()
print("\n=== Chain Configuration ===")
if CHAIN_FILE.exists():
chain = json.loads(CHAIN_FILE.read_text()).get("chain", DEFAULT_CHAIN)
else:
chain = DEFAULT_CHAIN
for e in chain:
print(f" {e['model']:<25s} {e.get('role','?')}")
if __name__ == "__main__":
cmd = sys.argv[1] if len(sys.argv) > 1 else "status"
if cmd == "status": status()
elif cmd == "list": list_models()
elif cmd == "test": test_chain()
else:
status()
test_chain()

View File

@@ -111,7 +111,7 @@ def update_uptime(checks: dict):
save(data)
if new_milestones:
print(f" UPTIME MILESTONE: {','.join((str(m) + '%') for m in new_milestones)}")
print(f" UPTIME MILESTONE: {','.join(str(m) + '%') for m in new_milestones}")
print(f" Current uptime: {recent_ok:.1f}%")
return data["uptime"]

View File

@@ -59,6 +59,7 @@
| Hermes agent (s007) | 62032 | ~200MB | Session active since 10:20PM prev |
| Hermes agent (s001) | 12072 | ~178MB | Session active since Sun 6PM |
| Ollama | 71466 | ~20MB | /opt/homebrew/opt/ollama/bin/ollama serve |
| OpenClaw gateway | 85834 | ~350MB | Tue 12PM start |
| Crucible MCP (x4) | multiple | ~10-69MB each | MCP server instances |
| Evennia Server | 66433 | ~49MB | Sun 10PM start, port 4000 |
| Evennia Portal | 66423 | ~7MB | Sun 10PM start, port 4001 |

Binary file not shown.

Before

Width:  |  Height:  |  Size: 415 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 249 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 509 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 395 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 443 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 246 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 283 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 284 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 225 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 222 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 332 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 496 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 384 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 311 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 407 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 164 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 281 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 569 KiB

Some files were not shown because too many files have changed in this diff Show More