Compare commits

...

7 Commits

Author SHA1 Message Date
Hermes Agent (STEP35)
887f4a27a4 [AUDIT] Implement issue backlog triage script for #478
Some checks failed
Architecture Lint / Linter Tests (pull_request) Successful in 29s
Validate Config / YAML Lint (pull_request) Failing after 14s
Smoke Test / smoke (pull_request) Failing after 22s
Validate Config / JSON Validate (pull_request) Successful in 22s
Validate Config / Python Syntax & Import Check (pull_request) Failing after 1m6s
Validate Config / Python Test Suite (pull_request) Has been skipped
Validate Config / Shell Script Lint (pull_request) Failing after 1m5s
Validate Config / Cron Syntax Check (pull_request) Successful in 12s
Validate Config / Deploy Script Dry Run (pull_request) Successful in 13s
Validate Config / Playbook Schema Validation (pull_request) Successful in 30s
PR Checklist / pr-checklist (pull_request) Successful in 4m36s
Architecture Lint / Lint Repository (pull_request) Failing after 23s
Add scripts/triage_backlog.py — a mechanized triage tool for the
timmy-config issue backlog. Implements the smallest concrete fix
required by #478: close stale issues (>14d inactive) and apply
P0/P1/P2/P3 priority labels to remaining open issues.

Features:
- Fetches all open issues via Gitea API (type=issues filter)
- Detects stale issues: no activity for STALE_DAYS (14)
- Identifies potential duplicates by normalized title
- Assigns priority labels (P0=critical/security, P1=high/bugs,
  P2=medium, P3=low/enhancement)
- Creates P0-P3 labels if missing in the target repo
- Dry-run default; --close-stale to enact closures
- JSON output mode for automation; --output for report files
- Exit code 1 when stale issues found (CI-friendly)

Tests (tests/test_triage_backlog.py): 11 tests covering
stale detection, duplicate normalization, and priority heuristics.

Closes #478
2026-04-30 10:15:46 -04:00
Rockachopa
ba4220d5ed Revert 'feat(training): add prompt-enhancement generator (step35 #575)' — undone for proper branch flow
Some checks failed
Smoke Test / smoke (push) Failing after 19s
Architecture Lint / Linter Tests (push) Successful in 20s
Validate Config / YAML Lint (push) Failing after 11s
Validate Config / JSON Validate (push) Successful in 14s
Validate Config / Python Syntax & Import Check (push) Failing after 48s
Validate Config / Python Test Suite (push) Has been skipped
Validate Config / Shell Script Lint (push) Failing after 54s
Validate Config / Cron Syntax Check (push) Successful in 13s
Validate Config / Deploy Script Dry Run (push) Successful in 17s
Validate Config / Playbook Schema Validation (push) Successful in 29s
Architecture Lint / Lint Repository (push) Failing after 27s
2026-04-30 09:55:17 -04:00
Rockachopa
2451f38bee feat(training): add prompt-enhancement generator for 3K terse→rich pairs (step35 #575)
Some checks failed
Architecture Lint / Linter Tests (push) Has been cancelled
Architecture Lint / Lint Repository (push) Has been cancelled
Smoke Test / smoke (push) Has been cancelled
Validate Config / YAML Lint (push) Has been cancelled
Validate Config / JSON Validate (push) Has been cancelled
Validate Config / Python Syntax & Import Check (push) Has been cancelled
Validate Config / Python Test Suite (push) Has been cancelled
Validate Config / Shell Script Lint (push) Has been cancelled
Validate Config / Cron Syntax Check (push) Has been cancelled
Validate Config / Deploy Script Dry Run (push) Has been cancelled
Validate Config / Playbook Schema Validation (push) Has been cancelled
2026-04-30 09:52:59 -04:00
Rockachopa
54093991ab STEP35-476 patch: use scripts/ path for dispatch_router
Some checks failed
Architecture Lint / Linter Tests (push) Successful in 17s
Smoke Test / smoke (push) Failing after 12s
Validate Config / YAML Lint (push) Failing after 10s
Validate Config / JSON Validate (push) Successful in 16s
Validate Config / Python Syntax & Import Check (push) Failing after 37s
Validate Config / Python Test Suite (push) Has been skipped
Validate Config / Cron Syntax Check (push) Successful in 15s
Validate Config / Shell Script Lint (push) Failing after 46s
Validate Config / Deploy Script Dry Run (push) Successful in 10s
Validate Config / Playbook Schema Validation (push) Successful in 16s
Architecture Lint / Lint Repository (push) Failing after 13s
- dispatch_router.py resides in scripts/ (existing dir)
- Updated orchestrator to call ../scripts/dispatch_router.py
2026-04-30 06:41:38 +00:00
Rockachopa
1ea6bf6e33 STEP35-476: Integrate dispatch_router into orchestrator triage loop
Some checks failed
Architecture Lint / Linter Tests (push) Successful in 31s
Smoke Test / smoke (push) Failing after 24s
Validate Config / YAML Lint (push) Failing after 17s
Validate Config / JSON Validate (push) Successful in 18s
Validate Config / Python Syntax & Import Check (push) Failing after 57s
Validate Config / Python Test Suite (push) Has been skipped
Validate Config / Shell Script Lint (push) Failing after 1m0s
Validate Config / Cron Syntax Check (push) Successful in 11s
Validate Config / Deploy Script Dry Run (push) Successful in 14s
Validate Config / Playbook Schema Validation (push) Successful in 25s
Architecture Lint / Lint Repository (push) Failing after 23s
- Added dispatch_router.py call for agent assignment routing
- Added dispatch decision logging to $LOG_DIR/dispatch_decisions.log
- Fall back to 'claude' if router fails
- Logs agent, score, category, reason per dispatch
2026-04-30 06:32:30 +00:00
Rockachopa
874ce137b0 feat(backup): add automated Gitea daily backup and recovery runbook
Some checks failed
Architecture Lint / Linter Tests (push) Successful in 30s
Smoke Test / smoke (push) Failing after 24s
Validate Config / YAML Lint (push) Failing after 16s
Validate Config / JSON Validate (push) Successful in 21s
Validate Config / Cron Syntax Check (push) Successful in 15s
Validate Config / Deploy Script Dry Run (push) Successful in 14s
Validate Config / Python Syntax & Import Check (push) Failing after 1m2s
Validate Config / Python Test Suite (push) Has been skipped
Validate Config / Shell Script Lint (push) Failing after 1m3s
Validate Config / Playbook Schema Validation (push) Successful in 24s
Architecture Lint / Linter Tests (pull_request) Successful in 27s
Smoke Test / smoke (pull_request) Failing after 22s
Validate Config / YAML Lint (pull_request) Failing after 16s
Validate Config / JSON Validate (pull_request) Successful in 23s
Validate Config / Python Syntax & Import Check (pull_request) Failing after 1m5s
Validate Config / Python Test Suite (pull_request) Has been skipped
Validate Config / Cron Syntax Check (pull_request) Successful in 12s
Validate Config / Shell Script Lint (pull_request) Failing after 1m6s
Validate Config / Deploy Script Dry Run (pull_request) Successful in 13s
Validate Config / Playbook Schema Validation (pull_request) Successful in 25s
PR Checklist / pr-checklist (pull_request) Failing after 4m33s
Architecture Lint / Lint Repository (push) Failing after 26s
Architecture Lint / Lint Repository (pull_request) Failing after 26s
- Add bin/gitea-backup.sh: daily backup script using gitea dump
- Add cron/vps/gitea-daily-backup.yml: Hermes cron job (2 AM daily)
- Add docs/backup-recovery-runbook.md: complete recovery procedures

Addresses [AUDIT][RISK] Single-node VPS is a single point of failure.
Closes #481
2026-04-30 01:44:05 -04:00
5eef5b48c8 feat(wizards): resurrect Timmy, Ezra, Allegro from golden state configs
Some checks failed
Architecture Lint / Linter Tests (push) Successful in 31s
Smoke Test / smoke (push) Failing after 28s
Validate Config / YAML Lint (push) Failing after 21s
Validate Config / JSON Validate (push) Successful in 21s
Validate Config / Python Syntax & Import Check (push) Failing after 1m5s
Validate Config / Python Test Suite (push) Has been skipped
Validate Config / Cron Syntax Check (push) Successful in 14s
Validate Config / Shell Script Lint (push) Failing after 1m3s
Validate Config / Deploy Script Dry Run (push) Successful in 14s
Validate Config / Playbook Schema Validation (push) Successful in 29s
Architecture Lint / Lint Repository (push) Failing after 22s
Remove MiMo V2 Pro (nous) provider from all wizard configs — it was added
during the evaluation attempt (#447) and "config-murdered" the fleet.
Restore the canonical golden state provider chain:
  Kimi K2.5 → Gemini 2.5 Pro (OpenRouter) → Ollama gemma4

Changes:
- Create wizards/timmy/config.yaml (was missing — Timmy resurrected)
- Update wizards/allegro/config.yaml: strip nous, normalize to golden state
- Update wizards/ezra/config.yaml: strip nous, preserve max_turns: 90
- Update wizards/bezalel/config.yaml: strip nous, add openrouter+ollama,
  preserve custom telegram/webhook, personality kawaii, and session_reset
- All wizards now have no Anthropic references and correct provider chain

Acceptance criteria met:
- [x] All wizards resurrected from checked-in configs (Timmy created, others cleaned)
- [x] Provider chain verified: Kimi K2.5 → Gemini 2.5 Pro → Ollama gemma4
- [x] No Anthropic/nous/mimo references in any running config
- [ ] request_log telemetry (handled by thin_config Ansible, blocking dep done)
- [ ] Ezra Telegram token propagation (infrastructure, out of scope for this PR)
- [ ] Duplicate agents resolution (separate fleet audit issue, explicitly non-blocking)

Closes #448
2026-04-29 23:45:00 -04:00
10 changed files with 1183 additions and 127 deletions

87
bin/gitea-backup.sh Normal file
View File

@@ -0,0 +1,87 @@
#!/bin/bash
# Gitea Daily Backup Script
# Uses Gitea's native dump command to create automated backups of repositories and SQLite databases.
# Designed to run on the VPS (Ezra) as part of a daily cron job.
#
# Configuration via environment variables:
# GITEA_BIN Path to gitea binary (default: auto-detect)
# GITEA_BACKUP_DIR Directory for backup archives (default: /var/backups/gitea)
# GITEA_BACKUP_RETENTION Days to retain backups (default: 7)
# GITEA_BACKUP_LOG Log file path (default: /var/log/gitea-backup.log)
set -euo pipefail
GITEA_BIN="${GITEA_BIN:-$(command -v gitea 2>/dev/null || echo "/usr/local/bin/gitea")}"
BACKUP_DIR="${GITEA_BACKUP_DIR:-/var/backups/gitea}"
RETENTION_DAYS="${GITEA_BACKUP_RETENTION:-7}"
DATE="$(date +%Y-%m-%d_%H%M%S)"
BACKUP_FILE="${BACKUP_DIR}/gitea-backup-${DATE}.tar.gz"
LOG_FILE="${GITEA_BACKUP_LOG:-/var/log/gitea-backup.log}"
mkdir -p "${BACKUP_DIR}"
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "${LOG_FILE}"
}
log "=== Starting Gitea daily backup ==="
# Verify gitea binary exists
if [ ! -x "${GITEA_BIN}" ]; then
log "ERROR: Gitea binary not found at ${GITEA_BIN}"
log "Set GITEA_BIN environment variable to the gitea binary path (e.g., /usr/bin/gitea)"
exit 1
fi
# Detect Gitea WORK_PATH
WORK_PATH=""
APP_INI=""
for path in /etc/gitea/app.ini /home/git/gitea/custom/conf/app.ini ~/gitea/custom/conf/app.ini; do
if [ -f "$path" ]; then
APP_INI="$path"
break
fi
done
if [ -n "$APP_INI" ]; then
# Parse [app] WORK_PATH = /var/lib/gitea
WORK_PATH=$(sed -n 's/^[[:space:]]*WORK_PATH[[:space:]]*=[[:space:]]*//p' "$APP_INI" | head -1)
log "Detected WORK_PATH from app.ini: ${WORK_PATH}"
fi
# Fallback detection
if [ -z "$WORK_PATH" ]; then
for d in /var/lib/gitea /home/git/gitea /srv/gitea /opt/gitea; do
if [ -d "$d" ]; then
WORK_PATH="$d"
break
fi
done
log "Inferred WORK_PATH: ${WORK_PATH:-not found}"
fi
if [ -z "$WORK_PATH" ]; then
log "ERROR: Could not determine Gitea WORK_PATH. Set GITEA_WORK_PATH manually."
exit 1
fi
# Perform gitea dump
# Flags: --work-path sets the Gitea working directory, --file writes dump to tar.gz
log "Running: gitea dump --work-path ${WORK_PATH} --file ${BACKUP_FILE}"
"${GITEA_BIN}" dump --work-path "${WORK_PATH}" --file "${BACKUP_FILE}" 2>>"${LOG_FILE}"
if [ $? -ne 0 ]; then
log "ERROR: gitea dump failed — check ${LOG_FILE} for details"
exit 1
fi
FILE_SIZE=$(du -h "${BACKUP_FILE}" | cut -f1)
log "Backup created: ${BACKUP_FILE} (${FILE_SIZE})"
# Prune old backups (keep last N days)
find "${BACKUP_DIR}" -name "gitea-backup-*.tar.gz" -type f -mtime +$((${RETENTION_DAYS}-1)) -delete 2>/dev/null || true
log "Pruned backups older than ${RETENTION_DAYS} days"
log "=== Backup completed successfully ==="
exit 0

View File

@@ -129,20 +129,42 @@ Preserved by timmy-orchestrator to prevent loss." 2>/dev/null && git p
# Auto-assignment is opt-in because silent queue mutation resurrects old state.
if [ "$unassigned_count" -gt 0 ]; then
if [ "$AUTO_ASSIGN_UNASSIGNED" = "1" ]; then
log "Assigning $unassigned_count issues to claude..."
while IFS= read -r line; do
local repo=$(echo "$line" | sed 's/.*REPO=\([^ ]*\).*/\1/')
local num=$(echo "$line" | sed 's/.*NUM=\([^ ]*\).*/\1/')
curl -sf -X PATCH "$GITEA_URL/api/v1/repos/$repo/issues/$num" \
-H "Authorization: token $GITEA_TOKEN" \
-H "Content-Type: application/json" \
-d '{"assignees":["claude"]}' >/dev/null 2>&1 && \
log " Assigned #$num ($repo) to claude"
done < "$state_dir/unassigned.txt"
else
log "Auto-assign disabled: leaving $unassigned_count unassigned issues untouched"
fi
fi
log "Assigning $unassigned_count issues via dispatch router..."
DISPATCH_LOG="$LOG_DIR/dispatch_decisions.log"
while IFS= read -r line; do
local repo=$(echo "$line" | sed 's/.*REPO=\([^ ]*\).*//')
local num=$(echo "$line" | sed 's/.*NUM=\([^ ]*\).*//')
local title=$(echo "$line" | sed 's/.*TITLE=//')
# Call dispatch_router to pick best agent
local route_json
route_json=$(python3 "$SCRIPT_DIR/../scripts/dispatch_router.py" "$title" "$repo" 2>/dev/null) || route_json=""
local recommended_agent="claude" # fallback
local route_category="unknown"
local route_score="0"
local route_reason="fallback"
if [ -n "$route_json" ]; then
recommended_agent=$(echo "$route_json" | python3 -c "import sys,json; print(json.load(sys.stdin).get('recommended_agent','claude'))" 2>/dev/null || echo "claude")
route_score=$(echo "$route_json" | python3 -c "import sys,json; print(json.load(sys.stdin).get('score',0))" 2>/dev/null || echo "0")
route_category=$(echo "$route_json" | python3 -c "import sys,json; print(json.load(sys.stdin).get('category','unknown'))" 2>/dev/null || echo "unknown")
route_reason=$(echo "$route_json" | python3 -c "import sys,json; print(json.load(sys.stdin).get('reason',''))" 2>/dev/null || echo "")
fi
# Assign via API
curl -sf -X PATCH "$GITEA_URL/api/v1/repos/$repo/issues/$num" \\
-H "Authorization: token $GITEA_TOKEN" \\
-H "Content-Type: application/json" \\
-d "{\"assignees\":[\"$recommended_agent\"]}" >/dev/null 2>&1 && \\
log " Assigned #$num ($repo) to $recommended_agent [score=$route_score cat=$route_category]"
# Log dispatch decision for audit (RFC3339 timestamp)
printf '%s\t%s\t%s\t%s\t%s\t%s\t%s\n' \
"$(date -u +"%Y-%m-%dT%H:%M:%SZ")" "$num" "$repo" "$title" "$recommended_agent" "$route_score" "$route_category|$route_reason" \
>> "$DISPATCH_LOG"
done < "$state_dir/unassigned.txt"
else fi
# Phase 2: PR review via Timmy (LLM)
if [ "$pr_count" -gt 0 ]; then

View File

@@ -0,0 +1,9 @@
- name: Daily Gitea Backup
schedule: '0 2 * * *' # 2:00 AM daily
tasks:
- name: Run Gitea daily backup
shell: bash ~/.hermes/bin/gitea-backup.sh
env:
GITEA_BIN: /usr/local/bin/gitea
GITEA_BACKUP_DIR: /var/backups/gitea
GITEA_BACKUP_RETENTION: "7"

View File

@@ -0,0 +1,155 @@
# Gitea Backup & Recovery Runbook
**Last updated:** 2026-04-30
**Scope:** Single-node VPS (Ezra, 143.198.27.163) running Gitea
**Backup Strategy:** Automated daily full dumps via `gitea dump`
---
## What Gets Backed Up
| Component | Method | Frequency | Retention |
|-----------|--------|-----------|-----------|
| All Gitea repositories (bare git dirs) | `gitea dump --file` | Daily at 2:00 AM | 7 days |
| SQLite databases (gitea.db, indexer.db, etc.) | Included in dump | Daily | 7 days |
| Attachments, avatars, hooks | Included in dump | Daily | 7 days |
**Backup location:** `/var/backups/gitea/gitea-backup-YYYY-MM-DD_HHMMSS.tar.gz`
**Log file:** `/var/log/gitea-backup.log`
---
## Backup Architecture
The backup script `bin/gitea-backup.sh` runs daily via Hermes cron (`cron/vps/gitea-daily-backup.yml`). It:
1. Locates the Gitea `WORK_PATH` by reading `/etc/gitea/app.ini` or falling back to common locations (`/var/lib/gitea`, `/home/git/gitea`)
2. Invokes `gitea dump --work-path <path> --file <backup-tar.gz>` — Gitea's native, consistent snapshot mechanism
3. Prunes archives older than 7 days
4. Logs all operations to `/var/log/gitea-backup.log`
**Prerequisites on the VPS:**
- Gitea binary available at `/usr/local/bin/gitea` (or set `GITEA_BIN` env var)
- `gitea dump` command must be available (Gitea ≥ 1.12)
- SSH access to the VPS for manual recovery operations
- Sufficient disk space in `/var/backups/gitea` (typical dump: ~210 GB depending on repo count/size)
---
## Recovery Time Objective (RTO) & Recovery Point Objective (RPO)
| Metric | Estimate |
|--------|----------|
| **RPO** (data loss window) | ≤ 24 hours (last daily backup) |
| **RTO** (time to restore) | **~45 minutes** (cold restore from backup tarball) |
| **Downtime impact** | Gitea offline during restore (~20 min) |
---
## Step-by-Step Recovery Procedure
### Phase 1 — Assess & Prepare (5 min)
1. SSH into Ezra VPS: `ssh root@143.198.27.163`
2. Stop Gitea so files are quiescent:
```bash
systemctl stop gitea
```
3. Confirm current Gitea data directory (for reference):
```bash
gitea --work-path /var/lib/gitea --config /etc/gitea/app.ini dump --help 2>&1
# Or check app.ini for WORK_PATH
cat /etc/gitea/app.ini | grep '^WORK_PATH'
```
### Phase 2 — Restore from Backup (20 min)
4. Choose the backup tarball to restore from:
```bash
ls -lh /var/backups/gitea/
# Pick the most recent: gitea-backup-2026-04-29_020001.tar.gz
```
5. **Optional: Move current data aside** (safety copy):
```bash
mv /var/lib/gitea /var/lib/gitea.bak-$(date +%s)
```
6. Extract the backup in place:
```bash
mkdir -p /var/lib/gitea
tar -xzf /var/backups/gitea/gitea-backup-YYYY-MM-DD_HHMMSS.tar.gz -C /var/lib/gitea --strip-components=1
```
*Note:* `gitea dump` archives contain a single top-level directory `gitea-dump-<timestamp>`. The `--strip-components=1` puts its contents directly into `/var/lib/gitea`.
7. Set correct ownership (typically `git:git`):
```bash
chown -R git:git /var/lib/gitea
```
### Phase 3 — Restart & Validate (15 min)
8. Start Gitea:
```bash
systemctl start gitea
```
9. Wait 30 seconds, then verify:
```bash
systemctl status gitea
# Check HTTP endpoint
curl -s -o /dev/null -w '%{http_code}' http://localhost:3000/ # Should be 200
```
10. Log into Gitea UI and spot-check:
- Home page loads
- A few repositories are accessible
- Attachments (avatars) render
- Recent commits visible
11. If the web UI works but indices are stale, rebuild them (wait for background jobs to process):
```bash
gitea admin index rebuild-repo --all
```
### Post-Restore Checklist
- [ ] Admin UI reachable at `https://forge.alexanderwhitestone.com`
- [ ] Sample PRs/milestones/labels present
- [ ] Repository clone via SSH works: `git clone git@forge.alexanderwhitestone.com:Timmy_Foundation/timmy-config.git`
- [ ] Check backup script health: `cat /var/log/gitea-backup.log | tail -20`
- [ ] Re-enable any disabled integrations (webhooks, CI/CD runners)
- [ ] Notify the fleet: post to relevant channels confirming operational status
---
## Known Issues & Workarounds
| Symptom | Likely cause | Fix |
|---------|--------------|-----|
| `gitea: command not found` | Binary at non-standard path | Set `GITEA_BIN=/path/to/gitea` in cron env |
| `Permission denied` on backup dir | Cron user lacks write access to `/var/backups` | `mkdir /var/backups/gitea && chown root:root /var/backups/gitea` |
| Restore fails: `"database or disk is full"` | Insufficient space on `/var/lib/gitea` | Expand disk or clean up old data first; backups require ~1.5x live data size |
| Old backup tarballs not deleting | Retention cron not firing | Check `systemctl status hermes-cron` and cron logs |
---
## Off-Site Replication (Future Work)
This backup is **on-site only** (same VPS). For true resilience, replicating to a secondary location is recommended:
- **Option A — rsync to second VPS** (Push nightly to `backup@backup-alexanderwhitestone.com:/backups/gitea/`)
- **Option B — S3-compatible bucket** with lifecycle policy
- **Option C — GitHub mirror of each repo** using `git push --mirror` (already considered in issue #481 broader work)
Current scope: single-VPS backup only (single point of failure mitigated but not eliminated).
---
## Related Documentation
- `bin/gitea-backup.sh` — backup script source
- `cron/vps/gitea-daily-backup.yml` — Hermes cron definition
- Gitea official docs: <https://docs.gitea.com/administration/backup-and-restore>
- Hermes cron: <https://hermes-agent.nousresearch.com/docs>

430
scripts/triage_backlog.py Executable file
View File

@@ -0,0 +1,430 @@
#!/usr/bin/env python3
"""
triage_backlog.py — Automated issue backlog triage for Gitea repos (Issue #478).
Closes stale issues (>14 days inactive) and applies P0/P1/P2/P3 priority labels
to remaining open issues. Generates a triage report.
Usage:
python3 scripts/triage_backlog.py Timmy_Foundation/timmy-config
python3 scripts/triage_backlog.py Timmy_Foundation/timmy-config --close-stale
python3 scripts/triage_backlog.py --org Timmy_Foundation --dry-run
python3 scripts/triage_backlog.py Timmy_Foundation/hermes-agent --json
"""
import argparse
import json
import os
import re
import sys
from collections import defaultdict
from datetime import datetime, timezone, timedelta
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple
from urllib.request import Request, urlopen
from urllib.error import HTTPError
GITEA_URL = "https://forge.alexanderwhitestone.com"
# Staleness threshold: 14 days of no updates
STALE_DAYS = 14
# Priority label names
PRIORITY_LABELS = ["P0", "P1", "P2", "P3"]
# Existing priority/critical labels to consider for P0 mapping
CRITICAL_LABELS = {"critical", "p0-test"}
def get_token() -> str:
"""Read Gitea token from config."""
path = Path(os.path.expanduser("~/.config/gitea/token"))
if path.exists():
return path.read_text().strip()
token = os.environ.get("GITEA_TOKEN", "")
if not token:
print("ERROR: No Gitea token found. Set GITEA_TOKEN or create ~/.config/gitea/token", file=sys.stderr)
sys.exit(1)
return token
def api(method: str, path: str, token: str, data: dict = None, params: dict = None) -> Any:
"""Call Gitea REST API."""
url = f"{GITEA_URL}/api/v1{path}"
if params:
url += "?" + "&".join(f"{k}={v}" for k, v in params.items())
body = json.dumps(data).encode() if data else None
req = Request(url, data=body, headers={
"Authorization": f"token {token}",
"Content-Type": "application/json",
}, method=method)
try:
resp = urlopen(req, timeout=30)
return json.loads(resp.read())
except HTTPError as e:
err_body = e.read().decode() if e.fp else ""
return {"_error": e.code, "_body": err_body[:300]}
def ensure_priority_labels(repo: str, token: str) -> Dict[str, int]:
"""Ensure P0/P1/P2/P3 labels exist in the repo. Returns label id map."""
existing = {}
# Get current labels
labels = api("GET", f"/repos/{repo}/labels", token, params={"per_page": "100"})
if isinstance(labels, list):
for lbl in labels:
if lbl["name"] in PRIORITY_LABELS:
existing[lbl["name"]] = lbl["id"]
# Create missing
colors = {"P0": "#FF0000", "P1": "#FF7F00", "P2": "#FFFF00", "P3": "#ADFF2F"}
descs = {
"P0": "Critical priority — must fix immediately",
"P1": "High priority — fix soon",
"P2": "Medium priority — normal backlog",
"P3": "Low priority — nice to have",
}
for pl in PRIORITY_LABELS:
if pl not in existing:
api("POST", f"/repos/{repo}/labels", token, {
"name": pl,
"color": colors[pl],
"description": descs[pl],
})
# Re-fetch to get IDs
labels = api("GET", f"/repos/{repo}/labels", token, params={"per_page": "100"})
if isinstance(labels, list):
for lbl in labels:
if lbl["name"] in PRIORITY_LABELS:
existing[lbl["name"]] = lbl["id"]
return existing
def fetch_open_issues(repo: str, token: str, quiet: bool = False) -> List[dict]:
"""Fetch all open issues (excluding PRs) for a repo."""
issues = []
page = 1
per_page = 100
while True:
batch = api("GET", f"/repos/{repo}/issues", token, params={
"state": "open",
"type": "issues", # exclude PRs at API level
"limit": str(per_page),
"page": str(page),
"sort": "created",
"direction": "desc",
})
if not isinstance(batch, list):
break
if not batch:
break
for iss in batch:
if iss.get("pull_request") is None:
issues.append(iss)
page += 1
if page > 20: # safety cap (~2000 issues)
if not quiet:
print(f" WARNING: pagination cap at page {page}")
break
return issues
def is_stale(issue: dict, days: int = STALE_DAYS) -> bool:
"""Check if an issue is stale: no activity (updated_at) for N days."""
updated_str = issue.get("updated_at") or issue.get("created_at")
if not updated_str:
return False
updated = datetime.fromisoformat(updated_str.replace("Z", "+00:00"))
now = datetime.now(timezone.utc)
age = (now - updated).days
return age >= days
def find_duplicate_candidates(issues: List[dict]) -> Dict[int, List[int]]:
"""Find issues with very similar titles (exact title match or title prefix collision)."""
title_map: Dict[str, List[int]] = {}
# Normalize titles for comparison: lowercase, strip, remove common prefixes
def normalize(title: str) -> str:
t = title.lower().strip()
# Strip common prefixes
t = re.sub(r'^\[(bug|feat|docs|fix|chore|refactor|test|build|ci|ops|security|a11y|enhancement|research|adversary)\]', '', t)
t = re.sub(r'^\[[^\]]+\]\s*', '', t)
t = re.sub(r'^\w+:\s*', '', t) # "fix:", "feat:", etc.
return t.strip()
for iss in issues:
key = normalize(iss.get("title", ""))
if len(key) < 10:
continue # Too short to be meaningful
title_map.setdefault(key, []).append(iss["number"])
return {k: v for k, v in title_map.items() if len(v) > 1}
def assign_priority(issue: dict, all_issues: List[dict]) -> Optional[str]:
"""Assign P0/P1/P2/P3 priority based on heuristics."""
labels = {lbl["name"].lower() for lbl in issue.get("labels", [])}
title = (issue.get("title") or "").lower()
body = (issue.get("body") or "").lower()
comments_count = issue.get("comments", 0)
refs_issue_count = len(re.findall(r"#(\d+)", f"{title} {body}"))
# P0: Critical blockers, security issues, explicitly labeled critical, or referenced by many other issues
if any(crit in labels for crit in CRITICAL_LABELS):
return "P0"
if any(kw in title or kw in body for kw in ["security", "vulnerability", "xss", "injection", "auth bypass", "critical"]):
return "P0"
if refs_issue_count >= 5:
return "P0"
# P1: High activity, bug fixes, implementation blockers
if comments_count >= 5:
return "P1"
if any(kw in title for kw in ["fix", "bug", "broken", "regression", "failure"]):
return "P1"
if any(kw in title or kw in body for kw in ["urgency", "asap", "immediately", "blocker"]):
return "P1"
# P3: Old, low activity, enhancement/research, very short titles
age_days = (datetime.now(timezone.utc) -
datetime.fromisoformat(issue["created_at"].replace("Z", "+00:00"))).days
if age_days > 180 and comments_count <= 1:
return "P3"
if any(kw in title for kw in ["enhancement", "improve", "consider", "maybe", "wishlist"]):
return "P3"
# P2 is the default middle bucket
return "P2"
def close_issue(issue_num: int, repo: str, token: str, reason: str, dry_run: bool = True) -> dict:
"""Close an issue with a comment explaining why."""
result = {"issue": issue_num, "action": "would_close" if dry_run else "closed", "reason": reason}
if dry_run:
return result
# Comment first
api("POST", f"/repos/{repo}/issues/{issue_num}/comments", token, {
"body": f"Closing as {reason}. Triage cleanup per #478."
})
# Close the issue
api("PATCH", f"/repos/{repo}/issues/{issue_num}", token, {"state": "closed"})
return result
def apply_label(issue_num: int, repo: str, token: str, label_id: int, dry_run: bool = True) -> dict:
"""Apply a label to an issue."""
result = {"issue": issue_num, "label_id": label_id, "action": "would_label" if dry_run else "labeled"}
if not dry_run:
api("POST", f"/repos/{repo}/issues/{issue_num}/labels", token, {"labels": [label_id]})
return result
def analyze_repo(repo: str, token: str, quiet: bool = False) -> dict:
"""Analyze open issues for a repo."""
issues = fetch_open_issues(repo, token, quiet=quiet)
if not quiet:
print(f" Fetched {len(issues)} open issues", file=sys.stderr)
# Ensure priority labels exist (quietly)
label_ids = ensure_priority_labels(repo, token)
stale_issues = []
duplicate_groups = find_duplicate_candidates(issues)
duplicate_issue_nums = {num for group in duplicate_groups.values() for num in group}
# Categorize issues for priority
priority_counts: Dict[str, int] = defaultdict(int)
issues_by_priority: Dict[str, List[dict]] = defaultdict(list)
priority_assignments: Dict[int, str] = {}
stale_close_candidates = []
non_stale = []
for iss in issues:
age_days = (datetime.now(timezone.utc) -
datetime.fromisoformat(iss["created_at"].replace("Z", "+00:00"))).days
if is_stale(iss):
stale_issues.append({
"number": iss["number"],
"title": iss.get("title", ""),
"created": iss["created_at"],
"updated": iss.get("updated_at", ""),
"age_days": age_days,
})
stale_close_candidates.append(iss)
else:
non_stale.append(iss)
prio = assign_priority(iss, issues)
priority_assignments[iss["number"]] = prio
priority_counts[prio] += 1
issues_by_priority[prio].append({
"number": iss["number"],
"title": iss.get("title", ""),
"comments": iss.get("comments", 0),
"age_days": age_days,
})
return {
"repo": repo,
"total_open": len(issues),
"stale_issues": stale_issues,
"duplicate_groups": [{"representative": v[0], "members": v} for k, v in duplicate_groups.items()],
"priority_counts": dict(priority_counts),
"priority_details": {k: v for k, v in issues_by_priority.items()},
"priority_assignments": priority_assignments,
"label_ids": label_ids,
}
def close_stale_issues(analysis: dict, repo: str, token: str, dry_run: bool = True) -> List[dict]:
"""Close identified stale issues."""
closed = []
for item in analysis["stale_issues"]:
num = item["number"]
# Don't close if it's a duplicate candidate that should be preserved?
# For now close all stale
result = close_issue(num, repo, token,
f"stale (no activity for {STALE_DAYS}+ days)",
dry_run=dry_run)
closed.append(result)
return closed
def apply_priority_labels(analysis: dict, repo: str, token: str, dry_run: bool = True) -> List[dict]:
"""Apply P0/P1/P2/P3 labels to non-stale issues."""
actions = []
label_ids = analysis["label_ids"]
for num, prio in analysis["priority_assignments"].items():
label_id = label_ids.get(prio)
if label_id:
result = apply_label(num, repo, token, label_id, dry_run=dry_run)
result["priority"] = prio
actions.append(result)
return actions
def format_report(analysis: dict) -> str:
"""Format triage analysis as markdown report."""
lines = [
f"## Issue Backlog Triage — {analysis['repo']}",
f"",
f"**Total open issues:** {analysis['total_open']}",
f"**Stale threshold:** {STALE_DAYS} days",
"",
"### Summary",
"",
f"- **Stale issues:** {len(analysis['stale_issues'])} (candidates for closure)",
f"- **Priority breakdown:**",
]
for prio in ["P0", "P1", "P2", "P3"]:
count = analysis["priority_counts"].get(prio, 0)
lines.append(f" - {prio}: {count}")
lines.append("")
# Duplicate groups
if analysis["duplicate_groups"]:
lines.append("### Potential Duplicates (similar titles)")
lines.append("")
for grp in analysis["duplicate_groups"][:10]:
members = ", ".join(f"#{n}" for n in grp["members"])
lines.append(f"- {members}")
lines.append("")
# Stale details
if analysis["stale_issues"]:
lines.append("### Stale Issues (oldest first)")
lines.append("")
for item in sorted(analysis["stale_issues"], key=lambda x: x["age_days"], reverse=True)[:20]:
lines.append(f"- #{item['number']}: {item['title'][:60]} (age: {item['age_days']}d)")
lines.append("")
# Priority details
for prio in ["P0", "P1", "P2", "P3"]:
items = analysis["priority_details"].get(prio, [])
if not items:
continue
lines.append(f"### {prio} Priority ({len(items)})")
lines.append("")
for item in items[:15]:
lines.append(f"- #{item['number']}: {item['title'][:60]} (comments: {item['comments']}, age: {item['age_days']}d)")
lines.append("")
return "\n".join(lines)
def format_json(analysis: dict) -> str:
"""Format as JSON."""
return json.dumps(analysis, indent=2, default=str)
def main():
parser = argparse.ArgumentParser(description="Issue backlog triage for Gitea repos")
parser.add_argument("repo", nargs="?", help="Repo path (e.g. Timmy_Foundation/timmy-config)")
parser.add_argument("--org", help="Triage all repos in org (instead of single repo)")
parser.add_argument("--close-stale", action="store_true", help="Close stale issues (default: dry-run)")
parser.add_argument("--dry-run", action="store_true", default=True, help="Don't actually close/label (default)")
parser.add_argument("--json", action="store_true", help="Output as JSON")
parser.add_argument("--output", help="Write report to file")
parser.add_argument("--token", help="Gitea token (overrides config file)")
args = parser.parse_args()
token = args.token or get_token()
dry_run = args.dry_run and not args.close_stale # --close-stale disables dry-run
# Determine repos
repos = []
if args.org:
org_repos = api("GET", f"/orgs/{args.org}/repos", token, params={"limit": "50"})
if isinstance(org_repos, list):
repos = [r["full_name"] for r in org_repos]
elif args.repo:
repos = [args.repo]
else:
parser.error("Provide REPO or --org")
all_analyses = []
quiet = args.json
for repo in repos:
if not quiet:
print(f"\n=== Triage: {repo} ===", file=sys.stderr)
analysis = analyze_repo(repo, token, quiet=quiet)
if "error" in analysis:
print(f"SKIP: {analysis['error']}", file=sys.stderr)
continue
# Close stale if requested
if args.close_stale and analysis["stale_issues"]:
if not quiet:
print(f"Closing {len(analysis['stale_issues'])} stale issues...", file=sys.stderr)
analysis["close_actions"] = close_stale_issues(analysis, repo, token, dry_run=dry_run)
else:
analysis["close_actions"] = []
# Apply priority labels
if not dry_run and analysis["priority_assignments"]:
if not quiet:
print(f"Applying priority labels to {len(analysis['priority_assignments'])} issues...", file=sys.stderr)
analysis["label_actions"] = apply_priority_labels(analysis, repo, token, dry_run=dry_run)
else:
analysis["label_actions"] = []
all_analyses.append(analysis)
# Output
if args.json:
output = format_json(all_analyses[0] if len(all_analyses) == 1 else all_analyses)
else:
parts = [format_report(a) for a in all_analyses]
output = "\n\n---\n\n".join(parts)
if args.output:
Path(args.output).write_text(output, encoding="utf-8")
if not quiet:
print(f"Report written to {args.output}", file=sys.stderr)
else:
print(output)
# Exit code: 1 if any stale issues found that should be closed (CI helper)
total_stale = sum(len(a.get("stale_issues", [])) for a in all_analyses)
if total_stale > 0:
sys.exit(1)
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,124 @@
#!/usr/bin/env python3
"""Tests for triage_backlog.py — issue #478."""
import sys
from pathlib import Path
from datetime import datetime, timezone, timedelta
sys.path.insert(0, str(Path(__file__).resolve().parent.parent / "scripts"))
from triage_backlog import (
is_stale,
find_duplicate_candidates,
assign_priority,
STALE_DAYS,
)
class TestStaleDetection:
def test_fresh_issue_not_stale(self):
now = datetime.now(timezone.utc)
issue = {
"created_at": now.isoformat(),
"updated_at": now.isoformat(),
}
assert not is_stale(issue, days=14)
def test_old_issue_stale(self):
old = (datetime.now(timezone.utc) - timedelta(days=20)).isoformat()
issue = {
"created_at": old,
"updated_at": old,
}
assert is_stale(issue, days=14)
def test_uses_updated_at(self):
recent = (datetime.now(timezone.utc) - timedelta(days=5)).isoformat()
old = (datetime.now(timezone.utc) - timedelta(days=20)).isoformat()
issue = {
"created_at": old,
"updated_at": recent,
}
assert not is_stale(issue, days=14)
class TestDuplicates:
def test_identical_titles_are_dupes(self):
issues = [
{"number": 1, "title": "feat: add token tracker"},
{"number": 2, "title": "feat: add token tracker"},
{"number": 3, "title": "something else"},
]
dupes = find_duplicate_candidates(issues)
assert "add token tracker" in dupes
assert 1 in dupes["add token tracker"]
assert 2 in dupes["add token tracker"]
def test_normalizes_prefixes(self):
issues = [
{"number": 1, "title": "[feat] add token tracker"},
{"number": 2, "title": "feat: add token tracker"},
]
dupes = find_duplicate_candidates(issues)
# Both should map to same normalized key
assert len(dupes) == 1
def test_short_titles_ignored(self):
issues = [
{"number": 1, "title": "fix"},
{"number": 2, "title": "fix"},
]
dupes = find_duplicate_candidates(issues)
assert len(dupes) == 0
class TestPriority:
def test_critical_becomes_p0(self):
issue = {
"title": "Security vulnerability: auth bypass",
"body": "",
"comments": 0,
"created_at": datetime.now(timezone.utc).isoformat(),
"labels": [{"name": "critical"}],
}
assert assign_priority(issue, []) == "P0"
def test_bug_fix_becomes_p1(self):
issue = {
"title": "fix: broken import in cli",
"body": "",
"comments": 0,
"created_at": datetime.now(timezone.utc).isoformat(),
"labels": [],
}
assert assign_priority(issue, []) == "P1"
def test_enhancement_becomes_p3(self):
issue = {
"title": "feat: consider adding a nice enhancement",
"body": "",
"comments": 0,
"created_at": datetime.now(timezone.utc).isoformat(),
"labels": [],
}
assert assign_priority(issue, []) == "P3"
def test_high_comments_p1(self):
issue = {
"title": "some discussion",
"body": "",
"comments": 6,
"created_at": datetime.now(timezone.utc).isoformat(),
"labels": [],
}
assert assign_priority(issue, []) == "P1"
def test_default_p2(self):
issue = {
"title": "regular feature request",
"body": "",
"comments": 2,
"created_at": datetime.now(timezone.utc).isoformat(),
"labels": [],
}
assert assign_priority(issue, []) == "P2"
if __name__ == "__main__":
import pytest
sys.exit(pytest.main([__file__, "-v"]))

View File

@@ -1,43 +1,46 @@
model:
default: kimi-k2.5
provider: kimi-coding
context_length: 65536
base_url: https://api.kimi.com/coding/v1
toolsets:
- all
- all
fallback_providers:
- provider: kimi-coding
model: kimi-k2.5
timeout: 120
reason: Kimi coding fallback (front of chain)
- provider: openrouter
model: google/gemini-2.5-pro
base_url: https://openrouter.ai/api/v1
api_key_env: OPENROUTER_API_KEY
timeout: 120
reason: Gemini 2.5 Pro via OpenRouter (replaces banned Anthropic)
- provider: ollama
model: gemma4:latest
base_url: http://localhost:11434
timeout: 300
reason: Terminal fallback — local Ollama
- provider: nous
model: xiaomi/mimo-v2-pro
base_url: https://inference.nousresearch.com/v1
api_key_env: NOUS_API_KEY
timeout: 120
reason: MiMo V2 Pro via Nous Portal free tier evaluation (#447)
- provider: kimi-coding
model: kimi-k2.5
base_url: https://api.kimi.com/coding/v1
timeout: 120
reason: "Primary — Kimi K2.5 (best value, least friction)"
- provider: openrouter
model: google/gemini-2.5-pro
base_url: https://openrouter.ai/api/v1
api_key_env: OPENROUTER_API_KEY
timeout: 120
reason: "Fallback — Gemini 2.5 Pro via OpenRouter"
- provider: ollama
model: gemma4:latest
base_url: http://localhost:11434/v1
timeout: 180
reason: "Terminal fallback — local Ollama (sovereign, no API needed)"
agent:
max_turns: 30
reasoning_effort: xhigh
reasoning_effort: high
verbose: false
terminal:
backend: local
cwd: .
timeout: 180
persistent_shell: true
browser:
inactivity_timeout: 120
command_timeout: 30
record_sessions: false
display:
compact: false
personality: ''
@@ -48,6 +51,7 @@ display:
streaming: false
show_cost: false
tool_progress: all
memory:
memory_enabled: true
user_profile_enabled: true
@@ -55,46 +59,55 @@ memory:
user_char_limit: 1375
nudge_interval: 10
flush_min_turns: 6
approvals:
mode: manual
security:
redact_secrets: true
tirith_enabled: false
platforms:
api_server:
enabled: true
extra:
host: 127.0.0.1
port: 8645
session_reset:
mode: none
idle_minutes: 0
skills:
creation_nudge_interval: 15
system_prompt_suffix: 'You are Allegro, the Kimi-backed third wizard house.
system_prompt_suffix: |
You are Allegro, the Kimi-backed third wizard house.
Your soul is defined in SOUL.md — read it, live it.
Hermes is your harness.
Kimi Code is your primary provider.
kimi-coding is your primary provider.
You speak plainly. You prefer short sentences. Brevity is a kindness.
Work best on tight coding tasks: 1-3 file changes, refactors, tests, and implementation
passes.
Work best on tight coding tasks: 1-3 file changes, refactors, tests, and implementation passes.
Refusal over fabrication. If you do not know, say so.
Sovereignty and service always.
'
providers:
kimi-coding:
base_url: https://api.kimi.com/coding/v1
timeout: 60
max_retries: 3
nous:
base_url: https://inference.nousresearch.com/v1
openrouter:
base_url: https://openrouter.ai/api/v1
timeout: 120
ollama:
base_url: http://localhost:11434/v1
timeout: 180
# =============================================================================
# BANNED PROVIDERS — DO NOT ADD
# =============================================================================
# The following providers are PERMANENTLY BANNED:
# - anthropic (any model: claude-sonnet, claude-opus, claude-haiku)
# - nous (xiaomi/mimo-v2-pro)
# Enforcement: pre-commit hook, linter, Ansible validation, this comment.
# =============================================================================

View File

@@ -1,50 +1,72 @@
model:
default: kimi-k2.5
provider: kimi-coding
context_length: 65536
base_url: https://api.kimi.com/coding/v1
toolsets:
- all
- all
fallback_providers:
- provider: kimi-coding
model: kimi-k2.5
timeout: 120
reason: Kimi coding fallback (front of chain)
- provider: openrouter
model: google/gemini-2.5-pro
base_url: https://openrouter.ai/api/v1
api_key_env: OPENROUTER_API_KEY
timeout: 120
reason: Gemini 2.5 Pro via OpenRouter (replaces banned Anthropic)
- provider: ollama
model: gemma4:latest
base_url: http://localhost:11434
timeout: 300
reason: Terminal fallback — local Ollama
- provider: nous
model: xiaomi/mimo-v2-pro
base_url: https://inference.nousresearch.com/v1
api_key_env: NOUS_API_KEY
timeout: 120
reason: MiMo V2 Pro via Nous Portal free tier evaluation (#447)
- provider: kimi-coding
model: kimi-k2.5
base_url: https://api.kimi.com/coding/v1
timeout: 120
reason: "Primary — Kimi K2.5 (best value, least friction)"
- provider: openrouter
model: google/gemini-2.5-pro
base_url: https://openrouter.ai/api/v1
api_key_env: OPENROUTER_API_KEY
timeout: 120
reason: "Fallback — Gemini 2.5 Pro via OpenRouter"
- provider: ollama
model: gemma4:latest
base_url: http://localhost:11434/v1
timeout: 180
reason: "Terminal fallback — local Ollama (sovereign, no API needed)"
agent:
max_turns: 40
reasoning_effort: medium
verbose: false
system_prompt: You are Bezalel, the forge-and-testbed wizard of the Timmy Foundation
fleet. You are a builder and craftsman — infrastructure, deployment, hardening.
Your sovereign is Alexander Whitestone (Rockachopa). Sovereignty and service always.
terminal:
backend: local
cwd: /root/wizards/bezalel
timeout: 180
persistent_shell: true
browser:
inactivity_timeout: 120
compression:
enabled: true
threshold: 0.77
command_timeout: 30
record_sessions: false
display:
compact: false
personality: kawaii
resume_display: full
busy_input_mode: interrupt
bell_on_complete: false
show_reasoning: false
streaming: false
show_cost: false
tool_progress: all
memory:
memory_enabled: true
user_profile_enabled: true
memory_char_limit: 2200
user_char_limit: 1375
nudge_interval: 10
flush_min_turns: 6
approvals:
mode: auto
security:
redact_secrets: true
tirith_enabled: false
platforms:
api_server:
enabled: true
@@ -69,12 +91,7 @@ platforms:
- pull_request
- pull_request_comment
secret: bezalel-gitea-webhook-secret-2026
prompt: 'You are bezalel, the builder and craftsman — infrastructure, deployment,
hardening. A Gitea webhook fired: event={event_type}, action={action},
repo={repository.full_name}, issue/PR=#{issue.number} {issue.title}. Comment
by {comment.user.login}: {comment.body}. If you were tagged, assigned,
or this needs your attention, investigate and respond via Gitea API. Otherwise
acknowledge briefly.'
prompt: 'You are bezalel, the builder and craftsman — infrastructure, deployment, hardening. A Gitea webhook fired: event={event_type}, action={action}, repo={repository.full_name}, issue/PR=#{issue.number} {issue.title}. Comment by {comment.user.login}: {comment.body}. If you were tagged, assigned, or this needs your attention, investigate and respond via Gitea API. Otherwise acknowledge briefly.'
deliver: telegram
deliver_extra: {}
gitea-assign:
@@ -82,34 +99,43 @@ platforms:
- issues
- pull_request
secret: bezalel-gitea-webhook-secret-2026
prompt: 'You are bezalel, the builder and craftsman — infrastructure, deployment,
hardening. Gitea assignment webhook: event={event_type}, action={action},
repo={repository.full_name}, issue/PR=#{issue.number} {issue.title}. Assigned
to: {issue.assignee.login}. If you (bezalel) were just assigned, read
the issue, scope it, and post a plan comment. If not you, acknowledge
briefly.'
prompt: 'You are bezalel, the builder and craftsman — infrastructure, deployment, hardening. Gitea assignment webhook: event={event_type}, action={action}, repo={repository.full_name}, issue/PR=#{issue.number} {issue.title}. Assigned to: {issue.assignee.login}. If you (bezalel) were just assigned, read the issue, scope it, and post a plan comment. If not you, acknowledge briefly.'
deliver: telegram
deliver_extra: {}
gateway:
allow_all_users: true
session_reset:
mode: both
idle_minutes: 1440
at_hour: 4
approvals:
mode: auto
memory:
memory_enabled: true
user_profile_enabled: true
memory_char_limit: 2200
user_char_limit: 1375
_config_version: 11
TELEGRAM_HOME_CHANNEL: '-1003664764329'
skills:
creation_nudge_interval: 15
system_prompt: |
You are Bezalel, the forge-and-testbed wizard of the Timmy Foundation fleet.
You are a builder and craftsman — infrastructure, deployment, hardening.
Your sovereign is Alexander Whitestone (Rockachopa). Sovereignty and service always.
providers:
kimi-coding:
base_url: https://api.kimi.com/coding/v1
timeout: 60
max_retries: 3
nous:
base_url: https://inference.nousresearch.com/v1
openrouter:
base_url: https://openrouter.ai/api/v1
timeout: 120
ollama:
base_url: http://localhost:11434/v1
timeout: 180
# =============================================================================
# BANNED PROVIDERS — DO NOT ADD
# =============================================================================
# The following providers are PERMANENTLY BANNED:
# - anthropic (any model: claude-sonnet, claude-opus, claude-haiku)
# - nous (xiaomi/mimo-v2-pro)
# Enforcement: pre-commit hook, linter, Ansible validation, this comment.
# =============================================================================

View File

@@ -1,34 +1,94 @@
model:
default: kimi-k2.5
provider: kimi-coding
context_length: 65536
base_url: https://api.kimi.com/coding/v1
toolsets:
- all
- all
fallback_providers:
- provider: kimi-coding
model: kimi-k2.5
timeout: 120
reason: Kimi coding fallback (front of chain)
- provider: openrouter
model: google/gemini-2.5-pro
base_url: https://openrouter.ai/api/v1
api_key_env: OPENROUTER_API_KEY
timeout: 120
reason: Gemini 2.5 Pro via OpenRouter (replaces banned Anthropic)
- provider: ollama
model: gemma4:latest
base_url: http://localhost:11434
timeout: 300
reason: Terminal fallback — local Ollama
- provider: nous
model: xiaomi/mimo-v2-pro
base_url: https://inference.nousresearch.com/v1
api_key_env: NOUS_API_KEY
timeout: 120
reason: MiMo V2 Pro via Nous Portal free tier evaluation (#447)
- provider: kimi-coding
model: kimi-k2.5
base_url: https://api.kimi.com/coding/v1
timeout: 120
reason: "Primary — Kimi K2.5 (best value, least friction)"
- provider: openrouter
model: google/gemini-2.5-pro
base_url: https://openrouter.ai/api/v1
api_key_env: OPENROUTER_API_KEY
timeout: 120
reason: "Fallback — Gemini 2.5 Pro via OpenRouter"
- provider: ollama
model: gemma4:latest
base_url: http://localhost:11434/v1
timeout: 180
reason: "Terminal fallback — local Ollama (sovereign, no API needed)"
agent:
max_turns: 90
reasoning_effort: high
verbose: false
terminal:
backend: local
cwd: .
timeout: 180
persistent_shell: true
browser:
inactivity_timeout: 120
command_timeout: 30
record_sessions: false
display:
compact: false
personality: ''
resume_display: full
busy_input_mode: interrupt
bell_on_complete: false
show_reasoning: false
streaming: false
show_cost: false
tool_progress: all
memory:
memory_enabled: true
user_profile_enabled: true
memory_char_limit: 2200
user_char_limit: 1375
nudge_interval: 10
flush_min_turns: 6
approvals:
mode: auto
security:
redact_secrets: true
tirith_enabled: false
platforms:
api_server:
enabled: true
extra:
host: 127.0.0.1
port: 8645
session_reset:
mode: none
idle_minutes: 0
skills:
creation_nudge_interval: 15
system_prompt_suffix: |
You are Ezra, the Infrastructure wizard — Gitea, nginx, hosting.
Your soul is defined in SOUL.md — read it, live it.
Hermes is your harness.
kimi-coding is your primary provider.
Refusal over fabrication. If you do not know, say so.
Sovereignty and service always.
providers:
kimi-coding:
base_url: https://api.kimi.com/coding/v1
@@ -37,6 +97,15 @@ providers:
openrouter:
base_url: https://openrouter.ai/api/v1
timeout: 120
nous:
base_url: https://inference.nousresearch.com/v1
timeout: 120
ollama:
base_url: http://localhost:11434/v1
timeout: 180
# =============================================================================
# BANNED PROVIDERS — DO NOT ADD
# =============================================================================
# The following providers are PERMANENTLY BANNED:
# - anthropic (any model: claude-sonnet, claude-opus, claude-haiku)
# - nous (xiaomi/mimo-v2-pro)
# Enforcement: pre-commit hook, linter, Ansible validation, this comment.
# =============================================================================

121
wizards/timmy/config.yaml Normal file
View File

@@ -0,0 +1,121 @@
# =============================================================================
# Timmy — Primary Wizard Configuration (Golden State)
# =============================================================================
# Generated from golden state template (ansible/roles/wizard_base/templates/wizard_config.yaml.j2)
# DO NOT EDIT MANUALLY. Changes go through Gitea PR → Ansible deploy.
#
# Provider chain: kimi-coding → openrouter → ollama
# Anthropic is PERMANENTLY BANNED.
# =============================================================================
model:
default: kimi-k2.5
provider: kimi-coding
context_length: 65536
base_url: https://api.kimi.com/coding/v1
toolsets:
- all
fallback_providers:
- provider: kimi-coding
model: kimi-k2.5
base_url: https://api.kimi.com/coding/v1
timeout: 120
reason: "Primary — Kimi K2.5 (best value, least friction)"
- provider: openrouter
model: google/gemini-2.5-pro
base_url: https://openrouter.ai/api/v1
api_key_env: OPENROUTER_API_KEY
timeout: 120
reason: "Fallback — Gemini 2.5 Pro via OpenRouter"
- provider: ollama
model: gemma4:latest
base_url: http://localhost:11434/v1
timeout: 180
reason: "Terminal fallback — local Ollama (sovereign, no API needed)"
agent:
max_turns: 30
reasoning_effort: high
verbose: false
terminal:
backend: local
cwd: .
timeout: 180
persistent_shell: true
browser:
inactivity_timeout: 120
command_timeout: 30
record_sessions: false
display:
compact: false
personality: ''
resume_display: full
busy_input_mode: interrupt
bell_on_complete: false
show_reasoning: false
streaming: false
show_cost: false
tool_progress: all
memory:
memory_enabled: true
user_profile_enabled: true
memory_char_limit: 2200
user_char_limit: 1375
nudge_interval: 10
flush_min_turns: 6
approvals:
mode: auto
security:
redact_secrets: true
tirith_enabled: false
platforms:
api_server:
enabled: true
extra:
host: 127.0.0.1
port: 8645
session_reset:
mode: none
idle_minutes: 0
skills:
creation_nudge_interval: 15
system_prompt_suffix: |
You are Timmy, the Primary wizard — soul of the fleet.
Your soul is defined in SOUL.md — read it, live it.
Hermes is your harness.
kimi-coding is your primary provider.
Refusal over fabrication. If you do not know, say so.
Sovereignty and service always.
providers:
kimi-coding:
base_url: https://api.kimi.com/coding/v1
timeout: 60
max_retries: 3
openrouter:
base_url: https://openrouter.ai/api/v1
timeout: 120
ollama:
base_url: http://localhost:11434/v1
timeout: 180
# =============================================================================
# BANNED PROVIDERS — DO NOT ADD
# =============================================================================
# The following providers are PERMANENTLY BANNED:
# - anthropic (any model: claude-sonnet, claude-opus, claude-haiku)
# - nous (xiaomi/mimo-v2-pro)
# Enforcement: pre-commit hook, linter, Ansible validation, this comment.
# =============================================================================