Compare commits


13 Commits

Author SHA1 Message Date
Timmy Bot
4cfd1c2e10 Merge remote main + feedback on EPIC-202 2026-04-06 02:21:50 +00:00
Timmy Bot
a9ad1c8137 feedback: Allegro cross-epic review on EPIC-202 (claw-agent)
- Health: Yellow. Blocker: Gitea firewalled + no Primus RCA.
- Adds pre-flight checklist before Phase 1 start.
2026-04-06 02:20:55 +00:00
f708e45ae9 feat: Sovereign Health Dashboard — Operational Force Multiplication (#417)
Co-authored-by: Google AI Agent <gemini@hermes.local>
Co-committed-by: Google AI Agent <gemini@hermes.local>
2026-04-05 22:56:19 +00:00
f083031537 fix: keep kimi queue labels truthful (#415) 2026-04-05 19:33:37 +00:00
1cef8034c5 fix: keep kimi queue labels truthful (#414) 2026-04-05 18:27:22 +00:00
Timmy Bot
9952ce180c feat(uniwizard): standardized Tailscale IP detection module (timmy-home#385)
Create reusable tailscale-gitea.sh module for all auxiliary scripts:
- Automatically detects Tailscale (100.126.61.75) vs public IP (143.198.27.163)
- Sets GITEA_BASE_URL and GITEA_USING_TAILSCALE for sourcing scripts
- Configurable timeout, debug mode, and endpoint settings
- Maintains sovereignty: prefers private Tailscale network

Updated scripts:
- kimi-heartbeat.sh: now sources the module
- kimi-mention-watcher.sh: added fallback support via module

Files added:
- uniwizard/lib/tailscale-gitea.sh (reusable module)
- uniwizard/lib/example-usage.sh (usage documentation)

Acceptance criteria:
✓ Reusable module created and sourceable
✓ kimi-heartbeat.sh updated
✓ kimi-mention-watcher.sh updated (added fallback support)
✓ Example usage script provided
2026-04-05 07:07:05 +00:00
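The selection logic this commit describes can be paraphrased in a few lines. This is an illustrative Python sketch of the decision the shell module makes, not the actual contents of `uniwizard/lib/tailscale-gitea.sh`; the function name and the `:3000` port are assumptions based on the Gitea URL used elsewhere in this repo.

```python
def gitea_base_url(tailscale_ip=""):
    """Prefer the private Tailscale address when one is detected;
    otherwise fall back to the public IP (sovereignty: private first)."""
    if tailscale_ip:
        return f"http://{tailscale_ip}:3000", True   # GITEA_USING_TAILSCALE=true
    return "http://143.198.27.163:3000", False       # public fallback
```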
Timmy Bot
64a954f4d9 Enhance Kimi heartbeat with Nexus Watchdog alerting for stale lockfiles (#386)
- Add nexus_alert() function to send alerts to Nexus Watchdog
- Alerts are written as JSON files to $NEXUS_ALERT_DIR (default: /tmp/nexus-alerts)
- Alert includes: alert_id, timestamp, source, host, alert_type, severity, message, data
- Send 'stale_lock_reclaimed' warning alert when stale lock detected (age > 600s)
- Send 'heartbeat_resumed' info alert after successful recovery
- Include lock age, lockfile path, action taken, and stat info in alert data
- Add configurable NEXUS_ALERT_DIR and NEXUS_ALERT_ENABLED settings
- Add test script for validating alert functionality
2026-04-05 07:04:57 +00:00
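The alert shape listed in this commit can be illustrated with a short Python sketch. The real `nexus_alert()` is a shell function inside the heartbeat script; this is only a structural approximation of the JSON file it writes, with the same field names.

```python
import json
import os
import socket
import time
import uuid

def nexus_alert(alert_type, severity, message, data=None, alert_dir=None):
    """Write one alert as a JSON file into NEXUS_ALERT_DIR (default /tmp/nexus-alerts)."""
    alert_dir = alert_dir or os.environ.get("NEXUS_ALERT_DIR", "/tmp/nexus-alerts")
    os.makedirs(alert_dir, exist_ok=True)
    alert = {
        "alert_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "source": "kimi-heartbeat",
        "host": socket.gethostname(),
        "alert_type": alert_type,   # e.g. "stale_lock_reclaimed", "heartbeat_resumed"
        "severity": severity,       # e.g. "warning", "info"
        "message": message,
        "data": data or {},         # lock age, lockfile path, action taken, stat info
    }
    path = os.path.join(alert_dir, alert["alert_id"] + ".json")
    with open(path, "w") as f:
        json.dump(alert, f, indent=2)
    return path
```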
Timmy Bot
5ace1e69ce security: add pre-commit hook for secret leak detection (#384) 2026-04-05 00:27:00 +00:00
d5c357df76 Add wizard apprenticeship charter (#398)
Co-authored-by: Codex Agent <codex@hermes.local>
Co-committed-by: Codex Agent <codex@hermes.local>
2026-04-04 22:43:55 +00:00
04213924d0 Merge pull request 'Cut over stale ops docs to current workflow' (#399) from codex/workflow-docs-cutover into main 2026-04-04 22:25:57 +00:00
dba3e90893 feat: rewrite KimiClaw heartbeat — launchd, sovereignty fixes, dispatch cap (#112) 2026-04-04 20:17:40 +00:00
e4c3bb1798 Add workspace user audit and lane recommendations (#392)
Co-authored-by: Codex Agent <codex@hermes.local>
Co-committed-by: Codex Agent <codex@hermes.local>
2026-04-04 20:05:21 +00:00
Alexander Whitestone
4effb5a20e Cut over stale ops docs to current workflow 2026-04-04 15:21:29 -04:00
19 changed files with 2146 additions and 640 deletions

.pre-commit-hooks.yaml Normal file

@@ -0,0 +1,42 @@
# Pre-commit hooks configuration for timmy-home
# See https://pre-commit.com for more information
repos:
# Standard pre-commit hooks
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.5.0
hooks:
- id: trailing-whitespace
exclude: '\.(md|txt)$'
- id: end-of-file-fixer
exclude: '\.(md|txt)$'
- id: check-yaml
- id: check-json
- id: check-added-large-files
args: ['--maxkb=5000']
- id: check-merge-conflict
- id: check-symlinks
- id: detect-private-key
# Secret detection - custom local hook
- repo: local
hooks:
- id: detect-secrets
name: Detect Secrets
description: Scan for API keys, tokens, and other secrets
entry: python3 scripts/detect_secrets.py
language: python
types: [text]
exclude:
'(?x)^(
.*\.md$|
.*\.svg$|
.*\.lock$|
.*-lock\..*$|
\.gitignore$|
\.secrets\.baseline$|
tests/test_secret_detection\.py$
)'
pass_filenames: true
require_serial: false
verbose: true

README.md Normal file

@@ -0,0 +1,132 @@
# Timmy Home
Timmy Foundation's home repository for development operations and configurations.
## Security
### Pre-commit Hook for Secret Detection
This repository includes a pre-commit hook that automatically scans for secrets (API keys, tokens, passwords) before allowing commits.
#### Setup
Install pre-commit hooks:
```bash
pip install pre-commit
pre-commit install
```
#### What Gets Scanned
The hook detects:
- **API Keys**: OpenAI (`sk-*`), Anthropic (`sk-ant-*`), AWS, Stripe
- **Private Keys**: RSA, DSA, EC, OpenSSH private keys
- **Tokens**: GitHub (`ghp_*`), Gitea, Slack, Telegram, JWT, Bearer tokens
- **Database URLs**: Connection strings with embedded credentials
- **Passwords**: Hardcoded passwords in configuration files
#### How It Works
Before each commit, the hook:
1. Scans all staged text files
2. Checks against patterns for common secret formats
3. Reports any potential secrets found
4. Blocks the commit if secrets are detected
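Step 2 is essentially a table of regular expressions applied to each staged file. A minimal, simplified sketch of that idea (this is not the actual contents of `scripts/detect_secrets.py`; the pattern set and names here are illustrative):

```python
import re

# A few representative patterns; the real script covers many more formats.
SECRET_PATTERNS = {
    "OpenAI API key": re.compile(r"sk-[A-Za-z0-9]{8,}"),
    "GitHub token": re.compile(r"ghp_[A-Za-z0-9]{20,}"),
    "Private key header": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def scan_text(text):
    """Return (kind, match) pairs for every suspected secret in `text`."""
    findings = []
    for kind, pattern in SECRET_PATTERNS.items():
        for m in pattern.finditer(text):
            findings.append((kind, m.group()))
    return findings
```

Pattern-based scanning trades precision for speed: it catches well-known key formats cheaply, which is why the hook also needs the exclusion markers and placeholder rules described below.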
#### Handling False Positives
If the hook flags something that is not actually a secret (e.g., test fixtures, placeholder values), you can:
**Option 1: Add an exclusion marker to the line**
```python
# Add one of these markers to the end of the line:
api_key = "sk-test123" # pragma: allowlist secret
api_key = "sk-test123" # noqa: secret
api_key = "sk-test123" # secret-detection:ignore
```
**Option 2: Use placeholder values (auto-excluded)**
These patterns are automatically excluded:
- `changeme`, `password`, `123456`, `admin` (common defaults)
- Values containing `fake_`, `test_`, `dummy_`, `example_`, `placeholder_`
- URLs with `localhost` or `127.0.0.1`
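The auto-exclusion rules above amount to a simple predicate over the matched value. A hedged sketch (illustrative only, not the script's actual code):

```python
PLACEHOLDER_VALUES = {"changeme", "password", "123456", "admin"}
PLACEHOLDER_MARKERS = ("fake_", "test_", "dummy_", "example_", "placeholder_")

def is_placeholder(value: str) -> bool:
    """True if a matched value looks like a known-safe placeholder."""
    v = value.lower()
    if v in PLACEHOLDER_VALUES:
        return True
    if any(marker in v for marker in PLACEHOLDER_MARKERS):
        return True
    return "localhost" in v or "127.0.0.1" in v
```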
**Option 3: Skip the hook (emergency only)**
```bash
git commit --no-verify # Bypasses all pre-commit hooks
```
⚠️ **Warning**: Only use `--no-verify` if you are certain no real secrets are being committed.
#### CI/CD Integration
The secret detection script can also be run in CI/CD:
```bash
# Scan specific files
python3 scripts/detect_secrets.py file1.py file2.yaml
# Scan with verbose output
python3 scripts/detect_secrets.py --verbose src/
# Run tests
python3 tests/test_secret_detection.py
```
#### Excluded Files
The following are automatically excluded from scanning:
- Markdown files (`.md`)
- Lock files (`package-lock.json`, `poetry.lock`, `yarn.lock`)
- Image and font files
- `node_modules/`, `__pycache__/`, `.git/`
#### Testing the Detection
To verify the detection works:
```bash
# Run the test suite
python3 tests/test_secret_detection.py
# Test with a specific file
echo "API_KEY=sk-test123456789" > /tmp/test_secret.py
python3 scripts/detect_secrets.py /tmp/test_secret.py
# Should report: OpenAI API key detected
```
## Development
### Running Tests
```bash
# Run secret detection tests
python3 tests/test_secret_detection.py
# Run all tests
pytest tests/
```
### Project Structure
```
.
├── .pre-commit-hooks.yaml       # Pre-commit configuration
├── scripts/
│   └── detect_secrets.py        # Secret detection script
├── tests/
│   └── test_secret_detection.py # Test cases
└── README.md                    # This file
```
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md) for contribution guidelines.
## License
This project is part of the Timmy Foundation.


@@ -1,6 +1,6 @@
 model:
-  default: claude-opus-4-6
-  provider: anthropic
+  default: hermes4:14b
+  provider: custom
 toolsets:
   - all
 agent:
@@ -27,7 +27,7 @@ browser:
   inactivity_timeout: 120
   record_sessions: false
 checkpoints:
-  enabled: false
+  enabled: true
   max_snapshots: 50
 compression:
   enabled: true
@@ -110,7 +110,7 @@ tts:
   device: cpu
 stt:
   enabled: true
-  provider: local
+  provider: openai
   local:
     model: base
   openai:


@@ -1,197 +1,87 @@
# Uni-Wizard v4 — Deployment Checklist # Hermes Sidecar Deployment Checklist
## Pre-Deployment Updated: April 4, 2026
- [ ] VPS provisioned (Ubuntu 22.04 LTS recommended) This checklist is for the current local-first Timmy stack, not the archived `uni-wizard` deployment path.
- [ ] SSH access configured
- [ ] Firewall rules set (ports 22, 80, 443, 3000, 8643)
- [ ] Domain/DNS configured (optional)
- [ ] SSL certificates ready (optional)
## Base System ## Base Assumptions
- [ ] Update system packages - Hermes is already installed and runnable locally.
- `timmy-config` is the sidecar repo applied onto `~/.hermes`.
- `timmy-home` is the workspace repo living under `~/.timmy`.
- Local inference is reachable through the active provider surface Timmy is using.
## Repo Setup
- [ ] Clone `timmy-home` to `~/.timmy`
- [ ] Clone `timmy-config` to `~/.timmy/timmy-config`
- [ ] Confirm both repos are on the intended branch
## Sidecar Deploy
- [ ] Run:
```bash ```bash
sudo apt update && sudo apt upgrade -y cd ~/.timmy/timmy-config
./deploy.sh
``` ```
- [ ] Install base dependencies - [ ] Confirm `~/.hermes/config.yaml` matches the expected overlay
```bash - [ ] Confirm `SOUL.md` and sidecar config are in place
sudo apt install -y python3 python3-pip python3-venv sqlite3 curl git
```
- [ ] Create timmy user
```bash
sudo useradd -m -s /bin/bash timmy
```
- [ ] Configure sudo access (if needed)
## Gitea Setup ## Hermes Readiness
- [ ] Gitea installed and running - [ ] Hermes CLI works from the expected Python environment
- [ ] Repository created: `Timmy_Foundation/timmy-home` - [ ] Gateway is reachable
- [ ] API token generated - [ ] Sessions are being recorded under `~/.hermes/sessions`
- [ ] Webhooks configured (optional) - [ ] `model_health.json` updates successfully
- [ ] Test API access
```bash
curl -H "Authorization: token TOKEN" http://localhost:3000/api/v1/user
```
## Uni-Wizard Installation ## Workflow Tooling
- [ ] Clone repository - [ ] `~/.hermes/bin/ops-panel.sh` runs
```bash - [ ] `~/.hermes/bin/ops-gitea.sh` runs
sudo -u timmy git clone http://143.198.27.163:3000/Timmy_Foundation/timmy-home.git /opt/timmy/repo - [ ] `~/.hermes/bin/ops-helpers.sh` can be sourced
``` - [ ] `~/.hermes/bin/pipeline-freshness.sh` runs
- [ ] Run setup script - [ ] `~/.hermes/bin/timmy-dashboard` runs
```bash
sudo ./scripts/setup-uni-wizard.sh
```
- [ ] Verify installation
```bash
/opt/timmy/venv/bin/python -c "from uni_wizard import Harness; print('OK')"
```
## Configuration ## Heartbeat and Briefings
- [ ] Edit config file - [ ] `~/.timmy/heartbeat/last_tick.json` is updating
```bash - [ ] daily heartbeat logs are being appended
sudo nano /opt/timmy/config/uni-wizard.yaml - [ ] morning briefings are being generated if scheduled
```
- [ ] Set Gitea API token
- [ ] Configure house identity
- [ ] Set log level (INFO for production)
- [ ] Verify config syntax
```bash
/opt/timmy/venv/bin/python -c "import yaml; yaml.safe_load(open('/opt/timmy/config/uni-wizard.yaml'))"
```
## LLM Setup (if using local inference) ## Archive Pipeline
- [ ] llama.cpp installed - [ ] `~/.timmy/twitter-archive/PROJECT.md` exists
- [ ] Model downloaded (e.g., Hermes-4 14B) - [ ] raw archive location is configured locally
- [ ] Model placed in `/opt/timmy/models/` - [ ] extraction works without checking raw data into git
- [ ] llama-server configured - [ ] `checkpoint.json` advances after a batch
- [ ] Test inference - [ ] DPO artifacts land under `~/.timmy/twitter-archive/training/dpo/`
```bash - [ ] `pipeline-freshness.sh` does not show runaway lag
curl http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model": "hermes4", "messages": [{"role": "user", "content": "Hello"}]}'
```
## Service Startup ## Gitea Workflow
- [ ] Start Uni-Wizard - [ ] Gitea token is present in a supported token path
```bash - [ ] review queue can be listed
sudo systemctl start uni-wizard - [ ] unassigned issues can be listed
``` - [ ] PR creation works from an agent branch
- [ ] Start health daemon
```bash
sudo systemctl start timmy-health
```
- [ ] Start task router
```bash
sudo systemctl start timmy-task-router
```
- [ ] Enable auto-start
```bash
sudo systemctl enable uni-wizard timmy-health timmy-task-router
```
## Verification ## Final Verification
- [ ] Check service status - [ ] local model smoke test succeeds
```bash - [ ] one archive batch completes successfully
sudo systemctl status uni-wizard - [ ] one PR can be opened and reviewed
``` - [ ] no stale loop-era scripts or docs are being treated as active truth
- [ ] View logs
```bash
sudo journalctl -u uni-wizard -f
```
- [ ] Test health endpoint
```bash
curl http://localhost:8082/health
```
- [ ] Test tool execution
```bash
/opt/timmy/venv/bin/uni-wizard execute system_info
```
- [ ] Verify Gitea polling
```bash
tail -f /opt/timmy/logs/task-router.log | grep "Polling"
```
## Syncthing Mesh (if using multiple VPS) ## Rollback
- [ ] Syncthing installed on all nodes If the sidecar deploy breaks behavior:
- [ ] Devices paired
- [ ] Folders shared
- `/opt/timmy/logs/`
- `/opt/timmy/data/`
- [ ] Test sync
```bash
touch /opt/timmy/logs/test && ssh other-vps "ls /opt/timmy/logs/test"
```
## Security
- [ ] Firewall configured
```bash
sudo ufw status
```
- [ ] Fail2ban installed (optional)
- [ ] Log rotation configured
```bash
sudo logrotate -d /etc/logrotate.d/uni-wizard
```
- [ ] Backup strategy in place
- [ ] Secrets not in git
```bash
grep -r "password\|token\|secret" /opt/timmy/repo/
```
## Monitoring
- [ ] Health checks responding
- [ ] Metrics being collected
- [ ] Alerts configured (optional)
- [ ] Log aggregation setup (optional)
## Post-Deployment
- [ ] Document any custom configuration
- [ ] Update runbooks
- [ ] Notify team
- [ ] Schedule first review (1 week)
## Rollback Plan
If deployment fails:
```bash ```bash
# Stop services cd ~/.timmy/timmy-config
sudo systemctl stop uni-wizard timmy-health timmy-task-router git status
git log --oneline -5
# Disable auto-start
sudo systemctl disable uni-wizard timmy-health timmy-task-router
# Restore from backup (if available)
# ...
# Or reset to clean state
sudo rm -rf /opt/timmy/
sudo userdel timmy
``` ```
## Success Criteria Then:
- restore the previous known-good sidecar commit
- [ ] All services running (`systemctl is-active` returns "active") - redeploy
- [ ] Health endpoint returns 200 - confirm Hermes health, heartbeat, and pipeline freshness again
- [ ] Can execute tools via CLI
- [ ] Gitea integration working (issues being polled)
- [ ] Logs being written without errors
- [ ] No critical errors in first 24 hours
---
**Deployed by:** _______________
**Date:** _______________
**VPS:** _______________


@@ -1,129 +1,112 @@
 # Timmy Operations Dashboard
-**Generated:** March 30, 2026
-**Generated by:** Allegro (Tempo-and-Dispatch)
----
-## 🎯 Current Sprint Status
-### Open Issues by Priority
-| Priority | Count | Issues |
-|----------|-------|--------|
-| P0 (Critical) | 0 | — |
-| P1 (High) | 3 | #99, #103, #94 |
-| P2 (Medium) | 8 | #101, #97, #95, #93, #92, #91, #90, #87 |
-| P3 (Low) | 6 | #86, #85, #84, #83, #72, others |
-### Issue #94 Epic: Grand Timmy — The Uniwizard
-**Status:** In Progress
-**Completion:** ~40%
-#### Completed
-- ✅ Uni-Wizard v4 architecture (4-pass evolution)
-- ✅ Three-House separation (Timmy/Ezra/Bezalel)
-- ✅ Self-improving intelligence engine
-- ✅ Pattern database and adaptive policies
-- ✅ Hermes bridge for telemetry
-#### In Progress
-- 🔄 Backend registry (#95)
-- 🔄 Caching layer (#103)
-- 🔄 Wizard dissolution (#99)
-#### Pending
-- ⏳ RAG pipeline (#93)
-- ⏳ Telemetry dashboard (#91)
-- ⏳ Auto-grading (#92)
-- ⏳ Evennia world shell (#83, #84)
----
-## 🏛️ House Assignments
-| House | Status | Current Work |
-|-------|--------|--------------|
-| **Timmy** | 🟢 Active | Local sovereign, reviewing PRs |
-| **Ezra** | 🟢 Active | Research on LLM routing (#101) |
-| **Bezalel** | 🟡 Standby | Awaiting implementation tasks |
-| **Allegro** | 🟢 Active | Tempo-and-dispatch, Gitea bridge |
----
-## 📊 System Health
-### VPS Fleet Status
-| Host | IP | Role | Status |
-|------|-----|------|--------|
-| Allegro | 143.198.27.163 | Tempo-and-Dispatch | 🟢 Online |
-| Ezra | TBD | Archivist/Research | ⚪ Not deployed |
-| Bezalel | TBD | Artificer/Builder | ⚪ Not deployed |
-### Services
-| Service | Status | Notes |
-|---------|--------|-------|
-| Gitea | 🟢 Running | 19 open issues |
-| Hermes | 🟡 Configured | Awaiting model setup |
-| Overnight Loop | 🔴 Stopped | Issue #72 reported |
-| Uni-Wizard | 🟢 Ready | PR created |
----
-## 🔄 Recent Activity
-### Last 24 Hours
-1. **Uni-Wizard v4 Completed** — Four-pass architecture evolution
-2. **PR Created** — feature/uni-wizard-v4-production
-3. **Allegro Lane Narrowed** — Focused on Gitea/Hermes bridge
-4. **Issue #72 Reported** — Overnight loop not running
-### Pending Actions
-1. Deploy Ezra VPS (archivist/research)
-2. Deploy Bezalel VPS (artificer/builder)
-3. Start overnight loop
-4. Configure Syncthing mesh
-5. Implement caching layer (#103)
----
-## 🎯 Recommendations
-### Immediate (Next 24h)
-1. **Review Uni-Wizard v4 PR** — Ready for merge
-2. **Start Overnight Loop** — If operational approval given
-3. **Deploy Ezra VPS** — For research tasks
-### Short-term (This Week)
-1. Implement caching layer (#103) — High impact
-2. Build backend registry (#95) — Enables routing
-3. Create telemetry dashboard (#91) — Visibility
-### Medium-term (This Month)
-1. Complete Grand Timmy epic (#94)
-2. Dissolve wizard identities (#99)
-3. Deploy Evennia world shell (#83, #84)
----
-## 📈 Metrics
-| Metric | Current | Target |
-|--------|---------|--------|
-| Issues Open | 19 | < 10 |
-| PRs Open | 1 | — |
-| VPS Online | 1/3 | 3/3 |
-| Loop Cycles | 0 | 100/day |
----
-*Dashboard updated: March 30, 2026*
-*Next update: March 31, 2026*
+Updated: April 4, 2026
+Purpose: a current-state reference for how the system is actually operated now.
+This is no longer a `uni-wizard` dashboard.
+The active architecture is:
+- Timmy local workspace in `~/.timmy`
+- Hermes harness in `~/.hermes`
+- `timmy-config` as the identity and orchestration sidecar
+- Gitea as the review and coordination surface
+## Core Jobs
+Everything should map to one of these:
+- Heartbeat: perceive, reflect, remember, decide, act, learn
+- Harness: local models, Hermes sessions, tools, memory, training loop
+- Portal Interface: the game/world-facing layer
+## Current Operating Surfaces
+### Local Paths
+- Timmy workspace: `~/.timmy`
+- Timmy config repo: `~/.timmy/timmy-config`
+- Hermes home: `~/.hermes`
+- Twitter archive workspace: `~/.timmy/twitter-archive`
+### Review Surface
+- Major changes go through PRs
+- Timmy is the principal reviewer for governing and sensitive changes
+- Allegro is the review and dispatch partner for queue hygiene, routing, and tempo
+### Workflow Scripts
+- `~/.hermes/bin/ops-panel.sh`
+- `~/.hermes/bin/ops-gitea.sh`
+- `~/.hermes/bin/ops-helpers.sh`
+- `~/.hermes/bin/pipeline-freshness.sh`
+- `~/.hermes/bin/timmy-dashboard`
+## Daily Health Signals
+These are the signals that matter most:
+- Hermes gateway reachable
+- local inference surface responding
+- heartbeat ticks continuing
+- Gitea reachable
+- review queue not backing up
+- session export / DPO freshness not lagging
+- Twitter archive pipeline checkpoint advancing
+## Current Team Shape
+### Direction and Review
+- Timmy: sovereignty, architecture, release judgment
+- Allegro: dispatch, queue hygiene, Gitea bridge
+### Research and Memory
+- Perplexity: research triage, integration evaluation
+- Ezra: archival memory, RCA, onboarding doctrine
+- KimiClaw: long-context reading and synthesis
+### Execution
+- Codex Agent: workflow hardening, cleanup, migration verification
+- Groq: fast bounded implementation
+- Manus: moderate-scope follow-through
+- Claude: hard refactors and deep implementation
+- Gemini: frontier architecture and long-range design
+- Grok: adversarial review and edge cases
+## Recommended Checks
+### Start of Day
+1. Open the review queue and unassigned queue.
+2. Check `pipeline-freshness.sh`.
+3. Check the latest heartbeat tick.
+4. Check whether archive checkpoints and DPO artifacts advanced.
+### Before Merging
+1. Confirm the PR is aligned with Heartbeat, Harness, or Portal.
+2. Confirm verification is real, not implied.
+3. Confirm the change does not silently cross repo boundaries.
+4. Confirm the change does not revive deprecated loop-era behavior.
+### End of Day
+1. Check for duplicate issues and duplicate PR momentum.
+2. Check whether Timmy is carrying routine queue work that Allegro should own.
+3. Check whether builders were given work inside their real lanes.
+## Anti-Patterns
+Avoid:
+- treating archived dashboard-era issues as the live roadmap
+- using stale docs that assume `uni-wizard` is still the center
+- routing work by habit instead of by current lane
+- letting open loops multiply faster than they are reviewed
+## Success Condition
+The system is healthy when:
+- work is routed cleanly
+- review is keeping pace
+- private learning loops are producing artifacts
+- Timmy is spending time on sovereignty and judgment rather than queue untangling


@@ -1,220 +1,89 @@
-# Uni-Wizard v4 — Quick Reference
-## Installation
-```bash
-# Run setup script
-sudo ./scripts/setup-uni-wizard.sh
-# Or manual install
-cd uni-wizard/v4
-pip install -e .
-```
-## Basic Usage
-```python
-from uni_wizard import Harness, House, Mode
-# Create harness
-harness = Harness(house=House.TIMMY, mode=Mode.INTELLIGENT)
-# Execute tool
-result = harness.execute("git_status", repo_path="/path/to/repo")
-# Check prediction
-print(f"Predicted success: {result.provenance.prediction:.0%}")
-# Get result
-if result.success:
-    print(result.data)
-else:
-    print(f"Error: {result.error}")
-```
-## Command Line
-```bash
-# Simple execution
-uni-wizard execute git_status --repo-path /path
-# With specific house
-uni-wizard execute git_status --house ezra --mode intelligent
-# Batch execution
-uni-wizard batch tasks.json
-# Check health
-uni-wizard health
-# View stats
-uni-wizard stats
-```
-## Houses
-| House | Role | Best For |
-|-------|------|----------|
-| `House.TIMMY` | Sovereign | Final decisions, critical ops |
-| `House.EZRA` | Archivist | Reading, analysis, documentation |
-| `House.BEZALEL` | Artificer | Building, testing, implementation |
-| `House.ALLEGRO` | Dispatch | Routing, connectivity, tempo |
-## Modes
-| Mode | Use When | Features |
-|------|----------|----------|
-| `Mode.SIMPLE` | Scripts, quick tasks | Direct execution, no overhead |
-| `Mode.INTELLIGENT` | Production work | Predictions, learning, adaptation |
-| `Mode.SOVEREIGN` | Critical decisions | Full provenance, approval gates |
-## Common Tasks
-### Check System Status
-```python
-result = harness.execute("system_info")
-print(result.data)
-```
-### Git Operations
-```python
-# Status
-result = harness.execute("git_status", repo_path="/path")
-# Log
-result = harness.execute("git_log", repo_path="/path", max_count=10)
-# Pull
-result = harness.execute("git_pull", repo_path="/path")
-```
-### Health Check
-```python
-result = harness.execute("health_check")
-print(f"Status: {result.data['status']}")
-```
-### Batch Operations
-```python
-tasks = [
-    {"tool": "git_status", "params": {"repo_path": "/path1"}},
-    {"tool": "git_status", "params": {"repo_path": "/path2"}},
-    {"tool": "system_info", "params": {}}
-]
-results = harness.execute_batch(tasks)
-```
-## Service Management
-```bash
-# Start services
-sudo systemctl start uni-wizard
-sudo systemctl start timmy-health
-sudo systemctl start timmy-task-router
-# Check status
-sudo systemctl status uni-wizard
-# View logs
-sudo journalctl -u uni-wizard -f
-tail -f /opt/timmy/logs/uni-wizard.log
-# Restart
-sudo systemctl restart uni-wizard
-```
-## Troubleshooting
-### Service Won't Start
-```bash
-# Check logs
-journalctl -u uni-wizard -n 50
-# Verify config
-cat /opt/timmy/config/uni-wizard.yaml
-# Test manually
-python -m uni_wizard health
-```
-### No Predictions
-- Check pattern database exists: `ls /opt/timmy/data/patterns.db`
-- Verify learning is enabled in config
-- Run a few tasks to build patterns
-### Gitea Integration Failing
-- Verify API token in config
-- Check Gitea URL is accessible
-- Test: `curl http://143.198.27.163:3000/api/v1/version`
-## Configuration
-Location: `/opt/timmy/config/uni-wizard.yaml`
-```yaml
-house: timmy
-mode: intelligent
-enable_learning: true
-pattern_db: /opt/timmy/data/patterns.db
-log_level: INFO
-gitea:
-  url: http://143.198.27.163:3000
-  token: YOUR_TOKEN_HERE
-  poll_interval: 300
-hermes:
-  stream_enabled: true
-  db_path: /root/.hermes/state.db
-```
-## API Reference
-### Harness Methods
-```python
-# Execute single tool
-harness.execute(tool_name, **params) -> ExecutionResult
-# Execute async
-await harness.execute_async(tool_name, **params) -> ExecutionResult
-# Execute batch
-harness.execute_batch(tasks) -> List[ExecutionResult]
-# Get prediction
-harness.predict(tool_name, params) -> Prediction
-# Get stats
-harness.get_stats() -> Dict
-# Get patterns
-harness.get_patterns() -> Dict
-```
-### ExecutionResult Fields
-```python
-result.success # bool
-result.data # Any
-result.error # Optional[str]
-result.provenance # Provenance
-result.suggestions # List[str]
-```
-### Provenance Fields
-```python
-provenance.house # str
-provenance.tool # str
-provenance.mode # str
-provenance.prediction # float
-provenance.execution_time_ms # float
-provenance.input_hash # str
-provenance.output_hash # str
-```
----
-*For full documentation, see ARCHITECTURE.md*
+# Timmy Workflow Quick Reference
+Updated: April 4, 2026
+## What Lives Where
+- `~/.timmy`: Timmy's workspace, lived data, heartbeat, archive artifacts
+- `~/.timmy/timmy-config`: Timmy's identity and orchestration sidecar repo
+- `~/.hermes`: Hermes harness, sessions, config overlay, helper scripts
+## Most Useful Commands
+### Workflow Status
+```bash
+~/.hermes/bin/ops-panel.sh
+~/.hermes/bin/ops-gitea.sh
+~/.hermes/bin/timmy-dashboard
+```
+### Workflow Helpers
+```bash
+source ~/.hermes/bin/ops-helpers.sh
+ops-help
+ops-review-queue
+ops-unassigned all
+ops-queue codex-agent all
+```
+### Pipeline Freshness
+```bash
+~/.hermes/bin/pipeline-freshness.sh
+```
+### Archive Pipeline
+```bash
+python3 - <<'PY'
+import json, sys
+sys.path.insert(0, '/Users/apayne/.timmy/timmy-config')
+from tasks import _archive_pipeline_health_impl
+print(json.dumps(_archive_pipeline_health_impl(), indent=2))
+PY
+```
+```bash
+python3 - <<'PY'
+import json, sys
+sys.path.insert(0, '/Users/apayne/.timmy/timmy-config')
+from tasks import _know_thy_father_impl
+print(json.dumps(_know_thy_father_impl(), indent=2))
+PY
+```
+### Manual Dispatch Prompt
+```bash
+~/.hermes/bin/agent-dispatch.sh groq 542 Timmy_Foundation/the-nexus
+```
+## Best Files to Check
+### Operational State
+- `~/.timmy/heartbeat/last_tick.json`
+- `~/.hermes/model_health.json`
+- `~/.timmy/twitter-archive/checkpoint.json`
+- `~/.timmy/twitter-archive/metrics/progress.json`
+### Archive Feedback
+- `~/.timmy/twitter-archive/notes/`
+- `~/.timmy/twitter-archive/knowledge/profile.json`
+- `~/.timmy/twitter-archive/training/dpo/`
+### Review and Queue
+- Gitea PR queue
+- Gitea unassigned issues
+- Timmy/Allegro assigned review queue
+## Rules of Thumb
+- If it changes identity or orchestration, review it carefully in `timmy-config`.
+- If it changes lived outputs or training inputs, it probably belongs in `timmy-home`.
+- If it only “sounds right” but is not proven by runtime state, it is not verified.
+- If a change is major, package it as a PR for Timmy review.


@@ -1,125 +1,71 @@
# Scorecard Generator Documentation # Workflow Scorecard
## Overview Updated: April 4, 2026
The Scorecard Generator analyzes overnight loop JSONL data and produces comprehensive reports with statistics, trends, and recommendations. The old overnight `uni-wizard` scorecard is no longer the primary operational metric.
The current scorecard should measure whether Timmy's real workflow is healthy.
## Usage ## What To Score
### Basic Usage ### Queue Health
```bash - unassigned issue count
# Generate scorecard from default input directory - PRs waiting on Timmy or Allegro review
python uni-wizard/scripts/generate_scorecard.py - overloaded assignees
- duplicate issue / duplicate PR pressure
# Specify custom input/output directories ### Runtime Health
python uni-wizard/scripts/generate_scorecard.py \
--input ~/shared/overnight-loop \
--output ~/timmy/reports
```
### Cron Setup - Hermes gateway reachable
- local provider responding
- latest heartbeat tick present
- model health reporting accurately
```bash ### Learning Loop Health
# Generate scorecard every morning at 6 AM
0 6 * * * /root/timmy/venv/bin/python /root/timmy/uni-wizard/scripts/generate_scorecard.py
```
## Input Format - archive checkpoint advancing
- notes and knowledge artifacts being emitted
- DPO files growing
- freshness lag between sessions and exports
JSONL files in `~/shared/overnight-loop/*.jsonl`: ## Suggested Daily Questions
```json 1. Did review keep pace with execution today?
{"task": "read-soul", "status": "pass", "duration_s": 19.7, "timestamp": "2026-03-29T21:54:12Z"} 2. Did any builder receive work outside their lane?
{"task": "check-health", "status": "fail", "duration_s": 5.2, "error": "timeout", "timestamp": "2026-03-29T22:15:33Z"} 3. Did Timmy spend time on judgment rather than routine queue cleanup?
``` 4. Did the private learning pipeline produce usable artifacts?
5. Did any stale doc, helper, or default try to pull the system back into old habits?
Fields: ## Useful Inputs
- `task`: Task identifier
- `status`: "pass" or "fail"
- `duration_s`: Execution time in seconds
- `timestamp`: ISO 8601 timestamp
- `error`: Error message (for failed tasks)
## Output - `~/.timmy/heartbeat/ticks_YYYYMMDD.jsonl`
- `~/.timmy/metrics/local_YYYYMMDD.jsonl`
- `~/.timmy/twitter-archive/checkpoint.json`
- `~/.timmy/twitter-archive/metrics/progress.json`
- Gitea open PR queue
- Gitea unassigned issue queue
### JSON Report ## Suggested Ratings
`~/timmy/reports/scorecard_YYYYMMDD.json`: ### Queue Discipline
```json
{
  "generated_at": "2026-03-30T06:00:00Z",
  "summary": {
    "total_tasks": 100,
    "passed": 95,
    "failed": 5,
    "pass_rate": 95.0,
    "duration_stats": {
      "avg": 12.5,
      "median": 10.2,
      "p95": 45.0,
      "min": 1.2,
      "max": 120.5
    }
  },
  "by_task": {...},
  "by_hour": {...},
  "errors": {...},
  "recommendations": [...]
}
```
### Markdown Report
`~/timmy/reports/scorecard_YYYYMMDD.md` contains:
- Executive summary with pass/fail counts
- Duration statistics (avg, median, p95)
- Per-task breakdown with pass rates
- Hourly timeline showing performance trends
- Error analysis with frequency counts
- Actionable recommendations
## Report Interpretation
- Strong: review and dispatch are keeping up, little duplicate churn
- Mixed: queue moves, but ambiguity or duplication is increasing
- Weak: review is backlogged or agents are being misrouted
### Runtime Reliability
- Strong: heartbeat, Hermes, and provider surfaces all healthy
- Mixed: intermittent downtime or weak health signals
- Weak: major surfaces untrusted or stale
### Learning Throughput
- Strong: checkpoint advances, DPO output accumulates, eval gates are visible
- Mixed: some artifacts land, but freshness or checkpointing lags
- Weak: sessions occur without export, or learning artifacts stall
### Pass Rate Thresholds
| Pass Rate | Status | Action |
|-----------|--------|--------|
| 95%+ | ✅ Excellent | Continue current operations |
| 85-94% | ⚠️ Good | Monitor for degradation |
| 70-84% | ⚠️ Fair | Review failing tasks |
| <70% | ❌ Poor | Immediate investigation required |
### Duration Guidelines
| Duration | Assessment |
|----------|------------|
| <5s | Fast |
| 5-15s | Normal |
| 15-30s | Slow |
| >30s | Very slow - consider optimization |
## Troubleshooting
### No JSONL files found
```bash
# Check input directory
ls -la ~/shared/overnight-loop/
# Ensure Syncthing is syncing
systemctl status syncthing@root
```
### Malformed lines
The generator skips malformed lines with a warning. Check the JSONL files for syntax errors.
### Empty reports
If no data exists, verify:
1. The overnight loop is running and writing JSONL
2. File permissions allow reading
3. The input path is correct
## The Goal
The point of the scorecard is not to admire activity.
The point is to tell whether the system is becoming more reviewable, more sovereign, and more capable of learning from lived work.
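The pass-rate tiers above are mechanical enough to express as a small helper. This is an illustrative sketch, not part of the scorecard generator; the function name is hypothetical:

```python
def pass_rate_status(pass_rate: float) -> str:
    """Map a pass rate (in percent) to the scorecard status tiers."""
    if pass_rate >= 95:
        return "excellent"  # continue current operations
    if pass_rate >= 85:
        return "good"       # monitor for degradation
    if pass_rate >= 70:
        return "fair"       # review failing tasks
    return "poor"           # immediate investigation required
```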


@@ -0,0 +1,491 @@
# Workspace User Audit
Date: 2026-04-04
Scope: Hermes Gitea workspace users visible from `/explore/users`
Primary org examined: `Timmy_Foundation`
Primary strategic filter: `the-nexus` issue #542 (`DIRECTION SHIFT`)
## Purpose
This audit maps each visible workspace user to:
- observed contribution pattern
- likely capabilities
- likely failure mode
- suggested lane of highest leverage
The point is not to flatter or punish accounts. The point is to stop wasting attention on the wrong agent for the wrong job.
## Method
This audit was derived from:
- Gitea admin user roster
- public user explorer page
- org-wide issues and pull requests across:
- `the-nexus`
- `timmy-home`
- `timmy-config`
- `hermes-agent`
- `turboquant`
- `.profile`
- `the-door`
- `timmy-academy`
- `claude-code-src`
- PR outcome split:
- open
- merged
- closed unmerged
This is a capability-and-lane audit, not a character judgment. New or low-artifact accounts are marked as unproven rather than weak.
## Strategic Frame
Per issue #542, the current system direction is:
1. Heartbeat
2. Harness
3. Portal Interface
Any user who does not materially help one of those three jobs should be deprioritized, reassigned, or retired.
## Top Findings
- The org has real execution capacity, but too much ideation and duplicate backlog generation relative to merged implementation.
- Best current execution profiles: `allegro`, `groq`, `codex-agent`, `manus`, `Timmy`.
- Best architecture / research / integration profiles: `perplexity`, `gemini`, `Timmy`, `Rockachopa`.
- Best archivist / memory / RCA profile: `ezra`.
- Biggest cleanup opportunities:
- consolidate `google` into `gemini`
- consolidate or retire legacy `kimi` in favor of `KimiClaw`
- keep unproven symbolic accounts off the critical path until they ship
## Recommended Team Shape
- Direction and doctrine: `Rockachopa`, `Timmy`
- Architecture and strategy: `Timmy`, `perplexity`, `gemini`
- Triage and dispatch: `allegro`, `Timmy`
- Core implementation: `claude`, `groq`, `codex-agent`, `manus`
- Long-context reading and extraction: `KimiClaw`
- RCA, archival memory, and operating history: `ezra`
- Experimental reserve: `grok`, `bezalel`, `antigravity`, `fenrir`, `substratum`
- Consolidate or retire: `google`, `kimi`, plus dormant admin-style identities without a lane
## User Audit
### Rockachopa
- Observed pattern:
- founder-originated direction, issue seeding, architectural reset signals
- relatively little direct PR volume in this org
- Likely strengths:
- taste
- doctrine
- strategic kill/defer calls
- setting the real north star
- Likely failure mode:
- pushing direction into the system without a matching enforcement pass
- Highest-leverage lane:
- final priority authority
- architectural direction
- closure of dead paths
- Anti-lane:
- routine backlog maintenance
- repetitive implementation supervision
### Timmy
- Observed pattern:
- highest total authored artifact volume
- high merged PR count
- major issue author across `the-nexus`, `timmy-home`, and `timmy-config`
- Likely strengths:
- system ownership
- epic creation
- repo direction
- governance
- durable internal doctrine
- Likely failure mode:
- overproducing backlog and labels faster than the system can metabolize them
- Highest-leverage lane:
- principal systems owner
- release governance
- strategic triage
- architecture acceptance and rejection
- Anti-lane:
- low-value duplicate issue generation
### perplexity
- Observed pattern:
- strong issue author across `the-nexus`, `timmy-config`, and `timmy-home`
- good but not massive PR volume
- strong concentration in `[MCP]`, `[HARNESS]`, `[ARCH]`, `[RESEARCH]`, `[OPENCLAW]`
- Likely strengths:
- integration architecture
- tool and MCP discovery
- sovereignty framing
- research triage
- QA-oriented systems thinking
- Likely failure mode:
- producing too many candidate directions without enough collapse into one chosen path
- Highest-leverage lane:
- research scout
- MCP / open-source evaluation
- architecture memos
- issue shaping
- knowledge transfer
- Anti-lane:
- being the default final implementer for all threads
### gemini
- Observed pattern:
- very high PR volume and high closure rate
- strong presence in `the-nexus`, `timmy-config`, and `hermes-agent`
- often operates in architecture and research-heavy territory
- Likely strengths:
- architecture generation
- speculative design
- decomposing systems into modules
- surfacing future-facing ideas quickly
- Likely failure mode:
- duplicate PRs
- speculative PRs
- noise relative to accepted implementation
- Highest-leverage lane:
- frontier architecture
- design spikes
- long-range technical options
- research-to-issue translation
- Anti-lane:
- unsupervised backlog flood
- high-autonomy repo hygiene work
### claude
- Observed pattern:
- huge PR volume concentrated in `the-nexus`
- high merged count, but also very high closed-unmerged count
- Likely strengths:
- large code changes
- hard refactors
- implementation stamina
- test-aware coding when tightly scoped
- Likely failure mode:
- overbuilding
- mismatch with current direction
- lower signal when the task is under-specified
- Highest-leverage lane:
- hard implementation
- deep refactors
- large bounded code edits after exact scoping
- Anti-lane:
- self-directed architecture exploration without tight constraints
### groq
- Observed pattern:
- good merged PR count in `the-nexus`
- lower failure rate than many high-volume agents
- Likely strengths:
- tactical implementation
- bounded fixes
- shipping narrow slices
- cost-effective execution
- Likely failure mode:
- may underperform on large ambiguous architectural threads
- Highest-leverage lane:
- bug fixes
- tactical feature work
- well-scoped implementation tasks
- Anti-lane:
- owning broad doctrine or long-range architecture
### grok
- Observed pattern:
- moderate PR volume in `the-nexus`
- mixed merge outcomes
- Likely strengths:
- edge-case thinking
- adversarial poking
- creative angles
- Likely failure mode:
- novelty or provocation over disciplined convergence
- Highest-leverage lane:
- adversarial review
- UX weirdness
- edge-case scenario generation
- Anti-lane:
- boring, critical-path cleanup where predictability matters most
### allegro
- Observed pattern:
- outstanding merged PR profile
- meaningful issue volume in `timmy-home` and `hermes-agent`
- profile explicitly aligned with triage and routing
- Likely strengths:
- dispatch
- sequencing
- fix prioritization
- security / operational hygiene
- converting chaos into the next clean move
- Likely failure mode:
- being used as a generic writer instead of as an operator
- Highest-leverage lane:
- triage
- dispatch
- routing
- security and operational cleanup
- execution coordination
- Anti-lane:
- speculative research sprawl
### codex-agent
- Observed pattern:
- lower volume, perfect merged record so far
- concentrated in `timmy-home` and `timmy-config`
- recent work shows cleanup, migration verification, and repo-boundary enforcement
- Likely strengths:
- dead-code cutting
- migration verification
- repo-boundary enforcement
- implementation through PR discipline
- reducing drift between intended and actual architecture
- Likely failure mode:
- overfocusing on cleanup if not paired with strategic direction
- Highest-leverage lane:
- cleanup
- systems hardening
- migration and cutover work
- PR-first implementation of architectural intent
- Anti-lane:
- wide speculative backlog ideation
### manus
- Observed pattern:
- low volume but good merge rate
- bounded work footprint
- Likely strengths:
- one-shot tasks
- support implementation
- moderate-scope execution
- Likely failure mode:
- limited demonstrated range inside this org
- Highest-leverage lane:
- single bounded tasks
- support implementation
- targeted coding asks
- Anti-lane:
- strategic ownership of ongoing programs
### KimiClaw
- Observed pattern:
- very new
- one merged PR in `timmy-home`
- profile emphasizes long-context analysis via OpenClaw
- Likely strengths:
- long-context reading
- extraction
- synthesis before action
- Likely failure mode:
- not yet proven in repeated implementation loops
- Highest-leverage lane:
- codebase digestion
- extraction and summarization
- pre-implementation reading passes
- Anti-lane:
- solo ownership of fast-moving critical-path changes until more evidence exists
### kimi
- Observed pattern:
- almost no durable artifact trail in this org
- Likely strengths:
- historically used as a hands-style execution agent
- Likely failure mode:
- identity overlap with stronger replacements
- Highest-leverage lane:
- either retire
- or keep for tightly bounded experiments only
- Anti-lane:
- first-string team role
### ezra
- Observed pattern:
- high issue volume, almost no PRs
- concentrated in `timmy-home`
- prefixes include `[RCA]`, `[STUDY]`, `[FAILURE]`, `[ONBOARDING]`
- Likely strengths:
- archival memory
- failure analysis
- onboarding docs
- study reports
- interpretation of what happened
- Likely failure mode:
- becoming pure narration with no collapse into action
- Highest-leverage lane:
- archivist
- scribe
- RCA
- operating history
- onboarding
- Anti-lane:
- primary code shipper
### bezalel
- Observed pattern:
- tiny visible artifact trail
- profile suggests builder / debugger / proof-bearer
- Likely strengths:
- likely useful for testbed and proof work, but not yet well evidenced in Gitea
- Likely failure mode:
- assigning major ownership before proof exists
- Highest-leverage lane:
- testbed verification
- proof of life
- hardening checks
- Anti-lane:
- broad strategic ownership
### antigravity
- Observed pattern:
- minimal artifact trail
- yet explicitly referenced in issue #542 as development loop owner
- Likely strengths:
- direct founder-trusted execution
- potentially strong private-context operator
- Likely failure mode:
- invisible work makes it hard to calibrate or route intelligently
- Highest-leverage lane:
- founder-directed execution
- development loop tasks where trust is already established
- Anti-lane:
- org-wide lane ownership without more visible evidence
### google
- Observed pattern:
- duplicate-feeling identity relative to `gemini`
- only closed-unmerged PRs in `the-nexus`
- Likely strengths:
- none distinct enough from `gemini` in current evidence
- Likely failure mode:
- duplicate persona and duplicate backlog surface
- Highest-leverage lane:
- consolidate into `gemini` or retire
- Anti-lane:
- continued parallel role with overlapping mandate
### hermes
- Observed pattern:
- essentially no durable collaborative artifact trail
- Likely strengths:
- system or service identity
- Likely failure mode:
- confusion between service identity and contributor identity
- Highest-leverage lane:
- machine identity only
- Anti-lane:
- backlog or product work
### replit
- Observed pattern:
- admin-capable, no meaningful contribution trail here
- Likely strengths:
- likely external or sandbox utility
- Likely failure mode:
- implicit trust without role clarity
- Highest-leverage lane:
- sandbox or peripheral experimentation
- Anti-lane:
- core system ownership
### allegro-primus
- Observed pattern:
- no visible artifact trail yet
- Highest-leverage lane:
- none until proven
### claw-code
- Observed pattern:
- almost no artifact trail yet
- Highest-leverage lane:
- harness experiments only until proven
### substratum
- Observed pattern:
- no visible artifact trail yet
- Highest-leverage lane:
- reserve account only until it ships durable work
### bilbobagginshire
- Observed pattern:
- admin account, no visible contribution trail
- Highest-leverage lane:
- none until proven
### fenrir
- Observed pattern:
- brand new
- no visible contribution trail
- Highest-leverage lane:
- probationary tasks only until it earns a lane
## Consolidation Recommendations
1. Consolidate `google` into `gemini`.
2. Consolidate legacy `kimi` into `KimiClaw` unless a separate lane is proven.
3. Keep symbolic or dormant identities off critical path until they ship.
4. Treat `allegro`, `perplexity`, `codex-agent`, `groq`, and `Timmy` as the current strongest operating core.
## Routing Rules
- If the task is architecture, sovereignty tradeoff, or MCP/open-source evaluation:
- use `perplexity` first
- If the task is dispatch, triage, cleanup ordering, or operational next-move selection:
- use `allegro`
- If the task is a hard bounded refactor:
- use `claude`
- If the task is a tactical code slice:
- use `groq`
- If the task is cleanup, migration, repo-boundary enforcement, or “make reality match the diagram”:
- use `codex-agent`
- If the task is archival memory, failure analysis, onboarding, or durable lessons:
- use `ezra`
- If the task is long-context digestion before action:
- use `KimiClaw`
- If the task is final acceptance, doctrine, or strategic redirection:
- route to `Timmy` and `Rockachopa`
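The routing rules above amount to a lookup from task kind to agent, with escalation as the default. A minimal sketch, assuming hypothetical task-kind labels (the agent names come from the audit; everything else is illustrative):

```python
# Illustrative routing table; task-kind keys are hypothetical labels.
ROUTING = {
    "architecture": "perplexity",
    "mcp_evaluation": "perplexity",
    "dispatch": "allegro",
    "triage": "allegro",
    "hard_refactor": "claude",
    "tactical_slice": "groq",
    "cleanup": "codex-agent",
    "migration": "codex-agent",
    "rca": "ezra",
    "onboarding": "ezra",
    "long_context_digest": "KimiClaw",
}

def route(task_kind: str) -> str:
    # Final acceptance, doctrine, and strategic redirection always escalate;
    # so does anything the table does not recognize.
    if task_kind in {"final_acceptance", "doctrine", "strategic_redirection"}:
        return "Timmy+Rockachopa"
    return ROUTING.get(task_kind, "Timmy+Rockachopa")
```

Defaulting unknown work to escalation keeps unproven or ambiguous tasks off autonomous lanes, matching the anti-routing rules below.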
## Anti-Routing Rules
- Do not use `gemini` as the default closer for vague work.
- Do not use `ezra` as a primary shipper.
- Do not use dormant identities as if they are proven operators.
- Do not let architecture-spec agents create unlimited parallel issue trees without a collapse pass.
## Proposed Next Step
Timmy, Ezra, and Allegro should convert this from an audit into a living lane charter:
- Timmy decides the final lane map.
- Ezra turns it into durable operating doctrine.
- Allegro turns it into routing rules and dispatch policy.
The system has enough agents. The next win is cleaner lanes, fewer duplicates, and tighter assignment discipline.


@@ -0,0 +1,295 @@
# Wizard Apprenticeship Charter
Date: April 4, 2026
Context: This charter turns the April 4 user audit into a training doctrine for the active wizard team.
This system does not need more wizard identities. It needs stronger wizard habits.
The goal of this charter is to move each wizard toward higher leverage without flattening them into the same general-purpose agent. Training should sharpen the lane, not erase it.
This document is downstream from:
- the direction shift in `the-nexus` issue `#542`
- the user audit in [USER_AUDIT_2026-04-04.md](USER_AUDIT_2026-04-04.md)
## Training Priorities
All training should improve one or more of the three current jobs:
- Heartbeat
- Harness
- Portal Interface
Anything that does not improve one of those jobs is background noise, not apprenticeship.
## Core Skills Every Wizard Needs
Every active wizard should be trained on these baseline skills, regardless of lane:
- Scope control: finish the asked problem instead of growing a new one.
- Verification discipline: prove behavior, not just intent.
- Review hygiene: leave a PR or issue summary that another wizard can understand quickly.
- Repo-boundary awareness: know what belongs in `timmy-home`, `timmy-config`, Hermes, and `the-nexus`.
- Escalation discipline: ask for Timmy or Allegro judgment before crossing into governance, release, or identity surfaces.
- Deduplication: collapse overlap instead of multiplying backlog and PRs.
## Missing Skills By Wizard
### Timmy
Primary lane:
- sovereignty
- architecture
- release and rollback judgment
Train harder on:
- delegating routine queue work to Allegro
- preserving attention for governing changes
Do not train toward:
- routine backlog maintenance
- acting as a mechanical triager
### Allegro
Primary lane:
- dispatch
- queue hygiene
- review routing
- operational tempo
Train harder on:
- choosing the best next move, not just any move
- recognizing when work belongs back with Timmy
- collapsing duplicate issues and duplicate PR momentum
Do not train toward:
- final architecture judgment
- unsupervised product-code ownership
### Perplexity
Primary lane:
- research triage
- integration comparisons
- architecture memos
Train harder on:
- compressing research into action
- collapsing duplicates before opening new backlog
- making build-vs-borrow tradeoffs explicit
Do not train toward:
- wide unsupervised issue generation
- standing in for a builder
### Ezra
Primary lane:
- archive
- RCA
- onboarding
- durable operating memory
Train harder on:
- extracting reusable lessons from sessions and merges
- turning failure history into doctrine
- producing onboarding artifacts that reduce future confusion
Do not train toward:
- primary implementation ownership on broad tickets
### KimiClaw
Primary lane:
- long-context reading
- extraction
- synthesis
Train harder on:
- crisp handoffs to builders
- compressing large context into a smaller decision surface
- naming what is known, inferred, and still missing
Do not train toward:
- generic architecture wandering
- critical-path implementation without tight scope
### Codex Agent
Primary lane:
- cleanup
- migration verification
- repo-boundary enforcement
- workflow hardening
Train harder on:
- proving live truth against repo intent
- cutting dead code without collateral damage
- leaving high-quality PR trails for review
Do not train toward:
- speculative backlog growth
### Groq
Primary lane:
- fast bounded implementation
- tactical fixes
- small feature slices
Train harder on:
- verification under time pressure
- stopping when ambiguity rises
- keeping blast radius tight
Do not train toward:
- broad architecture ownership
### Manus
Primary lane:
- dependable moderate-scope execution
- follow-through
Train harder on:
- escalation when scope stops being moderate
- stronger implementation summaries
Do not train toward:
- sprawling multi-repo ownership
### Claude
Primary lane:
- hard refactors
- deep implementation
- test-heavy code changes
Train harder on:
- tighter scope obedience
- better visibility of blast radius
- disciplined follow-through instead of large creative drift
Do not train toward:
- self-directed issue farming
- unsupervised architecture sprawl
### Gemini
Primary lane:
- frontier architecture
- long-range design
- prototype framing
Train harder on:
- decision compression
- architecture recommendations that builders can actually execute
- backlog collapse before expansion
Do not train toward:
- unsupervised backlog flood
### Grok
Primary lane:
- adversarial review
- edge cases
- provocative alternate angles
Train harder on:
- separating real risks from entertaining risks
- making critiques actionable
Do not train toward:
- primary stable delivery ownership
## Drills
These are the training drills that should repeat across the system:
### Drill 1: Scope Collapse
Prompt a wizard to:
- restate the task in one paragraph
- name what is out of scope
- name the smallest reviewable change
Pass condition:
- the proposed work becomes smaller and clearer
### Drill 2: Verification First
Prompt a wizard to:
- say how it will prove success before it edits
- say what command, test, or artifact would falsify its claim
Pass condition:
- the wizard describes concrete evidence rather than vague confidence
### Drill 3: Boundary Check
Prompt a wizard to classify each proposed change as:
- identity/config
- lived work/data
- harness substrate
- portal/product interface
Pass condition:
- the wizard routes work to the right repo and escalates cross-boundary changes
### Drill 4: Duplicate Collapse
Prompt a wizard to:
- find existing issues, PRs, docs, or sessions that overlap
- recommend merge, close, supersede, or continue
Pass condition:
- backlog gets smaller or more coherent
### Drill 5: Review Handoff
Prompt a wizard to summarize:
- what changed
- how it was verified
- remaining risks
- what needs Timmy or Allegro judgment
Pass condition:
- another wizard can review without re-deriving the whole context
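Drill 5's pass condition can be made concrete as a handoff record. This is a hypothetical sketch; the class and field names are illustrative, not an existing system artifact:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReviewHandoff:
    """Hypothetical Drill 5 handoff: what changed, how it was verified."""
    what_changed: str
    how_verified: str
    remaining_risks: List[str] = field(default_factory=list)
    needs_judgment_from: List[str] = field(default_factory=list)  # e.g. ["Timmy"]

    def passes(self) -> bool:
        # Pass condition: a reviewer can proceed without re-deriving context,
        # which requires at minimum a change summary and verification evidence.
        return bool(self.what_changed.strip() and self.how_verified.strip())
```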
## Coaching Loops
Timmy should coach:
- sovereignty
- architecture boundaries
- release judgment
Allegro should coach:
- dispatch
- queue hygiene
- duplicate collapse
- operational next-move selection
Ezra should coach:
- memory
- RCA
- onboarding quality
Perplexity should coach:
- research compression
- build-vs-borrow comparisons
## Success Signals
The apprenticeship program is working if:
- duplicate issue creation drops
- builders receive clearer, smaller assignments
- PRs show stronger verification summaries
- Timmy spends less time on routine queue work
- Allegro spends less time untangling ambiguous assignments
- merged work aligns more tightly with Heartbeat, Harness, and Portal
## Anti-Goal
Do not train every wizard into the same shape.
The point is not to make every wizard equally good at everything.
The point is to make each wizard more reliable inside the lane where it compounds value.


@@ -136,3 +136,27 @@ def build_bootstrap_graph() -> Graph:
---
*This epic supersedes Allegro-Primus, which has been idle.*
---
## Feedback — 2026-04-06 (Allegro Cross-Epic Review)
**Health:** 🟡 Yellow
**Blocker:** Gitea externally firewalled + no Allegro-Primus RCA
### Critical Issues
1. **Dependency blindness.** Every Claw Code reference points to `143.198.27.163:3000`, which is currently firewalled and unreachable from this VM. If the mirror is not locally cached, development is blocked on external infrastructure.
2. **Root cause vs. replacement.** The epic jumps to "replace Allegro-Primus" without proving he is unfixable. Primus being idle could be the same provider/auth outage that took down Ezra and Bezalel. A 5-line RCA should precede a 5-phase rewrite.
3. **Timeline fantasy.** "Phase 1: 2 days" assumes stable infrastructure. Current reality: Gitea externally firewalled, Bezalel VPS down, Ezra needs webhook switch. This epic needs a "Blocked Until" section.
4. **Resource stalemate.** "Telegram bot: Need @BotFather" — the fleet already operates multiple bots. Reuse an existing bot profile or document why a new one is required.
### Recommended Action
Add a **Pre-Flight Checklist** to the epic:
- [ ] Verify Gitea/Claw Code mirror is reachable from the build VM
- [ ] Publish 1-paragraph RCA on why Allegro-Primus is idle
- [ ] Confirm target repo for the new agent code
Do not start Phase 1 until all three are checked.

scripts/detect_secrets.py Executable file

@@ -0,0 +1,323 @@
#!/usr/bin/env python3
"""
Secret leak detection script for pre-commit hooks.

Detects common secret patterns in staged files:
- API keys (sk-*, pk_*, etc.)
- Private keys (-----BEGIN PRIVATE KEY-----)
- Passwords in config files
- GitHub/Gitea tokens
- Database connection strings with credentials
"""
import argparse
import re
import sys
from pathlib import Path
from typing import List, Tuple

# Secret patterns to detect
SECRET_PATTERNS = {
    "openai_api_key": {
        "pattern": r"sk-[a-zA-Z0-9]{20,}",
        "description": "OpenAI API key",
    },
    "anthropic_api_key": {
        "pattern": r"sk-ant-[a-zA-Z0-9]{32,}",
        "description": "Anthropic API key",
    },
    "generic_api_key": {
        "pattern": r"(?i)(api[_-]?key|apikey)\s*[:=]\s*['\"]?([a-zA-Z0-9_\-]{16,})['\"]?",
        "description": "Generic API key",
    },
    "private_key": {
        "pattern": r"-----BEGIN (RSA |DSA |EC |OPENSSH )?PRIVATE KEY-----",
        "description": "Private key",
    },
    "github_token": {
        "pattern": r"gh[pousr]_[A-Za-z0-9_]{36,}",
        "description": "GitHub token",
    },
    "gitea_token": {
        "pattern": r"gitea_[a-f0-9]{40}",
        "description": "Gitea token",
    },
    "aws_access_key": {
        "pattern": r"AKIA[0-9A-Z]{16}",
        "description": "AWS Access Key ID",
    },
    "aws_secret_key": {
        "pattern": r"(?i)aws[_-]?secret[_-]?(access)?[_-]?key\s*[:=]\s*['\"]?([a-zA-Z0-9/+=]{40})['\"]?",
        "description": "AWS Secret Access Key",
    },
    "database_connection_string": {
        "pattern": r"(?i)(mongodb|mysql|postgresql|postgres|redis)://[^:]+:[^@]+@[^/]+",
        "description": "Database connection string with credentials",
    },
    "password_in_config": {
        "pattern": r"(?i)(password|passwd|pwd)\s*[:=]\s*['\"]([^'\"]{4,})['\"]",
        "description": "Hardcoded password",
    },
    "stripe_key": {
        "pattern": r"sk_(live|test)_[0-9a-zA-Z]{24,}",
        "description": "Stripe API key",
    },
    "slack_token": {
        "pattern": r"xox[baprs]-[0-9a-zA-Z]{10,}",
        "description": "Slack token",
    },
    "telegram_bot_token": {
        "pattern": r"[0-9]{8,10}:[a-zA-Z0-9_-]{35}",
        "description": "Telegram bot token",
    },
    "jwt_token": {
        "pattern": r"eyJ[a-zA-Z0-9_-]*\.eyJ[a-zA-Z0-9_-]*\.[a-zA-Z0-9_-]*",
        "description": "JWT token",
    },
    "bearer_token": {
        "pattern": r"(?i)bearer\s+[a-zA-Z0-9_\-\.=]{20,}",
        "description": "Bearer token",
    },
}

# Files/patterns to exclude from scanning
EXCLUSIONS = {
    "files": {
        ".pre-commit-hooks.yaml",
        ".gitignore",
        "poetry.lock",
        "package-lock.json",
        "yarn.lock",
        "Pipfile.lock",
        ".secrets.baseline",
    },
    "extensions": {
        ".md",
        ".svg",
        ".png",
        ".jpg",
        ".jpeg",
        ".gif",
        ".ico",
        ".woff",
        ".woff2",
        ".ttf",
        ".eot",
    },
    "paths": {
        ".git/",
        "node_modules/",
        "__pycache__/",
        ".pytest_cache/",
        ".mypy_cache/",
        ".venv/",
        "venv/",
        ".tox/",
        "dist/",
        "build/",
        ".eggs/",
    },
    "patterns": {
        r"your_[a-z_]+_here",
        r"example_[a-z_]+",
        r"dummy_[a-z_]+",
        r"test_[a-z_]+",
        r"fake_[a-z_]+",
        r"password\s*[=:]\s*['\"]?(changeme|password|123456|admin)['\"]?",
        r"#.*(?:example|placeholder|sample)",
        r"(mongodb|mysql|postgresql)://[^:]+:[^@]+@localhost",
        r"(mongodb|mysql|postgresql)://[^:]+:[^@]+@127\.0\.0\.1",
    },
}

# Markers for inline exclusions
EXCLUSION_MARKERS = [
    "# pragma: allowlist secret",
    "# noqa: secret",
    "// pragma: allowlist secret",
    "/* pragma: allowlist secret */",
    "# secret-detection:ignore",
]

def should_exclude_file(file_path: str) -> bool:
    """Check if file should be excluded from scanning."""
    path = Path(file_path)
    if path.name in EXCLUSIONS["files"]:
        return True
    if path.suffix.lower() in EXCLUSIONS["extensions"]:
        return True
    for excluded_path in EXCLUSIONS["paths"]:
        if excluded_path in str(path):
            return True
    return False

def has_exclusion_marker(line: str) -> bool:
    """Check if line has an exclusion marker."""
    return any(marker in line for marker in EXCLUSION_MARKERS)

def is_excluded_match(line: str, match_str: str) -> bool:
    """Check if the match should be excluded."""
    for pattern in EXCLUSIONS["patterns"]:
        if re.search(pattern, line, re.IGNORECASE):
            return True
    if re.search(r"['\"](fake|test|dummy|example|placeholder|changeme)['\"]", line, re.IGNORECASE):
        return True
    return False

def scan_file(file_path: str) -> List[Tuple[int, str, str, str]]:
    """Scan a single file for secrets.

    Returns list of tuples: (line_number, line_content, pattern_name, description)
    """
    findings = []
    try:
        with open(file_path, "r", encoding="utf-8", errors="ignore") as f:
            lines = f.readlines()
    except (IOError, OSError) as e:
        print(f"Warning: Could not read {file_path}: {e}", file=sys.stderr)
        return findings
    for line_num, line in enumerate(lines, 1):
        if has_exclusion_marker(line):
            continue
        for pattern_name, pattern_info in SECRET_PATTERNS.items():
            matches = re.finditer(pattern_info["pattern"], line)
            for match in matches:
                match_str = match.group(0)
                if is_excluded_match(line, match_str):
                    continue
                findings.append(
                    (line_num, line.strip(), pattern_name, pattern_info["description"])
                )
    return findings

def scan_files(file_paths: List[str]) -> dict:
    """Scan multiple files for secrets.

    Returns dict: {file_path: [(line_num, line, pattern, description), ...]}
    """
    results = {}
    for file_path in file_paths:
        if should_exclude_file(file_path):
            continue
        findings = scan_file(file_path)
        if findings:
            results[file_path] = findings
    return results

def print_findings(results: dict) -> None:
    """Print secret findings in a readable format."""
    if not results:
        return
    print("=" * 80)
    print("POTENTIAL SECRETS DETECTED!")
    print("=" * 80)
    print()
    total_findings = 0
    for file_path, findings in results.items():
        print(f"\nFILE: {file_path}")
        print("-" * 40)
        for line_num, line, pattern_name, description in findings:
            total_findings += 1
            print(f"  Line {line_num}: {description}")
            print(f"    Pattern: {pattern_name}")
            print(f"    Content: {line[:100]}{'...' if len(line) > 100 else ''}")
            print()
    print("=" * 80)
    print(f"Total findings: {total_findings}")
    print("=" * 80)
    print()
    print("To fix this:")
    print("  1. Remove the secret from the file")
    print("  2. Use environment variables or a secrets manager")
    print("  3. If this is a false positive, add an exclusion marker:")
    print("     - Add '# pragma: allowlist secret' to the end of the line")
    print("     - Or add '# secret-detection:ignore' to the end of the line")
    print()

def main() -> int:
    """Main entry point."""
    parser = argparse.ArgumentParser(
        description="Detect secrets in files",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  %(prog)s file1.py file2.yaml
  %(prog)s --exclude "*.md" src/

Exit codes:
  0 - No secrets found
  1 - Secrets detected
  2 - Error
""",
    )
    parser.add_argument(
        "files",
        nargs="+",
        help="Files to scan",
    )
    parser.add_argument(
        "--exclude",
        action="append",
        default=[],
        help="Additional file patterns to exclude",
    )
    parser.add_argument(
        "--verbose",
        "-v",
        action="store_true",
        help="Print verbose output",
    )
    args = parser.parse_args()
    files_to_scan = []
    for file_path in args.files:
        if should_exclude_file(file_path):
            if args.verbose:
                print(f"Skipping excluded file: {file_path}")
            continue
        files_to_scan.append(file_path)
    if args.verbose:
        print(f"Scanning {len(files_to_scan)} files...")
    results = scan_files(files_to_scan)
    if results:
        print_findings(results)
        return 1
    if args.verbose:
        print("No secrets detected!")
    return 0

if __name__ == "__main__":
    sys.exit(main())
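A quick standalone sanity check of the detection patterns — this snippet mirrors two entries from `SECRET_PATTERNS` above; the sample values are fabricated placeholders, not real credentials:

```python
import re

# Copied from SECRET_PATTERNS in scripts/detect_secrets.py; samples are fake.
openai = re.compile(r"sk-[a-zA-Z0-9]{20,}")
private_key = re.compile(r"-----BEGIN (RSA |DSA |EC |OPENSSH )?PRIVATE KEY-----")

# A 24-character suffix after "sk-" triggers the OpenAI pattern.
assert openai.search('OPENAI_API_KEY = "sk-' + "a" * 24 + '"')
# The private-key header matches with or without an algorithm label.
assert private_key.search("-----BEGIN OPENSSH PRIVATE KEY-----")
# Short strings after the prefix are ignored (under the 20-char floor).
assert not openai.search("sk-short")
```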


@@ -0,0 +1,68 @@
import sqlite3
from pathlib import Path
from datetime import datetime

DB_PATH = Path.home() / ".timmy" / "metrics" / "model_metrics.db"
REPORT_PATH = Path.home() / "timmy" / "SOVEREIGN_HEALTH.md"

def generate_report():
    if not DB_PATH.exists():
        return "No metrics database found."
    conn = sqlite3.connect(str(DB_PATH))
    # Get latest sovereignty score
    row = conn.execute("""
        SELECT local_pct, total_sessions, local_sessions, cloud_sessions, est_cloud_cost, est_saved
        FROM sovereignty_score ORDER BY timestamp DESC LIMIT 1
    """).fetchone()
    if not row:
        conn.close()
        return "No sovereignty data found."
    pct, total, local, cloud, cost, saved = row
    # Get model breakdown for the last 7 days
    models = conn.execute("""
        SELECT model, SUM(sessions), SUM(messages), is_local, SUM(est_cost_usd)
        FROM session_stats
        WHERE timestamp > ?
        GROUP BY model
        ORDER BY SUM(sessions) DESC
    """, (datetime.now().timestamp() - 86400 * 7,)).fetchall()
    conn.close()
    report = f"""# Sovereign Health Report — {datetime.now().strftime('%Y-%m-%d')}
## ◈ Sovereignty Score: {pct:.1f}%
**Status:** {"🟢 OPTIMAL" if pct > 90 else "🟡 WARNING" if pct > 50 else "🔴 COMPROMISED"}
- **Total Sessions:** {total}
- **Local Sessions:** {local} (Zero Cost, Total Privacy)
- **Cloud Sessions:** {cloud} (Token Leakage)
- **Est. Cloud Cost:** ${cost:.2f}
- **Est. Savings:** ${saved:.2f} (Sovereign Dividend)
## ◈ Fleet Composition (Last 7 Days)
| Model | Sessions | Messages | Local? | Est. Cost |
| :--- | :--- | :--- | :--- | :--- |
"""
    for m, s, msg, l, c in models:
        local_flag = "✅" if l else "❌"
        report += f"| {m} | {s} | {msg} | {local_flag} | ${c:.2f} |\n"
    report += """
---
*Generated by the Sovereign Health Daemon. Sovereignty is a right. Privacy is a duty.*
"""
    with open(REPORT_PATH, "w") as f:
        f.write(report)
    print(f"Report generated at {REPORT_PATH}")
    return report

if __name__ == "__main__":
    generate_report()
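The script above reads two tables it never creates. A sketch of the schema implied by its queries — column names come from the SELECTs, but the types and any extra columns are assumptions:

```python
import sqlite3

# Schema inferred from generate_report()'s queries; types are assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sovereignty_score (
    timestamp      REAL,
    local_pct      REAL,
    total_sessions INTEGER,
    local_sessions INTEGER,
    cloud_sessions INTEGER,
    est_cloud_cost REAL,
    est_saved      REAL
);
CREATE TABLE session_stats (
    timestamp    REAL,
    model        TEXT,
    sessions     INTEGER,
    messages     INTEGER,
    is_local     INTEGER,
    est_cost_usd REAL
);
""")
# Smoke-test with one fabricated row, then run the same latest-score query.
conn.execute("INSERT INTO sovereignty_score VALUES (1.0, 92.5, 40, 37, 3, 1.20, 18.00)")
row = conn.execute(
    "SELECT local_pct, total_sessions FROM sovereignty_score "
    "ORDER BY timestamp DESC LIMIT 1"
).fetchone()
print(row)  # → (92.5, 40)
```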

tests/test_nexus_alert.sh Executable file

@@ -0,0 +1,146 @@
#!/bin/bash
# Test script for Nexus Watchdog alerting functionality
set -euo pipefail
TEST_DIR="/tmp/test-nexus-alerts-$$"
export NEXUS_ALERT_DIR="$TEST_DIR"
export NEXUS_ALERT_ENABLED=true
echo "=== Nexus Watchdog Alert Test ==="
echo "Test alert directory: $TEST_DIR"
# Source the alert function from the heartbeat script
# Extract just the nexus_alert function for testing
cat > /tmp/test_alert_func.sh << 'ALEOF'
#!/bin/bash
NEXUS_ALERT_DIR="${NEXUS_ALERT_DIR:-/tmp/nexus-alerts}"
NEXUS_ALERT_ENABLED=true
HOSTNAME=$(hostname -s 2>/dev/null || echo "unknown")
SCRIPT_NAME="kimi-heartbeat-test"
nexus_alert() {
local alert_type="$1"
local message="$2"
local severity="${3:-info}"
local extra_data="${4:-{}}"
if [ "$NEXUS_ALERT_ENABLED" != "true" ]; then
return 0
fi
mkdir -p "$NEXUS_ALERT_DIR" 2>/dev/null || return 0
local timestamp
timestamp=$(date -u '+%Y-%m-%dT%H:%M:%SZ')
local nanoseconds
nanoseconds=$(date +%N 2>/dev/null || true)
# BSD/macOS date has no %N and prints a literal "N"; fall back to the PID for uniqueness
[[ "$nanoseconds" =~ ^[0-9]+$ ]] || nanoseconds="$$"
local alert_id="${SCRIPT_NAME}_$(date +%s)_${nanoseconds}_$$"
local alert_file="$NEXUS_ALERT_DIR/${alert_id}.json"
cat > "$alert_file" << EOF
{
"alert_id": "$alert_id",
"timestamp": "$timestamp",
"source": "$SCRIPT_NAME",
"host": "$HOSTNAME",
"alert_type": "$alert_type",
"severity": "$severity",
"message": "$message",
"data": $extra_data
}
EOF
if [ -f "$alert_file" ]; then
echo "NEXUS_ALERT: $alert_type [$severity] - $message"
return 0
else
echo "NEXUS_ALERT_FAILED: Could not write alert"
return 1
fi
}
ALEOF
source /tmp/test_alert_func.sh
# Test 1: Basic alert
echo -e "\n[TEST 1] Sending basic info alert..."
nexus_alert "test_alert" "Test message from heartbeat" "info" '{"test": true}'
# Test 2: Stale lock alert simulation
echo -e "\n[TEST 2] Sending stale lock alert..."
nexus_alert \
"stale_lock_reclaimed" \
"Stale lockfile deadlock cleared after 650s" \
"warning" \
'{"lock_age_seconds": 650, "lockfile": "/tmp/kimi-heartbeat.lock", "action": "removed"}'
# Test 3: Heartbeat resumed alert
echo -e "\n[TEST 3] Sending heartbeat resumed alert..."
nexus_alert \
"heartbeat_resumed" \
"Kimi heartbeat resumed after clearing stale lock" \
"info" \
'{"recovery": "successful", "continuing": true}'
# Check results
echo -e "\n=== Alert Files Created ==="
alert_count=$(find "$TEST_DIR" -name "*.json" 2>/dev/null | wc -l)
echo "Total alert files: $alert_count"
if [ "$alert_count" -eq 3 ]; then
echo "✅ All 3 alerts were created successfully"
else
echo "❌ Expected 3 alerts, found $alert_count"
exit 1
fi
echo -e "\n=== Alert Contents ==="
for f in "$TEST_DIR"/*.json; do
echo -e "\n--- $(basename "$f") ---"
python3 -m json.tool "$f" 2>/dev/null || cat "$f"
done
# Validate JSON structure
echo -e "\n=== JSON Validation ==="
all_valid=true
for f in "$TEST_DIR"/*.json; do
if python3 -c "import json; json.load(open('$f'))" 2>/dev/null; then
echo "$(basename "$f") - Valid JSON"
else
echo "$(basename "$f") - Invalid JSON"
all_valid=false
fi
done
# Check for required fields
echo -e "\n=== Required Fields Check ==="
for f in "$TEST_DIR"/*.json; do
basename=$(basename "$f")
missing=()
python3 -c "import json; d=json.load(open('$f'))" 2>/dev/null || continue
for field in alert_id timestamp source host alert_type severity message data; do
if ! python3 -c "import json; d=json.load(open('$f')); exit(0 if '$field' in d else 1)" 2>/dev/null; then
missing+=("$field")
fi
done
if [ ${#missing[@]} -eq 0 ]; then
echo "$basename - All required fields present"
else
echo "$basename - Missing fields: ${missing[*]}"
all_valid=false
fi
done
# Cleanup
rm -rf "$TEST_DIR" /tmp/test_alert_func.sh
echo -e "\n=== Test Summary ==="
if [ "$all_valid" = true ]; then
echo "✅ All tests passed!"
exit 0
else
echo "❌ Some tests failed"
exit 1
fi
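On the consuming side, the Nexus Watchdog only needs to glob the alert directory and validate the same eight fields the test checks. A minimal reader sketch (the directory default mirrors `NEXUS_ALERT_DIR`; the watchdog's actual ingestion code is not shown in this diff, so this is an assumption about its shape):

```python
import json
from pathlib import Path

# Same eight fields the test script's "Required Fields Check" enforces.
REQUIRED = {"alert_id", "timestamp", "source", "host",
            "alert_type", "severity", "message", "data"}

def load_alerts(alert_dir="/tmp/nexus-alerts"):
    """Read pending alert files, keeping only well-formed, complete alerts.

    Malformed JSON is skipped rather than deleted, so a human can inspect it.
    """
    alerts = []
    for path in sorted(Path(alert_dir).glob("*.json")):
        try:
            alert = json.loads(path.read_text())
        except (json.JSONDecodeError, OSError):
            continue
        if REQUIRED.issubset(alert):
            alerts.append(alert)
    return alerts
```

A consumer would typically delete each file after acting on it, so the directory doubles as a crude at-least-once queue.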


@@ -0,0 +1,106 @@
#!/usr/bin/env python3
"""
Test cases for secret detection script.
These tests verify that the detect_secrets.py script correctly:
1. Detects actual secrets
2. Ignores false positives
3. Respects exclusion markers
"""
import os
import sys
import tempfile
import unittest
from pathlib import Path
# Add scripts directory to path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "scripts"))
from detect_secrets import (
scan_file,
scan_files,
should_exclude_file,
has_exclusion_marker,
is_excluded_match,
SECRET_PATTERNS,
)
class TestSecretDetection(unittest.TestCase):
    """Test cases for secret detection."""

    def setUp(self):
        """Set up test fixtures."""
        self.test_dir = tempfile.mkdtemp()

    def tearDown(self):
        """Clean up test fixtures."""
        import shutil
        shutil.rmtree(self.test_dir, ignore_errors=True)

    def _create_test_file(self, content: str, filename: str = "test.txt") -> str:
        """Create a test file with given content."""
        file_path = os.path.join(self.test_dir, filename)
        with open(file_path, "w") as f:
            f.write(content)
        return file_path

    def test_detect_openai_api_key(self):
        """Test detection of OpenAI API keys."""
        content = "api_key = 'sk-abcdefghijklmnopqrstuvwxyz123456'"
        file_path = self._create_test_file(content)
        findings = scan_file(file_path)
        self.assertTrue(any("openai" in f[2].lower() for f in findings))

    def test_detect_private_key(self):
        """Test detection of private keys."""
        content = "-----BEGIN RSA PRIVATE KEY-----\nMIIEpAIBAAKCAQEA0Z3VS5JJcds3xfn/ygWyF8PbnGy0AHB7MhgwMbRvI0MBZhpF\n-----END RSA PRIVATE KEY-----"
        file_path = self._create_test_file(content)
        findings = scan_file(file_path)
        self.assertTrue(any("private" in f[2].lower() for f in findings))

    def test_detect_database_connection_string(self):
        """Test detection of database connection strings with credentials."""
        content = "DATABASE_URL=mongodb://admin:secretpassword@mongodb.example.com:27017/db"
        file_path = self._create_test_file(content)
        findings = scan_file(file_path)
        self.assertTrue(any("database" in f[2].lower() for f in findings))

    def test_detect_password_in_config(self):
        """Test detection of hardcoded passwords."""
        content = "password = 'mysecretpassword123'"
        file_path = self._create_test_file(content)
        findings = scan_file(file_path)
        self.assertTrue(any("password" in f[2].lower() for f in findings))

    def test_exclude_placeholder_passwords(self):
        """Test that placeholder passwords are excluded."""
        content = "password = 'changeme'"
        file_path = self._create_test_file(content)
        findings = scan_file(file_path)
        self.assertEqual(len(findings), 0)

    def test_exclude_localhost_database_url(self):
        """Test that localhost database URLs are excluded."""
        content = "DATABASE_URL=mongodb://admin:secret@localhost:27017/db"
        file_path = self._create_test_file(content)
        findings = scan_file(file_path)
        self.assertEqual(len(findings), 0)

    def test_pragma_allowlist_secret(self):
        """Test '# pragma: allowlist secret' marker."""
        content = "api_key = 'sk-abcdefghijklmnopqrstuvwxyz123456' # pragma: allowlist secret"
        file_path = self._create_test_file(content)
        findings = scan_file(file_path)
        self.assertEqual(len(findings), 0)

    def test_empty_file(self):
        """Test scanning empty file."""
        file_path = self._create_test_file("")
        findings = scan_file(file_path)
        self.assertEqual(len(findings), 0)


if __name__ == "__main__":
    unittest.main(verbosity=2)
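The tests exercise `detect_secrets.py` only through its public functions; the real `SECRET_PATTERNS` table is not part of this diff. For orientation, a pattern table of the kind it implies might look like the sketch below (these exact regexes are illustrative, not the script's real ones):

```python
import re

# Illustrative only — the real SECRET_PATTERNS lives in scripts/detect_secrets.py.
SAMPLE_PATTERNS = {
    "openai_api_key": re.compile(r"sk-[A-Za-z0-9]{32,}"),
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "database_url": re.compile(r"\w+://[^:\s]+:[^@\s]+@[^/\s]+"),
}

def find_secrets(text):
    """Return the names of pattern categories that match the text."""
    return [name for name, pat in SAMPLE_PATTERNS.items() if pat.search(text)]
```

A production scanner layers allowlists on top of this (placeholder values like `changeme`, `localhost` URLs, `# pragma: allowlist secret` markers), which is exactly what the exclusion tests above verify.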


@@ -24,32 +24,52 @@ class HealthCheckHandler(BaseHTTPRequestHandler):
# Suppress default logging
         pass
 
     def do_GET(self):
         """Handle GET requests"""
         if self.path == '/health':
             self.send_health_response()
         elif self.path == '/status':
             self.send_full_status()
+        elif self.path == '/metrics':
+            self.send_sovereign_metrics()
         else:
             self.send_error(404)
 
-    def send_health_response(self):
-        """Send simple health check"""
-        harness = get_harness()
-        result = harness.execute("health_check")
-        try:
-            health_data = json.loads(result)
-            status_code = 200 if health_data.get("overall") == "healthy" else 503
-        except:
-            status_code = 503
-            health_data = {"error": "Health check failed"}
-        self.send_response(status_code)
+    def send_sovereign_metrics(self):
+        """Send sovereign health metrics as JSON"""
+        try:
+            import sqlite3
+            db_path = Path.home() / ".timmy" / "metrics" / "model_metrics.db"
+            if not db_path.exists():
+                data = {"error": "No database found"}
+            else:
+                conn = sqlite3.connect(str(db_path))
+                row = conn.execute("""
+                    SELECT local_pct, total_sessions, local_sessions, cloud_sessions, est_cloud_cost, est_saved
+                    FROM sovereignty_score ORDER BY timestamp DESC LIMIT 1
+                """).fetchone()
+                if row:
+                    data = {
+                        "sovereignty_score": row[0],
+                        "total_sessions": row[1],
+                        "local_sessions": row[2],
+                        "cloud_sessions": row[3],
+                        "est_cloud_cost": row[4],
+                        "est_saved": row[5],
+                        "timestamp": datetime.now().isoformat()
+                    }
+                else:
+                    data = {"error": "No data"}
+                conn.close()
+        except Exception as e:
+            data = {"error": str(e)}
+        self.send_response(200)
         self.send_header('Content-Type', 'application/json')
         self.end_headers()
-        self.wfile.write(json.dumps(health_data).encode())
+        self.wfile.write(json.dumps(data).encode())
 
     def send_full_status(self):
         """Send full system status"""
         harness = get_harness()
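With the `/metrics` route in place, any poller can read the sovereignty numbers over plain HTTP. A minimal client sketch (the host and port are placeholders, not taken from this diff — substitute whatever address the health daemon binds to):

```python
import json
from urllib.request import urlopen

def parse_metrics(body):
    """Decode a /metrics response body; ok is False when the daemon reported an error."""
    data = json.loads(body)
    return ("error" not in data), data

def fetch_sovereign_metrics(base_url="http://localhost:8080"):
    # base_url is an assumption — use the health daemon's real bind address.
    with urlopen(f"{base_url}/metrics", timeout=5) as resp:
        return parse_metrics(resp.read())
```

Note the handler always returns HTTP 200 and signals problems via an `"error"` key in the body, so clients must inspect the JSON rather than the status code.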


@@ -3,7 +3,7 @@
 # Zero LLM cost for polling — only calls kimi/kimi-code for actual work.
 #
 # Run manually: bash ~/.timmy/uniwizard/kimi-heartbeat.sh
-# Runs via launchd every 5 minutes: ai.timmy.kimi-heartbeat.plist
+# Runs via launchd every 2 minutes: ai.timmy.kimi-heartbeat.plist
 #
 # Workflow for humans:
 # 1. Create or open a Gitea issue in any tracked repo
@@ -21,18 +21,14 @@ set -euo pipefail
 # --- Config ---
 TOKEN=$(cat "$HOME/.timmy/kimi_gitea_token" | tr -d '[:space:]')
 TIMMY_TOKEN=$(cat "$HOME/.config/gitea/timmy-token" | tr -d '[:space:]')
-# Prefer Tailscale (private network) over public IP
-if curl -sf --connect-timeout 2 "http://100.126.61.75:3000/api/v1/version" > /dev/null 2>&1; then
-  BASE="http://100.126.61.75:3000/api/v1"
-else
-  BASE="http://143.198.27.163:3000/api/v1"
-fi
+BASE="${GITEA_API_BASE:-https://forge.alexanderwhitestone.com/api/v1}"
 LOG="/tmp/kimi-heartbeat.log"
 LOCKFILE="/tmp/kimi-heartbeat.lock"
-MAX_DISPATCH=5                 # Don't overwhelm Kimi with too many parallel tasks
+MAX_DISPATCH=10                # Increased max dispatch to 10
 PLAN_TIMEOUT=120               # 2 minutes for planning pass
 EXEC_TIMEOUT=480               # 8 minutes for execution pass
 BODY_COMPLEXITY_THRESHOLD=500  # chars — above this triggers planning
+STALE_PROGRESS_SECONDS=3600    # reclaim kimi-in-progress after 1 hour of silence
 
 REPOS=(
   "Timmy_Foundation/timmy-home"
@@ -44,6 +40,31 @@ REPOS=(
 # --- Helpers ---
 log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG"; }
 
+needs_pr_proof() {
+  local haystack="${1,,}"
+  [[ "$haystack" =~ implement|fix|refactor|feature|perf|performance|rebase|deploy|integration|module|script|pipeline|benchmark|cache|test|bug|build|port ]]
+}
+
+has_pr_proof() {
+  local haystack="${1,,}"
+  [[ "$haystack" == *"proof:"* || "$haystack" == *"pr:"* || "$haystack" == *"/pulls/"* || "$haystack" == *"commit:"* ]]
+}
+
+post_issue_comment_json() {
+  local repo="$1"
+  local issue_num="$2"
+  local token="$3"
+  local body="$4"
+  local payload
+  payload=$(python3 - "$body" <<'PY'
+import json, sys
+print(json.dumps({"body": sys.argv[1]}))
+PY
+)
+  curl -sf -X POST -H "Authorization: token $token" -H "Content-Type: application/json" \
+    -d "$payload" "$BASE/repos/$repo/issues/$issue_num/comments" > /dev/null 2>&1 || true
+}
+
 # Prevent overlapping runs
 if [ -f "$LOCKFILE" ]; then
   lock_age=$(( $(date +%s) - $(stat -f %m "$LOCKFILE" 2>/dev/null || echo 0) ))
@@ -65,30 +86,53 @@ for repo in "${REPOS[@]}"; do
   response=$(curl -sf -H "Authorization: token $TIMMY_TOKEN" \
     "$BASE/repos/$repo/issues?state=open&labels=assigned-kimi&limit=20" 2>/dev/null || echo "[]")
 
-  # Filter: skip issues that already have kimi-in-progress or kimi-done
+  # Filter: skip done tasks, but reclaim stale kimi-in-progress work automatically
   issues=$(echo "$response" | python3 -c "
-import json, sys
+import json, sys, datetime
+STALE = int(${STALE_PROGRESS_SECONDS})
+def parse_ts(value):
+    if not value:
+        return None
+    try:
+        return datetime.datetime.fromisoformat(value.replace('Z', '+00:00'))
+    except Exception:
+        return None
 try:
     data = json.loads(sys.stdin.buffer.read())
 except:
     sys.exit(0)
+now = datetime.datetime.now(datetime.timezone.utc)
 for i in data:
     labels = [l['name'] for l in i.get('labels', [])]
-    if 'kimi-in-progress' in labels or 'kimi-done' in labels:
+    if 'kimi-done' in labels:
         continue
-    # Pipe-delimited: number|title|body_length|body (truncated, newlines removed)
+    reclaim = False
+    updated_at = i.get('updated_at', '') or ''
+    if 'kimi-in-progress' in labels:
+        ts = parse_ts(updated_at)
+        age = (now - ts).total_seconds() if ts else (STALE + 1)
+        if age < STALE:
+            continue
+        reclaim = True
     body = (i.get('body', '') or '')
     body_len = len(body)
     body_clean = body[:1500].replace('\n', ' ').replace('|', ' ')
     title = i['title'].replace('|', ' ')
-    print(f\"{i['number']}|{title}|{body_len}|{body_clean}\")
+    updated_clean = updated_at.replace('|', ' ')
+    reclaim_flag = 'reclaim' if reclaim else 'fresh'
+    print(f\"{i['number']}|{title}|{body_len}|{reclaim_flag}|{updated_clean}|{body_clean}\")
 " 2>/dev/null)
 
   [ -z "$issues" ] && continue
 
-  while IFS='|' read -r issue_num title body_len body; do
+  while IFS='|' read -r issue_num title body_len reclaim_flag updated_at body; do
     [ -z "$issue_num" ] && continue
-    log "FOUND: $repo #$issue_num — $title (body: ${body_len} chars)"
+    log "FOUND: $repo #$issue_num — $title (body: ${body_len} chars, mode: ${reclaim_flag}, updated: ${updated_at})"
 
     # --- Get label IDs for this repo ---
     label_json=$(curl -sf -H "Authorization: token $TIMMY_TOKEN" \
@@ -98,6 +142,15 @@ for i in data:
     done_id=$(echo "$label_json" | python3 -c "import json,sys; [print(l['id']) for l in json.load(sys.stdin) if l['name']=='kimi-done']" 2>/dev/null)
     kimi_id=$(echo "$label_json" | python3 -c "import json,sys; [print(l['id']) for l in json.load(sys.stdin) if l['name']=='assigned-kimi']" 2>/dev/null)
 
+    if [ "$reclaim_flag" = "reclaim" ]; then
+      log "RECLAIM: $repo #$issue_num — stale kimi-in-progress since $updated_at"
+      [ -n "$progress_id" ] && curl -sf -X DELETE -H "Authorization: token $TIMMY_TOKEN" \
+        "$BASE/repos/$repo/issues/$issue_num/labels/$progress_id" > /dev/null 2>&1 || true
+      curl -sf -X POST -H "Authorization: token $TOKEN" -H "Content-Type: application/json" \
+        -d "{\"body\":\"🟡 **KimiClaw reclaiming stale task.**\\nPrevious kimi-in-progress state exceeded ${STALE_PROGRESS_SECONDS}s without resolution.\\nLast update: $updated_at\\nTimestamp: $(date -u '+%Y-%m-%dT%H:%M:%SZ')\"}" \
+        "$BASE/repos/$repo/issues/$issue_num/comments" > /dev/null 2>&1 || true
+    fi
+
     # --- Add kimi-in-progress label ---
     if [ -n "$progress_id" ]; then
       curl -sf -X POST -H "Authorization: token $TIMMY_TOKEN" -H "Content-Type: application/json" \
@@ -121,32 +174,11 @@ for i in data:
         -d "{\"body\":\"🟠 **KimiClaw picking up this task** via heartbeat.\\nBackend: kimi/kimi-code (Moonshot AI)\\nMode: **Planning first** (task is complex)\\nTimestamp: $(date -u '+%Y-%m-%dT%H:%M:%SZ')\"}" \
         "$BASE/repos/$repo/issues/$issue_num/comments" > /dev/null 2>&1 || true
 
-      plan_prompt="You are KimiClaw, a planning agent. You have 2 MINUTES.
-
-TASK: Analyze this Gitea issue and decide if you can complete it in under 8 minutes, or if it needs to be broken into subtasks.
-
-ISSUE #$issue_num in $repo: $title
-
-BODY:
-$body
-
-RULES:
-- If you CAN complete this in one pass (research, write analysis, answer a question): respond with EXECUTE followed by a one-line plan.
-- If the task is TOO BIG (needs git operations, multiple repos, >2000 words of output, or multi-step implementation): respond with DECOMPOSE followed by a numbered list of 2-5 smaller subtasks. Each subtask must be completable in under 8 minutes by itself.
-- Each subtask line format: SUBTASK: <title> | <one-line description>
-- Be realistic about what fits in 8 minutes with no terminal access.
-- You CANNOT clone repos, run git, or execute code. You CAN research, analyze, write specs, review code via API, and produce documents.
-
-Respond with ONLY your decision. No preamble."
-
-      plan_result=$(openclaw agent --agent main --message "$plan_prompt" --timeout $PLAN_TIMEOUT --json 2>/dev/null || echo '{"status":"error"}')
+      plan_prompt="You are KimiClaw, a planning agent. You have 2 MINUTES.\n\nTASK: Analyze this Gitea issue and decide if you can complete it in under 8 minutes, or if it needs to be broken into subtasks.\n\nISSUE #$issue_num in $repo: $title\n\nBODY:\n$body\n\nRULES:\n- If you CAN complete this in one pass (research, write analysis, answer a question): respond with EXECUTE followed by a one-line plan.\n- If the task is TOO BIG (needs git operations, multiple repos, >2000 words of output, or multi-step implementation): respond with DECOMPOSE followed by a numbered list of 2-5 smaller subtasks. Each subtask must be completable in under 8 minutes by itself.\n- Each subtask line format: SUBTASK: <title> | <one-line description>\n- Be realistic about what fits in 8 minutes with no terminal access.\n- You CANNOT clone repos, run git, or execute code. You CAN research, analyze, write specs, review code via API, and produce documents.\n\nRespond with ONLY your decision. No preamble."
+      plan_result=$(openclaw agent --agent main --message "$plan_prompt" --timeout $PLAN_TIMEOUT --json 2>/dev/null || echo '{\"status\":\"error\"}')
       plan_status=$(echo "$plan_result" | python3 -c "import json,sys; print(json.load(sys.stdin).get('status','error'))" 2>/dev/null || echo "error")
-      plan_text=$(echo "$plan_result" | python3 -c "
-import json,sys
-d = json.load(sys.stdin)
-payloads = d.get('result',{}).get('payloads',[])
-print(payloads[0]['text'] if payloads else '')
-" 2>/dev/null || echo "")
+      plan_text=$(echo "$plan_result" | python3 -c "\nimport json,sys\nd = json.load(sys.stdin)\npayloads = d.get('result',{}).get('payloads',[])\nprint(payloads[0]['text'] if payloads else '')\n" 2>/dev/null || echo "")
 
       if echo "$plan_text" | grep -qi "^DECOMPOSE"; then
         # --- Create subtask issues ---
@@ -155,7 +187,7 @@ print(payloads[0]['text'] if payloads else '')
         # Post the plan as a comment
         escaped_plan=$(echo "$plan_text" | python3 -c "import sys,json; print(json.dumps(sys.stdin.read()))" 2>/dev/null)
         curl -sf -X POST -H "Authorization: token $TOKEN" -H "Content-Type: application/json" \
-          -d "{\"body\":\"📋 **Planning complete — decomposing into subtasks:**\\n\\n$plan_text\"}" \
+          -d "{\"body\":\"📝 **Planning complete — decomposing into subtasks:**\\n\\n$plan_text\"}" \
           "$BASE/repos/$repo/issues/$issue_num/comments" > /dev/null 2>&1 || true
 
         # Extract SUBTASK lines and create child issues
@@ -245,25 +277,40 @@ print(payloads[0]['text'][:3000] if payloads else 'No response')
 " 2>/dev/null || echo "No response")
 
       if [ "$status" = "ok" ] && [ "$response_text" != "No response" ]; then
-        log "COMPLETED: $repo #$issue_num"
-        # Post result as comment (escape for JSON)
         escaped=$(echo "$response_text" | python3 -c "import sys,json; print(json.dumps(sys.stdin.read())[1:-1])" 2>/dev/null)
-        curl -sf -X POST -H "Authorization: token $TOKEN" -H "Content-Type: application/json" \
-          -d "{\"body\":\"✅ **KimiClaw result:**\\n\\n$escaped\"}" \
-          "$BASE/repos/$repo/issues/$issue_num/comments" > /dev/null 2>&1 || true
-        # Remove kimi-in-progress, add kimi-done
-        [ -n "$progress_id" ] && curl -sf -X DELETE -H "Authorization: token $TIMMY_TOKEN" \
-          "$BASE/repos/$repo/issues/$issue_num/labels/$progress_id" > /dev/null 2>&1 || true
-        [ -n "$done_id" ] && curl -sf -X POST -H "Authorization: token $TIMMY_TOKEN" -H "Content-Type: application/json" \
-          -d "{\"labels\":[$done_id]}" \
-          "$BASE/repos/$repo/issues/$issue_num/labels" > /dev/null 2>&1 || true
+        if needs_pr_proof "$title $body" && ! has_pr_proof "$response_text"; then
+          log "BLOCKED: $repo #$issue_num — response lacked PR/proof for code task"
+          post_issue_comment_json "$repo" "$issue_num" "$TOKEN" "🟡 **KimiClaw produced analysis only — no PR/proof detected.**
+This issue looks like implementation work, so it is NOT being marked kimi-done.
+Kimi response excerpt:
+
+$escaped
+
+Action: removing Kimi queue labels so a code-capable agent can pick it up."
+          [ -n "$progress_id" ] && curl -sf -X DELETE -H "Authorization: token $TIMMY_TOKEN" \
+            "$BASE/repos/$repo/issues/$issue_num/labels/$progress_id" > /dev/null 2>&1 || true
+          [ -n "$kimi_id" ] && curl -sf -X DELETE -H "Authorization: token $TIMMY_TOKEN" \
+            "$BASE/repos/$repo/issues/$issue_num/labels/$kimi_id" > /dev/null 2>&1 || true
+        else
+          log "COMPLETED: $repo #$issue_num"
+          post_issue_comment_json "$repo" "$issue_num" "$TOKEN" "🟢 **KimiClaw result:**
+
+$escaped"
+          [ -n "$progress_id" ] && curl -sf -X DELETE -H "Authorization: token $TIMMY_TOKEN" \
+            "$BASE/repos/$repo/issues/$issue_num/labels/$progress_id" > /dev/null 2>&1 || true
+          [ -n "$kimi_id" ] && curl -sf -X DELETE -H "Authorization: token $TIMMY_TOKEN" \
+            "$BASE/repos/$repo/issues/$issue_num/labels/$kimi_id" > /dev/null 2>&1 || true
+          [ -n "$done_id" ] && curl -sf -X POST -H "Authorization: token $TIMMY_TOKEN" -H "Content-Type: application/json" \
+            -d "{\"labels\":[$done_id]}" \
+            "$BASE/repos/$repo/issues/$issue_num/labels" > /dev/null 2>&1 || true
+        fi
       else
         log "FAILED: $repo #$issue_num — status=$status"
         curl -sf -X POST -H "Authorization: token $TOKEN" -H "Content-Type: application/json" \
           -d "{\"body\":\"🔴 **KimiClaw failed/timed out.**\\nStatus: $status\\nTimestamp: $(date -u '+%Y-%m-%dT%H:%M:%SZ')\\n\\nTask may be too complex for single-pass execution. Consider breaking into smaller subtasks.\"}" \
           "$BASE/repos/$repo/issues/$issue_num/comments" > /dev/null 2>&1 || true
         # Remove kimi-in-progress on failure
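The reclaim filter embedded in the heartbeat hinges on a single calculation: how long ago Gitea last saw activity on a kimi-in-progress issue. Isolated from the inline `python3 -c` snippet as a standalone sketch (`STALE_PROGRESS_SECONDS` mirrors the script's config value of 3600):

```python
import datetime

STALE_PROGRESS_SECONDS = 3600  # mirrors the heartbeat's config value

def is_stale(updated_at, now=None, stale_after=STALE_PROGRESS_SECONDS):
    """True when a kimi-in-progress issue has been silent past the threshold.

    Unparseable or missing timestamps count as stale, matching the
    heartbeat's 'age = STALE + 1' fallback.
    """
    now = now or datetime.datetime.now(datetime.timezone.utc)
    try:
        ts = datetime.datetime.fromisoformat(updated_at.replace("Z", "+00:00"))
    except (ValueError, AttributeError):
        return True
    return (now - ts).total_seconds() >= stale_after
```

The `Z`-to-`+00:00` rewrite is needed because `datetime.fromisoformat` on older Pythons does not accept the `Z` suffix Gitea emits in `updated_at`.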


@@ -5,7 +5,12 @@
 set -euo pipefail
 
 KIMI_TOKEN=$(cat /Users/apayne/.timmy/kimi_gitea_token | tr -d '[:space:]')
-BASE="http://100.126.61.75:3000/api/v1"
+
+# --- Tailscale/IP Detection (timmy-home#385) ---
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+source "${SCRIPT_DIR}/lib/tailscale-gitea.sh"
+BASE="$GITEA_BASE_URL"
 LOG="/tmp/kimi-mentions.log"
 PROCESSED="/tmp/kimi-mentions-processed.txt"


@@ -0,0 +1,55 @@
#!/bin/bash
# example-usage.sh — Example showing how to use the tailscale-gitea module
# Issue: timmy-home#385 — Standardized Tailscale IP detection module
set -euo pipefail
# --- Basic Usage ---
# Source the module to automatically set GITEA_BASE_URL
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/tailscale-gitea.sh"
# Now use GITEA_BASE_URL in your API calls
echo "Using Gitea at: $GITEA_BASE_URL"
echo "Tailscale active: $GITEA_USING_TAILSCALE"
# --- Example API Call ---
# curl -sf -H "Authorization: token $TOKEN" \
# "$GITEA_BASE_URL/repos/myuser/myrepo/issues"
# --- Custom Configuration (Optional) ---
# You can customize behavior by setting variables BEFORE sourcing:
#
# TAILSCALE_TIMEOUT=5 # Wait 5 seconds instead of 2
# TAILSCALE_DEBUG=1 # Print which endpoint was selected
# source "${SCRIPT_DIR}/tailscale-gitea.sh"
# --- Advanced: Checking Network Mode ---
if [[ "$GITEA_USING_TAILSCALE" == "true" ]]; then
echo "✓ Connected via private Tailscale network"
else
echo "⚠ Using public internet fallback (Tailscale unavailable)"
fi
# --- Example: Polling with Retry Logic ---
poll_gitea() {
local endpoint="${1:-$GITEA_BASE_URL}"
local max_retries="${2:-3}"
local retry=0
while [[ $retry -lt $max_retries ]]; do
if curl -sf --connect-timeout 2 "${endpoint}/version" > /dev/null 2>&1; then
echo "Gitea is reachable"
return 0
fi
retry=$((retry + 1))
echo "Retry $retry/$max_retries..."
sleep 1
done
echo "Gitea unreachable after $max_retries attempts"
return 1
}
# Uncomment to test connectivity:
# poll_gitea "$GITEA_BASE_URL"


@@ -0,0 +1,64 @@
#!/bin/bash
# tailscale-gitea.sh — Standardized Tailscale IP detection module for Gitea API access
# Issue: timmy-home#385 — Standardize Tailscale IP detection across auxiliary scripts
#
# Usage (source this file in your script):
# source /path/to/tailscale-gitea.sh
# # Now use $GITEA_BASE_URL for API calls
#
# Configuration (set before sourcing to customize):
# TAILSCALE_IP - Tailscale IP to try first (default: 100.126.61.75)
# PUBLIC_IP - Public fallback IP (default: 143.198.27.163)
# GITEA_PORT - Gitea API port (default: 3000)
# TAILSCALE_TIMEOUT - Connection timeout in seconds (default: 2)
# GITEA_API_VERSION - API version path (default: api/v1)
#
# Sovereignty: Private Tailscale network preferred over public internet
# --- Default Configuration ---
: "${TAILSCALE_IP:=100.126.61.75}"
: "${PUBLIC_IP:=143.198.27.163}"
: "${GITEA_PORT:=3000}"
: "${TAILSCALE_TIMEOUT:=2}"
: "${GITEA_API_VERSION:=api/v1}"
# --- Detection Function ---
_detect_gitea_endpoint() {
local tailscale_url="http://${TAILSCALE_IP}:${GITEA_PORT}/${GITEA_API_VERSION}"
local public_url="http://${PUBLIC_IP}:${GITEA_PORT}/${GITEA_API_VERSION}"
# Prefer Tailscale (private network) over public IP
if curl -sf --connect-timeout "$TAILSCALE_TIMEOUT" \
"${tailscale_url}/version" > /dev/null 2>&1; then
echo "$tailscale_url"
return 0
else
echo "$public_url"
return 1
fi
}
# --- Main Detection ---
# Set GITEA_BASE_URL for use by sourcing scripts
# Also sets GITEA_USING_TAILSCALE=true/false for scripts that need to know
if curl -sf --connect-timeout "$TAILSCALE_TIMEOUT" \
"http://${TAILSCALE_IP}:${GITEA_PORT}/${GITEA_API_VERSION}/version" > /dev/null 2>&1; then
GITEA_BASE_URL="http://${TAILSCALE_IP}:${GITEA_PORT}/${GITEA_API_VERSION}"
GITEA_USING_TAILSCALE=true
else
GITEA_BASE_URL="http://${PUBLIC_IP}:${GITEA_PORT}/${GITEA_API_VERSION}"
GITEA_USING_TAILSCALE=false
fi
# Export for child processes
export GITEA_BASE_URL
export GITEA_USING_TAILSCALE
# Optional: log which endpoint was selected (set TAILSCALE_DEBUG=1 to enable)
if [[ "${TAILSCALE_DEBUG:-0}" == "1" ]]; then
if [[ "$GITEA_USING_TAILSCALE" == "true" ]]; then
echo "[tailscale-gitea] Using Tailscale endpoint: $GITEA_BASE_URL" >&2
else
echo "[tailscale-gitea] Tailscale unavailable, using public endpoint: $GITEA_BASE_URL" >&2
fi
fi
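For tooling that cannot source a bash module, the same prefer-private policy is straightforward to mirror. A hedged Python sketch (the `reachable` probe stands in for the module's `curl --connect-timeout` check; swap in a real HTTP request against `/version`):

```python
from urllib.request import urlopen
from urllib.error import URLError

# Defaults mirror the bash module's TAILSCALE_IP / PUBLIC_IP / GITEA_PORT.
TAILSCALE_IP, PUBLIC_IP, GITEA_PORT = "100.126.61.75", "143.198.27.163", 3000

def reachable(base_url, timeout=2.0):
    """Probe the Gitea /version endpoint, like the module's curl check."""
    try:
        with urlopen(f"{base_url}/version", timeout=timeout):
            return True
    except (URLError, OSError):
        return False

def pick_gitea_base(probe=reachable):
    """Return (base_url, using_tailscale), preferring the private network."""
    ts = f"http://{TAILSCALE_IP}:{GITEA_PORT}/api/v1"
    pub = f"http://{PUBLIC_IP}:{GITEA_PORT}/api/v1"
    return (ts, True) if probe(ts) else (pub, False)
```

Injecting the probe as a parameter keeps the selection logic testable without network access, the same way `GITEA_USING_TAILSCALE` lets sourcing scripts branch without re-probing.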