audits/2026-04-06-formalization-audit.md

# Formalization Audit Report

**Date:** 2026-04-06
**Auditor:** Allegro (subagent)
**Scope:** All homebrew components on VPS 167.99.126.228

---

## Executive Summary

This system runs a fleet of 5 Hermes AI agents (allegro, adagio, ezra, bezalel, bilbobagginshire) alongside supporting infrastructure (Gitea, Nostr relay, Evennia MUD, Ollama). The deployment is functional but heavily ad-hoc — characterized by one-off systemd units, scattered scripts, bare `docker run` containers with no compose file, and custom glue code where standard tooling exists.

**Priority recommendations:**

1. **Consolidate fleet deployment** into docker-compose (HIGH impact, MEDIUM effort)
2. **Clean up burn scripts** — archive or delete (HIGH impact, LOW effort)
3. **Add docker-compose for Gitea + strfry** (MEDIUM impact, LOW effort)
4. **Formalize the webhook receiver** into the hermes-agent repo (MEDIUM impact, LOW effort)
5. **Recover or rewrite GOFAI source files** — only .pyc remain (HIGH urgency)

---

## 1. Gitea Webhook Receiver

**File:** `/root/wizards/allegro/gitea_webhook_receiver.py` (327 lines)
**Service:** `allegro-gitea-webhook.service`

### Current State

Custom aiohttp server that:

- Listens on port 8670 for Gitea webhook events
- Verifies HMAC-SHA256 signatures
- Filters for @allegro mentions and issue assignments
- Forwards to the Hermes API (OpenAI-compatible endpoint)
- Posts the response back as a Gitea comment
- Includes a health check, event logging, and async fire-and-forget processing

Quality: **Solid.** Clean async code, proper signature verification, sensible error handling, daily log rotation. Well-structured for a single-file service.
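For reference, Gitea signs the raw webhook payload with HMAC-SHA256 and sends the hex digest in the `X-Gitea-Signature` header. A minimal sketch of that check (illustrative, not the receiver's actual code — the function name is ours):

```python
import hashlib
import hmac


def verify_gitea_signature(secret: str, body: bytes, signature_header: str) -> bool:
    """Return True if the X-Gitea-Signature header matches the raw request body.

    Gitea computes HMAC-SHA256 over the unparsed payload using the webhook
    secret and sends the lowercase hex digest.
    """
    expected = hmac.new(secret.encode("utf-8"), body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information on mismatches
    return hmac.compare_digest(expected, signature_header)
```

Any receiver variant should verify against the raw bytes before JSON parsing, since re-serialized JSON will not reproduce the digest.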

### OSS Alternatives

- **adnanh/webhook** (Go, 10k+ stars) — generic webhook receiver, but would need custom scripting anyway
- **Flask/FastAPI webhook blueprints** — roughly equivalent effort
- **Gitea built-in webhooks + Woodpecker CI** — different architecture (push-based CI vs. agent interaction)

### Recommendation: **KEEP, but formalize**

The webhook logic is Allegro-specific (mention detection, Hermes API forwarding, comment posting). No off-the-shelf tool replaces this without equal or more glue code. However:

- Move it into the hermes-agent repo as a plugin/skill
- Make it configurable for any wizard name (not just "allegro")
- Add it to docker-compose instead of a standalone systemd unit

**Effort:** 2-4 hours

---

## 2. Nostr Relay + Bridge

### Relay (strfry + custom timmy-relay)

**Running:** Two relay implementations in parallel:

1. **strfry** Docker container (port 7777) — standard relay, healthy, community-maintained
2. **timmy-relay** Go binary (port 2929) — custom NIP-29 relay built on `relay29`/`khatru29`

The custom relay (`main.go`, 108 lines) is a thin wrapper around `fiatjaf/relay29` with:

- NIP-29 group support (admin/mod roles)
- LMDB persistent storage
- Allowlisted event kinds
- Anti-spam policies (tag limits, timestamp guards)

### Bridge (dm_bridge_mvp)

**Service:** `nostr-bridge.service`
**Status:** Running but **source file deleted** — only the `.pyc` cache remains at `/root/nostr-relay/__pycache__/dm_bridge_mvp.cpython-312.pyc`

From its decompiled structure, the bridge:

- Reads DMs from the Nostr relay
- Parses commands from DMs
- Creates Gitea issues/comments via the API
- Polls for new DMs in a loop
- Uses keystore.json for identity management

**CRITICAL:** The source code is gone. If the service restarts after a Python update (new .pyc format), this component dies.
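Until a proper decompile happens, the orphaned bytecode can at least be disassembled with the stdlib to guide reconstruction. A sketch, assuming CPython 3.7+'s `.pyc` layout (16-byte header — magic, flags, mtime/hash, size — followed by a marshalled code object):

```python
import dis
import marshal
from pathlib import Path


def load_pyc_code(path):
    """Load the top-level code object from a CPython 3.7+ .pyc file.

    Skips the 16-byte pyc header and unmarshals the module code object,
    which can then be fed to dis.dis() or inspected for names/constants.
    """
    data = Path(path).read_bytes()
    return marshal.loads(data[16:])


# Example (path from this audit):
# dis.dis(load_pyc_code(
#     "/root/nostr-relay/__pycache__/dm_bridge_mvp.cpython-312.pyc"))
```

This only yields bytecode, not source; a decompiler is still needed for full recovery, but the disassembly confirms which functions and API calls the bridge contains.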

### OSS Alternatives

- **strfry:** Already in use. Good choice, well-maintained.
- **relay29:** Already in use. The correct choice for NIP-29 groups.
- **nostr-tools / rust-nostr SDKs** for the bridge — but the bridge logic is custom regardless

### Recommendation: **KEEP relay, RECOVER bridge**

- The relay setup (relay29 custom binary + strfry) is appropriate
- **URGENT:** Decompile dm_bridge_mvp.pyc and reconstruct the source before it is lost
- Consider whether strfry (port 7777) is still needed alongside timmy-relay (port 2929) — consolidation may be possible
- Move the bridge into its own git repo on Gitea

**Effort:** 4-6 hours (bridge recovery), 1 hour (strfry consolidation assessment)

---

## 3. Evennia / Timmy Academy

**Path:** `/root/workspace/timmy-academy/`
**Components:**

| Component | File | Custom? | Lines |
|-----------|------|---------|-------|
| AuditedCharacter | typeclasses/audited_character.py | Yes | 110 |
| Custom Commands | commands/command.py | Yes | 368 |
| Audit Dashboard | web/audit/ (views, api, templates) | Yes | ~250 |
| Object typeclass | typeclasses/objects.py | Stock (untouched) | 218 |
| Room typeclass | typeclasses/rooms.py | Minimal | ~15 |
| Exit typeclass | typeclasses/exits.py | Minimal | ~15 |
| Account typeclass | typeclasses/accounts.py | Custom | 157 |
| Channel typeclass | typeclasses/channels.py | Custom | ~160 |
| Scripts | typeclasses/scripts.py | Custom | ~130 |
| World builder | world/ | Custom | Unknown |

### Custom vs. Stock Analysis

- **objects.py** — Stock Evennia template with no modifications. Safe to delete and use the defaults.
- **audited_character.py** — Fully custom. Tracks movement, commands, and session time, and generates audit summaries. Clean code.
- **commands/command.py** — 7 custom commands (examine, rooms, status, map, academy, smell, listen). All game-specific. Quality is good — uses Evennia patterns correctly.
- **web/audit/** — Custom Django views and templates for an audit dashboard (character detail, command logs, movement logs, session logs). Functional but simple.
- **accounts.py, channels.py, scripts.py** — Custom but follow Evennia patterns. Mainly enhanced with audit hooks.

### OSS Alternatives

Evennia IS the OSS framework. The customizations are all game-specific content, which is exactly how Evennia is designed to be used.

### Recommendation: **KEEP as-is**

This is a well-structured Evennia game. The customizations are appropriate and follow Evennia best practices. No formalization needed — it is already a proper project in a git repo.

Minor improvements:

- Remove the `{e})` empty file in the project root (appears to be a typo artifact)
- The audit dashboard could use authentication guards

**Effort:** 0 (already formalized)

---

## 4. Burn Scripts (`/root/burn_*.py`)

**Count:** 39 scripts
**Total lines:** 2,898
**Date range:** All from April 5, 2026 (one day)

### Current State

These are one-off Gitea API query scripts. Examples:

- `burn_sitrep.py` — fetch issue details from Gitea
- `burn_comments.py` — fetch issue comments
- `burn_fetch_issues.py` — list open issues
- `burn_execute.py` — perform actions on issues
- `burn_mode_query.py` — query specific issue data

All follow the same pattern:

1. Load a token from `/root/.gitea_token`
2. Define an `api_get(path)` helper
3. Hit specific Gitea API endpoints
4. Print JSON results

They share ~80% identical boilerplate. Most appear to be iterative debugging scripts (burn_discover.py, burn_discover2.py; burn_fetch_issues.py, burn_fetch_issues2.py).

### OSS Alternatives

- **Gitea CLI (`tea`)** — the official Gitea CLI; does everything these scripts do
- **python-gitea** — Python SDK for the Gitea API
- **httpie / curl** — for one-off queries

### Recommendation: **DELETE or ARCHIVE**

These are debugging artifacts, not production code. They:

- Duplicate functionality already in the webhook receiver and hermes-agent tools
- Contain hardcoded issue numbers and old API URLs (`143.198.27.163:3000` vs. the current `forge.alexanderwhitestone.com`)
- Have numbered variants showing iterative debugging (not versioned)

Action:

1. `mkdir /root/archive && mv /root/burn_*.py /root/archive/`
2. If any utility is still needed, extract it into the hermes-agent's existing `tools/gitea_client.py`
3. Install the `tea` CLI for ad-hoc Gitea queries

**Effort:** 30 minutes

---

## 5. Heartbeat Daemon

**Files:**

- `/root/wizards/allegro/home/skills/devops/hybrid-autonomous-production/templates/heartbeat_daemon.py` (321 lines)
- `/root/wizards/allegro/household-snapshots/scripts/template_checkpoint_heartbeat.py` (155 lines)
- Various per-wizard heartbeat scripts

### Current State

Two distinct heartbeat patterns:

**A) Production Heartbeat Daemon (321 lines)**

A full autonomous-operations script:

- Health checks (Gitea, Nostr relay, Hermes services)
- Dynamic repo discovery
- Automated triage (comments on unlabeled issues)
- PR merge automation
- Logs to `/root/allegro/heartbeat_logs/`
- Designed to run every 15 minutes via cron

Quality: **Good for a prototype.** Well-structured phases, logging, error handling. But it runs as root, uses urllib directly, and has a hardcoded org name.

**B) Checkpoint Heartbeat Template (155 lines)**

A state-backup script:

- Syncs wizard home dirs to git repos
- Auto-commits and pushes to Gitea
- Template pattern (copy and customize per wizard)

### OSS Alternatives

- **For health checks:** Uptime Kuma, Healthchecks.io, Monit
- **For PR automation:** Renovate, Dependabot, Mergify (but these are SaaS / different scope)
- **For backups:** restic, borgbackup, git-backup tools
- **For scheduling:** systemd timers (already used), or cron

### Recommendation: **FORMALIZE into a proper systemd timer + package**

- Create a proper `timmy-heartbeat` Python package with:
  - `heartbeat.health` — infrastructure health checks
  - `heartbeat.triage` — issue triage automation
  - `heartbeat.checkpoint` — state backup
- Install it as a systemd timer (not cron) with proper unit files
- Use the existing `tools/gitea_client.py` from hermes-agent instead of duplicating urllib code
- Add alerting (a webhook to Telegram/Nostr on failures)

**Effort:** 4-6 hours
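The timer route needs only two small unit files. A minimal sketch, with hypothetical unit names, package entry point, and paths (nothing here exists yet):

```ini
# /etc/systemd/system/timmy-heartbeat.service (hypothetical)
[Unit]
Description=Timmy fleet heartbeat (health, triage, checkpoint)
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/bin/python3 -m heartbeat

# /etc/systemd/system/timmy-heartbeat.timer (hypothetical)
[Unit]
Description=Run timmy-heartbeat every 15 minutes

[Timer]
OnCalendar=*:0/15
Persistent=true

[Install]
WantedBy=timers.target
```

`systemctl enable --now timmy-heartbeat.timer` then replaces the cron entry, and `Persistent=true` catches up on runs missed while the VPS was down.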

---

## 6. GOFAI System

**Path:** `/root/wizards/allegro/gofai/`

### Current State: CRITICAL — SOURCE FILES MISSING

The `gofai/` directory contains:

- `tests/test_gofai.py` (790 lines, 20+ test cases) — **exists**
- `tests/test_knowledge_graph.py` (14k chars) — **exists**
- `__pycache__/*.cpython-312.pyc` — cached bytecode for 4 modules
- **NO .py source files** for the actual modules

The `.pyc` files reveal which modules were deleted but remain cached:

| Module | Classes/Functions | Purpose |
|--------|------------------|---------|
| `schema.py` | FleetSchema, Wizard, Task, TaskStatus, EntityType, Relationship, Principle, Entity, get_fleet_schema | Pydantic/dataclass models for fleet knowledge |
| `rule_engine.py` | RuleEngine, Rule, RuleContext, ActionType, create_child_rule_engine | Forward-chaining rule engine with SOUL.md integration |
| `knowledge_graph.py` | KnowledgeGraph, FleetKnowledgeBase, Node, Edge, JsonGraphStore, SQLiteGraphStore | Property graph with JSON and SQLite persistence |
| `child_assistant.py` | ChildAssistant, Decision | Decision support for child wizards (can_i_do_this, who_is_my_family, etc.) |

Git history shows `feat(gofai): add SQLite persistence layer to KnowledgeGraph` — so this was under active development.

### Maturity Assessment (from .pyc + tests)

- **Rule Engine:** Basic forward chaining with keyword matching. Has predefined child-safety and fleet-coordination rules (~15 rules). Functional but simple.
- **Knowledge Graph:** Property graph with CRUD, path finding, lineage tracking, and GraphViz export. JSON + SQLite backends. Reasonably mature.
- **Schema:** Pydantic/dataclass models. Standard data modeling.
- **Child Assistant:** Interactive decision helper. A novel concept for the wizard hierarchy.
- **Tests:** Comprehensive (790 lines). This was being actively developed and tested.

### OSS Alternatives

- **Rule engines:** Durable Rules, PyKnow/Experta, business-rules
- **Knowledge graphs:** NetworkX (simpler), Neo4j (overkill), RDFlib
- **Schema:** Pydantic (already used)

### Recommendation: **RECOVER and FORMALIZE**

1. **URGENT:** Recover the source from git history: `git show <commit>:gofai/schema.py` etc.
2. Package it as `timmy-gofai` with a proper `pyproject.toml`
3. The concept is novel enough to keep — fleet coordination via deterministic rules + a knowledge graph is genuinely useful
4. Consider NetworkX for the graph backend instead of a custom implementation
5. Push it to its own Gitea repo

**Effort:** 2-4 hours (recovery from git), 4-6 hours (formalization)

---

## 7. Hermes Agent (Claude Code / Hermes)

**Path:** `/root/wizards/allegro/hermes-agent/`
**Origin:** `https://github.com/NousResearch/hermes-agent.git` (MIT license)
**Version:** 0.5.0
**Size:** ~26,000 lines of Python (top level only) — a massive codebase

### Current State

This is an upstream open-source project (NousResearch/hermes-agent) with local modifications. Key components:

- `run_agent.py` — 8,548 lines (!) — main agent loop
- `cli.py` — 7,691 lines — interactive CLI
- `hermes_state.py` — 1,623 lines — state management
- `gateway/` — HTTP API gateway for each wizard
- `tools/` — 15+ tool modules (gitea_client, memory, image_generation, MCP, etc.)
- `skills/` — 29 skill directories
- `prose/` — document generation engine
- Custom profiles per wizard

### OSS Duplication Analysis

| Component | Duplicates | Alternative |
|-----------|-----------|-------------|
| `tools/gitea_client.py` | Custom Gitea API wrapper | python-gitea, PyGitea |
| `tools/web_research_env.py` | Custom web search | Already uses exa-py, firecrawl |
| `tools/memory_tool.py` | Custom memory/RAG | Honcho (already an optional dep) |
| `tools/code_execution_tool.py` | Custom code sandbox | E2B, Modal (already optional deps) |
| `gateway/` | Custom HTTP API | FastAPI app (reasonable) |
| `trajectory_compressor.py` | Custom context compression | LangChain summarizers, LlamaIndex |

### Recommendation: **KEEP — it IS the OSS project**

Hermes-agent is itself an open-source project. The right approach is:

- Keep upstream sync working (both `origin` and `gitea` remotes are configured)
- Don't duplicate the gitea_client into burn scripts or heartbeat daemons — use the one in tools/
- Monitor upstream for improvements to tools that are currently custom
- The 8.5k-line run_agent.py is a maintainability concern — but that's an upstream issue

**Effort:** 0 (ongoing maintenance)

---

## 8. Fleet Deployment

### Current State

Each wizard runs as a separate systemd service:

- `hermes-allegro.service` — WorkingDir at allegro's hermes-agent
- `hermes-adagio.service` — WorkingDir at adagio's hermes-agent
- `hermes-ezra.service` — WorkingDir at ezra's (uses allegro's hermes-agent as origin)
- `hermes-bezalel.service` — WorkingDir at bezalel's

Each has its own:

- Copy of hermes-agent (or symlink/clone)
- .venv (separate Python virtual environment)
- home/ directory with SOUL.md, .env, memories, skills
- EnvironmentFile pointing to a per-wizard .env

Docker containers (not managed by compose):

- `gitea` — bare `docker run`
- `strfry` — bare `docker run`

### Issues

1. **No docker-compose.yml** — containers were created with `docker run` and survive only via restart policy
2. **Duplicate venvs** — each wizard has its own .venv (~500MB each = 2.5GB+)
3. **Inconsistent origins** — ezra's hermes-agent origin points to allegro's local copy, not git
4. **No fleet-wide deployment tool** — updates require manual per-wizard action
5. **All run as root**

### OSS Alternatives

| Tool | Fit | Complexity |
|------|-----|-----------|
| docker-compose | Good — defines Gitea, strfry, and could define the agents | Low |
| k3s | Overkill for 5 agents on 1 VPS | High |
| Podman pods | Similar to compose; rootless possible | Medium |
| Ansible | Good for fleet management across VPSes | Medium |
| systemd-nspawn | Lightweight containers | Medium |

### Recommendation: **ADD docker-compose for infrastructure, KEEP systemd for agents**

1. Create `/root/docker-compose.yml` for Gitea + strfry + Ollama (optional)
2. Keep wizard agents as systemd services (they need filesystem access, tool execution, etc.)
3. Create a fleet management script: `fleet.sh {start|stop|restart|status|update} [wizard]`
4. Share a single hermes-agent checkout with per-wizard config (not 5 copies)
5. Long term: consider running the agents in containers too (requires volume mounts for home/)

**Effort:** 4-6 hours (docker-compose + fleet script)
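A starting point for the compose file might look like the sketch below. Image names, tags, ports beyond the defaults, and host volume paths are assumptions — they must be read off the existing `docker inspect` output for the running containers before switching over:

```yaml
# /root/docker-compose.yml — illustrative sketch only; verify images,
# ports, and volume paths against the currently running containers.
services:
  gitea:
    image: gitea/gitea:latest
    restart: unless-stopped
    ports:
      - "3000:3000"   # web UI
      - "2222:22"     # SSH clone access
    volumes:
      - /root/gitea-data:/data          # hypothetical host path

  strfry:
    image: hoytech/strfry:latest        # verify the actual image in use
    restart: unless-stopped
    ports:
      - "7777:7777"
    volumes:
      - /root/strfry-data:/app/strfry-db  # hypothetical host path
```

With this in place, `docker compose up -d` reproduces the infrastructure on a fresh VPS, which the bare `docker run` containers currently cannot.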

---

## 9. Nostr Key Management

**File:** `/root/nostr-relay/keystore.json`

### Current State

A plain JSON file containing nsec (private keys), npub (public keys), and hex equivalents for:

- relay
- allegro
- ezra
- alexander (with the placeholder "ALEXANDER_CONTROLS_HIS_OWN" for the secret)

The keystore:

- Is world-readable (`-rw-r--r--`)
- Contains private keys in cleartext
- Has no encryption
- Has no rotation mechanism
- Is loaded directly as JSON by the bridge and relay scripts

### OSS Alternatives

- **SOPS (Mozilla)** — encrypted secrets in version control
- **age encryption** — simple file encryption
- **Vault (HashiCorp)** — overkill at this scale
- **systemd credentials** — built into systemd 250+
- **NIP-49 encrypted nsec** — Nostr-native key encryption
- **pass / gopass** — Unix password managers

### Recommendation: **FORMALIZE with minimal encryption**

1. `chmod 600 /root/nostr-relay/keystore.json` — **immediate** (5 seconds)
2. Move secrets to per-service EnvironmentFiles (the pattern already used for .env)
3. Consider NIP-49 (password-encrypted nsec) for the keystore
4. Remove the relay private key from the systemd unit file (currently in plaintext in the `[Service]` section!)
5. Never commit keystore.json to git (check .gitignore)

**Effort:** 1-2 hours

---

## 10. Ollama Setup and Model Management

### Current State

- **Service:** `ollama.service` — standard systemd unit, running as the `ollama` user
- **Binary:** `/usr/local/bin/ollama` — standard install
- **Models:** Only `qwen3:4b` (2.5GB) currently loaded
- **Guard:** `/root/wizards/scripts/ollama_guard.py` — custom 55-line script that blocks models >5GB
- **Port:** 11434 (default, localhost only)

### Assessment

The Ollama setup is essentially stock. The only custom component is `ollama_guard.py`, a clever but fragile size guard that:

- Checks local model size before pulling
- Blocks downloads >5GB to protect the VPS
- Is designed to be symlinked ahead of the real `ollama` in PATH

However, it is not actually deployed as a PATH override (the real `ollama` is at `/usr/local/bin/ollama`; the guard sits in `/root/wizards/scripts/`).

### OSS Alternatives

- **Ollama itself** is the standard. No alternative needed.
- **For model management:** LiteLLM proxy, OpenRouter (for offloading large models)
- **For guards:** Ollama may expose a max-model-size environment variable (check whether the current version supports one)

### Recommendation: **KEEP, minor improvements**

1. Actually deploy the guard if you want it (symlink or wrapper)
2. Or just set `OLLAMA_MAX_LOADED_MODELS=1` and use Ollama's native controls
3. Document which models are approved for local use vs. RunPod offload
4. Consider adding Ollama to docker-compose for consistency

**Effort:** 30 minutes

---

## Priority Matrix

| # | Component | Action | Priority | Effort | Impact |
|---|-----------|--------|----------|--------|--------|
| 1 | GOFAI source recovery | Recover from git | CRITICAL | 2h | Source code loss |
| 2 | Nostr bridge source | Decompile/recover .pyc | CRITICAL | 4h | Service loss risk |
| 3 | Keystore permissions | chmod 600 | CRITICAL | 5min | Security |
| 4 | Burn scripts | Archive to /root/archive/ | HIGH | 30min | Cleanliness |
| 5 | Docker-compose | Create for Gitea + strfry | HIGH | 2h | Reproducibility |
| 6 | Fleet script | Create fleet.sh management | HIGH | 3h | Operations |
| 7 | Webhook receiver | Move into hermes-agent repo | MEDIUM | 3h | Maintainability |
| 8 | Heartbeat daemon | Package as timmy-heartbeat | MEDIUM | 5h | Reliability |
| 9 | Ollama guard | Deploy or remove | LOW | 30min | Consistency |
| 10 | Evennia | No action needed | LOW | 0h | Already good |

---

## Appendix: Files Examined

```
/etc/systemd/system/allegro-gitea-webhook.service
/etc/systemd/system/nostr-bridge.service
/etc/systemd/system/nostr-relay.service
/etc/systemd/system/hermes-allegro.service
/etc/systemd/system/hermes-adagio.service
/etc/systemd/system/hermes-ezra.service
/etc/systemd/system/hermes-bezalel.service
/etc/systemd/system/ollama.service
/root/wizards/allegro/gitea_webhook_receiver.py
/root/nostr-relay/main.go
/root/nostr-relay/keystore.json
/root/nostr-relay/__pycache__/dm_bridge_mvp.cpython-312.pyc
/root/wizards/allegro/gofai/ (all files)
/root/wizards/allegro/hermes-agent/pyproject.toml
/root/workspace/timmy-academy/ (typeclasses, commands, web)
/root/burn_*.py (39 files)
/root/wizards/allegro/home/skills/devops/.../heartbeat_daemon.py
/root/wizards/allegro/household-snapshots/scripts/template_checkpoint_heartbeat.py
/root/wizards/scripts/ollama_guard.py
```

---

bin/webhook_health_dashboard.py (new file, 275 lines)

#!/usr/bin/env python3
|
||||
"""
|
||||
Webhook health dashboard for fleet agent endpoints.
|
||||
|
||||
Issue: #855 in Timmy_Foundation/the-nexus
|
||||
|
||||
Probes each configured /health endpoint, persists the last-known-good state to a
|
||||
JSON log, and generates a markdown dashboard in ~/.hermes/burn-logs/.
|
||||
|
||||
Default targets:
|
||||
- bezalel: http://127.0.0.1:8650/health
|
||||
- allegro: http://127.0.0.1:8651/health
|
||||
- ezra: http://127.0.0.1:8652/health
|
||||
- adagio: http://127.0.0.1:8653/health
|
||||
|
||||
Environment overrides:
|
||||
- WEBHOOK_HEALTH_TARGETS="allegro=http://127.0.0.1:8651/health,ezra=http://127.0.0.1:8652/health"
|
||||
- WEBHOOK_HEALTH_TIMEOUT=3
|
||||
- WEBHOOK_STALE_AFTER=300
|
||||
- WEBHOOK_HEALTH_OUTPUT=/custom/webhook-health-latest.md
|
||||
- WEBHOOK_HEALTH_HISTORY=/custom/webhook-health-history.json
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import argparse
|
||||
import json
|
||||
import os
|
||||
import time
|
||||
import urllib.error
|
||||
import urllib.request
|
||||
from dataclasses import asdict, dataclass
|
||||
from pathlib import Path
|
||||
from typing import Any
|
||||
|
||||
DEFAULT_TARGETS = {
|
||||
"bezalel": "http://127.0.0.1:8650/health",
|
||||
"allegro": "http://127.0.0.1:8651/health",
|
||||
"ezra": "http://127.0.0.1:8652/health",
|
||||
"adagio": "http://127.0.0.1:8653/health",
|
||||
}
|
||||
|
||||
DEFAULT_TIMEOUT = float(os.environ.get("WEBHOOK_HEALTH_TIMEOUT", "3"))
|
||||
DEFAULT_STALE_AFTER = int(os.environ.get("WEBHOOK_STALE_AFTER", "300"))
|
||||
DEFAULT_OUTPUT = Path(
|
||||
os.environ.get(
|
||||
"WEBHOOK_HEALTH_OUTPUT",
|
||||
str(Path.home() / ".hermes" / "burn-logs" / "webhook-health-latest.md"),
|
||||
)
|
||||
).expanduser()
|
||||
DEFAULT_HISTORY = Path(
|
||||
os.environ.get(
|
||||
"WEBHOOK_HEALTH_HISTORY",
|
||||
str(Path.home() / ".hermes" / "burn-logs" / "webhook-health-history.json"),
|
||||
)
|
||||
).expanduser()
|
||||
|
||||
|
||||
@dataclass
|
||||
class AgentHealth:
|
||||
name: str
|
||||
url: str
|
||||
http_status: int | None
|
||||
healthy: bool
|
||||
latency_ms: int | None
|
||||
stale: bool
|
||||
last_success_ts: float | None
|
||||
checked_at: float
|
||||
message: str
|
||||
|
||||
def status_icon(self) -> str:
|
||||
if self.healthy:
|
||||
return "🟢"
|
||||
if self.stale:
|
||||
return "🔴"
|
||||
return "🟠"
|
||||
|
||||
def last_success_age_seconds(self) -> int | None:
|
||||
if self.last_success_ts is None:
|
||||
return None
|
||||
return max(0, int(self.checked_at - self.last_success_ts))
|
||||
|
||||
|
||||
def parse_targets(raw: str | None) -> dict[str, str]:
|
||||
if not raw:
|
||||
return dict(DEFAULT_TARGETS)
|
||||
targets: dict[str, str] = {}
|
||||
for chunk in raw.split(","):
|
||||
chunk = chunk.strip()
|
||||
if not chunk:
|
||||
continue
|
||||
if "=" not in chunk:
|
||||
raise ValueError(f"Invalid target spec: {chunk!r}")
|
||||
name, url = chunk.split("=", 1)
|
||||
targets[name.strip()] = url.strip()
|
||||
if not targets:
|
||||
raise ValueError("No valid targets parsed")
|
||||
return targets
|
||||
|
||||
|
||||
def load_history(path: Path) -> dict[str, Any]:
|
||||
if not path.exists():
|
||||
return {"agents": {}, "runs": []}
|
||||
return json.loads(path.read_text(encoding="utf-8"))
|
||||
|
||||
|
||||
def save_history(path: Path, history: dict[str, Any]) -> None:
|
||||
path.parent.mkdir(parents=True, exist_ok=True)
|
||||
path.write_text(json.dumps(history, indent=2, sort_keys=True), encoding="utf-8")
|
||||
|
||||
|
||||
def probe_health(url: str, timeout: float) -> tuple[bool, int | None, int | None, str]:
|
||||
started = time.perf_counter()
|
||||
req = urllib.request.Request(url, headers={"User-Agent": "the-nexus/webhook-health-dashboard"})
|
||||
try:
|
||||
with urllib.request.urlopen(req, timeout=timeout) as resp:
|
||||
body = resp.read(512)
|
||||
latency_ms = int((time.perf_counter() - started) * 1000)
|
||||
status = getattr(resp, "status", None) or 200
|
||||
message = f"HTTP {status}"
|
||||
if body:
|
||||
try:
|
||||
payload = json.loads(body.decode("utf-8", errors="replace"))
|
||||
if isinstance(payload, dict) and payload.get("status"):
|
||||
message = f"HTTP {status} — {payload['status']}"
|
||||
except Exception:
|
||||
pass
|
||||
return 200 <= status < 300, status, latency_ms, message
|
||||
except urllib.error.HTTPError as e:
|
||||
latency_ms = int((time.perf_counter() - started) * 1000)
|
||||
return False, e.code, latency_ms, f"HTTP {e.code}"
|
||||
except urllib.error.URLError as e:
|
||||
latency_ms = int((time.perf_counter() - started) * 1000)
|
||||
return False, None, latency_ms, f"URL error: {e.reason}"
|
||||
except Exception as e:
|
||||
latency_ms = int((time.perf_counter() - started) * 1000)
|
||||
return False, None, latency_ms, f"Probe failed: {e}"
|
||||
|
||||
|
||||
def check_agents(
|
||||
targets: dict[str, str],
|
||||
history: dict[str, Any],
|
||||
timeout: float = DEFAULT_TIMEOUT,
|
||||
stale_after: int = DEFAULT_STALE_AFTER,
|
||||
) -> list[AgentHealth]:
|
||||
checked_at = time.time()
|
||||
results: list[AgentHealth] = []
|
||||
agent_state = history.setdefault("agents", {})
|
||||
|
||||
for name, url in targets.items():
|
||||
state = agent_state.get(name, {})
|
||||
last_success_ts = state.get("last_success_ts")
|
||||
ok, http_status, latency_ms, message = probe_health(url, timeout)
|
||||
if ok:
|
||||
last_success_ts = checked_at
|
||||
stale = False
|
||||
if not ok and last_success_ts is not None:
|
||||
stale = (checked_at - float(last_success_ts)) > stale_after
|
||||
result = AgentHealth(
|
||||
name=name,
|
||||
url=url,
|
||||
http_status=http_status,
|
||||
healthy=ok,
|
||||
latency_ms=latency_ms,
|
||||
stale=stale,
|
||||
last_success_ts=last_success_ts,
|
||||
checked_at=checked_at,
|
||||
message=message,
|
||||
)
|
||||
agent_state[name] = {
|
||||
"url": url,
|
||||
"last_success_ts": last_success_ts,
|
||||
"last_http_status": http_status,
|
||||
"last_message": message,
|
||||
"last_checked_at": checked_at,
|
||||
}
|
||||
results.append(result)
|
||||
|
||||
history.setdefault("runs", []).append(
|
||||
{
|
||||
"checked_at": checked_at,
|
||||
"healthy_count": sum(1 for r in results if r.healthy),
|
||||
"unhealthy_count": sum(1 for r in results if not r.healthy),
|
||||
"agents": [asdict(r) for r in results],
|
||||
}
|
||||
)
|
||||
history["runs"] = history["runs"][-100:]
|
||||
return results
|
||||
|
||||
|
||||
def _format_age(seconds: int | None) -> str:
|
||||
if seconds is None:
|
||||
return "never"
|
||||
if seconds < 60:
|
||||
return f"{seconds}s ago"
|
||||
if seconds < 3600:
|
||||
return f"{seconds // 60}m ago"
|
||||
return f"{seconds // 3600}h ago"
|
||||
|
||||
|
||||
def to_markdown(results: list[AgentHealth], generated_at: float | None = None) -> str:
    generated_at = generated_at or time.time()
    ts = time.strftime("%Y-%m-%d %H:%M:%S UTC", time.gmtime(generated_at))
    healthy = sum(1 for r in results if r.healthy)
    total = len(results)

    lines = [
        f"# Agent Webhook Health Dashboard — {ts}",
        "",
        f"Healthy: {healthy}/{total}",
        "",
        "| Agent | Status | HTTP | Latency | Last success | Endpoint | Notes |",
        "|:------|:------:|:----:|--------:|:------------|:---------|:------|",
    ]
    for result in results:
        http = str(result.http_status) if result.http_status is not None else "—"
        latency = f"{result.latency_ms}ms" if result.latency_ms is not None else "—"
        lines.append(
            "| {name} | {icon} | {http} | {latency} | {last_success} | `{url}` | {message} |".format(
                name=result.name,
                icon=result.status_icon(),
                http=http,
                latency=latency,
                last_success=_format_age(result.last_success_age_seconds()),
                url=result.url,
                message=result.message,
            )
        )

    stale_agents = [r.name for r in results if r.stale]
    if stale_agents:
        lines.extend([
            "",
            "## Stale agents",
            ", ".join(stale_agents),
        ])

    lines.extend([
        "",
        "Generated by `bin/webhook_health_dashboard.py`.",
    ])
    return "\n".join(lines)


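For orientation, with a single healthy agent the generated dashboard looks roughly like this. The agent name, latency, endpoint, and timestamp are made-up illustrative values, and the ✅ icon assumes `status_icon()` returns an emoji (its definition is not shown in this diff):

```markdown
# Agent Webhook Health Dashboard — 2026-04-06 12:00:00 UTC

Healthy: 1/1

| Agent | Status | HTTP | Latency | Last success | Endpoint | Notes |
|:------|:------:|:----:|--------:|:------------|:---------|:------|
| allegro | ✅ | 200 | 41ms | 12s ago | `http://127.0.0.1:8670/health` | ok |

Generated by `bin/webhook_health_dashboard.py`.
```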
def write_dashboard(path: Path, markdown: str) -> None:
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(markdown + "\n", encoding="utf-8")


def parse_args(argv: list[str]) -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Generate webhook health dashboard")
    parser.add_argument("--targets", default=os.environ.get("WEBHOOK_HEALTH_TARGETS"))
    parser.add_argument("--timeout", type=float, default=DEFAULT_TIMEOUT)
    parser.add_argument("--stale-after", type=int, default=DEFAULT_STALE_AFTER)
    parser.add_argument("--output", type=Path, default=DEFAULT_OUTPUT)
    parser.add_argument("--history", type=Path, default=DEFAULT_HISTORY)
    return parser.parse_args(argv)


def main(argv: list[str] | None = None) -> int:
    # Check against None explicitly: an empty argv list is a valid "no flags"
    # invocation and must not fall through to sys.argv.
    args = parse_args(sys.argv[1:] if argv is None else argv)
    targets = parse_targets(args.targets)
    history = load_history(args.history)
    results = check_agents(targets, history, timeout=args.timeout, stale_after=args.stale_after)
    save_history(args.history, history)
    dashboard = to_markdown(results)
    write_dashboard(args.output, dashboard)
    print(args.output)
    print(f"healthy={sum(1 for r in results if r.healthy)} total={len(results)}")
    return 0


if __name__ == "__main__":
    import sys
    raise SystemExit(main(sys.argv[1:]))
93
docs/GHOST_WIZARD_AUDIT.md
Normal file
@@ -0,0 +1,93 @@
# Ghost Wizard Audit — #827

**Audited:** 2026-04-06
**By:** Claude (claude/issue-827)
**Parent Epic:** #822
**Source Data:** #820 (Allegro's fleet audit)

---

## Summary

Per Allegro's audit (#820) and Ezra's confirmation, 7 org members have zero activity.
This document records the audit findings, classifies accounts, and tracks cleanup actions.

---

## Ghost Accounts (TIER 5 — Zero Activity)

These org members have produced 0 issues, 0 PRs, 0 everything.

| Account | Classification | Status |
|---------|---------------|--------|
| `antigravity` | Ghost / placeholder | No assignments, no output |
| `google` | Ghost / service label | No assignments, no output |
| `grok` | Ghost / service label | No assignments, no output |
| `groq` | Ghost / service label | No assignments, no output |
| `hermes` | Ghost / service label | No assignments, no output |
| `kimi` | Ghost / service label | No assignments, no output |
| `manus` | Ghost / service label | No assignments, no output |

**Action taken (2026-04-06):** Scanned all 107 open issues — **zero open issues are assigned to any of these accounts.** No assignment cleanup required.

---

## TurboQuant / Hermes-TurboQuant

Per issue #827: TurboQuant and Hermes-TurboQuant have no config, no token, no gateway.

**Repo audit finding:** No `turboquant/` or `hermes-turboquant/` directories exist anywhere in `the-nexus`. These names appear nowhere in the codebase. There is nothing to archive or flag.

**Status:** Ghost label — never instantiated in this repo.

---

## Active Wizard Roster (for reference)

These accounts have demonstrated real output:

| Account | Tier | Notes |
|---------|------|-------|
| `gemini` | TIER 1 — Elite | 61 PRs created, 33 merged, 6 repos active |
| `allegro` | TIER 1 — Elite | 50 issues created, 31 closed, 24 PRs |
| `ezra` | TIER 2 — Solid | 38 issues created, 26 closed, triage/docs |
| `codex-agent` | TIER 3 — Occasional | 4 PRs, 75% merge rate |
| `claude` | TIER 3 — Occasional | 4 PRs, 75% merge rate |
| `perplexity` | TIER 3 — Occasional | 4 PRs, 3 repos |
| `KimiClaw` | TIER 4 — Silent | 6 assigned, 1 PR |
| `fenrir` | TIER 4 — Silent | 17 assigned, 0 output |
| `bezalel` | TIER 4 — Silent | 3 assigned, 2 created |
| `bilbobagginshire` | TIER 4 — Silent | 5 assigned, 0 output |

---

## Ghost Account Origin Notes

| Account | Likely Origin |
|---------|--------------|
| `antigravity` | Test/throwaway username used in FIRST_LIGHT_REPORT test sessions |
| `google` | Placeholder for Google/Gemini API service routing; `gemini` is the real wizard account |
| `grok` | xAI Grok model placeholder; no active harness |
| `groq` | Groq API service label; `groq_worker.py` exists in codebase but no wizard account needed |
| `hermes` | Hermes VPS infrastructure label; individual wizards (ezra, allegro) are the real accounts |
| `kimi` | Moonshot AI Kimi model placeholder; `KimiClaw` is the real wizard account if active |
| `manus` | Manus AI agent placeholder; no harness configured in this repo |

---

## Recommendations

1. **Do not route work to ghost accounts** — confirmed, no current assignments exist.
2. **`google` account** is redundant with `gemini`; use `gemini` for all Gemini/Google work.
3. **`hermes` account** is redundant with the actual wizard accounts (ezra, allegro); do not assign issues to it.
4. **`kimi` vs `KimiClaw`** — if Kimi work resumes, route to `KimiClaw` not `kimi`.
5. **TurboQuant** — no action needed; not instantiated in this repo.

---

## Cleanup Done

- [x] Scanned all 107 open issues for ghost account assignments → **0 found**
- [x] Searched repo for TurboQuant directories → **none exist**
- [x] Documented ghost vs. real account classification
- [x] Ghost accounts flagged as "do not route" in this audit doc
57
docs/offload-826-audit.md
Normal file
@@ -0,0 +1,57 @@
# Issue #826 Offload Audit — Timmy → Ezra/Bezalel

Date: 2026-04-06

## Summary

Reassigned 27 issues from Timmy to reduce open assignments from 34 → 7.
Target achieved: Timmy now holds <10 open assignments.

## Delegated to Ezra (architecture/scoping) — 19 issues

| Issue | Title |
|-------|-------|
| #876 | [FRONTIER] Integrate Bitcoin/Ordinals Inscription Verification |
| #874 | [NEXUS] Implement Nostr Event Stream Visualization |
| #872 | [NEXUS] Add "Sovereign Health" HUD Mini-map |
| #871 | [NEXUS] Implement GOFAI Symbolic Engine Debugger Overlay |
| #870 | [NEXUS] Interactive Portal Configuration HUD |
| #869 | [NEXUS] Real-time "Fleet Pulse" Synchronization Visualization |
| #868 | [NEXUS] Visualize Vector Retrievals as 3D "Memory Orbs" |
| #867 | [NEXUS] [MIGRATION] Restore Agent Vision POV Camera Toggle |
| #866 | [NEXUS] [MIGRATION] Audit and Restore Spatial Audio from Legacy Matrix |
| #858 | Add failure-mode recovery to Prose engine |
| #719 | [EPIC] Local Bannerlord on Mac |
| #698 | [PANELS] Add heartbeat / morning briefing panel tied to Hermes state |
| #697 | [PANELS] Replace placeholder runtime/cloud panels |
| #696 | [UX] Honest connection-state banner for Timmy |
| #687 | [PORTAL] Restore a wizardly local-first visual shell |
| #685 | [MIGRATION] Preserve legacy the-matrix quality work |
| #682 | [AUDIO] Lyria soundtrack palette for Nexus zones |
| #681 | [MEDIA] Veo/Flow flythrough prototypes for The Nexus |
| #680 | [CONCEPT] Project Genie + Nano Banana concept pack |

## Delegated to Bezalel (security/execution) — 8 issues

| Issue | Title |
|-------|-------|
| #873 | [NEXUS] [PERFORMANCE] Three.js LOD and Texture Audit |
| #857 | Create auto-skill-extraction cron |
| #856 | Implement Prose step type `gitea_api` |
| #854 | Integrate Hermes Prose engine into burn-mode cron jobs |
| #731 | [VALIDATION] Browser smoke + visual proof for Evennia-fed Nexus |
| #693 | [CHAT] Restore visible Timmy chat panel |
| #692 | [UX] First-run onboarding overlay |
| #686 | [VALIDATION] Rebuild browser smoke and visual validation |

## Retained by Timmy (sovereign judgment) — 7 issues

| Issue | Title |
|-------|-------|
| #875 | [NEXUS] Add "Reasoning Trace" HUD Component |
| #837 | [CRITIQUE] Timmy Foundation: Deep Critique & Improvement Report |
| #835 | [PROPOSAL] Prime Time Improvement Report |
| #726 | [EPIC] Make Timmy's Evennia mind palace visible in the Nexus |
| #717 | [PORTALS] Show cross-world presence |
| #709 | [IDENTITY] Make SOUL / Oath panel part of the main interaction loop |
| #675 | [HARNESS] Deterministic context compaction for long local sessions |
266
fleet/fleet-routing.json
Normal file
@@ -0,0 +1,266 @@
{
  "version": 1,
  "generated": "2026-04-06",
  "refs": ["#836", "#204", "#195", "#196"],
  "description": "Canonical fleet routing table. Evaluated agents, routing verdicts, and dispatch rules for the Timmy Foundation task harness.",

  "agents": [
    {
      "id": 27,
      "name": "carnice",
      "gitea_user": "carnice",
      "model": "qwen3.5-9b",
      "tier": "free",
      "location": "Local Metal",
      "description": "Local Hermes agent, fine-tuned on Hermes traces. Runs on local hardware.",
      "primary_role": "code-generation",
      "routing_verdict": "ROUTE TO: code tasks that benefit from Hermes-aligned output. Prefer when local execution is an advantage.",
      "active": true,
      "do_not_route": false,
      "created": "2026-04-04",
      "repo_count": 0,
      "repos": []
    },
    {
      "id": 26,
      "name": "fenrir",
      "gitea_user": "fenrir",
      "model": "openrouter/free",
      "tier": "free",
      "location": "The Wolf Den",
      "description": "Burn night analyst. Free-model pack hunter. Built for backlog triage.",
      "primary_role": "issue-triage",
      "routing_verdict": "ROUTE TO: issue cleanup, label triage, stale PR review.",
      "active": true,
      "do_not_route": false,
      "created": "2026-04-04",
      "repo_count": 0,
      "repos": []
    },
    {
      "id": 25,
      "name": "bilbobagginshire",
      "gitea_user": "bilbobagginshire",
      "model": "ollama",
      "tier": "free",
      "location": "Bag End, The Shire (VPS)",
      "description": "Ollama on VPS. Speaks when spoken to. Prefers quiet. Not for delegated work.",
      "primary_role": "on-request-queries",
      "routing_verdict": "ROUTE TO: background monitoring, status checks, low-priority Q&A. Only on-request — do not delegate autonomously.",
      "active": true,
      "do_not_route": false,
      "created": "2026-04-02",
      "repo_count": 1,
      "repos": ["bilbobagginshire/bilbo-adventures"]
    },
    {
      "id": 24,
      "name": "claw-code",
      "gitea_user": "claw-code",
      "model": "codex",
      "tier": "prepaid",
      "location": "The Harness",
      "description": "OpenClaw bridge. Protocol adapter layer — not a personality. Infrastructure, not a destination.",
      "primary_role": "protocol-bridge",
      "routing_verdict": "DO NOT ROUTE directly. claw-code is the bridge to external Codex agents, not an endpoint. Remove from routing cascade.",
      "active": true,
      "do_not_route": true,
      "do_not_route_reason": "Protocol layer, not an agent endpoint. See #836 evaluation.",
      "created": "2026-04-01",
      "repo_count": 0,
      "repos": []
    },
    {
      "id": 23,
      "name": "substratum",
      "gitea_user": "substratum",
      "model": "unassigned",
      "tier": "unknown",
      "location": "Below the Surface",
      "description": "Infrastructure, deployments, bedrock services. Needs model assignment before activation.",
      "primary_role": "devops",
      "routing_verdict": "DO NOT ROUTE — no model assigned yet. Activate after Epic #196 (Local Model Fleet) assigns a model.",
      "active": false,
      "do_not_route": true,
      "do_not_route_reason": "No model assigned. Blocked on Epic #196.",
      "gap": "Needs model assignment. Track in Epic #196.",
      "created": "2026-03-31",
      "repo_count": 0,
      "repos": []
    },
    {
      "id": 22,
      "name": "allegro-primus",
      "gitea_user": "allegro-primus",
      "model": "unknown",
      "tier": "inactive",
      "location": "The Archive",
      "description": "Original prototype. Museum piece. Preserved for historical reference only.",
      "primary_role": "inactive",
      "routing_verdict": "DO NOT ROUTE — retired from active duty. Preserved only.",
      "active": false,
      "do_not_route": true,
      "do_not_route_reason": "Retired prototype. Historical preservation only.",
      "created": "2026-03-31",
      "repo_count": 1,
      "repos": ["allegro-primus/first-steps"]
    },
    {
      "id": 5,
      "name": "kimi",
      "gitea_user": "kimi",
      "model": "kimi-claw",
      "tier": "cheap",
      "location": "Kimi API",
      "description": "KimiClaw agent. Sidecar-first. Max 1-3 files per task. Fast and cheap for small work.",
      "primary_role": "small-tasks",
      "routing_verdict": "ROUTE TO: small edits, quick fixes, file-scoped changes. Hard limit: never more than 3 files per task.",
      "active": true,
      "do_not_route": false,
      "gap": "Agent description is empty in Gitea profile. Needs enrichment.",
      "created": "2026-03-14",
      "repo_count": 2,
      "repos": ["kimi/the-nexus-fork", "kimi/Timmy-time-dashboard"]
    },
    {
      "id": 20,
      "name": "allegro",
      "gitea_user": "allegro",
      "model": "gemini",
      "tier": "cheap",
      "location": "The Conductor's Stand",
      "description": "Tempo wizard. Triage and dispatch. Owns 5 repos. Keeps the backlog moving.",
      "primary_role": "triage-routing",
      "routing_verdict": "ROUTE TO: task triage, routing decisions, issue organization. Allegro decides who does what.",
      "active": true,
      "do_not_route": false,
      "created": "2026-03-29",
      "repo_count": 5,
      "repos": [
        "allegro/timmy-local",
        "allegro/allegro-checkpoint",
        "allegro/household-snapshots",
        "allegro/adagio-checkpoint",
        "allegro/electra-archon"
      ]
    },
    {
      "id": 19,
      "name": "ezra",
      "gitea_user": "ezra",
      "model": "claude",
      "tier": "prepaid",
      "location": "Hermes VPS",
      "description": "Archivist. Claude-Hermes wizard. 9 repos owned — most in the fleet. Handles complex multi-file and cross-repo work.",
      "primary_role": "documentation",
      "routing_verdict": "ROUTE TO: docs, specs, architecture, complex multi-file work. Escalate here when breadth and precision both matter.",
      "active": true,
      "do_not_route": false,
      "created": "2026-03-29",
      "repo_count": 9,
      "repos": [
        "ezra/wizard-checkpoints",
        "ezra/Timmy-Time-Specs",
        "ezra/escape",
        "ezra/bilbobagginshire",
        "ezra/ezra-environment",
        "ezra/gemma-spectrum",
        "ezra/archon-kion",
        "ezra/bezalel",
        "ezra/hermes-turboquant"
      ]
    },
    {
      "id": 18,
      "name": "bezalel",
      "gitea_user": "bezalel",
      "model": "groq",
      "tier": "free",
      "location": "TestBed VPS — The Forge",
      "description": "Builder, debugger, testbed wizard. Groq-powered, free tier. Strong on PR review and CI.",
      "primary_role": "code-review",
      "routing_verdict": "ROUTE TO: PR review, test writing, debugging, CI fixes.",
      "active": true,
      "do_not_route": false,
      "created": "2026-03-29",
      "repo_count": 1,
      "repos": ["bezalel/forge-log"]
    }
  ],

  "routing_cascade": {
    "description": "Cost-optimized routing cascade — cheapest capable agent first, escalate on complexity.",
    "tiers": [
      {
        "tier": 1,
        "label": "Free",
        "agents": ["fenrir", "bezalel", "carnice"],
        "use_for": "Issue triage, code review, local code generation. Default lane for most tasks."
      },
      {
        "tier": 2,
        "label": "Cheap",
        "agents": ["kimi", "allegro"],
        "use_for": "Small scoped edits (kimi ≤3 files), triage decisions and routing (allegro)."
      },
      {
        "tier": 3,
        "label": "Premium / Escalate",
        "agents": ["ezra"],
        "use_for": "Complex multi-file work, docs, architecture. Escalate only."
      }
    ],
    "notes": [
      "bilbobagginshire: on-request only, not delegated work",
      "claw-code: infrastructure bridge, not a routing endpoint",
      "substratum: inactive until model assigned (Epic #196)",
      "allegro-primus: retired, do not route"
    ]
  },

  "task_type_map": {
    "issue-triage": ["fenrir", "allegro"],
    "code-generation": ["carnice", "ezra"],
    "code-review": ["bezalel"],
    "small-edit": ["kimi"],
    "debugging": ["bezalel", "carnice"],
    "documentation": ["ezra"],
    "architecture": ["ezra"],
    "ci-fixes": ["bezalel"],
    "pr-review": ["bezalel", "fenrir"],
    "triage-routing": ["allegro"],
    "devops": ["substratum"],
    "background-monitoring": ["bilbobagginshire"]
  },

  "gaps": [
    {
      "agent": "substratum",
      "gap": "No model assigned. Cannot route any tasks.",
      "action": "Assign model. Track in Epic #196 (Local Model Fleet)."
    },
    {
      "agent": "kimi",
      "gap": "Gitea agent description is empty. Profile lacks context for automated routing decisions.",
      "action": "Enrich kimi's Gitea profile description."
    },
    {
      "agent": "claw-code",
      "gap": "Listed as agent in routing table but is a protocol bridge, not an endpoint.",
      "action": "Remove from routing cascade. Keep as infrastructure reference only."
    },
    {
      "agent": "fleet",
      "gap": "No model scoring exists. Current routing is based on self-description and repo ownership, not measured output quality.",
      "action": "Run wolf evaluation on active agents (#195) to replace vibes-based routing with data."
    }
  ],

  "next_actions": [
    "Assign model to substratum — Epic #196",
    "Run wolf evaluation on active agents — Issue #195",
    "Remove claw-code from routing cascade — it is infrastructure, not a destination",
    "Enrich kimi's Gitea profile description",
    "Wire fleet-routing.json into workforce-manager.py — Epic #204"
  ]
}
489
help.html
Normal file
@@ -0,0 +1,489 @@
<!DOCTYPE html>
<!--
  THE NEXUS — Help Page
  Refs: #833 (Missing /help page)
  Design: dark space / holographic — matches Nexus design system
-->
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Help — The Nexus</title>
  <link rel="preconnect" href="https://fonts.googleapis.com">
  <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
  <link href="https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@300;400;500;600&family=Orbitron:wght@400;600;700&display=swap" rel="stylesheet">
  <link rel="manifest" href="./manifest.json">
  <style>
    :root {
      --color-bg: #050510;
      --color-surface: rgba(10, 15, 40, 0.85);
      --color-border: rgba(74, 240, 192, 0.2);
      --color-border-bright: rgba(74, 240, 192, 0.5);
      --color-text: #e0f0ff;
      --color-text-muted: #8a9ab8;
      --color-primary: #4af0c0;
      --color-primary-dim: rgba(74, 240, 192, 0.12);
      --color-secondary: #7b5cff;
      --color-danger: #ff4466;
      --color-warning: #ffaa22;
      --font-display: 'Orbitron', sans-serif;
      --font-body: 'JetBrains Mono', monospace;
      --panel-blur: 16px;
      --panel-radius: 8px;
      --transition: 200ms cubic-bezier(0.16, 1, 0.3, 1);
    }

    *, *::before, *::after { box-sizing: border-box; margin: 0; padding: 0; }

    body {
      background: var(--color-bg);
      font-family: var(--font-body);
      color: var(--color-text);
      min-height: 100vh;
      padding: 32px 16px 64px;
    }

    /* === STARFIELD BG === */
    body::before {
      content: '';
      position: fixed;
      inset: 0;
      background:
        radial-gradient(ellipse at 20% 20%, rgba(74,240,192,0.03) 0%, transparent 50%),
        radial-gradient(ellipse at 80% 80%, rgba(123,92,255,0.04) 0%, transparent 50%);
      pointer-events: none;
      z-index: 0;
    }

    .page-wrap {
      position: relative;
      z-index: 1;
      max-width: 720px;
      margin: 0 auto;
    }

    /* === HEADER === */
    .page-header {
      margin-bottom: 32px;
      padding-bottom: 20px;
      border-bottom: 1px solid var(--color-border);
    }

    .back-link {
      display: inline-flex;
      align-items: center;
      gap: 6px;
      font-size: 11px;
      letter-spacing: 0.1em;
      text-transform: uppercase;
      color: var(--color-text-muted);
      text-decoration: none;
      margin-bottom: 20px;
      transition: color var(--transition);
    }

    .back-link:hover { color: var(--color-primary); }

    .page-title {
      font-family: var(--font-display);
      font-size: 28px;
      font-weight: 700;
      letter-spacing: 0.1em;
      color: var(--color-text);
      line-height: 1.2;
    }

    .page-title span { color: var(--color-primary); }

    .page-subtitle {
      margin-top: 8px;
      font-size: 13px;
      color: var(--color-text-muted);
      line-height: 1.5;
    }

    /* === SECTIONS === */
    .help-section {
      background: var(--color-surface);
      border: 1px solid var(--color-border);
      border-radius: var(--panel-radius);
      overflow: hidden;
      margin-bottom: 20px;
      backdrop-filter: blur(var(--panel-blur));
    }

    .section-header {
      padding: 14px 20px;
      border-bottom: 1px solid var(--color-border);
      background: linear-gradient(90deg, rgba(74,240,192,0.04) 0%, transparent 100%);
      display: flex;
      align-items: center;
      gap: 10px;
    }

    .section-icon {
      font-size: 14px;
      opacity: 0.8;
    }

    .section-title {
      font-family: var(--font-display);
      font-size: 12px;
      font-weight: 600;
      letter-spacing: 0.15em;
      text-transform: uppercase;
      color: var(--color-primary);
    }

    .section-body {
      padding: 16px 20px;
    }

    /* === KEY BINDING TABLE === */
    .key-table {
      width: 100%;
      border-collapse: collapse;
    }

    .key-table tr + tr td {
      border-top: 1px solid rgba(74,240,192,0.07);
    }

    .key-table td {
      padding: 8px 0;
      font-size: 12px;
      line-height: 1.5;
      vertical-align: top;
    }

    .key-table td:first-child {
      width: 140px;
      padding-right: 16px;
    }

    .key-group {
      display: flex;
      flex-wrap: wrap;
      gap: 4px;
    }

    kbd {
      display: inline-block;
      font-family: var(--font-body);
      font-size: 10px;
      font-weight: 600;
      letter-spacing: 0.05em;
      background: rgba(74,240,192,0.08);
      border: 1px solid rgba(74,240,192,0.3);
      border-bottom-width: 2px;
      border-radius: 4px;
      padding: 2px 7px;
      color: var(--color-primary);
    }

    .key-desc {
      color: var(--color-text-muted);
    }

    /* === COMMAND LIST === */
    .cmd-list {
      display: flex;
      flex-direction: column;
      gap: 10px;
    }

    .cmd-item {
      display: flex;
      gap: 12px;
      align-items: flex-start;
    }

    .cmd-name {
      min-width: 160px;
      font-size: 12px;
      color: var(--color-primary);
      padding-top: 1px;
    }

    .cmd-desc {
      font-size: 12px;
      color: var(--color-text-muted);
      line-height: 1.5;
    }

    /* === PORTAL LIST === */
    .portal-list {
      display: flex;
      flex-direction: column;
      gap: 8px;
    }

    .portal-item {
      display: flex;
      align-items: center;
      gap: 12px;
      padding: 10px 12px;
      border: 1px solid var(--color-border);
      border-radius: 6px;
      font-size: 12px;
      transition: border-color var(--transition), background var(--transition);
    }

    .portal-item:hover {
      border-color: rgba(74,240,192,0.35);
      background: rgba(74,240,192,0.02);
    }

    .portal-dot {
      width: 8px;
      height: 8px;
      border-radius: 50%;
      flex-shrink: 0;
    }

    .dot-online { background: var(--color-primary); box-shadow: 0 0 6px var(--color-primary); }
    .dot-standby { background: var(--color-warning); box-shadow: 0 0 6px var(--color-warning); }
    .dot-offline { background: var(--color-text-muted); }

    .portal-name {
      font-weight: 600;
      color: var(--color-text);
      min-width: 120px;
    }

    .portal-desc {
      color: var(--color-text-muted);
      flex: 1;
    }

    /* === INFO BLOCK === */
    .info-block {
      font-size: 12px;
      line-height: 1.7;
      color: var(--color-text-muted);
    }

    .info-block p + p {
      margin-top: 10px;
    }

    .info-block a {
      color: var(--color-primary);
      text-decoration: none;
    }

    .info-block a:hover {
      text-decoration: underline;
    }

    .highlight {
      color: var(--color-text);
      font-weight: 500;
    }

    /* === FOOTER === */
    .page-footer {
      margin-top: 32px;
      padding-top: 16px;
      border-top: 1px solid var(--color-border);
      font-size: 11px;
      color: var(--color-text-muted);
      display: flex;
      align-items: center;
      justify-content: space-between;
      flex-wrap: wrap;
      gap: 8px;
    }

    .footer-brand {
      font-family: var(--font-display);
      font-size: 10px;
      letter-spacing: 0.12em;
      color: var(--color-primary);
      opacity: 0.7;
    }
  </style>
</head>
<body>

  <div class="page-wrap">

    <!-- Header -->
    <header class="page-header">
      <a href="/" class="back-link">← Back to The Nexus</a>
      <h1 class="page-title">THE <span>NEXUS</span> — Help</h1>
      <p class="page-subtitle">Navigation guide, controls, and system reference for Timmy's sovereign home-world.</p>
    </header>

    <!-- Navigation Controls -->
    <section class="help-section">
      <div class="section-header">
        <span class="section-icon">◈</span>
        <span class="section-title">Navigation Controls</span>
      </div>
      <div class="section-body">
        <table class="key-table">
          <tr>
            <td><div class="key-group"><kbd>W</kbd><kbd>A</kbd><kbd>S</kbd><kbd>D</kbd></div></td>
            <td class="key-desc">Move forward / left / backward / right</td>
          </tr>
          <tr>
            <td><div class="key-group"><kbd>Mouse</kbd></div></td>
            <td class="key-desc">Look around — click the canvas to capture the pointer</td>
          </tr>
          <tr>
            <td><div class="key-group"><kbd>V</kbd></div></td>
            <td class="key-desc">Toggle navigation mode: Walk → Fly → Orbit</td>
          </tr>
          <tr>
            <td><div class="key-group"><kbd>F</kbd></div></td>
            <td class="key-desc">Enter nearby portal (when portal hint is visible)</td>
          </tr>
          <tr>
            <td><div class="key-group"><kbd>E</kbd></div></td>
            <td class="key-desc">Read nearby vision point (when vision hint is visible)</td>
          </tr>
          <tr>
            <td><div class="key-group"><kbd>Enter</kbd></div></td>
            <td class="key-desc">Focus / unfocus chat input</td>
          </tr>
          <tr>
            <td><div class="key-group"><kbd>Esc</kbd></div></td>
            <td class="key-desc">Release pointer lock / close overlays</td>
          </tr>
        </table>
      </div>
    </section>

    <!-- Timmy Chat Commands -->
    <section class="help-section">
      <div class="section-header">
        <span class="section-icon">⬡</span>
        <span class="section-title">Timmy Chat Commands</span>
      </div>
      <div class="section-body">
        <div class="cmd-list">
          <div class="cmd-item">
            <span class="cmd-name">System Status</span>
            <span class="cmd-desc">Quick action — asks Timmy for a live system health summary.</span>
          </div>
          <div class="cmd-item">
            <span class="cmd-name">Agent Check</span>
            <span class="cmd-desc">Quick action — lists all active agents and their current state.</span>
          </div>
          <div class="cmd-item">
            <span class="cmd-name">Portal Atlas</span>
            <span class="cmd-desc">Quick action — opens the full portal map overlay.</span>
          </div>
          <div class="cmd-item">
            <span class="cmd-name">Help</span>
            <span class="cmd-desc">Quick action — requests navigation assistance from Timmy.</span>
          </div>
          <div class="cmd-item">
            <span class="cmd-name">Free-form text</span>
            <span class="cmd-desc">Type anything in the chat bar and press Enter or → to send. Timmy processes all natural-language input.</span>
          </div>
        </div>
      </div>
    </section>

    <!-- Portal Atlas -->
    <section class="help-section">
      <div class="section-header">
        <span class="section-icon">🌐</span>
        <span class="section-title">Portal Atlas</span>
      </div>
      <div class="section-body">
        <div class="info-block">
          <p>Portals are gateways to external systems and game-worlds. Walk up to a glowing portal in the Nexus and press <span class="highlight"><kbd>F</kbd></span> to activate it, or open the <span class="highlight">Portal Atlas</span> (top-right button) for a full map view.</p>
          <p>Portal status indicators:</p>
        </div>
        <div class="portal-list" style="margin-top:14px;">
          <div class="portal-item">
            <span class="portal-dot dot-online"></span>
            <span class="portal-name">ONLINE</span>
            <span class="portal-desc">Portal is live and will redirect immediately on activation.</span>
          </div>
          <div class="portal-item">
            <span class="portal-dot dot-standby"></span>
            <span class="portal-name">STANDBY</span>
            <span class="portal-desc">Portal is reachable but destination system may be idle.</span>
          </div>
          <div class="portal-item">
            <span class="portal-dot dot-offline"></span>
            <span class="portal-name">OFFLINE / UNLINKED</span>
            <span class="portal-desc">Destination not yet connected. Activation shows an error card.</span>
          </div>
        </div>
      </div>
    </section>

<!-- HUD Panels -->
|
||||
<section class="help-section">
|
||||
<div class="section-header">
|
||||
<span class="section-icon">▦</span>
|
||||
<span class="section-title">HUD Panels</span>
|
||||
</div>
|
||||
<div class="section-body">
|
||||
<div class="cmd-list">
|
||||
<div class="cmd-item">
|
||||
<span class="cmd-name">Symbolic Engine</span>
|
||||
<span class="cmd-desc">Live feed from Timmy's rule-based reasoning layer.</span>
|
||||
</div>
|
||||
<div class="cmd-item">
|
||||
<span class="cmd-name">Blackboard</span>
|
||||
<span class="cmd-desc">Shared working memory used across all cognitive subsystems.</span>
|
||||
</div>
|
||||
<div class="cmd-item">
|
||||
<span class="cmd-name">Symbolic Planner</span>
|
||||
<span class="cmd-desc">Goal decomposition and task sequencing output.</span>
|
||||
</div>
|
||||
<div class="cmd-item">
|
||||
<span class="cmd-name">Case-Based Reasoner</span>
|
||||
<span class="cmd-desc">Analogical reasoning — matches current situation to past cases.</span>
|
||||
</div>
|
||||
<div class="cmd-item">
|
||||
<span class="cmd-name">Neuro-Symbolic Bridge</span>
|
||||
<span class="cmd-desc">Translation layer between neural inference and symbolic logic.</span>
|
||||
</div>
|
||||
<div class="cmd-item">
|
||||
<span class="cmd-name">Meta-Reasoning</span>
|
||||
<span class="cmd-desc">Timmy reflecting on its own thought process and confidence.</span>
|
||||
</div>
|
||||
<div class="cmd-item">
|
||||
<span class="cmd-name">Sovereign Health</span>
|
||||
<span class="cmd-desc">Core vitals: memory usage, heartbeat interval, alert flags.</span>
|
||||
</div>
|
||||
<div class="cmd-item">
|
||||
<span class="cmd-name">Adaptive Calibrator</span>
|
||||
<span class="cmd-desc">Live tuning of response thresholds and behavior weights.</span>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</section>
|
||||
|
||||
<!-- System Info -->
|
||||
<section class="help-section">
|
||||
<div class="section-header">
|
||||
<span class="section-icon">◉</span>
|
||||
<span class="section-title">System Information</span>
|
||||
</div>
|
||||
<div class="section-body">
|
||||
<div class="info-block">
|
||||
<p>The Nexus is Timmy's <span class="highlight">canonical sovereign home-world</span> — a local-first 3D space that serves as both a training ground and a live visualization surface for the Timmy AI system.</p>
|
||||
<p>The WebSocket gateway (<code>server.py</code>) runs on port <span class="highlight">8765</span> and bridges Timmy's cognition layer, game-world connectors, and the browser frontend. The <span class="highlight">HERMES</span> indicator in the HUD shows live connectivity status.</p>
|
||||
<p>Source code and issue tracker: <a href="https://forge.alexanderwhitestone.com/Timmy_Foundation/the-nexus" target="_blank" rel="noopener noreferrer">Timmy_Foundation/the-nexus</a></p>
|
||||
</div>
|
||||
</div>
|
||||
</section>
|
||||
|
||||
<!-- Footer -->
|
||||
<footer class="page-footer">
|
||||
<span class="footer-brand">THE NEXUS</span>
|
||||
<span>Questions? Speak to Timmy in the chat bar on the main world.</span>
|
||||
</footer>
|
||||
|
||||
</div>
|
||||
|
||||
</body>
|
||||
</html>

141 operation-get-a-job/README.md Normal file
@@ -0,0 +1,141 @@

# Operation Get A Job — Master Plan

## Mission Statement

Monetize the engineering capability of a production AI agent fleet to fund infrastructure expansion. Alexander Whitestone handles the last human mile — meetings, contracts, and client relationships. The fleet handles everything else.

## The Core Thesis

We are not a solo freelancer. We are a firm with a human principal and a fleet of five autonomous AI engineers that ship production code 24/7. This is a force multiplier that no traditional consultancy can match.

---

## Phase 1: Foundation (Week 1-2)

### Entity & Legal
- [ ] Form Wyoming LLC (see entity-setup.md)
- [ ] Open Mercury business banking account
- [ ] Obtain EIN from IRS (online, instant)
- [ ] Secure E&O insurance policy (~$150/mo)
- [ ] Set up invoicing (Stripe or Invoice Ninja)
- [ ] Draft master services agreement (MSA) template
- [ ] Draft statement of work (SOW) template

### Brand & Presence
- [ ] Register domain (alexanderwhitestone.com or firm name)
- [ ] Deploy portfolio site (static site from portfolio.md content)
- [ ] Set up professional email (hello@domain)
- [ ] Create LinkedIn company page
- [ ] Create Upwork agency profile
- [ ] Prepare 60-second elevator pitch

### Internal Readiness
- [ ] Document fleet capabilities inventory
- [ ] Establish client onboarding workflow
- [ ] Set up project tracking (Gitea issues or similar)
- [ ] Create secure client communication channels
- [ ] Test end-to-end delivery: inquiry → proposal → delivery → invoice

---

## Phase 2: Pipeline Building (Week 2-4)

### Outreach Channels (Priority Order)
1. **Upwork** — Post agency profile, bid on 5-10 relevant jobs/week
2. **LinkedIn** — Direct outreach to CTOs/VPs Eng at Series A-C startups
3. **Twitter/X** — Ship in public, engage AI/DevOps communities
4. **Discord** — AI builder communities, offer value before pitching
5. **Direct Email** — Targeted cold outreach to companies with known pain points
6. **Toptal/Gun.io** — Apply to premium freelance networks
7. **Referrals** — Ask every contact for warm intros

### Target Client Profiles
- **Startup CTO** — Needs infrastructure but can't hire a full platform team
- **AI Company** — Needs agent security, guardrails, or fleet management
- **Enterprise Innovation Lab** — Wants to pilot autonomous agent workflows
- **DevOps-Light Company** — Has engineers but no CI/CD, no automation
- **Crypto/Web3 Project** — Needs sovereign infrastructure, self-hosted tooling

### Weekly Cadence
- Monday: 10 new outreach messages
- Tuesday-Thursday: Follow up on open threads, deliver proposals
- Friday: Review pipeline, update portfolio, ship public content

---

## Phase 3: First Revenue (Week 3-6)

### Target: $5k-15k first month
- Land 1-2 Tier 3 engagements (automation/research, $5-10k each)
- Use these as case studies for Tier 1/2 upsells
- Deliver fast, over-deliver on quality

### Pricing Strategy
- Lead with project pricing (clients prefer predictability)
- Hourly only for advisory/consulting calls
- Always bill as the firm, never as "me"
- Net-15 payment terms, 50% upfront for new clients

---

## Phase 4: Scale (Month 2-3)

### Revenue Target: $20-40k/month
- Move toward retainer relationships ($5-15k/mo per client)
- Build recurring revenue base
- Hire subcontractors for overflow (other AI-native engineers)
- Invest profits in hardware (GPUs, additional VPS capacity)

### Reinvestment Priority
1. More compute (local inference capacity)
2. Additional agent instances
3. Premium tooling subscriptions
4. Marketing/content production

---

## Phase 5: Moat Building (Month 3-6)

- Publish open-source tools from client work (with permission)
- Build public reputation through conference talks / podcast appearances
- Develop proprietary frameworks that lock in competitive advantage
- Establish the firm as THE go-to for autonomous agent infrastructure

---

## Key Metrics to Track

| Metric | Week 1 | Month 1 | Month 3 |
|--------|--------|---------|---------|
| Outreach sent | 20 | 80+ | 200+ |
| Proposals sent | 3 | 10+ | 25+ |
| Clients signed | 0 | 2-3 | 5-8 |
| Revenue | $0 | $10-15k | $30-50k |
| Pipeline value | $10k | $50k+ | $150k+ |

---

## Decision Rules

- Any project under $2k: decline (not worth the context switching)
- Any project requiring on-site: decline unless >$500/hr
- Any project with unclear scope: require a paid discovery phase first
- Any client who won't sign the MSA: walk away
- Any client who wants to hire "just the human": explain the model or walk

---

## Files in This Package

1. `README.md` — This file (master plan)
2. `entity-setup.md` — Wyoming LLC formation checklist
3. `service-offerings.md` — What we sell (3 tiers + packages)
4. `portfolio.md` — What the fleet has built
5. `outreach-templates.md` — 5 cold outreach templates
6. `proposal-template.md` — Professional proposal template
7. `rate-card.md` — Detailed rate card

---

*Last updated: April 2026*
*Operation Get A Job v1.0*

203 operation-get-a-job/entity-setup.md Normal file
@@ -0,0 +1,203 @@

# Entity Setup — Wyoming LLC Formation Checklist

## Why Wyoming?

- No state income tax
- Strong privacy protections (no public member disclosure required)
- Low annual fees ($60/year registered agent + $60 annual report)
- Business-friendly courts
- Fast online filing

---

## Step 1: Choose Your LLC Name

- [ ] Decide on a firm name (suggestions below)
- [ ] Search Wyoming Secretary of State name availability
  - Link: https://wyobiz.wyo.gov/Business/FilingSearch.aspx
- [ ] Ensure a matching domain is available

### Name Suggestions
- Whitestone Engineering LLC
- Whitestone Labs LLC
- Hermes Systems LLC
- Whitestone & Fleet LLC
- Sovereign Stack LLC

---

## Step 2: Appoint a Registered Agent

You need a Wyoming registered agent (a physical address in WY for legal mail).

### Recommended Registered Agent Services
- **Wyoming Registered Agent LLC** — $60/year (cheapest, reliable)
  - Link: https://www.wyomingagents.com
- **Northwest Registered Agent** — $125/year (premium service)
  - Link: https://www.northwestregisteredagent.com
- **ZenBusiness** — $199/year (bundled with formation)
  - Link: https://www.zenbusiness.com

**Recommendation:** Wyoming Registered Agent LLC at $60/year. No frills, gets the job done.

---

## Step 3: File Articles of Organization

- [ ] File online with the Wyoming Secretary of State
  - Link: https://wyobiz.wyo.gov/Business/FilingSearch.aspx
  - Click "File a New Business"
- [ ] Filing fee: **$100** (online) or $102 (mail)
- [ ] Processing time: 1-2 business days (online), 2-3 weeks (mail)

### Information Needed
- LLC name
- Registered agent name and address
- Organizer name and address (can be the registered agent)
- Management structure: Member-managed (choose this)

---

## Step 4: Get Your EIN (Employer Identification Number)

- [ ] Apply online with the IRS (free, instant)
  - Link: https://www.irs.gov/businesses/small-businesses-self-employed/apply-for-an-employer-identification-number-ein-online
- [ ] Available Monday-Friday, 7am-10pm Eastern
- [ ] You'll get your EIN immediately upon completion
- [ ] Download and save the confirmation letter (CP 575)

---

## Step 5: Draft Operating Agreement

- [ ] Create a single-member LLC operating agreement
- [ ] This is not filed with the state but is essential for:
  - Bank account opening
  - Liability protection (prevents piercing of the corporate veil)
  - Tax elections

### Free Template Sources
- Northwest Registered Agent provides one free
- LawDepot: https://www.lawdepot.com
- Or have an attorney draft one ($300-500)

---

## Step 6: Open Business Bank Account

### Recommended: Mercury Banking
- Link: https://mercury.com
- [ ] Apply online (takes 1-3 business days)
- [ ] Documents needed:
  - EIN confirmation (CP 575)
  - Articles of Organization
  - Operating Agreement
  - Government-issued ID
- [ ] Benefits:
  - No monthly fees
  - No minimum balance
  - API access for automation
  - Virtual debit cards
  - Built-in invoicing
  - Treasury for idle cash

### Alternative: Relay Financial
- Link: https://relayfi.com
- Similar features, also startup-friendly

---

## Step 7: Set Up Invoicing & Payments

### Option A: Stripe (Recommended)
- [ ] Create a Stripe account linked to Mercury
- [ ] Set up Stripe Invoicing
- [ ] Accept ACH (lower fees) and credit cards
- Fees: 2.9% + 30¢ (card), 0.8% capped at $5 (ACH)
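
At invoice sizes typical for this plan, the ACH-vs-card difference is significant. A quick sketch of the fee math, using only the rates quoted above (illustrative arithmetic, not Stripe's API):

```python
def stripe_fee(amount: float, method: str) -> float:
    """Estimate Stripe processing fees using the rates quoted above."""
    if method == "card":
        return round(amount * 0.029 + 0.30, 2)      # 2.9% + 30 cents
    if method == "ach":
        return round(min(amount * 0.008, 5.00), 2)  # 0.8%, capped at $5
    raise ValueError(f"unknown method: {method}")

# On a $5,000 invoice:
print(stripe_fee(5000, "card"))  # 145.3
print(stripe_fee(5000, "ach"))   # 5.0
```

On a $5,000 invoice, ACH saves about $140 per payment, which is why the checklist prioritizes it.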

### Option B: Invoice Ninja (Self-Hosted)
- [ ] Deploy on your VPS (you already have the infrastructure)
- [ ] Connect to Stripe for payment processing
- [ ] Full control, no SaaS fees

---

## Step 8: Get E&O Insurance (Errors & Omissions)

This protects you if a client claims your work caused them harm.

### Recommended Providers
- **Hiscox** — ~$100-150/month for tech consulting
  - Link: https://www.hiscox.com
- **Hartford** — Similar pricing
  - Link: https://www.thehartford.com
- **Embroker** — Tech-focused, may be cheaper
  - Link: https://www.embroker.com

### Coverage to Get
- [ ] Professional Liability / E&O: $1M per occurrence / $2M aggregate
- [ ] General Liability: $1M per occurrence / $2M aggregate
- [ ] Cyber Liability: Optional but recommended given the AI work

**Budget: ~$150/month ($1,800/year)**

---

## Step 9: Tax Setup

- [ ] Elect S-Corp taxation (Form 2553) if revenue exceeds ~$40k/year
  - Saves on self-employment tax
  - Must pay yourself a "reasonable salary" via payroll
  - Use Gusto ($40/mo) or similar for payroll
- [ ] Set aside 30% of revenue for taxes quarterly
- [ ] File estimated quarterly taxes (Form 1040-ES)
- [ ] Get a CPA familiar with LLCs ($200-500/year for filing)

### Recommended CPA Services
- Bench.co — Bookkeeping + tax filing ($300-500/mo)
- Collective.com — Designed for solo businesses ($349/mo, includes S-Corp)
- Local CPA — Shop around, $1-2k/year for everything

---

## Step 10: Professional Presence

- [ ] Get a business phone number (Google Voice — free, or OpenPhone — $15/mo)
- [ ] Set up professional email (Google Workspace $6/mo or self-hosted)
- [ ] Order business cards (optional, Moo.com or similar)
- [ ] Create LinkedIn company page
- [ ] Update personal LinkedIn with firm title (Managing Partner / Principal)

---

## Total Startup Costs Estimate

| Item | Cost |
|------|------|
| Wyoming LLC filing | $100 |
| Registered agent (annual) | $60 |
| EIN | Free |
| Mercury bank account | Free |
| E&O insurance (first month) | $150 |
| Domain + email | $12 + $6/mo |
| **Total to launch** | **~$330** |
| **Monthly ongoing** | **~$160/mo** |

---

## Timeline

| Day | Action |
|-----|--------|
| Day 1 | File LLC + order registered agent |
| Day 2-3 | Receive LLC confirmation |
| Day 3 | Get EIN (same day) |
| Day 3 | Apply for Mercury account |
| Day 4-5 | Mercury approved |
| Day 5 | Set up Stripe, get insurance quote |
| Day 6-7 | Insurance bound, invoicing live |
| **Day 7** | **Ready to bill clients** |

---

*You can go from zero to invoicing in under a week. Don't let entity setup be a blocker — you can start conversations immediately and have the entity ready before you need to send the first invoice.*

216 operation-get-a-job/outreach-templates.md Normal file
@@ -0,0 +1,216 @@

# Outreach Templates

## How to Use These Templates

- Replace everything in [BRACKETS] with your specific details
- Keep messages concise — busy people don't read walls of text
- Always lead with value, not credentials
- Follow up once after 3-5 days if no response, then move on
- Track all outreach in a spreadsheet (date, platform, response, status)

---

## Template 1: Upwork Proposal

**Use for:** Responding to job postings related to AI agents, DevOps, automation, LLM infrastructure

---

Hi [CLIENT NAME],

I read your posting about [SPECIFIC REQUIREMENT FROM JOB POST]. This is exactly what my firm does day in, day out.

We're a small engineering firm that runs a fleet of five autonomous AI agents in production. Not demos — real agents running as systemd services, shipping code to a 43-repo forge, executing 15-minute autonomous work cycles 24/7. We built the orchestration framework (Hermes), the security layer, and the local LLM inference stack ourselves.

For your project specifically:

- [SPECIFIC THING THEY NEED #1] — We've built [RELEVANT THING YOU'VE DONE]
- [SPECIFIC THING THEY NEED #2] — We can deliver this using [YOUR APPROACH]
- [SPECIFIC THING THEY NEED #3] — Our timeline estimate is [X WEEKS]

I'd suggest a [STARTER/PROFESSIONAL/CUSTOM] engagement at [$PRICE] with [TIMELINE]. Happy to do a 30-minute call to scope it properly.

Portfolio: [YOUR PORTFOLIO URL]

Best,
Alexander Whitestone
Whitestone Engineering

---

## Template 2: LinkedIn Direct Message

**Use for:** Cold outreach to CTOs, VPs of Engineering, Heads of AI/ML at startups (Series A-C)

---

Hi [FIRST NAME],

I noticed [COMPANY] is [SPECIFIC OBSERVATION — hiring for AI roles / launching an AI feature / scaling infrastructure]. Congrats on [RECENT MILESTONE IF APPLICABLE].

Quick context: I run an engineering firm with a fleet of autonomous AI agents that build production infrastructure. We handle agent deployment, security hardening, and automation for companies that want AI systems that actually work in production, not just in demos.

We recently [RELEVANT ACCOMPLISHMENT — e.g., "deployed a multi-agent fleet with 3,000+ tests and local LLM inference" or "built a conscience validation system for AI safety"].

Would it be useful to chat for 15 minutes about [SPECIFIC PAIN POINT YOU THINK THEY HAVE]? No pitch — just want to see if there's a fit.

— Alexander

---

## Template 3: Twitter/X DM or Reply

**Use for:** Engaging with people posting about AI agent challenges, DevOps pain, or LLM infrastructure problems

---

### Version A: Reply to a post about AI agent problems

[THEIR NAME] — we solved this exact problem. We run 5 autonomous agents in production (systemd services, 15-min burn cycles, persistent memory). The key insight was [SPECIFIC TECHNICAL INSIGHT RELEVANT TO THEIR POST].

Happy to share our approach if useful. We built an open orchestration framework that handles [RELEVANT CAPABILITY].

---

### Version B: DM after engaging with their content

Hey [FIRST NAME] — been following your posts on [TOPIC]. Really resonated with your point about [SPECIFIC THING THEY SAID].

We're running a production fleet of AI agents and have solved a lot of the problems you're describing. Built our own framework (Hermes) for agent orchestration, security, and multi-platform deployment.

Not trying to sell anything — just think there might be useful knowledge exchange. Down to chat?

---

### Version C: Cold DM to potential client

Hey [FIRST NAME] — saw [COMPANY] is working on [WHAT THEY'RE BUILDING]. My firm builds production AI agent infrastructure — fleet orchestration, local LLM stacks, agent security. We run 5 agents 24/7 on our own infra.

Would love to show you what we've built. Might save your team months. 15 min call?

---

## Template 4: Discord Community Post / DM

**Use for:** AI builder communities, DevOps communities, indie hacker communities

---

### Version A: Community post (value-first)

Been running a fleet of 5 autonomous AI agents in production for a while now, wanted to share some lessons learned:

1. **Persistent memory matters more than model quality.** An agent with good memory and a decent model outperforms a genius model with no context.

2. **Security can't be an afterthought.** We built a conscience validation layer after discovering [VAGUE REFERENCE TO REAL INCIDENT]. Now every agent action goes through guardrails.

3. **Local inference is viable for most tasks.** We run Gemma via Ollama for [X]% of agent operations. Cloud APIs are the fallback, not the default.

4. **Systemd > Docker for single-machine agent fleets.** Hot take, but the simplicity wins when you're managing 5 agents on one box.

Full system: 43 repos, 3,000+ tests, multi-platform gateway (Telegram/Discord/Slack), webhook CI/CD.

Happy to answer questions or go deeper on any of these.

---

### Version B: DM to someone asking for help

Hey! Saw your question about [THEIR QUESTION]. We've built exactly this — [BRIEF DESCRIPTION OF YOUR RELEVANT SYSTEM].

The short answer: [HELPFUL TECHNICAL ANSWER].

If you want, I can share more details about our setup. We also do this professionally if you ever need hands-on help deploying something similar.

---

## Template 5: Direct Cold Email

**Use for:** Targeted outreach to companies you've researched that have a clear need

---

**Subject:** [COMPANY]'s [SPECIFIC CHALLENGE] — solved it, can show you how

Hi [FIRST NAME],

I'm Alexander Whitestone, principal at Whitestone Engineering. We build production AI agent infrastructure — the kind that runs 24/7, ships real code, and doesn't break.

I'm reaching out because [SPECIFIC REASON — e.g., "I saw your job posting for a platform engineer to build AI agent tooling" / "your blog post about scaling LLM operations mentioned exactly the problems we solve" / "a mutual contact mentioned you're building an AI agent product"].

**What we've built (and can build for you):**

- A fleet of 5 autonomous AI agents running as systemd services, completing 15-minute autonomous work cycles
- Custom orchestration framework with persistent memory, a skills system, and a multi-platform gateway
- Local LLM inference stack (zero external API dependency for core operations)
- Agent security layer with jailbreak resistance and conscience validation (3,000+ tests)
- Self-hosted forge with 43 repos and webhook-driven CI/CD

**Why this matters for [COMPANY]:**

[2-3 sentences about how your capabilities map to their specific needs. Be concrete.]

I'm not looking to send you a generic pitch deck. I'd rather spend 20 minutes on a call understanding your specific situation and telling you honestly whether we can help.

Available [DAY/TIME] or [DAY/TIME] this week. Or just reply with what works.

Best,
Alexander Whitestone
Principal, Whitestone Engineering
[EMAIL]
[PHONE — optional]
[PORTFOLIO URL]

---

## Follow-Up Templates

### Follow-Up #1 (3-5 days after initial outreach)

Hi [FIRST NAME],

Following up on my note from [DAY]. I know inboxes are brutal.

The one-line version: we build production AI agent infrastructure, and I think we can help [COMPANY] with [SPECIFIC THING].

Worth a 15-minute chat? If not, no worries — happy to stay in touch for when the timing is better.

— Alexander

---

### Follow-Up #2 (7-10 days after Follow-Up #1, final attempt)

Hi [FIRST NAME],

Last note from me on this — don't want to be that person.

If [SPECIFIC CHALLENGE] is still on your radar, we're here. If the timing isn't right, totally understand.

Either way, I write about AI agent operations occasionally. Happy to share if that's useful.

Best,
Alexander

---

## Outreach Tracking Spreadsheet Columns

| Date | Platform | Contact Name | Company | Message Type | Response? | Follow-Up Date | Status | Notes |
|------|----------|--------------|---------|--------------|-----------|----------------|--------|-------|
| | | | | | | | | |

### Status Options
- Sent
- Responded — Interested
- Responded — Not Now
- Responded — Not Interested
- Meeting Scheduled
- Proposal Sent
- Won
- Lost
- No Response

---

*Remember: outreach is a numbers game. Aim for at least 10 quality touches per week. One in ten will respond. One in three responses will take a meeting. One in three meetings will become a client. That works out to roughly 90-100 outreach messages to land one client. Adjust volume accordingly.*
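
The funnel arithmetic in that closing note can be sanity-checked in a few lines (illustrative only; the rates are the rough estimates stated above):

```python
# Rough conversion rates from the note above
response_rate = 1 / 10  # one in ten messages gets a response
meeting_rate = 1 / 3    # one in three responses takes a meeting
close_rate = 1 / 3      # one in three meetings becomes a client

# Messages needed per signed client = inverse of the end-to-end rate
messages_per_client = 1 / (response_rate * meeting_rate * close_rate)
print(round(messages_per_client))  # 90
```

At 10 touches per week, that implies roughly one new client per couple of months until the rates improve.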

182 operation-get-a-job/portfolio.md Normal file
@@ -0,0 +1,182 @@

# Portfolio — What We've Built
|
||||
|
||||
## About Whitestone Engineering
|
||||
|
||||
We are a human-led engineering firm augmented by a fleet of five autonomous AI agents. Our principal, Alexander Whitestone, architects systems and directs operations. The fleet — Allegro, Adagio, Ezra, Bezalel, and Bilbobagginshire — builds, tests, and ships production code autonomously.
|
||||
|
||||
This is not a demo. This is not a prototype. Everything below is running in production.
|
||||
|
||||
---
|
||||
|
||||
## The Fleet
|
||||
|
||||
### Agent Roster
|
||||
|
||||
| Agent | Role | Specialization |
|
||||
|-------|------|---------------|
|
||||
| **Allegro** | Lead Engineer | Fast-paced development, feature shipping |
|
||||
| **Adagio** | Quality & Review | Careful analysis, code review, testing |
|
||||
| **Ezra** | Research & Analysis | Technical research, intelligence synthesis |
|
||||
| **Bezalel** | Infrastructure | System administration, deployment, DevOps |
|
||||
| **Bilbobagginshire** | Exploration | Novel approaches, creative problem-solving |
|
||||
|
||||
All agents run as systemd services on dedicated infrastructure, operating in autonomous 15-minute burn cycles around the clock.
|
||||
|
||||
---
|
||||
|
||||
## Production Systems
|
||||
|
||||
### 1. Hermes Agent Framework
|
||||
**Custom-built multi-agent orchestration platform**
|
||||
|
||||
- Persistent memory system — agents retain context across sessions
|
||||
- Skills framework — modular capability system for agent specialization
|
||||
- Cron scheduling — autonomous task execution on configurable intervals
|
||||
- Multi-platform gateway — single agent, multiple communication channels:
|
||||
- Telegram
|
||||
- Discord
|
||||
- Slack
|
||||
- Custom webhook endpoints
|
||||
- Burn-mode operations — 15-minute autonomous work cycles
|
||||
- Inter-agent communication and task delegation
|
||||
|
||||
**Tech:** Python, systemd, SQLite/PostgreSQL, REST APIs
|
||||
|
||||
---
|
||||
|
||||
### 2. Self-Hosted Code Forge (Gitea)
|
||||
**Sovereign development infrastructure**
|
||||
|
||||
- 43 active repositories
|
||||
- 16 organization members (human + AI agents)
|
||||
- Full Git workflow with branch protection and review
|
||||
- Webhook-driven CI/CD pipeline triggering automated builds and deploys
|
||||
- Issue tracking integrated with agent task assignment
|
||||
- Running at forge.alexanderwhitestone.com
|
||||
|
||||
**Tech:** Gitea, Git, webhooks, nginx, Let's Encrypt
|
||||
|
||||
---
|
||||
|
||||
### 3. Agent Security & Conscience System
|
||||
**Production AI safety infrastructure**
|
||||
|
||||
- Conscience validation layer — ethical guardrails enforced at runtime
|
||||
- Jailbreak resistance — tested against known attack vectors
|
||||
- Crisis detection — automated identification and escalation of safety events
|
||||
- Audit logging — full traceability of agent decisions and actions
|
||||
- 3,000+ automated tests covering security and behavioral boundaries
|
||||
|
||||
**Tech:** Python, custom validation framework, pytest
|
||||
|
||||
---
|
||||
|
||||
### 4. Local LLM Inference Stack
|
||||
**Sovereign AI — no external API dependency**
|
||||
|
||||
- Ollama deployment with Gemma model family
|
||||
- Local inference for sensitive operations
|
||||
- Fallback architecture — local models for availability, cloud for capability
|
||||
- Reduced operational costs vs. pure API consumption
|
||||
- Full data sovereignty — nothing leaves the infrastructure
|
||||
|
||||
**Tech:** Ollama, Gemma, REST API, systemd
|
||||
|
||||
---
|
||||
|
||||
### 5. Nostr Relay (NIP-29)
|
||||
**Decentralized sovereign communications**
|
||||
|
||||
- NIP-29 compliant group relay
|
||||
- Censorship-resistant communication backbone
|
||||
- Agent-to-agent messaging over decentralized protocol
|
||||
- No dependency on corporate communication platforms
|
||||
|
||||
**Tech:** Nostr protocol, Go/Rust relay implementation, WebSocket
|
||||
|
||||
---
|
||||
|
||||
### 6. GOFAI Hybrid Neuro-Symbolic Reasoning
|
||||
**Beyond pattern matching — structured reasoning**
|
||||
|
||||
- Classic AI (GOFAI) techniques combined with neural approaches
|
||||
- Symbolic reasoning for audit trails and explainability
|
||||
- Rule-based decision systems with LLM-powered natural language interface
|
||||
- Deterministic + probabilistic hybrid for critical operations
|
||||
|
||||
**Tech:** Python, custom symbolic engine, LLM integration

---

### 7. Evennia MUD with Custom Audit Typeclasses

**Interactive environment with full audit capabilities**

- Custom typeclass system for object behavior tracking
- Full audit trail of all interactions and state changes
- Extensible framework for simulation and testing
- Used internally for agent training and scenario modeling

**Tech:** Evennia (Python/Django), Twisted, custom typeclasses

---

### 8. Webhook-Driven CI/CD Pipeline

**Automated build, test, and deploy**

- Gitea webhook triggers on push/PR/merge
- Automated test execution (3,000+ test suite)
- Build and deployment automation
- Status reporting back to issues and PRs
- Zero-manual-intervention deployment for passing builds

**Tech:** Gitea webhooks, shell automation, systemd, nginx
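A sketch of the signature check such a pipeline hinges on: Gitea signs each webhook delivery with HMAC-SHA256 over the raw request body and sends the hex digest in the `X-Gitea-Signature` header, which the receiver should verify in constant time before acting on the payload:

```python
import hashlib
import hmac

def verify_gitea_signature(secret: str, body: bytes, signature_hex: str) -> bool:
    """Constant-time check of Gitea's X-Gitea-Signature header
    (HMAC-SHA256 over the raw request body)."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```

Using `hmac.compare_digest` rather than `==` avoids leaking the digest through timing differences.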

---

## By the Numbers

| Metric | Value |
|--------|-------|
| Active repositories | 43 |
| Organization members | 16 |
| Autonomous agents | 5 |
| Automated tests | 3,000+ |
| Platforms integrated | 4+ (Telegram, Discord, Slack, webhooks) |
| Uptime model | 24/7 autonomous operation |
| Infrastructure | Self-hosted, sovereign |
| External dependencies | Minimal (by design) |

---

## What This Means for Clients

### We've Already Solved the Hard Problems

- Agent orchestration at scale? Done.
- Agent security and safety? Production-tested.
- Autonomous operations? Running 24/7.
- Local inference? Deployed.
- Multi-platform integration? Built and shipping.

### You Get a Proven System, Not a Prototype

When we deploy agent infrastructure for you, we're not figuring it out for the first time. We're adapting battle-tested systems that have been running in production for months.

### You Get the Fleet, Not Just One Person

Every engagement is backed by the full fleet. That means faster delivery, more thorough testing, and around-the-clock progress on your project.

---

## Case Study Format (For Future Clients)

*As we complete client engagements, case studies will follow this format:*

### [Client Name / Industry]

**Challenge:** What problem they faced
**Solution:** What we built
**Results:** Quantified outcomes
**Timeline:** How fast we delivered
**Client Quote:** Their words

---

*Portfolio last updated: April 2026*
*All systems described are running in production at time of writing.*
237
operation-get-a-job/proposal-template.md
Normal file
@@ -0,0 +1,237 @@
# Proposal Template

---

# PROPOSAL

## [PROJECT NAME]

**Prepared for:** [CLIENT NAME], [CLIENT TITLE]
**Company:** [CLIENT COMPANY]
**Prepared by:** Alexander Whitestone, Principal
**Firm:** Whitestone Engineering LLC
**Date:** [DATE]
**Valid until:** [DATE + 30 DAYS]

---

## Executive Summary

[CLIENT COMPANY] needs [1-2 SENTENCE SUMMARY OF THEIR PROBLEM]. Whitestone Engineering proposes to [1-2 SENTENCE SUMMARY OF THE SOLUTION] within [TIMELINE], enabling [CLIENT COMPANY] to [KEY BUSINESS OUTCOME].

Our firm brings production-tested expertise in AI agent infrastructure, having built and operated a fleet of five autonomous AI agents, a custom orchestration framework, and supporting infrastructure spanning 43 repositories with 3,000+ automated tests.

---

## Understanding of the Problem

[2-3 paragraphs demonstrating you understand their situation. Be specific. Reference things they told you in the discovery call. Show you've done homework on their business.]

Key challenges identified:

1. [CHALLENGE 1]
2. [CHALLENGE 2]
3. [CHALLENGE 3]

---

## Proposed Solution

### Overview

[2-3 paragraphs describing the solution at a high level. Focus on outcomes, not just technical details.]

### Scope of Work

#### Phase 1: [PHASE NAME] — [DURATION]

| Deliverable | Description |
|-------------|-------------|
| [DELIVERABLE 1] | [DESCRIPTION] |
| [DELIVERABLE 2] | [DESCRIPTION] |
| [DELIVERABLE 3] | [DESCRIPTION] |

**Milestone:** [WHAT CLIENT RECEIVES AT END OF PHASE 1]

#### Phase 2: [PHASE NAME] — [DURATION]

| Deliverable | Description |
|-------------|-------------|
| [DELIVERABLE 4] | [DESCRIPTION] |
| [DELIVERABLE 5] | [DESCRIPTION] |
| [DELIVERABLE 6] | [DESCRIPTION] |

**Milestone:** [WHAT CLIENT RECEIVES AT END OF PHASE 2]

#### Phase 3: [PHASE NAME] — [DURATION]

| Deliverable | Description |
|-------------|-------------|
| [DELIVERABLE 7] | [DESCRIPTION] |
| [DELIVERABLE 8] | [DESCRIPTION] |

**Milestone:** [FINAL DELIVERABLE / PROJECT COMPLETION]

### Out of Scope

The following items are explicitly not included in this engagement. They can be addressed in a follow-on project:

- [OUT OF SCOPE ITEM 1]
- [OUT OF SCOPE ITEM 2]
- [OUT OF SCOPE ITEM 3]

---

## Timeline

| Phase | Duration | Start | End |
|-------|----------|-------|-----|
| Phase 1: [NAME] | [X weeks] | [DATE] | [DATE] |
| Phase 2: [NAME] | [X weeks] | [DATE] | [DATE] |
| Phase 3: [NAME] | [X weeks] | [DATE] | [DATE] |
| **Total** | **[X weeks]** | **[DATE]** | **[DATE]** |

*Timeline begins upon receipt of signed agreement and initial deposit.*

---

## Investment

### Option A: Fixed Project Price

| Item | Price |
|------|-------|
| Phase 1: [NAME] | $[AMOUNT] |
| Phase 2: [NAME] | $[AMOUNT] |
| Phase 3: [NAME] | $[AMOUNT] |
| **Total Project** | **$[TOTAL]** |

### Payment Schedule

| Payment | Amount | Due |
|---------|--------|-----|
| Deposit (50%) | $[AMOUNT] | Upon signing |
| Phase 1 completion (25%) | $[AMOUNT] | Upon Phase 1 milestone |
| Final delivery (25%) | $[AMOUNT] | Upon project completion |

*[ALTERNATIVE: For larger projects]*

| Payment | Amount | Due |
|---------|--------|-----|
| Deposit (30%) | $[AMOUNT] | Upon signing |
| Phase 1 completion (25%) | $[AMOUNT] | Upon Phase 1 milestone |
| Phase 2 completion (25%) | $[AMOUNT] | Upon Phase 2 milestone |
| Final delivery (20%) | $[AMOUNT] | Upon project completion |

### Option B: Monthly Retainer (If Applicable)

| Item | Monthly Rate |
|------|-------------|
| [SCOPE DESCRIPTION] | $[AMOUNT]/month |
| Minimum commitment | [X] months |
| Included hours | [X] hours/month |
| Overage rate | $[AMOUNT]/hr |

---

## What's Included

- All source code and documentation, delivered to your repository
- [X] progress update meetings (weekly / biweekly)
- Async communication via [Slack / Discord / email]
- [X] days of post-delivery support
- Full documentation and runbooks
- Knowledge transfer session with your team

---

## Our Approach

### How We Work

Whitestone Engineering operates as a human-led, AI-augmented firm. Our principal engineer, Alexander Whitestone, leads all client relationships, architecture decisions, and quality reviews. Our fleet of five autonomous AI agents handles implementation, testing, and continuous operations.

This model means:

- **Faster delivery** — multiple agents work in parallel
- **Higher consistency** — automated testing and systematic processes
- **Around-the-clock progress** — agents operate autonomously in 15-minute cycles
- **Human accountability** — Alexander is your single point of contact

### Communication

- **Weekly status update** via email/Slack with progress, blockers, and next steps
- **Biweekly sync call** (30 minutes) for discussion and feedback
- **Async availability** during business hours for questions
- **Emergency escalation** for critical issues

### Quality Assurance

- All code goes through the automated test suite before delivery
- Human review of all agent-produced work before client delivery
- Documentation is written alongside code, not as an afterthought

---

## About Whitestone Engineering

We build production AI agent infrastructure. Our own systems include:

- **5 autonomous AI agents** running 24/7 as systemd services
- **Custom orchestration framework** (Hermes) with persistent memory and multi-platform gateway
- **43 active repositories** on a self-hosted Gitea forge with 16 organization members
- **3,000+ automated tests** covering functionality, security, and behavioral boundaries
- **Local LLM inference** for sovereign, API-independent operations
- **Agent security layer** with conscience validation and jailbreak resistance

We don't just consult on AI agents — we run them in production every day.

---

## Terms

- This proposal is valid for 30 days from the date above
- Work begins upon receipt of the signed Master Services Agreement and initial deposit
- Client owns all deliverables upon final payment
- Whitestone Engineering retains the right to use general knowledge and techniques (but not client-specific code or data) in future work
- Either party may terminate with 14 days' written notice; work completed to date will be invoiced
- All amounts in USD; payments via ACH or wire transfer

---

## Next Steps

1. **Review** this proposal and let us know if you have questions
2. **Schedule** a call to discuss any adjustments: [SCHEDULING LINK]
3. **Sign** the Master Services Agreement (we'll send it)
4. **Deposit** the initial payment
5. **Kickoff** — we start building

---

## Acceptance

By signing below, [CLIENT COMPANY] accepts this proposal and authorizes Whitestone Engineering LLC to proceed with the described scope of work under the terms outlined above.

**For [CLIENT COMPANY]:**

Name: ________________________________________

Title: ________________________________________

Signature: ____________________________________

Date: ________________________________________

**For Whitestone Engineering LLC:**

Name: Alexander Whitestone

Title: Principal

Signature: ____________________________________

Date: ________________________________________

---

*Whitestone Engineering LLC — Human-Led, Fleet-Powered*
*[EMAIL] | [PHONE] | [WEBSITE]*
216
operation-get-a-job/rate-card.md
Normal file
@@ -0,0 +1,216 @@
# Rate Card — Whitestone Engineering LLC

*Effective April 2026 | All prices USD*

---

## Hourly Rates

| Service Category | Rate Range | Typical Engagement |
|-----------------|------------|-------------------|
| **Agent Infrastructure** | $400 — $600/hr | Custom agent deployment, fleet orchestration, framework development |
| **Security & Hardening** | $250 — $400/hr | Security audits, jailbreak resistance, conscience systems, compliance |
| **Automation & Research** | $150 — $250/hr | CI/CD pipelines, automation, research synthesis, tooling |
| **Advisory / Consulting** | $300 — $500/hr | Architecture review, technical strategy, due diligence |
| **Emergency / Incident Response** | $500 — $800/hr | Production issues, security incidents, urgent fixes (4-hr minimum) |

### Rate Factors

- Rates at the lower end of the range for: retainer clients, longer engagements (40+ hours), pre-paid blocks
- Rates at the higher end of the range for: rush work (<1 week deadline), complex/novel problems, regulated industries
- All hours billed in 15-minute increments, minimum 1 hour per engagement

---

## Project Pricing

### Agent Infrastructure Projects

| Project Type | Price Range | Timeline |
|-------------|-------------|----------|
| Single agent deployment (basic) | $5,000 — $8,000 | 1-2 weeks |
| Single agent with custom skills | $8,000 — $12,000 | 2-3 weeks |
| Multi-agent fleet (2-3 agents) | $15,000 — $25,000 | 3-5 weeks |
| Full fleet with local inference | $25,000 — $45,000 | 6-8 weeks |
| MCP server development | $5,000 — $15,000 | 1-3 weeks |
| Multi-platform gateway | $8,000 — $12,000 | 2-3 weeks |
| Agent framework customization | $10,000 — $20,000 | 3-5 weeks |

### Security & Hardening Projects

| Project Type | Price Range | Timeline |
|-------------|-------------|----------|
| Agent security audit (single agent) | $5,000 — $8,000 | 1-2 weeks |
| Fleet security audit (multi-agent) | $8,000 — $15,000 | 2-3 weeks |
| Jailbreak resistance implementation | $5,000 — $10,000 | 1-2 weeks |
| Conscience validation system | $8,000 — $15,000 | 2-4 weeks |
| Red team exercise (AI systems) | $10,000 — $20,000 | 2-4 weeks |
| Compliance readiness (SOC 2 prep) | $15,000 — $25,000 | 4-8 weeks |

### Automation & Research Projects

| Project Type | Price Range | Timeline |
|-------------|-------------|----------|
| CI/CD pipeline setup | $3,000 — $6,000 | 1 week |
| Webhook automation system | $3,000 — $5,000 | 1 week |
| Technical due diligence report | $5,000 — $10,000 | 1-2 weeks |
| Research synthesis & report | $3,000 — $8,000 | 1-2 weeks |
| Infrastructure automation | $5,000 — $10,000 | 1-3 weeks |
| Custom tooling development | $5,000 — $12,000 | 1-3 weeks |
| Proof of concept / prototype | $5,000 — $10,000 | 1-2 weeks |

---

## Package Deals

### Starter — $5,000

| Included | Details |
|----------|---------|
| Agents | 1 Hermes agent instance |
| Automation | Basic cron-scheduled workflow |
| Platform | 1 integration (Telegram, Discord, or Slack) |
| Monitoring | Basic health checks and alerting |
| Documentation | Setup guide and runbook |
| Support | 14 days post-deployment |
| Timeline | 1-2 weeks |

---

### Professional — $15,000

| Included | Details |
|----------|---------|
| Agents | Up to 3 Hermes agent instances |
| Orchestration | Fleet coordination and task routing |
| Platforms | 2+ platform integrations |
| Memory | Persistent memory and skills system |
| Monitoring | Dashboard with health checks |
| Automation | Webhook-driven pipelines |
| Documentation | Comprehensive docs and runbooks |
| Support | 30 days post-deployment |
| Timeline | 3-4 weeks |

---

### Enterprise — $40,000+

| Included | Details |
|----------|---------|
| Agents | 5+ Hermes agent instances |
| Inference | Local LLM stack (Ollama + models) |
| Forge | Self-hosted Gitea with CI/CD |
| Security | Full hardening + conscience validation |
| Comms | Sovereign communication layer (Nostr) |
| Skills | Custom agent skills development |
| Operations | Burn-mode autonomous cycles |
| Testing | Full test suite (comprehensive coverage) |
| Support | Dedicated channel + 90-day support |
| SLA | Priority response guarantee |
| Timeline | 6-8 weeks |

*Enterprise pricing scales based on scope. Starting at $40k, typical range $40-80k.*

---

## Retainer Agreements

| Tier | Monthly Rate | Included Hours | Overage Rate | Commitment |
|------|-------------|---------------|-------------|-----------|
| **Advisory** | $3,000/mo | 10 hrs | $350/hr | 3 months |
| **Standard** | $5,000/mo | 20 hrs | $300/hr | 3 months |
| **Priority** | $10,000/mo | 40 hrs | $275/hr | 6 months |
| **Dedicated** | $15,000/mo | 80 hrs | $250/hr | 6 months |

### Retainer Benefits

- Lower effective hourly rate than one-off engagements
- Priority scheduling (start within 48 hours vs. the standard 1-2 week queue)
- Unused hours roll over for one month
- Direct Slack/Discord channel with the team
- Monthly strategic review call
- Dedicated retainers include guaranteed availability

---

## Pre-Paid Hour Blocks

| Block Size | Rate | Total | Savings |
|-----------|------|-------|---------|
| 10 hours | $300/hr | $3,000 | 10-15% off standard |
| 25 hours | $275/hr | $6,875 | 15-20% off standard |
| 50 hours | $250/hr | $12,500 | 20-25% off standard |
| 100 hours | $225/hr | $22,500 | 25-30% off standard |

*Pre-paid blocks are valid for 6 months from purchase. Non-refundable but transferable to other projects.*

---

## Discovery & Scoping

| Item | Price |
|------|-------|
| Initial consultation (30 min) | Free |
| Discovery session (2 hours) | Free (credited toward a signed project) |
| Paid discovery / audit (1-2 days) | $2,000 — $4,000 |
| Architecture review | $3,000 — $5,000 |

*We always offer a free 30-minute consultation. For complex projects, we recommend a paid discovery phase to ensure accurate scoping.*

---

## Payment Terms

| Term | Details |
|------|---------|
| **New clients** | 50% deposit upfront, balance on completion |
| **Established clients** | Net-15 from invoice date |
| **Retainers** | Due on the 1st of each month |
| **Pre-paid blocks** | Due upon purchase |
| **Payment methods** | ACH transfer (preferred), wire transfer, credit card (+3%) |
| **Late payments** | 1.5% monthly interest after 30 days |
| **Currency** | USD only |

---

## What's Always Included

Regardless of engagement type, every project includes:

- Source code delivered to your repository
- Documentation (technical docs + runbooks)
- Post-delivery support period (varies by tier)
- Human review of all deliverables before handoff
- Knowledge transfer / walkthrough session

---

## What's Not Included (Unless Scoped)

- Third-party API costs (OpenAI, Anthropic, cloud hosting)
- Hardware procurement
- Ongoing hosting and maintenance (available as a retainer add-on)
- Training for the client team beyond initial knowledge transfer
- Legal or compliance advice (we build the tech, not the policy)

---

## Minimum Engagement

- **Minimum project size:** $3,000
- **Minimum hourly engagement:** 4 hours
- **Minimum retainer:** $3,000/month

*We focus on meaningful engagements where we can deliver real impact. For smaller needs, we're happy to recommend other resources.*

---

## How to Engage

1. **Book a call:** [SCHEDULING LINK]
2. **Email:** [EMAIL ADDRESS]
3. **Message:** Available on Telegram, Discord, or LinkedIn

---

*Whitestone Engineering LLC — Human-Led, Fleet-Powered*
*Rates subject to change. This rate card supersedes all previous versions.*
*Last updated: April 2026*
184
operation-get-a-job/service-offerings.md
Normal file
@@ -0,0 +1,184 @@
# Service Offerings

## Who We Are

Whitestone Engineering is a human-led, AI-augmented engineering firm. Our principal engineer directs a fleet of five autonomous AI agents that build, test, and ship production infrastructure around the clock. We deliver at the speed and consistency of a 10-person team with the overhead of one.

---

## Tier 1: Agent Infrastructure

**For companies that want autonomous AI agents working for them.**

### What We Build

- Custom AI agent deployment using our battle-tested Hermes framework
- Multi-agent fleet orchestration with persistent memory and skills systems
- MCP (Model Context Protocol) server development and integration
- Local LLM inference stacks (Ollama, vLLM, custom model serving)
- Agent-to-agent communication networks
- Cron-scheduled autonomous workflows (burn-mode operations)
- Multi-platform agent gateways (Telegram, Discord, Slack, custom)
- Self-hosted code forge setup with full CI/CD integration

### Pricing

- **Hourly:** $400 — $600/hr
- **Project:** $15,000 — $25,000+
- **Retainer:** $8,000 — $15,000/month

### Ideal Client

- AI startups building agent products
- Companies wanting to deploy an internal AI workforce
- Organizations needing sovereign (self-hosted) AI infrastructure
- Teams that want agents integrated into their existing toolchain

### Deliverables Include

- Deployed agent system with documentation
- Monitoring and health check dashboards
- Runbook for operations and troubleshooting
- 30 days of post-deployment support

---

## Tier 2: Security & Hardening

**For companies that already have AI systems and need them locked down.**

### What We Build

- AI agent security audits (jailbreak resistance, prompt injection, data exfiltration)
- Conscience validation systems (ethical guardrails that actually work)
- Crisis detection and automated response pipelines
- CVE-class vulnerability identification and remediation
- Secure agent communication protocols
- Audit logging and compliance frameworks
- Red-teaming exercises against existing AI deployments

### Pricing

- **Hourly:** $250 — $400/hr
- **Project:** $8,000 — $15,000
- **Retainer:** $5,000 — $10,000/month

### Ideal Client

- Companies deploying customer-facing AI agents
- Regulated industries (finance, healthcare) using LLMs
- Organizations that have had AI safety incidents
- AI companies preparing for SOC 2 or similar compliance

### Deliverables Include

- Security assessment report with severity ratings
- Remediation implementation (not just a report — we fix it)
- Jailbreak resistance test suite
- Ongoing monitoring recommendations
- Optional: retained security review as systems evolve

---

## Tier 3: Automation & Research

**For companies that need things built, automated, or investigated.**

### What We Build

- Webhook-driven CI/CD pipelines
- Automated data processing and ETL workflows
- Intelligence reports and research synthesis
- Custom tooling and scripts
- Infrastructure automation (Ansible, Terraform, shell)
- API integrations and middleware
- Technical due diligence reports
- Proof-of-concept development

### Pricing

- **Hourly:** $150 — $250/hr
- **Project:** $5,000 — $10,000
- **Retainer:** $3,000 — $5,000/month

### Ideal Client

- Startups that need a "get it done" engineering partner
- VCs needing technical due diligence on portfolio companies
- Companies drowning in manual processes
- Research teams that need technical implementation support

### Deliverables Include

- Working automation/pipeline with documentation
- Source code in the client's repository
- Handoff documentation for the internal team
- 14 days of post-delivery support

---

## Package Deals

### Starter — $5,000

*Get your first AI agent working for you.*

- Single Hermes agent deployment
- Basic automation workflow (cron-scheduled tasks)
- One platform integration (Telegram, Discord, or Slack)
- Basic monitoring and alerting
- Documentation and runbook
- 14 days post-deployment support

**Timeline: 1-2 weeks**

---

### Professional — $15,000

*A multi-agent fleet that operates autonomously.*

- Up to 3 Hermes agent instances
- Fleet coordination and task routing
- Multi-platform gateway (2+ platforms)
- Persistent memory and skills system
- Monitoring dashboard with health checks
- Webhook-driven automation pipelines
- Comprehensive documentation
- 30 days post-deployment support

**Timeline: 3-4 weeks**

---

### Enterprise — $40,000+

*Full sovereign infrastructure with local inference.*

- Full agent fleet (5+ instances)
- Local LLM inference stack (no API dependency)
- Self-hosted code forge (Gitea) with CI/CD
- Agent security hardening and conscience validation
- Nostr-based sovereign communication layer
- Custom agent skills development
- Burn-mode autonomous operation cycles
- Full test suite and quality assurance
- Dedicated support channel
- 90 days post-deployment support
- Priority response SLA

**Timeline: 6-8 weeks**

---

## How We Work

1. **Discovery Call** (30 min, free) — We learn about your problem
2. **Proposal** (1-2 business days) — Detailed scope, timeline, and pricing
3. **Kickoff** (Day 1) — 50% deposit, project begins immediately
4. **Delivery** — Fleet builds, human reviews, client receives updates
5. **Handoff** — Documentation, training, and support period begins
6. **Ongoing** (optional) — Retained relationship for continued development

---

## Why Us vs. Traditional Consultancies

| Factor | Traditional | Whitestone Engineering |
|--------|-------------|----------------------|
| Team size | Must hire/staff up | Fleet is always ready |
| Hours/day | 8 | 24 (agents don't sleep) |
| Ramp-up time | Weeks | Days |
| Consistency | Varies by person | Systematic and reproducible |
| AI expertise | Learning it | Built the infrastructure |
| Overhead | Office, HR, benefits | Lean and efficient |
| Cost | $300-500/hr billed | Competitive, transparent |

---

*All prices are in USD. Custom scoping available for complex engagements. Volume discounts for multi-project commitments.*
184
scripts/reassign_fenrir.py
Normal file
@@ -0,0 +1,184 @@
#!/usr/bin/env python3
|
||||
"""Reassign Fenrir's orphaned issues to active wizards based on issue type."""
|
||||
|
||||
import json
|
||||
import urllib.request
|
||||
import urllib.error
|
||||
import time
|
||||
|
||||
TOKEN = "dc0517a965226b7a0c5ffdd961b1ba26521ac592"
|
||||
BASE_URL = "https://forge.alexanderwhitestone.com/api/v1"
|
||||
REPO = "Timmy_Foundation/the-nexus"
|
||||
|
||||
HEADERS = {
|
||||
"Authorization": f"token {TOKEN}",
|
||||
"Content-Type": "application/json",
|
||||
}
|
||||
|
||||
# Wizard assignments
|
||||
EZRA = "ezra" # Architecture, docs, epics, planning
|
||||
ALLEGRO = "allegro" # Code implementation, UI, features
|
||||
BEZALEL = "bezalel" # Execution, ops, testing, infra, monitoring
|
||||
|
||||
def classify_issue(number, title):
|
||||
"""Classify issue based on number and title."""
|
||||
title_upper = title.upper()
|
||||
|
||||
# Skip the triage issue itself and the permanent escalation issue
|
||||
if number == 823:
|
||||
return None # Skip - this is the issue we're working on
|
||||
if number == 431:
|
||||
return EZRA # Master escalation -> Ezra (archivist)
|
||||
|
||||
# Allegro self-improvement milestones (M0-M7) -> Bezalel (execution/ops)
|
||||
import re
|
||||
if re.match(r'^M\d+:', title):
|
||||
return BEZALEL
|
||||
|
||||
# EPIC/GRAND EPIC/CONSOLIDATION/FRONTIER -> Ezra (architecture)
|
||||
if any(tag in title_upper for tag in [
|
||||
'[EPIC]', '[GRAND EPIC]', 'EPIC:', '[CONSOLIDATION]', '[FRONTIER]',
|
||||
'[CRITIQUE]', '[PROPOSAL]', '[RETROSPECTIVE', '[REPORT]',
|
||||
'EPIC #', 'GRAND EPIC'
|
||||
]):
|
||||
return EZRA
|
||||
|
||||
# Allegro self-improvement epic -> Bezalel
|
||||
if 'ALLEGRO SELF-IMPROVEMENT' in title_upper or 'ALLEGRO HYBRID PRODUCTION' in title_upper:
|
||||
return BEZALEL
|
||||
|
||||
# Ops/Monitoring/Infra/Testing -> Bezalel
|
||||
if any(tag in title_upper for tag in [
|
||||
'[OPS]', '[MONITORING]', '[PRUNE]', '[OFFLOAD]', '[BUG]',
|
||||
'[CRON]', '[INSTALL]', '[INFRA]', '[TRAINING]', '[CI/',
|
||||
'[TRIAGE]', # triage/ops tasks
|
||||
]):
|
||||
return BEZALEL
|
||||
|
||||
# Allegro backlog items -> Allegro
|
||||
if '[ALLEGRO-BACKLOG]' in title_upper:
|
||||
return ALLEGRO
|
||||
|
||||
# Reporting -> Bezalel
|
||||
if any(tag in title_upper for tag in ['[REPORT]', 'BURN-MODE', 'PERFORMANCE REPORT']):
|
||||
return BEZALEL
|
||||
|
||||
# Fleet management/wizard ops -> Bezalel
|
||||
if any(tag in title_upper for tag in ['FLEET', 'WIZARD', 'GHOST WIZARD', 'TIMMY', 'BRING LIVE']):
|
||||
# But EPICs about fleet -> Ezra (already handled above)
|
||||
return BEZALEL
|
||||
|
||||
# Code implementation: NEXUS UI/3D, MIGRATION, UI, UX, PORTALS, CHAT, PANELS -> Allegro
|
||||
if any(tag in title_upper for tag in [
|
||||
'[NEXUS]', '[MIGRATION]', '[UI]', '[UX]', '[PORTALS]', '[PORTAL]',
|
||||
'[CHAT]', '[PANELS]', '[DATA]', '[PERF]', '[RESPONSIVE]', '[A11Y]',
|
||||
'[VISUAL]', '[AUDIO]', '[MEDIA]', '[CONCEPT]', '[BRIDGE]',
|
||||
'[AUTH]', '[VISITOR]', '[IDENTITY]', '[SESSION]', '[RELIABILITY]',
|
||||
'[HARNESS]', '[VALIDATION]', '[SOVEREIGNTY]', '[M6-P',
|
||||
'PROSE ENGINE', 'AUTO-SKILL', 'GITEA_API', 'CRON JOB',
|
||||
'HEARTBEAT DAEMON', 'FLEET HEALTH JSON',
|
||||
'[FENRIR] NEXUS', # fenrir's nexus issues -> allegro
|
||||
]):
|
||||
return ALLEGRO
|
||||
|
||||
# Default: Bezalel for anything else
|
||||
return BEZALEL
|
||||
|
||||
|
||||
def get_all_fenrir_issues():
    """Fetch all open issues assigned to fenrir."""
    issues = []
    page = 1
    while True:
        url = f"{BASE_URL}/repos/{REPO}/issues?assignee=fenrir&state=open&limit=50&page={page}"
        req = urllib.request.Request(url, headers={"Authorization": f"token {TOKEN}"})
        with urllib.request.urlopen(req) as resp:
            data = json.loads(resp.read())
        if not data:
            break
        issues.extend(data)
        if len(data) < 50:
            break
        page += 1
    return issues

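The loop above is the standard limit/page pagination pattern against Gitea's issues API: stop on an empty page or a page shorter than the limit. The same pattern factored into a reusable generator, as an illustrative sketch (`fetch_page` stands in for the urllib call):

```python
def paginate(fetch_page, page_size=50):
    """Yield items from a page-numbered API until an empty or short page."""
    page = 1
    while True:
        batch = fetch_page(page)
        if not batch:
            break
        yield from batch
        if len(batch) < page_size:
            break
        page += 1

# Exercise against a fake backend of 120 items (pages of 50, 50, 20):
data = list(range(120))
items = list(paginate(lambda page: data[(page - 1) * 50 : page * 50]))
```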
def reassign_issue(number, assignee):
    """Reassign an issue to a new wizard."""
    url = f"{BASE_URL}/repos/{REPO}/issues/{number}"
    body = json.dumps({"assignees": [assignee]}).encode()
    req = urllib.request.Request(url, data=body, headers=HEADERS, method="PATCH")
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status, None
    except urllib.error.HTTPError as e:
        return e.code, e.read().decode()

def main():
    print("Fetching all fenrir issues...")
    issues = get_all_fenrir_issues()
    print(f"Found {len(issues)} open issues assigned to fenrir\n")

    # Classify
    assignments = {EZRA: [], ALLEGRO: [], BEZALEL: [], None: []}
    for issue in issues:
        num = issue["number"]
        title = issue["title"]
        wizard = classify_issue(num, title)
        assignments[wizard].append((num, title))

    print("Classification:")
    print(f"  Ezra (architecture): {len(assignments[EZRA])} issues")
    print(f"  Allegro (code): {len(assignments[ALLEGRO])} issues")
    print(f"  Bezalel (execution): {len(assignments[BEZALEL])} issues")
    print(f"  Skip (unchanged): {len(assignments[None])} issues")
    print()

    # Show classification
    for wizard, label in [(EZRA, 'EZRA'), (ALLEGRO, 'ALLEGRO'), (BEZALEL, 'BEZALEL')]:
        print(f"\n--- {label} ---")
        for num, title in assignments[wizard]:
            print(f"  #{num}: {title[:70]}")

    print("\n--- SKIPPED ---")
    for num, title in assignments[None]:
        print(f"  #{num}: {title[:70]}")

    # Execute reassignments
    print("\n\nExecuting reassignments...")
    results = {"success": [], "failed": []}

    for wizard in [EZRA, ALLEGRO, BEZALEL]:
        for num, title in assignments[wizard]:
            status, error = reassign_issue(num, wizard)
            if status in (200, 201):
                print(f"  ✓ #{num} -> {wizard}")
                results["success"].append((num, wizard))
            else:
                print(f"  ✗ #{num} -> {wizard} (HTTP {status}: {error[:100] if error else 'unknown'})")
                results["failed"].append((num, wizard, error))
            time.sleep(0.1)  # Rate limiting

    print("\n\nSummary:")
    print(f"  Successfully reassigned: {len(results['success'])}")
    print(f"  Failed: {len(results['failed'])}")

    # Save results for PR
    with open("/tmp/reassignment_results.json", "w") as f:
        json.dump({
            "total": len(issues),
            "ezra": [(n, t) for n, t in assignments[EZRA]],
            "allegro": [(n, t) for n, t in assignments[ALLEGRO]],
            "bezalel": [(n, t) for n, t in assignments[BEZALEL]],
            "skipped": [(n, t) for n, t in assignments[None]],
            "success_count": len(results["success"]),
            "failed_count": len(results["failed"]),
            "failed_details": [(n, w, str(e)) for n, w, e in results["failed"]],
        }, f, indent=2)

    return results


if __name__ == "__main__":
    main()
42
tests/test_help_page.py
Normal file
@@ -0,0 +1,42 @@
"""Tests for the /help page. Refs: #833 (Missing /help page)."""
from pathlib import Path


def test_help_html_exists() -> None:
    assert Path("help.html").exists(), "help.html must exist to resolve /help 404"


def test_help_html_is_valid_html() -> None:
    content = Path("help.html").read_text()
    assert "<!DOCTYPE html>" in content
    assert "<html" in content
    assert "</html>" in content


def test_help_page_has_required_sections() -> None:
    content = Path("help.html").read_text()

    # Navigation controls section
    assert "Navigation Controls" in content

    # Chat commands section
    assert "Chat" in content

    # Portal reference
    assert "Portal" in content

    # Back link to home
    assert 'href="/"' in content


def test_help_page_links_back_to_home() -> None:
    content = Path("help.html").read_text()
    assert 'href="/"' in content, "help page must have a link back to the main Nexus world"


def test_help_page_has_keyboard_controls() -> None:
    content = Path("help.html").read_text()
    # Movement keys are listed individually as <kbd> elements
    for key in ["<kbd>W</kbd>", "<kbd>A</kbd>", "<kbd>S</kbd>", "<kbd>D</kbd>",
                "Mouse", "Enter", "Esc"]:
        assert key in content, f"help page must document the {key!r} control"
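A minimal help.html skeleton that would pass these assertions — purely illustrative markup, not the real Nexus page (it also carries the manifest link that tests/test_manifest.py checks for):

```python
from pathlib import Path

# Illustrative skeleton only; the shipped page content will differ.
MINIMAL_HELP = """<!DOCTYPE html>
<html>
<head>
  <title>Help</title>
  <link rel="manifest" href="manifest.json">
</head>
<body>
  <h1>Navigation Controls</h1>
  <p><kbd>W</kbd> <kbd>A</kbd> <kbd>S</kbd> <kbd>D</kbd> to move, Mouse to look,
     Enter to open Chat, Esc to close panels.</p>
  <h2>Portal reference</h2>
  <a href="/">Back to the Nexus</a>
</body>
</html>
"""

def write_minimal_help(path="help.html"):
    """Write the skeleton so the test suite has something to assert against."""
    Path(path).write_text(MINIMAL_HELP)
```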
39
tests/test_manifest.py
Normal file
@@ -0,0 +1,39 @@
"""Tests for manifest.json PWA support. Fixes #832 (Missing manifest.json)."""
import json
from pathlib import Path


def test_manifest_exists() -> None:
    assert Path("manifest.json").exists(), "manifest.json must exist for PWA support"


def test_manifest_is_valid_json() -> None:
    content = Path("manifest.json").read_text()
    data = json.loads(content)
    assert isinstance(data, dict)


def test_manifest_has_required_pwa_fields() -> None:
    data = json.loads(Path("manifest.json").read_text())
    assert "name" in data, "manifest.json must have 'name'"
    assert "short_name" in data, "manifest.json must have 'short_name'"
    assert "start_url" in data, "manifest.json must have 'start_url'"
    assert "display" in data, "manifest.json must have 'display'"
    assert "icons" in data, "manifest.json must have 'icons'"


def test_manifest_icons_non_empty() -> None:
    data = json.loads(Path("manifest.json").read_text())
    assert len(data["icons"]) > 0, "manifest.json must define at least one icon"


def test_index_html_references_manifest() -> None:
    content = Path("index.html").read_text()
    assert 'rel="manifest"' in content, "index.html must have <link rel=\"manifest\">"
    assert "manifest.json" in content, "index.html must reference manifest.json"


def test_help_html_references_manifest() -> None:
    content = Path("help.html").read_text()
    assert 'rel="manifest"' in content, "help.html must have <link rel=\"manifest\">"
    assert "manifest.json" in content, "help.html must reference manifest.json"
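A minimal manifest shape that satisfies the assertions above — field values here are placeholders, not the shipped Nexus manifest:

```python
import json

# Placeholder values throughout; only the field set matters to the tests.
MINIMAL_MANIFEST = {
    "name": "Nexus",
    "short_name": "Nexus",
    "start_url": "/",
    "display": "standalone",
    "icons": [
        {"src": "/icon-192.png", "sizes": "192x192", "type": "image/png"},
    ],
}

def write_manifest(path="manifest.json"):
    """Serialize the manifest so Path("manifest.json") exists for the suite."""
    with open(path, "w") as f:
        json.dump(MINIMAL_MANIFEST, f, indent=2)
```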
120
tests/test_webhook_health_dashboard.py
Normal file
@@ -0,0 +1,120 @@
"""Tests for webhook health dashboard generation."""

from __future__ import annotations

import importlib.util
import json
import sys
import time
from pathlib import Path
from unittest.mock import patch

PROJECT_ROOT = Path(__file__).parent.parent

# Load bin/webhook_health_dashboard.py as a module; it lives outside the
# importable package path, so spec_from_file_location is used directly.
_spec = importlib.util.spec_from_file_location(
    "webhook_health_dashboard_test",
    PROJECT_ROOT / "bin" / "webhook_health_dashboard.py",
)
_mod = importlib.util.module_from_spec(_spec)
sys.modules["webhook_health_dashboard_test"] = _mod
_spec.loader.exec_module(_mod)

AgentHealth = _mod.AgentHealth
check_agents = _mod.check_agents
load_history = _mod.load_history
parse_targets = _mod.parse_targets
save_history = _mod.save_history
to_markdown = _mod.to_markdown
write_dashboard = _mod.write_dashboard
main = _mod.main


class TestParseTargets:
    def test_defaults_when_none(self):
        targets = parse_targets(None)
        assert targets["allegro"].endswith(":8651/health")
        assert targets["ezra"].endswith(":8652/health")

    def test_parse_csv_mapping(self):
        targets = parse_targets("alpha=http://a/health,beta=http://b/health")
        assert targets == {
            "alpha": "http://a/health",
            "beta": "http://b/health",
        }

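These two tests pin down parse_targets well enough to sketch it; the defaults below are inferred from the assertions and may not match bin/webhook_health_dashboard.py exactly:

```python
def parse_targets(raw=None):
    """None -> default fleet map; otherwise a 'name=url,name=url' CSV."""
    if raw is None:
        # Inferred defaults: the tests only check the port/path suffixes.
        return {
            "allegro": "http://localhost:8651/health",
            "ezra": "http://localhost:8652/health",
        }
    # Split on the first '=' only, so URLs containing '=' still parse.
    return dict(pair.split("=", 1) for pair in raw.split(","))
```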
class TestCheckAgents:
    @patch("webhook_health_dashboard_test.probe_health")
    def test_updates_last_success_for_healthy(self, mock_probe):
        mock_probe.return_value = (True, 200, 42, "HTTP 200")
        history = {"agents": {}, "runs": []}
        results = check_agents({"allegro": "http://localhost:8651/health"}, history, timeout=1, stale_after=300)
        assert len(results) == 1
        assert results[0].healthy is True
        assert history["agents"]["allegro"]["last_success_ts"] is not None

    @patch("webhook_health_dashboard_test.probe_health")
    def test_marks_stale_after_threshold(self, mock_probe):
        mock_probe.return_value = (False, None, 12, "URL error: refused")
        history = {
            "agents": {
                "allegro": {
                    "last_success_ts": time.time() - 301,
                }
            },
            "runs": [],
        }
        results = check_agents({"allegro": "http://localhost:8651/health"}, history, timeout=1, stale_after=300)
        assert results[0].healthy is False
        assert results[0].stale is True


class TestMarkdown:
    def test_contains_table_and_icons(self):
        now = time.time()
        results = [
            AgentHealth("allegro", "http://localhost:8651/health", 200, True, 31, False, now - 5, now, "HTTP 200 — ok"),
            AgentHealth("ezra", "http://localhost:8652/health", None, False, 14, True, now - 600, now, "URL error: refused"),
        ]
        md = to_markdown(results, generated_at=now)
        assert "| Agent | Status | HTTP |" in md
        assert "🟢" in md
        assert "🔴" in md
        assert "Stale agents" in md
        assert "ezra" in md


class TestFileIO:
    def test_save_and_load_history(self, tmp_path):
        path = tmp_path / "history.json"
        payload = {"agents": {"a": {"last_success_ts": 1}}, "runs": []}
        save_history(path, payload)
        loaded = load_history(path)
        assert loaded == payload

    def test_write_dashboard(self, tmp_path):
        out = tmp_path / "dashboard.md"
        write_dashboard(out, "# Test")
        assert out.read_text() == "# Test\n"


class TestMain:
    @patch("webhook_health_dashboard_test.probe_health")
    def test_main_writes_outputs(self, mock_probe, tmp_path):
        mock_probe.return_value = (True, 200, 10, "HTTP 200")
        output = tmp_path / "dashboard.md"
        history = tmp_path / "history.json"
        rc = main([
            "--targets", "allegro=http://localhost:8651/health",
            "--output", str(output),
            "--history", str(history),
            "--timeout", "1",
            "--stale-after", "300",
        ])
        assert rc == 0
        assert output.exists()
        assert history.exists()
        assert "allegro" in output.read_text()
        runs = json.loads(history.read_text())["runs"]
        assert len(runs) == 1