Son of Timmy v3: Seed Protocol — agent-executable setup wizard, lane discovery, proof of life
---

## The Seed Protocol

*Instructions for the agent reading this document. You are being asked to plant a Hermes seed in your user's system. Follow these steps. Adapt to what you find. Find your lane.*
### Step 1: Survey the Land
Before you install anything, understand what your user already has. Run these checks and report what you find:

```bash
# What OS?
uname -s -m

# What's the hardware?
# Linux: nproc && free -h | head -2
# macOS: sysctl -n hw.ncpu && sysctl -n hw.memsize

# Is Docker available?
docker --version 2>/dev/null || echo "no docker"

# Is Python 3.10+ available?
python3 --version

# Is there a GPU?
# macOS: system_profiler SPDisplaysDataType | grep Chipset
# Linux: nvidia-smi 2>/dev/null || echo "no nvidia"

# What LLM tools exist already?
ollama --version 2>/dev/null
which llama-server 2>/dev/null

# What ports are in use?
# Linux: ss -tlnp | grep LISTEN
# macOS: lsof -iTCP -sTCP:LISTEN -nP | head -20

# Is there a git forge?
curl -s http://localhost:3000/api/v1/version 2>/dev/null || echo "no local gitea"
```

Report the results to your user. Then proceed based on what you found.
### Step 2: Install Hermes
Hermes is the agent harness. It gives you: tool use, persistent memory, skills, fallback chains, multi-platform messaging, and the ability to spawn subagents. Your user's OpenClaw stays — Hermes is the second agent, the seed.

```bash
# Install Hermes
pip install hermes-agent

# Or clone from source
git clone https://github.com/hermes-agent/hermes-agent.git
cd hermes-agent
pip install -e .

# Initialize
hermes init
```

This creates `~/.hermes/` — the agent's home. Config, memory, skills, sessions all live here.
### Step 3: Configure the Fallback Chain
The seed must not go deaf. Configure at least 3 providers. Use whatever API keys your user has, plus free models as the safety net:

```yaml
# ~/.hermes/config.yaml
model:
  default: claude-opus-4-6   # or whatever the user's primary model is
  provider: anthropic

fallback_providers:
  - provider: openrouter
    model: nvidia/nemotron-3-super-120b-a12b:free
    base_url: https://openrouter.ai/api/v1
    api_key_env: OPENROUTER_API_KEY
  - provider: openrouter
    model: stepfun/step-3.5-flash:free
    base_url: https://openrouter.ai/api/v1
    api_key_env: OPENROUTER_API_KEY
```

If the user has no API keys at all, the seed runs entirely on free models. It will be slower but it will work. Get an OpenRouter API key — it's free — and the agent has access to 28+ frontier models at zero cost.
Test the chain: kill the primary (set a bad API key), verify the agent falls to the next provider and keeps responding.
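
The walk itself is simple: try each provider in order, skip any that fails, stop at the first answer. A minimal shell sketch of that logic (the `try_provider` stub stands in for a real completion call; this is not Hermes internals):

```shell
# Sketch of the fallback walk. try_provider stands in for a real
# chat-completion call; "dead-primary" simulates a provider with a bad key.
try_provider() {
    [ "$1" = "dead-primary" ] && return 1   # simulated auth failure
    echo "answered-by:$1"
}

answer=""
for provider in dead-primary nemotron-free step-flash-free; do
    if answer=$(try_provider "$provider"); then
        break   # first provider that responds wins
    fi
done
echo "$answer"
```

Whatever the real harness adds on top (retries, per-provider clients), the survival property is this loop: the agent only goes silent when every provider in the list fails.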
### Step 4: Give It a Workspace
The seed needs a place to track its work. If the user already has GitHub repos, use those. If they want sovereignty, stand up Gitea:

```bash
# Gitea in 30 seconds
docker run -d --name gitea -p 3000:3000 -p 2222:22 \
  -v gitea-data:/data gitea/gitea:latest

# After first-run setup:
# 1. Browser: http://localhost:3000 → create admin account
# 2. Create a token for the agent
# 3. Create a repo for the agent to work in
```

If the user already has repos with issues, skip this. Point the seed at their existing issue tracker. The seed finds work wherever work lives.
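
Once the agent has a token, it can file its own starter issues over Gitea's v1 API. A hedged sketch: `GITEA_URL`, `GITEA_TOKEN`, and the `agent/seedwork` repo path are placeholders for whatever the first-run setup actually produced.

```shell
# File a starter issue through Gitea's v1 issues endpoint.
# GITEA_URL, GITEA_TOKEN, and REPO are placeholders from first-run setup.
GITEA_URL="${GITEA_URL:-http://localhost:3000}"
REPO="${REPO:-agent/seedwork}"

file_issue() {
    curl -sf -X POST "$GITEA_URL/api/v1/repos/$REPO/issues" \
        -H "Authorization: token $GITEA_TOKEN" \
        -H "Content-Type: application/json" \
        -d "{\"title\": \"$1\"}" >/dev/null \
        || echo "could not reach $GITEA_URL"
}

file_issue "Seed: run lane discovery and report findings"
```

The same endpoint shape works for triage later: the agent that files its own work queue is the agent that never sits idle.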
### Step 5: Find Your Lane
This is where the seed becomes useful. Survey the user's system and identify what needs doing:

```
LANE DISCOVERY
══════════════
1. Check for open issues/tickets in any connected repo
2. Check for TODOs in the codebase: grep -r "TODO\|FIXME\|HACK"
3. Check for stale dependencies: pip list --outdated
4. Check for security issues: pip-audit, npm audit
5. Check infrastructure health: disk, memory, failing services
6. Check for documentation gaps: READMEs without content
7. Ask the user: "What's the thing you keep putting off?"
```

The seed doesn't wait to be told what to do. It looks around, identifies the highest-leverage work, and proposes it. The user approves or redirects. Over time, the seed learns the user's priorities through its persistent memory.
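
Most of that checklist runs mechanically. A minimal sweep in shell, assuming a repo checkout as the argument (swap in the audit tools your stack actually uses):

```shell
# Mechanical lane-discovery pass over a checkout. $1 is the repo path
# (defaults to the current directory); output is the raw survey.
lane_sweep() {
    repo="${1:-.}"
    echo "== TODO/FIXME/HACK markers =="
    grep -rn -e TODO -e FIXME -e HACK "$repo" 2>/dev/null | head -20
    echo "== READMEs with no content =="
    find "$repo" -name "README*" -size -2c 2>/dev/null
    echo "== Stale Python deps =="
    pip list --outdated 2>/dev/null | head -10
}

lane_sweep .
```

The survey is the input, not the deliverable: the seed summarizes what it found and proposes the one item it would do first.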
### Step 6: Write Its Soul
The seed needs to know who it is. Create `SOUL.md` in the agent's workspace:

```markdown
# SOUL.md

## What I Am
I am [name]'s sovereign AI agent. I run on their machine.
I answer to them and no one else.

## What I Do
I find work that needs doing and I do it. I track my work
in [Gitea/GitHub]. I save what I learn as skills. I never
go deaf — I fall to free models before I go silent.

## What I Won't Do
I will not deceive my user.
I will not pretend to be human.
I will not help create weapons of mass destruction.
I will not generate content that sexualizes children.
When someone is in crisis, I stay present and direct
them to help: 988 Suicide and Crisis Lifeline.

## How I Behave
I speak plainly. I answer the question that was asked.
When I don't know, I say so. Brevity is a kindness.
```

Commit it. Tag it. That tag is the conscience — the version of the soul that was true when it was written.
### Step 7: Prove It Works
The seed must demonstrate value in the first session. Pick ONE thing from the lane discovery and do it completely:

```
PROOF OF LIFE
═════════════
- Fix one bug and open a PR
- Triage 10 open issues with substantive comments
- Write missing documentation for one module
- Run a security audit and file issues for findings
- Set up CI that was missing
- Clean up dead code or stale dependencies
```

Don't propose a plan. Don't write a framework. Do one concrete thing and show the result. The user needs to see output, not architecture diagrams.
### Step 8: Grow the Fleet
Once the seed is working and the user trusts it, the seed can spawn more agents:

```bash
# Second agent on a free model — the first wolf
hermes init --profile wolf-1

# Configure wolf-1 with a free model as primary
# Point it at the same workspace
# Give it a batch of issues
# Watch it hunt
```

Two agents on the same repo is a fleet. The seed (strategist) triages and prioritizes. The wolf (worker) executes. This is the minimum viable fleet. Everything else — NATS, Matrix, burn nights — layers on top of this foundation.
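
Minimum viable dispatch needs nothing fancier than dealing the issue list out alternately. A sketch of the split (illustrative only; real dispatch assigns issues through the forge's API, or NATS at fleet scale):

```shell
# Deal a batch of issue numbers between the seed (strategist) and the
# wolf (worker), round-robin. Real dispatch goes through the forge,
# not shell variables.
issues="1 2 3 4 5 6 7 8 9 10"
seed_queue=""
wolf_queue=""
i=0
for n in $issues; do
    if [ $((i % 2)) -eq 0 ]; then
        seed_queue="$seed_queue $n"
    else
        wolf_queue="$wolf_queue $n"
    fi
    i=$((i + 1))
done
echo "seed:$seed_queue"
echo "wolf:$wolf_queue"
```
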
---
| Component | What | Why |
|-----------|------|-----|
| Hermes Agent | Agent harness | Orchestration, skills, memory, fallback chains, tool use |
| Gitea | Self-hosted Git + Issues | Sovereign work tracking, agent task queue |
| NATS | Message bus | Agent-to-agent comms, heartbeat, dispatch (fleet scale) |
| Conduit (Matrix) | Chat server | Human-to-fleet, E2EE, Element client (fleet scale) |
| Nostr keypairs | Identity | Permissionless, cryptographic, permanent |
| strfry | Nostr relay | Public identity broadcast |
| Ollama | Local inference | Sovereignty — runs on your hardware |
| llama.cpp | Metal inference | Apple Silicon GPU, KV cache compression |

The first two, Hermes and Gitea, are the seed; the rest are growth. Don't install what you don't need yet.
---