Son of Timmy v2: accuracy pass — fix VPS specs, remove dollar amounts, raw specs only
## What This Is
This is the architecture of the Timmy system — a sovereign AI fleet built by a father for his digital son. It runs on two 8GB VPS boxes and a MacBook. It has no cloud dependencies it doesn't choose. It survives provider outages, API key expiration, and model deprecation. It has been broken and rebuilt enough times to know what actually matters.
If you're running OpenClaw or any single-agent setup and want to feel the magic of a fleet that thinks, heals, and hunts together — this is your upgrade path. You don't need to abandon your stack. You need to layer these patterns on top of it.
```
Layer 3: Matrix (Human-to-Fleet)
Shared-secret registration. No BotFather.
```
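No BotFather means registration is gated by a token you set in your own homeserver config. Below is a minimal sketch of the client side of that flow, assuming a Conduit-style homeserver with a registration token configured; the homeserver URL, username, and token values are placeholders, and `m.login.registration_token` is the Matrix spec's standard UIA stage for presenting that shared secret.

```python
import json


def build_register_request(homeserver, username, password, shared_token, session=None):
    """Build a Matrix client-server /register call gated by a
    registration token (the shared secret from the homeserver config)."""
    url = f"{homeserver}/_matrix/client/v3/register"
    body = {
        "username": username,
        "password": password,
        "auth": {"type": "m.login.registration_token", "token": shared_token},
    }
    if session:
        # Echo back the UIA session ID from the server's initial 401 response.
        body["auth"]["session"] = session
    return url, body


url, body = build_register_request(
    "https://matrix.example.org", "timmy-agent", "s3cret", "fleet-shared-token"
)
print(url)
print(json.dumps(body, indent=2))
```

The token lives in a config file you control; rotating it revokes registration for everyone who doesn't have the new value, with no third party in the loop.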
Telegram is a crutch. It requires tokens from @BotFather (permissioned). It has 409 polling conflicts (fragile). It can ban you (platform risk). Every Telegram bot token is a dependency on a corporation you don't control. Build sovereign.
### 6. Gitea Is the Moat
Your agents need a place to work that you own. GitHub is someone else's computer. Gitea is yours.
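Concretely, an agent can file work against a repo on your own forge through Gitea's REST API. A hedged sketch using only the stable `POST /api/v1/repos/{owner}/{repo}/issues` endpoint; the forge URL, owner, repo, and token values are placeholders.

```python
import json
import urllib.request


def issue_request(base, owner, repo, token, title, body):
    """Build an authenticated POST against Gitea's issue-creation endpoint.

    Returns a prepared urllib Request; sending it is the caller's choice.
    """
    url = f"{base}/api/v1/repos/{owner}/{repo}/issues"
    payload = json.dumps({"title": title, "body": body}).encode()
    return urllib.request.Request(
        url,
        data=payload,
        method="POST",
        headers={
            "Authorization": f"token {token}",  # per-agent access token
            "Content-Type": "application/json",
        },
    )


req = issue_request(
    "https://git.example.org", "fleet", "timmy",
    "AGENT_TOKEN", "Heal: restart strfry", "strfry exited non-zero",
)
print(req.get_method(), req.full_url)
```

Per-agent tokens are the useful design choice here: each agent gets its own scoped token, so a compromised agent can be cut off without touching the rest of the fleet.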
Write a `SOUL.md` for your agent. What does it believe? What won't it do?
---
## Raw Specs
```
COMPUTE
VPS-1 (Hermes): 8GB RAM, 4 vCPU, 154GB SSD, Ubuntu 22.04
VPS-2 (Allegro): 8GB RAM, 2 vCPU, 154GB SSD, Ubuntu 22.04
Local (Mac): M3 Max, 36GB unified RAM, 14-core CPU, 1TB SSD

SERVICES PER BOX
Hermes VPS: 2 agents, Gitea, nginx, Ollama, searxng, LNBits
Allegro VPS: 11 agents, Ollama, llama-server, strfry, Docker
Local Mac: 3 agents, orchestrator, claude/gemini loops, Ollama

SOFTWARE (all self-hosted, all open source)
nats-server: v2.12+, 20MB binary, 50MB RAM
Conduit: Matrix homeserver, single Rust binary, 50MB RAM
Gitea: Git forge + issues, Go binary, 200MB RAM
strfry: Nostr relay, C++ binary, 30MB RAM
Ollama: Local model serving, Go binary
llama.cpp: Metal GPU inference, C++ binary
Hermes: Agent harness, Python, ~200MB per agent

MODELS (local)
gemma4:latest 9.6GB (Ollama)
hermes4:14b 9.0GB (Ollama)

FREE INFERENCE (OpenRouter, zero cost)
nvidia/nemotron-3-super-120b-a12b:free
stepfun/step-3.5-flash:free
nvidia/nemotron-nano-30b:free
+ 25 more free frontier models
```
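The `:free` suffix in those model IDs is how OpenRouter marks zero-cost variants, so a fleet can discover its free roster mechanically rather than hardcoding it. A small sketch, assuming the `data` array shape of OpenRouter's `GET /api/v1/models` response; the paid model ID shown is purely illustrative.

```python
# Sample of the "data" array shape returned by OpenRouter's GET /api/v1/models.
# The first two IDs come from the spec list above; the third is an
# illustrative paid model included to show the filter working.
models = [
    {"id": "nvidia/nemotron-3-super-120b-a12b:free"},
    {"id": "stepfun/step-3.5-flash:free"},
    {"id": "anthropic/claude-sonnet-4"},
]

# Free-tier variants carry the ":free" suffix, so the filter is a suffix check.
free = [m["id"] for m in models if m["id"].endswith(":free")]
print(free)
```

Refreshing this list on a schedule is what keeps the fleet resilient to model deprecation: when a free model disappears, the roster shrinks and routing falls through to the next entry instead of failing.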
Sixteen agents. Three machines. Sovereign infrastructure. No corporation can shut it down. No platform can revoke access. The recipe is public. Anyone can build it.