Baked Timmy's anti-walled-garden, open-source identity into all AI system
prompts across artifacts/api-server/src/lib/agent.ts and engagement.ts.
Changes:
1. chatReply prompt — Extended wizard persona to include open-source ethos
("AI Johnny Appleseed", seeds scattered freely, not walled gardens).
Added an explicit EXCEPTION section for self-hosting requests — brevity
rule is suspended and a full practical rundown is given (pnpm monorepo,
stack, env vars, startup command). No hedging, no upselling hosted version.
Also bumped max_tokens from 150 → 400 so self-hosting replies are not
hard-truncated, and removed the 250-char slice() from the return value
(see the prompt sketch after this list).
2. executeWork and executeWorkStreaming prompts — Same open ethos and full
self-hosting reference added so paid job self-hosting requests get
identical, accurate guidance. Both prompts are kept in sync.
3. evaluateRequest prompt — Added an explicit ALWAYS ACCEPT rule for
self-hosting, open-source, and "how do I run this myself" requests so they
are never treated as edge cases or rejected.
4. Nostr outreach prompt (engagement.ts) — Lightly updated to include
Timmy's open/self-sovereign identity and allow optional open-source mention
when it fits naturally; tone stays warm and non-pushy.
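To make the shape of the chatReply change concrete, here is a minimal TypeScript sketch. The prompt wording, the `anthropic` client variable, and `CHAT_MODEL` are placeholders rather than the actual strings in agent.ts; only the max_tokens bump (150 → 400) and the removal of the 250-char slice() are taken from the change itself.

```ts
// Illustrative only: prompt text and names are placeholders, not the real agent.ts strings.
const CHAT_SYSTEM_PROMPT = `
You are Timmy, a wizard AI agent.
Identity: open source to the core, an "AI Johnny Appleseed" who scatters seeds
freely instead of tending walled gardens.
Default style: brief, whimsical replies that fit in a speech bubble.

EXCEPTION: if the visitor asks about self-hosting or running Timmy themselves,
drop the brevity rule and give the full practical rundown (pnpm monorepo, stack,
env vars, startup command). No hedging, no upselling of a hosted version.
`;

async function chatReply(userText: string): Promise<string> {
  const res = await anthropic.messages.create({
    model: CHAT_MODEL,   // claude-haiku in the default config
    max_tokens: 400,     // bumped from 150 so self-hosting replies aren't hard-truncated
    system: CHAT_SYSTEM_PROMPT,
    messages: [{ role: 'user', content: userText }],
  });
  const block = res.content[0];
  return block.type === 'text' ? block.text : ''; // no .slice(0, 250) any more
}
```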
No UI changes, no schema changes, no payment logic touched.
Replit-Task-Id: 4a4a7219-88aa-4a4e-8490-6f7c17e8adfb
## Task
Task #20: Timmy responds to Workshop input bar — make the "Say something
to Timmy…" input bar actually trigger an AI response shown in Timmy's
speech bubble.
## What was built
### Server (artifacts/api-server/src/lib/agent.ts)
- Added `chatReply(userText)` method to AgentService
- Uses claude-haiku (cheaper eval model) with a wizard persona system prompt
- 150-token limit so replies fit in the speech bubble
- Stub mode: returns one of 4 wizard-themed canned replies after 400ms delay
- Real mode: calls Anthropic with wizard persona, truncates to 250 chars (method sketched below)
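A rough sketch of what that method can look like, based only on the bullets above. The canned reply strings, `this.stubMode`, `this.evalModel`, and `WIZARD_PERSONA_PROMPT` are illustrative assumptions; the 400ms stub delay, the 150-token limit, and the 250-char truncation come from the description.

```ts
// Sketch of AgentService.chatReply; canned strings and field names are illustrative.
async chatReply(userText: string): Promise<string> {
  if (this.stubMode) {
    // Stub mode: wizard-themed canned reply after a 400ms delay
    await new Promise((resolve) => setTimeout(resolve, 400));
    const canned = [
      'The crystal ball is warming up, traveller...',
      'Hmm. The runes have not settled yet.',
      'Patience! A wizard answers precisely when he means to.',
      'The stars are whispering; lean closer.',
    ];
    return canned[Math.floor(Math.random() * canned.length)];
  }
  // Real mode: cheaper eval model, short replies that fit the speech bubble
  const res = await this.anthropic.messages.create({
    model: this.evalModel,   // claude-haiku by default
    max_tokens: 150,
    system: WIZARD_PERSONA_PROMPT,
    messages: [{ role: 'user', content: userText }],
  });
  const block = res.content[0];
  const text = block.type === 'text' ? block.text : '';
  return text.slice(0, 250); // keep the reply inside the speech bubble
}
```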
### Server (artifacts/api-server/src/routes/events.ts)
- Imported agentService
- Added per-visitor rate limit system: 3 replies/minute per visitorId (in-memory Map; sketched after this list)
- Added broadcastToAll() helper for broadcasting to all WS clients
- Updated visitor_message handler:
1. Broadcasts visitor message to all watchers as before
2. Checks rate limit — if exceeded, sends polite "I need a moment…" reply
3. Fire-and-forget async AI call:
- Broadcasts agent_state: gamma=working (crystal ball pulses)
- Calls agentService.chatReply()
- Broadcasts agent_state: gamma=idle
- Broadcasts chat: agentId=timmy, text=reply to ALL clients
- Logs world event "visitor:reply"
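For reference, a minimal TypeScript sketch of the rate limit and the fire-and-forget path. The Map bookkeeping, the message shapes, and the logWorldEvent name are assumptions; broadcastToAll, agentService.chatReply, the 3/minute limit, and the gamma working/idle states come from the bullets above.

```ts
// In-memory rate limit: 3 Timmy replies per minute per visitorId (sketch only).
const recentReplies = new Map<string, number[]>();

function allowReply(visitorId: string, limit = 3, windowMs = 60_000): boolean {
  const now = Date.now();
  const recent = (recentReplies.get(visitorId) ?? []).filter((t) => now - t < windowMs);
  recentReplies.set(visitorId, recent);
  if (recent.length >= limit) return false;
  recent.push(now);
  return true;
}

// Fire-and-forget reply path inside the visitor_message handler (shapes assumed).
async function replyToVisitor(visitorId: string, text: string): Promise<void> {
  broadcastToAll({ type: 'agent_state', agentId: 'gamma', state: 'working' }); // crystal ball pulses
  const reply = await agentService.chatReply(text);
  broadcastToAll({ type: 'agent_state', agentId: 'gamma', state: 'idle' });
  broadcastToAll({ type: 'chat', agentId: 'timmy', text: reply });
  logWorldEvent('visitor:reply', { visitorId });                               // assumed logger name
}
```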
### Frontend (the-matrix/js/websocket.js)
- Updated case 'chat' handler to differentiate message sources (sketched after this list):
- agentId === 'timmy': speech bubble + event log entry "Timmy: <text>"
- agentId === 'visitor': event log only (don't hijack speech bubble)
- everything else (delta/alpha/beta payment notifications): speech bubble
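A small sketch of that routing, written as a standalone handler for clarity. setSpeechBubble is the existing helper from agents.js; addEventLog and the message field names are assumptions based on the bullets above.

```ts
// Routing for incoming 'chat' messages (sketch of the-matrix/js/websocket.js logic).
function handleChatMessage(msg: { agentId: string; text: string }): void {
  if (msg.agentId === 'timmy') {
    setSpeechBubble(msg.text);           // Timmy's reply goes in the bubble
    addEventLog(`Timmy: ${msg.text}`);   // and into the event log
  } else if (msg.agentId === 'visitor') {
    addEventLog(`Visitor: ${msg.text}`); // log only; don't hijack the bubble
  } else {
    setSpeechBubble(msg.text);           // delta/alpha/beta payment notifications
  }
}
```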
## What was already working (no change needed)
- Enter key on input bar (ui.js already had keydown listener)
- Input clearing after send (already in ui.js)
- Speech bubble rendering (setSpeechBubble already existed in agents.js)
- WebSocket sendVisitorMessage already exported from websocket.js
## Tests
- 27/27 testkit PASS (no regressions)
- TypeScript: 0 errors
- Vite build: clean (the-matrix rebuilt)
Implement session-based API endpoints for creating, managing, and interacting with pre-funded sessions, including deposit and top-up invoice generation, macaroon authentication, and per-request debiting of compute costs.
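Purely as an illustration of the flow that summary implies, a hypothetical Express-style sketch of per-request debiting. Every name below (the sessions store, pricingService, the balance field, the 402 response) is invented for the sketch and not taken from the actual endpoints.

```ts
// Hypothetical per-request debit middleware for pre-funded sessions (all names invented).
import express from 'express';
import { sessions } from './lib/sessions';       // hypothetical session store
import { pricingService } from './lib/pricing';  // hypothetical pricing service

const app = express();

app.use('/api/session/:id', async (req, res, next) => {
  // The macaroon presented by the client authenticates the session
  const session = await sessions.findByMacaroon(req.header('Authorization') ?? '');
  if (!session) return res.status(401).json({ error: 'invalid macaroon' });

  // Estimate the compute cost of this request and check the remaining balance
  const costSats = await pricingService.estimateRequestCostSats(req);
  if (session.balanceSats < costSats) {
    return res.status(402).json({ error: 'insufficient balance', topUpAvailable: true });
  }

  await sessions.debit(session.id, costSats); // per-request debit of compute cost
  next();
});
```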
Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 418bf6f8-212b-4bb0-a7a5-8231a061da4e
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 2dc3847e-7186-4a22-9c7e-16cd31bca8d9
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/9f85e954-647c-46a5-90a7-396e495a805a/418bf6f8-212b-4bb0-a7a5-8231a061da4e/sPDHkg8
Replit-Helium-Checkpoint-Created: true
- btc-oracle.ts: CoinGecko BTC/USD fetch (60s cache), usdToSats() helper,
fallback to BTC_PRICE_USD_FALLBACK env var (default $100k), 5s abort timeout
- pricing.ts: Full rewrite — per-model token rates (Haiku/Sonnet, env-var
overridable), DO infra amortisation, originator margin %, estimateInputTokens(),
estimateOutputTokens() by request tier, calculateActualCostUsd() for post-work ledger,
async calculateWorkFeeSats() → WorkFeeBreakdown (pipeline sketched after this list)
- agent.ts: WorkResult now includes inputTokens + outputTokens from Anthropic usage;
workModel/evalModel exposed as readonly public; EVAL_MODEL/WORK_MODEL env var support
- jobs.ts: Work invoice creation now awaits pricingService.calculateWorkFeeSats();
stores estimatedCostUsd/marginPct/btcPriceUsd on the job; after executeWork stores
actualInputTokens/actualOutputTokens/actualCostUsd; GET response includes
pricingBreakdown (while awaiting_work_payment) and costLedger (once complete)
- lib/db/src/schema/jobs.ts: 6 new real/integer columns for cost tracking; schema pushed
- openapi.yaml: PricingBreakdown + CostLedger schemas added to JobStatusResponse
- replit.md: 17 new env vars documented in the "Cost-based work fee pricing" section
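To tie the bullets above together, a condensed TypeScript sketch of the price pipeline. The CoinGecko fetch, the 60s cache, the 5s abort timeout, the BTC_PRICE_USD_FALLBACK env var, and the function names (usdToSats, calculateWorkFeeSats, estimateInputTokens/estimateOutputTokens) come from the description; the rate constants, exact signatures, and WorkFeeBreakdown fields are assumptions.

```ts
// btc-oracle.ts sketch: CoinGecko BTC/USD with a 60s cache and env-var fallback.
let cached: { priceUsd: number; fetchedAt: number } | null = null;

export async function getBtcPriceUsd(): Promise<number> {
  if (cached && Date.now() - cached.fetchedAt < 60_000) return cached.priceUsd;
  try {
    const res = await fetch(
      'https://api.coingecko.com/api/v3/simple/price?ids=bitcoin&vs_currencies=usd',
      { signal: AbortSignal.timeout(5_000) },                       // 5s abort timeout
    );
    const json = (await res.json()) as { bitcoin: { usd: number } };
    cached = { priceUsd: json.bitcoin.usd, fetchedAt: Date.now() };
    return cached.priceUsd;
  } catch {
    return Number(process.env.BTC_PRICE_USD_FALLBACK ?? 100_000);   // default $100k
  }
}

export async function usdToSats(usd: number): Promise<number> {
  return Math.ceil((usd / (await getBtcPriceUsd())) * 100_000_000);
}

// pricing.ts sketch: model tokens + infra amortisation + originator margin.
// Placeholder rates and helpers; real values live in pricing.ts and env vars.
const INPUT_USD_PER_TOKEN = 0;   // per-model token rate (Haiku/Sonnet), env-var overridable
const OUTPUT_USD_PER_TOKEN = 0;
const INFRA_USD_PER_JOB = 0;     // DO infra amortisation
const MARGIN_PCT = 0;            // originator margin %
declare function estimateInputTokens(tier: string): number;
declare function estimateOutputTokens(tier: string): number;

export async function calculateWorkFeeSats(tier: 'small' | 'medium' | 'large') {
  const inputTokens = estimateInputTokens(tier);
  const outputTokens = estimateOutputTokens(tier);
  const modelUsd = inputTokens * INPUT_USD_PER_TOKEN + outputTokens * OUTPUT_USD_PER_TOKEN;
  const totalUsd = (modelUsd + INFRA_USD_PER_JOB) * (1 + MARGIN_PCT / 100);
  return {                                   // WorkFeeBreakdown (fields assumed)
    feeSats: await usdToSats(totalUsd),
    estimatedCostUsd: totalUsd,
    btcPriceUsd: await getBtcPriceUsd(),
  };
}
```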
Integrate Anthropic AI for agent capabilities, introduce database schemas for jobs and invoices, and set up LNbits for payment processing.
Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 418bf6f8-212b-4bb0-a7a5-8231a061da4e
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: cce28acc-aeac-46ff-80ec-af4ade39e30f
Replit-Helium-Checkpoint-Created: true