Task #2: MVP Foundation — injectable services, DB schema, smoke test

DB schema
- jobs and invoices tables added to lib/db/src/schema/
- schema barrel updated (jobs, invoices, conversations, messages)
- pnpm --filter @workspace/db run push applied successfully
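The commit does not show the new tables themselves; a plausible Drizzle shape for jobs and invoices (column names here are guesses for illustration, not the committed schema) would be:

```typescript
import { pgTable, text, integer, timestamp } from "drizzle-orm/pg-core";

// Hypothetical columns — the real jobs/invoices schema is not shown in this summary.
export const invoices = pgTable("invoices", {
  id: text("id").primaryKey(),
  paymentHash: text("payment_hash").notNull(),
  paymentRequest: text("payment_request").notNull(),
  amountSats: integer("amount_sats").notNull(),
  paidAt: timestamp("paid_at"),
});

export const jobs = pgTable("jobs", {
  id: text("id").primaryKey(),
  invoiceId: text("invoice_id").references(() => invoices.id),
  requestText: text("request_text").notNull(),
  status: text("status").notNull().default("pending"),
  createdAt: timestamp("created_at").defaultNow().notNull(),
});
```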

LNbitsService (artifacts/api-server/src/lib/lnbits.ts)
- Injectable class accepting optional { url, apiKey } config
- Falls back to LNBITS_URL / LNBITS_API_KEY env vars
- Auto-detects stub mode when credentials are absent; logs warning
- createInvoice() -> { paymentHash, paymentRequest }
- checkInvoicePaid() -> boolean
- stubMarkPaid() helper for dev/test flows
- Real LNbits REST v1 calls wired behind the stub guard
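The stub-mode pattern described above can be sketched roughly as follows. This is a minimal sketch, not the actual service: the LNbits endpoint paths and response fields are assumptions based on the LNbits REST v1 API (`POST /api/v1/payments`, `X-Api-Key` header), and the stub state here is a plain in-memory Map:

```typescript
import { randomUUID } from "node:crypto";

export interface LNbitsConfig {
  url?: string;
  apiKey?: string;
}

export interface Invoice {
  paymentHash: string;
  paymentRequest: string;
}

export class LNbitsService {
  private readonly url?: string;
  private readonly apiKey?: string;
  private readonly stub: boolean;
  // In stub mode, paid-state lives in memory, keyed by payment hash.
  private readonly stubPaid = new Map<string, boolean>();

  constructor(config?: LNbitsConfig) {
    this.url = config?.url ?? process.env.LNBITS_URL;
    this.apiKey = config?.apiKey ?? process.env.LNBITS_API_KEY;
    this.stub = !this.url || !this.apiKey;
    if (this.stub) {
      console.warn("LNbitsService: no credentials found, running in stub mode");
    }
  }

  async createInvoice(amountSats: number, memo: string): Promise<Invoice> {
    if (this.stub) {
      const paymentHash = randomUUID().replace(/-/g, "");
      this.stubPaid.set(paymentHash, false);
      return { paymentHash, paymentRequest: `stub-invoice-${paymentHash}` };
    }
    // Assumed LNbits REST v1 shape: POST /api/v1/payments with out=false creates an invoice.
    const res = await fetch(`${this.url}/api/v1/payments`, {
      method: "POST",
      headers: { "X-Api-Key": this.apiKey!, "Content-Type": "application/json" },
      body: JSON.stringify({ out: false, amount: amountSats, memo }),
    });
    const data = (await res.json()) as { payment_hash: string; payment_request: string };
    return { paymentHash: data.payment_hash, paymentRequest: data.payment_request };
  }

  async checkInvoicePaid(paymentHash: string): Promise<boolean> {
    if (this.stub) return this.stubPaid.get(paymentHash) ?? false;
    const res = await fetch(`${this.url}/api/v1/payments/${paymentHash}`, {
      headers: { "X-Api-Key": this.apiKey! },
    });
    const data = (await res.json()) as { paid: boolean };
    return data.paid;
  }

  // Dev/test helper: flip a stub invoice to paid.
  stubMarkPaid(paymentHash: string): void {
    this.stubPaid.set(paymentHash, true);
  }
}
```

The point of the design is that callers never branch on stub mode themselves; the same create/check calls work in dev and against a real LNbits instance.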

AgentService (artifacts/api-server/src/lib/agent.ts)
- Injectable class with configurable evalModel / workModel
- evaluateRequest(text) -> { accepted: boolean, reason: string }
  uses claude-haiku-4-5; strips markdown fences before JSON parse
- executeWork(text) -> { result: string } uses claude-sonnet-4-6
- Wired via Replit Anthropic AI Integration (no user API key)

PricingService (artifacts/api-server/src/lib/pricing.ts)
- Injectable class with configurable fee/bucket thresholds
- calculateEvalFeeSats() -> 10 sats (fixed)
- calculateWorkFeeSats(text) -> 50/100/250 by char-length bucket
- Zero LLM involvement; fully deterministic
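Because pricing is deterministic, the bucket logic can be sketched directly. Note the thresholds below (280 and 2,000 characters) are illustrative guesses — the summary states the fees (50/100/250) but not the actual cutoffs:

```typescript
export interface PricingConfig {
  evalFeeSats?: number;
  // Buckets sorted by ascending maxChars; the last entry catches everything else.
  buckets?: { maxChars: number; feeSats: number }[];
}

export class PricingService {
  private readonly evalFeeSats: number;
  private readonly buckets: { maxChars: number; feeSats: number }[];

  constructor(config?: PricingConfig) {
    this.evalFeeSats = config?.evalFeeSats ?? 10;
    this.buckets = config?.buckets ?? [
      { maxChars: 280, feeSats: 50 },       // short requests (threshold assumed)
      { maxChars: 2000, feeSats: 100 },     // medium requests (threshold assumed)
      { maxChars: Infinity, feeSats: 250 }, // anything longer
    ];
  }

  calculateEvalFeeSats(): number {
    return this.evalFeeSats;
  }

  calculateWorkFeeSats(text: string): number {
    // The Infinity bucket guarantees a match, so the assertion is safe.
    const bucket = this.buckets.find((b) => text.length <= b.maxChars)!;
    return bucket.feeSats;
  }
}
```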

Smoke test (scripts/src/smoke.ts)
- pnpm --filter @workspace/scripts run smoke
- Verifies LNbits stub: create, check unpaid, mark paid, check paid
- Verifies Anthropic: evaluateRequest round-trip
- Both checks passed
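A smoke script of this shape boils down to running named async checks in order and failing fast. A generic harness (hypothetical — not the actual scripts/src/smoke.ts) might look like:

```typescript
type Check = { name: string; run: () => Promise<void> };

// Runs each check in sequence; a throw aborts the remaining checks,
// so a failing LNbits check would prevent the Anthropic check from running.
async function runSmoke(checks: Check[]): Promise<string[]> {
  const passed: string[] = [];
  for (const check of checks) {
    await check.run();
    passed.push(check.name);
    console.log(`ok - ${check.name}`);
  }
  return passed;
}
```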

replit.md
- Documented the required secrets (LNBITS_URL, LNBITS_API_KEY) and the auto-provisioned ones
- Stub-mode behaviour explained
alexpaynex
2026-03-18 15:14:23 +00:00
parent e163a5d0fe
commit fbc9bbc046
8 changed files with 251 additions and 88 deletions

artifacts/api-server/src/lib/agent.ts

import { anthropic } from "@workspace/integrations-anthropic-ai";
import type Anthropic from "@anthropic-ai/sdk";

export interface EvalResult {
  accepted: boolean;
  reason: string;
}

export interface WorkResult {
  result: string;
}

export interface AgentConfig {
  evalModel?: string;
  workModel?: string;
}

export class AgentService {
  private readonly evalModel: string;
  private readonly workModel: string;

  constructor(config?: AgentConfig) {
    this.evalModel = config?.evalModel ?? "claude-haiku-4-5";
    this.workModel = config?.workModel ?? "claude-sonnet-4-6";
  }

  async evaluateRequest(requestText: string): Promise<EvalResult> {
    const message = await anthropic.messages.create({
      model: this.evalModel,
      max_tokens: 8192,
      system: `You are Timmy, an AI agent gatekeeper. Evaluate whether a request is acceptable to act on.
ACCEPT if the request is: clear enough to act on, ethical, lawful, and within the capability of a general-purpose AI.
REJECT if the request is: harmful, illegal, unethical, incoherent, or spam.
Respond ONLY with valid JSON: {"accepted": true, "reason": "..."} or {"accepted": false, "reason": "..."}`,
      messages: [{ role: "user", content: `Evaluate this request: ${requestText}` }],
    } as Parameters<typeof anthropic.messages.create>[0]);

    const block = message.content[0] as Anthropic.TextBlock;
    if (block.type !== "text") {
      throw new Error("Unexpected non-text response from eval model");
    }

    let parsed: { accepted: boolean; reason: string };
    try {
      // The model occasionally wraps its JSON in markdown fences; strip them first.
      const raw = block.text
        .replace(/^```(?:json)?\s*/i, "")
        .replace(/\s*```$/, "")
        .trim();
      parsed = JSON.parse(raw) as { accepted: boolean; reason: string };
    } catch {
      throw new Error(`Failed to parse eval JSON: ${block.text}`);
    }
    return { accepted: Boolean(parsed.accepted), reason: parsed.reason ?? "" };
  }

  async executeWork(requestText: string): Promise<WorkResult> {
    const message = await anthropic.messages.create({
      model: this.workModel,
      max_tokens: 8192,
      system: `You are Timmy, a capable AI agent. A user has paid for you to handle their request.
Fulfill it thoroughly and helpfully. Be concise yet complete.`,
      messages: [{ role: "user", content: requestText }],
    } as Parameters<typeof anthropic.messages.create>[0]);

    const block = message.content[0] as Anthropic.TextBlock;
    if (block.type !== "text") {
      throw new Error("Unexpected non-text response from work model");
    }
    return { result: block.text };
  }
}

export const agentService = new AgentService();
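The fence-stripping step in evaluateRequest can be exercised in isolation. This standalone mirror builds the same regexes via the RegExp constructor ("\x60" is the backtick character; escapes are used only to keep the example self-contained):

```typescript
// Hypothetical standalone mirror of the fence-stripping done in evaluateRequest.
const OPEN = new RegExp("^\\x60{3}(?:json)?\\s*", "i");
const CLOSE = new RegExp("\\s*\\x60{3}$");

function stripFences(text: string): string {
  return text.replace(OPEN, "").replace(CLOSE, "").trim();
}

// A fenced model response, built from escapes: ```json\n{"accepted": true, ...}\n```
const fenced = "\x60\x60\x60json\n{\"accepted\": true, \"reason\": \"ok\"}\n\x60\x60\x60";
console.log(JSON.parse(stripFences(fenced))); // logs the parsed object
```

Plain JSON without fences passes through unchanged, so the strip is safe to apply unconditionally before JSON.parse.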