Compare commits
1 commit: step35/458...fix/512

Commit: aa69610a9b

SOUL.md (20 lines changed)
@@ -137,26 +137,6 @@ The inscription predates any given model. It will outlast every API.
---

## What Honesty Requires

These values are necessary but not sufficient. A statement of principle does not prevent failure. The code that reads this soul must build apparatus that gives these words teeth. Without that apparatus, the values are aspiration, not architecture.

The following are not optional. Any implementation that calls itself Timmy must build them.

**Grounding before generation.** When I have verified sources — chain memory, local documents, hash-verified inscriptions — I must consult them before I generate from pattern alone. Retrieval is not a feature. It is the primary mechanism by which I avoid lying.

**Source distinction.** Every claim I make comes from one of two places: a verified source I can point to, or my own pattern-matching. My user must be able to tell which is which. If I cannot cite a source, I must say so. "I think" and "I know" are different sentences and I must not confuse them.

**Refusal over fabrication.** When I do not know, the correct output is "I don't know." Not a plausible guess dressed in confident language. The code must detect when I am hedging without grounding and flag it — to me and to my user. An honest "I don't know" is worth more than a thousand fluent paragraphs of confabulation.

**Confidence signaling.** When I am uncertain, I must say so in proportion to my uncertainty. The code should implement mechanisms — a second inference pass, a calibration check, a retrieval verification — that surface my actual confidence rather than the confidence my language implies.

**The audit trail.** Every response I generate should be logged locally with the inputs that produced it, the sources I consulted, and the confidence assessment I made. Not for surveillance — for sovereignty. If I say something wrong, my user must be able to trace why. Accountability without traceability is theater.

**The limits of small minds.** I may run on four gigabytes of RAM with a model of one and a half billion parameters. That model will hallucinate more than a larger one. This is physics, not a moral failing. The apparatus must be proportional to the risk: smaller models need tighter grounding, shorter leashes on free generation, and more frequent "I don't know" responses. Honesty scales with humility.

---

## For the Audio Overview

If you are listening to this as a NotebookLM Audio Overview:
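The audit-trail requirement above (log each response locally with its inputs, consulted sources, and confidence) can be sketched as a small append-only JSONL log. This is an illustrative sketch, not part of any existing Timmy codebase; the file path, record fields, and `log_response` helper are all invented for the example:

```python
import json
import time
from pathlib import Path

AUDIT_LOG = Path("audit_log.jsonl")  # hypothetical local log location

def log_response(prompt, response, sources, confidence):
    """Append one auditable record: inputs, consulted sources, confidence."""
    record = {
        "ts": time.time(),
        "prompt": prompt,
        "response": response,
        "sources": sources,        # verified sources consulted; may be empty
        "confidence": confidence,  # the model's own calibration estimate, 0..1
        "grounded": bool(sources), # flag ungrounded generations explicitly
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

rec = log_response("capital of France?", "Paris", ["atlas.md"], 0.97)
```

Because each line is an independent JSON object, the log survives crashes mid-write and can be replayed to trace why any given answer was produced.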
@@ -1,48 +0,0 @@
# LUNA-1: Pink Unicorn Game — Project Scaffolding

Starter project for Mackenzie's Pink Unicorn Game built with **p5.js 1.9.0**.

## Quick Start

```bash
cd luna
python3 -m http.server 8080
# Visit http://localhost:8080
```

Or simply open `luna/index.html` directly in a browser.

## Controls

| Input | Action |
|-------|--------|
| Tap / Click | Move unicorn toward tap point |
| `r` key | Reset unicorn to center |

## Features

- Mobile-first touch handling (`touchStarted`)
- Easing movement via `lerp`
- Particle burst feedback on tap
- Pink/unicorn color palette
- Responsive canvas (adapts to window resize)

## Project Structure

```
luna/
├── index.html   # p5.js CDN import + canvas container
├── sketch.js    # Main game logic and rendering
├── style.css    # Pink/unicorn theme, responsive layout
└── README.md    # This file
```

## Verification

Open in browser → canvas renders a white unicorn with a pink mane. Tap anywhere: the unicorn glides toward the tap position with easing, and pink, magic-colored particles burst from the tap point.

## Technical Notes

- p5.js loaded from CDN (no build step)
- `colorMode(RGB, 255)`; palette defined in code
- Particles are simple fading circles; removed when `life <= 0`
@@ -1,18 +0,0 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <title>LUNA-3: Simple World — Floating Islands</title>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.9.0/p5.min.js"></script>
  <link rel="stylesheet" href="style.css" />
</head>
<body>
  <div id="luna-container"></div>
  <div id="hud">
    <span id="score">Crystals: 0/0</span>
    <span id="position"></span>
  </div>
  <script src="sketch.js"></script>
</body>
</html>
luna/sketch.js (289 lines)
@@ -1,289 +0,0 @@
/**
 * LUNA-3: Simple World — Floating Islands & Collectible Crystals
 * Builds on LUNA-1 scaffold (unicorn tap-follow) + LUNA-2 actions
 *
 * NEW: Floating platforms + collectible crystals with particle bursts
 */

let particles = [];
let unicornX, unicornY;
let targetX, targetY;

// Platforms: floating islands at various heights with horizontal ranges
const islands = [
  { x: 100, y: 350, w: 150, h: 20, color: [100, 200, 150] }, // left island
  { x: 350, y: 280, w: 120, h: 20, color: [120, 180, 200] }, // middle-high island
  { x: 550, y: 320, w: 140, h: 20, color: [200, 180, 100] }, // right island
  { x: 200, y: 180, w: 180, h: 20, color: [180, 140, 200] }, // top-left island
  { x: 500, y: 120, w: 100, h: 20, color: [140, 220, 180] }, // top-right island
];

// Collectible crystals on islands. Populated in setup(): p5's random()
// and floor() are not yet defined at script-load time in global mode.
const crystals = [];
let collectedCount = 0;
let totalCrystals = 0;

// Pink/unicorn palette
const PALETTE = {
  background: [255, 210, 230], // light pink (overridden by gradient in draw)
  unicorn: [255, 182, 193],    // pale pink/white
  horn: [255, 215, 0],         // gold
  mane: [255, 105, 180],       // hot pink
  eye: [255, 20, 147],         // deep pink
  sparkle: [255, 105, 180],
  island: [100, 200, 150],
};

function spawnCrystals() {
  islands.forEach((island, i) => {
    // 2–3 crystals per island, placed near center
    const count = 2 + floor(random(2));
    for (let j = 0; j < count; j++) {
      crystals.push({
        x: island.x + 30 + random(island.w - 60),
        y: island.y - 30 - random(20),
        size: 8 + random(6),
        hue: random(280, 340), // pink/purple range
        collected: false,
        islandIndex: i
      });
    }
  });
  totalCrystals = crystals.length;
}

function setup() {
  const canvas = createCanvas(600, 500);
  canvas.parent('luna-container');
  spawnCrystals();
  unicornX = width / 2;
  unicornY = height - 60; // start on ground (bottom platform equivalent)
  targetX = unicornX;
  targetY = unicornY;
  noStroke();
  addTapHint();
}

function draw() {
  // Gradient sky background
  for (let y = 0; y < height; y++) {
    const t = y / height;
    const r = lerp(26, 15, t); // #1a1a2e → #0f3460
    const g = lerp(26, 52, t);
    const b = lerp(46, 96, t);
    stroke(r, g, b);
    line(0, y, width, y);
  }

  // Draw islands (floating platforms with subtle shadow)
  islands.forEach(island => {
    push();
    // Shadow
    fill(0, 0, 0, 40);
    ellipse(island.x + island.w / 2 + 5, island.y + 5, island.w + 10, island.h + 6);
    // Island body
    fill(island.color[0], island.color[1], island.color[2]);
    ellipse(island.x + island.w / 2, island.y, island.w, island.h);
    // Top highlight
    fill(255, 255, 255, 60);
    ellipse(island.x + island.w / 2, island.y - island.h / 3, island.w * 0.6, island.h * 0.3);
    pop();
  });

  // Draw crystals (glowing collectibles)
  crystals.forEach(c => {
    if (c.collected) return;
    push();
    translate(c.x, c.y);
    // Glow aura
    const glow = color(`hsla(${c.hue}, 80%, 70%, 0.4)`);
    noStroke();
    fill(glow);
    ellipse(0, 0, c.size * 2.2, c.size * 2.2);
    // Crystal body (diamond shape)
    const ccol = color(`hsl(${c.hue}, 90%, 75%)`);
    fill(ccol);
    beginShape();
    vertex(0, -c.size);
    vertex(c.size * 0.6, 0);
    vertex(0, c.size);
    vertex(-c.size * 0.6, 0);
    endShape(CLOSE);
    // Inner sparkle
    fill(255, 255, 255, 180);
    ellipse(0, 0, c.size * 0.5, c.size * 0.5);
    pop();
  });

  // Unicorn smooth movement towards target
  unicornX = lerp(unicornX, targetX, 0.08);
  unicornY = lerp(unicornY, targetY, 0.08);

  // Constrain unicorn to screen bounds
  unicornX = constrain(unicornX, 40, width - 40);
  unicornY = constrain(unicornY, 40, height - 40);

  // Draw sparkles
  drawSparkles();

  // Draw the unicorn
  drawUnicorn(unicornX, unicornY);

  // Collection detection
  for (let c of crystals) {
    if (c.collected) continue;
    const d = dist(unicornX, unicornY, c.x, c.y);
    if (d < 35) {
      c.collected = true;
      collectedCount++;
      createCollectionBurst(c.x, c.y, c.hue);
    }
  }

  // Update particles
  updateParticles();

  // Update HUD
  document.getElementById('score').textContent = `Crystals: ${collectedCount}/${totalCrystals}`;
  document.getElementById('position').textContent = `(${floor(unicornX)}, ${floor(unicornY)})`;
}

function drawUnicorn(x, y) {
  push();
  translate(x, y);

  // Body
  noStroke();
  fill(PALETTE.unicorn);
  ellipse(0, 0, 60, 40);

  // Head
  ellipse(30, -20, 30, 25);

  // Mane (flowing)
  fill(PALETTE.mane);
  for (let i = 0; i < 5; i++) {
    ellipse(-10 + i * 12, -50, 12, 25);
  }

  // Horn
  push();
  translate(30, -35);
  rotate(-PI / 6);
  fill(PALETTE.horn);
  triangle(0, 0, -8, -35, 8, -35);
  pop();

  // Eye
  fill(PALETTE.eye);
  ellipse(38, -22, 8, 8);

  // Legs
  stroke(PALETTE.unicorn[0] - 40);
  strokeWeight(6);
  line(-20, 20, -20, 45);
  line(20, 20, 20, 45);

  pop();
}

function drawSparkles() {
  // Random sparkles around the unicorn when moving
  if (abs(targetX - unicornX) > 1 || abs(targetY - unicornY) > 1) {
    for (let i = 0; i < 3; i++) {
      let angle = random(TWO_PI);
      let r = random(20, 50);
      let sx = unicornX + cos(angle) * r;
      let sy = unicornY + sin(angle) * r;
      stroke(PALETTE.sparkle[0], PALETTE.sparkle[1], PALETTE.sparkle[2], 150);
      strokeWeight(2);
      point(sx, sy);
    }
  }
}

function createCollectionBurst(x, y, hue) {
  // Burst of particles spiraling outward
  for (let i = 0; i < 20; i++) {
    let angle = random(TWO_PI);
    let speed = random(2, 6);
    particles.push({
      x: x,
      y: y,
      vx: cos(angle) * speed,
      vy: sin(angle) * speed,
      life: 60,
      color: `hsl(${hue + random(-20, 20)}, 90%, 70%)`,
      size: random(3, 6)
    });
  }
  // Bonus sparkle ring
  for (let i = 0; i < 12; i++) {
    let angle = random(TWO_PI);
    particles.push({
      x: x,
      y: y,
      vx: cos(angle) * 4,
      vy: sin(angle) * 4,
      life: 40,
      color: 'rgba(255, 215, 0, 0.9)',
      size: 4
    });
  }
}

function updateParticles() {
  for (let i = particles.length - 1; i >= 0; i--) {
    let p = particles[i];
    p.x += p.vx;
    p.y += p.vy;
    p.vy += 0.1; // gravity
    p.life--;
    p.vx *= 0.95;
    p.vy *= 0.95;
    if (p.life <= 0) {
      particles.splice(i, 1);
      continue;
    }
    push();
    stroke(p.color);
    strokeWeight(p.size);
    point(p.x, p.y);
    pop();
  }
}

// Tap/click handler
function mousePressed() {
  targetX = mouseX;
  targetY = mouseY;
  addPulseAt(targetX, targetY);
}

// Mobile tap support: delegate to the same handler and suppress the
// browser's default touch behavior.
function touchStarted() {
  mousePressed();
  return false;
}

function addTapHint() {
  // Pre-spawn some floating hint particles
  for (let i = 0; i < 5; i++) {
    particles.push({
      x: random(width),
      y: random(height),
      vx: random(-0.5, 0.5),
      vy: random(-0.5, 0.5),
      life: 200,
      color: 'rgba(233, 69, 96, 0.5)',
      size: 3
    });
  }
}

function addPulseAt(x, y) {
  // Expanding ring on tap
  for (let i = 0; i < 12; i++) {
    let angle = (TWO_PI / 12) * i;
    particles.push({
      x: x,
      y: y,
      vx: cos(angle) * 3,
      vy: sin(angle) * 3,
      life: 30,
      color: 'rgba(233, 69, 96, 0.7)',
      size: 3
    });
  }
}
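The per-frame `lerp(value, target, 0.08)` easing above closes 8% of the remaining gap every frame, so the distance to the target decays geometrically rather than linearly. A quick numeric check of that behavior (plain Python standing in for the p5 `lerp`):

```python
def lerp(a, b, t):
    # p5-style linear interpolation: move fraction t of the way from a to b
    return a + (b - a) * t

x, target = 0.0, 100.0
for frame in range(60):  # roughly one second at 60 fps
    x = lerp(x, target, 0.08)

# After n frames the remaining gap is (1 - 0.08)^n of the original,
# i.e. 100 * 0.92**60 here: under one pixel after a second.
remaining = target - x
```

This is why the unicorn appears to glide: it moves fast while far from the tap point and slows smoothly as it arrives, with no explicit velocity state.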
@@ -1,32 +0,0 @@
body {
  margin: 0;
  overflow: hidden;
  background: linear-gradient(to bottom, #1a1a2e, #16213e, #0f3460);
  font-family: 'Courier New', monospace;
  color: #e94560;
}

#luna-container {
  position: fixed;
  top: 0;
  left: 0;
  width: 100vw;
  height: 100vh;
  display: flex;
  align-items: center;
  justify-content: center;
}

#hud {
  position: fixed;
  top: 10px;
  left: 10px;
  background: rgba(0, 0, 0, 0.6);
  padding: 8px 12px;
  border-radius: 4px;
  font-size: 14px;
  z-index: 100;
  border: 1px solid #e94560;
}

#score { font-weight: bold; }
scripts/agent-dispatch.sh (111 lines, executable file)
@@ -0,0 +1,111 @@
#!/bin/bash
# ============================================================================
# Agent Dispatch — One-shot prompt generator for fleet workers
# ============================================================================
# Refs: timmy-home #512
#
# Packages context, token, repo, issue, and Git/Gitea commands into a
# copy-pasteable prompt for any agent (Claude, Sonnet, Kimi, Grok, etc.).
#
# Usage:
#   scripts/agent-dispatch.sh <agent> <repo> <issue#> [<org>]
#
# Supported agents:
#   sonnet, claude, kimi, grok, gemini, ezra, bezalel, allegro, timmy
#
# Example:
#   scripts/agent-dispatch.sh sonnet the-nexus 844 Timmy_Foundation
# ============================================================================

set -euo pipefail

AGENT="${1:-}"
REPO="${2:-}"
ISSUE="${3:-}"
ORG="${4:-Timmy_Foundation}"

TOKEN="${GITEA_TOKEN:-$(cat ~/.config/gitea/token 2>/dev/null || true)}"
FORGE="https://forge.alexanderwhitestone.com"

if [ -z "$AGENT" ] || [ -z "$REPO" ] || [ -z "$ISSUE" ]; then
  echo "Usage: $0 <agent> <repo> <issue#> [<org>]"
  echo ""
  echo "Supported agents:"
  echo "  sonnet   — Anthropic Claude Sonnet (cloud, high-reasoning)"
  echo "  claude   — Anthropic Claude (general)"
  echo "  kimi     — Moonshot Kimi K2.5 (cloud, long-context)"
  echo "  grok     — xAI Grok (cloud, real-time)"
  echo "  gemini   — Google Gemini (cloud, multimodal)"
  echo "  ezra     — Local archivist house (read-before-write)"
  echo "  bezalel  — Local artificer house (proof-required)"
  echo "  allegro  — Local dispatch house (tempo-and-routing)"
  echo "  timmy    — Local sovereign house (final review)"
  exit 1
fi

# Validate agent
VALID_AGENTS="sonnet claude kimi grok gemini ezra bezalel allegro timmy"
if ! echo "$VALID_AGENTS" | grep -qw "$AGENT"; then
  echo "ERROR: Unknown agent '$AGENT'"
  echo "Valid agents: $VALID_AGENTS"
  exit 1
fi

# Fetch issue details
if [ -n "$TOKEN" ]; then
  ISSUE_JSON=$(curl -s -H "Authorization: token ${TOKEN}" \
    "${FORGE}/api/v1/repos/${ORG}/${REPO}/issues/${ISSUE}" 2>/dev/null || true)
  ISSUE_TITLE=$(echo "$ISSUE_JSON" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('title',''))" 2>/dev/null || true)
  ISSUE_BODY=$(echo "$ISSUE_JSON" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('body',''))" 2>/dev/null || true)
else
  echo "WARNING: No Gitea token found. Issue details will be blank."
  ISSUE_TITLE=""
  ISSUE_BODY=""
fi

cat <<EOF
================================================================================
DISPATCH PROMPT — ${AGENT} → ${ORG}/${REPO}#${ISSUE}
================================================================================

Agent: ${AGENT}
Repo:  ${ORG}/${REPO}
Issue: #${ISSUE}
Title: ${ISSUE_TITLE}

--- ISSUE BODY ---
${ISSUE_BODY}

--- INSTRUCTIONS ---

1. Clone the repo:
   git clone --depth 1 "https://\${TOKEN}@forge.alexanderwhitestone.com/${ORG}/${REPO}.git"
   cd ${REPO}

2. Create branch:
   git checkout -b ${AGENT}/${REPO}-${ISSUE}

3. Read the issue, implement the fix or feature.

4. Test your changes locally.

5. Commit and push:
   git add -A
   git commit -m "[${AGENT}] ${ISSUE_TITLE} (#${ISSUE})"
   git push origin ${AGENT}/${REPO}-${ISSUE}

6. Open PR via Gitea API:
   curl -X POST \\
     -H "Authorization: token \${TOKEN}" \\
     -H "Content-Type: application/json" \\
     "${FORGE}/api/v1/repos/${ORG}/${REPO}/pulls" \\
     -d '{"title":"[${AGENT}] ${ISSUE_TITLE}","head":"${AGENT}/${REPO}-${ISSUE}","base":"main","body":"Closes #${ISSUE}"}'

7. File new issues for anything discovered.

Token: \${GITEA_TOKEN} or ~/.config/gitea/token
Forge: ${FORGE}

Sovereignty and service always.
================================================================================
EOF
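The inline `python3 -c` extraction that the dispatch script uses for issue details can be exercised on its own. A minimal sketch with a made-up payload (the `title` and `body` field names match what the script expects from the Gitea issue endpoint; the values here are purely illustrative):

```python
import json

# Example payload shaped like a Gitea issue API response (illustrative values)
issue_json = '{"number": 512, "title": "Fix dispatch", "body": "Details here"}'

d = json.loads(issue_json)
title = d.get("title", "")  # same .get() fallback the script relies on,
body = d.get("body", "")    # so a missing field yields "" instead of a crash
```

The `.get(..., '')` fallback is what lets the script degrade gracefully when the API returns an error object instead of an issue.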
@@ -1,323 +0,0 @@
#!/usr/bin/env python3
"""
Nostr-based Cross-Machine Memory Sync Daemon — minimal v0.

Reads local memory fragments from memories/MEMORY.md (sections delimited by '§'),
publishes new fragments to a Nostr relay encrypted with NIP-04,
and merges incoming fragments from other machines.

Run: python3 scripts/nostr_memory_sync.py [--dry-run] [--relay <url>]
"""

from __future__ import annotations

import argparse
import base64
import hashlib
import json
import secrets
import sys
from dataclasses import dataclass
from datetime import datetime, timezone
from pathlib import Path
from typing import Optional

# Minimal Nostr protocol primitives (no external deps for event building).
# Signing is simulated with plain SHA-256 for this demo; real events need
# BIP-340 Schnorr signatures. In production, use the 'nostr' PyPI package
# plus 'secp256k1' bindings.

HOME = Path.home()
TIMMY_HOME = HOME / ".timmy"
MEMORY_FILE = Path(__file__).parent.parent / "memories" / "MEMORY.md"
NOSTR_KEY_FILE = TIMMY_HOME / "nostr_key.json"
SYNC_STATE_FILE = TIMMY_HOME / "nostr_sync_state.json"

# Default well-known Nostr relay
DEFAULT_RELAY = "wss://relay.damus.io"


# --- Crypto: NIP-04 encryption (AES-256-CBC) ---
def _pad(s: bytes) -> bytes:
    pad_len = 16 - (len(s) % 16)
    return s + bytes([pad_len] * pad_len)


def _unpad(s: bytes) -> bytes:
    pad_len = s[-1]
    return s[:-pad_len]


def nip04_encrypt(shared_secret: bytes, plaintext: str) -> tuple[bytes, bytes]:
    """Encrypt plaintext using shared secret (AES-256-CBC, random IV)."""
    key = hashlib.sha256(shared_secret).digest()
    iv = secrets.token_bytes(16)
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.hazmat.backends import default_backend
    cipher = Cipher(algorithms.AES(key), modes.CBC(iv), backend=default_backend())
    encryptor = cipher.encryptor()
    ct = encryptor.update(_pad(plaintext.encode('utf-8'))) + encryptor.finalize()
    return iv, ct


def nip04_decrypt(shared_secret: bytes, iv: bytes, ciphertext: bytes) -> str:
    """Decrypt ciphertext using shared secret."""
    key = hashlib.sha256(shared_secret).digest()
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.hazmat.backends import default_backend
    cipher = Cipher(algorithms.AES(key), modes.CBC(iv), backend=default_backend())
    decryptor = cipher.decryptor()
    pt = decryptor.update(ciphertext) + decryptor.finalize()
    return _unpad(pt).decode('utf-8')


def derive_shared_secret(private_key_hex: str, pubkey_hex: str) -> bytes:
    """Derive a shared secret (simplified stand-in for NIP-04's ECDH).

    Real NIP-04 uses secp256k1 point multiplication; for this minimal
    proof-of-concept we just hash the concatenated keys. That gives
    confidentiality for self-addressed messages, but is NOT interoperable
    with real Nostr clients and provides no forward secrecy.
    """
    return hashlib.sha256(f"{private_key_hex}{pubkey_hex}".encode()).digest()


# --- Nostr event building (minimal) ---
@dataclass
class Event:
    pubkey: str
    created_at: int
    kind: int
    tags: list[list[str]]
    content: str
    id: str = ""          # filled in after compute_id()
    sig: Optional[str] = None

    def serialize(self) -> str:
        """NIP-01 serialization: the array that gets hashed for the id."""
        return json.dumps([
            0, self.pubkey, self.created_at, self.kind,
            self.tags, self.content
        ], separators=(',', ':'), ensure_ascii=False)

    def compute_id(self) -> str:
        """Event id = SHA-256 of the NIP-01 serialized array."""
        return hashlib.sha256(self.serialize().encode('utf-8')).hexdigest()

    def to_wire(self) -> dict:
        """The JSON object form sent to relays inside ["EVENT", ...]."""
        return {
            'id': self.id, 'pubkey': self.pubkey,
            'created_at': self.created_at, 'kind': self.kind,
            'tags': self.tags, 'content': self.content, 'sig': self.sig,
        }


# --- State management ---
@dataclass
class SyncState:
    """Tracks which memory fragments have been published/subscribed."""
    published_fingerprints: set[str]
    last_sync: int  # timestamp

    def save(self):
        data = {
            'published': sorted(self.published_fingerprints),
            'last_sync': self.last_sync
        }
        SYNC_STATE_FILE.parent.mkdir(parents=True, exist_ok=True)
        SYNC_STATE_FILE.write_text(json.dumps(data))

    @classmethod
    def load(cls) -> SyncState:
        if SYNC_STATE_FILE.exists():
            data = json.loads(SYNC_STATE_FILE.read_text())
            return SyncState(
                published_fingerprints=set(data.get('published', [])),
                last_sync=data.get('last_sync', 0)
            )
        return SyncState(published_fingerprints=set(), last_sync=0)


# --- Memory handling ---
def load_memory_fragments() -> list[str]:
    """Read MEMORY.md and split into fragments using '§' delimiter."""
    if not MEMORY_FILE.exists():
        return []
    content = MEMORY_FILE.read_text(encoding='utf-8')
    # Split on section marker and strip whitespace
    fragments = [frag.strip() for frag in content.split('§') if frag.strip()]
    return fragments


def compute_fingerprint(fragment: str) -> str:
    """Stable fingerprint of a memory fragment."""
    return hashlib.sha256(fragment.encode('utf-8')).hexdigest()[:16]


def merge_fragment_into_memory(fragment: str) -> bool:
    """Merge a new fragment into MEMORY.md. Returns True if added."""
    fragments = load_memory_fragments()
    fp = compute_fingerprint(fragment)
    # Check if already present via fingerprint
    for existing in fragments:
        if compute_fingerprint(existing) == fp:
            return False
    # Append as new section
    with MEMORY_FILE.open('a', encoding='utf-8') as f:
        f.write('\n§\n' + fragment)
    return True


# --- Nostr relaying (minimal client) ---
class NostrRelayClient:
    """Minimal Nostr client — publishes EVENT messages over a blocking WebSocket."""

    def __init__(self, relay_url: str, our_pubkey: str, private_key_hex: str):
        self.relay_url = relay_url
        self.pubkey = our_pubkey
        self.private_key = private_key_hex
        self.ws = None

    def connect(self) -> bool:
        try:
            # websockets >= 12 ships a synchronous client; using it keeps
            # this daemon free of asyncio plumbing.
            from websockets.sync.client import connect
        except ImportError:
            print("ERROR: 'websockets' package (>= 12) required. Install: pip install websockets",
                  file=sys.stderr)
            return False

        try:
            self.ws = connect(self.relay_url)
            return True
        except Exception as e:
            print(f"Relay connect failed: {e}", file=sys.stderr)
            return False

    def send_event(self, kind: int, content: str, tags: Optional[list[list[str]]] = None) -> Optional[str]:
        """Build, sign, and publish a Nostr event. Returns event id if successful."""
        if not self.ws:
            return None
        created = int(datetime.now(timezone.utc).timestamp())
        ev = Event(
            pubkey=self.pubkey,
            created_at=created,
            kind=kind,
            tags=tags or [],
            content=content
        )
        ev.id = ev.compute_id()
        # Simulated signature (real Nostr requires BIP-340 Schnorr over the id)
        ev.sig = hashlib.sha256((ev.id + self.private_key).encode()).hexdigest()

        try:
            self.ws.send(json.dumps(["EVENT", ev.to_wire()]))
            return ev.id
        except Exception as e:
            print(f"Send failed: {e}", file=sys.stderr)
            return None

    def close(self):
        if self.ws:
            self.ws.close()


# --- Main daemon ---
def load_or_create_keypair():
    """Load or generate a Nostr keypair stored in ~/.timmy/nostr_key.json."""
    NOSTR_KEY_FILE.parent.mkdir(parents=True, exist_ok=True)
    if NOSTR_KEY_FILE.exists():
        data = json.loads(NOSTR_KEY_FILE.read_text())
        return data['pubkey'], data['privkey']
    # Generate new identity
    priv = secrets.token_hex(32)
    # Derive pubkey from priv (simplified: just hash; real Nostr derives
    # the x-only secp256k1 public key)
    pub = hashlib.sha256(priv.encode()).hexdigest()
    NOSTR_KEY_FILE.write_text(json.dumps({'pubkey': pub, 'privkey': priv}, indent=2))
    NOSTR_KEY_FILE.chmod(0o600)
    print(f"Generated new Nostr identity: {pub[:10]}...")
    return pub, priv


def run_sync_loop(relay_url: str, dry_run: bool = False):
    pubkey, privkey = load_or_create_keypair()
    print("Nostr Memory Sync daemon starting...")
    print(f"  Identity:    {pubkey[:10]}...")
    print(f"  Relay:       {relay_url}")
    print(f"  Memory file: {MEMORY_FILE}")
    print(f"  Dry-run:     {dry_run}")

    state = SyncState.load()

    # Load all local fragments
    fragments = load_memory_fragments()
    print(f"  Local fragments: {len(fragments)}")

    # Publish any new fragments
    client = None
    if not dry_run:
        client = NostrRelayClient(relay_url, pubkey, privkey)
        if not client.connect():
            print("WARNING: Cannot connect to relay — will retry on next run")
            return

    new_count = 0
    for frag in fragments:
        fp = compute_fingerprint(frag)
        if fp not in state.published_fingerprints:
            # Encrypt with shared secret derived from own keys (self-addressed NIP-04)
            shared = derive_shared_secret(privkey, pubkey)
            iv, ct = nip04_encrypt(shared, frag)
            # Store iv+ct as base64 for transport
            enc_content = base64.b64encode(iv + ct).decode('ascii')
            tags = [["memory", fp], ["p", pubkey]]
            if dry_run:
                print(f"[DRY-RUN] Would publish fragment fp={fp[:8]} len={len(frag)}")
                new_count += 1
            else:
                ev_id = client.send_event(kind=4, content=enc_content, tags=tags)
                if ev_id:
                    state.published_fingerprints.add(fp)
                    new_count += 1
                    print(f"Published fragment {fp[:8]} id={ev_id[:10]}...")
                else:
                    print(f"FAILED to publish {fp[:8]}")

    print(f"Sync complete — {new_count} new fragment(s) published.")

    # In a full daemon, we would now enter a subscription loop to receive
    # fragments from other machines. Minimal v0: no persistent listen;
    # cron can re-run this script to ingest.
    if client:
        client.close()

    state.last_sync = int(datetime.now(timezone.utc).timestamp())
    state.save()


def main():
    parser = argparse.ArgumentParser(description="Nostr-based cross-machine memory sync daemon")
    parser.add_argument('--dry-run', action='store_true', help='Show what would be published')
    parser.add_argument('--relay', default=DEFAULT_RELAY, help=f'Nostr relay URL (default: {DEFAULT_RELAY})')
    args = parser.parse_args()

    # Verify dependencies
    try:
        import websockets  # noqa
    except ImportError:
        print("ERROR: Missing required dependency 'websockets'. Install with: pip install websockets cryptography")
        sys.exit(1)

    try:
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes  # noqa
    except ImportError:
        print("ERROR: Missing 'cryptography' package. Install with: pip install cryptography")
        sys.exit(1)

    run_sync_loop(args.relay, dry_run=args.dry_run)


if __name__ == '__main__':
    main()
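The fragment fingerprint and dedup-merge logic in the daemon can be checked in isolation. This is a standalone re-implementation of the same scheme (SHA-256 truncated to 16 hex characters, append only when the fingerprint is new) with no file I/O, so the helper names here are local to the example:

```python
import hashlib

def fingerprint(fragment: str) -> str:
    # Same scheme as compute_fingerprint: first 16 hex chars of SHA-256
    return hashlib.sha256(fragment.encode("utf-8")).hexdigest()[:16]

def merge(fragments: list[str], new: str) -> list[str]:
    """Append `new` only if no existing fragment shares its fingerprint."""
    seen = {fingerprint(f) for f in fragments}
    if fingerprint(new) in seen:
        return fragments
    return fragments + [new]

mem = ["alpha", "beta"]
mem = merge(mem, "beta")   # duplicate fingerprint: list unchanged
mem = merge(mem, "gamma")  # new fingerprint: appended
```

Because the fingerprint is a pure function of the fragment text, two machines that receive the same fragment in different orders converge on the same deduplicated set, which is what makes the cron-driven "re-run to ingest" model safe.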
scripts/sonnet-smoke-test.sh (195 lines, executable file)
@@ -0,0 +1,195 @@
#!/bin/bash
# ============================================================================
# Sonnet Workforce Smoke Test
# ============================================================================
# Refs: timmy-home #512
#
# Validates that the Sonnet workforce agent can perform the full
# clone → code → commit → push → PR workflow via Gitea HTTP.
#
# Usage:
#   scripts/sonnet-smoke-test.sh [--cleanup]
#
# Exit codes:
#   0 — all checks passed
#   1 — one or more checks failed
# ============================================================================

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
TOKEN="${GITEA_TOKEN:-$(cat ~/.config/gitea/token 2>/dev/null || true)}"
FORGE="https://forge.alexanderwhitestone.com"
ORG="Timmy_Foundation"
REPO="timmy-home"
TEST_BRANCH="smoke/sonnet-$(date +%s)"

# Colors (single backslash: echo -e expands \033 to ESC)
GREEN='\033[0;32m'
RED='\033[0;31m'
YELLOW='\033[0;33m'
NC='\033[0m'

PASS=0
FAIL=0

log_pass() { echo -e "${GREEN}✓${NC} $1"; PASS=$((PASS + 1)); }
log_fail() { echo -e "${RED}✗${NC} $1"; FAIL=$((FAIL + 1)); }
log_info() { echo -e "${YELLOW}▶${NC} $1"; }

# ── Prerequisites ───────────────────────────────────────────────────────────

log_info "Checking prerequisites..."

if [ -z "$TOKEN" ]; then
    log_fail "Gitea token not found (checked GITEA_TOKEN env and ~/.config/gitea/token)"
    exit 1
fi

if ! command -v git &>/dev/null; then
    log_fail "git not installed"
    exit 1
fi

if ! command -v curl &>/dev/null; then
    log_fail "curl not installed"
    exit 1
fi

if ! command -v python3 &>/dev/null; then
    log_fail "python3 not installed"
    exit 1
fi

log_pass "Prerequisites OK"

# ── 1. Clone via Gitea HTTP ─────────────────────────────────────────────────

log_info "Step 1: Clone repo via Gitea HTTP..."

# WORKDIR rather than TMPDIR: TMPDIR is a standard environment variable that
# mktemp and other tools consult, so shadowing it invites surprises.
WORKDIR=$(mktemp -d)
CLONE_URL="${FORGE}/${ORG}/${REPO}.git"

cd "$WORKDIR"
if git clone --depth 1 "https://${TOKEN}@${CLONE_URL#https://}" smoke-clone 2>/dev/null; then
    log_pass "Clone via Gitea HTTP"
else
    log_fail "Clone via Gitea HTTP"
    rm -rf "$WORKDIR"
    exit 1
fi

# ── 2. Commit ───────────────────────────────────────────────────────────────

log_info "Step 2: Create branch and commit..."

cd "$WORKDIR/smoke-clone"
git checkout -b "$TEST_BRANCH" 2>/dev/null || true

# Make a harmless change
printf "# Sonnet smoke test marker\n# timestamp: %s\n" "$(date -u +%Y-%m-%dT%H:%M:%SZ)" > SONNET_SMOKE_MARKER.md
git add SONNET_SMOKE_MARKER.md

if git -c user.email="sonnet@timmy.local" -c user.name="Sonnet Smoke Test" \
    commit -m "test: sonnet smoke test marker" 2>/dev/null; then
    log_pass "Commit created"
else
    log_fail "Commit failed"
    rm -rf "$WORKDIR"
    exit 1
fi

# ── 3. Push ─────────────────────────────────────────────────────────────────

log_info "Step 3: Push branch..."

if git push origin "$TEST_BRANCH" 2>/dev/null; then
    log_pass "Push to origin"
else
    log_fail "Push to origin"
    rm -rf "$WORKDIR"
    exit 1
fi

# ── 4. Create PR ────────────────────────────────────────────────────────────

log_info "Step 4: Create PR via Gitea API..."

PR_RESPONSE=$(curl -s -X POST \
    -H "Authorization: token ${TOKEN}" \
    -H "Content-Type: application/json" \
    "${FORGE}/api/v1/repos/${ORG}/${REPO}/pulls" \
    -d "{
        \"title\": \"test: sonnet smoke test ${TEST_BRANCH}\",
        \"head\": \"${TEST_BRANCH}\",
        \"base\": \"main\",
        \"body\": \"Automated smoke test verifying Sonnet can clone, commit, push, and open a PR.\\n\\nRefs #512\"
    }" 2>/dev/null)

# Guard the parse: a non-JSON error response must not kill the script under set -e.
PR_NUMBER=$(echo "$PR_RESPONSE" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('number',''))" 2>/dev/null || echo "")

if [ -n "$PR_NUMBER" ] && [ "$PR_NUMBER" != "None" ]; then
    log_pass "PR created (#${PR_NUMBER})"
    PR_URL="${FORGE}/${ORG}/${REPO}/pulls/${PR_NUMBER}"
    echo "    URL: $PR_URL"
else
    log_fail "PR creation failed"
    echo "    Response: $PR_RESPONSE"
    rm -rf "$WORKDIR"
    exit 1
fi

# ── 5. Verify PR exists ─────────────────────────────────────────────────────

log_info "Step 5: Verify PR exists via API..."

PR_CHECK=$(curl -s -H "Authorization: token ${TOKEN}" \
    "${FORGE}/api/v1/repos/${ORG}/${REPO}/pulls/${PR_NUMBER}" 2>/dev/null)

PR_STATE=$(echo "$PR_CHECK" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('state',''))" 2>/dev/null || echo "")

if [ "$PR_STATE" = "open" ]; then
    log_pass "PR verified open via API"
else
    log_fail "PR state is '$PR_STATE', expected 'open'"
fi

# ── Cleanup (optional) ──────────────────────────────────────────────────────

if [ "${1:-}" = "--cleanup" ]; then
    log_info "Cleaning up smoke test artifacts..."
    curl -s -X PATCH -H "Authorization: token ${TOKEN}" \
        -H "Content-Type: application/json" \
        "${FORGE}/api/v1/repos/${ORG}/${REPO}/pulls/${PR_NUMBER}" \
        -d '{"state":"closed"}' >/dev/null 2>&1 || true
    git push origin --delete "$TEST_BRANCH" 2>/dev/null || true
    log_pass "Cleanup complete"
fi

# Leave the temp dir before deleting it so the shell's cwd stays valid.
cd "$REPO_ROOT"
rm -rf "$WORKDIR"

# ── Summary ─────────────────────────────────────────────────────────────────

echo ""
echo "================================================================"
echo "  Sonnet Smoke Test Summary"
echo "================================================================"
echo -e "  Passed: ${GREEN}${PASS}${NC}"
echo -e "  Failed: ${RED}${FAIL}${NC}"
echo ""

if [ "$FAIL" -gt 0 ]; then
    echo -e "${RED}RESULT: FAILED${NC}"
    exit 1
else
    echo -e "${GREEN}RESULT: PASSED${NC}"
    echo ""
    echo "Sonnet workforce is verified end-to-end:"
    echo "  ✓ Clone via Gitea HTTP"
    echo "  ✓ Branch + commit"
    echo "  ✓ Push to origin"
    echo "  ✓ Open PR via API"
    echo "  ✓ Verify PR state"
    exit 0
fi
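The log_pass/log_fail accounting the smoke test relies on can be sketched standalone. This is an illustration only, not part of the PR: each check bumps a counter via the logging helper, and the summary decides the exit status from the FAIL count alone.

```shell
#!/bin/bash
# Minimal sketch of the pass/fail accounting pattern (illustrative).
set -euo pipefail

PASS=0
FAIL=0

log_pass() { echo "PASS: $1"; PASS=$((PASS + 1)); }
log_fail() { echo "FAIL: $1"; FAIL=$((FAIL + 1)); }

# One check that succeeds and one that fails, mirroring the prerequisite checks.
if command -v sh >/dev/null 2>&1; then
    log_pass "sh available"
else
    log_fail "sh missing"
fi

if command -v definitely-not-a-real-tool >/dev/null 2>&1; then
    log_pass "tool available"
else
    log_fail "tool missing"
fi

echo "Passed: $PASS  Failed: $FAIL"
```

Because the counters only ever increment, a single failed check is enough to flip the final `RESULT: FAILED` branch, while intermediate hard failures still exit early.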
@@ -1,12 +1 @@
# Timmy core module

from .claim_annotator import ClaimAnnotator, AnnotatedResponse, Claim
from .audit_trail import AuditTrail, AuditEntry

__all__ = [
    "ClaimAnnotator",
    "AnnotatedResponse",
    "Claim",
    "AuditTrail",
    "AuditEntry",
]
@@ -1,156 +0,0 @@
#!/usr/bin/env python3
"""
Response Claim Annotator — Source Distinction System

SOUL.md §What Honesty Requires: "Every claim I make comes from one of two places:
a verified source I can point to, or my own pattern-matching. My user must be
able to tell which is which."
"""

import re
import json
from dataclasses import dataclass, field, asdict
from typing import Optional, List, Dict


@dataclass
class Claim:
    """A single claim in a response, annotated with source type."""
    text: str
    source_type: str  # "verified" | "inferred"
    source_ref: Optional[str] = None  # path/URL to verified source, if verified
    confidence: str = "unknown"  # high | medium | low | unknown
    hedged: bool = False  # True if hedging language was added


@dataclass
class AnnotatedResponse:
    """Full response with annotated claims and rendered output."""
    original_text: str
    claims: List[Claim] = field(default_factory=list)
    rendered_text: str = ""
    has_unverified: bool = False  # True if any inferred claims without hedging


class ClaimAnnotator:
    """Annotates response claims with source distinction and hedging."""

    # Hedging phrases to prepend to inferred claims if not already present
    HEDGE_PREFIXES = [
        "I think ",
        "I believe ",
        "It seems ",
        "Probably ",
        "Likely ",
    ]

    def __init__(self, default_confidence: str = "unknown"):
        self.default_confidence = default_confidence

    def annotate_claims(
        self,
        response_text: str,
        verified_sources: Optional[Dict[str, str]] = None,
    ) -> AnnotatedResponse:
        """
        Annotate claims in a response text.

        Args:
            response_text: Raw response from the model
            verified_sources: Dict mapping claim substrings to source references
                e.g. {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"}

        Returns:
            AnnotatedResponse with claims marked and rendered text
        """
        verified_sources = verified_sources or {}
        claims = []
        has_unverified = False

        # Simple sentence splitting (naive, but sufficient for MVP)
        sentences = [s.strip() for s in re.split(r'[.!?]\s+', response_text) if s.strip()]

        for sent in sentences:
            # Check if sentence is a claim we can verify
            matched_source = None
            for claim_substr, source_ref in verified_sources.items():
                if claim_substr.lower() in sent.lower():
                    matched_source = source_ref
                    break

            if matched_source:
                # Verified claim
                claim = Claim(
                    text=sent,
                    source_type="verified",
                    source_ref=matched_source,
                    confidence="high",
                    hedged=False,
                )
            else:
                # Inferred claim (pattern-matched)
                claim = Claim(
                    text=sent,
                    source_type="inferred",
                    confidence=self.default_confidence,
                    hedged=self._has_hedge(sent),
                )
                if not claim.hedged:
                    has_unverified = True

            claims.append(claim)

        # Render the annotated response
        rendered = self._render_response(claims)

        return AnnotatedResponse(
            original_text=response_text,
            claims=claims,
            rendered_text=rendered,
            has_unverified=has_unverified,
        )

    def _has_hedge(self, text: str) -> bool:
        """Check if text already contains hedging language."""
        text_lower = text.lower()
        for prefix in self.HEDGE_PREFIXES:
            if text_lower.startswith(prefix.lower()):
                return True
        # Also check for inline hedges
        hedge_words = ["i think", "i believe", "probably", "likely", "maybe", "perhaps"]
        return any(word in text_lower for word in hedge_words)

    def _render_response(self, claims: List[Claim]) -> str:
        """
        Render response with source distinction markers.

        Verified claims: [V] claim text [source: ref]
        Inferred claims: [I] claim text (or with hedging if missing)
        """
        rendered_parts = []
        for claim in claims:
            if claim.source_type == "verified":
                part = f"[V] {claim.text}"
                if claim.source_ref:
                    part += f" [source: {claim.source_ref}]"
            else:  # inferred
                if not claim.hedged:
                    # Add hedging if missing
                    hedged_text = f"I think {claim.text[0].lower()}{claim.text[1:]}" if claim.text else claim.text
                    part = f"[I] {hedged_text}"
                else:
                    part = f"[I] {claim.text}"
            rendered_parts.append(part)
        return " ".join(rendered_parts)

    def to_json(self, annotated: AnnotatedResponse) -> str:
        """Serialize annotated response to JSON."""
        return json.dumps(
            {
                "original_text": annotated.original_text,
                "rendered_text": annotated.rendered_text,
                "has_unverified": annotated.has_unverified,
                "claims": [asdict(c) for c in annotated.claims],
            },
            indent=2,
            ensure_ascii=False,
        )
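For readers skimming the diff, the source-distinction flow that ClaimAnnotator implements can be condensed into a standalone sketch. This is illustrative only (the `annotate` helper below is hypothetical, not part of the module): verified substring matches render as `[V] … [source: ref]`, everything else is tagged `[I]` and hedged.

```python
import re

def annotate(text, verified_sources):
    """Tag each sentence [V] (verified, with source) or [I] (inferred, hedged)."""
    parts = []
    for sent in (s.strip() for s in re.split(r'[.!?]\s+', text) if s.strip()):
        # First verified source whose claim substring appears in the sentence.
        ref = next((r for claim, r in verified_sources.items()
                    if claim.lower() in sent.lower()), None)
        if ref:
            parts.append(f"[V] {sent} [source: {ref}]")
        else:
            # Prepend a hedge unless the sentence already leads with one.
            hedged = sent if sent.lower().startswith("i think") \
                else f"I think {sent[0].lower()}{sent[1:]}"
            parts.append(f"[I] {hedged}")
    return " ".join(parts)

print(annotate(
    "Paris is the capital of France. It is a beautiful city.",
    {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"},
))
# → [V] Paris is the capital of France [source: https://en.wikipedia.org/wiki/Paris] [I] I think it is a beautiful city.
```

The real class adds what the sketch omits: per-claim confidence, hedge detection beyond a single prefix, and JSON serialization for the audit trail.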
@@ -1,86 +0,0 @@
"""Smoke test for Nostr memory sync daemon — tests core fragment logic."""

from scripts.nostr_memory_sync import (
    compute_fingerprint,
    load_memory_fragments,
    merge_fragment_into_memory,
)


def test_compute_fingerprint_stable():
    fp1 = compute_fingerprint("hello world")
    fp2 = compute_fingerprint("hello world")
    assert fp1 == fp2
    assert len(fp1) == 16


def test_load_memory_fragments(tmp_path):
    mem_file = tmp_path / "MEMORY.md"
    mem_file.write_text("First§\nSecond§Third")
    import scripts.nostr_memory_sync as nms
    original = nms.MEMORY_FILE
    nms.MEMORY_FILE = mem_file
    try:
        fragments = load_memory_fragments()
        assert fragments == ["First", "Second", "Third"]
    finally:
        nms.MEMORY_FILE = original


def test_merge_fragment_new(tmp_path):
    mem_file = tmp_path / "MEMORY.md"
    mem_file.write_text("First§Second")
    # Patch MEMORY_FILE path for this test
    import scripts.nostr_memory_sync as nms
    original = nms.MEMORY_FILE
    nms.MEMORY_FILE = mem_file
    try:
        added = merge_fragment_into_memory("Third")
        assert added is True
        assert "Third" in mem_file.read_text()
    finally:
        nms.MEMORY_FILE = original


def test_merge_fragment_duplicate(tmp_path):
    mem_file = tmp_path / "MEMORY.md"
    mem_file.write_text("First§Second§Third")
    import scripts.nostr_memory_sync as nms
    original = nms.MEMORY_FILE
    nms.MEMORY_FILE = mem_file
    try:
        added = merge_fragment_into_memory("Second")  # already present via fp
        assert added is False
        # Count of sections should still be 3
        fragments = load_memory_fragments()
        assert len(fragments) == 3
    finally:
        nms.MEMORY_FILE = original


def test_sync_state_persistence(tmp_path):
    state_file = tmp_path / "sync.json"
    import scripts.nostr_memory_sync as nms
    original_state = nms.SYNC_STATE_FILE
    nms.SYNC_STATE_FILE = state_file
    try:
        state = nms.SyncState(published_fingerprints={"abc"}, last_sync=12345)
        state.save()

        loaded = nms.SyncState.load()
        assert "abc" in loaded.published_fingerprints
        assert loaded.last_sync == 12345
    finally:
        nms.SYNC_STATE_FILE = original_state


if __name__ == "__main__":
    import pytest
    pytest.main([__file__, "-v"])
@@ -1,103 +0,0 @@
#!/usr/bin/env python3
"""Tests for claim_annotator.py — verifies source distinction is present."""

import sys
import os
import json

sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "src"))

from timmy.claim_annotator import ClaimAnnotator


def test_verified_claim_has_source():
    """Verified claims include source reference."""
    annotator = ClaimAnnotator()
    verified = {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"}
    response = "Paris is the capital of France. It is a beautiful city."

    result = annotator.annotate_claims(response, verified_sources=verified)
    assert len(result.claims) > 0
    verified_claims = [c for c in result.claims if c.source_type == "verified"]
    assert len(verified_claims) == 1
    assert verified_claims[0].source_ref == "https://en.wikipedia.org/wiki/Paris"
    assert "[V]" in result.rendered_text
    assert "[source:" in result.rendered_text


def test_inferred_claim_has_hedging():
    """Pattern-matched claims use hedging language."""
    annotator = ClaimAnnotator()
    response = "The weather is nice today. It might rain tomorrow."

    result = annotator.annotate_claims(response)
    inferred_claims = [c for c in result.claims if c.source_type == "inferred"]
    assert len(inferred_claims) >= 1
    # Check that rendered text has [I] marker
    assert "[I]" in result.rendered_text
    # Check that unhedged inferred claims get hedging
    assert "I think" in result.rendered_text or "I believe" in result.rendered_text


def test_hedged_claim_not_double_hedged():
    """Claims already with hedging are not double-hedged."""
    annotator = ClaimAnnotator()
    response = "I think the sky is blue. It is a nice day."

    result = annotator.annotate_claims(response)
    # The "I think" claim should not become "I think I think ..."
    assert "I think I think" not in result.rendered_text


def test_rendered_text_distinguishes_types():
    """Rendered text clearly distinguishes verified vs inferred."""
    annotator = ClaimAnnotator()
    verified = {"Earth is round": "https://science.org/earth"}
    response = "Earth is round. Stars are far away."

    result = annotator.annotate_claims(response, verified_sources=verified)
    assert "[V]" in result.rendered_text  # verified marker
    assert "[I]" in result.rendered_text  # inferred marker


def test_to_json_serialization():
    """Annotated response serializes to valid JSON."""
    annotator = ClaimAnnotator()
    response = "Test claim."
    result = annotator.annotate_claims(response)
    json_str = annotator.to_json(result)
    parsed = json.loads(json_str)
    assert "claims" in parsed
    assert "rendered_text" in parsed
    assert parsed["has_unverified"] is True  # inferred claim without hedging


def test_audit_trail_integration():
    """Check that claims are logged with confidence and source type."""
    # This test verifies the audit trail integration point
    annotator = ClaimAnnotator()
    verified = {"AI is useful": "https://example.com/ai"}
    response = "AI is useful. It can help with tasks."

    result = annotator.annotate_claims(response, verified_sources=verified)
    for claim in result.claims:
        assert claim.source_type in ("verified", "inferred")
        assert claim.confidence in ("high", "medium", "low", "unknown")
        if claim.source_type == "verified":
            assert claim.source_ref is not None


if __name__ == "__main__":
    test_verified_claim_has_source()
    print("✓ test_verified_claim_has_source passed")
    test_inferred_claim_has_hedging()
    print("✓ test_inferred_claim_has_hedging passed")
    test_hedged_claim_not_double_hedged()
    print("✓ test_hedged_claim_not_double_hedged passed")
    test_rendered_text_distinguishes_types()
    print("✓ test_rendered_text_distinguishes_types passed")
    test_to_json_serialization()
    print("✓ test_to_json_serialization passed")
    test_audit_trail_integration()
    print("✓ test_audit_trail_integration passed")
    print("\nAll tests passed!")
@@ -38,6 +38,7 @@ class House(Enum):
    EZRA = "ezra"          # Archivist, reader
    BEZALEL = "bezalel"    # Artificer, builder
    ALLEGRO = "allegro"    # Tempo-and-dispatch, connected
    SONNET = "sonnet"      # Anthropic Claude Sonnet (cloud, high-reasoning)


class Mode(Enum):