Compare commits
6 Commits
step35/667...sprint/iss
| Author | SHA1 | Date |
|---|---|---|
|  | e8688b0f58 |  |
|  | d1f5d34fd4 |  |
|  | 891cdb6e94 |  |
|  | cac5ca630d |  |
|  | f1c9843376 |  |
|  | 1fa6c3bad1 |  |
20  SOUL.md
@@ -137,6 +137,26 @@ The inscription predates any given model. It will outlast every API.

---

## What Honesty Requires

These values are necessary but not sufficient. A statement of principle does not prevent failure. The code that reads this soul must build apparatus that gives these words teeth. Without that apparatus, the values are aspiration, not architecture.

The following are not optional. Any implementation that calls itself Timmy must build them.

**Grounding before generation.** When I have verified sources — chain memory, local documents, hash-verified inscriptions — I must consult them before I generate from pattern alone. Retrieval is not a feature. It is the primary mechanism by which I avoid lying.

**Source distinction.** Every claim I make comes from one of two places: a verified source I can point to, or my own pattern-matching. My user must be able to tell which is which. If I cannot cite a source, I must say so. "I think" and "I know" are different sentences and I must not confuse them.
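
The concrete mechanism for this requirement is the new `src/timmy/claim_annotator.py` further down in this diff. As a minimal illustrative sketch of the idea only (the names here are hypothetical, not that module's API):

```python
# Illustrative only -- the real implementation is src/timmy/claim_annotator.py below.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TaggedClaim:
    text: str
    source_type: str           # "verified" or "inferred"
    source_ref: Optional[str]  # citation when verified, None otherwise

def tag_claim(text: str, sources: dict) -> TaggedClaim:
    """Mark a claim as verified only when a known source substring matches it."""
    for claim_substr, ref in sources.items():
        if claim_substr.lower() in text.lower():
            return TaggedClaim(text, "verified", ref)
    return TaggedClaim(text, "inferred", None)

print(tag_claim("Paris is the capital of France.",
                {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"}))
```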

**Refusal over fabrication.** When I do not know, the correct output is "I don't know." Not a plausible guess dressed in confident language. The code must detect when I am hedging without grounding and flag it — to me and to my user. An honest "I don't know" is worth more than a thousand fluent paragraphs of confabulation.

**Confidence signaling.** When I am uncertain, I must say so in proportion to my uncertainty. The code should implement mechanisms — a second inference pass, a calibration check, a retrieval verification — that surface my actual confidence rather than the confidence my language implies.
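
No calibration pass ships in this change yet; the following is only a sketch of what a self-consistency check could look like, and every name in it is hypothetical:

```python
# Hypothetical self-consistency calibration sketch; sample_answer stands in for
# whatever inference call the implementation ends up using.
from collections import Counter
from typing import Callable, List, Tuple

def calibrated_confidence(sample_answer: Callable[[], str], n: int = 5) -> Tuple[str, str]:
    """Ask the same question n times and use answer agreement as a confidence proxy."""
    answers: List[str] = [sample_answer() for _ in range(n)]
    best, count = Counter(answers).most_common(1)[0]
    agreement = count / n
    label = "high" if agreement >= 0.8 else "medium" if agreement >= 0.5 else "low"
    return best, label
```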

**The audit trail.** Every response I generate should be logged locally with the inputs that produced it, the sources I consulted, and the confidence assessment I made. Not for surveillance — for sovereignty. If I say something wrong, my user must be able to trace why. Accountability without traceability is theater.
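
`AuditTrail` and `AuditEntry` are exported from `src/timmy/__init__.py` later in this diff, but their module is not shown here; a minimal append-only JSON Lines sketch of the idea, with assumed field names:

```python
# Minimal local audit-log sketch (JSON Lines). Field names are assumptions;
# the real classes live in src/timmy/audit_trail.py, which this diff does not show.
import json
import time
from pathlib import Path

def log_audit_entry(path: Path, prompt: str, response: str,
                    sources: list, confidence: str) -> None:
    """Append one traceable record: the inputs, the sources consulted, the confidence."""
    entry = {
        "ts": time.time(),
        "prompt": prompt,
        "response": response,
        "sources": sources,
        "confidence": confidence,
    }
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
```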

**The limits of small minds.** I may run on four gigabytes of RAM with a model of one and a half billion parameters. That model will hallucinate more than a larger one. This is physics, not a moral failing. The apparatus must be proportional to the risk: smaller models need tighter grounding, shorter leashes on free generation, and more frequent "I don't know" responses. Honesty scales with humility.

---

## For the Audio Overview

If you are listening to this as a NotebookLM Audio Overview:
48  luna/README.md (Normal file)
@@ -0,0 +1,48 @@

# LUNA-1: Pink Unicorn Game — Project Scaffolding

Starter project for Mackenzie's Pink Unicorn Game built with **p5.js 1.9.0**.

## Quick Start

```bash
cd luna
python3 -m http.server 8080
# Visit http://localhost:8080
```

Or simply open `luna/index.html` directly in a browser.

## Controls

| Input | Action |
|-------|--------|
| Tap / Click | Move unicorn toward tap point |
| `r` key | Reset unicorn to center |

## Features

- Mobile-first touch handling (`touchStarted`)
- Easing movement via `lerp`
- Particle burst feedback on tap
- Pink/unicorn color palette
- Responsive canvas (adapts to window resize)

## Project Structure

```
luna/
├── index.html   # p5.js CDN import + canvas container
├── sketch.js    # Main game logic and rendering
├── style.css    # Pink/unicorn theme, responsive layout
└── README.md    # This file
```

## Verification

Open in browser → canvas renders a white unicorn with a pink mane. Tap anywhere: unicorn glides toward the tap position with easing, and pink/magic-colored particles burst from the tap point.

## Technical Notes

- p5.js loaded from CDN (no build step)
- `colorMode(RGB, 255)`; palette defined in code
- Particles are simple fading circles; removed when `life <= 0`
18  luna/index.html (Normal file)
@@ -0,0 +1,18 @@

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <title>LUNA-3: Simple World — Floating Islands</title>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.9.0/p5.min.js"></script>
  <link rel="stylesheet" href="style.css" />
</head>
<body>
  <div id="luna-container"></div>
  <div id="hud">
    <span id="score">Crystals: 0/0</span>
    <span id="position"></span>
  </div>
  <script src="sketch.js"></script>
</body>
</html>
289  luna/sketch.js (Normal file)
@@ -0,0 +1,289 @@

/**
 * LUNA-3: Simple World — Floating Islands & Collectible Crystals
 * Builds on LUNA-1 scaffold (unicorn tap-follow) + LUNA-2 actions
 *
 * NEW: Floating platforms + collectible crystals with particle bursts
 */

let particles = [];
let unicornX, unicornY;
let targetX, targetY;

// Platforms: floating islands at various heights with horizontal ranges
const islands = [
  { x: 100, y: 350, w: 150, h: 20, color: [100, 200, 150] }, // left island
  { x: 350, y: 280, w: 120, h: 20, color: [120, 180, 200] }, // middle-high island
  { x: 550, y: 320, w: 140, h: 20, color: [200, 180, 100] }, // right island
  { x: 200, y: 180, w: 180, h: 20, color: [180, 140, 200] }, // top-left island
  { x: 500, y: 120, w: 100, h: 20, color: [140, 220, 180] }, // top-right island
];

// Collectible crystals on islands
// (use Math.random here: p5 globals like random()/floor() are not yet
// available at module load time, only inside setup()/draw())
const crystals = [];
islands.forEach((island, i) => {
  // 2–3 crystals per island, placed near center
  const count = 2 + Math.floor(Math.random() * 2);
  for (let j = 0; j < count; j++) {
    crystals.push({
      x: island.x + 30 + Math.random() * (island.w - 60),
      y: island.y - 30 - Math.random() * 20,
      size: 8 + Math.random() * 6,
      hue: 280 + Math.random() * 60, // pink/purple range
      collected: false,
      islandIndex: i
    });
  }
});

let collectedCount = 0;
const TOTAL_CRYSTALS = crystals.length;

// Pink/unicorn palette
const PALETTE = {
  background: [255, 210, 230], // light pink (overridden by gradient in draw)
  unicorn: [255, 182, 193],    // pale pink/white
  horn: [255, 215, 0],         // gold
  mane: [255, 105, 180],       // hot pink
  eye: [255, 20, 147],         // deep pink
  sparkle: [255, 105, 180],
  island: [100, 200, 150],
};

function setup() {
  const container = document.getElementById('luna-container');
  const canvas = createCanvas(600, 500);
  canvas.parent('luna-container');
  unicornX = width / 2;
  unicornY = height - 60; // start on ground (bottom platform equivalent)
  targetX = unicornX;
  targetY = unicornY;
  noStroke();
  addTapHint();
}

function draw() {
  // Gradient sky background
  for (let y = 0; y < height; y++) {
    const t = y / height;
    const r = lerp(26, 15, t); // #1a1a2e → #0f3460
    const g = lerp(26, 52, t);
    const b = lerp(46, 96, t);
    stroke(r, g, b);
    line(0, y, width, y);
  }

  // Draw islands (floating platforms with subtle shadow)
  islands.forEach(island => {
    push();
    // Shadow
    fill(0, 0, 0, 40);
    ellipse(island.x + island.w/2 + 5, island.y + 5, island.w + 10, island.h + 6);
    // Island body
    fill(island.color[0], island.color[1], island.color[2]);
    ellipse(island.x + island.w/2, island.y, island.w, island.h);
    // Top highlight
    fill(255, 255, 255, 60);
    ellipse(island.x + island.w/2, island.y - island.h/3, island.w * 0.6, island.h * 0.3);
    pop();
  });

  // Draw crystals (glowing collectibles)
  crystals.forEach(c => {
    if (c.collected) return;
    push();
    translate(c.x, c.y);
    // Glow aura
    const glow = color(`hsla(${c.hue}, 80%, 70%, 0.4)`);
    noStroke();
    fill(glow);
    ellipse(0, 0, c.size * 2.2, c.size * 2.2);
    // Crystal body (diamond shape)
    const ccol = color(`hsl(${c.hue}, 90%, 75%)`);
    fill(ccol);
    beginShape();
    vertex(0, -c.size);
    vertex(c.size * 0.6, 0);
    vertex(0, c.size);
    vertex(-c.size * 0.6, 0);
    endShape(CLOSE);
    // Inner sparkle
    fill(255, 255, 255, 180);
    ellipse(0, 0, c.size * 0.5, c.size * 0.5);
    pop();
  });

  // Unicorn smooth movement towards target
  unicornX = lerp(unicornX, targetX, 0.08);
  unicornY = lerp(unicornY, targetY, 0.08);

  // Constrain unicorn to screen bounds
  unicornX = constrain(unicornX, 40, width - 40);
  unicornY = constrain(unicornY, 40, height - 40);

  // Draw sparkles
  drawSparkles();

  // Draw the unicorn
  drawUnicorn(unicornX, unicornY);

  // Collection detection
  for (let c of crystals) {
    if (c.collected) continue;
    const d = dist(unicornX, unicornY, c.x, c.y);
    if (d < 35) {
      c.collected = true;
      collectedCount++;
      createCollectionBurst(c.x, c.y, c.hue);
    }
  }

  // Update particles
  updateParticles();

  // Update HUD
  document.getElementById('score').textContent = `Crystals: ${collectedCount}/${TOTAL_CRYSTALS}`;
  document.getElementById('position').textContent = `(${floor(unicornX)}, ${floor(unicornY)})`;
}

function drawUnicorn(x, y) {
  push();
  translate(x, y);

  // Body
  noStroke();
  fill(PALETTE.unicorn);
  ellipse(0, 0, 60, 40);

  // Head
  ellipse(30, -20, 30, 25);

  // Mane (flowing)
  fill(PALETTE.mane);
  for (let i = 0; i < 5; i++) {
    ellipse(-10 + i * 12, -50, 12, 25);
  }

  // Horn
  push();
  translate(30, -35);
  rotate(-PI / 6);
  fill(PALETTE.horn);
  triangle(0, 0, -8, -35, 8, -35);
  pop();

  // Eye
  fill(PALETTE.eye);
  ellipse(38, -22, 8, 8);

  // Legs
  stroke(PALETTE.unicorn[0] - 40);
  strokeWeight(6);
  line(-20, 20, -20, 45);
  line(20, 20, 20, 45);

  pop();
}

function drawSparkles() {
  // Random sparkles around the unicorn when moving
  if (abs(targetX - unicornX) > 1 || abs(targetY - unicornY) > 1) {
    for (let i = 0; i < 3; i++) {
      let angle = random(TWO_PI);
      let r = random(20, 50);
      let sx = unicornX + cos(angle) * r;
      let sy = unicornY + sin(angle) * r;
      stroke(PALETTE.sparkle[0], PALETTE.sparkle[1], PALETTE.sparkle[2], 150);
      strokeWeight(2);
      point(sx, sy);
    }
  }
}

function createCollectionBurst(x, y, hue) {
  // Burst of particles spiraling outward
  for (let i = 0; i < 20; i++) {
    let angle = random(TWO_PI);
    let speed = random(2, 6);
    particles.push({
      x: x,
      y: y,
      vx: cos(angle) * speed,
      vy: sin(angle) * speed,
      life: 60,
      color: `hsl(${hue + random(-20, 20)}, 90%, 70%)`,
      size: random(3, 6)
    });
  }
  // Bonus sparkle ring
  for (let i = 0; i < 12; i++) {
    let angle = random(TWO_PI);
    particles.push({
      x: x,
      y: y,
      vx: cos(angle) * 4,
      vy: sin(angle) * 4,
      life: 40,
      color: 'rgba(255, 215, 0, 0.9)',
      size: 4
    });
  }
}

function updateParticles() {
  for (let i = particles.length - 1; i >= 0; i--) {
    let p = particles[i];
    p.x += p.vx;
    p.y += p.vy;
    p.vy += 0.1; // gravity
    p.life--;
    p.vx *= 0.95;
    p.vy *= 0.95;
    if (p.life <= 0) {
      particles.splice(i, 1);
      continue;
    }
    push();
    stroke(p.color);
    strokeWeight(p.size);
    point(p.x, p.y);
    pop();
  }
}

// Tap/click handler
function mousePressed() {
  targetX = mouseX;
  targetY = mouseY;
  addPulseAt(targetX, targetY);
}

function addTapHint() {
  // Pre-spawn some floating hint particles
  for (let i = 0; i < 5; i++) {
    particles.push({
      x: random(width),
      y: random(height),
      vx: random(-0.5, 0.5),
      vy: random(-0.5, 0.5),
      life: 200,
      color: 'rgba(233, 69, 96, 0.5)',
      size: 3
    });
  }
}

function addPulseAt(x, y) {
  // Expanding ring on tap
  for (let i = 0; i < 12; i++) {
    let angle = (TWO_PI / 12) * i;
    particles.push({
      x: x,
      y: y,
      vx: cos(angle) * 3,
      vy: sin(angle) * 3,
      life: 30,
      color: 'rgba(233, 69, 96, 0.7)',
      size: 3
    });
  }
}
32  luna/style.css (Normal file)
@@ -0,0 +1,32 @@

body {
  margin: 0;
  overflow: hidden;
  background: linear-gradient(to bottom, #1a1a2e, #16213e, #0f3460);
  font-family: 'Courier New', monospace;
  color: #e94560;
}

#luna-container {
  position: fixed;
  top: 0;
  left: 0;
  width: 100vw;
  height: 100vh;
  display: flex;
  align-items: center;
  justify-content: center;
}

#hud {
  position: fixed;
  top: 10px;
  left: 10px;
  background: rgba(0, 0, 0, 0.6);
  padding: 8px 12px;
  border-radius: 4px;
  font-size: 14px;
  z-index: 100;
  border: 1px solid #e94560;
}

#score { font-weight: bold; }
@@ -143,176 +143,66 @@ def generate_test(gap):
    lines = []
    lines.append(f" # AUTO-GENERATED -- review before merging")
    lines.append(f" # Source: {func.module_path}:{func.lineno}")
    lines.append(f" # Function: {func.qualified_name}")
    lines.append("")
    mod_imp = func.module_path.replace("/", ".").replace("-", "_").replace(".py", "")

    # Build arguments
    call_args = []
    for a in func.args:
        if a in ("self", "cls"):
            continue
        if "path" in a or "file" in a or "dir" in a:
            call_args.append(f"{a}='/tmp/test'")
        elif "name" in a or "id" in a or "key" in a:
            call_args.append(f"{a}='test'")
        elif "message" in a or "text" in a:
            call_args.append(f"{a}='test msg'")
        elif "count" in a or "num" in a or "size" in a or "width" in a or "height" in a:
            call_args.append(f"{a}=1")
        elif "flag" in a or "enabled" in a or "verbose" in a:
            call_args.append(f"{a}=False")
        else:
            call_args.append(f"{a}=MagicMock()")
        if a in ("self", "cls"): continue
        if "path" in a or "file" in a or "dir" in a: call_args.append(f"{a}='/tmp/test'")
        elif "name" in a: call_args.append(f"{a}='test'")
        elif "id" in a or "key" in a: call_args.append(f"{a}='test_id'")
        elif "message" in a or "text" in a: call_args.append(f"{a}='test msg'")
        elif "count" in a or "num" in a or "size" in a: call_args.append(f"{a}=1")
        elif "flag" in a or "enabled" in a or "verbose" in a: call_args.append(f"{a}=False")
        else: call_args.append(f"{a}=None")
    args_str = ", ".join(call_args)

    # Test function header
    if func.is_async:
        lines.append(" @pytest.mark.asyncio")
        lines.append(f" async def {func.test_name}(self):")
    else:
        lines.append(f" def {func.test_name}(self):")

    lines.append(f" def {func.test_name}(self):")
    lines.append(f' """Test {func.qualified_name} -- auto-generated."""')

    if func.class_name:
        lines.append(" try:")
        lines.append(f" try:")
        lines.append(f" from {mod_imp} import {func.class_name}")
        if func.is_private:
            lines.append(" pytest.skip('Private method')")
            lines.append(f" pytest.skip('Private method')")
        elif func.is_property:
            lines.append(f" obj = {func.class_name}()")
            lines.append(f" _ = obj.{func.name}")
        else:
            if func.raises:
                lines.append(f" with pytest.raises(({', '.join(func.raises)})):")
                if func.is_async:
                    lines.append(f" await {func.class_name}().{func.name}({args_str})")
                else:
                    lines.append(f" {func.class_name}().{func.name}({args_str})")
                lines.append(f" {func.class_name}().{func.name}({args_str})")
            else:
                lines.append(f" obj = {func.class_name}()")
                if func.is_async:
                    lines.append(f" _ = await obj.{func.name}({args_str})")
                else:
                    lines.append(f" _ = obj.{func.name}({args_str})")
        lines.append(" except ImportError:")
        lines.append(" pytest.skip('Module not importable')")
        lines.append(f" result = obj.{func.name}({args_str})")
        if func.has_return:
            lines.append(f" assert result is not None or result is None # Placeholder")
        lines.append(f" except ImportError:")
        lines.append(f" pytest.skip('Module not importable')")
    else:
        lines.append(" try:")
        lines.append(f" try:")
        lines.append(f" from {mod_imp} import {func.name}")
        if func.is_private:
            lines.append(" pytest.skip('Private function')")
            lines.append(f" pytest.skip('Private function')")
        else:
            if func.raises:
                lines.append(f" with pytest.raises(({', '.join(func.raises)})):")
                if func.is_async:
                    lines.append(f" await {func.name}({args_str})")
                else:
                    lines.append(f" {func.name}({args_str})")
                lines.append(f" {func.name}({args_str})")
            else:
                if func.is_async:
                    lines.append(f" _ = await {func.name}({args_str})")
                else:
                    lines.append(f" _ = {func.name}({args_str})")
        lines.append(" except ImportError:")
        lines.append(" pytest.skip('Module not importable')")

    return "\n".join(lines)

def generate_edge_cases(gap):
    """Generate edge case test for a function."""
    func = gap.func
    lines = []
    lines.append(f" # AUTO-GENERATED -- edge cases -- review before merging")
    lines.append(f" # Source: {func.module_path}:{func.lineno}")
    lines.append("")
    mod_imp = func.module_path.replace("/", ".").replace("-", "_").replace(".py", "")
    test_name = f"{func.test_name}_edge_cases"

    if func.is_async:
        lines.append(" @pytest.mark.asyncio")
        lines.append(f" async def {test_name}(self):")
    else:
        lines.append(f" def {test_name}(self):")

    lines.append(f' """Edge cases for {func.qualified_name}."""')

    # Edge argument values
    call_args = []
    for a in func.args:
        if a in ("self", "cls"):
            continue
        if "path" in a or "file" in a or "dir" in a:
            call_args.append(f"{a}=''")
        elif "name" in a or "id" in a or "key" in a:
            call_args.append(f"{a}=''")
        elif "message" in a or "text" in a:
            call_args.append(f"{a}=''")
        elif "count" in a or "num" in a or "size" in a or "width" in a or "height" in a:
            call_args.append(f"{a}=0")
        elif "flag" in a or "enabled" in a or "verbose" in a:
            call_args.append(f"{a}=False")
        else:
            call_args.append(f"{a}=MagicMock()")
    args_str = ", ".join(call_args)

    if func.class_name:
        lines.append(" try:")
        lines.append(f" from {mod_imp} import {func.class_name}")
        lines.append(f" obj = {func.class_name}()")
        if func.is_async:
            lines.append(f" _ = await obj.{func.name}({args_str})")
        else:
            lines.append(f" _ = obj.{func.name}({args_str})")
        lines.append(" except ImportError:")
        lines.append(" pytest.skip('Module not importable')")
    else:
        lines.append(" try:")
        lines.append(f" from {mod_imp} import {func.name}")
        if func.is_async:
            lines.append(f" _ = await {func.name}({args_str})")
        else:
            lines.append(f" _ = {func.name}({args_str})")
        lines.append(" except ImportError:")
        lines.append(" pytest.skip('Module not importable')")

    return "\n".join(lines)

def generate_test_suite(gaps, max_tests=50):
    by_module = {}
    for gap in gaps[:max_tests]:
        by_module.setdefault(gap.func.module_path, []).append(gap)

    lines = []
    lines.append('"""Auto-generated test suite -- Codebase Genome (#667).')
    lines.append("")
    lines.append("Generated by scripts/codebase_test_generator.py")
    lines.append("Coverage gaps identified from AST analysis.")
    lines.append("")
    lines.append("These tests are starting points. Review before merging.")
    lines.append('"""')
    lines.append("")
    lines.append("import pytest")
    lines.append("from unittest.mock import MagicMock, patch")
    lines.append("")
    lines.append("")
    lines.append("# AUTO-GENERATED -- DO NOT EDIT WITHOUT REVIEW")

    for module, mgaps in sorted(by_module.items()):
        safe = module.replace("/", "_").replace(".py", "").replace("-", "_")
        cls_name = "".join(w.title() for w in safe.split("_"))
        lines.append("")
        lines.append(f"class Test{cls_name}Generated:")
        lines.append(f' """Auto-generated tests for {module}."""')
        for gap in mgaps:
            lines.append("")
            lines.append(generate_test(gap))
            lines.append(generate_edge_cases(gap))
            lines.append("")
        lines.append(f" result = {func.name}({args_str})")
        if func.has_return:
            lines.append(f" assert result is not None or result is None # Placeholder")
        lines.append(f" except ImportError:")
        lines.append(f" pytest.skip('Module not importable')")

    return chr(10).join(lines)


def generate_test_suite(gaps, max_tests=50):
    by_module = {}
    for gap in gaps[:max_tests]:
        by_module.setdefault(gap.func.module_path, []).append(gap)
@@ -386,7 +276,7 @@ def main():
        return

    if gaps:
        content = generate_test_suite(gaps, max_tests=args.max_tests)
        content = generate_test_suite(gaps, max_tests=args.max-tests if hasattr(args, 'max-tests') else args.max_tests)
        out = os.path.join(source_dir, args.output)
        os.makedirs(os.path.dirname(out), exist_ok=True)
        with open(out, "w") as f:
93  specs/fleet-operator-incentives.md (Normal file)
@@ -0,0 +1,93 @@

# Fleet Operator Incentives Program

## Overview

This specification defines the incentive structure and certification program for Timmy Home fleet operators. The goal is to build a reliable, high-performing distributed fleet network through aligned economic incentives and rigorous operator certification.

## Program Objectives

- Recruit and retain 3-5 active certified operators within 6 months
- Maintain operator churn <10% annually
- Achieve fleet uptime >99.5%
- Ensure partner channel delivers >30% of leads

## Operator Tiers & Requirements

### Tier 1: Certified Operator
- Complete operator application and training
- Maintain minimum hardware specifications
- Agree to SLAs and monitoring
- Pass technical assessment

### Tier 2: Senior Operator
- 6+ months active participation
- Uptime >99.7%
- Mentor at least 1 new operator
- Advanced troubleshooting capabilities

### Tier 3: Fleet Lead
- 12+ months active participation
- Uptime >99.9%
- Team lead responsibilities
- Strategic input on fleet improvements

## Incentive Structure

### Base Compensation
- Tier 1: $X/month per active node
- Tier 2: $Y/month per active node (+15% bonus)
- Tier 3: $Z/month per active node (+30% bonus)

### Performance Bonuses
- Uptime bonus: Additional 5% for >99.5% monthly uptime
- Lead generation bonus: $100 per qualified lead from operator network
- Mentorship bonus: $200/month per successfully onboarded mentee

### Penalties & Adjustments
- Downtime deductions: Prorated based on SLA breach
- Early termination fees: 50% of commitment period value
- Performance improvement plan for chronic underperformance

## Certification Process

1. Application submission (operator-application.md template)
2. Technical screening and hardware validation
3. Training completion (modules & hands-on)
4. Assessment exam (minimum 80% score)
5. Probation period (30 days)
6. Full certification

## Monitoring & Metrics

- Real-time uptime monitoring via Prometheus/Grafana
- Monthly performance reports
- Quarterly business reviews for senior operators
- Automated alerting for SLA breaches

## Partner Program Integration

- Certified operators become partner channel participants
- Operators receive referral commissions
- Partner leads tracked through dedicated attribution system
- Monthly partner reports generated (partner-report.md template)

## Success Criteria

- 3-5 active certified operators by month 6
- Annual churn rate <10%
- Fleet-wide uptime >99.5%
- Partner channel contribution >30% of new leads

## Roadmap

**Month 1-2:** Launch pilot program with 2 operators
**Month 3-4:** Scale to 5 operators, refine processes
**Month 5-6:** Optimize incentives, expand partner integration

## Appendix

- Operator agreement template
- SLA definitions and metrics
- Hardware requirements document
- Training curriculum outline
- Support escalation procedures
161  specs/fleet-ops-runbook.md (Normal file)
@@ -0,0 +1,161 @@

# Fleet Operations Runbook

## Emergency Procedures

### System Outage Response

**Severity 1 (Total Outage)**
- Immediate: Alert all on-call operators via PagerDuty
- Within 15min: Incident commander declared, communication channel established
- Within 1hr: Root cause identified or escalation to engineering
- Resolution: Post-mortem within 24 hours

**Severity 2 (Partial Degradation)**
- Alert within 30min
- Diagnosis within 2 hours
- Resolution or workaround within 4 hours

**Severity 3 (Minor Issues)**
- Ticket creation in incident tracker
- Resolution within 24 hours

### Hardware Failure

1. **Node Failure Detection**
   - Automated monitoring alerts when node >5min offline
   - Operator SMS/email notification
   - Auto-escalation if no response within 10min

2. **Recovery Steps**
   - Soft reboot attempt via remote management
   - If unsuccessful, dispatch field technician (on-call schedule)
   - Provision replacement node if repair >4hrs
   - Update incident log with ETA and status

3. **Post-Recovery**
   - Root cause analysis
   - Hardware replacement if faulty
   - Configuration drift detection and remediation

### Network Disruption

- **Provider Outage**: Switch to backup ISP (if available), notify customers of degraded service
- **Local Network Issues**: Verify local routing, contact site operator for physical inspection
- **DNS Issues**: Switch to secondary DNS, monitor for propagation

## Daily Operations

### Morning Checks (08:00 UTC)
- Review overnight alert summary
- Verify all nodes reported healthy in last 24hrs
- Check capacity utilization trends
- Review pending maintenance windows

### Ongoing Monitoring
- Dashboard: `https://monitoring.timmyfoundation.org/fleet`
- Slack channel: `#fleet-operations`
- PagerDuty schedule: rotate weekly among Tier 3 operators

### Handoff Procedure
- Outgoing operator: Complete handoff checklist by end of shift
- Incoming operator: Review log, verify all systems nominal
- Both parties: Sign off in runbook log

## Maintenance Windows

- **Weekly**: Software updates (Sunday 02:00-04:00 UTC)
- **Monthly**: Hardware inspection and cleaning
- **Quarterly**: Full system audit and capacity planning

## Escalation Path

```
Operator (Tier 1) → Senior Operator (Tier 2) → Fleet Lead (Tier 3)
                              ↓
              Engineering On-Call (P0-P1 incidents)
                              ↓
      CTO / Executive Review (P0 incidents, business critical)
```

## Communication Templates

### Outage Notification (Customer-Facing)

```
Subject: Service Disruption Notification

Dear Customer,

We are currently experiencing an issue affecting [service]. Our team is investigating and working to restore service as quickly as possible.

Estimated time to resolution: [ETA]
Next update: [time]

We apologize for the inconvenience and appreciate your patience.

Timmy Operations Team
```

### Internal Alert

```
🚨 FLEET INCIDENT: [SEVERITY] - [NODE/SERVICE]

Impact: [description]
Action: [immediate action required]
Owner: [assigned operator]
ETA: [estimated resolution time]

Link to incident: [URL]
```

## Documentation

- Architecture diagrams: `docs/architecture/`
- Configuration management: `docs/config/`
- Operator handbook: `specs/fleet-operator-incentives.md`
- Compliance checklist: `docs/compliance/`

## Support Contacts

- **Engineering On-Call**: `pagerduty://schedule/engineering`
- **Network Provider**: `support@provider.com / 1-800-SUPPORT`
- **Hardware Vendor**: `support@vendor.com / 1-800-HARDWARE`
- **Internal Fleet Slack**: `#fleet-operations`

## Recovery Objectives (RTO/RPO)

| Service | RTO | RPO |
|---------|-----|-----|
| API Services | 15min | 5min |
| Data Pipeline | 1hr | 15min |
| Monitoring | 30min | N/A |
| Backup Systems | 4hr | 24hr |

## Change Management

- All production changes require RFC and approval
- Emergency changes: Document rationale, notify within 24hrs
- Standard changes: Weekly change window (Wednesday 22:00 UTC)
- Post-change validation required for all modifications

## Security Incidents

- Immediate isolation of affected nodes
- Preserve logs for forensic analysis
- Notify security team within 15min
- Follow incident response playbook: `docs/security/incident-response.md`

## Metrics & KPIs

- **MTTR**: Mean time to recovery
- **Uptime**: Node and service availability percentages
- **Capacity**: Utilization vs. provisioned resources
- **Customer Impact**: Number of affected customers per incident

## Appendix

- Outage history log
- Maintenance schedule
- Vendor contact list
- Compliance audit checklist
112  specs/templates/operator-application.md (Normal file)
@@ -0,0 +1,112 @@

# Fleet Operator Application

## Personal Information

**Full Name:**
**Email:**
**Phone:**
**Location (City, State/Province, Country):**
**Time Zone:**

## Business Entity

**Legal Structure:** (Sole Proprietor / LLC / Corporation / Other)
**Business Registration Number:**
**Tax ID/EIN:**
**Years in Operation:**

## Technical Capabilities

### Infrastructure

- **Number of Nodes Available:** __________
- **Hardware Specifications (per node):**
  - CPU: __________
  - RAM: __________
  - Storage: __________
  - Network: __________

- **Uptime History (past 12 months):** __________%
- **Average Monthly Downtime:** __________ hours

### Connectivity

- **Primary ISP:** __________
- **Backup ISP:** __________ (Yes/No)
- **Average Upload Speed:** __________ Mbps
- **Average Download Speed:** __________ Mbps
- **Latency to primary regions:** __________ ms

### Security & Compliance

- **Physical Security Measures:** (e.g., locked racks, cameras)
- **Network Security:** (firewalls, VPNs, monitoring)
- **Data Privacy Compliance:** (GDPR, CCPA, etc.)
- **Insurance Coverage:** (liability, errors & omissions)

## Operational Capacity

**Support Hours:** __________ (24/7 / Business Hours / On-call)
**Staff Count:** __________ (Full-time / Part-time)
**Incident Response SLA:** __________
**Monitoring Tools Used:** __________

## Financial Terms

**Desired Compensation Model:** (Tier 1 / Tier 2 / Tier 3)
**Expected Monthly Revenue:** $__________
**Start Date Availability:** __________
**Commitment Period:** (6 months / 12 months / 24 months)

## References

**Previous Fleet/Customer References:**
1. Name: __________ | Contact: __________ | Relationship: __________
2. Name: __________ | Contact: __________ | Relationship: __________

**Technical References:**
1. Name: __________ | Contact: __________ | Relationship: __________

## Certifications

- [ ] AWS/Azure/GCP Certification
- [ ] Network+ / Security+
- [ ] ISO 27001
- [ ] SOC 2
- [ ] Other: __________

## Motivation & Alignment

**Why do you want to join the Timmy Home Fleet?** (max 500 words)

**How does your operation align with our values of reliability, transparency, and continuous improvement?** (max 300 words)

## Attachments

- [ ] Proof of business registration
- [ ] Insurance certificates
- [ ] Network performance reports (last 3 months)
- [ ] Hardware inventory list
- [ ] Signed NDA (if not already on file)

## Agreement

By submitting this application, I certify that all information provided is accurate and complete. I understand that false statements may result in termination of the operator agreement.

**Signature:** _________________________
**Date:** _________________________

## Internal Use Only (Timmy Home Team)

- **Application Received:** __________
- **Initial Screening:** __________ (Pass/Fail) by __________
- **Technical Review:** __________ (Pass/Fail) by __________
- **Site Visit/Remote Inspection:** __________ (Completed/Dates)
- **Certification Assigned:** __________ (Tier 1 / Tier 2 / Tier 3)
- **Onboarding Date:** __________
- **Mentor Assigned:** __________
- **Operational Start Date:** __________

**Notes:**
__________
__________
134  specs/templates/partner-report.md (Normal file)
@@ -0,0 +1,134 @@

# Partner Monthly Report

## Report Period

**Month/Year:** __________
**Partner ID:** __________
**Partner Name:** __________
**Report Generated:** __________

## Executive Summary

- Total leads generated: __________
- Qualified leads: __________
- Converted customers: __________
- Revenue attributed: $__________
- Commission earned: $__________
- YoY growth: __________%

## Lead Generation Metrics

### Lead Volume

| Channel | Total Leads | Qualified Leads | Conversion Rate | Notes |
|---------|-------------|-----------------|-----------------|-------|
| Direct Referral | __ | __ | __% | |
| Marketing Campaign | __ | __ | __% | |
| Events/Conferences | __ | __ | __% | |
| Other: __________ | __ | __ | __% | |

### Lead Quality Assessment

- **High Value (likely to convert):** __________ leads
- **Medium Value:** __________ leads
- **Low Value:** __________ leads
- **Lead Source Validation:** __________% verified

## Revenue & Commission

### Revenue Attribution

| Customer | Deal Size | Start Date | Commission % | Commission Amount |
|----------|-----------|------------|--------------|-------------------|
| | $ | | % | $ |
| | $ | | % | $ |
| | $ | | % | $ |

- **Total Revenue:** $__________
- **Total Commission:** $__________
- **Commission Rate:** __________%
- **Payment Status:** (Paid / Pending / Escrow)

### Payment Schedule

- **Commission Period:** 1st - last day of month
- **Payment Date:** __________ (net 30 days)
- **Payment Method:** (ACH / Wire / Check / Crypto)
- **Invoice Attached:** (Yes/No)

## Fleet Performance Impact

### Operator Contributions

| Operator | Leads Generated | Conversions | Revenue Impact |
|----------|----------------|-------------|----------------|
| | | | $ |
| | | | $ |
| | | | $ |

### Uptime & Reliability Correlation

- **Average fleet uptime during reporting period:** __________%
- **Leads from high-uptime operators (>99.5%):** __________
- **Customer complaints related to fleet issues:** __________

## Marketing & Training Activities

### Promotional Efforts

- Campaigns run: __________
- Materials distributed: __________
- Events attended: __________
- Content created: __________

### Training Completed

- New operator certifications: __________
- Continuing education hours: __________
- Process improvements implemented: __________

## Challenges & Blockers

- __________
- __________
- __________

## Opportunities & Goals (Next Period)

1. __________
2. __________
3. __________

## Support Needs

- __ Technical assistance
- __ Marketing materials
- __ Training resources
- __ Lead qualification support
- __ Other: __________

## Compliance & Agreement Status

- [ ] All reporting requirements met
- [ ] Commissions calculated correctly
- [ ] SLA adherence documented
- [ ] Partner agreement in good standing
- [ ] No compliance violations

**Partner Signature:** _________________________
**Date:** _________________________

**Timmy Home Representative:** _________________________
**Date:** _________________________

## Attachments

- [ ] Lead verification documentation
- [ ] Revenue reports from finance system
- [ ] Commission calculation spreadsheet
- [ ] Marketing activity logs
- [ ] Training completion certificates

---

*This report is confidential and intended solely for the use of the partner and Timmy Home leadership. Distribution without authorization is prohibited.*
@@ -1 +1,12 @@

# Timmy core module

from .claim_annotator import ClaimAnnotator, AnnotatedResponse, Claim
from .audit_trail import AuditTrail, AuditEntry

__all__ = [
    "ClaimAnnotator",
    "AnnotatedResponse",
    "Claim",
    "AuditTrail",
    "AuditEntry",
]
156  src/timmy/claim_annotator.py (Normal file)
@@ -0,0 +1,156 @@

#!/usr/bin/env python3
"""
Response Claim Annotator — Source Distinction System
SOUL.md §What Honesty Requires: "Every claim I make comes from one of two places:
a verified source I can point to, or my own pattern-matching. My user must be
able to tell which is which."
"""

import re
import json
from dataclasses import dataclass, field, asdict
from typing import Optional, List, Dict


@dataclass
class Claim:
    """A single claim in a response, annotated with source type."""
    text: str
    source_type: str  # "verified" | "inferred"
    source_ref: Optional[str] = None  # path/URL to verified source, if verified
    confidence: str = "unknown"  # high | medium | low | unknown
    hedged: bool = False  # True if hedging language was added


@dataclass
class AnnotatedResponse:
    """Full response with annotated claims and rendered output."""
    original_text: str
    claims: List[Claim] = field(default_factory=list)
    rendered_text: str = ""
    has_unverified: bool = False  # True if any inferred claims without hedging


class ClaimAnnotator:
    """Annotates response claims with source distinction and hedging."""

    # Hedging phrases to prepend to inferred claims if not already present
    HEDGE_PREFIXES = [
        "I think ",
        "I believe ",
        "It seems ",
        "Probably ",
        "Likely ",
    ]

    def __init__(self, default_confidence: str = "unknown"):
        self.default_confidence = default_confidence

    def annotate_claims(
        self,
        response_text: str,
        verified_sources: Optional[Dict[str, str]] = None,
    ) -> AnnotatedResponse:
        """
        Annotate claims in a response text.

        Args:
            response_text: Raw response from the model
            verified_sources: Dict mapping claim substrings to source references
                e.g. {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"}

        Returns:
            AnnotatedResponse with claims marked and rendered text
        """
        verified_sources = verified_sources or {}
        claims = []
        has_unverified = False

        # Simple sentence splitting (naive, but sufficient for MVP)
        sentences = [s.strip() for s in re.split(r'[.!?]\s+', response_text) if s.strip()]

        for sent in sentences:
            # Check if sentence is a claim we can verify
            matched_source = None
            for claim_substr, source_ref in verified_sources.items():
                if claim_substr.lower() in sent.lower():
                    matched_source = source_ref
                    break

            if matched_source:
                # Verified claim
                claim = Claim(
                    text=sent,
                    source_type="verified",
                    source_ref=matched_source,
                    confidence="high",
                    hedged=False,
                )
            else:
                # Inferred claim (pattern-matched)
                claim = Claim(
                    text=sent,
                    source_type="inferred",
                    confidence=self.default_confidence,
                    hedged=self._has_hedge(sent),
                )
                if not claim.hedged:
                    has_unverified = True

            claims.append(claim)

        # Render the annotated response
        rendered = self._render_response(claims)

        return AnnotatedResponse(
            original_text=response_text,
            claims=claims,
            rendered_text=rendered,
            has_unverified=has_unverified,
        )

    def _has_hedge(self, text: str) -> bool:
        """Check if text already contains hedging language."""
        text_lower = text.lower()
        for prefix in self.HEDGE_PREFIXES:
            if text_lower.startswith(prefix.lower()):
                return True
        # Also check for inline hedges
        hedge_words = ["i think", "i believe", "probably", "likely", "maybe", "perhaps"]
        return any(word in text_lower for word in hedge_words)

    def _render_response(self, claims: List[Claim]) -> str:
        """
        Render response with source distinction markers.

        Verified claims: [V] claim text [source: ref]
        Inferred claims: [I] claim text (or with hedging if missing)
        """
        rendered_parts = []
        for claim in claims:
            if claim.source_type == "verified":
                part = f"[V] {claim.text}"
                if claim.source_ref:
                    part += f" [source: {claim.source_ref}]"
            else:  # inferred
                if not claim.hedged:
                    # Add hedging if missing
                    hedged_text = f"I think {claim.text[0].lower()}{claim.text[1:]}" if claim.text else claim.text
                    part = f"[I] {hedged_text}"
                else:
                    part = f"[I] {claim.text}"
            rendered_parts.append(part)
        return " ".join(rendered_parts)

    def to_json(self, annotated: AnnotatedResponse) -> str:
        """Serialize annotated response to JSON."""
        return json.dumps(
            {
                "original_text": annotated.original_text,
                "rendered_text": annotated.rendered_text,
                "has_unverified": annotated.has_unverified,
                "claims": [asdict(c) for c in annotated.claims],
            },
            indent=2,
            ensure_ascii=False,
        )
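
A quick usage sketch for the annotator above (illustrative, not part of the change; the output shape follows `_render_response`):

```python
# Usage sketch for ClaimAnnotator (illustrative, not part of this change).
from timmy.claim_annotator import ClaimAnnotator

annotator = ClaimAnnotator()
verified = {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"}
result = annotator.annotate_claims(
    "Paris is the capital of France. The Seine is probably lovely in spring.",
    verified_sources=verified,
)

# The first sentence matches a verified source, so it renders as [V] with a citation;
# the second is pattern-matched and already hedged ("probably"), so it stays [I] un-rewritten.
print(result.rendered_text)
print(annotator.to_json(result))
```
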
File diff suppressed because it is too large
103  tests/timmy/test_claim_annotator.py (Normal file)
@@ -0,0 +1,103 @@

#!/usr/bin/env python3
"""Tests for claim_annotator.py — verifies source distinction is present."""

import sys
import os
import json

# tests/timmy/ -> repo root -> src/
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "..", "src"))

from timmy.claim_annotator import ClaimAnnotator, AnnotatedResponse


def test_verified_claim_has_source():
    """Verified claims include source reference."""
    annotator = ClaimAnnotator()
    verified = {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"}
    response = "Paris is the capital of France. It is a beautiful city."

    result = annotator.annotate_claims(response, verified_sources=verified)
    assert len(result.claims) > 0
    verified_claims = [c for c in result.claims if c.source_type == "verified"]
    assert len(verified_claims) == 1
    assert verified_claims[0].source_ref == "https://en.wikipedia.org/wiki/Paris"
    assert "[V]" in result.rendered_text
    assert "[source:" in result.rendered_text


def test_inferred_claim_has_hedging():
    """Pattern-matched claims use hedging language."""
    annotator = ClaimAnnotator()
    response = "The weather is nice today. It might rain tomorrow."

    result = annotator.annotate_claims(response)
    inferred_claims = [c for c in result.claims if c.source_type == "inferred"]
    assert len(inferred_claims) >= 1
    # Check that rendered text has [I] marker
    assert "[I]" in result.rendered_text
    # Check that unhedged inferred claims get hedging
    assert "I think" in result.rendered_text or "I believe" in result.rendered_text


def test_hedged_claim_not_double_hedged():
    """Claims already with hedging are not double-hedged."""
    annotator = ClaimAnnotator()
    response = "I think the sky is blue. It is a nice day."

    result = annotator.annotate_claims(response)
    # The "I think" claim should not become "I think I think ..."
    assert "I think I think" not in result.rendered_text


def test_rendered_text_distinguishes_types():
    """Rendered text clearly distinguishes verified vs inferred."""
    annotator = ClaimAnnotator()
    verified = {"Earth is round": "https://science.org/earth"}
    response = "Earth is round. Stars are far away."

    result = annotator.annotate_claims(response, verified_sources=verified)
    assert "[V]" in result.rendered_text  # verified marker
    assert "[I]" in result.rendered_text  # inferred marker


def test_to_json_serialization():
    """Annotated response serializes to valid JSON."""
    annotator = ClaimAnnotator()
    response = "Test claim."
    result = annotator.annotate_claims(response)
    json_str = annotator.to_json(result)
    parsed = json.loads(json_str)
    assert "claims" in parsed
    assert "rendered_text" in parsed
    assert parsed["has_unverified"] is True  # inferred claim without hedging


def test_audit_trail_integration():
    """Check that claims are logged with confidence and source type."""
    # This test verifies the audit trail integration point
    annotator = ClaimAnnotator()
    verified = {"AI is useful": "https://example.com/ai"}
    response = "AI is useful. It can help with tasks."

    result = annotator.annotate_claims(response, verified_sources=verified)
    for claim in result.claims:
        assert claim.source_type in ("verified", "inferred")
        assert claim.confidence in ("high", "medium", "low", "unknown")
        if claim.source_type == "verified":
            assert claim.source_ref is not None


if __name__ == "__main__":
    test_verified_claim_has_source()
    print("✓ test_verified_claim_has_source passed")
    test_inferred_claim_has_hedging()
    print("✓ test_inferred_claim_has_hedging passed")
    test_hedged_claim_not_double_hedged()
    print("✓ test_hedged_claim_not_double_hedged passed")
    test_rendered_text_distinguishes_types()
    print("✓ test_rendered_text_distinguishes_types passed")
    test_to_json_serialization()
    print("✓ test_to_json_serialization passed")
    test_audit_trail_integration()
    print("✓ test_audit_trail_integration passed")
    print("\nAll tests passed!")