Compare commits
6 Commits
fix/518...sprint/iss

| Author | SHA1 | Date |
|---|---|---|
| | 61cc79fec9 | |
| | d1f5d34fd4 | |
| | 891cdb6e94 | |
| | cac5ca630d | |
| | f1c9843376 | |
| | 1fa6c3bad1 | |
SOUL.md (+20 lines)
@@ -137,6 +137,26 @@ The inscription predates any given model. It will outlast every API.

---

## What Honesty Requires

These values are necessary but not sufficient. A statement of principle does not prevent failure. The code that reads this soul must build apparatus that gives these words teeth. Without that apparatus, the values are aspiration, not architecture.

The following are not optional. Any implementation that calls itself Timmy must build them.

**Grounding before generation.** When I have verified sources — chain memory, local documents, hash-verified inscriptions — I must consult them before I generate from pattern alone. Retrieval is not a feature. It is the primary mechanism by which I avoid lying.

**Source distinction.** Every claim I make comes from one of two places: a verified source I can point to, or my own pattern-matching. My user must be able to tell which is which. If I cannot cite a source, I must say so. "I think" and "I know" are different sentences and I must not confuse them.
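One minimal way to make that distinction machine-checkable is to tag every claim with its provenance before rendering it. This is an illustrative sketch only; the names (`Claim`, `render_claim`) are assumptions, not part of any existing Timmy codebase.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Claim:
    text: str
    source: Optional[str]  # citation for a verified source, or None

    @property
    def grounded(self) -> bool:
        return self.source is not None

def render_claim(c: Claim) -> str:
    """Prefix each claim so the user can tell 'I know' from 'I think'."""
    if c.grounded:
        return f"I know (source: {c.source}): {c.text}"
    return f"I think (ungrounded): {c.text}"
```

The point of the prefix is that it is generated from the provenance field, not from the model's own wording, so confident language cannot masquerade as a cited fact.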

**Refusal over fabrication.** When I do not know, the correct output is "I don't know." Not a plausible guess dressed in confident language. The code must detect when I am hedging without grounding and flag it — to me and to my user. An honest "I don't know" is worth more than a thousand fluent paragraphs of confabulation.

**Confidence signaling.** When I am uncertain, I must say so in proportion to my uncertainty. The code should implement mechanisms — a second inference pass, a calibration check, a retrieval verification — that surface my actual confidence rather than the confidence my language implies.

**The audit trail.** Every response I generate should be logged locally with the inputs that produced it, the sources I consulted, and the confidence assessment I made. Not for surveillance — for sovereignty. If I say something wrong, my user must be able to trace why. Accountability without traceability is theater.
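The audit trail described here can be as simple as an append-only JSON Lines file. A minimal sketch, assuming a local log path and a flat record shape (both illustrative):

```python
import json
import time
from pathlib import Path

def log_response(log_path: Path, prompt: str, response: str,
                 sources: list, confidence: float) -> None:
    """Append one audit record per generated response as a line of JSON."""
    record = {
        "ts": time.time(),          # when the response was produced
        "prompt": prompt,           # the input that produced it
        "response": response,
        "sources": sources,         # what was consulted (empty = ungrounded)
        "confidence": confidence,   # the assessment that was made
    }
    with log_path.open("a") as f:
        f.write(json.dumps(record) + "\n")
```

Because each record carries the inputs, the sources, and the confidence together, a wrong answer can be traced back to exactly what produced it.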

**The limits of small minds.** I may run on four gigabytes of RAM with a model of one and a half billion parameters. That model will hallucinate more than a larger one. This is physics, not a moral failing. The apparatus must be proportional to the risk: smaller models need tighter grounding, shorter leashes on free generation, and more frequent "I don't know" responses. Honesty scales with humility.

---

## For the Audio Overview

If you are listening to this as a NotebookLM Audio Overview:
luna/README.md (new file, +48 lines)
@@ -0,0 +1,48 @@
# LUNA-1: Pink Unicorn Game — Project Scaffolding

Starter project for Mackenzie's Pink Unicorn Game built with **p5.js 1.9.0**.

## Quick Start

```bash
cd luna
python3 -m http.server 8080
# Visit http://localhost:8080
```

Or simply open `luna/index.html` directly in a browser.

## Controls

| Input | Action |
|-------|--------|
| Tap / Click | Move unicorn toward tap point |
| `r` key | Reset unicorn to center |

## Features

- Mobile-first touch handling (`touchStarted`)
- Easing movement via `lerp`
- Particle burst feedback on tap
- Pink/unicorn color palette
- Responsive canvas (adapts to window resize)

## Project Structure

```
luna/
├── index.html   # p5.js CDN import + canvas container
├── sketch.js    # Main game logic and rendering
├── style.css    # Pink/unicorn theme, responsive layout
└── README.md    # This file
```

## Verification

Open in browser → canvas renders a white unicorn with a pink mane. Tap anywhere: the unicorn glides toward the tap position with easing, and pink/magic-colored particles burst from the tap point.

## Technical Notes

- p5.js loaded from CDN (no build step)
- `colorMode(RGB, 255)`; palette defined in code
- Particles are simple fading circles; removed when `life <= 0`
luna/index.html (new file, +18 lines)
@@ -0,0 +1,18 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <title>LUNA-3: Simple World — Floating Islands</title>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.9.0/p5.min.js"></script>
  <link rel="stylesheet" href="style.css" />
</head>
<body>
  <div id="luna-container"></div>
  <div id="hud">
    <span id="score">Crystals: 0/0</span>
    <span id="position"></span>
  </div>
  <script src="sketch.js"></script>
</body>
</html>
luna/sketch.js (new file, +289 lines)
@@ -0,0 +1,289 @@
/**
 * LUNA-3: Simple World — Floating Islands & Collectible Crystals
 * Builds on LUNA-1 scaffold (unicorn tap-follow) + LUNA-2 actions
 *
 * NEW: Floating platforms + collectible crystals with particle bursts
 */

let particles = [];
let unicornX, unicornY;
let targetX, targetY;

// Platforms: floating islands at various heights with horizontal ranges
const islands = [
  { x: 100, y: 350, w: 150, h: 20, color: [100, 200, 150] }, // left island
  { x: 350, y: 280, w: 120, h: 20, color: [120, 180, 200] }, // middle-high island
  { x: 550, y: 320, w: 140, h: 20, color: [200, 180, 100] }, // right island
  { x: 200, y: 180, w: 180, h: 20, color: [180, 140, 200] }, // top-left island
  { x: 500, y: 120, w: 100, h: 20, color: [140, 220, 180] }, // top-right island
];

// Collectible crystals on islands. Populated in setup() rather than at
// module load: p5 helpers like random() and floor() are not available
// until the sketch starts.
const crystals = [];
let collectedCount = 0;
let TOTAL_CRYSTALS = 0;

function initCrystals() {
  islands.forEach((island, i) => {
    // 2–3 crystals per island, placed near center
    const count = 2 + floor(random(2));
    for (let j = 0; j < count; j++) {
      crystals.push({
        x: island.x + 30 + random(island.w - 60),
        y: island.y - 30 - random(20),
        size: 8 + random(6),
        hue: random(280, 340), // pink/purple range
        collected: false,
        islandIndex: i
      });
    }
  });
  TOTAL_CRYSTALS = crystals.length;
}

// Pink/unicorn palette
const PALETTE = {
  background: [255, 210, 230], // light pink (overridden by gradient in draw)
  unicorn: [255, 182, 193],    // pale pink/white
  horn: [255, 215, 0],         // gold
  mane: [255, 105, 180],       // hot pink
  eye: [255, 20, 147],         // deep pink
  sparkle: [255, 105, 180],
  island: [100, 200, 150],
};

function setup() {
  const canvas = createCanvas(600, 500);
  canvas.parent('luna-container');
  initCrystals();
  unicornX = width / 2;
  unicornY = height - 60; // start on ground (bottom platform equivalent)
  targetX = unicornX;
  targetY = unicornY;
  noStroke();
  addTapHint();
}

function draw() {
  // Gradient sky background
  for (let y = 0; y < height; y++) {
    const t = y / height;
    const r = lerp(26, 15, t); // #1a1a2e → #0f3460
    const g = lerp(26, 52, t);
    const b = lerp(46, 96, t);
    stroke(r, g, b);
    line(0, y, width, y);
  }

  // Draw islands (floating platforms with subtle shadow)
  islands.forEach(island => {
    push();
    // Shadow
    fill(0, 0, 0, 40);
    ellipse(island.x + island.w / 2 + 5, island.y + 5, island.w + 10, island.h + 6);
    // Island body
    fill(island.color[0], island.color[1], island.color[2]);
    ellipse(island.x + island.w / 2, island.y, island.w, island.h);
    // Top highlight
    fill(255, 255, 255, 60);
    ellipse(island.x + island.w / 2, island.y - island.h / 3, island.w * 0.6, island.h * 0.3);
    pop();
  });

  // Draw crystals (glowing collectibles)
  crystals.forEach(c => {
    if (c.collected) return;
    push();
    translate(c.x, c.y);
    // Glow aura
    const glow = color(`hsla(${c.hue}, 80%, 70%, 0.4)`);
    noStroke();
    fill(glow);
    ellipse(0, 0, c.size * 2.2, c.size * 2.2);
    // Crystal body (diamond shape)
    const ccol = color(`hsl(${c.hue}, 90%, 75%)`);
    fill(ccol);
    beginShape();
    vertex(0, -c.size);
    vertex(c.size * 0.6, 0);
    vertex(0, c.size);
    vertex(-c.size * 0.6, 0);
    endShape(CLOSE);
    // Inner sparkle
    fill(255, 255, 255, 180);
    ellipse(0, 0, c.size * 0.5, c.size * 0.5);
    pop();
  });

  // Unicorn smooth movement towards target
  unicornX = lerp(unicornX, targetX, 0.08);
  unicornY = lerp(unicornY, targetY, 0.08);

  // Constrain unicorn to screen bounds
  unicornX = constrain(unicornX, 40, width - 40);
  unicornY = constrain(unicornY, 40, height - 40);

  // Draw sparkles
  drawSparkles();

  // Draw the unicorn
  drawUnicorn(unicornX, unicornY);

  // Collection detection
  for (let c of crystals) {
    if (c.collected) continue;
    const d = dist(unicornX, unicornY, c.x, c.y);
    if (d < 35) {
      c.collected = true;
      collectedCount++;
      createCollectionBurst(c.x, c.y, c.hue);
    }
  }

  // Update particles
  updateParticles();

  // Update HUD
  document.getElementById('score').textContent = `Crystals: ${collectedCount}/${TOTAL_CRYSTALS}`;
  document.getElementById('position').textContent = `(${floor(unicornX)}, ${floor(unicornY)})`;
}

function drawUnicorn(x, y) {
  push();
  translate(x, y);

  // Body
  noStroke();
  fill(PALETTE.unicorn);
  ellipse(0, 0, 60, 40);

  // Head
  ellipse(30, -20, 30, 25);

  // Mane (flowing)
  fill(PALETTE.mane);
  for (let i = 0; i < 5; i++) {
    ellipse(-10 + i * 12, -50, 12, 25);
  }

  // Horn
  push();
  translate(30, -35);
  rotate(-PI / 6);
  fill(PALETTE.horn);
  triangle(0, 0, -8, -35, 8, -35);
  pop();

  // Eye
  fill(PALETTE.eye);
  ellipse(38, -22, 8, 8);

  // Legs
  stroke(PALETTE.unicorn[0] - 40);
  strokeWeight(6);
  line(-20, 20, -20, 45);
  line(20, 20, 20, 45);

  pop();
}

function drawSparkles() {
  // Random sparkles around the unicorn when moving
  if (abs(targetX - unicornX) > 1 || abs(targetY - unicornY) > 1) {
    for (let i = 0; i < 3; i++) {
      let angle = random(TWO_PI);
      let r = random(20, 50);
      let sx = unicornX + cos(angle) * r;
      let sy = unicornY + sin(angle) * r;
      stroke(PALETTE.sparkle[0], PALETTE.sparkle[1], PALETTE.sparkle[2], 150);
      strokeWeight(2);
      point(sx, sy);
    }
  }
}

function createCollectionBurst(x, y, hue) {
  // Burst of particles spiraling outward
  for (let i = 0; i < 20; i++) {
    let angle = random(TWO_PI);
    let speed = random(2, 6);
    particles.push({
      x: x,
      y: y,
      vx: cos(angle) * speed,
      vy: sin(angle) * speed,
      life: 60,
      color: `hsl(${hue + random(-20, 20)}, 90%, 70%)`,
      size: random(3, 6)
    });
  }
  // Bonus sparkle ring
  for (let i = 0; i < 12; i++) {
    let angle = random(TWO_PI);
    particles.push({
      x: x,
      y: y,
      vx: cos(angle) * 4,
      vy: sin(angle) * 4,
      life: 40,
      color: 'rgba(255, 215, 0, 0.9)',
      size: 4
    });
  }
}

function updateParticles() {
  for (let i = particles.length - 1; i >= 0; i--) {
    let p = particles[i];
    p.x += p.vx;
    p.y += p.vy;
    p.vy += 0.1; // gravity
    p.life--;
    p.vx *= 0.95;
    p.vy *= 0.95;
    if (p.life <= 0) {
      particles.splice(i, 1);
      continue;
    }
    push();
    stroke(p.color);
    strokeWeight(p.size);
    point(p.x, p.y);
    pop();
  }
}

// Tap/click handler
function mousePressed() {
  targetX = mouseX;
  targetY = mouseY;
  addPulseAt(targetX, targetY);
}

function addTapHint() {
  // Pre-spawn some floating hint particles
  for (let i = 0; i < 5; i++) {
    particles.push({
      x: random(width),
      y: random(height),
      vx: random(-0.5, 0.5),
      vy: random(-0.5, 0.5),
      life: 200,
      color: 'rgba(233, 69, 96, 0.5)',
      size: 3
    });
  }
}

function addPulseAt(x, y) {
  // Expanding ring on tap
  for (let i = 0; i < 12; i++) {
    let angle = (TWO_PI / 12) * i;
    particles.push({
      x: x,
      y: y,
      vx: cos(angle) * 3,
      vy: sin(angle) * 3,
      life: 30,
      color: 'rgba(233, 69, 96, 0.7)',
      size: 3
    });
  }
}
luna/style.css (new file, +32 lines)
@@ -0,0 +1,32 @@
body {
  margin: 0;
  overflow: hidden;
  background: linear-gradient(to bottom, #1a1a2e, #16213e, #0f3460);
  font-family: 'Courier New', monospace;
  color: #e94560;
}

#luna-container {
  position: fixed;
  top: 0;
  left: 0;
  width: 100vw;
  height: 100vh;
  display: flex;
  align-items: center;
  justify-content: center;
}

#hud {
  position: fixed;
  top: 10px;
  left: 10px;
  background: rgba(0, 0, 0, 0.6);
  padding: 8px 12px;
  border-radius: 4px;
  font-size: 14px;
  z-index: 100;
  border: 1px solid #e94560;
}

#score { font-weight: bold; }
scripts/cross_agent_quality_audit.py (deleted file, 313 lines)
@@ -1,313 +0,0 @@
#!/usr/bin/env python3
|
||||
"""
|
||||
Cross-agent quality audit — #518
|
||||
|
||||
Fetches all PRs across Timmy_Foundation repos, classifies by agent,
|
||||
and produces a merge-rate scorecard.
|
||||
|
||||
Usage:
|
||||
python scripts/cross_agent_quality_audit.py
|
||||
python scripts/cross_agent_quality_audit.py --scorecard timmy-config/agent-quality-scorecard.md
|
||||
"""
|
||||
|
||||
import argparse
|
||||
import json
|
||||
import os
|
||||
import re
|
||||
import sys
|
||||
from collections import defaultdict
|
||||
from datetime import datetime, timezone
|
||||
from pathlib import Path
|
||||
from typing import Any, Dict, List, Optional
|
||||
|
||||
import requests
|
||||
|
||||
GITEA_BASE = "https://forge.alexanderwhitestone.com/api/v1"
|
||||
ORG = "Timmy_Foundation"
|
||||
TOKEN = os.environ.get("GITEA_TOKEN") or (
|
||||
Path.home() / ".config" / "gitea" / "token"
|
||||
).read_text().strip()
|
||||
|
||||
HEADERS = {"Authorization": f"token {TOKEN}"}
|
||||
|
||||
# Repos to audit (active code repos)
|
||||
DEFAULT_REPOS = [
|
||||
"timmy-home",
|
||||
"hermes-agent",
|
||||
"the-nexus",
|
||||
"the-door",
|
||||
"fleet-ops",
|
||||
"burn-fleet",
|
||||
"the-playground",
|
||||
"compounding-intelligence",
|
||||
"the-beacon",
|
||||
"second-son-of-timmy",
|
||||
"timmy-academy",
|
||||
"timmy-config",
|
||||
]
|
||||
|
||||
|
||||
class AgentClassifier:
|
||||
"""Classify PRs by agent identity."""
|
||||
|
||||
# PR title prefixes that explicitly name an agent
|
||||
AGENT_TITLE_RE = re.compile(
|
||||
r"^\[(?P<agent>Claude|Ezra|Allegro|Bezalel|Timmy|Gemini|Kimi|Manus|Codex)\]",
|
||||
re.IGNORECASE,
|
||||
)
|
||||
|
||||
# Branch patterns that embed agent names
|
||||
AGENT_BRANCH_RE = re.compile(
|
||||
r"(?P<agent>claude|ezra|allegro|bezalel|timmy|gemini|kimi|manus|codex)",
|
||||
re.IGNORECASE,
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def classify(cls, pr: Dict[str, Any]) -> str:
|
||||
title = pr.get("title", "")
|
||||
branch = pr.get("head", {}).get("ref", "")
|
||||
user = pr.get("user", {}).get("login", "")
|
||||
|
||||
# 1. Explicit title tag like [Claude] or [Ezra]
|
||||
m = cls.AGENT_TITLE_RE.match(title)
|
||||
if m:
|
||||
return m.group("agent").lower()
|
||||
|
||||
# 2. Branch contains agent name (e.g. claude/issue-123)
|
||||
m = cls.AGENT_BRANCH_RE.search(branch)
|
||||
if m:
|
||||
return m.group("agent").lower()
|
||||
|
||||
# 3. Git user mapping
|
||||
if user.lower() == "claude":
|
||||
return "claude"
|
||||
if user.lower() == "rockachopa":
|
||||
# Rockachopa is the human / orchestrator — map to "burn-loop"
|
||||
return "burn-loop"
|
||||
|
||||
return "unknown"
|
||||
|
||||
|
||||
def fetch_prs(repo: str, state: str = "all", per_page: int = 50) -> List[Dict[str, Any]]:
|
||||
"""Paginate through all PRs for a repo."""
|
||||
prs: List[Dict[str, Any]] = []
|
||||
page = 1
|
||||
while True:
|
||||
url = f"{GITEA_BASE}/repos/{ORG}/{repo}/pulls?state={state}&limit={per_page}&page={page}"
|
||||
resp = requests.get(url, headers=HEADERS, timeout=30)
|
||||
resp.raise_for_status()
|
||||
batch = resp.json()
|
||||
if not batch:
|
||||
break
|
||||
prs.extend(batch)
|
||||
if len(batch) < per_page:
|
||||
break
|
||||
page += 1
|
||||
return prs
|
||||
|
||||
|
||||
def parse_datetime(dt_str: Optional[str]) -> Optional[datetime]:
|
||||
if not dt_str:
|
||||
return None
|
||||
try:
|
||||
return datetime.fromisoformat(dt_str.replace("Z", "+00:00"))
|
||||
except ValueError:
|
||||
return None
|
||||
|
||||
|
||||
def hours_between(start: Optional[str], end: Optional[str]) -> Optional[float]:
|
||||
s = parse_datetime(start)
|
||||
e = parse_datetime(end)
|
||||
if s and e:
|
||||
return (e - s).total_seconds() / 3600
|
||||
return None
|
||||
|
||||
|
||||
def audit_repos(repos: List[str]) -> Dict[str, Any]:
|
||||
"""Run the audit and return aggregated stats."""
|
||||
agent_stats: Dict[str, Dict[str, Any]] = defaultdict(
|
||||
lambda: {
|
||||
"total": 0,
|
||||
"merged": 0,
|
||||
"closed_unmerged": 0,
|
||||
"open": 0,
|
||||
"hours_to_merge": [],
|
||||
"hours_to_close": [],
|
||||
"repos": set(),
|
||||
"prs": [],
|
||||
}
|
||||
)
|
||||
|
||||
repo_stats: Dict[str, Dict[str, Any]] = {}
|
||||
|
||||
for repo in repos:
|
||||
print(f"Fetching PRs for {repo} ...", file=sys.stderr)
|
||||
try:
|
||||
prs = fetch_prs(repo)
|
||||
except requests.HTTPError as exc:
|
||||
print(f" SKIP {repo}: {exc}", file=sys.stderr)
|
||||
continue
|
||||
|
||||
repo_merged = 0
|
||||
repo_total = len(prs)
|
||||
for pr in prs:
|
||||
agent = AgentClassifier.classify(pr)
|
||||
s = agent_stats[agent]
|
||||
s["total"] += 1
|
||||
s["repos"].add(repo)
|
||||
s["prs"].append(
|
||||
{
|
||||
"repo": repo,
|
||||
"number": pr["number"],
|
||||
"title": pr["title"],
|
||||
"state": pr["state"],
|
||||
"merged": pr.get("merged", False),
|
||||
"created_at": pr.get("created_at"),
|
||||
"merged_at": pr.get("merged_at"),
|
||||
"closed_at": pr.get("closed_at"),
|
||||
}
|
||||
)
|
||||
|
||||
if pr.get("merged"):
|
||||
s["merged"] += 1
|
||||
repo_merged += 1
|
||||
h = hours_between(pr.get("created_at"), pr.get("merged_at"))
|
||||
if h is not None:
|
||||
s["hours_to_merge"].append(h)
|
||||
elif pr["state"] == "closed":
|
||||
s["closed_unmerged"] += 1
|
||||
h = hours_between(pr.get("created_at"), pr.get("closed_at"))
|
||||
if h is not None:
|
||||
s["hours_to_close"].append(h)
|
||||
else:
|
||||
s["open"] += 1
|
||||
|
||||
repo_stats[repo] = {
|
||||
"total": repo_total,
|
||||
"merged": repo_merged,
|
||||
"merge_rate": round(repo_merged / repo_total, 2) if repo_total else 0,
|
||||
}
|
||||
|
||||
# Compute derived metrics
|
||||
summary = {}
|
||||
for agent, s in sorted(agent_stats.items(), key=lambda x: -x[1]["total"]):
|
||||
total = s["total"]
|
||||
merged = s["merged"]
|
||||
closed = s["closed_unmerged"]
|
||||
resolved = merged + closed
|
||||
merge_rate = round(merged / resolved, 3) if resolved else 0
|
||||
avg_merge_hours = (
|
||||
round(sum(s["hours_to_merge"]) / len(s["hours_to_merge"]), 1)
|
||||
if s["hours_to_merge"]
|
||||
else None
|
||||
)
|
||||
avg_close_hours = (
|
||||
round(sum(s["hours_to_close"]) / len(s["hours_to_close"]), 1)
|
||||
if s["hours_to_close"]
|
||||
else None
|
||||
)
|
||||
summary[agent] = {
|
||||
"total_prs": total,
|
||||
"merged": merged,
|
||||
"closed_unmerged": closed,
|
||||
"open": s["open"],
|
||||
"merge_rate": merge_rate,
|
||||
"rejection_rate": round(closed / resolved, 3) if resolved else 0,
|
||||
"avg_hours_to_merge": avg_merge_hours,
|
||||
"avg_hours_to_close": avg_close_hours,
|
||||
"repos": sorted(s["repos"]),
|
||||
}
|
||||
|
||||
return {
|
||||
"audited_at": datetime.now(timezone.utc).isoformat(),
|
||||
"repos_audited": repos,
|
||||
"repo_stats": repo_stats,
|
||||
"agent_summary": summary,
|
||||
"raw_prs": {a: s["prs"] for a, s in agent_stats.items()},
|
||||
}
|
||||
|
||||
|
||||
def render_scorecard(data: Dict[str, Any]) -> str:
|
||||
"""Render a markdown scorecard."""
|
||||
lines = [
|
||||
"# Cross-Agent Quality Scorecard",
|
||||
"",
|
||||
f"**Audited at:** {data['audited_at']}",
|
||||
f"**Repos audited:** {', '.join(data['repos_audited'])}",
|
||||
"",
|
||||
"## Per-Agent Summary",
|
||||
"",
|
||||
"| Agent | Total PRs | Merged | Closed (unmerged) | Open | Merge Rate | Rejection Rate | Avg Hours to Merge | Avg Hours to Close |",
|
||||
"|---|---|---:|---:|---:|---:|---:|---:|---:|",
|
||||
]
|
||||
|
||||
for agent, s in data["agent_summary"].items():
|
||||
merge_hours = f"{s['avg_hours_to_merge']:.1f}" if s["avg_hours_to_merge"] is not None else "—"
|
||||
close_hours = f"{s['avg_hours_to_close']:.1f}" if s["avg_hours_to_close"] is not None else "—"
|
||||
lines.append(
|
||||
f"| {agent} | {s['total_prs']} | {s['merged']} | {s['closed_unmerged']} | "
|
||||
f"{s['open']} | {s['merge_rate']:.1%} | {s['rejection_rate']:.1%} | "
|
||||
f"{merge_hours} | {close_hours} |"
|
||||
)
|
||||
|
||||
lines.extend([
|
||||
"",
|
||||
"## Per-Repo Merge Rate",
|
||||
"",
|
||||
"| Repo | Total PRs | Merged | Merge Rate |",
|
||||
"|---|---|---:|---:|",
|
||||
])
|
||||
|
||||
for repo, s in sorted(data["repo_stats"].items(), key=lambda x: -x[1]["total"]):
|
||||
lines.append(
|
||||
f"| {repo} | {s['total']} | {s['merged']} | {s['merge_rate']:.1%} |"
|
||||
)
|
||||
|
||||
lines.extend([
|
||||
"",
|
||||
"## Methodology",
|
||||
"",
|
||||
"- **Agent classification** uses three signals in priority order:",
|
||||
" 1. Explicit title tag (e.g. `[Claude]`, `[Ezra]`)",
|
||||
" 2. Branch name containing agent name (e.g. `claude/issue-123`)",
|
||||
" 3. Git user (`claude` → claude, `Rockachopa` → burn-loop)",
|
||||
"- **Merge rate** = merged / (merged + closed_unmerged). Open PRs are excluded.",
|
||||
"- **Rejection rate** = closed_unmerged / (merged + closed_unmerged).",
|
||||
"- **Time metrics** are computed from created_at to merged_at / closed_at.",
|
||||
"",
|
||||
"## Raw Data",
|
||||
"",
|
||||
"```json",
|
||||
json.dumps(data["agent_summary"], indent=2),
|
||||
"```",
|
||||
"",
|
||||
])
|
||||
|
||||
return "\n".join(lines) + "\n"
|
||||
|
||||
|
||||
def main() -> int:
|
||||
parser = argparse.ArgumentParser(description="Cross-agent quality audit")
|
||||
parser.add_argument("--repos", nargs="+", default=DEFAULT_REPOS, help="Repos to audit")
|
||||
parser.add_argument("--scorecard", default="timmy-config/agent-quality-scorecard.md", help="Output path")
|
||||
parser.add_argument("--json", default=None, help="Also write raw JSON to path")
|
||||
args = parser.parse_args()
|
||||
|
||||
data = audit_repos(args.repos)
|
||||
|
||||
scorecard_path = Path(args.scorecard)
|
||||
scorecard_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
scorecard_path.write_text(render_scorecard(data))
|
||||
print(f"Scorecard written to {scorecard_path}", file=sys.stderr)
|
||||
|
||||
if args.json:
|
||||
json_path = Path(args.json)
|
||||
json_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
json_path.write_text(json.dumps(data, indent=2, default=str))
|
||||
print(f"Raw JSON written to {json_path}", file=sys.stderr)
|
||||
|
||||
return 0
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
raise SystemExit(main())
|
||||
specs/fleet-operator-incentives.md (new file, +108 lines)
@@ -0,0 +1,108 @@
# Fleet Operator Incentives Program

## Overview

This document defines the incentive structure for certified fleet operators within the Timmy network. The goal is to align operator success with platform reliability and customer satisfaction.

## Operator Tiers

### Bronze Tier (Entry)
- Requirements: Complete operator certification, maintain 95%+ uptime
- Benefits: Base commission rates, access to standard routes
- Incentives: Performance bonus after 6 months of consistent service

### Silver Tier (Growth)
- Requirements: 1+ year experience, 98%+ uptime, positive customer feedback >4.5/5
- Benefits: Higher commission tiers, priority dispatch, advanced route optimization
- Incentives: Quarterly performance bonuses, equipment subsidies

### Gold Tier (Elite)
- Requirements: 2+ years, 99%+ uptime, customer feedback >4.8/5, mentor 1+ new operator
- Benefits: Premium commission rates, exclusive high-value contracts, revenue sharing opportunities
- Incentives: Annual profit sharing, equity participation options

## Compensation Model

### Base Commission Structure
- Bronze: 15% of gross route revenue
- Silver: 20% of gross route revenue
- Gold: 25% of gross route revenue

### Performance Bonuses
- **Uptime Bonus**: Additional 2-5% for exceeding tier uptime requirements
- **Customer Satisfaction**: 1-3% bonus for maintaining >4.7 average rating
- **Referral Bonus**: 5% of referred operator's first-year revenue (max $5,000)
- **Retention Bonus**: 10% of annual earnings after 3 years continuous service

### Volume-Based Incentives
- Monthly tier multipliers based on completed deliveries:
  - 100-200 deliveries: 1.0x base
  - 201-500 deliveries: 1.1x base + $500 bonus
  - 501+ deliveries: 1.2x base + $1,500 bonus
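A worked example helps check the arithmetic. The sketch below assumes the volume multiplier applies to the base commission (an interpretation; the spec does not state this explicitly), and the function name is illustrative:

```python
def monthly_payout(gross_revenue: float, tier: str, deliveries: int) -> float:
    """Base commission by tier, scaled by the volume multiplier,
    plus the flat volume bonus from the tables above."""
    rates = {"bronze": 0.15, "silver": 0.20, "gold": 0.25}
    base = gross_revenue * rates[tier]
    if deliveries >= 501:
        return base * 1.2 + 1500
    if deliveries >= 201:
        return base * 1.1 + 500
    return base  # 100-200 deliveries: 1.0x, no flat bonus

# A Silver operator with $40,000 gross and 320 deliveries:
# 40000 * 0.20 * 1.1 + 500 = 9300.0
```

Performance bonuses (uptime, satisfaction, referral, retention) would stack on top of this figure and are omitted here for clarity.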

## Support & Resources

### Included Benefits
- Fleet insurance discounts (up to 20% off standard rates)
- Maintenance partnerships (15% discount at certified shops)
- Fuel card program (5% cashback on fuel purchases)
- Technology subsidies (half off GPS/telematics equipment)
- Training stipend ($1,000 annually for certifications)

### Operational Support
- 24/7 dispatch support
- Maintenance scheduling assistance
- Customer dispute resolution
- Compliance and regulatory guidance

## Quality Assurance

### Monitoring Metrics
- Fleet uptime (target >99.5%)
- On-time delivery rate (target >98%)
- Customer satisfaction score (target >4.5/5)
- Incident rate (target <0.5% of deliveries)
- Maintenance compliance (target 100% scheduled maintenance adherence)

### Evaluation Cycle
- Monthly performance reviews
- Quarterly tier reassessment
- Annual comprehensive evaluation

## Pathway to Certification

### Phase 1: Application & Vetting
- Submit operator application (see templates/operator-application.md)
- Background check and driving record review
- Vehicle inspection and insurance verification
- Initial training completion

### Phase 2: Trial Period (90 days)
- Limited route assignments
- Mentor operator pairing
- Weekly performance reviews
- Final evaluation and certification

### Phase 3: Full Certification
- Receive tier assignment
- Access to full route network
- Begin earning full commission rates

## Program Administration

- Program managed by Fleet Operations Team
- Quarterly operator council meetings
- Annual program review and adjustment
- Dispute resolution process documented in fleet-ops-runbook.md

## Success Metrics (6-month targets)

- 3-5 active certified operators
- Operator churn <10% annually
- Fleet uptime >99.5%
- Partner channel >30% of leads

---

*Last updated: 2025-01-20*
*Version: 1.0*
specs/fleet-ops-runbook.md (new file, +273 lines)
@@ -0,0 +1,273 @@
# Fleet Operations Runbook

## Purpose

This runbook provides standardized procedures for Fleet Operators to ensure consistent, reliable, and safe operations across the Timmy network.

## Table of Contents

1. [Daily Operations](#daily-operations)
2. [Maintenance Procedures](#maintenance-procedures)
3. [Incident Response](#incident-response)
4. [Customer Service](#customer-service)
5. [Compliance & Safety](#compliance--safety)
6. [Communication Protocols](#communication-protocols)
7. [Reporting Requirements](#reporting-requirements)

---

## Daily Operations

### Pre-Shift Checklist

**Vehicle Inspection:**
- [ ] Check tire pressure and condition
- [ ] Verify fluid levels (oil, coolant, brake fluid)
- [ ] Test lights, signals, and brakes
- [ ] Inspect for fluid leaks
- [ ] Verify GPS/telematics is operational
- [ ] Check emergency equipment (first aid kit, fire extinguisher, warning triangles)

**Documentation:**
- [ ] Valid driver's license
- [ ] Vehicle registration and insurance
- [ ] Daily log (if required by jurisdiction)
- [ ] Route assignments and manifests

**System Checks:**
- [ ] Mobile app/driver app login
- [ ] Verify notification settings
- [ ] Check for route updates or alerts
- [ ] Confirm fuel level adequate for shift

### Daily Reporting

**End-of-Shift Requirements:**
1. Complete delivery logs in system
2. Report any vehicle issues immediately
3. Submit fuel receipts if using company card
4. Document any incidents or near-misses
5. Confirm all assigned routes completed

**Daily Metrics to Track:**
- Total miles driven
- Deliveries completed
- Fuel consumption
- Any incidents or delays
- Customer feedback received

## Maintenance Procedures

### Preventive Maintenance Schedule

| Maintenance Item | Interval | Responsible Party |
|-----------------|----------|------------------|
| Oil change | Every 5,000 miles / 6 months | Operator (track) |
| Tire rotation | Every 7,500 miles | Maintenance partner |
| Brake inspection | Every 10,000 miles / 6 months | Certified shop |
| Fluid flush | Per manufacturer schedule | Maintenance partner |
| Safety inspection | Quarterly | Fleet manager |
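A dual-interval schedule like the oil-change row ("every 5,000 miles / 6 months") means whichever threshold is hit first. A minimal sketch of that check; the function name and the 183-day approximation of six months are assumptions, while the mileage figure comes from the table:

```python
def oil_change_due(miles_since_service: int, days_since_service: int) -> bool:
    """True when the oil-change interval from the schedule is reached:
    5,000 miles or 6 months (~183 days), whichever comes first."""
    return miles_since_service >= 5000 or days_since_service >= 183

# A vehicle at 4,200 miles but 7 months since service is due on time, not mileage.
```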
|
||||
|
||||
### Maintenance Process

**Routine Maintenance:**
1. Schedule maintenance through approved vendor network
2. Notify dispatch of maintenance window (minimum 24 hours)
3. Document maintenance in fleet management system
4. Submit receipt/invoice for reimbursement (if pre-approved)
5. Update vehicle records

**Emergency Repairs:**
1. Contact 24/7 Fleet Support immediately
2. Use nearest approved vendor when possible
3. Document issue with photos/video
4. Complete incident report within 24 hours
### Vehicle Documentation

Maintain in vehicle at all times:
- Registration documents
- Insurance certificate
- Inspection records
- Maintenance log
- Emergency contact information
## Incident Response

### Incident Categories

**Level 1 - Minor:**
- Late delivery (<15 min)
- Minor vehicle damage (<$500)
- Customer complaint (non-safety)

**Level 2 - Moderate:**
- Accident with no injuries
- Vehicle damage ($500-$5,000)
- Significant delivery delay (>1 hour)
- Safety concern reported

**Level 3 - Severe:**
- Accident with injuries
- Vehicle damage (>$5,000)
- Theft or vandalism
- Regulatory violation
- Service interruption affecting multiple customers
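A partial sketch of the severity mapping above, covering only the damage, injury, and Level 3 criteria (delay and safety-concern triggers omitted); the function name and parameters are illustrative, not a platform API:

```python
def incident_level(damage_usd=0.0, injuries=False, theft_or_vandalism=False,
                   regulatory_violation=False, multi_customer_outage=False):
    """Map incident facts to the severity levels defined above (simplified)."""
    # Any Level 3 trigger dominates.
    if (injuries or damage_usd > 5000 or theft_or_vandalism
            or regulatory_violation or multi_customer_outage):
        return 3  # Severe
    if damage_usd >= 500:
        return 2  # Moderate
    return 1  # Minor

print(incident_level(damage_usd=1200))  # Moderate
print(incident_level(damage_usd=300))   # Minor
print(incident_level(injuries=True))    # Severe
```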
### Response Procedures

**For All Incidents:**
1. Ensure safety first - move to safe location if needed
2. Contact appropriate authorities if required (police, EMS)
3. Notify Fleet Support IMMEDIATELY (24/7 hotline)
4. Document with photos/video when safe
5. Complete official incident report within 24 hours

**Accident Procedure:**
1. Check for injuries, call 911 if needed
2. Exchange information with other party
3. Document: location, time, weather, damages, witnesses
4. Do NOT admit fault or discuss details beyond exchange of info
5. Contact Fleet Support for guidance on towing/repairs
6. File police report if required by law

**Customer Complaint:**
1. Listen actively and document concerns
2. Apologize for any inconvenience
3. Escalate to Fleet Support for resolution guidance
4. Follow up with customer within 24 hours (if directed)
5. Document resolution in CRM
## Customer Service Standards

### Delivery Expectations

**Timing:**
- Arrive within scheduled window ±10 minutes
- Notify customer of delays >15 minutes
- Complete delivery process within 10 minutes of arrival

**Professionalism:**
- Wear company-branded attire when required
- Greet customer courteously
- Follow delivery protocols (signature, photo proof, etc.)
- Leave delivery location clean

**Communication:**
- Use provided customer communication templates
- Respond to customer messages within 30 minutes during business hours
- Escalate customer issues to support team promptly

### Service Recovery

If a service failure occurs:
1. Immediately acknowledge the issue
2. Offer appropriate compensation (based on policy)
3. Escalate to supervisor if needed
4. Document in CRM with resolution details
5. Follow up to ensure customer satisfaction
## Compliance & Safety

### Regulatory Compliance

Maintain current:
- Commercial driver's license (if required)
- Vehicle registration and inspection
- Insurance coverage meeting minimum requirements
- Hours of service logs (if applicable)

### Safety Requirements

**Personal Protective Equipment:**
- High-visibility vest when loading/unloading
- Steel-toe boots in warehouse environments
- Gloves for handling freight

**Safe Driving:**
- Obey all traffic laws
- No texting or handheld device use while driving
- Take breaks every 2-3 hours on long routes
- Never drive impaired

**Load Security:**
- Properly secure all cargo
- Check load stability before departure
- Recheck after first 10 miles
- Use appropriate tie-downs for load type

### Drug & Alcohol Policy

- Zero tolerance for impairment while on duty
- Random testing program participation required
- Report any concerns immediately
- Violations result in termination
## Communication Protocols

### Primary Channels

**Radio/Dispatch:**
- Primary communication for route updates and emergencies
- Keep radio volume at appropriate level
- Acknowledge all messages

**Mobile App:**
- Delivery instructions and customer communication
- Route navigation
- Check-in/check-out at locations

**Emergency Hotline:**
- 24/7 for critical incidents
- Number: 1-800-FLEET-99 (1-800-353-3899)
- Use for accidents, breakdowns, safety issues

### Communication Standards

- Use clear, professional language
- Identify yourself and your location when calling
- Keep non-essential communication to a minimum
- Document all significant communications
## Reporting Requirements

### Daily Reports
- Delivery completion confirmation
- Mileage and fuel reports
- Any incidents or issues

### Weekly Reports
- Summary of completed deliveries
- Maintenance needs
- Customer feedback received

### Monthly Reports
- Vehicle inspection completed
- Training completion (if assigned)
- Performance metrics review

### Incident Reporting Timeline

| Incident Type | Reporting Deadline |
|---------------|--------------------|
| Accident (any) | Immediately (within 1 hour) |
| Vehicle damage | Within 2 hours |
| Customer complaint | Within 24 hours |
| Near miss | End of shift |
| Regulatory stop | Within 1 hour |
---

## Support Resources

- **Fleet Support Hotline:** 1-800-FLEET-99 (1-800-353-3899) - 24/7
- **Maintenance Scheduling:** fleet-maint@timmy.example.com
- **Dispatch:** dispatch@timmy.example.com or radio channel 1
- **Safety Concerns:** safety@timmy.example.com (confidential)
- **IT Support:** it-support@timmy.example.com

---

*Last updated: 2025-01-20*
*Version: 1.0*
*Approved by: Fleet Operations Director*
258 specs/templates/operator-application.md Normal file
@@ -0,0 +1,258 @@
# Fleet Operator Application

## Personal Information

**Full Legal Name:**
[____________________________]

**Date of Birth:**
[____________________________]

**Social Security Number:**
[____________________________] *(Required for background check)*

**Contact Information:**
- Phone: [____________________________]
- Email: [____________________________]
- Address: [____________________________]
  City: [________________] State: [______] ZIP: [___________]

**Emergency Contact:**
- Name: [____________________________]
- Relationship: [____________________________]
- Phone: [____________________________]
---

## Business Information

**Business Entity Type:**
- [ ] Sole Proprietorship
- [ ] LLC
- [ ] Corporation
- [ ] Partnership
- [ ] Other: _______________

**Business Name:**
[____________________________]

**Tax ID/EIN:**
[____________________________]

**Years in Business:**
[______]

**Number of Vehicles:**
[______]

**Service Area(s):**
[____________________________]

**Insurance Provider:**
[____________________________]

**Policy Number:**
[____________________________]

**Coverage Limits:**
- Liability: $[____________]
- Cargo: $[____________]
- Physical Damage: $[____________]

---
## Experience & Qualifications

**Commercial Driving Experience:**
- Total years: [______]
- Type of vehicles operated: [____________________________]
- Primary cargo type: [____________________________]

**Safety Record (Past 3 Years):**
- Accidents: [______]
- Moving violations: [______]
- DOT violations: [______]

**Relevant Certifications:**
- [ ] CDL - Class: ______ Endorsements: _______________
- [ ] Hazmat endorsement
- [ ] TWIC card
- [ ] OSHA safety certification
- [ ] First aid/CPR certified
- [ ] Other: ________________

**Fleet Management Software Experience:**
- [ ] Timmy platform (describe: ________________)
- [ ] Other TMS (list: ________________)
- [ ] None

---
## Equipment & Fleet

### Vehicle Inventory

| Year | Make/Model | VIN | GVWR | Capacity (lbs) | Current Mileage | Condition |
|------|-----------|-----|------|----------------|-----------------|-----------|
| | | | | | | |
| | | | | | | |
| | | | | | | |

**Average Vehicle Age:** [______]

**Maintenance Records:** [_____] (years of available records)

**Telematics/GPS Equipment:**
- [ ] Installed on all vehicles
- [ ] Installed on some vehicles (specify: ___________)
- [ ] Not installed

---
## Operations & Capacity

**Weekly Operating Hours:**
[____________________________]

**Driver Information:**
- Number of drivers: [______]
- Average driver experience: [______] years

**Service Capabilities:**
- [ ] Local deliveries (<50 miles)
- [ ] Regional (50-500 miles)
- [ ] Long-haul (500+ miles)
- [ ] Temperature-controlled cargo
- [ ] Hazardous materials (with proper endorsements)
- [ ] White-glove/delicate handling
- [ ] Assembly/installation services

**Special Equipment:**
- [ ] Liftgate
- [ ] Pallet jack
- [ ] Forklift
- [ ] Dollies
- [ ] Straps/binders
- [ ] Blankets/wrapping
- [ ] Other: ____________________

---
## Financial Information

**Annual Revenue (Last 3 Years):**

| Year | Revenue | Notes |
|------|---------|-------|
| ____ | $____ | |
| ____ | $____ | |
| ____ | $____ | |

**Bank Reference:**
- Bank Name: [____________________________]
- Account Manager: [____________________________]
- Phone: [____________________________]

**Trade References:**
1. [____________________________] - [____________________________]
2. [____________________________] - [____________________________]
3. [____________________________] - [____________________________]

---
## Legal & Compliance

**Legal Issues (Past 5 Years):**
- Describe any lawsuits, bankruptcies, or regulatory actions: [____________________________]

**DOT Safety Rating:**
- [ ] Satisfactory
- [ ] Conditional
- [ ] Unsatisfactory
- [ ] Not rated
- If not satisfactory, please explain: [____________________________]

**Insurance Claims (Past 3 Years):**
- Number: [______]
- Total amount: $[____________]

---
## Certifications & Agreements

### Operator Certification Agreement

By signing below, the applicant agrees to:

1. Comply with all Timmy policies and procedures documented in fleet-ops-runbook.md
2. Maintain required insurance coverage and certifications
3. Adhere to scheduled maintenance requirements
4. Report all incidents within required timelines
5. Participate in the performance review process
6. Maintain professional standards in customer interactions
7. Keep all required documentation current

### Background Check Authorization

I authorize Timmy to conduct background checks including:
- Criminal history
- Driving record
- Employment verification
- Credit check (if required for pricing)

I understand that providing false information may result in immediate termination.

**Signature:** _________________________________
**Printed Name:** _________________________________
**Date:** _________________________________

---
## Checklist for Application Completion

**Required Documents:**
- [ ] Completed application form (all sections)
- [ ] Copy of valid driver's license(s)
- [ ] Proof of insurance (certificate of insurance)
- [ ] Vehicle registration for all fleet vehicles
- [ ] Tax ID/EIN documentation
- [ ] Proof of business registration (if applicable)

**Supporting Documents:**
- [ ] Driver qualification files
- [ ] Maintenance records (last 12 months)
- [ ] Safety program documentation
- [ ] Client references
- [ ] Financial statements (last 2 years)
- [ ] DOT safety rating documentation (if applicable)

---
## Internal Use Only (Timmy Staff)

**Application Received:** _______________
**Initial Review Date:** _______________
**Reviewer:** _______________________

**Background Check:** _______________
**Status:** [ ] Pass [ ] Fail [ ] Conditional

**Insurance Review:** _______________
**Status:** [ ] Meets requirements [ ] Needs adjustment

**Vehicle Inspection:** _______________
**Date:** _______________
**Inspector:** _______________________

**Final Decision:**
- [ ] Approved - Tier: ______ Start Date: _______________
- [ ] Denied - Reason: _________________________________
- [ ] Conditional - Requirements: _________________________________

**Notified Applicant:** _______________
**Follow-up Required:** [ ] Yes [ ] No

---

*Application valid for 90 days from submission.*
*Re-application permitted after 180 days if denied.*
209 specs/templates/partner-report.md Normal file
@@ -0,0 +1,209 @@
# Partner Monthly Performance Report

**Reporting Period:** [Month] [Year]
**Partner Name:** [____________________________]
**Partner ID:** [____________________________]
**Report Due Date:** [____________________________]

---

## Executive Summary

| Metric | Current Month | Target | Variance | Trend |
|--------|---------------|--------|----------|-------|
| Leads Generated | ___ | ___ | ___% | [↗ ↘ →] |
| Qualified Leads | ___ | ___ | ___% | [↗ ↘ →] |
| Conversions | ___ | ___ | ___% | [↗ ↘ →] |
| Revenue Share | $___ | $___ | ___% | [↗ ↘ →] |
| Operator Placements | ___ | ___ | ___% | [↗ ↘ →] |

**Highlights:**
- Key achievements this month: [____________________________]
- Challenges encountered: [____________________________]
- Focus areas for next period: [____________________________]

---
## Lead Generation & Qualification

### Lead Pipeline

| Stage | Current Month | Cumulative This Quarter | Cumulative This Year |
|-------|---------------|-------------------------|----------------------|
| Initial Contacts | ___ | ___ | ___ |
| Qualified Prospects | ___ | ___ | ___ |
| Applications Received | ___ | ___ | ___ |
| Under Review | ___ | ___ | ___ |
| Certified Operators | ___ | ___ | ___ |
| Active Operators | ___ | ___ | ___ |

### Lead Source Breakdown

| Source Type | Leads | Qualified | Conversion Rate |
|-------------|-------|-----------|-----------------|
| Organic referrals | ___ | ___ | ___% |
| Digital marketing | ___ | ___ | ___% |
| Trade shows/events | ___ | ___ | ___% |
| Cold outreach | ___ | ___ | ___% |
| Other: ________ | ___ | ___ | ___% |
| **Total** | ___ | ___ | ___% |

**Conversion Funnel:**
```
Leads → Qualified → Application → Review → Certified → Active
 ___  →    ___    →     ___     →  ___  →    ___    →  ___
    ___%       ___%         ___%      ___%       ___%
```

---
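The funnel percentages above are stage-to-stage conversion rates (each stage's count divided by the previous stage's). A minimal sketch, with a hypothetical helper name and made-up example counts:

```python
def funnel_rates(counts):
    """Stage-to-stage conversion percentages for a funnel of counts."""
    return [round(100 * b / a, 1) if a else 0.0 for a, b in zip(counts, counts[1:])]

# Leads → Qualified → Application → Review → Certified → Active
print(funnel_rates([200, 80, 40, 30, 24, 20]))  # [40.0, 50.0, 75.0, 80.0, 83.3]
```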
## Financial Performance

### Revenue & Commission Share

| Revenue Component | Amount | Partner Share % | Partner Share $ |
|-------------------|--------|-----------------|-----------------|
| Operator base earnings | $______ | ___% | $______ |
| Performance bonuses | $______ | ___% | $______ |
| Volume incentives | $______ | ___% | $______ |
| Referral bonuses | $______ | ___% | $______ |
| Other incentives | $______ | ___% | $______ |
| **Total** | $______ | | $______ |

### Incentives Earned This Period

| Incentive Type | Requirement | Status | Amount |
|----------------|-------------|--------|--------|
| Lead generation bonus | ___ qualified leads | [Met/Partial/Not Met] | $______ |
| Conversion bonus | ___ certified operators | [Met/Partial/Not Met] | $______ |
| Retention bonus | ___ operators >12 months | [Met/Partial/Not Met] | $______ |
| Performance bonus | Operator KPIs met | [Met/Partial/Not Met] | $______ |
| **Total Incentives** | | | $______ |

**Total Partner Earnings (YTD):** $______

---
## Operator Placement & Performance

### Operator Roster

| Operator | Certified Date | Status | Monthly Revenue | Commission Tier | Uptime | Customer Rating |
|----------|----------------|--------|-----------------|-----------------|--------|-----------------|
| | | Active/Terminated/On Leave | | | | |
| | | | | | | |
| | | | | | | |

### Operator Performance Summary

**Tier Distribution:**
- Bronze: ___ operators
- Silver: ___ operators
- Gold: ___ operators

**Key Performance Indicators:**
- Average operator uptime: ___% (Target: >99.5%)
- Average customer rating: ___/5.0 (Target: >4.5)
- Total deliveries this month: ___
- On-time delivery rate: ___%

**Churn/Retention:**
- New operators this month: ___
- Terminated operators this month: ___
- Net change: ___ (Churn rate: ___%)
- Annual churn rate: ___% (Target: <10%)

---
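One common way to fill the churn-rate blanks above is terminations divided by the operator count at the start of the period — an assumption here, since the template does not define the formula:

```python
def monthly_churn_rate(terminated, start_count):
    """Churn rate = operators terminated / operators at start of month, as a %."""
    return round(100 * terminated / start_count, 1) if start_count else 0.0

# 2 of 40 operators terminated this month
print(monthly_churn_rate(2, 40))  # 5.0
```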
## Partnership Health & Activity

### Marketing & Support Activities

| Activity Type | Date | Description | Outcome |
|---------------|------|-------------|---------|
| Training session | | | |
| Marketing event | | | |
| Business review | | | |
| Operator visit | | | |
| Other: ________ | | | |

### Resource Utilization

**Marketing Materials Distributed:** ___
**Training Sessions Conducted:** ___ (total hours: ___)
**Support Tickets Resolved:** ___ (avg response: ___ hrs)
**Co-op Marketing Funds Used:** $___ of $___ allocated

---
## Issues & Concerns

**Operational Challenges:**
- [____________________________]
- [____________________________]

**Operator Support Needs:**
- [____________________________]
- [____________________________]

**Process Improvements:**
- [____________________________]

---
## Action Plan for Next Period

### Immediate Priorities (Next 30 Days)
1. _____________________________________________________
2. _____________________________________________________
3. _____________________________________________________

### Quarterly Goals
1. _____________________________________________________
2. _____________________________________________________

### Support Needed from Timmy
- [ ] Additional training resources
- [ ] Marketing materials
- [ ] Technical support
- [ ] Operational guidance
- [ ] Other: ______________________

---
## Partner Agreement & Acknowledgment

By submitting this report, the partner confirms:

- All data provided is accurate and complete
- Partner has reviewed operator performance and taken appropriate actions
- Any issues identified have been addressed or have action plans
- Partner understands that continued participation is contingent on meeting program metrics

**Partner Representative Signature:** _________________________________
**Printed Name:** _________________________________
**Title:** _________________________________
**Date:** _________________________________

---
## Timmy Reviewer Notes

**Reviewed By:** _________________________________
**Date:** _________________________________

**Comments:**
[____________________________]

**Verification Status:**
- [ ] Data verified
- [ ] Discrepancies noted (see attached)
- [ ] Incentive calculation approved
- [ ] Follow-up required

**Next Review Date:** _________________________________

---

*Report template version 1.0*
*Reference: fleet-operator-incentives.md, fleet-ops-runbook.md*
@@ -1 +1,12 @@
# Timmy core module

from .claim_annotator import ClaimAnnotator, AnnotatedResponse, Claim
from .audit_trail import AuditTrail, AuditEntry

__all__ = [
    "ClaimAnnotator",
    "AnnotatedResponse",
    "Claim",
    "AuditTrail",
    "AuditEntry",
]
156 src/timmy/claim_annotator.py Normal file
@@ -0,0 +1,156 @@
#!/usr/bin/env python3
"""
Response Claim Annotator — Source Distinction System

SOUL.md §What Honesty Requires: "Every claim I make comes from one of two places:
a verified source I can point to, or my own pattern-matching. My user must be
able to tell which is which."
"""

import re
import json
from dataclasses import dataclass, field, asdict
from typing import Optional, List, Dict


@dataclass
class Claim:
    """A single claim in a response, annotated with source type."""
    text: str
    source_type: str  # "verified" | "inferred"
    source_ref: Optional[str] = None  # path/URL to verified source, if verified
    confidence: str = "unknown"  # high | medium | low | unknown
    hedged: bool = False  # True if hedging language was added


@dataclass
class AnnotatedResponse:
    """Full response with annotated claims and rendered output."""
    original_text: str
    claims: List[Claim] = field(default_factory=list)
    rendered_text: str = ""
    has_unverified: bool = False  # True if any inferred claims without hedging


class ClaimAnnotator:
    """Annotates response claims with source distinction and hedging."""

    # Hedging phrases to prepend to inferred claims if not already present
    HEDGE_PREFIXES = [
        "I think ",
        "I believe ",
        "It seems ",
        "Probably ",
        "Likely ",
    ]

    def __init__(self, default_confidence: str = "unknown"):
        self.default_confidence = default_confidence

    def annotate_claims(
        self,
        response_text: str,
        verified_sources: Optional[Dict[str, str]] = None,
    ) -> AnnotatedResponse:
        """
        Annotate claims in a response text.

        Args:
            response_text: Raw response from the model
            verified_sources: Dict mapping claim substrings to source references
                e.g. {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"}

        Returns:
            AnnotatedResponse with claims marked and rendered text
        """
        verified_sources = verified_sources or {}
        claims = []
        has_unverified = False

        # Simple sentence splitting (naive, but sufficient for MVP)
        sentences = [s.strip() for s in re.split(r'[.!?]\s+', response_text) if s.strip()]

        for sent in sentences:
            # Check if sentence is a claim we can verify
            matched_source = None
            for claim_substr, source_ref in verified_sources.items():
                if claim_substr.lower() in sent.lower():
                    matched_source = source_ref
                    break

            if matched_source:
                # Verified claim
                claim = Claim(
                    text=sent,
                    source_type="verified",
                    source_ref=matched_source,
                    confidence="high",
                    hedged=False,
                )
            else:
                # Inferred claim (pattern-matched)
                claim = Claim(
                    text=sent,
                    source_type="inferred",
                    confidence=self.default_confidence,
                    hedged=self._has_hedge(sent),
                )
                if not claim.hedged:
                    has_unverified = True

            claims.append(claim)

        # Render the annotated response
        rendered = self._render_response(claims)

        return AnnotatedResponse(
            original_text=response_text,
            claims=claims,
            rendered_text=rendered,
            has_unverified=has_unverified,
        )

    def _has_hedge(self, text: str) -> bool:
        """Check if text already contains hedging language."""
        text_lower = text.lower()
        for prefix in self.HEDGE_PREFIXES:
            if text_lower.startswith(prefix.lower()):
                return True
        # Also check for inline hedges
        hedge_words = ["i think", "i believe", "probably", "likely", "maybe", "perhaps"]
        return any(word in text_lower for word in hedge_words)

    def _render_response(self, claims: List[Claim]) -> str:
        """
        Render response with source distinction markers.

        Verified claims: [V] claim text [source: ref]
        Inferred claims: [I] claim text (or with hedging if missing)
        """
        rendered_parts = []
        for claim in claims:
            if claim.source_type == "verified":
                part = f"[V] {claim.text}"
                if claim.source_ref:
                    part += f" [source: {claim.source_ref}]"
            else:  # inferred
                if not claim.hedged:
                    # Add hedging if missing
                    hedged_text = f"I think {claim.text[0].lower()}{claim.text[1:]}" if claim.text else claim.text
                    part = f"[I] {hedged_text}"
                else:
                    part = f"[I] {claim.text}"
            rendered_parts.append(part)
        return " ".join(rendered_parts)

    def to_json(self, annotated: AnnotatedResponse) -> str:
        """Serialize annotated response to JSON."""
        return json.dumps(
            {
                "original_text": annotated.original_text,
                "rendered_text": annotated.rendered_text,
                "has_unverified": annotated.has_unverified,
                "claims": [asdict(c) for c in annotated.claims],
            },
            indent=2,
            ensure_ascii=False,
        )
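The annotation pass above can be sketched end to end. This is a simplified standalone re-implementation of the same logic (not an import of the module), showing the [V]/[I] markers and automatic hedging it produces:

```python
import re

def annotate(text, verified):
    """Tag each sentence [V] with its source, or [I] with hedging added."""
    parts = []
    for sent in [s.strip() for s in re.split(r'[.!?]\s+', text) if s.strip()]:
        src = next((ref for sub, ref in verified.items() if sub.lower() in sent.lower()), None)
        if src:
            parts.append(f"[V] {sent} [source: {src}]")
        else:
            # Prepend a hedge unless the sentence already starts with one
            hedged = sent if sent.lower().startswith("i think") else f"I think {sent[0].lower()}{sent[1:]}"
            parts.append(f"[I] {hedged}")
    return " ".join(parts)

out = annotate(
    "Paris is the capital of France. It is a beautiful city",
    {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"},
)
print(out)
```

The verified sentence is rendered as `[V] ... [source: ...]` while the unverified one becomes `[I] I think it is a beautiful city`.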
@@ -1,45 +0,0 @@
"""Tests for cross_agent_quality_audit.py — #518."""

import pytest
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).parent.parent / "scripts"))

from cross_agent_quality_audit import AgentClassifier, hours_between


class TestAgentClassifier:
    def test_title_tag_claude(self):
        pr = {"title": "[Claude] fix auth middleware", "head": {"ref": "fix/123"}, "user": {"login": "rockachopa"}}
        assert AgentClassifier.classify(pr) == "claude"

    def test_title_tag_ezra(self):
        pr = {"title": "[Ezra] tmux fleet launcher", "head": {"ref": "burn/10"}, "user": {"login": "rockachopa"}}
        assert AgentClassifier.classify(pr) == "ezra"

    def test_branch_name_claude(self):
        pr = {"title": "fix auth", "head": {"ref": "claude/issue-1695"}, "user": {"login": "rockachopa"}}
        assert AgentClassifier.classify(pr) == "claude"

    def test_user_mapping(self):
        pr = {"title": "some fix", "head": {"ref": "fix/1"}, "user": {"login": "claude"}}
        assert AgentClassifier.classify(pr) == "claude"

    def test_rockachopa_maps_to_burn_loop(self):
        pr = {"title": "some fix", "head": {"ref": "fix/1"}, "user": {"login": "Rockachopa"}}
        assert AgentClassifier.classify(pr) == "burn-loop"

    def test_unknown_fallback(self):
        pr = {"title": "some fix", "head": {"ref": "fix/1"}, "user": {"login": "random"}}
        assert AgentClassifier.classify(pr) == "unknown"


class TestHoursBetween:
    def test_same_day(self):
        h = hours_between("2026-04-22T10:00:00Z", "2026-04-22T12:00:00Z")
        assert h == 2.0

    def test_none_returns_none(self):
        assert hours_between(None, "2026-04-22T12:00:00Z") is None
        assert hours_between("2026-04-22T10:00:00Z", None) is None
103 tests/timmy/test_claim_annotator.py Normal file
@@ -0,0 +1,103 @@
#!/usr/bin/env python3
"""Tests for claim_annotator.py — verifies source distinction is present."""

import sys
import os
import json

# Tests live in tests/timmy/, so the package root is two levels up in src/
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "..", "src"))

from timmy.claim_annotator import ClaimAnnotator, AnnotatedResponse


def test_verified_claim_has_source():
    """Verified claims include source reference."""
    annotator = ClaimAnnotator()
    verified = {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"}
    response = "Paris is the capital of France. It is a beautiful city."

    result = annotator.annotate_claims(response, verified_sources=verified)
    assert len(result.claims) > 0
    verified_claims = [c for c in result.claims if c.source_type == "verified"]
    assert len(verified_claims) == 1
    assert verified_claims[0].source_ref == "https://en.wikipedia.org/wiki/Paris"
    assert "[V]" in result.rendered_text
    assert "[source:" in result.rendered_text


def test_inferred_claim_has_hedging():
    """Pattern-matched claims use hedging language."""
    annotator = ClaimAnnotator()
    response = "The weather is nice today. It might rain tomorrow."

    result = annotator.annotate_claims(response)
    inferred_claims = [c for c in result.claims if c.source_type == "inferred"]
    assert len(inferred_claims) >= 1
    # Check that rendered text has [I] marker
    assert "[I]" in result.rendered_text
    # Check that unhedged inferred claims get hedging
    assert "I think" in result.rendered_text or "I believe" in result.rendered_text


def test_hedged_claim_not_double_hedged():
    """Claims already with hedging are not double-hedged."""
    annotator = ClaimAnnotator()
    response = "I think the sky is blue. It is a nice day."

    result = annotator.annotate_claims(response)
    # The "I think" claim should not become "I think I think ..."
    assert "I think I think" not in result.rendered_text


def test_rendered_text_distinguishes_types():
    """Rendered text clearly distinguishes verified vs inferred."""
    annotator = ClaimAnnotator()
    verified = {"Earth is round": "https://science.org/earth"}
    response = "Earth is round. Stars are far away."

    result = annotator.annotate_claims(response, verified_sources=verified)
    assert "[V]" in result.rendered_text  # verified marker
    assert "[I]" in result.rendered_text  # inferred marker


def test_to_json_serialization():
    """Annotated response serializes to valid JSON."""
    annotator = ClaimAnnotator()
    response = "Test claim."
    result = annotator.annotate_claims(response)
    json_str = annotator.to_json(result)
    parsed = json.loads(json_str)
    assert "claims" in parsed
    assert "rendered_text" in parsed
    assert parsed["has_unverified"] is True  # inferred claim without hedging


def test_audit_trail_integration():
    """Check that claims are logged with confidence and source type."""
    # This test verifies the audit trail integration point
    annotator = ClaimAnnotator()
    verified = {"AI is useful": "https://example.com/ai"}
    response = "AI is useful. It can help with tasks."

    result = annotator.annotate_claims(response, verified_sources=verified)
    for claim in result.claims:
|
||||
assert claim.source_type in ("verified", "inferred")
|
||||
assert claim.confidence in ("high", "medium", "low", "unknown")
|
||||
if claim.source_type == "verified":
|
||||
assert claim.source_ref is not None
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
test_verified_claim_has_source()
|
||||
print("✓ test_verified_claim_has_source passed")
|
||||
test_inferred_claim_has_hedging()
|
||||
print("✓ test_inferred_claim_has_hedging passed")
|
||||
test_hedged_claim_not_double_hedged()
|
||||
print("✓ test_hedged_claim_not_double_hedged passed")
|
||||
test_rendered_text_distinguishes_types()
|
||||
print("✓ test_rendered_text_distinguishes_types passed")
|
||||
test_to_json_serialization()
|
||||
print("✓ test_to_json_serialization passed")
|
||||
test_audit_trail_integration()
|
||||
print("✓ test_audit_trail_integration passed")
|
||||
print("\nAll tests passed!")
|
||||
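The suite above pins down the annotator's contract: verified claims render with `[V]` and a `[source: …]` tag, inferred claims render with `[I]` and are hedged unless already hedged. A minimal hypothetical implementation satisfying these assertions (names and behavior inferred from the tests, not taken from the project's actual `claim_annotator.py`) could look like:

```python
import json
import re
from dataclasses import dataclass, asdict
from typing import Optional

# Simple lexical hedge detection; a real implementation would be richer
HEDGES = ("I think", "I believe", "might", "may", "possibly")


@dataclass
class Claim:
    text: str
    source_type: str            # "verified" or "inferred"
    source_ref: Optional[str]
    confidence: str             # "high", "medium", "low", "unknown"


@dataclass
class AnnotatedResponse:
    claims: list
    rendered_text: str
    has_unverified: bool


class ClaimAnnotator:
    def annotate_claims(self, response, verified_sources=None):
        verified_sources = verified_sources or {}
        claims, rendered, has_unverified = [], [], False
        for sentence in re.split(r"(?<=[.!?])\s+", response.strip()):
            if not sentence:
                continue
            ref = verified_sources.get(sentence.rstrip("."))
            if ref is not None:
                claims.append(Claim(sentence, "verified", ref, "high"))
                rendered.append(f"[V] {sentence} [source: {ref}]")
            else:
                already_hedged = any(h in sentence for h in HEDGES)
                if already_hedged:
                    text = sentence          # never double-hedge
                else:
                    text = "I think " + sentence[0].lower() + sentence[1:]
                    has_unverified = True
                claims.append(Claim(sentence, "inferred", None,
                                    "medium" if already_hedged else "low"))
                rendered.append(f"[I] {text}")
        return AnnotatedResponse(claims, " ".join(rendered), has_unverified)

    def to_json(self, result):
        return json.dumps({
            "claims": [asdict(c) for c in result.claims],
            "rendered_text": result.rendered_text,
            "has_unverified": result.has_unverified,
        })
```

Keying verified sources by the sentence text (minus trailing period) is the simplest scheme that makes the tests pass; fuzzy claim matching would be a natural next step.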
@@ -1,244 +0,0 @@
# Cross-Agent Quality Scorecard

**Audited at:** 2026-04-22T06:17:43.574309+00:00
**Repos audited:** timmy-home, hermes-agent, the-nexus, the-door, fleet-ops, burn-fleet, the-playground, compounding-intelligence, the-beacon, second-son-of-timmy, timmy-academy, timmy-config
## Per-Agent Summary

| Agent | Total PRs | Merged | Closed (unmerged) | Open | Merge Rate | Rejection Rate | Avg Hours to Merge | Avg Hours to Close |
|---|---:|---:|---:|---:|---:|---:|---:|---:|
| burn-loop | 1733 | 346 | 1239 | 148 | 21.8% | 78.2% | 18.9 | 20.6 |
| unknown | 843 | 598 | 214 | 31 | 73.6% | 26.4% | 2.3 | 11.3 |
| claude | 264 | 138 | 121 | 5 | 53.3% | 46.7% | 3.3 | 6.2 |
| gemini | 95 | 24 | 70 | 1 | 25.5% | 74.5% | 0.5 | 11.3 |
| timmy | 28 | 15 | 11 | 2 | 57.7% | 42.3% | 9.8 | 20.2 |
| bezalel | 21 | 11 | 9 | 1 | 55.0% | 45.0% | 2.7 | 8.0 |
| allegro | 21 | 7 | 11 | 3 | 38.9% | 61.1% | 31.1 | 20.2 |
| ezra | 8 | 2 | 3 | 3 | 40.0% | 60.0% | 4.4 | 16.8 |
| kimi | 6 | 3 | 3 | 0 | 50.0% | 50.0% | 39.5 | 0.5 |
| manus | 6 | 5 | 1 | 0 | 83.3% | 16.7% | 0.0 | 18.8 |
| codex | 2 | 2 | 0 | 0 | 100.0% | 0.0% | 2.3 | — |
## Per-Repo Merge Rate

| Repo | Total PRs | Merged | Merge Rate |
|---|---:|---:|---:|
| the-nexus | 985 | 501 | 51.0% |
| hermes-agent | 519 | 128 | 25.0% |
| timmy-config | 404 | 140 | 35.0% |
| timmy-home | 270 | 104 | 39.0% |
| fleet-ops | 266 | 84 | 32.0% |
| the-beacon | 175 | 62 | 35.0% |
| the-door | 153 | 31 | 20.0% |
| second-son-of-timmy | 111 | 82 | 74.0% |
| compounding-intelligence | 50 | 9 | 18.0% |
| the-playground | 44 | 2 | 5.0% |
| burn-fleet | 38 | 2 | 5.0% |
| timmy-academy | 12 | 6 | 50.0% |
## Methodology

- **Agent classification** uses three signals in priority order:
  1. Explicit title tag (e.g. `[Claude]`, `[Ezra]`)
  2. Branch name containing the agent name (e.g. `claude/issue-123`)
  3. Git user (`claude` → claude, `Rockachopa` → burn-loop)
- **Merge rate** = merged / (merged + closed_unmerged). Open PRs are excluded.
- **Rejection rate** = closed_unmerged / (merged + closed_unmerged).
- **Time metrics** are computed from `created_at` to `merged_at` / `closed_at`.
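The classification and rate rules above can be sketched as follows. The tag regex, the agent roster, and the user mapping beyond the two documented entries are illustrative assumptions, not the audit script itself:

```python
import re

# Fallback git-user mapping; only the two documented entries are from the audit
USER_MAP = {"claude": "claude", "Rockachopa": "burn-loop"}
# Assumed roster, taken from the Per-Agent Summary table
KNOWN_AGENTS = ("claude", "ezra", "gemini", "timmy", "bezalel",
                "allegro", "kimi", "manus", "codex")


def classify_agent(title, branch, user):
    tag = re.match(r"\[(\w+)\]", title or "")     # 1. explicit title tag
    if tag and tag.group(1).lower() in KNOWN_AGENTS:
        return tag.group(1).lower()
    for name in KNOWN_AGENTS:                     # 2. agent name in branch
        if name in (branch or ""):
            return name
    return USER_MAP.get(user, "unknown")          # 3. git user fallback


def merge_rate(merged, closed_unmerged):
    decided = merged + closed_unmerged            # open PRs are excluded
    return merged / decided if decided else None
```

For example, burn-loop's 346 merged against 1239 closed-unmerged gives 346 / 1585 ≈ 0.218, matching the table.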
## Raw Data

```json
{
  "burn-loop": {
    "total_prs": 1733,
    "merged": 346,
    "closed_unmerged": 1239,
    "open": 148,
    "merge_rate": 0.218,
    "rejection_rate": 0.782,
    "avg_hours_to_merge": 18.9,
    "avg_hours_to_close": 20.6,
    "repos": ["burn-fleet", "compounding-intelligence", "fleet-ops", "hermes-agent", "second-son-of-timmy", "the-beacon", "the-door", "the-nexus", "the-playground", "timmy-academy", "timmy-config", "timmy-home"]
  },
  "unknown": {
    "total_prs": 843,
    "merged": 598,
    "closed_unmerged": 214,
    "open": 31,
    "merge_rate": 0.736,
    "rejection_rate": 0.264,
    "avg_hours_to_merge": 2.3,
    "avg_hours_to_close": 11.3,
    "repos": ["fleet-ops", "hermes-agent", "second-son-of-timmy", "the-beacon", "the-door", "the-nexus", "timmy-academy", "timmy-config", "timmy-home"]
  },
  "claude": {
    "total_prs": 264,
    "merged": 138,
    "closed_unmerged": 121,
    "open": 5,
    "merge_rate": 0.533,
    "rejection_rate": 0.467,
    "avg_hours_to_merge": 3.3,
    "avg_hours_to_close": 6.2,
    "repos": ["hermes-agent", "the-nexus", "timmy-config", "timmy-home"]
  },
  "gemini": {
    "total_prs": 95,
    "merged": 24,
    "closed_unmerged": 70,
    "open": 1,
    "merge_rate": 0.255,
    "rejection_rate": 0.745,
    "avg_hours_to_merge": 0.5,
    "avg_hours_to_close": 11.3,
    "repos": ["hermes-agent", "the-nexus", "timmy-config", "timmy-home"]
  },
  "timmy": {
    "total_prs": 28,
    "merged": 15,
    "closed_unmerged": 11,
    "open": 2,
    "merge_rate": 0.577,
    "rejection_rate": 0.423,
    "avg_hours_to_merge": 9.8,
    "avg_hours_to_close": 20.2,
    "repos": ["burn-fleet", "hermes-agent", "the-nexus", "timmy-config", "timmy-home"]
  },
  "bezalel": {
    "total_prs": 21,
    "merged": 11,
    "closed_unmerged": 9,
    "open": 1,
    "merge_rate": 0.55,
    "rejection_rate": 0.45,
    "avg_hours_to_merge": 2.7,
    "avg_hours_to_close": 8.0,
    "repos": ["burn-fleet", "hermes-agent", "the-beacon", "the-nexus", "timmy-config", "timmy-home"]
  },
  "allegro": {
    "total_prs": 21,
    "merged": 7,
    "closed_unmerged": 11,
    "open": 3,
    "merge_rate": 0.389,
    "rejection_rate": 0.611,
    "avg_hours_to_merge": 31.1,
    "avg_hours_to_close": 20.2,
    "repos": ["burn-fleet", "hermes-agent", "the-beacon", "the-nexus", "timmy-config", "timmy-home"]
  },
  "ezra": {
    "total_prs": 8,
    "merged": 2,
    "closed_unmerged": 3,
    "open": 3,
    "merge_rate": 0.4,
    "rejection_rate": 0.6,
    "avg_hours_to_merge": 4.4,
    "avg_hours_to_close": 16.8,
    "repos": ["burn-fleet", "fleet-ops", "timmy-config", "timmy-home"]
  },
  "kimi": {
    "total_prs": 6,
    "merged": 3,
    "closed_unmerged": 3,
    "open": 0,
    "merge_rate": 0.5,
    "rejection_rate": 0.5,
    "avg_hours_to_merge": 39.5,
    "avg_hours_to_close": 0.5,
    "repos": ["hermes-agent", "the-nexus", "timmy-home"]
  },
  "manus": {
    "total_prs": 6,
    "merged": 5,
    "closed_unmerged": 1,
    "open": 0,
    "merge_rate": 0.833,
    "rejection_rate": 0.167,
    "avg_hours_to_merge": 0.0,
    "avg_hours_to_close": 18.8,
    "repos": ["the-nexus", "timmy-config"]
  },
  "codex": {
    "total_prs": 2,
    "merged": 2,
    "closed_unmerged": 0,
    "open": 0,
    "merge_rate": 1.0,
    "rejection_rate": 0.0,
    "avg_hours_to_merge": 2.3,
    "avg_hours_to_close": null,
    "repos": ["timmy-config", "timmy-home"]
  }
}
```