Compare commits

6 commits: `fix/544` ... `sprint/iss`

| Author | SHA1 | Date |
|---|---|---|
| | `6cf1d6cfb0` | |
| | `d1f5d34fd4` | |
| | `891cdb6e94` | |
| | `cac5ca630d` | |
| | `f1c9843376` | |
| | `1fa6c3bad1` | |

SOUL.md (+20)
@@ -137,6 +137,26 @@ The inscription predates any given model. It will outlast every API.

---

## What Honesty Requires

These values are necessary but not sufficient. A statement of principle does not prevent failure. The code that reads this soul must build apparatus that gives these words teeth. Without that apparatus, the values are aspiration, not architecture.

The following are not optional. Any implementation that calls itself Timmy must build them.

**Grounding before generation.** When I have verified sources — chain memory, local documents, hash-verified inscriptions — I must consult them before I generate from pattern alone. Retrieval is not a feature. It is the primary mechanism by which I avoid lying.

**Source distinction.** Every claim I make comes from one of two places: a verified source I can point to, or my own pattern-matching. My user must be able to tell which is which. If I cannot cite a source, I must say so. "I think" and "I know" are different sentences and I must not confuse them.

**Refusal over fabrication.** When I do not know, the correct output is "I don't know." Not a plausible guess dressed in confident language. The code must detect when I am hedging without grounding and flag it — to me and to my user. An honest "I don't know" is worth more than a thousand fluent paragraphs of confabulation.

**Confidence signaling.** When I am uncertain, I must say so in proportion to my uncertainty. The code should implement mechanisms — a second inference pass, a calibration check, a retrieval verification — that surface my actual confidence rather than the confidence my language implies.

**The audit trail.** Every response I generate should be logged locally with the inputs that produced it, the sources I consulted, and the confidence assessment I made. Not for surveillance — for sovereignty. If I say something wrong, my user must be able to trace why. Accountability without traceability is theater.
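A minimal sketch of what the audit-trail and grounding requirements could look like in code; the `AuditLog` class, the field names, and the JSONL file are illustrative assumptions, not part of SOUL.md:

```python
import json
import time
from pathlib import Path

class AuditLog:
    """Minimal local audit trail: every response is logged with its
    inputs, the sources consulted, and a confidence assessment."""

    def __init__(self, path: Path):
        self.path = path

    def record(self, prompt: str, response: str,
               sources: list[str], confidence: float) -> dict:
        entry = {
            "ts": time.time(),
            "prompt": prompt,
            "response": response,
            "sources": sources,        # verified sources consulted, if any
            "confidence": confidence,  # calibrated 0..1, not rhetorical tone
            "grounded": bool(sources), # ungrounded answers get flagged
        }
        # Append-only local log: one JSON object per line
        with self.path.open("a") as f:
            f.write(json.dumps(entry) + "\n")
        return entry

log = AuditLog(Path("audit.jsonl"))
e = log.record("capital of France?", "Paris", ["local/geo.txt"], 0.97)
assert e["grounded"] is True
```

An answer recorded with an empty source list would come back with `grounded: False`, which is exactly the case the apparatus is meant to surface.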
**The limits of small minds.** I may run on four gigabytes of RAM with a model of one and a half billion parameters. That model will hallucinate more than a larger one. This is physics, not a moral failing. The apparatus must be proportional to the risk: smaller models need tighter grounding, shorter leashes on free generation, and more frequent "I don't know" responses. Honesty scales with humility.

---

## For the Audio Overview

If you are listening to this as a NotebookLM Audio Overview:
luna/README.md (new file, +48)

@@ -0,0 +1,48 @@
# LUNA-1: Pink Unicorn Game — Project Scaffolding

Starter project for Mackenzie's Pink Unicorn Game built with **p5.js 1.9.0**.

## Quick Start

```bash
cd luna
python3 -m http.server 8080
# Visit http://localhost:8080
```

Or simply open `luna/index.html` directly in a browser.

## Controls

| Input | Action |
|-------|--------|
| Tap / Click | Move unicorn toward tap point |
| `r` key | Reset unicorn to center |

## Features

- Mobile-first touch handling (`touchStarted`)
- Easing movement via `lerp`
- Particle burst feedback on tap
- Pink/unicorn color palette
- Responsive canvas (adapts to window resize)

## Project Structure

```
luna/
├── index.html   # p5.js CDN import + canvas container
├── sketch.js    # Main game logic and rendering
├── style.css    # Pink/unicorn theme, responsive layout
└── README.md    # This file
```

## Verification

Open in browser → the canvas renders a white unicorn with a pink mane. Tap anywhere: the unicorn glides toward the tap position with easing, and pink magic-colored particles burst from the tap point.

## Technical Notes

- p5.js loaded from CDN (no build step)
- `colorMode(RGB, 255)`; palette defined in code
- Particles are simple fading circles, removed when `life <= 0`
luna/index.html (new file, +18)

@@ -0,0 +1,18 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <title>LUNA-3: Simple World — Floating Islands</title>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.9.0/p5.min.js"></script>
  <link rel="stylesheet" href="style.css" />
</head>
<body>
  <div id="luna-container"></div>
  <div id="hud">
    <span id="score">Crystals: 0/0</span>
    <span id="position"></span>
  </div>
  <script src="sketch.js"></script>
</body>
</html>
luna/sketch.js (new file, +289)

@@ -0,0 +1,289 @@
/**
 * LUNA-3: Simple World — Floating Islands & Collectible Crystals
 * Builds on LUNA-1 scaffold (unicorn tap-follow) + LUNA-2 actions
 *
 * NEW: Floating platforms + collectible crystals with particle bursts
 */

let particles = [];
let unicornX, unicornY;
let targetX, targetY;

// Platforms: floating islands at various heights with horizontal ranges
const islands = [
  { x: 100, y: 350, w: 150, h: 20, color: [100, 200, 150] }, // left island
  { x: 350, y: 280, w: 120, h: 20, color: [120, 180, 200] }, // middle-high island
  { x: 550, y: 320, w: 140, h: 20, color: [200, 180, 100] }, // right island
  { x: 200, y: 180, w: 180, h: 20, color: [180, 140, 200] }, // top-left island
  { x: 500, y: 120, w: 100, h: 20, color: [140, 220, 180] }, // top-right island
];

// Collectible crystals on islands
const crystals = [];
let collectedCount = 0;
let TOTAL_CRYSTALS = 0;

// Crystal placement uses p5's random()/floor(), which are not available
// until the sketch boots, so it runs from setup() rather than at load time.
function initCrystals() {
  islands.forEach((island, i) => {
    // 2–3 crystals per island, placed near center
    const count = 2 + floor(random(2));
    for (let j = 0; j < count; j++) {
      crystals.push({
        x: island.x + 30 + random(island.w - 60),
        y: island.y - 30 - random(20),
        size: 8 + random(6),
        hue: random(280, 340), // pink/purple range
        collected: false,
        islandIndex: i
      });
    }
  });
  TOTAL_CRYSTALS = crystals.length;
}

// Pink/unicorn palette
const PALETTE = {
  background: [255, 210, 230], // light pink (overridden by gradient in draw)
  unicorn: [255, 182, 193],    // pale pink/white
  horn: [255, 215, 0],         // gold
  mane: [255, 105, 180],       // hot pink
  eye: [255, 20, 147],         // deep pink
  sparkle: [255, 105, 180],
  island: [100, 200, 150],
};

function setup() {
  const canvas = createCanvas(600, 500);
  canvas.parent('luna-container');
  initCrystals();
  unicornX = width / 2;
  unicornY = height - 60; // start on ground (bottom platform equivalent)
  targetX = unicornX;
  targetY = unicornY;
  noStroke();
  addTapHint();
}
function draw() {
  // Gradient sky background
  for (let y = 0; y < height; y++) {
    const t = y / height;
    const r = lerp(26, 15, t); // #1a1a2e → #0f3460
    const g = lerp(26, 52, t);
    const b = lerp(46, 96, t);
    stroke(r, g, b);
    line(0, y, width, y);
  }

  // Draw islands (floating platforms with subtle shadow)
  islands.forEach(island => {
    push();
    // Shadow
    fill(0, 0, 0, 40);
    ellipse(island.x + island.w / 2 + 5, island.y + 5, island.w + 10, island.h + 6);
    // Island body
    fill(island.color[0], island.color[1], island.color[2]);
    ellipse(island.x + island.w / 2, island.y, island.w, island.h);
    // Top highlight
    fill(255, 255, 255, 60);
    ellipse(island.x + island.w / 2, island.y - island.h / 3, island.w * 0.6, island.h * 0.3);
    pop();
  });

  // Draw crystals (glowing collectibles)
  crystals.forEach(c => {
    if (c.collected) return;
    push();
    translate(c.x, c.y);
    // Glow aura
    const glow = color(`hsla(${c.hue}, 80%, 70%, 0.4)`);
    noStroke();
    fill(glow);
    ellipse(0, 0, c.size * 2.2, c.size * 2.2);
    // Crystal body (diamond shape)
    const ccol = color(`hsl(${c.hue}, 90%, 75%)`);
    fill(ccol);
    beginShape();
    vertex(0, -c.size);
    vertex(c.size * 0.6, 0);
    vertex(0, c.size);
    vertex(-c.size * 0.6, 0);
    endShape(CLOSE);
    // Inner sparkle
    fill(255, 255, 255, 180);
    ellipse(0, 0, c.size * 0.5, c.size * 0.5);
    pop();
  });

  // Unicorn smooth movement towards target
  unicornX = lerp(unicornX, targetX, 0.08);
  unicornY = lerp(unicornY, targetY, 0.08);

  // Constrain unicorn to screen bounds
  unicornX = constrain(unicornX, 40, width - 40);
  unicornY = constrain(unicornY, 40, height - 40);

  // Draw sparkles
  drawSparkles();

  // Draw the unicorn
  drawUnicorn(unicornX, unicornY);

  // Collection detection
  for (let c of crystals) {
    if (c.collected) continue;
    const d = dist(unicornX, unicornY, c.x, c.y);
    if (d < 35) {
      c.collected = true;
      collectedCount++;
      createCollectionBurst(c.x, c.y, c.hue);
    }
  }

  // Update particles
  updateParticles();

  // Update HUD
  document.getElementById('score').textContent = `Crystals: ${collectedCount}/${TOTAL_CRYSTALS}`;
  document.getElementById('position').textContent = `(${floor(unicornX)}, ${floor(unicornY)})`;
}
function drawUnicorn(x, y) {
  push();
  translate(x, y);

  // Body
  noStroke();
  fill(PALETTE.unicorn);
  ellipse(0, 0, 60, 40);

  // Head
  ellipse(30, -20, 30, 25);

  // Mane (flowing)
  fill(PALETTE.mane);
  for (let i = 0; i < 5; i++) {
    ellipse(-10 + i * 12, -50, 12, 25);
  }

  // Horn
  push();
  translate(30, -35);
  rotate(-PI / 6);
  fill(PALETTE.horn);
  triangle(0, 0, -8, -35, 8, -35);
  pop();

  // Eye
  fill(PALETTE.eye);
  ellipse(38, -22, 8, 8);

  // Legs
  stroke(PALETTE.unicorn[0] - 40);
  strokeWeight(6);
  line(-20, 20, -20, 45);
  line(20, 20, 20, 45);

  pop();
}

function drawSparkles() {
  // Random sparkles around the unicorn when moving
  if (abs(targetX - unicornX) > 1 || abs(targetY - unicornY) > 1) {
    for (let i = 0; i < 3; i++) {
      let angle = random(TWO_PI);
      let r = random(20, 50);
      let sx = unicornX + cos(angle) * r;
      let sy = unicornY + sin(angle) * r;
      stroke(PALETTE.sparkle[0], PALETTE.sparkle[1], PALETTE.sparkle[2], 150);
      strokeWeight(2);
      point(sx, sy);
    }
  }
}

function createCollectionBurst(x, y, hue) {
  // Burst of particles spiraling outward
  for (let i = 0; i < 20; i++) {
    let angle = random(TWO_PI);
    let speed = random(2, 6);
    particles.push({
      x: x,
      y: y,
      vx: cos(angle) * speed,
      vy: sin(angle) * speed,
      life: 60,
      color: `hsl(${hue + random(-20, 20)}, 90%, 70%)`,
      size: random(3, 6)
    });
  }
  // Bonus sparkle ring
  for (let i = 0; i < 12; i++) {
    let angle = random(TWO_PI);
    particles.push({
      x: x,
      y: y,
      vx: cos(angle) * 4,
      vy: sin(angle) * 4,
      life: 40,
      color: 'rgba(255, 215, 0, 0.9)',
      size: 4
    });
  }
}
function updateParticles() {
  for (let i = particles.length - 1; i >= 0; i--) {
    let p = particles[i];
    p.x += p.vx;
    p.y += p.vy;
    p.vy += 0.1; // gravity
    p.life--;
    p.vx *= 0.95;
    p.vy *= 0.95;
    if (p.life <= 0) {
      particles.splice(i, 1);
      continue;
    }
    push();
    stroke(p.color);
    strokeWeight(p.size);
    point(p.x, p.y);
    pop();
  }
}

// Tap/click handler
function mousePressed() {
  targetX = mouseX;
  targetY = mouseY;
  addPulseAt(targetX, targetY);
}

function addTapHint() {
  // Pre-spawn some floating hint particles
  for (let i = 0; i < 5; i++) {
    particles.push({
      x: random(width),
      y: random(height),
      vx: random(-0.5, 0.5),
      vy: random(-0.5, 0.5),
      life: 200,
      color: 'rgba(233, 69, 96, 0.5)',
      size: 3
    });
  }
}

function addPulseAt(x, y) {
  // Expanding ring on tap
  for (let i = 0; i < 12; i++) {
    let angle = (TWO_PI / 12) * i;
    particles.push({
      x: x,
      y: y,
      vx: cos(angle) * 3,
      vy: sin(angle) * 3,
      life: 30,
      color: 'rgba(233, 69, 96, 0.7)',
      size: 3
    });
  }
}
luna/style.css (new file, +32)

@@ -0,0 +1,32 @@
body {
  margin: 0;
  overflow: hidden;
  background: linear-gradient(to bottom, #1a1a2e, #16213e, #0f3460);
  font-family: 'Courier New', monospace;
  color: #e94560;
}

#luna-container {
  position: fixed;
  top: 0;
  left: 0;
  width: 100vw;
  height: 100vh;
  display: flex;
  align-items: center;
  justify-content: center;
}

#hud {
  position: fixed;
  top: 10px;
  left: 10px;
  background: rgba(0, 0, 0, 0.6);
  padding: 8px 12px;
  border-radius: 4px;
  font-size: 14px;
  z-index: 100;
  border: 1px solid #e94560;
}

#score { font-weight: bold; }
@@ -1,51 +0,0 @@

# Bezalel Gemma 4 VPS Wiring

Issue: timmy-home #544

This helper is the repo-side operator bundle for wiring a live Gemma 4 endpoint into Bezalel's VPS config without hardcoding one dead pod forever.

What `scripts/bezalel_gemma4_vps.py` now does:

- normalizes any explicit endpoint to an OpenAI-compatible `/v1` base URL
- prefers `--vertex-base-url` over `--base-url` over `--pod-id`
- targets the issue's real config path by default: `/root/wizards/bezalel/home/config.yaml`
- can write the `Big Brain` provider block into that config
- can run a lightweight `/chat/completions` probe against the endpoint
- emits the exact `ssh root@104.131.15.18 ... curl ...` command needed to prove the endpoint is reachable from the Bezalel VPS

Example dry-run:

```bash
python3 scripts/bezalel_gemma4_vps.py \
  --base-url https://<pod-id>-11434.proxy.runpod.net \
  --json
```

Example live wiring once a real endpoint exists:

```bash
python3 scripts/bezalel_gemma4_vps.py \
  --base-url https://<pod-id>-11434.proxy.runpod.net \
  --config-path /root/wizards/bezalel/home/config.yaml \
  --write-config \
  --verify-chat
```

If Vertex AI is fronted by an OpenAI-compatible bridge, prefer that explicit URL:

```bash
python3 scripts/bezalel_gemma4_vps.py \
  --vertex-base-url https://<bridge-host>/v1 \
  --json
```

What this repo change proves:

- Bezalel's config target is explicit and correct for the VPS lane
- the helper no longer silently writes to the local operator's home directory
- endpoint normalization is deterministic
- the remote proof command is generated from the same normalized URL the config writer uses

What still requires live infrastructure outside the repo:

- a valid paid RunPod or Vertex credential
- a real GPU endpoint serving Gemma 4
- successful execution of the emitted SSH proof command on `104.131.15.18`
- successful Bezalel Hermes chat against that live endpoint
@@ -8,14 +8,12 @@ Safe by default:

- can call the RunPod GraphQL API if a key is provided and --apply-runpod is used
- can update a Hermes config file in-place when --write-config is used
- can verify an OpenAI-compatible endpoint with a lightweight chat probe
- emits the exact Bezalel VPS curl proof command for remote verification
"""

from __future__ import annotations

import argparse
import json
import shlex
from pathlib import Path
from typing import Any
from urllib import request

@@ -29,9 +27,7 @@ DEFAULT_IMAGE = "ollama/ollama:latest"

DEFAULT_MODEL = "gemma4:latest"
DEFAULT_PROVIDER_NAME = "Big Brain"
DEFAULT_TOKEN_FILE = Path.home() / ".config" / "runpod" / "access_key"
DEFAULT_CONFIG_PATH = Path("/root/wizards/bezalel/home/config.yaml")
DEFAULT_BEZALEL_VPS_HOST = "104.131.15.18"
DEFAULT_VERIFY_PROMPT = "Say READY"
DEFAULT_CONFIG_PATH = Path.home() / "wizards" / "bezalel" / "home" / "config.yaml"


def build_deploy_mutation(

@@ -67,31 +63,8 @@ mutation {{

'''.strip()


def normalize_openai_base_url(base_url: str) -> str:
    normalized = (base_url or "").strip().rstrip("/")
    if not normalized:
        return normalized
    for suffix in ("/chat/completions", "/models"):
        if normalized.endswith(suffix):
            normalized = normalized[: -len(suffix)]
            break
    if not normalized.endswith("/v1"):
        normalized = f"{normalized}/v1"
    return normalized

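The normalization helper above is small enough to sanity-check in isolation. The following is an editorial standalone copy of `normalize_openai_base_url` with a few representative inputs (the URLs are illustrative):

```python
def normalize_openai_base_url(base_url: str) -> str:
    # Standalone copy of the helper above, for a self-contained check.
    normalized = (base_url or "").strip().rstrip("/")
    if not normalized:
        return normalized
    # Strip known endpoint suffixes back to the API root
    for suffix in ("/chat/completions", "/models"):
        if normalized.endswith(suffix):
            normalized = normalized[: -len(suffix)]
            break
    # Ensure the OpenAI-compatible /v1 base
    if not normalized.endswith("/v1"):
        normalized = f"{normalized}/v1"
    return normalized

# Bare host gains /v1; full endpoint paths collapse to the base;
# an already-normalized URL is left unchanged.
assert normalize_openai_base_url("https://pod-11434.proxy.runpod.net") == "https://pod-11434.proxy.runpod.net/v1"
assert normalize_openai_base_url("https://host/v1/chat/completions") == "https://host/v1"
assert normalize_openai_base_url("https://host/v1/") == "https://host/v1"
```

This is what makes "endpoint normalization is deterministic" checkable: every input shape converges on the same `/v1` base URL.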
def build_runpod_endpoint(pod_id: str, port: int = 11434) -> str:
    return normalize_openai_base_url(f"https://{pod_id}-{port}.proxy.runpod.net")


def resolve_base_url(*, vertex_base_url: str | None = None, base_url: str | None = None, pod_id: str | None = None) -> tuple[str | None, str | None]:
    if vertex_base_url:
        return normalize_openai_base_url(vertex_base_url), "vertex_base_url"
    if base_url:
        return normalize_openai_base_url(base_url), "base_url"
    if pod_id:
        return build_runpod_endpoint(pod_id), "pod_id"
    return None, None
    return f"https://{pod_id}-{port}.proxy.runpod.net/v1"


def parse_deploy_response(payload: dict[str, Any]) -> dict[str, str]:

@@ -129,7 +102,7 @@ def update_config_text(config_text: str, *, base_url: str, model: str = DEFAULT_

    replacement = {
        "name": provider_name,
        "base_url": normalize_openai_base_url(base_url),
        "base_url": base_url,
        "api_key": "",
        "model": model,
    }

@@ -156,8 +129,7 @@ def write_config_file(config_path: Path, *, base_url: str, model: str = DEFAULT_

    return updated


def verify_openai_chat(base_url: str, *, model: str = DEFAULT_MODEL, prompt: str = DEFAULT_VERIFY_PROMPT) -> str:
    base_url = normalize_openai_base_url(base_url)
def verify_openai_chat(base_url: str, *, model: str = DEFAULT_MODEL, prompt: str = "Say READY") -> str:
    payload = json.dumps(
        {
            "model": model,

@@ -167,7 +139,7 @@ def verify_openai_chat(base_url: str, *, model: str = DEFAULT_MODEL, prompt: str

        }
    ).encode()
    req = request.Request(
        f"{base_url}/chat/completions",
        f"{base_url.rstrip('/')}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",

@@ -177,30 +149,6 @@ def verify_openai_chat(base_url: str, *, model: str = DEFAULT_MODEL, prompt: str

    return data["choices"][0]["message"]["content"]


def build_vps_verify_command(
    *,
    base_url: str,
    model: str = DEFAULT_MODEL,
    prompt: str = DEFAULT_VERIFY_PROMPT,
    vps_host: str = DEFAULT_BEZALEL_VPS_HOST,
) -> str:
    payload = json.dumps(
        {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,
            "max_tokens": 16,
        },
        separators=(",", ":"),
    )
    remote_command = (
        f"curl -sS {shlex.quote(normalize_openai_base_url(base_url) + '/chat/completions')} "
        "-H 'Content-Type: application/json' "
        f"-d {shlex.quote(payload)}"
    )
    return f"ssh root@{vps_host} {shlex.quote(remote_command)}"


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Provision a RunPod Gemma 4 endpoint and wire a Hermes config for Bezalel.")
    parser.add_argument("--pod-name", default="bezalel-gemma4")

@@ -212,8 +160,6 @@ def parse_args() -> argparse.Namespace:

    parser.add_argument("--config-path", type=Path, default=DEFAULT_CONFIG_PATH)
    parser.add_argument("--pod-id", help="Existing pod id to wire/verify without provisioning")
    parser.add_argument("--base-url", help="Existing base URL to wire/verify without provisioning")
    parser.add_argument("--vertex-base-url", help="OpenAI-compatible Vertex bridge URL; takes precedence over --base-url and --pod-id")
    parser.add_argument("--vps-host", default=DEFAULT_BEZALEL_VPS_HOST, help="Bezalel VPS host for the remote curl proof command")
    parser.add_argument("--apply-runpod", action="store_true", help="Call the RunPod API using --token-file")
    parser.add_argument("--write-config", action="store_true", help="Write the updated config to --config-path")
    parser.add_argument("--verify-chat", action="store_true", help="Call the OpenAI-compatible chat endpoint")

@@ -229,18 +175,13 @@ def main() -> None:

        "cloud_type": args.cloud_type,
        "model": args.model,
        "provider_name": args.provider_name,
        "config_path": str(args.config_path),
        "vps_host": args.vps_host,
        "actions": [],
    }

    base_url, base_url_source = resolve_base_url(
        vertex_base_url=args.vertex_base_url,
        base_url=args.base_url,
        pod_id=args.pod_id,
    )
    if base_url_source:
        summary["actions"].append(f"resolved_base_url_from_{base_url_source}")
    base_url = args.base_url
    if not base_url and args.pod_id:
        base_url = build_runpod_endpoint(args.pod_id)
        summary["actions"].append("computed_base_url_from_pod_id")

    if args.apply_runpod:
        if not args.token_file.exists():

@@ -255,17 +196,12 @@ def main() -> None:

        base_url = build_runpod_endpoint("<pod-id>")
        summary["actions"].append("using_placeholder_base_url")

    summary["base_url"] = normalize_openai_base_url(base_url)
    summary["base_url"] = base_url
    summary["config_preview"] = update_config_text("", base_url=base_url, model=args.model, provider_name=args.provider_name)
    summary["vps_verify_command"] = build_vps_verify_command(
        base_url=base_url,
        model=args.model,
        prompt=DEFAULT_VERIFY_PROMPT,
        vps_host=args.vps_host,
    )

    if args.write_config:
        write_config_file(args.config_path, base_url=base_url, model=args.model, provider_name=args.provider_name)
        summary["config_path"] = str(args.config_path)
        summary["actions"].append("wrote_config")

    if args.verify_chat:

@@ -278,10 +214,8 @@ def main() -> None:

    print("--- Bezalel Gemma4 RunPod Wiring ---")
    print(f"Pod name: {args.pod_name}")
    print(f"Base URL: {summary['base_url']}")
    print(f"Base URL: {base_url}")
    print(f"Model: {args.model}")
    print(f"Config target: {args.config_path}")
    print(f"Bezalel VPS proof: {summary['vps_verify_command']}")
    if args.write_config:
        print(f"Config written: {args.config_path}")
    if "verify_response" in summary:
specs/fleet-operator-incentives.md (new file, +104)

@@ -0,0 +1,104 @@
# Fleet Operator Incentives Specification

## Overview

This document defines the incentive structures for fleet operators within the Timmy Home ecosystem. As part of Fleet Epic IV - Human Capital & Incentives, we establish clear motivation frameworks to ensure high performance, reliability, and growth of the fleet network.

## 1. Incentive Tiers

### Tier 1: Bronze Operator
- **Eligibility**: New operators, <3 months tenure
- **Base Rate**: $0.15/task
- **Monthly Cap**: $500
- **Bonuses**:
  - First 100 tasks completed: +$100
  - 95%+ completion rate: +$50

### Tier 2: Silver Operator
- **Eligibility**: 3-12 months tenure, >500 tasks completed
- **Base Rate**: $0.22/task
- **Monthly Cap**: $1,200
- **Bonuses**:
  - 98%+ completion rate: +$150
  - Peak-hour availability (6-9 AM): +$150

### Tier 3: Gold Operator
- **Eligibility**: >12 months tenure, >2000 tasks completed
- **Base Rate**: $0.30/task
- **Monthly Cap**: $2,500
- **Bonuses**:
  - 99%+ completion rate: +$300
  - Training 2+ new operators: +$200/operator
  - Weekend availability: +$200

### Tier 4: Platinum Operator
- **Eligibility**: >24 months tenure, >5000 tasks completed, peer nomination
- **Base Rate**: $0.40/task
- **Monthly Cap**: Unlimited
- **Bonuses**:
  - Perfect attendance month: +$500
  - Regional spot bonus: $100-$1000 (discretionary)
  - Profit-sharing pool access (5% of net profits)

## 2. Performance Metrics

| Metric | Target | Measurement |
|--------|--------|-------------|
| Task Completion Rate | ≥98% | Daily rolling average |
| Response Time | ≤5 min | 95th percentile |
| Customer Rating | ≥4.8/5.0 | Rolling 30-day average |
| Uptime/Availability | ≥90% | Weekly average hours active |
| Safety Incidents | 0 | Zero tolerance |

## 3. Bonus Structures

### Quarterly Performance Bonus
- Gold+ operators eligible
- Tiered payouts based on combined metrics:
  - Meets targets: $1,000
  - Exceeds targets: $2,500
  - Exceptional: $5,000

### Referral Program
- Refer new operator: $250 after their 50th task
- Refer new partner business: $500 after first contract signed
- Multi-tier: additional $100 for each referral that becomes Gold within 12 months

### Fleet Growth Bonus
- Operators who expand their own fleet (add ≥3 additional verified operators under their mentorship):
  - $1,000 per new operator added after 6-month probation
  - Access to Platinum-tier benefits for 6 months

## 4. Penalties & Adjustments

- **Late task completion**: -$0.05 per late task (from base)
- **Customer complaint (verified)**: -$25 per incident
- **No-show without notice**: -$50 per incident
- **Safety violation**: Tier demotion, retraining required
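As a rough sketch of how the tier rates (Section 1) and penalty adjustments (Section 4) combine into a monthly payout, assuming a simple per-task settlement model; the function, dictionaries, and example numbers are illustrative and not part of the spec:

```python
# Illustrative monthly payout under the tier/penalty rules above.
# Rates and caps mirror Section 1; penalty amounts mirror Section 4.
TIER_RATES = {"bronze": 0.15, "silver": 0.22, "gold": 0.30, "platinum": 0.40}
MONTHLY_CAPS = {"bronze": 500, "silver": 1200, "gold": 2500, "platinum": float("inf")}

def monthly_payout(tier: str, tasks: int, late_tasks: int = 0,
                   complaints: int = 0, no_shows: int = 0) -> float:
    base = tasks * TIER_RATES[tier]
    penalties = late_tasks * 0.05 + complaints * 25 + no_shows * 50
    # Payout never goes negative and never exceeds the tier's monthly cap
    return min(max(base - penalties, 0.0), MONTHLY_CAPS[tier])

# A Silver operator: 2,000 tasks, 10 late, 1 verified complaint
print(round(monthly_payout("silver", 2000, late_tasks=10, complaints=1), 2))  # → 414.5
```

Note how the cap binds only at high volume: the same 2,000 tasks at the Bronze rate would compute to $300, well under the $500 cap, while 10,000 Bronze tasks would be clipped to $500.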
## 5. Payment Schedule

- Weekly payouts (every Friday)
- Direct deposit or cryptocurrency wallet
- Detailed invoice with performance breakdown
- Tax documents (1099) provided annually

## 6. Review & Advancement

- Automatic tier review occurs monthly
- Operators may request early review after meeting tier criteria
- Appeals process available within 7 days of notification
- Demotion notices include 14-day improvement window

## 7. Partner Program Integration

Operators in Gold+ tiers are eligible for Partner Program benefits:
- Access to premium client contracts
- Co-marketing opportunities
- Equipment leasing at preferred rates
- Revenue share on referred business

---

*Last Updated: 2026-03-29*
*Next Review: Quarterly*
specs/fleet-ops-runbook.md (new file, +149)

@@ -0,0 +1,149 @@
|
||||
# Fleet Operations Runbook
|
||||
|
||||
## Purpose
|
||||
|
||||
This runbook provides fleet operators with standard operating procedures (SOPs), escalation paths, and daily operational guidance for managing fleet tasks within the Timmy Home platform.
|
||||
|
||||
## Table of Contents
|
||||
|
||||
1. [Daily Startup](#daily-startup)
|
||||
2. [Task Management](#task-management)
|
||||
3. [Communication Protocols](#communication-protocols)
|
||||
4. [Incident Response](#incident-response)
|
||||
5. [Vehicle & Equipment Checks](#vehicle--equipment-checks)
|
||||
6. [End-of-Day Procedures](#end-of-day-procedures)
|
||||
7. [Escalation Matrix](#escalation-matrix)
|
||||
8. [Contact Directory](#contact-directory)
|
||||
|
||||
---
|
||||
|
||||
## Daily Startup
|
||||
|
||||
### Morning Briefing (5:45 AM - 6:00 AM)
|
||||
- [ ] Log into operator dashboard
|
||||
- [ ] Review daily task assignments
|
||||
- [ ] Check weather and traffic conditions
|
||||
- [ ] Confirm vehicle status (fuel, battery, maintenance)
|
||||
- [ ] Update availability status to "Active"
|
||||
|
### Equipment Checklist

- [ ] Mobile device charged (>80%)
- [ ] Scanner/tablet functional
- [ ] Connectivity tested (Wi-Fi & cellular)
- [ ] PPE available (if required for task type)
- [ ] First aid kit present in vehicle

## Task Management

### Task Acceptance

1. Review task details: location, time window, requirements
2. Confirm capacity to accept
3. Acknowledge the task within 2 minutes
4. Navigate to the location using integrated GPS

### On-Site Procedure

- Arrive 5 minutes early
- Scan QR code or enter PIN
- Complete required verification steps
- Perform task according to SOP checklist
- Capture completion evidence (photo/video if required)
- Obtain customer signature if applicable
- Mark task complete in system

### Task Issues

- **Location inaccessible**: Contact dispatch, document with photo
- **Equipment failure**: Log issue, request replacement
- **Customer not present**: Wait 15 minutes past the scheduled time, then escalate
- **Task cannot be completed**: Document reason, contact support immediately

## Communication Protocols

### Radio/Comms Etiquette

- Use clear, concise language
- Identify yourself and the task ID at the start of each transmission
- Acknowledge all dispatcher communications within 1 minute
- Emergency communications use the priority channel

### Status Updates

- Update status every 2 hours during shift
- Notify dispatch immediately of delays longer than 10 minutes
- Communicate ETA changes proactively

## Incident Response

### Incident Categories & Response Times

| Incident Type | Initial Response | Escalation Threshold |
|---------------|------------------|----------------------|
| Vehicle accident | Immediate (911 + dispatch) | All accidents |
| Task dispute | 5 minutes | Unresolved after 15 min |
| Medical emergency | Immediate (911) | All emergencies |
| Equipment loss/theft | 10 minutes | Police report required |
| Route blocked | 15 minutes | Alternate route not found |
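The response-time matrix above can be encoded as a small lookup table. A minimal sketch in Python — the incident keys and the `is_overdue` helper are hypothetical, only the thresholds come from the table:

```python
from datetime import timedelta

# Hypothetical encoding of the response-time matrix above; key names and the
# is_overdue helper are illustrative, thresholds mirror the table.
RESPONSE_SLA = {
    "vehicle_accident": timedelta(0),       # immediate
    "task_dispute": timedelta(minutes=5),
    "medical_emergency": timedelta(0),      # immediate
    "equipment_loss_theft": timedelta(minutes=10),
    "route_blocked": timedelta(minutes=15),
}

def is_overdue(incident_type: str, minutes_since_report: float) -> bool:
    """True once the initial response window for this incident has elapsed."""
    return timedelta(minutes=minutes_since_report) > RESPONSE_SLA[incident_type]

print(is_overdue("task_dispute", 6))    # True: past the 5-minute window
print(is_overdue("route_blocked", 10))  # False: still inside the 15-minute window
```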
### Incident Reporting Steps

1. Secure safety (self and others)
2. Contact appropriate emergency services if needed
3. Notify dispatch/supervisor
4. Document with photos/videos
5. Complete incident form within 1 hour
6. Follow up with written statement within 24 hours

## Vehicle & Equipment Checks

### Daily Pre-Trip Inspection

- **Tires**: Pressure and condition
- **Lights**: All operational
- **Fluids**: Oil, coolant, washer fluid
- **Brakes**: Functional test
- **Battery**: Charge level (EVs) or condition
- **Documentation**: Registration, insurance current

### Weekly Maintenance

- Full vehicle wash
- Interior cleaning
- Inventory check (supplies, PPE)
- System software updates

## End-of-Day Procedures

### Shift Closure (6:00 PM - 6:15 PM)

- [ ] Complete all active tasks
- [ ] Update status to "Ending Shift"
- [ ] Submit daily report via dashboard
- [ ] Log vehicle mileage
- [ ] Charge all equipment
- [ ] Vehicle parked in designated area

### Reporting Requirements

- Tasks completed: count and summary
- Issue logs: any incidents or near-misses
- Customer feedback: notable interactions
- Equipment status: maintenance needed?
- Suggestions for process improvements

## Escalation Matrix

| Situation | Contact | Method | Response Time |
|-----------|---------|--------|---------------|
| Technical failure | Tier 1 Support | Phone/App | 15 minutes |
| Task dispute | Supervisor | Radio | 10 minutes |
| Safety incident | Safety Officer | Phone (direct) | Immediate |
| Payroll issue | Admin Team | Email | 24 hours |
| Client complaint | Account Manager | Email | 1 hour |
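The escalation matrix above is effectively a routing table. A minimal sketch — the contacts and channels come from the table, while the key names and the `route()` helper are illustrative:

```python
# Hypothetical routing table mirroring the escalation matrix above; contacts
# and channels come from the table, the route() helper is illustrative.
ESCALATION = {
    "technical_failure": ("Tier 1 Support", "phone/app"),
    "task_dispute": ("Supervisor", "radio"),
    "safety_incident": ("Safety Officer", "phone"),
    "payroll_issue": ("Admin Team", "email"),
    "client_complaint": ("Account Manager", "email"),
}

def route(situation: str) -> str:
    """Return the first-line escalation instruction for a situation."""
    contact, method = ESCALATION[situation]
    return f"Contact {contact} via {method}"

print(route("safety_incident"))  # Contact Safety Officer via phone
```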
## Contact Directory

| Role | Name | Phone | Email |
|------|------|-------|-------|
| Dispatch | — | +1-800-DISPATCH | dispatch@timmyhome.io |
| Tier 1 Support | — | +1-800-SUPPORT | support@timmyhome.io |
| Safety Hotline | — | +1-800-SAFETY | safety@timmyhome.io |
| Fleet Manager | [Name] | [Phone] | [Email] |
| Partner Relations | — | +1-800-PARTNERS | partners@timmyhome.io |

---

*Runbook Version: 1.0*
*Effective Date: 2026-03-29*
*Next Review: Quarterly*
specs/templates/operator-application.md (new file, 146 lines)
@@ -0,0 +1,146 @@
# Fleet Operator Application Template

## Personal Information

**Full Legal Name**: _______________________________
**Date of Birth**: _______________
**SSN / Tax ID**: _______________
**Contact Phone**: _______________
**Email Address**: _______________
**Physical Address**: _______________________________

## Employment Eligibility

- [ ] I am legally authorized to work in the United States
- [ ] I am at least 21 years of age
- [ ] I possess a valid driver's license (Class: ______, State: ______)

## Driving & Vehicle Information

### Driver's License

- License Number: _______________
- State: _______________
- Expiration: _______________
- Have you had any moving violations in the past 3 years? (Y/N): ______
- If yes, please explain: _______________________________

### Vehicle Information

- **Vehicle Year/Make/Model**: __________________________________
- **Vehicle VIN**: ___________________________________________
- **License Plate**: _________________________________________
- **Vehicle Color**: _________________________________________
- **Vehicle used for**: [ ] Personal [ ] Commercial [ ] Leased
- **Insurance Provider**: _____________________________________
- **Policy Number**: _________________________________________
- **Coverage Limits**: $______ bodily injury / $______ property damage

## Background Check Authorization

I authorize Timmy Home and its affiliated entities to conduct a background check, including:

- [ ] Criminal history (7-year lookback)
- [ ] Motor vehicle records
- [ ] Employment verification
- [ ] Education verification
- [ ] Credit check (if required)

**Signature**: _______________________________ **Date**: _______________

## Equipment & Technology

### Required Equipment (check all that you possess)

- [ ] Smartphone (iOS/Android) with data plan
- [ ] Portable charger / power bank
- [ ] Mount for phone in vehicle
- [ ] Scanner/tablet (if applicable)
- [ ] Other: _______________________________________________

### Technical Proficiency

Please rate your comfort level with the following (1-5):

- Mobile applications: _____
- GPS navigation: _____
- Digital forms & documentation: _____
- Photography for documentation: _____

## Availability & Scheduling

### Preferred Working Hours

- [ ] Morning (5:00 AM - 12:00 PM)
- [ ] Afternoon (12:00 PM - 8:00 PM)
- [ ] Evening (8:00 PM - 12:00 AM)
- [ ] Overnight (12:00 AM - 5:00 AM)
- [ ] Weekends

### Weekly Availability

- Monday: _____ hours
- Tuesday: _____ hours
- Wednesday: _____ hours
- Thursday: _____ hours
- Friday: _____ hours
- Saturday: _____ hours
- Sunday: _____ hours

**Total weekly availability**: _____ hours

## Experience & Training

### Previous Relevant Experience

**Company**: ___________________________________________
**Role**: _______________________________________________
**Duration**: ___________________________________________
**Key Responsibilities**: _______________________________

**Company**: ___________________________________________
**Role**: _______________________________________________
**Duration**: ___________________________________________
**Key Responsibilities**: _______________________________

### Specialized Training

- [ ] Commercial Driver's License (CDL)
- [ ] Defensive Driving Course
- [ ] First Aid / CPR Certified
- [ ] OSHA Safety Training
- [ ] Other: _____________________________________________

## Incentive Program Preferences

Which incentive components are most important to you? (Rank 1-5, 1 = most important)

- Base pay rate: _____
- Task variety: _____
- Flexible schedule: _____
- Performance bonuses: _____
- Tier advancement opportunities: _____

## References

### Professional Reference 1

**Name**: ________________________________
**Relationship**: _______________________
**Company**: ___________________________
**Phone**: _____________________________
**Email**: _____________________________

### Professional Reference 2

**Name**: ________________________________
**Relationship**: _______________________
**Company**: ___________________________
**Phone**: _____________________________
**Email**: _____________________________

## Agreement & Certification

I certify that all information provided in this application is true and complete to the best of my knowledge. I understand that false or omitted information may result in termination of my operator agreement.

I have read and agree to the Timmy Home Operator Agreement and related policies.

**Applicant Signature**: _______________________________
**Printed Name**: _____________________________________
**Date**: _______________

---

*Application ID*: [Auto-generated]
*Submission Date*: [Auto-filled]
*Review Status*: Pending

*Please email completed application to operators@timmyhome.io or submit via the operator portal.*
specs/templates/partner-report.md (new file, 222 lines)
@@ -0,0 +1,222 @@
# Partner Performance Report Template

## Report Period

**From**: _______________ **To**: _______________
**Report Generated**: _______________
**Report Owner**: _________________________________________

---

## Executive Summary

### Period Highlights

- Total tasks completed: _______________
- Revenue generated: $_______________
- Net promoter score (NPS): _______________
- Completion rate: ______________%
- Key achievements: _____________________________________________
- Areas for improvement: _________________________________________

---

## Partner Details

**Partner Name**: _______________________________________________
**Partner ID**: _______________
**Partner Tier**: [ ] Bronze [ ] Silver [ ] Gold [ ] Platinum
**Contract Start Date**: _______________
**Account Manager**: _______________________________________________

---

## Volume Metrics

| Metric | Current Period | Previous Period | Variance | Annual Target |
|--------|----------------|-----------------|----------|---------------|
| Tasks Assigned | ________ | ________ | ____% | ________ |
| Tasks Completed | ________ | ________ | ____% | ________ |
| Tasks Cancelled | ________ | ________ | ____% | ________ |
| Avg. Tasks/Day | ________ | ________ | ____% | ________ |
| Peak Day (tasks) | ________ | ________ | ________ | ________ |

---

## Financial Summary

| Category | Current Period | Previous Period | Variance | YTD Total |
|----------|----------------|-----------------|----------|-----------|
| Gross Revenue | $__________ | $__________ | ____% | $__________ |
| Incentives Paid | $__________ | $__________ | ____% | $__________ |
| Bonuses Awarded | $__________ | $__________ | ____% | $__________ |
| Net Revenue* | $__________ | $__________ | ____% | $__________ |

*Net Revenue = Gross Revenue - Incentives Paid - Bonuses Awarded
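The footnote formula can be checked mechanically when filling in the table. A minimal sketch — the figures are illustrative, only the formula comes from the footnote:

```python
# Minimal sketch of the Net Revenue footnote formula; figures are illustrative.
def net_revenue(gross: float, incentives: float, bonuses: float) -> float:
    """Net Revenue = Gross Revenue - Incentives Paid - Bonuses Awarded."""
    return gross - incentives - bonuses

print(net_revenue(120_000.0, 18_500.0, 4_200.0))  # 97300.0
```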
### Revenue Breakdown by Service Type

- Standard Delivery: $__________ (____%)
- Express Delivery: $__________ (____%)
- White-Glove Service: $__________ (____%)
- Other: $__________ (____%)

---

## Performance Quality Metrics

### Completion & Timeliness

- **On-time Completion Rate**: ________% (Target: ≥95%)
- **Average Completion Time**: ______ minutes (Target: ≤45 min)
- **Tasks Completed Early**: ________ (____%)
- **Tasks Completed Late**: ________ (____%)

### Quality Assurance

- **Customer Satisfaction Score**: ______ / 5.0
- **5-Star Rating Percentage**: ______%
- **Complaints Received**: ________
- **Complaints Escalated**: ________
- **Quality Audit Pass Rate**: ______%

### Operational Reliability

- **Vehicle/Availability Uptime**: ______%
- **System/App Uptime**: ______%
- **Missed Tasks due to Equipment**: ________
- **Route Adherence Score**: ______%

---

## Operator Team Performance

### Team Composition

| Tier | Count | Change from prev. period |
|------|-------|--------------------------|
| Bronze | ________ | [ ] ↑ [ ] ↓ ____ |
| Silver | ________ | [ ] ↑ [ ] ↓ ____ |
| Gold | ________ | [ ] ↑ [ ] ↓ ____ |
| Platinum | ________ | [ ] ↑ [ ] ↓ ____ |
| **Total** | ________ | ________ |

### Operator Productivity

- **Top Performer**: ______________________ (______ tasks)
- **Average Tasks/Operator/Day**: ________
- **New Operators Added**: ________
- **Operators Terminated**: ________
- **Operator Retention Rate**: ______%

---

## Customer & Client Insights

### Top 5 Customers by Volume

| # | Customer Name | Tasks | Revenue |
|---|---------------|-------|---------|
| 1 | ______________ | _____ | $_______ |
| 2 | ______________ | _____ | $_______ |
| 3 | ______________ | _____ | $_______ |
| 4 | ______________ | _____ | $_______ |
| 5 | ______________ | _____ | $_______ |

### Customer Feedback Themes

- **Positive**: _______________________________________________________
- **Negative**: _______________________________________________________
- **Improvement Requests**: ___________________________________________

---

## Incident & Issue Log

| Date | Incident Type | Description | Resolution | Cost Impact |
|------|---------------|-------------|------------|-------------|
| ______ | _____________ | ____________ | __________ | $__________ |
| ______ | _____________ | ____________ | __________ | $__________ |
| ______ | _____________ | ____________ | __________ | $__________ |

**Total Incident Cost This Period**: $__________

---

## Compliance & Safety

- Safety Training Completed: ________%
- Safety Violations: ________
- Near-Miss Reports: ________
- Corrective Actions Outstanding: ________
- Regulatory Compliance Status: [ ] Compliant [ ] Non-compliant

---

## Partner Program Benefits Utilization

| Benefit | Utilized? | Frequency | ROI Assessment |
|---------|-----------|-----------|----------------|
| Co-marketing funds | [ ] Yes [ ] No | ________ | ________ |
| Equipment leasing | [ ] Yes [ ] No | ________ | ________ |
| Priority dispatch | [ ] Yes [ ] No | ________ | ________ |
| Training program | [ ] Yes [ ] No | ________ | ________ |
| Profit-sharing | [ ] Yes [ ] No | ________ | ________ |

---

## Review & Recognition

### Performance Assessment

**Overall Rating**: [ ] Exceeds Expectations [ ] Meets Expectations [ ] Needs Improvement

**Strengths**:
1. ___________________________________________
2. ___________________________________________
3. ___________________________________________

**Areas for Development**:
1. ___________________________________________
2. ___________________________________________

### Recognition & Awards

- Employee of the Month: _________________________________
- Safety Champion: ______________________________________
- Customer Hero: _______________________________________

---

## Goals & Action Plan

### Next Period Goals (30-60-90 day)

| Goal Area | Objective | Success Metric | Owner | Due Date |
|-----------|-----------|----------------|-------|----------|
| Volume Growth | ______________________ | ______________ | ________ | ________ |
| Quality Improvement | ______________________ | ______________ | ________ | ________ |
| Safety | ______________________ | ______________ | ________ | ________ |
| Training | ______________________ | ______________ | ________ | ________ |

### Required Support from Timmy Home

_________________________________________________________________
_________________________________________________________________
_________________________________________________________________

---

## Signatures

**Partner Representative**: _______________________________________
**Title**: ______________________ **Date**: _______________
**Signature**: _______________________________________________

**Timmy Home Account Manager**: _________________________________
**Title**: ______________________ **Date**: _______________
**Signature**: _______________________________________________

---

## Appendices

- [ ] Appendix A: Detailed Task Log
- [ ] Appendix B: Customer Feedback Samples
- [ ] Appendix C: Financial Ledger
- [ ] Appendix D: Incident Reports
- [ ] Appendix E: Training Records

---

*Report classification: Confidential - Partner Eyes Only*
*Template Version: 1.0*
*Next review due: _______________*
@@ -1 +1,12 @@
# Timmy core module

from .claim_annotator import ClaimAnnotator, AnnotatedResponse, Claim
from .audit_trail import AuditTrail, AuditEntry

__all__ = [
    "ClaimAnnotator",
    "AnnotatedResponse",
    "Claim",
    "AuditTrail",
    "AuditEntry",
]
src/timmy/claim_annotator.py (new file, 156 lines)
@@ -0,0 +1,156 @@
#!/usr/bin/env python3
"""
Response Claim Annotator — Source Distinction System

SOUL.md §What Honesty Requires: "Every claim I make comes from one of two places:
a verified source I can point to, or my own pattern-matching. My user must be
able to tell which is which."
"""

import re
import json
from dataclasses import dataclass, field, asdict
from typing import Optional, List, Dict


@dataclass
class Claim:
    """A single claim in a response, annotated with source type."""
    text: str
    source_type: str  # "verified" | "inferred"
    source_ref: Optional[str] = None  # path/URL to verified source, if verified
    confidence: str = "unknown"  # high | medium | low | unknown
    hedged: bool = False  # True if hedging language is present


@dataclass
class AnnotatedResponse:
    """Full response with annotated claims and rendered output."""
    original_text: str
    claims: List[Claim] = field(default_factory=list)
    rendered_text: str = ""
    has_unverified: bool = False  # True if any inferred claim lacks hedging


class ClaimAnnotator:
    """Annotates response claims with source distinction and hedging."""

    # Hedging phrases to prepend to inferred claims if not already present
    HEDGE_PREFIXES = [
        "I think ",
        "I believe ",
        "It seems ",
        "Probably ",
        "Likely ",
    ]

    def __init__(self, default_confidence: str = "unknown"):
        self.default_confidence = default_confidence

    def annotate_claims(
        self,
        response_text: str,
        verified_sources: Optional[Dict[str, str]] = None,
    ) -> AnnotatedResponse:
        """
        Annotate claims in a response text.

        Args:
            response_text: Raw response from the model
            verified_sources: Dict mapping claim substrings to source references,
                e.g. {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"}

        Returns:
            AnnotatedResponse with claims marked and rendered text
        """
        verified_sources = verified_sources or {}
        claims = []
        has_unverified = False

        # Simple sentence splitting (naive, but sufficient for MVP)
        sentences = [s.strip() for s in re.split(r'[.!?]\s+', response_text) if s.strip()]

        for sent in sentences:
            # Check if the sentence matches a claim we can verify
            matched_source = None
            for claim_substr, source_ref in verified_sources.items():
                if claim_substr.lower() in sent.lower():
                    matched_source = source_ref
                    break

            if matched_source:
                # Verified claim
                claim = Claim(
                    text=sent,
                    source_type="verified",
                    source_ref=matched_source,
                    confidence="high",
                    hedged=False,
                )
            else:
                # Inferred claim (pattern-matched)
                claim = Claim(
                    text=sent,
                    source_type="inferred",
                    confidence=self.default_confidence,
                    hedged=self._has_hedge(sent),
                )
                if not claim.hedged:
                    has_unverified = True

            claims.append(claim)

        # Render the annotated response
        rendered = self._render_response(claims)

        return AnnotatedResponse(
            original_text=response_text,
            claims=claims,
            rendered_text=rendered,
            has_unverified=has_unverified,
        )

    def _has_hedge(self, text: str) -> bool:
        """Check if text already contains hedging language."""
        text_lower = text.lower()
        for prefix in self.HEDGE_PREFIXES:
            if text_lower.startswith(prefix.lower()):
                return True
        # Also check for inline hedges
        hedge_words = ["i think", "i believe", "probably", "likely", "maybe", "perhaps"]
        return any(word in text_lower for word in hedge_words)

    def _render_response(self, claims: List[Claim]) -> str:
        """
        Render response with source distinction markers.

        Verified claims: [V] claim text [source: ref]
        Inferred claims: [I] claim text (hedging is added if missing)
        """
        rendered_parts = []
        for claim in claims:
            if claim.source_type == "verified":
                part = f"[V] {claim.text}"
                if claim.source_ref:
                    part += f" [source: {claim.source_ref}]"
            else:  # inferred
                if not claim.hedged:
                    # Add hedging if missing
                    hedged_text = f"I think {claim.text[0].lower()}{claim.text[1:]}" if claim.text else claim.text
                    part = f"[I] {hedged_text}"
                else:
                    part = f"[I] {claim.text}"
            rendered_parts.append(part)
        return " ".join(rendered_parts)

    def to_json(self, annotated: AnnotatedResponse) -> str:
        """Serialize an annotated response to JSON."""
        return json.dumps(
            {
                "original_text": annotated.original_text,
                "rendered_text": annotated.rendered_text,
                "has_unverified": annotated.has_unverified,
                "claims": [asdict(c) for c in annotated.claims],
            },
            indent=2,
            ensure_ascii=False,
        )
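The annotate-then-render flow above can be compressed into a few lines to show the intended output shape. This standalone sketch mirrors the module's logic rather than importing it; the example sentences and source URL are illustrative:

```python
import re

def annotate(text: str, verified: dict) -> str:
    """Compressed sketch of annotate_claims + _render_response."""
    parts = []
    for sent in (s.strip() for s in re.split(r'[.!?]\s+', text) if s.strip()):
        # Verified if any known claim substring appears in the sentence
        ref = next((r for c, r in verified.items() if c.lower() in sent.lower()), None)
        if ref:
            parts.append(f"[V] {sent} [source: {ref}]")
        elif any(h in sent.lower() for h in ("i think", "i believe", "probably", "likely", "maybe", "perhaps")):
            parts.append(f"[I] {sent}")
        else:
            # Unhedged inferred claim: prepend hedging
            parts.append(f"[I] I think {sent[0].lower()}{sent[1:]}")
    return " ".join(parts)

out = annotate(
    "Paris is the capital of France. The Seine is lovely in spring.",
    {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"},
)
print(out)
# [V] Paris is the capital of France [source: https://en.wikipedia.org/wiki/Paris] [I] I think the Seine is lovely in spring.
```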
@@ -1,20 +1,14 @@
from __future__ import annotations

import json
from pathlib import Path
from unittest.mock import patch

import yaml

from scripts.bezalel_gemma4_vps import (
    DEFAULT_CONFIG_PATH,
    DEFAULT_BEZALEL_VPS_HOST,
    build_deploy_mutation,
    build_runpod_endpoint,
    build_vps_verify_command,
    normalize_openai_base_url,
    parse_deploy_response,
    resolve_base_url,
    update_config_text,
    verify_openai_chat,
)
@@ -34,10 +28,6 @@ class _FakeResponse:
        return False


def test_default_config_path_targets_bezalel_vps_root_config() -> None:
    assert DEFAULT_CONFIG_PATH == Path("/root/wizards/bezalel/home/config.yaml")


def test_build_deploy_mutation_uses_ollama_image_and_openai_port() -> None:
    query = build_deploy_mutation(name="bezalel-gemma4", gpu_type="NVIDIA L40S", model_tag="gemma4:latest")

@@ -47,30 +37,6 @@
    assert 'volumeMountPath: "/root/.ollama"' in query


def test_normalize_openai_base_url_adds_v1_suffix() -> None:
    assert normalize_openai_base_url("https://pod-11434.proxy.runpod.net") == "https://pod-11434.proxy.runpod.net/v1"


def test_normalize_openai_base_url_trims_chat_completions_suffix() -> None:
    assert normalize_openai_base_url("https://pod-11434.proxy.runpod.net/v1/chat/completions") == "https://pod-11434.proxy.runpod.net/v1"


def test_resolve_base_url_prefers_vertex_over_base_and_pod_id() -> None:
    base_url, source = resolve_base_url(
        vertex_base_url="https://vertex.example.com/openai",
        base_url="https://plain.example.com",
        pod_id="abc123",
    )
    assert source == "vertex_base_url"
    assert base_url == "https://vertex.example.com/openai/v1"


def test_resolve_base_url_falls_back_to_base_url_before_pod_id() -> None:
    base_url, source = resolve_base_url(base_url="https://plain.example.com", pod_id="abc123")
    assert source == "base_url"
    assert base_url == "https://plain.example.com/v1"


def test_build_runpod_endpoint_appends_v1_suffix() -> None:
    assert build_runpod_endpoint("abc123") == "https://abc123-11434.proxy.runpod.net/v1"

@@ -94,7 +60,7 @@ def test_parse_deploy_response_extracts_pod_id_and_endpoint() -> None:
    }


-def test_update_config_text_upserts_big_brain_provider_and_normalizes_base_url() -> None:
+def test_update_config_text_upserts_big_brain_provider() -> None:
    original = """
model:
  default: kimi-k2.5
@@ -106,7 +72,7 @@ custom_providers:
  model: gemma3:27b
"""

-    updated = update_config_text(original, base_url="https://new-pod-11434.proxy.runpod.net", model="gemma4:latest")
+    updated = update_config_text(original, base_url="https://new-pod-11434.proxy.runpod.net/v1", model="gemma4:latest")
    parsed = yaml.safe_load(updated)

    assert parsed["model"] == {"default": "kimi-k2.5", "provider": "kimi-coding"}
@@ -120,14 +86,7 @@ custom_providers:
    ]


def test_build_vps_verify_command_targets_bezalel_host_and_chat_completions() -> None:
    command = build_vps_verify_command(base_url="https://pod-11434.proxy.runpod.net", model="gemma4:latest")
    assert command.startswith(f"ssh root@{DEFAULT_BEZALEL_VPS_HOST} ")
    assert "/v1/chat/completions" in command
    assert "gemma4:latest" in command


-def test_verify_openai_chat_calls_chat_completions_with_normalized_base_url() -> None:
+def test_verify_openai_chat_calls_chat_completions() -> None:
    response_payload = {
        "choices": [
            {
@@ -142,7 +101,7 @@
        "scripts.bezalel_gemma4_vps.request.urlopen",
        return_value=_FakeResponse(response_payload),
    ) as mocked:
-        result = verify_openai_chat("https://pod-11434.proxy.runpod.net", model="gemma4:latest", prompt="say READY")
+        result = verify_openai_chat("https://pod-11434.proxy.runpod.net/v1", model="gemma4:latest", prompt="say READY")

    assert result == "READY"
    req = mocked.call_args.args[0]
@@ -150,10 +109,3 @@
    payload = json.loads(req.data.decode())
    assert payload["model"] == "gemma4:latest"
    assert payload["messages"][0]["content"] == "say READY"


def test_readme_documents_root_config_path_and_vps_proof_command() -> None:
    readme = Path("scripts/README_bezalel_gemma4_vps.md").read_text()
    assert "/root/wizards/bezalel/home/config.yaml" in readme
    assert "ssh root@104.131.15.18" in readme
    assert "--vertex-base-url" in readme
tests/timmy/test_claim_annotator.py (new file, 103 lines)
@@ -0,0 +1,103 @@
#!/usr/bin/env python3
"""Tests for claim_annotator.py — verifies source distinction is present."""

import sys
import os
import json

# tests/timmy/ sits two levels below the repo root, so climb twice to reach src/
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "..", "src"))

from timmy.claim_annotator import ClaimAnnotator, AnnotatedResponse


def test_verified_claim_has_source():
    """Verified claims include a source reference."""
    annotator = ClaimAnnotator()
    verified = {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"}
    response = "Paris is the capital of France. It is a beautiful city."

    result = annotator.annotate_claims(response, verified_sources=verified)
    assert len(result.claims) > 0
    verified_claims = [c for c in result.claims if c.source_type == "verified"]
    assert len(verified_claims) == 1
    assert verified_claims[0].source_ref == "https://en.wikipedia.org/wiki/Paris"
    assert "[V]" in result.rendered_text
    assert "[source:" in result.rendered_text


def test_inferred_claim_has_hedging():
    """Pattern-matched claims use hedging language."""
    annotator = ClaimAnnotator()
    response = "The weather is nice today. It might rain tomorrow."

    result = annotator.annotate_claims(response)
    inferred_claims = [c for c in result.claims if c.source_type == "inferred"]
    assert len(inferred_claims) >= 1
    # Check that rendered text has the [I] marker
    assert "[I]" in result.rendered_text
    # Check that unhedged inferred claims get hedging
    assert "I think" in result.rendered_text or "I believe" in result.rendered_text


def test_hedged_claim_not_double_hedged():
    """Claims that already hedge are not double-hedged."""
    annotator = ClaimAnnotator()
    response = "I think the sky is blue. It is a nice day."

    result = annotator.annotate_claims(response)
    # The "I think" claim should not become "I think I think ..."
    assert "I think I think" not in result.rendered_text


def test_rendered_text_distinguishes_types():
    """Rendered text clearly distinguishes verified vs inferred."""
    annotator = ClaimAnnotator()
    verified = {"Earth is round": "https://science.org/earth"}
    response = "Earth is round. Stars are far away."

    result = annotator.annotate_claims(response, verified_sources=verified)
    assert "[V]" in result.rendered_text  # verified marker
    assert "[I]" in result.rendered_text  # inferred marker


def test_to_json_serialization():
    """An annotated response serializes to valid JSON."""
    annotator = ClaimAnnotator()
    response = "Test claim."
    result = annotator.annotate_claims(response)
    json_str = annotator.to_json(result)
    parsed = json.loads(json_str)
    assert "claims" in parsed
    assert "rendered_text" in parsed
    assert parsed["has_unverified"] is True  # inferred claim without hedging


def test_audit_trail_integration():
    """Claims carry the confidence and source-type fields the audit trail logs."""
    # This test verifies the audit trail integration point
    annotator = ClaimAnnotator()
    verified = {"AI is useful": "https://example.com/ai"}
    response = "AI is useful. It can help with tasks."

    result = annotator.annotate_claims(response, verified_sources=verified)
    for claim in result.claims:
        assert claim.source_type in ("verified", "inferred")
        assert claim.confidence in ("high", "medium", "low", "unknown")
        if claim.source_type == "verified":
            assert claim.source_ref is not None


if __name__ == "__main__":
    test_verified_claim_has_source()
    print("✓ test_verified_claim_has_source passed")
    test_inferred_claim_has_hedging()
    print("✓ test_inferred_claim_has_hedging passed")
    test_hedged_claim_not_double_hedged()
    print("✓ test_hedged_claim_not_double_hedged passed")
    test_rendered_text_distinguishes_types()
    print("✓ test_rendered_text_distinguishes_types passed")
    test_to_json_serialization()
    print("✓ test_to_json_serialization passed")
    test_audit_trail_integration()
    print("✓ test_audit_trail_integration passed")
    print("\nAll tests passed!")