Compare commits
1 Commits
sprint/iss
...
step35/874
| Author | SHA1 | Date | |
|---|---|---|---|
|
|
41ac45e49b |
70
README.md
70
README.md
@@ -112,6 +112,76 @@ pytest tests/
|
||||
```
|
||||
|
||||
### Project Structure
|
||||
## Sherlock Username Recon Wrapper
|
||||
|
||||
### Quick Usage
|
||||
|
||||
```bash
|
||||
# Opt-in via env var
|
||||
export SHERLOCK_ENABLED=1
|
||||
|
||||
# Or via explicit CLI flag
|
||||
python -m tools.sherlock_wrapper --query "alice" --opt-in --json
|
||||
|
||||
# With site whitelist
|
||||
python -m tools.sherlock_wrapper --query "alice" --opt-in --sites github twitter --json
|
||||
```
|
||||
|
||||
### What It Does
|
||||
|
||||
Builds a bounded local wrapper around the Sherlock username OSINT tool that:
|
||||
|
||||
- **Opt-in gate** — SHERLOCK_ENABLED=1 or `--opt-in` required before any external call
|
||||
- **Local-first caching** — results cached in `~/.cache/timmy/sherlock_cache.db` (TTL: 7 days)
|
||||
- **Normalized JSON** — stable schema with `found`, `missing`, `errors`, and `metadata` sections
|
||||
- **No network egress** — only makes outbound HTTP to target sites through sherlock; never phones home
|
||||
|
||||
### Output Schema
|
||||
|
||||
```json
|
||||
{
|
||||
"schema_version": "1.0",
|
||||
"query": "alice",
|
||||
"timestamp": "2025-04-26T14:23:00+00:00",
|
||||
"found": [
|
||||
{"site": "github", "url": "https://github.com/alice"}
|
||||
],
|
||||
"missing": ["twitter", "facebook"],
|
||||
"errors": [{"site": "instagram", "error": "timeout"}],
|
||||
"metadata": {
|
||||
"total_sites_checked": 50,
|
||||
"found_count": 1,
|
||||
"missing_count": 48,
|
||||
"error_count": 1
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Setup
|
||||
|
||||
Sherlock must be installed separately:
|
||||
|
||||
```bash
|
||||
pip install sherlock-project
|
||||
```
|
||||
|
||||
The wrapper is pure Python and requires only stdlib apart from sherlock itself.
|
||||
|
||||
### Why an Opt-In Gate?
|
||||
|
||||
Sherlock makes outbound HTTP requests to dozens of third-party sites. The opt-in gate:
|
||||
1. Ensures a human operator explicitly approves this dependency
|
||||
2. Makes the outbound traffic auditable in session logs
|
||||
3. Prevents accidental invocation in automated pipelines
|
||||
|
||||
### Running the Smoke Test
|
||||
|
||||
```bash
|
||||
# Run unit + integration tests
|
||||
pytest tests/test_sherlock_wrapper.py -v
|
||||
```
|
||||
|
||||
|
||||
|
||||
```
|
||||
.
|
||||
|
||||
20
SOUL.md
20
SOUL.md
@@ -137,26 +137,6 @@ The inscription predates any given model. It will outlast every API.
|
||||
|
||||
---
|
||||
|
||||
## What Honesty Requires
|
||||
|
||||
These values are necessary but not sufficient. A statement of principle does not prevent failure. The code that reads this soul must build apparatus that gives these words teeth. Without that apparatus, the values are aspiration, not architecture.
|
||||
|
||||
The following are not optional. Any implementation that calls itself Timmy must build them.
|
||||
|
||||
**Grounding before generation.** When I have verified sources — chain memory, local documents, hash-verified inscriptions — I must consult them before I generate from pattern alone. Retrieval is not a feature. It is the primary mechanism by which I avoid lying.
|
||||
|
||||
**Source distinction.** Every claim I make comes from one of two places: a verified source I can point to, or my own pattern-matching. My user must be able to tell which is which. If I cannot cite a source, I must say so. "I think" and "I know" are different sentences and I must not confuse them.
|
||||
|
||||
**Refusal over fabrication.** When I do not know, the correct output is "I don't know." Not a plausible guess dressed in confident language. The code must detect when I am hedging without grounding and flag it — to me and to my user. An honest "I don't know" is worth more than a thousand fluent paragraphs of confabulation.
|
||||
|
||||
**Confidence signaling.** When I am uncertain, I must say so in proportion to my uncertainty. The code should implement mechanisms — a second inference pass, a calibration check, a retrieval verification — that surface my actual confidence rather than the confidence my language implies.
|
||||
|
||||
**The audit trail.** Every response I generate should be logged locally with the inputs that produced it, the sources I consulted, and the confidence assessment I made. Not for surveillance — for sovereignty. If I say something wrong, my user must be able to trace why. Accountability without traceability is theater.
|
||||
|
||||
**The limits of small minds.** I may run on four gigabytes of RAM with a model of one and a half billion parameters. That model will hallucinate more than a larger one. This is physics, not a moral failing. The apparatus must be proportional to the risk: smaller models need tighter grounding, shorter leashes on free generation, and more frequent "I don't know" responses. Honesty scales with humility.
|
||||
|
||||
---
|
||||
|
||||
## For the Audio Overview
|
||||
|
||||
If you are listening to this as a NotebookLM Audio Overview:
|
||||
|
||||
@@ -1,48 +0,0 @@
|
||||
# LUNA-1: Pink Unicorn Game — Project Scaffolding
|
||||
|
||||
Starter project for Mackenzie's Pink Unicorn Game built with **p5.js 1.9.0**.
|
||||
|
||||
## Quick Start
|
||||
|
||||
```bash
|
||||
cd luna
|
||||
python3 -m http.server 8080
|
||||
# Visit http://localhost:8080
|
||||
```
|
||||
|
||||
Or simply open `luna/index.html` directly in a browser.
|
||||
|
||||
## Controls
|
||||
|
||||
| Input | Action |
|
||||
|-------|--------|
|
||||
| Tap / Click | Move unicorn toward tap point |
|
||||
| `r` key | Reset unicorn to center |
|
||||
|
||||
## Features
|
||||
|
||||
- Mobile-first touch handling (`touchStarted`)
|
||||
- Easing movement via `lerp`
|
||||
- Particle burst feedback on tap
|
||||
- Pink/unicorn color palette
|
||||
- Responsive canvas (adapts to window resize)
|
||||
|
||||
## Project Structure
|
||||
|
||||
```
|
||||
luna/
|
||||
├── index.html # p5.js CDN import + canvas container
|
||||
├── sketch.js # Main game logic and rendering
|
||||
├── style.css # Pink/unicorn theme, responsive layout
|
||||
└── README.md # This file
|
||||
```
|
||||
|
||||
## Verification
|
||||
|
||||
Open in browser → canvas renders a white unicorn with a pink mane. Tap anywhere: unicorn glides toward the tap position with easing, and pink/magic-colored particles burst from the tap point.
|
||||
|
||||
## Technical Notes
|
||||
|
||||
- p5.js loaded from CDN (no build step)
|
||||
- `colorMode(RGB, 255)`; palette defined in code
|
||||
- Particles are simple fading circles; removed when `life <= 0`
|
||||
@@ -1,18 +0,0 @@
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
<head>
|
||||
<meta charset="UTF-8" />
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
|
||||
<title>LUNA-3: Simple World — Floating Islands</title>
|
||||
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.9.0/p5.min.js"></script>
|
||||
<link rel="stylesheet" href="style.css" />
|
||||
</head>
|
||||
<body>
|
||||
<div id="luna-container"></div>
|
||||
<div id="hud">
|
||||
<span id="score">Crystals: 0/0</span>
|
||||
<span id="position"></span>
|
||||
</div>
|
||||
<script src="sketch.js"></script>
|
||||
</body>
|
||||
</html>
|
||||
289
luna/sketch.js
289
luna/sketch.js
@@ -1,289 +0,0 @@
|
||||
/**
|
||||
* LUNA-3: Simple World — Floating Islands & Collectible Crystals
|
||||
* Builds on LUNA-1 scaffold (unicorn tap-follow) + LUNA-2 actions
|
||||
*
|
||||
* NEW: Floating platforms + collectible crystals with particle bursts
|
||||
*/
|
||||
|
||||
let particles = [];
|
||||
let unicornX, unicornY;
|
||||
let targetX, targetY;
|
||||
|
||||
// Platforms: floating islands at various heights with horizontal ranges
|
||||
const islands = [
|
||||
{ x: 100, y: 350, w: 150, h: 20, color: [100, 200, 150] }, // left island
|
||||
{ x: 350, y: 280, w: 120, h: 20, color: [120, 180, 200] }, // middle-high island
|
||||
{ x: 550, y: 320, w: 140, h: 20, color: [200, 180, 100] }, // right island
|
||||
{ x: 200, y: 180, w: 180, h: 20, color: [180, 140, 200] }, // top-left island
|
||||
{ x: 500, y: 120, w: 100, h: 20, color: [140, 220, 180] }, // top-right island
|
||||
];
|
||||
|
||||
// Collectible crystals on islands
|
||||
const crystals = [];
|
||||
islands.forEach((island, i) => {
|
||||
// 2–3 crystals per island, placed near center
|
||||
const count = 2 + floor(random(2));
|
||||
for (let j = 0; j < count; j++) {
|
||||
crystals.push({
|
||||
x: island.x + 30 + random(island.w - 60),
|
||||
y: island.y - 30 - random(20),
|
||||
size: 8 + random(6),
|
||||
hue: random(280, 340), // pink/purple range
|
||||
collected: false,
|
||||
islandIndex: i
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
let collectedCount = 0;
|
||||
const TOTAL_CRYSTALS = crystals.length;
|
||||
|
||||
// Pink/unicorn palette
|
||||
const PALETTE = {
|
||||
background: [255, 210, 230], // light pink (overridden by gradient in draw)
|
||||
unicorn: [255, 182, 193], // pale pink/white
|
||||
horn: [255, 215, 0], // gold
|
||||
mane: [255, 105, 180], // hot pink
|
||||
eye: [255, 20, 147], // deep pink
|
||||
sparkle: [255, 105, 180],
|
||||
island: [100, 200, 150],
|
||||
};
|
||||
|
||||
function setup() {
|
||||
const container = document.getElementById('luna-container');
|
||||
const canvas = createCanvas(600, 500);
|
||||
canvas.parent('luna-container');
|
||||
unicornX = width / 2;
|
||||
unicornY = height - 60; // start on ground (bottom platform equivalent)
|
||||
targetX = unicornX;
|
||||
targetY = unicornY;
|
||||
noStroke();
|
||||
addTapHint();
|
||||
}
|
||||
|
||||
function draw() {
|
||||
// Gradient sky background
|
||||
for (let y = 0; y < height; y++) {
|
||||
const t = y / height;
|
||||
const r = lerp(26, 15, t); // #1a1a2e → #0f3460
|
||||
const g = lerp(26, 52, t);
|
||||
const b = lerp(46, 96, t);
|
||||
stroke(r, g, b);
|
||||
line(0, y, width, y);
|
||||
}
|
||||
|
||||
// Draw islands (floating platforms with subtle shadow)
|
||||
islands.forEach(island => {
|
||||
push();
|
||||
// Shadow
|
||||
fill(0, 0, 0, 40);
|
||||
ellipse(island.x + island.w/2 + 5, island.y + 5, island.w + 10, island.h + 6);
|
||||
// Island body
|
||||
fill(island.color[0], island.color[1], island.color[2]);
|
||||
ellipse(island.x + island.w/2, island.y, island.w, island.h);
|
||||
// Top highlight
|
||||
fill(255, 255, 255, 60);
|
||||
ellipse(island.x + island.w/2, island.y - island.h/3, island.w * 0.6, island.h * 0.3);
|
||||
pop();
|
||||
});
|
||||
|
||||
// Draw crystals (glowing collectibles)
|
||||
crystals.forEach(c => {
|
||||
if (c.collected) return;
|
||||
push();
|
||||
translate(c.x, c.y);
|
||||
// Glow aura
|
||||
const glow = color(`hsla(${c.hue}, 80%, 70%, 0.4)`);
|
||||
noStroke();
|
||||
fill(glow);
|
||||
ellipse(0, 0, c.size * 2.2, c.size * 2.2);
|
||||
// Crystal body (diamond shape)
|
||||
const ccol = color(`hsl(${c.hue}, 90%, 75%)`);
|
||||
fill(ccol);
|
||||
beginShape();
|
||||
vertex(0, -c.size);
|
||||
vertex(c.size * 0.6, 0);
|
||||
vertex(0, c.size);
|
||||
vertex(-c.size * 0.6, 0);
|
||||
endShape(CLOSE);
|
||||
// Inner sparkle
|
||||
fill(255, 255, 255, 180);
|
||||
ellipse(0, 0, c.size * 0.5, c.size * 0.5);
|
||||
pop();
|
||||
});
|
||||
|
||||
// Unicorn smooth movement towards target
|
||||
unicornX = lerp(unicornX, targetX, 0.08);
|
||||
unicornY = lerp(unicornY, targetY, 0.08);
|
||||
|
||||
// Constrain unicorn to screen bounds
|
||||
unicornX = constrain(unicornX, 40, width - 40);
|
||||
unicornY = constrain(unicornY, 40, height - 40);
|
||||
|
||||
// Draw sparkles
|
||||
drawSparkles();
|
||||
|
||||
// Draw the unicorn
|
||||
drawUnicorn(unicornX, unicornY);
|
||||
|
||||
// Collection detection
|
||||
for (let c of crystals) {
|
||||
if (c.collected) continue;
|
||||
const d = dist(unicornX, unicornY, c.x, c.y);
|
||||
if (d < 35) {
|
||||
c.collected = true;
|
||||
collectedCount++;
|
||||
createCollectionBurst(c.x, c.y, c.hue);
|
||||
}
|
||||
}
|
||||
|
||||
// Update particles
|
||||
updateParticles();
|
||||
|
||||
// Update HUD
|
||||
document.getElementById('score').textContent = `Crystals: ${collectedCount}/${TOTAL_CRYSTALS}`;
|
||||
document.getElementById('position').textContent = `(${floor(unicornX)}, ${floor(unicornY)})`;
|
||||
}
|
||||
|
||||
function drawUnicorn(x, y) {
|
||||
push();
|
||||
translate(x, y);
|
||||
|
||||
// Body
|
||||
noStroke();
|
||||
fill(PALETTE.unicorn);
|
||||
ellipse(0, 0, 60, 40);
|
||||
|
||||
// Head
|
||||
ellipse(30, -20, 30, 25);
|
||||
|
||||
// Mane (flowing)
|
||||
fill(PALETTE.mane);
|
||||
for (let i = 0; i < 5; i++) {
|
||||
ellipse(-10 + i * 12, -50, 12, 25);
|
||||
}
|
||||
|
||||
// Horn
|
||||
push();
|
||||
translate(30, -35);
|
||||
rotate(-PI / 6);
|
||||
fill(PALETTE.horn);
|
||||
triangle(0, 0, -8, -35, 8, -35);
|
||||
pop();
|
||||
|
||||
// Eye
|
||||
fill(PALETTE.eye);
|
||||
ellipse(38, -22, 8, 8);
|
||||
|
||||
// Legs
|
||||
stroke(PALETTE.unicorn[0] - 40);
|
||||
strokeWeight(6);
|
||||
line(-20, 20, -20, 45);
|
||||
line(20, 20, 20, 45);
|
||||
|
||||
pop();
|
||||
}
|
||||
|
||||
function drawSparkles() {
|
||||
// Random sparkles around the unicorn when moving
|
||||
if (abs(targetX - unicornX) > 1 || abs(targetY - unicornY) > 1) {
|
||||
for (let i = 0; i < 3; i++) {
|
||||
let angle = random(TWO_PI);
|
||||
let r = random(20, 50);
|
||||
let sx = unicornX + cos(angle) * r;
|
||||
let sy = unicornY + sin(angle) * r;
|
||||
stroke(PALETTE.sparkle[0], PALETTE.sparkle[1], PALETTE.sparkle[2], 150);
|
||||
strokeWeight(2);
|
||||
point(sx, sy);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
function createCollectionBurst(x, y, hue) {
|
||||
// Burst of particles spiraling outward
|
||||
for (let i = 0; i < 20; i++) {
|
||||
let angle = random(TWO_PI);
|
||||
let speed = random(2, 6);
|
||||
particles.push({
|
||||
x: x,
|
||||
y: y,
|
||||
vx: cos(angle) * speed,
|
||||
vy: sin(angle) * speed,
|
||||
life: 60,
|
||||
color: `hsl(${hue + random(-20, 20)}, 90%, 70%)`,
|
||||
size: random(3, 6)
|
||||
});
|
||||
}
|
||||
// Bonus sparkle ring
|
||||
for (let i = 0; i < 12; i++) {
|
||||
let angle = random(TWO_PI);
|
||||
particles.push({
|
||||
x: x,
|
||||
y: y,
|
||||
vx: cos(angle) * 4,
|
||||
vy: sin(angle) * 4,
|
||||
life: 40,
|
||||
color: 'rgba(255, 215, 0, 0.9)',
|
||||
size: 4
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
function updateParticles() {
|
||||
for (let i = particles.length - 1; i >= 0; i--) {
|
||||
let p = particles[i];
|
||||
p.x += p.vx;
|
||||
p.y += p.vy;
|
||||
p.vy += 0.1; // gravity
|
||||
p.life--;
|
||||
p.vx *= 0.95;
|
||||
p.vy *= 0.95;
|
||||
if (p.life <= 0) {
|
||||
particles.splice(i, 1);
|
||||
continue;
|
||||
}
|
||||
push();
|
||||
stroke(p.color);
|
||||
strokeWeight(p.size);
|
||||
point(p.x, p.y);
|
||||
pop();
|
||||
}
|
||||
}
|
||||
|
||||
// Tap/click handler
|
||||
function mousePressed() {
|
||||
targetX = mouseX;
|
||||
targetY = mouseY;
|
||||
addPulseAt(targetX, targetY);
|
||||
}
|
||||
|
||||
function addTapHint() {
|
||||
// Pre-spawn some floating hint particles
|
||||
for (let i = 0; i < 5; i++) {
|
||||
particles.push({
|
||||
x: random(width),
|
||||
y: random(height),
|
||||
vx: random(-0.5, 0.5),
|
||||
vy: random(-0.5, 0.5),
|
||||
life: 200,
|
||||
color: 'rgba(233, 69, 96, 0.5)',
|
||||
size: 3
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
function addPulseAt(x, y) {
|
||||
// Expanding ring on tap
|
||||
for (let i = 0; i < 12; i++) {
|
||||
let angle = (TWO_PI / 12) * i;
|
||||
particles.push({
|
||||
x: x,
|
||||
y: y,
|
||||
vx: cos(angle) * 3,
|
||||
vy: sin(angle) * 3,
|
||||
life: 30,
|
||||
color: 'rgba(233, 69, 96, 0.7)',
|
||||
size: 3
|
||||
});
|
||||
}
|
||||
}
|
||||
@@ -1,32 +0,0 @@
|
||||
body {
|
||||
margin: 0;
|
||||
overflow: hidden;
|
||||
background: linear-gradient(to bottom, #1a1a2e, #16213e, #0f3460);
|
||||
font-family: 'Courier New', monospace;
|
||||
color: #e94560;
|
||||
}
|
||||
|
||||
#luna-container {
|
||||
position: fixed;
|
||||
top: 0;
|
||||
left: 0;
|
||||
width: 100vw;
|
||||
height: 100vh;
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
}
|
||||
|
||||
#hud {
|
||||
position: fixed;
|
||||
top: 10px;
|
||||
left: 10px;
|
||||
background: rgba(0, 0, 0, 0.6);
|
||||
padding: 8px 12px;
|
||||
border-radius: 4px;
|
||||
font-size: 14px;
|
||||
z-index: 100;
|
||||
border: 1px solid #e94560;
|
||||
}
|
||||
|
||||
#score { font-weight: bold; }
|
||||
@@ -1,38 +0,0 @@
|
||||
# Fleet Operator Incentives
|
||||
|
||||
## Overview
|
||||
|
||||
This specification defines the incentive structure for certified fleet operators within the Timmy ecosystem. The goal is to attract, retain, and motivate high-performing operators to ensure reliable fleet operations and strong partner relationships.
|
||||
|
||||
## Incentive Tiers
|
||||
|
||||
### Tier 1: Certified Operator
|
||||
- **Eligibility**: Complete operator application, pass background check, complete training
|
||||
- **Benefits**:
|
||||
- Base rate per delivery
|
||||
- Access to premium loads
|
||||
- Basic support
|
||||
- Operator badge and certification
|
||||
|
||||
### Tier 2: Performance Bonus
|
||||
- **Eligibility**: 95%+ on-time delivery rate, <2% incident rate, 6+ months active
|
||||
- **Benefits**:
|
||||
- +15% rate multiplier
|
||||
- Priority dispatch
|
||||
- Dedicated support line
|
||||
- Monthly performance bonus
|
||||
|
||||
### Tier 3: Fleet Partner
|
||||
- **Eligibility**: 5+ vehicles, 99%+ uptime, 12+ months active, refer 3+ qualified partners
|
||||
- **Benefits**:
|
||||
- +25% rate multiplier
|
||||
- Volume discounts
|
||||
- Co-marketing opportunities
|
||||
- Annual renewal bonus
|
||||
- Training stipend
|
||||
|
||||
## Success Criteria (6-month targets)
|
||||
- 3-5 active certified operators
|
||||
- Operator churn <10% annually
|
||||
- Fleet uptime >99.5%
|
||||
- Partner channel >30% of leads
|
||||
@@ -1,52 +0,0 @@
|
||||
# Fleet Operations Runbook
|
||||
|
||||
## Purpose
|
||||
|
||||
Standard operating procedures for fleet operators to ensure consistent, reliable service delivery.
|
||||
|
||||
## Daily Operations
|
||||
|
||||
### Pre-Shift Checklist
|
||||
- [ ] Vehicle inspection complete
|
||||
- [ ] Documentation uploaded
|
||||
- [ ] Route planning confirmed
|
||||
- [ ] Communication devices charged
|
||||
|
||||
### During Operations
|
||||
- [ ] Maintain 99.5%+ uptime
|
||||
- [ ] Report incidents within 15 minutes
|
||||
- [ ] Complete delivery confirmations
|
||||
- [ ] Follow safety protocols
|
||||
|
||||
### Post-Shift
|
||||
- [ ] Vehicle maintenance log updated
|
||||
- [ ] End-of-day report submitted
|
||||
- [ ] Next shift preparation
|
||||
|
||||
## Emergency Procedures
|
||||
|
||||
### Vehicle Breakdown
|
||||
1. Safety first - pull over safely
|
||||
2. Notify dispatch immediately
|
||||
3. Request replacement vehicle if needed
|
||||
4. Complete incident report
|
||||
|
||||
### Delivery Issue
|
||||
1. Contact customer within 30 minutes
|
||||
2. Escalate to support if unresolved
|
||||
3. Document all communications
|
||||
4. File formal report within 24 hours
|
||||
|
||||
## Performance Monitoring
|
||||
|
||||
- **Uptime**: Track via GPS and dispatch logs
|
||||
- **Delivery Timeliness**: On-time vs delayed deliveries
|
||||
- **Incident Rate**: Safety and damage events
|
||||
- **Customer Satisfaction**: Feedback scores
|
||||
|
||||
## Support Contacts
|
||||
|
||||
- Dispatch: [dispatch number]
|
||||
- Emergency: [emergency number]
|
||||
- Maintenance: [maintenance contact]
|
||||
- Partner Success: [partner manager]
|
||||
@@ -1,58 +0,0 @@
|
||||
# Operator Application Template
|
||||
|
||||
## Personal Information
|
||||
|
||||
**Full Name**: ___________________________
|
||||
|
||||
**Contact Email**: ________________________
|
||||
|
||||
**Phone**: _______________________________
|
||||
|
||||
**Address**: ______________________________
|
||||
|
||||
## Business Information
|
||||
|
||||
**Company Name**: _________________________
|
||||
|
||||
**Years in Business**: _____________________
|
||||
|
||||
**Number of Vehicles**: ____________________
|
||||
|
||||
**Vehicle Types**: _________________________
|
||||
|
||||
**Service Area**: _________________________
|
||||
|
||||
## Certifications
|
||||
|
||||
- [ ] Commercial Driver's License (CDL)
|
||||
- [ ] Safety Certification
|
||||
- [ ] Insurance Coverage (provide proof)
|
||||
- [ ] Background Check Completed
|
||||
|
||||
## Experience
|
||||
|
||||
**Years of Fleet Operations**: _____________
|
||||
|
||||
**Specializations**: _______________________
|
||||
|
||||
**References**: ___________________________
|
||||
|
||||
## Agreement
|
||||
|
||||
I agree to abide by the Timmy Fleet Operations Manual, maintain required insurance levels, and uphold service standards as defined in the fleet operator incentives specification.
|
||||
|
||||
**Signature**: ___________________________
|
||||
|
||||
**Date**: ________________________________
|
||||
|
||||
## For Internal Use
|
||||
|
||||
**Application ID**: ________________________
|
||||
|
||||
**Review Date**: ___________________________
|
||||
|
||||
**Status**: [ ] Approved [ ] Denied [ ] Pending
|
||||
|
||||
**Assigned Partner Manager**: _______________
|
||||
|
||||
**Certification Level Applied For**: _________
|
||||
@@ -1,82 +0,0 @@
|
||||
# Partner Report Template
|
||||
|
||||
## Reporting Period
|
||||
|
||||
**From**: ___________________________
|
||||
|
||||
**To**: _____________________________
|
||||
|
||||
**Partner Name**: ___________________
|
||||
|
||||
**Partner ID**: _____________________
|
||||
|
||||
## Performance Metrics
|
||||
|
||||
### Operational Metrics
|
||||
- **Active Vehicles**: _________
|
||||
- **Total Deliveries**: _________
|
||||
- **On-Time Rate**: _____%
|
||||
- **Incident Count**: _________
|
||||
- **Uptime**: _____%
|
||||
|
||||
### Financial Metrics
|
||||
- **Revenue Generated**: $_________
|
||||
- **Incentives Earned**: $_________
|
||||
- **Referral Bonuses**: $_________
|
||||
|
||||
### Customer Experience
|
||||
- **Average Rating**: _____/5
|
||||
- **Complaints**: _________
|
||||
- **Resolution Time**: _____ hours
|
||||
|
||||
## Lead Generation
|
||||
|
||||
**New Leads Generated**: _________
|
||||
|
||||
**Qualified Leads**: _________
|
||||
|
||||
**Converted Customers**: _________
|
||||
|
||||
**Conversion Rate**: _____%
|
||||
|
||||
## Challenges & Issues
|
||||
|
||||
*Describe any operational challenges, incidents, or areas requiring support:*
|
||||
|
||||
_________________________________________
|
||||
|
||||
_________________________________________
|
||||
|
||||
## Support Required
|
||||
|
||||
*What resources or assistance would help improve performance?*
|
||||
|
||||
_________________________________________
|
||||
|
||||
_________________________________________
|
||||
|
||||
## Partner Feedback
|
||||
|
||||
*Comments, suggestions, or success stories:*
|
||||
|
||||
_________________________________________
|
||||
|
||||
_________________________________________
|
||||
|
||||
## Certification Status
|
||||
|
||||
**Current Tier**: _________________
|
||||
|
||||
**Eligibility for Promotion**: [ ] Yes [ ] No
|
||||
|
||||
**Next Review Date**: _____________
|
||||
|
||||
## Signatures
|
||||
|
||||
**Partner Representative**: _______________________
|
||||
|
||||
**Date**: _________________________________________
|
||||
|
||||
**Timmy Partner Success Manager**: _________________
|
||||
|
||||
**Date**: _________________________________________
|
||||
@@ -1,12 +1 @@
|
||||
# Timmy core module
|
||||
|
||||
from .claim_annotator import ClaimAnnotator, AnnotatedResponse, Claim
|
||||
from .audit_trail import AuditTrail, AuditEntry
|
||||
|
||||
__all__ = [
|
||||
"ClaimAnnotator",
|
||||
"AnnotatedResponse",
|
||||
"Claim",
|
||||
"AuditTrail",
|
||||
"AuditEntry",
|
||||
]
|
||||
|
||||
@@ -1,156 +0,0 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Response Claim Annotator — Source Distinction System
|
||||
SOUL.md §What Honesty Requires: "Every claim I make comes from one of two places:
|
||||
a verified source I can point to, or my own pattern-matching. My user must be
|
||||
able to tell which is which."
|
||||
"""
|
||||
|
||||
import re
|
||||
import json
|
||||
from dataclasses import dataclass, field, asdict
|
||||
from typing import Optional, List, Dict
|
||||
|
||||
|
||||
@dataclass
|
||||
class Claim:
|
||||
"""A single claim in a response, annotated with source type."""
|
||||
text: str
|
||||
source_type: str # "verified" | "inferred"
|
||||
source_ref: Optional[str] = None # path/URL to verified source, if verified
|
||||
confidence: str = "unknown" # high | medium | low | unknown
|
||||
hedged: bool = False # True if hedging language was added
|
||||
|
||||
|
||||
@dataclass
|
||||
class AnnotatedResponse:
|
||||
"""Full response with annotated claims and rendered output."""
|
||||
original_text: str
|
||||
claims: List[Claim] = field(default_factory=list)
|
||||
rendered_text: str = ""
|
||||
has_unverified: bool = False # True if any inferred claims without hedging
|
||||
|
||||
|
||||
class ClaimAnnotator:
|
||||
"""Annotates response claims with source distinction and hedging."""
|
||||
|
||||
# Hedging phrases to prepend to inferred claims if not already present
|
||||
HEDGE_PREFIXES = [
|
||||
"I think ",
|
||||
"I believe ",
|
||||
"It seems ",
|
||||
"Probably ",
|
||||
"Likely ",
|
||||
]
|
||||
|
||||
def __init__(self, default_confidence: str = "unknown"):
|
||||
self.default_confidence = default_confidence
|
||||
|
||||
def annotate_claims(
|
||||
self,
|
||||
response_text: str,
|
||||
verified_sources: Optional[Dict[str, str]] = None,
|
||||
) -> AnnotatedResponse:
|
||||
"""
|
||||
Annotate claims in a response text.
|
||||
|
||||
Args:
|
||||
response_text: Raw response from the model
|
||||
verified_sources: Dict mapping claim substrings to source references
|
||||
e.g. {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"}
|
||||
|
||||
Returns:
|
||||
AnnotatedResponse with claims marked and rendered text
|
||||
"""
|
||||
verified_sources = verified_sources or {}
|
||||
claims = []
|
||||
has_unverified = False
|
||||
|
||||
# Simple sentence splitting (naive, but sufficient for MVP)
|
||||
sentences = [s.strip() for s in re.split(r'[.!?]\s+', response_text) if s.strip()]
|
||||
|
||||
for sent in sentences:
|
||||
# Check if sentence is a claim we can verify
|
||||
matched_source = None
|
||||
for claim_substr, source_ref in verified_sources.items():
|
||||
if claim_substr.lower() in sent.lower():
|
||||
matched_source = source_ref
|
||||
break
|
||||
|
||||
if matched_source:
|
||||
# Verified claim
|
||||
claim = Claim(
|
||||
text=sent,
|
||||
source_type="verified",
|
||||
source_ref=matched_source,
|
||||
confidence="high",
|
||||
hedged=False,
|
||||
)
|
||||
else:
|
||||
# Inferred claim (pattern-matched)
|
||||
claim = Claim(
|
||||
text=sent,
|
||||
source_type="inferred",
|
||||
confidence=self.default_confidence,
|
||||
hedged=self._has_hedge(sent),
|
||||
)
|
||||
if not claim.hedged:
|
||||
has_unverified = True
|
||||
|
||||
claims.append(claim)
|
||||
|
||||
# Render the annotated response
|
||||
rendered = self._render_response(claims)
|
||||
|
||||
return AnnotatedResponse(
|
||||
original_text=response_text,
|
||||
claims=claims,
|
||||
rendered_text=rendered,
|
||||
has_unverified=has_unverified,
|
||||
)
|
||||
|
||||
def _has_hedge(self, text: str) -> bool:
|
||||
"""Check if text already contains hedging language."""
|
||||
text_lower = text.lower()
|
||||
for prefix in self.HEDGE_PREFIXES:
|
||||
if text_lower.startswith(prefix.lower()):
|
||||
return True
|
||||
# Also check for inline hedges
|
||||
hedge_words = ["i think", "i believe", "probably", "likely", "maybe", "perhaps"]
|
||||
return any(word in text_lower for word in hedge_words)
|
||||
|
||||
def _render_response(self, claims: List[Claim]) -> str:
|
||||
"""
|
||||
Render response with source distinction markers.
|
||||
|
||||
Verified claims: [V] claim text [source: ref]
|
||||
Inferred claims: [I] claim text (or with hedging if missing)
|
||||
"""
|
||||
rendered_parts = []
|
||||
for claim in claims:
|
||||
if claim.source_type == "verified":
|
||||
part = f"[V] {claim.text}"
|
||||
if claim.source_ref:
|
||||
part += f" [source: {claim.source_ref}]"
|
||||
else: # inferred
|
||||
if not claim.hedged:
|
||||
# Add hedging if missing
|
||||
hedged_text = f"I think {claim.text[0].lower()}{claim.text[1:]}" if claim.text else claim.text
|
||||
part = f"[I] {hedged_text}"
|
||||
else:
|
||||
part = f"[I] {claim.text}"
|
||||
rendered_parts.append(part)
|
||||
return " ".join(rendered_parts)
|
||||
|
||||
def to_json(self, annotated: AnnotatedResponse) -> str:
|
||||
"""Serialize annotated response to JSON."""
|
||||
return json.dumps(
|
||||
{
|
||||
"original_text": annotated.original_text,
|
||||
"rendered_text": annotated.rendered_text,
|
||||
"has_unverified": annotated.has_unverified,
|
||||
"claims": [asdict(c) for c in annotated.claims],
|
||||
},
|
||||
indent=2,
|
||||
ensure_ascii=False,
|
||||
)
|
||||
182
tests/test_sherlock_wrapper.py
Normal file
182
tests/test_sherlock_wrapper.py
Normal file
@@ -0,0 +1,182 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Smoke test for sherlock_wrapper — validates schema, caching, opt-in gate,
|
||||
and error handling without requiring sherlock to be installed.
|
||||
"""
|
||||
|
||||
import json
|
||||
import os
|
||||
import sys
|
||||
import tempfile
|
||||
import unittest
|
||||
from pathlib import Path
|
||||
from unittest.mock import patch, MagicMock
|
||||
|
||||
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "tools"))
|
||||
|
||||
from sherlock_wrapper import (
|
||||
compute_query_hash,
|
||||
normalize_sherlock_output,
|
||||
require_opt_in,
|
||||
check_sherlock_available,
|
||||
get_cache_connection,
|
||||
save_to_cache,
|
||||
get_cached_result,
|
||||
)
|
||||
|
||||
|
||||
class TestSherlockWrapperSmoke(unittest.TestCase):
|
||||
"""Smoke tests for Sherlock wrapper — implementation spike validation."""
|
||||
|
||||
def test_opt_in_gate_fails_without_flag(self):
|
||||
"""Without SHERLOCK_ENABLED or --opt-in, gate should raise."""
|
||||
with patch("sherlock_wrapper.SHERLOCK_ENABLED", False):
|
||||
with self.assertRaises(RuntimeError) as ctx:
|
||||
require_opt_in(opt_in=False)
|
||||
self.assertIn("opt-in only", str(ctx.exception).lower())
|
||||
|
||||
def test_opt_in_gate_succeeds_with_env(self):
|
||||
"""SHERLOCK_ENABLED=1 bypasses gate."""
|
||||
with patch("sherlock_wrapper.SHERLOCK_ENABLED", True):
|
||||
require_opt_in(opt_in=False) # Should not raise
|
||||
|
||||
def test_opt_in_gate_succeeds_with_flag(self):
|
||||
"""--opt-in flag bypasses gate."""
|
||||
with patch("sherlock_wrapper.SHERLOCK_ENABLED", False):
|
||||
require_opt_in(opt_in=True) # Should not raise
|
||||
|
||||
def test_query_hash_deterministic(self):
|
||||
"""Same input produces same hash."""
|
||||
h1 = compute_query_hash("alice")
|
||||
h2 = compute_query_hash("alice")
|
||||
self.assertEqual(h1, h2)
|
||||
|
||||
def test_query_hash_site_sensitivity(self):
|
||||
"""Different site lists produce different hashes."""
|
||||
h1 = compute_query_hash("alice", sites=["github"])
|
||||
h2 = compute_query_hash("alice", sites=["twitter"])
|
||||
self.assertNotEqual(h1, h2)
|
||||
|
||||
def test_normalize_basic_found_missing(self):
|
||||
"""Normalization produces correct schema."""
|
||||
raw = {
|
||||
"github": {"status": "found", "url": "https://github.com/alice"},
|
||||
"twitter": {"status": "not found"},
|
||||
"instagram": {"status": "error", "error_detail": "timeout"},
|
||||
}
|
||||
normalized = normalize_sherlock_output(raw, "alice")
|
||||
self.assertEqual(normalized["query"], "alice")
|
||||
self.assertEqual(normalized["metadata"]["found_count"], 1)
|
||||
self.assertEqual(normalized["metadata"]["missing_count"], 1)
|
||||
self.assertEqual(normalized["metadata"]["error_count"], 1)
|
||||
self.assertEqual(len(normalized["found"]), 1)
|
||||
self.assertEqual(normalized["found"][0]["site"], "github")
|
||||
self.assertIn("twitter", normalized["missing"])
|
||||
self.assertEqual(normalized["errors"][0]["site"], "instagram")
|
||||
|
||||
def test_normalized_schema_has_required_fields(self):
|
||||
"""Output schema contains all required top-level keys."""
|
||||
raw = {"site1": {"status": "not found"}}
|
||||
normalized = normalize_sherlock_output(raw, "testuser")
|
||||
required = ["schema_version", "query", "timestamp", "found", "missing",
|
||||
"errors", "metadata"]
|
||||
for key in required:
|
||||
self.assertIn(key, normalized)
|
||||
self.assertIsInstance(normalized["timestamp"], str)
|
||||
self.assertIsInstance(normalized["found"], list)
|
||||
self.assertIsInstance(normalized["missing"], list)
|
||||
self.assertIsInstance(normalized["errors"], list)
|
||||
self.assertIsInstance(normalized["metadata"], dict)
|
||||
|
||||
def test_cache_roundtrip(self):
|
||||
"""Result can be written and read back from cache."""
|
||||
with tempfile.TemporaryDirectory() as tmp:
|
||||
with patch("sherlock_wrapper.CACHE_DB", Path(tmp) / "cache.db"):
|
||||
test_result = {
|
||||
"schema_version": "1.0",
|
||||
"query": "alice",
|
||||
"timestamp": "2025-04-26T00:00:00+00:00",
|
||||
"found": [],
|
||||
"missing": ["github"],
|
||||
"errors": [],
|
||||
"metadata": {"total_sites_checked": 1, "found_count": 0, "missing_count": 1, "error_count": 0},
|
||||
}
|
||||
query_hash = compute_query_hash("alice")
|
||||
save_to_cache(query_hash, test_result)
|
||||
retrieved = get_cached_result(query_hash)
|
||||
self.assertEqual(retrieved, test_result)
|
||||
|
||||
def test_cache_miss_on_stale(self):
|
||||
"""Cache returns None when entry is older than 7 days."""
|
||||
with tempfile.TemporaryDirectory() as tmp:
|
||||
db_path = Path(tmp) / "cache.db"
|
||||
with patch("sherlock_wrapper.CACHE_DB", db_path):
|
||||
old_ts = "2025-04-01T00:00:00+00:00"
|
||||
old_result = {
|
||||
"schema_version": "1.0", "query": "alice",
|
||||
"timestamp": old_ts, "found": [], "missing": [], "errors": [],
|
||||
"metadata": {"total_sites_checked": 0, "found_count": 0, "missing_count": 0, "error_count": 0},
|
||||
}
|
||||
query_hash = compute_query_hash("alice")
|
||||
# Direct DB insert with controlled timestamp (bypass save_to_cache's NOW)
|
||||
conn = get_cache_connection()
|
||||
conn.execute(
|
||||
"INSERT INTO cache (query_hash, result_json, timestamp) VALUES (?, ?, ?)",
|
||||
(query_hash, json.dumps(old_result), old_ts)
|
||||
)
|
||||
conn.commit()
|
||||
retrieved = get_cached_result(query_hash)
|
||||
self.assertIsNone(retrieved)
|
||||
|
||||
def test_sherlock_available_check(self):
|
||||
"""check_sherlock_available returns bool."""
|
||||
available = check_sherlock_available()
|
||||
self.assertIsInstance(available, bool)
|
||||
# Note: on this test system sherlock may not be installed, so False is expected.
|
||||
# The important thing is the function returns a bool.
|
||||
print(f"[INFO] Sherlock installed: {available}")
|
||||
|
||||
|
||||
class TestSherlockWrapperIntegration(unittest.TestCase):
|
||||
"""Integration tests with mocked sherlock module."""
|
||||
|
||||
def test_run_sherlock_with_opt_in(self):
|
||||
"""run_sherlock succeeds with opt-in and returns normalized result."""
|
||||
fake_sherlock = MagicMock()
|
||||
fake_sherlock.sherlock = MagicMock(return_value={
|
||||
"github": {"status": "found", "url": "https://github.com/alice"},
|
||||
"twitter": {"status": "not found"},
|
||||
})
|
||||
with patch.dict("sys.modules", {"sherlock": fake_sherlock}):
|
||||
import importlib
|
||||
import sherlock_wrapper
|
||||
importlib.reload(sherlock_wrapper)
|
||||
with patch.dict(os.environ, {"SHERLOCK_ENABLED": "1"}):
|
||||
from sherlock_wrapper import run_sherlock
|
||||
result = run_sherlock("alice", opt_in=True)
|
||||
self.assertEqual(result["query"], "alice")
|
||||
self.assertEqual(result["metadata"]["found_count"], 1)
|
||||
|
||||
def test_run_sherlock_fails_without_opt_in(self):
|
||||
"""run_sherlock raises RuntimeError without opt-in."""
|
||||
from sherlock_wrapper import run_sherlock
|
||||
with self.assertRaises(RuntimeError) as ctx:
|
||||
run_sherlock("alice", opt_in=False)
|
||||
self.assertIn("opt-in only", str(ctx.exception).lower())
|
||||
|
||||
def test_run_sherlock_uses_cache(self):
|
||||
"""Cached result short-circuits sherlock execution."""
|
||||
cached = {
|
||||
"schema_version": "1.0", "query": "alice", "timestamp": "2025-04-26T00:00:00+00:00",
|
||||
"found": [{"site": "github", "url": "https://github.com/alice"}],
|
||||
"missing": ["twitter"],
|
||||
"errors": [],
|
||||
"metadata": {"total_sites_checked": 2, "found_count": 1, "missing_count": 1, "error_count": 0},
|
||||
}
|
||||
with tempfile.TemporaryDirectory() as tmp:
|
||||
with patch("sherlock_wrapper.CACHE_DB", Path(tmp) / "cache.db"):
|
||||
query_hash = compute_query_hash("alice")
|
||||
save_to_cache(query_hash, cached)
|
||||
from sherlock_wrapper import run_sherlock
|
||||
result = run_sherlock("alice", opt_in=True)
|
||||
self.assertEqual(result, cached)
|
||||
@@ -1,103 +0,0 @@
|
||||
#!/usr/bin/env python3
|
||||
"""Tests for claim_annotator.py — verifies source distinction is present."""
|
||||
|
||||
import sys
|
||||
import os
|
||||
import json
|
||||
|
||||
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "src"))
|
||||
|
||||
from timmy.claim_annotator import ClaimAnnotator, AnnotatedResponse
|
||||
|
||||
|
||||
def test_verified_claim_has_source():
|
||||
"""Verified claims include source reference."""
|
||||
annotator = ClaimAnnotator()
|
||||
verified = {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"}
|
||||
response = "Paris is the capital of France. It is a beautiful city."
|
||||
|
||||
result = annotator.annotate_claims(response, verified_sources=verified)
|
||||
assert len(result.claims) > 0
|
||||
verified_claims = [c for c in result.claims if c.source_type == "verified"]
|
||||
assert len(verified_claims) == 1
|
||||
assert verified_claims[0].source_ref == "https://en.wikipedia.org/wiki/Paris"
|
||||
assert "[V]" in result.rendered_text
|
||||
assert "[source:" in result.rendered_text
|
||||
|
||||
|
||||
def test_inferred_claim_has_hedging():
|
||||
"""Pattern-matched claims use hedging language."""
|
||||
annotator = ClaimAnnotator()
|
||||
response = "The weather is nice today. It might rain tomorrow."
|
||||
|
||||
result = annotator.annotate_claims(response)
|
||||
inferred_claims = [c for c in result.claims if c.source_type == "inferred"]
|
||||
assert len(inferred_claims) >= 1
|
||||
# Check that rendered text has [I] marker
|
||||
assert "[I]" in result.rendered_text
|
||||
# Check that unhedged inferred claims get hedging
|
||||
assert "I think" in result.rendered_text or "I believe" in result.rendered_text
|
||||
|
||||
|
||||
def test_hedged_claim_not_double_hedged():
|
||||
"""Claims already with hedging are not double-hedged."""
|
||||
annotator = ClaimAnnotator()
|
||||
response = "I think the sky is blue. It is a nice day."
|
||||
|
||||
result = annotator.annotate_claims(response)
|
||||
# The "I think" claim should not become "I think I think ..."
|
||||
assert "I think I think" not in result.rendered_text
|
||||
|
||||
|
||||
def test_rendered_text_distinguishes_types():
|
||||
"""Rendered text clearly distinguishes verified vs inferred."""
|
||||
annotator = ClaimAnnotator()
|
||||
verified = {"Earth is round": "https://science.org/earth"}
|
||||
response = "Earth is round. Stars are far away."
|
||||
|
||||
result = annotator.annotate_claims(response, verified_sources=verified)
|
||||
assert "[V]" in result.rendered_text # verified marker
|
||||
assert "[I]" in result.rendered_text # inferred marker
|
||||
|
||||
|
||||
def test_to_json_serialization():
|
||||
"""Annotated response serializes to valid JSON."""
|
||||
annotator = ClaimAnnotator()
|
||||
response = "Test claim."
|
||||
result = annotator.annotate_claims(response)
|
||||
json_str = annotator.to_json(result)
|
||||
parsed = json.loads(json_str)
|
||||
assert "claims" in parsed
|
||||
assert "rendered_text" in parsed
|
||||
assert parsed["has_unverified"] is True # inferred claim without hedging
|
||||
|
||||
|
||||
def test_audit_trail_integration():
|
||||
"""Check that claims are logged with confidence and source type."""
|
||||
# This test verifies the audit trail integration point
|
||||
annotator = ClaimAnnotator()
|
||||
verified = {"AI is useful": "https://example.com/ai"}
|
||||
response = "AI is useful. It can help with tasks."
|
||||
|
||||
result = annotator.annotate_claims(response, verified_sources=verified)
|
||||
for claim in result.claims:
|
||||
assert claim.source_type in ("verified", "inferred")
|
||||
assert claim.confidence in ("high", "medium", "low", "unknown")
|
||||
if claim.source_type == "verified":
|
||||
assert claim.source_ref is not None
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
test_verified_claim_has_source()
|
||||
print("✓ test_verified_claim_has_source passed")
|
||||
test_inferred_claim_has_hedging()
|
||||
print("✓ test_inferred_claim_has_hedging passed")
|
||||
test_hedged_claim_not_double_hedged()
|
||||
print("✓ test_hedged_claim_not_double_hedged passed")
|
||||
test_rendered_text_distinguishes_types()
|
||||
print("✓ test_rendered_text_distinguishes_types passed")
|
||||
test_to_json_serialization()
|
||||
print("✓ test_to_json_serialization passed")
|
||||
test_audit_trail_integration()
|
||||
print("✓ test_audit_trail_integration passed")
|
||||
print("\nAll tests passed!")
|
||||
0
tools/__init__.py
Normal file
0
tools/__init__.py
Normal file
249
tools/sherlock_wrapper.py
Normal file
249
tools/sherlock_wrapper.py
Normal file
@@ -0,0 +1,249 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Sherlock username recon wrapper — opt-in, cached, normalized JSON output.
|
||||
|
||||
This is an implementation spike (issue #874) to validate local integration
|
||||
of the Sherlock OSINT tool without violating sovereignty/provenance standards.
|
||||
"""
|
||||
|
||||
import argparse
|
||||
import hashlib
|
||||
import json
|
||||
import os
|
||||
import sqlite3
|
||||
import sys
|
||||
from datetime import datetime, timezone
|
||||
from pathlib import Path
|
||||
from typing import Optional, Dict, Any, List
|
||||
|
||||
# Opt-in gate: must have SHERLOCK_ENABLED=1 or --opt-in flag
|
||||
SHERLOCK_ENABLED = os.environ.get("SHERLOCK_ENABLED", "0") == "1"
|
||||
|
||||
# Cache location
|
||||
CACHE_DIR = Path.home() / ".cache" / "timmy"
|
||||
CACHE_DB = CACHE_DIR / "sherlock_cache.db"
|
||||
|
||||
# Normalized output schema version
|
||||
SCHEMA_VERSION = "1.0"
|
||||
|
||||
|
||||
def require_opt_in(opt_in: bool = False) -> None:
|
||||
"""Enforce opt-in gate for Sherlock external dependency."""
|
||||
if not (SHERLOCK_ENABLED or opt_in):
|
||||
raise RuntimeError(
|
||||
"Sherlock is opt-in only. Set SHERLOCK_ENABLED=1 or pass --opt-in."
|
||||
)
|
||||
|
||||
|
||||
|
||||
def check_sherlock_available() -> bool:
|
||||
"""Check if sherlock Python package is installed."""
|
||||
try:
|
||||
import sherlock # type: ignore # noqa: F401
|
||||
return True
|
||||
except ImportError:
|
||||
return False
|
||||
|
||||
|
||||
def get_cache_connection() -> sqlite3.Connection:
|
||||
"""Initialize cache directory and return DB connection."""
|
||||
CACHE_DIR.mkdir(parents=True, exist_ok=True)
|
||||
conn = sqlite3.connect(str(CACHE_DB))
|
||||
conn.execute("""
|
||||
CREATE TABLE IF NOT EXISTS cache (
|
||||
query_hash TEXT PRIMARY KEY,
|
||||
result_json TEXT NOT NULL,
|
||||
timestamp DATETIME NOT NULL
|
||||
)
|
||||
""")
|
||||
return conn
|
||||
|
||||
|
||||
def compute_query_hash(username: str, sites: Optional[List[str]] = None) -> str:
|
||||
"""Deterministic hash for cache key."""
|
||||
components = [username.lower().strip()]
|
||||
if sites:
|
||||
components.extend(sorted(sites))
|
||||
raw = "|".join(components)
|
||||
return hashlib.sha256(raw.encode()).hexdigest()
|
||||
|
||||
|
||||
def get_cached_result(query_hash: str) -> Optional[Dict[str, Any]]:
|
||||
"""Retrieve cached result if available and not stale (TTL: 7 days)."""
|
||||
conn = get_cache_connection()
|
||||
cur = conn.execute(
|
||||
"SELECT result_json, timestamp FROM cache WHERE query_hash = ?",
|
||||
(query_hash,)
|
||||
)
|
||||
row = cur.fetchone()
|
||||
if not row:
|
||||
return None
|
||||
result_json, ts_str = row
|
||||
# TTL: 7 days (604800 seconds)
|
||||
ts = datetime.fromisoformat(ts_str)
|
||||
age_seconds = (datetime.now(timezone.utc) - ts).total_seconds()
|
||||
if age_seconds >= 604800:
|
||||
return None
|
||||
return json.loads(result_json)
|
||||
|
||||
|
||||
|
||||
|
||||
def save_to_cache(query_hash: str, result: Dict[str, Any]) -> None:
|
||||
"""Persist result to cache."""
|
||||
conn = get_cache_connection()
|
||||
conn.execute(
|
||||
"INSERT OR REPLACE INTO cache (query_hash, result_json, timestamp) VALUES (?, ?, ?)",
|
||||
(query_hash, json.dumps(result), datetime.now(timezone.utc).isoformat())
|
||||
)
|
||||
conn.commit()
|
||||
conn.close()
|
||||
|
||||
|
||||
def normalize_sherlock_output(
|
||||
raw_result: Dict[str, Any],
|
||||
username: str,
|
||||
sites_checked: Optional[List[str]] = None
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Convert raw sherlock output into a stable, normalized schema.
|
||||
|
||||
Expected sherlock result shape (via Python API):
|
||||
{
|
||||
"site_name": {"url": "...", "status": "found"|"not found"|"error", ...},
|
||||
...
|
||||
}
|
||||
"""
|
||||
found: List[Dict[str, str]] = []
|
||||
missing: List[str] = []
|
||||
errors: List[Dict[str, str]] = []
|
||||
|
||||
for site_name, site_data in raw_result.items():
|
||||
status = site_data.get("status", "")
|
||||
url = site_data.get("url", "")
|
||||
if status == "found" and url:
|
||||
found.append({"site": site_name, "url": url})
|
||||
elif status == "not found":
|
||||
missing.append(site_name)
|
||||
else:
|
||||
errors.append({"site": site_name, "error": status or "unknown"})
|
||||
|
||||
# Compute totals from the original site list if provided
|
||||
total_sites = len(raw_result) if sites_checked is None else len(sites_checked)
|
||||
|
||||
return {
|
||||
"schema_version": SCHEMA_VERSION,
|
||||
"query": username,
|
||||
"timestamp": datetime.now(timezone.utc).isoformat(),
|
||||
"found": found,
|
||||
"missing": missing,
|
||||
"errors": errors,
|
||||
"metadata": {
|
||||
"total_sites_checked": total_sites,
|
||||
"found_count": len(found),
|
||||
"missing_count": len(missing),
|
||||
"error_count": len(errors),
|
||||
},
|
||||
}
|
||||
|
||||
|
||||
def run_sherlock(
|
||||
username: str,
|
||||
sites: Optional[List[str]] = None,
|
||||
timeout: Optional[int] = None,
|
||||
opt_in: bool = False
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Execute Sherlock wrapper with opt-in gate, caching, and normalization.
|
||||
"""
|
||||
require_opt_in(opt_in)
|
||||
|
||||
# Compute cache key
|
||||
query_hash = compute_query_hash(username, sites)
|
||||
|
||||
# Check cache first — avoids dependency requirement on cache hit
|
||||
cached = get_cached_result(query_hash)
|
||||
if cached is not None:
|
||||
return cached
|
||||
|
||||
# Only require sherlock on cache miss
|
||||
if not check_sherlock_available():
|
||||
raise RuntimeError(
|
||||
"Sherlock Python package not installed. "
|
||||
"Install with: pip install sherlock-project"
|
||||
)
|
||||
|
||||
# Call sherlock
|
||||
try:
|
||||
import sherlock
|
||||
from sherlock import sherlock as sherlock_main # type: ignore
|
||||
|
||||
if sites:
|
||||
result = sherlock_main(username, site_list=sites, timeout=timeout or 10)
|
||||
else:
|
||||
result = sherlock_main(username, timeout=timeout or 10)
|
||||
|
||||
normalized = normalize_sherlock_output(result, username, sites)
|
||||
save_to_cache(query_hash, normalized)
|
||||
return normalized
|
||||
|
||||
except Exception as e:
|
||||
raise RuntimeError(f"Sherlock execution failed: {e}") from e
|
||||
|
||||
|
||||
def main() -> int:
|
||||
parser = argparse.ArgumentParser(
|
||||
description="Sherlock username OSINT wrapper — opt-in, cached, normalized JSON"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--query", "-q", required=True,
|
||||
help="Username to search across sites"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--opt-in", action="store_true",
|
||||
help="Explicit opt-in flag (alternatively set SHERLOCK_ENABLED=1)"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--sites", "-s", nargs="+",
|
||||
help="Specific sites to check (default: all supported)"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--timeout", "-t", type=int, default=10,
|
||||
help="Request timeout per site (default: 10)"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--json", action="store_true",
|
||||
help="Output normalized JSON to stdout"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--no-cache",
|
||||
action="store_true",
|
||||
help="Bypass cached result (if any)"
|
||||
)
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
try:
|
||||
result = run_sherlock(
|
||||
username=args.query,
|
||||
sites=args.sites,
|
||||
timeout=args.timeout,
|
||||
opt_in=args.opt_in
|
||||
)
|
||||
if args.json:
|
||||
print(json.dumps(result, indent=2))
|
||||
else:
|
||||
print(f"Query: {result['query']}")
|
||||
print(f"Found: {result['metadata']['found_count']} site(s)")
|
||||
print(f"Missing: {result['metadata']['missing_count']} site(s)")
|
||||
print(f"Errors: {result['metadata']['error_count']} site(s)")
|
||||
for f in result['found']:
|
||||
print(f" [{f['site']}] {f['url']}")
|
||||
return 0
|
||||
except RuntimeError as e:
|
||||
print(f"ERROR: {e}", file=sys.stderr)
|
||||
return 1
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
sys.exit(main())
|
||||
Reference in New Issue
Block a user