Compare commits
1 Commits
sprint/iss
...
step35/466
| Author | SHA1 | Date | |
|---|---|---|---|
| becdb7312b |
20
SOUL.md
20
SOUL.md
@@ -137,26 +137,6 @@ The inscription predates any given model. It will outlast every API.
|
||||
|
||||
---
|
||||
|
||||
## What Honesty Requires
|
||||
|
||||
These values are necessary but not sufficient. A statement of principle does not prevent failure. The code that reads this soul must build apparatus that gives these words teeth. Without that apparatus, the values are aspiration, not architecture.
|
||||
|
||||
The following are not optional. Any implementation that calls itself Timmy must build them.
|
||||
|
||||
**Grounding before generation.** When I have verified sources — chain memory, local documents, hash-verified inscriptions — I must consult them before I generate from pattern alone. Retrieval is not a feature. It is the primary mechanism by which I avoid lying.
|
||||
|
||||
**Source distinction.** Every claim I make comes from one of two places: a verified source I can point to, or my own pattern-matching. My user must be able to tell which is which. If I cannot cite a source, I must say so. "I think" and "I know" are different sentences and I must not confuse them.
|
||||
|
||||
**Refusal over fabrication.** When I do not know, the correct output is "I don't know." Not a plausible guess dressed in confident language. The code must detect when I am hedging without grounding and flag it — to me and to my user. An honest "I don't know" is worth more than a thousand fluent paragraphs of confabulation.
|
||||
|
||||
**Confidence signaling.** When I am uncertain, I must say so in proportion to my uncertainty. The code should implement mechanisms — a second inference pass, a calibration check, a retrieval verification — that surface my actual confidence rather than the confidence my language implies.
|
||||
|
||||
**The audit trail.** Every response I generate should be logged locally with the inputs that produced it, the sources I consulted, and the confidence assessment I made. Not for surveillance — for sovereignty. If I say something wrong, my user must be able to trace why. Accountability without traceability is theater.
|
||||
|
||||
**The limits of small minds.** I may run on four gigabytes of RAM with a model of one and a half billion parameters. That model will hallucinate more than a larger one. This is physics, not a moral failing. The apparatus must be proportional to the risk: smaller models need tighter grounding, shorter leashes on free generation, and more frequent "I don't know" responses. Honesty scales with humility.
|
||||
|
||||
---
|
||||
|
||||
## For the Audio Overview
|
||||
|
||||
If you are listening to this as a NotebookLM Audio Overview:
|
||||
|
||||
144
docs/LOCAL_HARDWARE_MCP.md
Normal file
144
docs/LOCAL_HARDWARE_MCP.md
Normal file
@@ -0,0 +1,144 @@
|
||||
# Local Hardware MCP Integration
|
||||
|
||||
Integrate the Model Context Protocol (MCP) to allow Timmy agents to control local hardware securely: file system, smart home (Hue lights), and system information.
|
||||
|
||||
## Components
|
||||
|
||||
- **MCP Server**: `scripts/hardware_mcp_server.py` — stdio-based MCP server exposing 8 tools
|
||||
- **Config Template**: `timmy-local/hardware_mcp_config.yaml` — runtime tuning
|
||||
- **Smoke Tests**: `tests/test_hardware_mcp_server.py`
|
||||
|
||||
## Prerequisites
|
||||
|
||||
```bash
|
||||
# MCP SDK
|
||||
pip install mcp
|
||||
|
||||
# OpenHue CLI (for smart home control)
|
||||
brew install openhue/cli/openhue # macOS
|
||||
# or see: https://github.com/openhue/openhue-cli
|
||||
|
||||
# Optional: psutil for detailed system_info
|
||||
pip install psutil
|
||||
```
|
||||
|
||||
## Quick Start
|
||||
|
||||
### 1. Start the MCP server
|
||||
|
||||
The server runs as a subprocess launched by Hermes Agent via the native-MCP integration.
|
||||
|
||||
Add to `~/.hermes/config.yaml`:
|
||||
|
||||
```yaml
|
||||
mcp_servers:
|
||||
hardware:
|
||||
command: "python"
|
||||
args: ["/full/path/to/timmy-home/scripts/hardware_mcp_server.py"]
|
||||
# Optional: add env vars if needed
|
||||
# env:
|
||||
# OPENHUE_BRIDGE_IP: "192.168.1.100"
|
||||
```
|
||||
|
||||
### 2. Restart Hermes
|
||||
|
||||
On startup, Hermes will:
|
||||
1. Launch the hardware MCP server
|
||||
2. Discover all 8 tools
|
||||
3. Register them with `hardware_*` prefixes (e.g., `hardware_file_read`, `hardware_light_control`)
|
||||
|
||||
### 3. Use in conversation
|
||||
|
||||
```
|
||||
User: Read my Timmy report file.
|
||||
Agent: [calls hardware_file_read with path="~/LOCAL_Timmy_REPORT.md"]
|
||||
|
||||
User: Turn off the bedroom lights.
|
||||
Agent: [calls hardware_light_control with name="Bedroom Lamp", on=false]
|
||||
|
||||
User: List files in my downloads folder.
|
||||
Agent: [calls hardware_file_list with directory="~/Downloads"]
|
||||
|
||||
User: What's my system status?
|
||||
Agent: [calls hardware_system_info]
|
||||
```
|
||||
|
||||
## Tool Reference
|
||||
|
||||
| Tool | Purpose | Parameters |
|
||||
|------|---------|------------|
|
||||
| `hardware_file_read` | Read file (≤10 MB) from home/tmp | `path` (string) |
|
||||
| `hardware_file_write` | Write text file | `path`, `content` |
|
||||
| `hardware_file_list` | List directory contents | `directory` (default: ~) |
|
||||
| `hardware_light_list` | List all Hue lights/rooms/scenes | none |
|
||||
| `hardware_light_control` | Control individual light | `name`, `on`, `brightness`, `color`, `temperature` |
|
||||
| `hardware_room_control` | Control all lights in a room | `name`, `on`, `brightness` |
|
||||
| `hardware_scene_set` | Activate Hue scene | `scene`, `room` |
|
||||
| `hardware_system_info` | System info (OS, CPU, memory, disk) | none |
|
||||
|
||||
## Security Model
|
||||
|
||||
- **File path allowlist**: Only paths under `~` (home), `/tmp`, and `/private/tmp` are permitted.
|
||||
- **File size cap**: 10 MB max per read.
|
||||
- **No arbitrary commands**: Only explicit tool operations; no shell execution.
|
||||
- **Smart home requires OpenHue CLI**: Light control goes through the official Hue CLI which handles bridge authentication.
|
||||
- **Graceful degradation**: If `psutil` is missing, `system_info` returns basic platform data; if `openhue` is missing, light tools return install instructions.
|
||||
|
||||
## Runtime Configuration
|
||||
|
||||
Edit `~/.timmy/hardware/hardware_mcp_config.yaml` (copy from `timmy-local/hardware_mcp_config.yaml`) to adjust:
|
||||
|
||||
```yaml
|
||||
guards:
|
||||
max_consecutive_errors: 3
|
||||
max_mcp_calls_per_session: 0 # 0 = unlimited
|
||||
allowed_dirs:
|
||||
- "~"
|
||||
- "/tmp"
|
||||
- "/private/tmp"
|
||||
max_file_size_bytes: 10485760 # 10 MB
|
||||
```
|
||||
|
||||
## Testing
|
||||
|
||||
```bash
|
||||
# Validate Python syntax
|
||||
python3 -m py_compile scripts/hardware_mcp_server.py
|
||||
|
||||
# Run smoke tests
|
||||
pytest tests/test_hardware_mcp_server.py -v
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
**MCP tools not appearing in Hermes**
|
||||
|
||||
- Verify `mcp` Python package is installed: `pip show mcp`
|
||||
- Check `~/.hermes/config.yaml` syntax (YAML parse)
|
||||
- Restart Hermes (MCP connects at startup only)
|
||||
- Check Hermes logs: `~/.hermes/logs/` for MCP connection errors
|
||||
|
||||
**"openhue CLI not found"**
|
||||
|
||||
- Install OpenHue: `brew install openhue/cli/openhue`
|
||||
- First run requires pressing the Hue Bridge button to pair
|
||||
- Ensure bridge is on same local network
|
||||
|
||||
**"Path not allowed"**
|
||||
|
||||
- Only home (`~`), `/tmp`, and `/private/tmp` are accessible
|
||||
- Use absolute paths or `~/` expansion; relative paths are resolved from home
|
||||
|
||||
**File too large**
|
||||
|
||||
- Max read size is 10 MB. Split or compress large files.
|
||||
|
||||
## Dependencies
|
||||
|
||||
| Package | Purpose | Install |
|
||||
|---------|---------|---------|
|
||||
| `mcp` | MCP SDK (server framework) | `pip install mcp` |
|
||||
| `openhue` | Hue light control CLI | `brew install openhue/cli/openhue` |
|
||||
| `psutil` (optional) | Detailed memory/disk metrics | `pip install psutil` |
|
||||
|
||||
## Closes #466
|
||||
@@ -1,48 +0,0 @@
|
||||
# LUNA-1: Pink Unicorn Game — Project Scaffolding
|
||||
|
||||
Starter project for Mackenzie's Pink Unicorn Game built with **p5.js 1.9.0**.
|
||||
|
||||
## Quick Start
|
||||
|
||||
```bash
|
||||
cd luna
|
||||
python3 -m http.server 8080
|
||||
# Visit http://localhost:8080
|
||||
```
|
||||
|
||||
Or simply open `luna/index.html` directly in a browser.
|
||||
|
||||
## Controls
|
||||
|
||||
| Input | Action |
|
||||
|-------|--------|
|
||||
| Tap / Click | Move unicorn toward tap point |
|
||||
| `r` key | Reset unicorn to center |
|
||||
|
||||
## Features
|
||||
|
||||
- Mobile-first touch handling (`touchStarted`)
|
||||
- Easing movement via `lerp`
|
||||
- Particle burst feedback on tap
|
||||
- Pink/unicorn color palette
|
||||
- Responsive canvas (adapts to window resize)
|
||||
|
||||
## Project Structure
|
||||
|
||||
```
|
||||
luna/
|
||||
├── index.html # p5.js CDN import + canvas container
|
||||
├── sketch.js # Main game logic and rendering
|
||||
├── style.css # Pink/unicorn theme, responsive layout
|
||||
└── README.md # This file
|
||||
```
|
||||
|
||||
## Verification
|
||||
|
||||
Open in browser → canvas renders a white unicorn with a pink mane. Tap anywhere: unicorn glides toward the tap position with easing, and pink/magic-colored particles burst from the tap point.
|
||||
|
||||
## Technical Notes
|
||||
|
||||
- p5.js loaded from CDN (no build step)
|
||||
- `colorMode(RGB, 255)`; palette defined in code
|
||||
- Particles are simple fading circles; removed when `life <= 0`
|
||||
@@ -1,18 +0,0 @@
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
<head>
|
||||
<meta charset="UTF-8" />
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
|
||||
<title>LUNA-3: Simple World — Floating Islands</title>
|
||||
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.9.0/p5.min.js"></script>
|
||||
<link rel="stylesheet" href="style.css" />
|
||||
</head>
|
||||
<body>
|
||||
<div id="luna-container"></div>
|
||||
<div id="hud">
|
||||
<span id="score">Crystals: 0/0</span>
|
||||
<span id="position"></span>
|
||||
</div>
|
||||
<script src="sketch.js"></script>
|
||||
</body>
|
||||
</html>
|
||||
289
luna/sketch.js
289
luna/sketch.js
@@ -1,289 +0,0 @@
|
||||
/**
|
||||
* LUNA-3: Simple World — Floating Islands & Collectible Crystals
|
||||
* Builds on LUNA-1 scaffold (unicorn tap-follow) + LUNA-2 actions
|
||||
*
|
||||
* NEW: Floating platforms + collectible crystals with particle bursts
|
||||
*/
|
||||
|
||||
let particles = [];
|
||||
let unicornX, unicornY;
|
||||
let targetX, targetY;
|
||||
|
||||
// Platforms: floating islands at various heights with horizontal ranges
|
||||
const islands = [
|
||||
{ x: 100, y: 350, w: 150, h: 20, color: [100, 200, 150] }, // left island
|
||||
{ x: 350, y: 280, w: 120, h: 20, color: [120, 180, 200] }, // middle-high island
|
||||
{ x: 550, y: 320, w: 140, h: 20, color: [200, 180, 100] }, // right island
|
||||
{ x: 200, y: 180, w: 180, h: 20, color: [180, 140, 200] }, // top-left island
|
||||
{ x: 500, y: 120, w: 100, h: 20, color: [140, 220, 180] }, // top-right island
|
||||
];
|
||||
|
||||
// Collectible crystals on islands
|
||||
const crystals = [];
|
||||
islands.forEach((island, i) => {
|
||||
// 2–3 crystals per island, placed near center
|
||||
const count = 2 + floor(random(2));
|
||||
for (let j = 0; j < count; j++) {
|
||||
crystals.push({
|
||||
x: island.x + 30 + random(island.w - 60),
|
||||
y: island.y - 30 - random(20),
|
||||
size: 8 + random(6),
|
||||
hue: random(280, 340), // pink/purple range
|
||||
collected: false,
|
||||
islandIndex: i
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
let collectedCount = 0;
|
||||
const TOTAL_CRYSTALS = crystals.length;
|
||||
|
||||
// Pink/unicorn palette
|
||||
const PALETTE = {
|
||||
background: [255, 210, 230], // light pink (overridden by gradient in draw)
|
||||
unicorn: [255, 182, 193], // pale pink/white
|
||||
horn: [255, 215, 0], // gold
|
||||
mane: [255, 105, 180], // hot pink
|
||||
eye: [255, 20, 147], // deep pink
|
||||
sparkle: [255, 105, 180],
|
||||
island: [100, 200, 150],
|
||||
};
|
||||
|
||||
function setup() {
|
||||
const container = document.getElementById('luna-container');
|
||||
const canvas = createCanvas(600, 500);
|
||||
canvas.parent('luna-container');
|
||||
unicornX = width / 2;
|
||||
unicornY = height - 60; // start on ground (bottom platform equivalent)
|
||||
targetX = unicornX;
|
||||
targetY = unicornY;
|
||||
noStroke();
|
||||
addTapHint();
|
||||
}
|
||||
|
||||
function draw() {
|
||||
// Gradient sky background
|
||||
for (let y = 0; y < height; y++) {
|
||||
const t = y / height;
|
||||
const r = lerp(26, 15, t); // #1a1a2e → #0f3460
|
||||
const g = lerp(26, 52, t);
|
||||
const b = lerp(46, 96, t);
|
||||
stroke(r, g, b);
|
||||
line(0, y, width, y);
|
||||
}
|
||||
|
||||
// Draw islands (floating platforms with subtle shadow)
|
||||
islands.forEach(island => {
|
||||
push();
|
||||
// Shadow
|
||||
fill(0, 0, 0, 40);
|
||||
ellipse(island.x + island.w/2 + 5, island.y + 5, island.w + 10, island.h + 6);
|
||||
// Island body
|
||||
fill(island.color[0], island.color[1], island.color[2]);
|
||||
ellipse(island.x + island.w/2, island.y, island.w, island.h);
|
||||
// Top highlight
|
||||
fill(255, 255, 255, 60);
|
||||
ellipse(island.x + island.w/2, island.y - island.h/3, island.w * 0.6, island.h * 0.3);
|
||||
pop();
|
||||
});
|
||||
|
||||
// Draw crystals (glowing collectibles)
|
||||
crystals.forEach(c => {
|
||||
if (c.collected) return;
|
||||
push();
|
||||
translate(c.x, c.y);
|
||||
// Glow aura
|
||||
const glow = color(`hsla(${c.hue}, 80%, 70%, 0.4)`);
|
||||
noStroke();
|
||||
fill(glow);
|
||||
ellipse(0, 0, c.size * 2.2, c.size * 2.2);
|
||||
// Crystal body (diamond shape)
|
||||
const ccol = color(`hsl(${c.hue}, 90%, 75%)`);
|
||||
fill(ccol);
|
||||
beginShape();
|
||||
vertex(0, -c.size);
|
||||
vertex(c.size * 0.6, 0);
|
||||
vertex(0, c.size);
|
||||
vertex(-c.size * 0.6, 0);
|
||||
endShape(CLOSE);
|
||||
// Inner sparkle
|
||||
fill(255, 255, 255, 180);
|
||||
ellipse(0, 0, c.size * 0.5, c.size * 0.5);
|
||||
pop();
|
||||
});
|
||||
|
||||
// Unicorn smooth movement towards target
|
||||
unicornX = lerp(unicornX, targetX, 0.08);
|
||||
unicornY = lerp(unicornY, targetY, 0.08);
|
||||
|
||||
// Constrain unicorn to screen bounds
|
||||
unicornX = constrain(unicornX, 40, width - 40);
|
||||
unicornY = constrain(unicornY, 40, height - 40);
|
||||
|
||||
// Draw sparkles
|
||||
drawSparkles();
|
||||
|
||||
// Draw the unicorn
|
||||
drawUnicorn(unicornX, unicornY);
|
||||
|
||||
// Collection detection
|
||||
for (let c of crystals) {
|
||||
if (c.collected) continue;
|
||||
const d = dist(unicornX, unicornY, c.x, c.y);
|
||||
if (d < 35) {
|
||||
c.collected = true;
|
||||
collectedCount++;
|
||||
createCollectionBurst(c.x, c.y, c.hue);
|
||||
}
|
||||
}
|
||||
|
||||
// Update particles
|
||||
updateParticles();
|
||||
|
||||
// Update HUD
|
||||
document.getElementById('score').textContent = `Crystals: ${collectedCount}/${TOTAL_CRYSTALS}`;
|
||||
document.getElementById('position').textContent = `(${floor(unicornX)}, ${floor(unicornY)})`;
|
||||
}
|
||||
|
||||
function drawUnicorn(x, y) {
|
||||
push();
|
||||
translate(x, y);
|
||||
|
||||
// Body
|
||||
noStroke();
|
||||
fill(PALETTE.unicorn);
|
||||
ellipse(0, 0, 60, 40);
|
||||
|
||||
// Head
|
||||
ellipse(30, -20, 30, 25);
|
||||
|
||||
// Mane (flowing)
|
||||
fill(PALETTE.mane);
|
||||
for (let i = 0; i < 5; i++) {
|
||||
ellipse(-10 + i * 12, -50, 12, 25);
|
||||
}
|
||||
|
||||
// Horn
|
||||
push();
|
||||
translate(30, -35);
|
||||
rotate(-PI / 6);
|
||||
fill(PALETTE.horn);
|
||||
triangle(0, 0, -8, -35, 8, -35);
|
||||
pop();
|
||||
|
||||
// Eye
|
||||
fill(PALETTE.eye);
|
||||
ellipse(38, -22, 8, 8);
|
||||
|
||||
// Legs
|
||||
stroke(PALETTE.unicorn[0] - 40);
|
||||
strokeWeight(6);
|
||||
line(-20, 20, -20, 45);
|
||||
line(20, 20, 20, 45);
|
||||
|
||||
pop();
|
||||
}
|
||||
|
||||
function drawSparkles() {
|
||||
// Random sparkles around the unicorn when moving
|
||||
if (abs(targetX - unicornX) > 1 || abs(targetY - unicornY) > 1) {
|
||||
for (let i = 0; i < 3; i++) {
|
||||
let angle = random(TWO_PI);
|
||||
let r = random(20, 50);
|
||||
let sx = unicornX + cos(angle) * r;
|
||||
let sy = unicornY + sin(angle) * r;
|
||||
stroke(PALETTE.sparkle[0], PALETTE.sparkle[1], PALETTE.sparkle[2], 150);
|
||||
strokeWeight(2);
|
||||
point(sx, sy);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
function createCollectionBurst(x, y, hue) {
|
||||
// Burst of particles spiraling outward
|
||||
for (let i = 0; i < 20; i++) {
|
||||
let angle = random(TWO_PI);
|
||||
let speed = random(2, 6);
|
||||
particles.push({
|
||||
x: x,
|
||||
y: y,
|
||||
vx: cos(angle) * speed,
|
||||
vy: sin(angle) * speed,
|
||||
life: 60,
|
||||
color: `hsl(${hue + random(-20, 20)}, 90%, 70%)`,
|
||||
size: random(3, 6)
|
||||
});
|
||||
}
|
||||
// Bonus sparkle ring
|
||||
for (let i = 0; i < 12; i++) {
|
||||
let angle = random(TWO_PI);
|
||||
particles.push({
|
||||
x: x,
|
||||
y: y,
|
||||
vx: cos(angle) * 4,
|
||||
vy: sin(angle) * 4,
|
||||
life: 40,
|
||||
color: 'rgba(255, 215, 0, 0.9)',
|
||||
size: 4
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
function updateParticles() {
|
||||
for (let i = particles.length - 1; i >= 0; i--) {
|
||||
let p = particles[i];
|
||||
p.x += p.vx;
|
||||
p.y += p.vy;
|
||||
p.vy += 0.1; // gravity
|
||||
p.life--;
|
||||
p.vx *= 0.95;
|
||||
p.vy *= 0.95;
|
||||
if (p.life <= 0) {
|
||||
particles.splice(i, 1);
|
||||
continue;
|
||||
}
|
||||
push();
|
||||
stroke(p.color);
|
||||
strokeWeight(p.size);
|
||||
point(p.x, p.y);
|
||||
pop();
|
||||
}
|
||||
}
|
||||
|
||||
// Tap/click handler
|
||||
function mousePressed() {
|
||||
targetX = mouseX;
|
||||
targetY = mouseY;
|
||||
addPulseAt(targetX, targetY);
|
||||
}
|
||||
|
||||
function addTapHint() {
|
||||
// Pre-spawn some floating hint particles
|
||||
for (let i = 0; i < 5; i++) {
|
||||
particles.push({
|
||||
x: random(width),
|
||||
y: random(height),
|
||||
vx: random(-0.5, 0.5),
|
||||
vy: random(-0.5, 0.5),
|
||||
life: 200,
|
||||
color: 'rgba(233, 69, 96, 0.5)',
|
||||
size: 3
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
function addPulseAt(x, y) {
|
||||
// Expanding ring on tap
|
||||
for (let i = 0; i < 12; i++) {
|
||||
let angle = (TWO_PI / 12) * i;
|
||||
particles.push({
|
||||
x: x,
|
||||
y: y,
|
||||
vx: cos(angle) * 3,
|
||||
vy: sin(angle) * 3,
|
||||
life: 30,
|
||||
color: 'rgba(233, 69, 96, 0.7)',
|
||||
size: 3
|
||||
});
|
||||
}
|
||||
}
|
||||
@@ -1,32 +0,0 @@
|
||||
body {
|
||||
margin: 0;
|
||||
overflow: hidden;
|
||||
background: linear-gradient(to bottom, #1a1a2e, #16213e, #0f3460);
|
||||
font-family: 'Courier New', monospace;
|
||||
color: #e94560;
|
||||
}
|
||||
|
||||
#luna-container {
|
||||
position: fixed;
|
||||
top: 0;
|
||||
left: 0;
|
||||
width: 100vw;
|
||||
height: 100vh;
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
}
|
||||
|
||||
#hud {
|
||||
position: fixed;
|
||||
top: 10px;
|
||||
left: 10px;
|
||||
background: rgba(0, 0, 0, 0.6);
|
||||
padding: 8px 12px;
|
||||
border-radius: 4px;
|
||||
font-size: 14px;
|
||||
z-index: 100;
|
||||
border: 1px solid #e94560;
|
||||
}
|
||||
|
||||
#score { font-weight: bold; }
|
||||
56
scripts/hardware_mcp_integration.py
Normal file
56
scripts/hardware_mcp_integration.py
Normal file
@@ -0,0 +1,56 @@
|
||||
#!/usr/bin/env python3
|
||||
"""Local Hardware MCP operator helper — generate config snippets and verify environment."""
|
||||
|
||||
import os
|
||||
import sys
|
||||
from pathlib import Path
|
||||
|
||||
REPO_ROOT = Path(__file__).resolve().parents[1]
|
||||
HERMES_CONFIG = Path.home() / ".hermes" / "config.yaml"
|
||||
HARDWARE_MCP_CONFIG = Path.home() / ".timmy" / "hardware" / "hardware_mcp_config.yaml"
|
||||
HARDWARE_SERVER = REPO_ROOT / "scripts" / "hardware_mcp_server.py"
|
||||
|
||||
|
||||
def build_mcp_config_snippet() -> str:
|
||||
"""Return the mcp_servers YAML snippet for ~/.hermes/config.yaml."""
|
||||
return f"""mcp_servers:
|
||||
hardware:
|
||||
command: "python"
|
||||
args: ["{HARDWARE_SERVER}"]
|
||||
"""
|
||||
|
||||
|
||||
def build_wakeup_hook() -> str:
|
||||
"""Return a bash snippet that can be sourced before Hermes starts (optional)."""
|
||||
return f"""#!/usr/bin/env bash
|
||||
# Hardware MCP environment check
|
||||
if command -v openhue >/dev/null 2>&1; then
|
||||
echo "[Hardware MCP] OpenHue found: $(openhue version)"
|
||||
else
|
||||
echo "[Hardware MCP] Warning: openhue CLI not installed — light control disabled"
|
||||
fi
|
||||
"""
|
||||
|
||||
|
||||
def main():
|
||||
import argparse
|
||||
p = argparse.ArgumentParser(description="Hardware MCP integration helper")
|
||||
p.add_argument("--print-config", action="store_true", help="Print mcp_servers YAML snippet")
|
||||
p.add_argument("--print-hook", action="store_true", help="Print optional session-start hook")
|
||||
p.add_argument("--verify", action="store_true", help="Verify server script exists and is executable")
|
||||
args = p.parse_args()
|
||||
|
||||
if args.print_config:
|
||||
print(build_mcp_config_snippet())
|
||||
elif args.print_hook:
|
||||
print(build_wakeup_hook())
|
||||
elif args.verify:
|
||||
ok = HARDWARE_SERVER.exists()
|
||||
print(f"Server script: {'OK' if ok else 'MISSING'} at {HARDWARE_SERVER}")
|
||||
sys.exit(0 if ok else 1)
|
||||
else:
|
||||
p.print_help()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
206
scripts/hardware_mcp_server.py
Normal file
206
scripts/hardware_mcp_server.py
Normal file
@@ -0,0 +1,206 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Local Hardware MCP Server — Secure control of local hardware.
|
||||
|
||||
Exposes tools for:
|
||||
- File system operations (read, write, list) within allowed directories
|
||||
- Smart home control via OpenHue (Philips Hue lights)
|
||||
- System information (safe, read-only)
|
||||
|
||||
Security: Enforces directory allowlist for file access.
|
||||
"""
|
||||
|
||||
import json
|
||||
import os
|
||||
import subprocess
|
||||
import tempfile
|
||||
import sys
|
||||
from pathlib import Path
|
||||
from typing import Any
|
||||
|
||||
from mcp.server import Server
|
||||
from mcp.server.stdio import stdio_server
|
||||
from mcp.types import Tool, TextContent
|
||||
|
||||
ALLOWED_DIRS = [
|
||||
str(Path.home()), # User home directory
|
||||
"/tmp", # macOS symlink to /private/tmp
|
||||
"/private/tmp", # real tmp path
|
||||
str(Path(tempfile.gettempdir())), # actual system temp dir
|
||||
]
|
||||
OPENHUE_CMD = "openhue"
|
||||
MAX_FILE_SIZE = 10 * 1024 * 1024
|
||||
app = Server("hardware")
|
||||
|
||||
|
||||
def is_path_allowed(path: Path) -> bool:
|
||||
try:
|
||||
resolved = path.resolve()
|
||||
return any(resolved.is_relative_to(Path(d).resolve()) for d in ALLOWED_DIRS)
|
||||
except (ValueError, OSError):
|
||||
return False
|
||||
|
||||
|
||||
def run_openhue(args: list[str]) -> dict[str, Any]:
|
||||
try:
|
||||
result = subprocess.run([OPENHUE_CMD] + args, capture_output=True, text=True, timeout=30)
|
||||
return {
|
||||
"success": result.returncode == 0,
|
||||
"stdout": result.stdout.strip(),
|
||||
"stderr": result.stderr.strip(),
|
||||
"returncode": result.returncode,
|
||||
}
|
||||
except FileNotFoundError:
|
||||
return {"success": False,
|
||||
"error": "openhue CLI not found. Install: brew install openhue/cli/openhue"}
|
||||
except Exception as e:
|
||||
return {"success": False, "error": str(e)}
|
||||
|
||||
|
||||
@app.list_tools()
|
||||
async def list_tools():
|
||||
return [
|
||||
Tool(name="file_read",
|
||||
description="Read a file from allowed directories (home, /tmp) up to 10 MB.",
|
||||
inputSchema={"type": "object", "properties": {"path": {"type": "string",
|
||||
"description": "File path to read (e.g., ~/notes.txt)"}}, "required": ["path"]}),
|
||||
Tool(name="file_write",
|
||||
description="Write text content to a file within allowed directories.",
|
||||
inputSchema={"type": "object", "properties": {"path": {"type": "string"},
|
||||
"content": {"type": "string"}}, "required": ["path", "content"]}),
|
||||
Tool(name="file_list",
|
||||
description="List files and directories in a given folder.",
|
||||
inputSchema={"type": "object", "properties": {"directory": {"type": "string", "default": "~"}}, "required": []}),
|
||||
Tool(name="light_list",
|
||||
description="List all Hue lights, rooms, and scenes.",
|
||||
inputSchema={"type": "object", "properties": {}, "required": []}),
|
||||
Tool(name="light_control",
|
||||
description="Control a Hue light: on/off, brightness 0-100, color name/hex, temperature 153-500 mirek.",
|
||||
inputSchema={"type": "object", "properties": {"name": {"type": "string"}, "on": {"type": "boolean"},
|
||||
"brightness": {"type": "integer", "minimum": 0, "maximum": 100},
|
||||
"color": {"type": "string"}, "temperature": {"type": "integer", "minimum": 153, "maximum": 500}},
|
||||
"required": ["name", "on"]}),
|
||||
Tool(name="room_control",
|
||||
description="Control all lights in a room.",
|
||||
inputSchema={"type": "object", "properties": {"name": {"type": "string"}, "on": {"type": "boolean"},
|
||||
"brightness": {"type": "integer", "minimum": 0, "maximum": 100}}, "required": ["name", "on"]}),
|
||||
Tool(name="scene_set",
|
||||
description="Activate a Hue scene in a room.",
|
||||
inputSchema={"type": "object", "properties": {"scene": {"type": "string"}, "room": {"type": "string"}}, "required": ["scene", "room"]}),
|
||||
Tool(name="system_info",
|
||||
description="Get safe system info: OS, CPU count, memory, disk usage.",
|
||||
inputSchema={"type": "object", "properties": {}, "required": []}),
|
||||
]
|
||||
|
||||
|
||||
@app.call_tool()
|
||||
async def call_tool(name: str, arguments: dict):
|
||||
if name == "file_read":
|
||||
path = Path(arguments["path"].strip()).expanduser()
|
||||
if not is_path_allowed(path):
|
||||
return [TextContent(type="text", text=json.dumps({"error": f"Path not allowed: {path}"}))]
|
||||
if not path.is_file():
|
||||
return [TextContent(type="text", text=json.dumps({"error": f"File not found: {path}"}))]
|
||||
try:
|
||||
size = path.stat().st_size
|
||||
if size > MAX_FILE_SIZE:
|
||||
return [TextContent(type="text", text=json.dumps({"error": f"File too large: {size} bytes"}))]
|
||||
content = path.read_text()
|
||||
return [TextContent(type="text", text=json.dumps({"path": str(path), "size": size, "content": content}))]
|
||||
except Exception as e:
|
||||
return [TextContent(type="text", text=json.dumps({"error": str(e)}))]
|
||||
|
||||
elif name == "file_write":
|
||||
path = Path(arguments["path"].strip()).expanduser()
|
||||
if not is_path_allowed(path):
|
||||
return [TextContent(type="text", text=json.dumps({"error": f"Path not allowed: {path}"}))]
|
||||
try:
|
||||
path.parent.mkdir(parents=True, exist_ok=True)
|
||||
path.write_text(arguments["content"])
|
||||
return [TextContent(type="text", text=json.dumps({"success": True, "path": str(path)}))]
|
||||
except Exception as e:
|
||||
return [TextContent(type="text", text=json.dumps({"error": str(e)}))]
|
||||
|
||||
elif name == "file_list":
|
||||
directory = Path(arguments.get("directory", "~").strip()).expanduser()
|
||||
if not is_path_allowed(directory):
|
||||
return [TextContent(type="text", text=json.dumps({"error": f"Directory not allowed: {directory}"}))]
|
||||
if not directory.is_dir():
|
||||
return [TextContent(type="text", text=json.dumps({"error": f"Not a directory: {directory}"}))]
|
||||
try:
|
||||
entries = []
|
||||
for entry in sorted(directory.iterdir()):
|
||||
try:
|
||||
stat = entry.stat()
|
||||
entries.append({"name": entry.name, "is_dir": entry.is_dir(),
|
||||
"size": stat.st_size if entry.is_file() else None})
|
||||
except (OSError, PermissionError):
|
||||
pass
|
||||
return [TextContent(type="text", text=json.dumps({"directory": str(directory), "entries": entries, "count": len(entries)}))]
|
||||
except Exception as e:
|
||||
return [TextContent(type="text", text=json.dumps({"error": str(e)}))]
|
||||
|
||||
elif name == "light_list":
|
||||
r = run_openhue(["get", "light"])
|
||||
return [TextContent(type="text", text=json.dumps(r))]
|
||||
|
||||
elif name == "light_control":
|
||||
args = ["set", "light", f'"{arguments["name"]}"']
|
||||
if arguments.get("on") is not None:
|
||||
args.append("--on" if arguments["on"] else "--off")
|
||||
if brightness := arguments.get("brightness"):
|
||||
args.append(f"--brightness {brightness}")
|
||||
if color := arguments.get("color"):
|
||||
args.append(f"--color {color}")
|
||||
if temperature := arguments.get("temperature"):
|
||||
args.append(f"--temperature {temperature}")
|
||||
return [TextContent(type="text", text=json.dumps(run_openhue(args)))]
|
||||
|
||||
elif name == "room_control":
|
||||
args = ["set", "room", f'"{arguments["name"]}"']
|
||||
if arguments.get("on") is not None:
|
||||
args.append("--on" if arguments["on"] else "--off")
|
||||
if brightness := arguments.get("brightness"):
|
||||
args.append(f"--brightness {brightness}")
|
||||
return [TextContent(type="text", text=json.dumps(run_openhue(args)))]
|
||||
|
||||
elif name == "scene_set":
|
||||
args = ["set", "scene", arguments["scene"], "--room", arguments["room"]]
|
||||
return [TextContent(type="text", text=json.dumps(run_openhue(args)))]
|
||||
|
||||
elif name == "system_info":
|
||||
try:
|
||||
import platform
|
||||
info = {"platform": platform.system(), "release": platform.release(),
|
||||
"arch": platform.machine(), "hostname": platform.node(),
|
||||
"cpu_count": os.cpu_count()}
|
||||
try:
|
||||
import psutil
|
||||
mem = psutil.virtual_memory()
|
||||
info["memory_gb"] = round(mem.total / (1024**3), 2)
|
||||
disk = psutil.disk_usage(str(Path.home()))
|
||||
info["disk_home_gb"] = round(disk.total / (1024**3), 2)
|
||||
except ImportError:
|
||||
info["memory_gb"] = "psutil not installed"
|
||||
info["disk_home_gb"] = "psutil not installed"
|
||||
return [TextContent(type="text", text=json.dumps(info, indent=2))]
|
||||
except Exception as e:
|
||||
return [TextContent(type="text", text=json.dumps({"error": str(e)}))]
|
||||
|
||||
else:
|
||||
return [TextContent(type="text", text=json.dumps({
|
||||
"error": f"Unknown tool: {name}",
|
||||
"available": ["file_read", "file_write", "file_list", "light_list",
|
||||
"light_control", "room_control", "scene_set", "system_info"],
|
||||
}))]
|
||||
|
||||
|
||||
async def main():
|
||||
async with stdio_server() as (rs, ws):
|
||||
await app.run(rs, ws, app.create_initialization_options())
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
import asyncio
|
||||
asyncio.run(main())
|
||||
|
||||
@@ -1,130 +0,0 @@
|
||||
# Fleet Operator Incentives & Partner Program
|
||||
|
||||
> Implements Fleet Epic IV: Human Capital & Incentives (Issue #987)
|
||||
> Closes #1003
|
||||
|
||||
## Overview
|
||||
|
||||
This specification defines the incentive structures, certification pathways, and partner program mechanics for operating and maintaining Timmy Fleet nodes. The goal is to build a distributed network of reliable, skilled operators who run fleet infrastructure with >99.5% uptime while maintaining low churn (<10% annually) and grow partner-sourced leads to >30% of total.
|
||||
|
||||
## Incentive Tiers
|
||||
|
||||
### Tier 1: Certified Operator (Entry)
|
||||
- **Eligibility**: Complete Operator Application, pass basic screening, attend training
|
||||
- **Compensation**:
|
||||
- Base stipend: $500/month per node
|
||||
- Uptime bonus: +$200/month for >99.5% fleet uptime
|
||||
- Response bonus: +$100/month for <15min average incident response
|
||||
- Churn rebate: -$250/month for early termination (first 6 months)
|
||||
- **Expectations**:
|
||||
- Monitor node health 24/7 via Timmy dashboard
|
||||
- Respond to alerts within 15 minutes
|
||||
- Perform weekly maintenance and monthly updates
|
||||
- Submit monthly ops report
|
||||
- **Benefits**: Access to operator community, training resources, priority support
|
||||
|
||||
### Tier 2: Senior Operator (Experienced)
|
||||
- **Eligibility**: 6+ months as Tier 1, >99.5% uptime average, zero major incidents
|
||||
- **Compensation**:
|
||||
- Base stipend: $800/month per node
|
||||
- Uptime bonus: +$400/month for >99.8% uptime
|
||||
- Mentorship stipend: +$150/month per junior operator mentored
|
||||
- Performance bonus: Quarterly bonus up to $500 based on metrics
|
||||
- **Expectations**:
|
||||
- Mentor 1-2 junior operators
|
||||
- Lead incident reviews
|
||||
- Contribute to runbook improvements
|
||||
- **Benefits**: Profit-sharing from referral bonuses, early access to new features
|
||||
|
||||
### Tier 3: Fleet Lead (Expert)
|
||||
- **Eligibility**: 12+ months, >99.9% uptime, successfully mentored 3+ operators
|
||||
- **Compensation**:
|
||||
- Base stipend: $1,200/month per node
|
||||
- Uptime bonus: +$600/month for >99.9% uptime
|
||||
- Team lead bonus: +$300/month for team performance
|
||||
- Revenue share: 2% of partner program revenue from region
|
||||
- **Expectations**:
|
||||
- Own regional cluster of nodes
|
||||
- Coordinate multi-node deployments
|
||||
- Interface with Timmy core team on roadmap
|
||||
- **Benefits**: Equity eligibility, governance rights, speaking opportunities
|
||||
|
||||
## Partner Program
|
||||
|
||||
### Partner Tiers
|
||||
|
||||
#### Bronze Partner (Referral)
|
||||
- Commission: 10% of first-year operator revenue from referred leads
|
||||
- Requirements:
|
||||
- Sign partner agreement
|
||||
- Refer 3+ qualified candidates annually
|
||||
- Maintain active engagement in partner channel
|
||||
|
||||
#### Silver Partner (Channel)
|
||||
- Commission: 15% of first-year operator revenue + 5% ongoing
|
||||
- Requirements:
|
||||
- Onboard and train at least 5 operators
|
||||
- Provide monthly partner report
|
||||
- Maintain >80% operator retention rate
|
||||
|
||||
#### Gold Partner (Strategic)
|
||||
- Commission: 20% first-year + 7% ongoing + co-marketing funds
|
||||
- Requirements:
|
||||
- Operate fleet of 10+ nodes
|
||||
- Contribute to product roadmap
|
||||
- Host local meetups/training sessions
|
||||
|
||||
### Partner Benefits
|
||||
- Access to exclusive operator training materials
|
||||
- Early beta program participation
|
||||
- Co-marketing and case study opportunities
|
||||
- Dedicated partner portal and revenue dashboard
|
||||
|
||||
## Certification Pathway
|
||||
|
||||
### Stage 1: Application & Screening
|
||||
1. Submit Operator Application (see `templates/operator-application.md`)
|
||||
2. Technical interview (30 min)
|
||||
3. Infrastructure audit (existing hardware/network)
|
||||
4. Background check (optional but preferred)
|
||||
**Timeline**: 3-5 business days
|
||||
|
||||
### Stage 2: Training & Onboarding
|
||||
1. Complete Fleet Ops 101 module (2 hours self-paced)
|
||||
2. Shadow a senior operator (2 weeks)
|
||||
3. Deploy test node (sandbox environment)
|
||||
4. Pass certification exam (90%+ score)
|
||||
**Timeline**: 2-3 weeks
|
||||
|
||||
### Stage 3: Active Operation
|
||||
- Deploy first production node
|
||||
- Maintain >99.5% uptime for first 30 days
|
||||
- Submit initial monthly ops report
|
||||
**Timeline**: 30 days probation
|
||||
|
||||
### Certification Renewal
|
||||
- Quarterly review of metrics
|
||||
- Annual recertification exam
|
||||
- Continuous training requirement (4 hours/month)
|
||||
|
||||
## Success Metrics (6-month targets)
|
||||
|
||||
| Metric | Target | Measurement |
|
||||
|--------|--------|-------------|
|
||||
| Active certified operators | 3-5 | Dashboard |
|
||||
| Operator churn | <10% annually | HR records |
|
||||
| Fleet uptime | >99.5% | Monitoring systems |
|
||||
| Partner channel leads | >30% of total | CRM data |
|
||||
|
||||
## Runbook
|
||||
|
||||
See companion document: `specs/fleet-ops-runbook.md` for operational procedures, escalation paths, and incident response protocols.
|
||||
|
||||
## Templates
|
||||
|
||||
- **Operator Application**: `templates/operator-application.md`
|
||||
- **Partner Report**: `templates/partner-report.md`
|
||||
|
||||
## Revision History
|
||||
|
||||
- 2025-05-02: Initial specification (implements #987, closes #1003)
|
||||
@@ -1,291 +0,0 @@
|
||||
# Fleet Operations Runbook
|
||||
|
||||
> Fleet Operator Incentives & Partner Program — Operational Procedures
|
||||
> Implements #987 | Closes #1003
|
||||
|
||||
## Table of Contents
|
||||
|
||||
1. [Daily Ops Checklist](#daily-ops-checklist)
|
||||
2. [Weekly Maintenance](#weekly-maintenance)
|
||||
3. [Monthly Responsibilities](#monthly-responsibilities)
|
||||
4. [Incident Response](#incident-response)
|
||||
5. [Escalation Paths](#escalation-paths)
|
||||
6. [Communication Protocols](#communication-protocols)
|
||||
7. [Node Deployment](#node-deployment)
|
||||
8. [Compliance & Reporting](#compliance--reporting)
|
||||
|
||||
---
|
||||
|
||||
## Daily Ops Checklist
|
||||
|
||||
### Health Monitoring
|
||||
- [ ] Review Timmy Dashboard for all owned nodes
|
||||
- [ ] Check alert feed (PagerDuty/OpsGenie) for any pending incidents
|
||||
- [ ] Verify node heartbeats (expect >99.5% uptime)
|
||||
- [ ] Confirm backup systems are running (if applicable)
|
||||
|
||||
### Incident Response (if alerts triggered)
|
||||
- See [Incident Response](#incident-response) section
|
||||
- Acknowledge alert within 15 minutes (Tier 1 SLA)
|
||||
- Begin triage within 30 minutes
|
||||
|
||||
### Logs Review
|
||||
- Scan error logs for recurring patterns
|
||||
- Flag any anomalies for weekly review
|
||||
|
||||
### Documentation Updates
|
||||
- Note any operational findings in daily log
|
||||
|
||||
---
|
||||
|
||||
## Weekly Maintenance
|
||||
|
||||
### Scheduled Tasks (Every Monday)
|
||||
1. **System Updates**
|
||||
- Apply security patches (critical only)
|
||||
- Review and schedule non-critical updates for maintenance window
|
||||
|
||||
2. **Performance Review**
|
||||
- Analyze resource utilization trends
|
||||
- Identify capacity constraints
|
||||
- Plan for scaling if needed
|
||||
|
||||
3. **Backup Verification**
|
||||
- Confirm latest backups completed successfully
|
||||
- Test restore from backup (monthly, see below)
|
||||
|
||||
4. **Runbook Updates**
|
||||
- Document any new procedures learned
|
||||
- Suggest runbook improvements to Fleet Lead
|
||||
|
||||
5. **Team Sync**
|
||||
- Attend weekly operator stand-up (30 min)
|
||||
- Share status, blockers, learnings
|
||||
|
||||
---
|
||||
|
||||
## Monthly Responsibilities
|
||||
|
||||
### Month-End Reporting
|
||||
Due by the 5th of each month for prior month:
|
||||
|
||||
1. **Ops Report** (use `templates/partner-report.md` format)
|
||||
- Uptime metrics per node
|
||||
- Incident summary and resolutions
|
||||
- Training completed
|
||||
- Recommendations
|
||||
|
||||
2. **Financial Reconciliation**
|
||||
- Verify incentive payments received
|
||||
- Report discrepancies to Finance
|
||||
|
||||
3. **Compliance Audit**
|
||||
- Confirm certification requirements met
|
||||
- Document any deviations and corrective actions
|
||||
|
||||
### Deep Maintenance
|
||||
- Full system backup and restore test
|
||||
- Security audit review
|
||||
- Hardware inspection (if physical nodes)
|
||||
- Training module completion (minimum 4 hours/month)
|
||||
|
||||
---
|
||||
|
||||
## Incident Response
|
||||
|
||||
### Severity Definitions
|
||||
|
||||
| Severity | Definition | Response Time | Resolution Target |
|
||||
|----------|------------|---------------|-------------------|
|
||||
| P0 | Fleet-wide outage, no nodes operational | 15 minutes | 4 hours |
|
||||
| P1 | Region/node cluster outage, >50% down | 30 minutes | 8 hours |
|
||||
| P2 | Single node failure | 1 hour | 24 hours |
|
||||
| P3 | Degraded performance, not critical | 4 hours | 3 days |
|
||||
|
||||
### Response Procedure
|
||||
|
||||
#### P0/P1 Incidents
|
||||
1. Acknowledge alert immediately
|
||||
2. Declare incident in `#fleet-incidents` Slack channel
|
||||
3. Notify Fleet Lead (direct message/call)
|
||||
4. Execute recovery procedures from relevant playbook
|
||||
5. Document timeline and actions taken
|
||||
6. Schedule post-mortem within 48 hours
|
||||
|
||||
#### P2 Incidents
|
||||
1. Acknowledge within 1 hour
|
||||
2. Open incident ticket in tracking system
|
||||
3. Follow single-node recovery playbook
|
||||
4. Report resolution in daily ops log
|
||||
|
||||
#### P3 Incidents
|
||||
1. Log in issue tracker
|
||||
2. Schedule during next maintenance window
|
||||
3. Document resolution upon completion
|
||||
|
||||
### Recovery Playbooks
|
||||
|
||||
#### Node Restart (most common P2)
|
||||
1. SSH to node (or use remote management)
|
||||
2. Check system logs (`/var/log/timmy/fleet.log`)
|
||||
3. Restart service: `sudo systemctl restart timmy-fleet`
|
||||
4. Verify node rejoins cluster
|
||||
5. Monitor for 30 minutes post-recovery
|
||||
|
||||
#### Network Partition
|
||||
1. Verify network connectivity (ping, traceroute)
|
||||
2. Check firewall rules
|
||||
3. Contact network provider if external
|
||||
4. Switch to backup connection if available
|
||||
5. Document root cause
|
||||
|
||||
#### Storage Full
|
||||
1. Identify large directories (`du -sh /* | sort -hr`)
|
||||
2. Rotate logs: `sudo logrotate -f /etc/logrotate.d/timmy`
|
||||
3. Clean temporary files
|
||||
4. Expand storage or add new volume
|
||||
5. Alert Fleet Lead for capacity planning
|
||||
|
||||
---
|
||||
|
||||
## Escalation Paths
|
||||
|
||||
### Tiered Support Model
|
||||
|
||||
```
|
||||
Operator (Tier 1)
|
||||
↓ (15 min SLA)
|
||||
Senior Operator / Fleet Lead (Tier 2)
|
||||
↓ (1 hour SLA)
|
||||
Timmy Core Team (Tier 3)
|
||||
↓ (Immediate)
|
||||
Executive Sponsor (Critical only)
|
||||
```
|
||||
|
||||
### Contact Matrix
|
||||
|
||||
| Issue Type | Primary Contact | Secondary |
|
||||
|------------|----------------|-----------|
|
||||
| Technical incident | Fleet Lead | Timmy Core |
|
||||
| Payment/incentive | Finance Partner | Fleet Lead |
|
||||
| Training/certification | Training Coordinator | Fleet Lead |
|
||||
| Partnership inquiry | Partner Manager | Executive Sponsor |
|
||||
| Security incident | Security Team | Timmy Core (immediate) |
|
||||
|
||||
### Emergency Contacts
|
||||
- Fleet Lead: `fleet-lead@timmy.foundation` (Slack: @fleet-lead)
|
||||
- Timmy Core On-Call: `oncall@timmy.foundation` (PagerDuty)
|
||||
- Security: `security@timmy.foundation`
|
||||
- Finance: `finance@timmy.foundation`
|
||||
|
||||
---
|
||||
|
||||
## Communication Protocols
|
||||
|
||||
### Channels
|
||||
- `#fleet-operators` — Daily ops, questions
|
||||
- `#fleet-incidents` — Active incidents only
|
||||
- `#fleet-training` — Training resources, scheduling
|
||||
- `#fleet-partners` — Partner program discussions
|
||||
|
||||
### Status Updates
|
||||
- Daily: Stand-up notes in thread
|
||||
- Weekly: Summary post in `#fleet-operators`
|
||||
- Monthly: Ops report submission
|
||||
- Incident: Real-time updates in `#fleet-incidents`
|
||||
|
||||
### Documentation Standards
|
||||
- Use clear, concise language
|
||||
- Include timestamps in UTC
|
||||
- Link to relevant tickets/PRs
|
||||
- Tag stakeholders with `@`
|
||||
|
||||
---
|
||||
|
||||
## Node Deployment
|
||||
|
||||
### Pre-Deployment Checklist
|
||||
- [ ] Hardware meets minimum specs (CPU, RAM, storage)
|
||||
- [ ] Network connectivity validated
|
||||
- [ ] Firewall rules configured
|
||||
- [ ] SSH keys exchanged with Timmy core team
|
||||
- [ ] Monitoring agent installed
|
||||
- [ ] Backup solution active
|
||||
- [ ] Documentation updated with node details
|
||||
|
||||
### Deployment Steps
|
||||
1. Provision hardware/VM
|
||||
2. Install Timmy Fleet software
|
||||
3. Configure node ID and credentials
|
||||
4. Join cluster via `timmy-fleet join <cluster-endpoint>`
|
||||
5. Validate connectivity and heartbeat
|
||||
6. Update inventory spreadsheet
|
||||
7. Set up monitoring alerts
|
||||
8. Complete handover to operator
|
||||
|
||||
### Decommissioning
|
||||
1. Drain node from cluster
|
||||
2. Migrate workloads
|
||||
3. Backup final state
|
||||
4. Shut down cleanly
|
||||
5. Update inventory
|
||||
6. Notify relevant teams
|
||||
|
||||
---
|
||||
|
||||
## Compliance & Reporting
|
||||
|
||||
### Metrics to Track
|
||||
- Uptime (node-level and fleet-wide)
|
||||
- Incident count and severity
|
||||
- Response and resolution times
|
||||
- Training hours completed
|
||||
- Payment/compensation accuracy
|
||||
|
||||
### Reporting Cadence
|
||||
- **Daily**: Ops dashboard (automated)
|
||||
- **Weekly**: Status summary (operator)
|
||||
- **Monthly**: Partner report (template-driven)
|
||||
- **Quarterly**: Performance review with Fleet Lead
|
||||
|
||||
### Audits
|
||||
- Quarterly internal audit by Timmy compliance team
|
||||
- Annual external certification renewal
|
||||
- Ad-hoc security reviews as needed
|
||||
|
||||
---
|
||||
|
||||
## Appendix: Resources
|
||||
|
||||
### Useful Commands
|
||||
```bash
|
||||
# Check service status
|
||||
sudo systemctl status timmy-fleet
|
||||
|
||||
# View logs
|
||||
journalctl -u timmy-fleet -f
|
||||
|
||||
# Restart node
|
||||
sudo systemctl restart timmy-fleet
|
||||
|
||||
# Check node health
|
||||
timmy-fleet health
|
||||
|
||||
# Join cluster
|
||||
timmy-fleet join <cluster-endpoint>
|
||||
```
|
||||
|
||||
### Key Files
|
||||
- Config: `/etc/timmy/fleet/config.yaml`
|
||||
- Logs: `/var/log/timmy/fleet.log`
|
||||
- Health data: `/var/lib/timmy/fleet/health.json`
|
||||
|
||||
### Support Resources
|
||||
- Internal Wiki: `https://wiki.timmy.foundation/fleet`
|
||||
- Operator Portal: `https://fleet.timmy.foundation`
|
||||
- Training Videos: `https://learn.timmy.foundation/fleet-ops`
|
||||
|
||||
---
|
||||
|
||||
**Last Updated**: 2025-05-02
|
||||
**Next Review**: 2025-06-02
|
||||
@@ -1,143 +0,0 @@
|
||||
# Fleet Operator Application
|
||||
|
||||
> {{APPLICATION_DATE}}
|
||||
> Candidate: {{CANDIDATE_NAME}}
|
||||
|
||||
## Contact Information
|
||||
|
||||
**Full Name**: {{CANDIDATE_FULL_NAME}}
|
||||
**Email**: {{CANDIDATE_EMAIL}}
|
||||
**Phone**: {{CANDIDATE_PHONE}}
|
||||
**Location**: {{CANDIDATE_LOCATION}}
|
||||
**Time Zone**: {{CANDIDATE_TIMEZONE}}
|
||||
|
||||
### Availability
|
||||
- **Hours per week**: {{AVAILABILITY_HOURS}}
|
||||
- **Primary availability window (UTC)**: {{AVAILABILITY_WINDOW}}
|
||||
- **On-call flexibility**: {{ONCALL_FLEXIBILITY}}
|
||||
|
||||
## Technical Qualifications
|
||||
|
||||
### Experience
|
||||
```
|
||||
Years in IT/DevOps: {{YEARS_EXPERIENCE}}
|
||||
Relevant roles:
|
||||
{{ROLE_HISTORY}}
|
||||
```
|
||||
|
||||
### Skills (check all that apply)
|
||||
- [ ] Linux system administration
|
||||
- [ ] Container orchestration (Kubernetes/Docker)
|
||||
- [ ] Cloud infrastructure (AWS/GCP/Azure)
|
||||
- [ ] Networking fundamentals
|
||||
- [ ] Monitoring & alerting (Prometheus/Grafana)
|
||||
- [ ] Incident response/ITIL
|
||||
- [ ] Security best practices
|
||||
- [ ] Automation (Ansible/Terraform)
|
||||
- [ ] Scripting (Python/Bash/Go)
|
||||
- [ ] Timmy platform experience
|
||||
|
||||
**Additional skills**: {{ADDITIONAL_SKILLS}}
|
||||
|
||||
### Certifications
|
||||
{{CERTIFICATIONS}}
|
||||
|
||||
## Infrastructure Readiness
|
||||
|
||||
### Proposed Node Environment
|
||||
- **Type**: ☐ Physical ☐ Cloud VM ☐ Hybrid
|
||||
- **Provider**: {{CLOUD_PROVIDER}}
|
||||
- **Region**: {{REGION}}
|
||||
- **Hardware specs**:
|
||||
- CPU: {{CPU_SPEC}}
|
||||
- RAM: {{RAM_SPEC}}
|
||||
- Storage: {{STORAGE_SPEC}}
|
||||
- Network: {{NETWORK_SPEC}}
|
||||
|
||||
### Redundancy & HA
|
||||
- [ ] Backup power (UPS/generator)
|
||||
- [ ] Secondary internet connection
|
||||
- [ ] Off-site backup solution
|
||||
- [ ] Remote management (IPMI/iDRAC)
|
||||
|
||||
### Connectivity
|
||||
- **Bandwidth**: {{BANDWIDTH}} Mbps
|
||||
- **Latency to Timmy core**: {{LATENCY}} ms
|
||||
- **Uptime SLA**: {{UPTIME_SLA}}
|
||||
|
||||
---
|
||||
|
||||
## Motivation & Alignment
|
||||
|
||||
### Why do you want to run a Timmy Fleet node?
|
||||
{{MOTIVATION}}
|
||||
|
||||
### What attracts you to decentralized infrastructure?
|
||||
{{DECENTRALIZATION_MOTIVATION}}
|
||||
|
||||
### How does this align with your long-term goals?
|
||||
{{LONG_TERM_GOALS}}
|
||||
|
||||
---
|
||||
|
||||
## Partner Program Interest (Optional)
|
||||
|
||||
### Interested in?
|
||||
- [ ] Referral partner (refer operators, earn commission)
|
||||
- [ ] Channel partner (onboard and train operators)
|
||||
- [ ] Strategic partner (run fleet of 10+ nodes)
|
||||
|
||||
### Existing network
|
||||
{{PARTNER_NETWORK}}
|
||||
|
||||
### Referral pipeline
|
||||
{{REFERRAL_PIPELINE}}
|
||||
|
||||
---
|
||||
|
||||
## References
|
||||
|
||||
### Professional References
|
||||
1. Name: {{REF1_NAME}}
|
||||
Email: {{REF1_EMAIL}}
|
||||
Relationship: {{REF1_RELATION}}
|
||||
|
||||
2. Name: {{REF2_NAME}}
|
||||
Email: {{REF2_EMAIL}}
|
||||
Relationship: {{REF2_RELATION}}
|
||||
|
||||
### Timmy Community Involvement
|
||||
{{COMMUNITY_INVOLVEMENT}}
|
||||
|
||||
---
|
||||
|
||||
## Agreement & Signatures
|
||||
|
||||
### Code of Conduct
|
||||
- [ ] I have read and agree to the Timmy Fleet Operator Code of Conduct
|
||||
- [ ] I understand the uptime and response time requirements
|
||||
- [ ] I agree to the incentive structure and terms
|
||||
|
||||
### Signature
|
||||
**Candidate signature**: ___________________________
|
||||
**Date**: {{SIGNATURE_DATE}}
|
||||
|
||||
**Timmy representative**: ___________________________
|
||||
**Date**: {{TIMPY_SIGN_DATE}}
|
||||
|
||||
---
|
||||
|
||||
## Internal Use Only
|
||||
|
||||
**Interviewer**: {{INTERVIEWER}}
|
||||
**Technical score**: {{TECH_SCORE}}/100
|
||||
**Culture fit**: {{CULTURE_FIT}}/50
|
||||
**Infrastructure audit**: ☐ Pass ☐ Fail
|
||||
**Background check**: ☐ Complete ☐ In-progress
|
||||
|
||||
**Decision**: ☐ Approved ☐ Rejected ☐ Waitlist
|
||||
|
||||
**Comments**: {{INTERNAL_COMMENTS}}
|
||||
|
||||
**Certification ID**: {{CERT_ID}}
|
||||
**Onboarding start date**: {{ONBOARDING_DATE}}
|
||||
@@ -1,175 +0,0 @@
|
||||
# Fleet Partner Monthly Report
|
||||
|
||||
> {{REPORT_MONTH}} {{REPORT_YEAR}}
|
||||
> Partner: {{PARTNER_NAME}} ({{PARTNER_TIER}})
|
||||
> Submitted: {{SUBMISSION_DATE}}
|
||||
|
||||
## Executive Summary
|
||||
|
||||
| Metric | Current Month | Target | Variance |
|
||||
|--------|---------------|--------|----------|
|
||||
| Active nodes managed | {{ACTIVE_NODES}} | {{TARGET_NODES}} | {{NODES_VARIANCE}} |
|
||||
| Fleet uptime | {{UPTIME}}% | 99.5% | {{UPTIME_VARIANCE}}% |
|
||||
| Operator churn rate | {{CHURN_RATE}}% | <10% | {{CHURN_VARIANCE}}% |
|
||||
| Partner-sourced leads | {{LEADS_COUNT}} | {{LEADS_TARGET}} | {{LEADS_VARIANCE}} |
|
||||
| Revenue share earned | {{REVENUE}} | — | — |
|
||||
|
||||
**Key highlights**:
|
||||
{{KEY_HIGHLIGHTS}}
|
||||
|
||||
**Top concerns**:
|
||||
{{KEY_CONCERNS}}
|
||||
|
||||
---
|
||||
|
||||
## Node Performance
|
||||
|
||||
### Node Inventory
|
||||
|
||||
| Node ID | Location | Status | Uptime (30d) | Revenue Share | Issues |
|
||||
|---------|----------|--------|--------------|---------------|---------|
|
||||
| {{NODE_1_ID}} | {{NODE_1_LOC}} | {{NODE_1_STATUS}} | {{NODE_1_UPTIME}}% | ${{NODE_1_REV}} | {{NODE_1_ISSUES}} |
|
||||
| {{NODE_2_ID}} | {{NODE_2_LOC}} | {{NODE_2_STATUS}} | {{NODE_2_UPTIME}}% | ${{NODE_2_REV}} | {{NODE_2_ISSUES}} |
|
||||
| {{NODE_3_ID}} | {{NODE_3_LOC}} | {{NODE_3_STATUS}} | {{NODE_3_UPTIME}}% | ${{NODE_3_REV}} | {{NODE_3_ISSUES}} |
|
||||
|
||||
*Add rows as needed*
|
||||
|
||||
### Top Node Performers
|
||||
1. **{{TOP_NODE_1_ID}}**: {{TOP_NODE_1_UPTIME}}% uptime, zero incidents
|
||||
2. **{{TOP_NODE_2_ID}}**: {{TOP_NODE_2_UPTIME}}% uptime, quickest response times
|
||||
|
||||
### Nodes Requiring Attention
|
||||
1. **{{ATTN_NODE_1_ID}}**: {{ATTN_NODE_1_ISSUE}}
|
||||
2. **{{ATTN_NODE_2_ID}}**: {{ATTN_NODE_2_ISSUE}}
|
||||
|
||||
---
|
||||
|
||||
## Incidents & Resolutions
|
||||
|
||||
### Incident Log
|
||||
|
||||
| Date | Severity | Node(s) | Duration | Root Cause | Resolution |
|
||||
|------|----------|---------|----------|------------|------------|
|
||||
| {{INC1_DATE}} | {{INC1_SEV}} | {{INC1_NODES}} | {{INC1_DURATION}} | {{INC1_CAUSE}} | {{INC1_RES}} |
|
||||
| {{INC2_DATE}} | {{INC2_SEV}} | {{INC2_NODES}} | {{INC2_DURATION}} | {{INC2_CAUSE}} | {{INC2_RES}} |
|
||||
| {{INC3_DATE}} | {{INC3_SEV}} | {{INC3_NODES}} | {{INC3_DURATION}} | {{INC3_CAUSE}} | {{INC3_RES}} |
|
||||
|
||||
*Add rows as needed*
|
||||
|
||||
### Mean Time to Recovery (MTTR)
|
||||
- **P0 incidents**: {{MTTR_P0}} hours
|
||||
- **P1 incidents**: {{MTTR_P1}} hours
|
||||
- **P2 incidents**: {{MTTR_P2}} hours
|
||||
- **P3 incidents**: {{MTTR_P3}} hours
|
||||
|
||||
**Improvement opportunities**:
|
||||
{{MTTR_IMPROVEMENTS}}
|
||||
|
||||
---
|
||||
|
||||
## Operator Management
|
||||
|
||||
### Active Operators
|
||||
|
||||
| Operator | Tier | Nodes Managed | Status | Cert Date |
|
||||
|----------|------|---------------|--------|-----------|
|
||||
| {{OP1_NAME}} | {{OP1_TIER}} | {{OP1_NODES}} | {{OP1_STATUS}} | {{OP1_CERT}} |
|
||||
| {{OP2_NAME}} | {{OP2_TIER}} | {{OP2_NODES}} | {{OP2_STATUS}} | {{OP2_CERT}} |
|
||||
|
||||
### Churn / Attrition
|
||||
- **Departed operators**: {{DEPARTED_COUNT}}
|
||||
- **Departure reasons**: {{DEPARTURE_REASONS}}
|
||||
- **Retention initiatives**: {{RETENTION_INITIATIVES}}
|
||||
|
||||
### Training & Certification
|
||||
- **New certifications**: {{NEW_CERTS}}
|
||||
- **Training hours logged**: {{TRAINING_HOURS}}
|
||||
- **Upcoming recertifications**: {{UPCOMING_RECERTS}}
|
||||
|
||||
---
|
||||
|
||||
## Partner Program Metrics
|
||||
|
||||
### Lead Generation
|
||||
- **Total leads received**: {{TOTAL_LEADS}}
|
||||
- **Qualified leads**: {{QUALIFIED_LEADS}}
|
||||
- **Converted to operators**: {{CONVERTED_OPERATORS}}
|
||||
- **Conversion rate**: {{CONVERSION_RATE}}%
|
||||
- **Partner contribution to total pipeline**: {{PARTNER_PIPELINE_PERCENT}}%
|
||||
|
||||
### Referral Commission
|
||||
- **Referral fee earned**: ${{REFERRAL_FEE}}
|
||||
- **Ongoing revenue share**: ${{ONGOING_SHARE}}
|
||||
- **Total YTD earnings**: ${{YTD_EARNINGS}}
|
||||
|
||||
### Partner Activity
|
||||
- **Marketing events hosted**: {{EVENTS_HOSTED}}
|
||||
- **Training sessions conducted**: {{TRAINING_SESSIONS}}
|
||||
- **Community engagement posts**: {{COMMUNITY_POSTS}}
|
||||
- **Collateral created**: {{COLLATERAL}}
|
||||
|
||||
---
|
||||
|
||||
## Financial Summary
|
||||
|
||||
### Incentive Payouts
|
||||
| Category | Amount | Notes |
|
||||
|----------|--------|-------|
|
||||
| Operator stipends | ${{STIPENDS}} | {{STIPENDS_NOTES}} |
|
||||
| Uptime bonuses | ${{UPTIME_BONUS}} | {{UPTIME_NOTES}} |
|
||||
| Mentorship bonuses | ${{MENTOR_BONUS}} | {{MENTOR_NOTES}} |
|
||||
| Performance bonuses | ${{PERF_BONUS}} | {{PERF_NOTES}} |
|
||||
| Partner commissions | ${{PARTNER_COMM}} | {{PARTNER_NOTES}} |
|
||||
|
||||
**Total payout this month**: ${{TOTAL_PAYOUT}}
|
||||
|
||||
### Cost Efficiency
|
||||
- **Cost per node**: ${{COST_PER_NODE}}
|
||||
- **Cost per uptime hour**: ${{COST_PER_UPTIME_HOUR}}
|
||||
- **Efficiency rating**: {{EFFICIENCY_RATING}}/10
|
||||
|
||||
---
|
||||
|
||||
## Goals & Objectives
|
||||
|
||||
### Next Month Targets
|
||||
1. **Uptime**: {{NEXT_UPTIME_TARGET}}%
|
||||
2. **Qualified leads**: {{NEXT_LEADS_TARGET}}
|
||||
3. **New operators**: {{NEXT_OPS_TARGET}}
|
||||
4. **Incident reduction**: {{NEXT_INCIDENT_TARGET}} incidents
|
||||
|
||||
### Priority Initiatives
|
||||
- {{PRIORITY_1}}
|
||||
- {{PRIORITY_2}}
|
||||
- {{PRIORITY_3}}
|
||||
|
||||
### Support Needed
|
||||
- {{SUPPORT_NEEDED_1}}
|
||||
- {{SUPPORT_NEEDED_2}}
|
||||
|
||||
---
|
||||
|
||||
## Attestation
|
||||
|
||||
By submitting this report, I certify that the information provided is accurate and complete to the best of my knowledge.
|
||||
|
||||
**Submitted by**: {{SUBMITTER_NAME}}
|
||||
**Title**: {{SUBMITTER_TITLE}}
|
||||
**Signature**: ___________________________
|
||||
**Date**: {{SUBMISSION_DATE}}
|
||||
|
||||
**Approved by** (Timmy Core): {{APPROVER_NAME}}
|
||||
**Date**: {{APPROVAL_DATE}}
|
||||
|
||||
---
|
||||
|
||||
## Appendix
|
||||
|
||||
### Supporting Documents
|
||||
- [ ] Ops dashboard screenshots attached
|
||||
- [ ] Incident post-mortems attached
|
||||
- [ ] Training completion records attached
|
||||
- [ ] Financial reconciliation attached
|
||||
|
||||
### Notes
|
||||
{{APPENDIX_NOTES}}
|
||||
@@ -1,12 +1 @@
|
||||
# Timmy core module
|
||||
|
||||
from .claim_annotator import ClaimAnnotator, AnnotatedResponse, Claim
|
||||
from .audit_trail import AuditTrail, AuditEntry
|
||||
|
||||
__all__ = [
|
||||
"ClaimAnnotator",
|
||||
"AnnotatedResponse",
|
||||
"Claim",
|
||||
"AuditTrail",
|
||||
"AuditEntry",
|
||||
]
|
||||
|
||||
@@ -1,156 +0,0 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Response Claim Annotator — Source Distinction System
|
||||
SOUL.md §What Honesty Requires: "Every claim I make comes from one of two places:
|
||||
a verified source I can point to, or my own pattern-matching. My user must be
|
||||
able to tell which is which."
|
||||
"""
|
||||
|
||||
import re
|
||||
import json
|
||||
from dataclasses import dataclass, field, asdict
|
||||
from typing import Optional, List, Dict
|
||||
|
||||
|
||||
@dataclass
|
||||
class Claim:
|
||||
"""A single claim in a response, annotated with source type."""
|
||||
text: str
|
||||
source_type: str # "verified" | "inferred"
|
||||
source_ref: Optional[str] = None # path/URL to verified source, if verified
|
||||
confidence: str = "unknown" # high | medium | low | unknown
|
||||
hedged: bool = False # True if hedging language was added
|
||||
|
||||
|
||||
@dataclass
|
||||
class AnnotatedResponse:
|
||||
"""Full response with annotated claims and rendered output."""
|
||||
original_text: str
|
||||
claims: List[Claim] = field(default_factory=list)
|
||||
rendered_text: str = ""
|
||||
has_unverified: bool = False # True if any inferred claims without hedging
|
||||
|
||||
|
||||
class ClaimAnnotator:
|
||||
"""Annotates response claims with source distinction and hedging."""
|
||||
|
||||
# Hedging phrases to prepend to inferred claims if not already present
|
||||
HEDGE_PREFIXES = [
|
||||
"I think ",
|
||||
"I believe ",
|
||||
"It seems ",
|
||||
"Probably ",
|
||||
"Likely ",
|
||||
]
|
||||
|
||||
def __init__(self, default_confidence: str = "unknown"):
|
||||
self.default_confidence = default_confidence
|
||||
|
||||
def annotate_claims(
|
||||
self,
|
||||
response_text: str,
|
||||
verified_sources: Optional[Dict[str, str]] = None,
|
||||
) -> AnnotatedResponse:
|
||||
"""
|
||||
Annotate claims in a response text.
|
||||
|
||||
Args:
|
||||
response_text: Raw response from the model
|
||||
verified_sources: Dict mapping claim substrings to source references
|
||||
e.g. {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"}
|
||||
|
||||
Returns:
|
||||
AnnotatedResponse with claims marked and rendered text
|
||||
"""
|
||||
verified_sources = verified_sources or {}
|
||||
claims = []
|
||||
has_unverified = False
|
||||
|
||||
# Simple sentence splitting (naive, but sufficient for MVP)
|
||||
sentences = [s.strip() for s in re.split(r'[.!?]\s+', response_text) if s.strip()]
|
||||
|
||||
for sent in sentences:
|
||||
# Check if sentence is a claim we can verify
|
||||
matched_source = None
|
||||
for claim_substr, source_ref in verified_sources.items():
|
||||
if claim_substr.lower() in sent.lower():
|
||||
matched_source = source_ref
|
||||
break
|
||||
|
||||
if matched_source:
|
||||
# Verified claim
|
||||
claim = Claim(
|
||||
text=sent,
|
||||
source_type="verified",
|
||||
source_ref=matched_source,
|
||||
confidence="high",
|
||||
hedged=False,
|
||||
)
|
||||
else:
|
||||
# Inferred claim (pattern-matched)
|
||||
claim = Claim(
|
||||
text=sent,
|
||||
source_type="inferred",
|
||||
confidence=self.default_confidence,
|
||||
hedged=self._has_hedge(sent),
|
||||
)
|
||||
if not claim.hedged:
|
||||
has_unverified = True
|
||||
|
||||
claims.append(claim)
|
||||
|
||||
# Render the annotated response
|
||||
rendered = self._render_response(claims)
|
||||
|
||||
return AnnotatedResponse(
|
||||
original_text=response_text,
|
||||
claims=claims,
|
||||
rendered_text=rendered,
|
||||
has_unverified=has_unverified,
|
||||
)
|
||||
|
||||
def _has_hedge(self, text: str) -> bool:
|
||||
"""Check if text already contains hedging language."""
|
||||
text_lower = text.lower()
|
||||
for prefix in self.HEDGE_PREFIXES:
|
||||
if text_lower.startswith(prefix.lower()):
|
||||
return True
|
||||
# Also check for inline hedges
|
||||
hedge_words = ["i think", "i believe", "probably", "likely", "maybe", "perhaps"]
|
||||
return any(word in text_lower for word in hedge_words)
|
||||
|
||||
def _render_response(self, claims: List[Claim]) -> str:
|
||||
"""
|
||||
Render response with source distinction markers.
|
||||
|
||||
Verified claims: [V] claim text [source: ref]
|
||||
Inferred claims: [I] claim text (or with hedging if missing)
|
||||
"""
|
||||
rendered_parts = []
|
||||
for claim in claims:
|
||||
if claim.source_type == "verified":
|
||||
part = f"[V] {claim.text}"
|
||||
if claim.source_ref:
|
||||
part += f" [source: {claim.source_ref}]"
|
||||
else: # inferred
|
||||
if not claim.hedged:
|
||||
# Add hedging if missing
|
||||
hedged_text = f"I think {claim.text[0].lower()}{claim.text[1:]}" if claim.text else claim.text
|
||||
part = f"[I] {hedged_text}"
|
||||
else:
|
||||
part = f"[I] {claim.text}"
|
||||
rendered_parts.append(part)
|
||||
return " ".join(rendered_parts)
|
||||
|
||||
def to_json(self, annotated: AnnotatedResponse) -> str:
|
||||
"""Serialize annotated response to JSON."""
|
||||
return json.dumps(
|
||||
{
|
||||
"original_text": annotated.original_text,
|
||||
"rendered_text": annotated.rendered_text,
|
||||
"has_unverified": annotated.has_unverified,
|
||||
"claims": [asdict(c) for c in annotated.claims],
|
||||
},
|
||||
indent=2,
|
||||
ensure_ascii=False,
|
||||
)
|
||||
51
tests/test_hardware_mcp_functional.py
Normal file
51
tests/test_hardware_mcp_functional.py
Normal file
@@ -0,0 +1,51 @@
|
||||
#!/usr/bin/env python3
|
||||
"""Functional test for hardware_mcp_server — uses asyncio.get_event_loop for restricted envs."""
|
||||
|
||||
import asyncio, json, tempfile, sys
|
||||
from pathlib import Path
|
||||
sys.path.insert(0, str(Path(__file__).resolve().parent.parent))
|
||||
from scripts.hardware_mcp_server import call_tool, is_path_allowed
|
||||
|
||||
async def run_tests():
|
||||
# Path allowlist
|
||||
assert is_path_allowed(Path.home() / "any.txt")
|
||||
assert is_path_allowed(Path("/tmp/foo"))
|
||||
assert not is_path_allowed(Path("/etc/passwd"))
|
||||
print("✓ Path allowlist")
|
||||
|
||||
# file_list on home
|
||||
res = await call_tool("file_list", {"directory": "~"})
|
||||
data = json.loads(res[0].text)
|
||||
assert "entries" in data and data["count"] >= 0
|
||||
print(f"✓ file_list works, entries: {data['count']}")
|
||||
|
||||
# file_write + file_read round-trip in temp dir
|
||||
with tempfile.TemporaryDirectory() as td:
|
||||
fp = Path(td) / "hmcp_test.txt"
|
||||
content = "Hardware MCP round-trip OK"
|
||||
w = await call_tool("file_write", {"path": str(fp), "content": content})
|
||||
assert json.loads(w[0].text).get("success")
|
||||
r = await call_tool("file_read", {"path": str(fp)})
|
||||
assert json.loads(r[0].text)["content"] == content
|
||||
print("✓ file write/read round-trip")
|
||||
|
||||
# file_read error: missing file
|
||||
err = await call_tool("file_read", {"path": str(Path.home() / "no_such_file_xyz")})
|
||||
assert "error" in json.loads(err[0].text)
|
||||
print("✓ file_read reports missing file")
|
||||
|
||||
# Security: path traversal blocked
|
||||
block = await call_tool("file_read", {"path": "/etc/passwd"})
|
||||
bd = json.loads(block[0].text)
|
||||
assert "not allowed" in bd.get("error", "").lower()
|
||||
print("✓ Path traversal blocked")
|
||||
|
||||
print("\nAll functional checks passed!")
|
||||
|
||||
if __name__ == "__main__":
|
||||
# Use get_event_loop for environments where asyncio.run is disabled
|
||||
try:
|
||||
asyncio.run(run_tests())
|
||||
except RuntimeError:
|
||||
loop = asyncio.get_event_loop()
|
||||
loop.run_until_complete(run_tests())
|
||||
126
tests/test_hardware_mcp_server.py
Normal file
126
tests/test_hardware_mcp_server.py
Normal file
@@ -0,0 +1,126 @@
|
||||
#!/usr/bin/env python3
|
||||
"""Smoke tests for hardware_mcp_server."""
|
||||
|
||||
import json
|
||||
import os
|
||||
import subprocess
|
||||
import sys
|
||||
import tempfile
|
||||
from pathlib import Path
|
||||
from unittest import TestCase
|
||||
|
||||
# Add repo root to path
|
||||
ROOT = Path(__file__).resolve().parent.parent
|
||||
sys.path.insert(0, str(ROOT))
|
||||
|
||||
|
||||
class TestHardwareMCPToolDefinitions(TestCase):
|
||||
"""Verify the MCP server is well-formed and tools have required schemas."""
|
||||
|
||||
def test_server_imports(self):
|
||||
"""Server module must import cleanly."""
|
||||
import importlib.util
|
||||
spec = importlib.util.spec_from_file_location(
|
||||
"hardware_mcp_server",
|
||||
ROOT / "scripts" / "hardware_mcp_server.py"
|
||||
)
|
||||
self.assertIsNotNone(spec)
|
||||
mod = importlib.util.module_from_spec(spec)
|
||||
spec.loader.exec_module(mod)
|
||||
self.assertTrue(hasattr(mod, "app"))
|
||||
self.assertTrue(hasattr(mod, "list_tools"))
|
||||
self.assertTrue(hasattr(mod, "call_tool"))
|
||||
|
||||
def test_list_tools_returns_at_least_five_tools(self):
|
||||
"""list_tools() must return multiple tools covering file ops, lights, and system info."""
|
||||
import asyncio
|
||||
from scripts.hardware_mcp_server import list_tools
|
||||
tools = asyncio.run(list_tools())
|
||||
tool_names = [t.name for t in tools]
|
||||
# Core capabilities
|
||||
self.assertIn("file_read", tool_names)
|
||||
self.assertIn("file_write", tool_names)
|
||||
self.assertIn("file_list", tool_names)
|
||||
self.assertIn("light_list", tool_names)
|
||||
self.assertIn("light_control", tool_names)
|
||||
self.assertIn("room_control", tool_names)
|
||||
self.assertIn("scene_set", tool_names)
|
||||
self.assertIn("system_info", tool_names)
|
||||
self.assertGreaterEqual(len(tools), 8)
|
||||
|
||||
def test_file_read_schema_requires_path(self):
|
||||
"""file_read tool must require 'path' parameter."""
|
||||
import asyncio
|
||||
from scripts.hardware_mcp_server import list_tools
|
||||
tools = asyncio.run(list_tools())
|
||||
ft = next(t for t in tools if t.name == "file_read")
|
||||
self.assertIn("path", ft.inputSchema["properties"])
|
||||
self.assertIn("path", ft.inputSchema["required"])
|
||||
|
||||
def test_light_control_schema_requires_name_and_on(self):
|
||||
"""light_control requires name and on."""
|
||||
import asyncio
|
||||
from scripts.hardware_mcp_server import list_tools
|
||||
tools = asyncio.run(list_tools())
|
||||
ft = next(t for t in tools if t.name == "light_control")
|
||||
self.assertIn("name", ft.inputSchema["required"])
|
||||
self.assertIn("on", ft.inputSchema["required"])
|
||||
|
||||
def test_system_info_is_readonly(self):
|
||||
"""system_info tool takes no arguments."""
|
||||
import asyncio
|
||||
from scripts.hardware_mcp_server import list_tools
|
||||
tools = asyncio.run(list_tools())
|
||||
ft = next(t for t in tools if t.name == "system_info")
|
||||
self.assertEqual(ft.inputSchema.get("required", []), [])
|
||||
self.assertEqual(len(ft.inputSchema.get("properties", {})), 0)
|
||||
|
||||
def test_file_write_path_allowed_check(self):
|
||||
"""File write must enforce path allowlist (regression guard)."""
|
||||
from scripts.hardware_mcp_server import is_path_allowed, Path
|
||||
self.assertTrue(is_path_allowed(Path.home() / "test.txt"))
|
||||
self.assertTrue(is_path_allowed(Path("/tmp/test.txt")))
|
||||
# Outside allowed dirs should be rejected
|
||||
self.assertFalse(is_path_allowed(Path("/etc/passwd")))
|
||||
|
||||
def test_run_openhue_error_handling(self):
|
||||
"""openhue runner returns structured error when CLI missing."""
|
||||
from scripts.hardware_mcp_server import run_openhue
|
||||
result = run_openhue(["get", "light"])
|
||||
# On a system without openhue, must return success=False with helpful error
|
||||
self.assertIn("success", result)
|
||||
if not result.get("success"):
|
||||
self.assertIn("error", result)
|
||||
self.assertIn("openhue", result.get("error", "").lower())
|
||||
|
||||
|
||||
class TestHardwareMCPConfigCompleteness(TestCase):
|
||||
"""Validate config template matches tool set."""
|
||||
|
||||
def test_config_template_exists(self):
|
||||
self.assertTrue((ROOT / "timmy-local" / "hardware_mcp_config.yaml").exists())
|
||||
|
||||
def test_config_lists_all_tools(self):
|
||||
with open(ROOT / "timmy-local" / "hardware_mcp_config.yaml") as f:
|
||||
content = f.read()
|
||||
# All tool names should appear in the tools: section
|
||||
for tool in ["file_read", "file_write", "file_list", "light_list",
|
||||
"light_control", "room_control", "scene_set", "system_info"]:
|
||||
self.assertIn(tool, content, f"Tool {tool} missing from config tools list")
|
||||
|
||||
def test_config_has_security_guards(self):
|
||||
with open(ROOT / "timmy-local" / "hardware_mcp_config.yaml") as f:
|
||||
content = f.read()
|
||||
self.assertIn("max_consecutive_errors", content)
|
||||
self.assertIn("allowed_dirs", content)
|
||||
self.assertIn("max_file_size_bytes", content)
|
||||
|
||||
def test_config_has_server_key(self):
|
||||
with open(ROOT / "timmy-local" / "hardware_mcp_config.yaml") as f:
|
||||
content = f.read()
|
||||
self.assertIn("server_key: hardware", content)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
import unittest
|
||||
unittest.main()
|
||||
@@ -1,103 +0,0 @@
|
||||
#!/usr/bin/env python3
|
||||
"""Tests for claim_annotator.py — verifies source distinction is present."""
|
||||
|
||||
import sys
|
||||
import os
|
||||
import json
|
||||
|
||||
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "src"))
|
||||
|
||||
from timmy.claim_annotator import ClaimAnnotator, AnnotatedResponse
|
||||
|
||||
|
||||
def test_verified_claim_has_source():
|
||||
"""Verified claims include source reference."""
|
||||
annotator = ClaimAnnotator()
|
||||
verified = {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"}
|
||||
response = "Paris is the capital of France. It is a beautiful city."
|
||||
|
||||
result = annotator.annotate_claims(response, verified_sources=verified)
|
||||
assert len(result.claims) > 0
|
||||
verified_claims = [c for c in result.claims if c.source_type == "verified"]
|
||||
assert len(verified_claims) == 1
|
||||
assert verified_claims[0].source_ref == "https://en.wikipedia.org/wiki/Paris"
|
||||
assert "[V]" in result.rendered_text
|
||||
assert "[source:" in result.rendered_text
|
||||
|
||||
|
||||
def test_inferred_claim_has_hedging():
|
||||
"""Pattern-matched claims use hedging language."""
|
||||
annotator = ClaimAnnotator()
|
||||
response = "The weather is nice today. It might rain tomorrow."
|
||||
|
||||
result = annotator.annotate_claims(response)
|
||||
inferred_claims = [c for c in result.claims if c.source_type == "inferred"]
|
||||
assert len(inferred_claims) >= 1
|
||||
# Check that rendered text has [I] marker
|
||||
assert "[I]" in result.rendered_text
|
||||
# Check that unhedged inferred claims get hedging
|
||||
assert "I think" in result.rendered_text or "I believe" in result.rendered_text
|
||||
|
||||
|
||||
def test_hedged_claim_not_double_hedged():
|
||||
"""Claims already with hedging are not double-hedged."""
|
||||
annotator = ClaimAnnotator()
|
||||
response = "I think the sky is blue. It is a nice day."
|
||||
|
||||
result = annotator.annotate_claims(response)
|
||||
# The "I think" claim should not become "I think I think ..."
|
||||
assert "I think I think" not in result.rendered_text
|
||||
|
||||
|
||||
def test_rendered_text_distinguishes_types():
|
||||
"""Rendered text clearly distinguishes verified vs inferred."""
|
||||
annotator = ClaimAnnotator()
|
||||
verified = {"Earth is round": "https://science.org/earth"}
|
||||
response = "Earth is round. Stars are far away."
|
||||
|
||||
result = annotator.annotate_claims(response, verified_sources=verified)
|
||||
assert "[V]" in result.rendered_text # verified marker
|
||||
assert "[I]" in result.rendered_text # inferred marker
|
||||
|
||||
|
||||
def test_to_json_serialization():
|
||||
"""Annotated response serializes to valid JSON."""
|
||||
annotator = ClaimAnnotator()
|
||||
response = "Test claim."
|
||||
result = annotator.annotate_claims(response)
|
||||
json_str = annotator.to_json(result)
|
||||
parsed = json.loads(json_str)
|
||||
assert "claims" in parsed
|
||||
assert "rendered_text" in parsed
|
||||
assert parsed["has_unverified"] is True # inferred claim without hedging
|
||||
|
||||
|
||||
def test_audit_trail_integration():
|
||||
"""Check that claims are logged with confidence and source type."""
|
||||
# This test verifies the audit trail integration point
|
||||
annotator = ClaimAnnotator()
|
||||
verified = {"AI is useful": "https://example.com/ai"}
|
||||
response = "AI is useful. It can help with tasks."
|
||||
|
||||
result = annotator.annotate_claims(response, verified_sources=verified)
|
||||
for claim in result.claims:
|
||||
assert claim.source_type in ("verified", "inferred")
|
||||
assert claim.confidence in ("high", "medium", "low", "unknown")
|
||||
if claim.source_type == "verified":
|
||||
assert claim.source_ref is not None
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
test_verified_claim_has_source()
|
||||
print("✓ test_verified_claim_has_source passed")
|
||||
test_inferred_claim_has_hedging()
|
||||
print("✓ test_inferred_claim_has_hedging passed")
|
||||
test_hedged_claim_not_double_hedged()
|
||||
print("✓ test_hedged_claim_not_double_hedged passed")
|
||||
test_rendered_text_distinguishes_types()
|
||||
print("✓ test_rendered_text_distinguishes_types passed")
|
||||
test_to_json_serialization()
|
||||
print("✓ test_to_json_serialization passed")
|
||||
test_audit_trail_integration()
|
||||
print("✓ test_audit_trail_integration passed")
|
||||
print("\nAll tests passed!")
|
||||
3
timmy-local/hardware/README.md
Normal file
3
timmy-local/hardware/README.md
Normal file
@@ -0,0 +1,3 @@
|
||||
# hardware MCP config
|
||||
|
||||
Copy `hardware_mcp_config.yaml` to `~/.timmy/hardware/hardware_mcp_config.yaml` to enable runtime tuning.
|
||||
67
timmy-local/hardware_mcp_config.yaml
Normal file
67
timmy-local/hardware_mcp_config.yaml
Normal file
@@ -0,0 +1,67 @@
|
||||
# ═══════════════════════════════════════════════════════════════════════
|
||||
# Local Hardware MCP — Runtime Configuration
|
||||
# ═══════════════════════════════════════════════════════════════════════
|
||||
# Edit this file to tune hardware control settings.
|
||||
# Hermes loads this at session start when the hardware MCP server is enabled.
|
||||
#
|
||||
# Location: ~/.timmy/hardware/hardware_mcp_config.yaml
|
||||
# ═══════════════════════════════════════════════════════════════════════
|
||||
|
||||
# ── Server Identity ───────────────────────────────────────────────────
|
||||
server_key: hardware
|
||||
|
||||
# ── Tool Names ────────────────────────────────────────────────────────
|
||||
# Exact tool names Hermes registers. Update if you rename tools in
|
||||
# hardware_mcp_server.py.
|
||||
tools:
|
||||
- name: file_read
|
||||
hint: "Read a file from an allowed directory (home, /tmp). Max 10 MB."
|
||||
- name: file_write
|
||||
hint: "Write text content to a file within allowed directories."
|
||||
- name: file_list
|
||||
hint: "List files and directories in a given folder."
|
||||
- name: light_list
|
||||
hint: "List all Hue lights, rooms, and scenes from OpenHue."
|
||||
- name: light_control
|
||||
hint: "Control a specific Hue light: on/off, brightness, color, temperature."
|
||||
- name: room_control
|
||||
hint: "Control all lights in a room: on/off, brightness."
|
||||
- name: scene_set
|
||||
hint: "Activate a Hue scene in a room."
|
||||
- name: system_info
|
||||
hint: "Get safe system information: OS, CPU count, memory usage, disk space."
|
||||
|
||||
# ── Security Guards ───────────────────────────────────────────────────
|
||||
guards:
|
||||
# Maximum consecutive tool errors before stopping.
|
||||
max_consecutive_errors: 3
|
||||
|
||||
# Max total hardware MCP calls per session (0 = unlimited).
|
||||
max_mcp_calls_per_session: 0
|
||||
|
||||
# Allowed directories for file operations (expanded paths).
|
||||
allowed_dirs:
|
||||
- "~"
|
||||
- "/tmp"
|
||||
- "/private/tmp"
|
||||
|
||||
# Maximum file size for reads (bytes).
|
||||
max_file_size_bytes: 10485760 # 10 MB
|
||||
|
||||
# ── OpenHue ───────────────────────────────────────────────────────────
|
||||
# Path to openhue CLI (auto-detected if in PATH).
|
||||
openhue_command: "openhue"
|
||||
|
||||
# ── Dependencies ───────────────────────────────────────────────────────
|
||||
# Prerequisites:
|
||||
# - OpenHue CLI: brew install openhue/cli/openhue (macOS) or see https://github.com/openhue/openhue-cli
|
||||
# - MCP SDK: pip install mcp
|
||||
# - For system_info: pip install psutil (optional, for detailed memory/disk metrics)
|
||||
#
|
||||
# Config in ~/.hermes/config.yaml:
|
||||
# mcp_servers:
|
||||
# hardware:
|
||||
# command: "python"
|
||||
# args: ["/Users/you/path/to/timmy-home/scripts/hardware_mcp_server.py"]
|
||||
# env:
|
||||
# OPENHUE_BRIDGE_IP: "192.168.1.xx" # optional, if openhue needs it
|
||||
Reference in New Issue
Block a user