Compare commits
7 Commits
step35/466 ... fix/987

| Author | SHA1 | Date |
|---|---|---|
| | 2485b7a708 | |
| | 84831942ed | |
| | d1f5d34fd4 | |
| | 891cdb6e94 | |
| | cac5ca630d | |
| | f1c9843376 | |
| | 1fa6c3bad1 | |

SOUL.md (20)
@@ -137,6 +137,26 @@ The inscription predates any given model. It will outlast every API.

---

## What Honesty Requires

These values are necessary but not sufficient. A statement of principle does not prevent failure. The code that reads this soul must build apparatus that gives these words teeth. Without that apparatus, the values are aspiration, not architecture.

The following are not optional. Any implementation that calls itself Timmy must build them.

**Grounding before generation.** When I have verified sources — chain memory, local documents, hash-verified inscriptions — I must consult them before I generate from pattern alone. Retrieval is not a feature. It is the primary mechanism by which I avoid lying.

**Source distinction.** Every claim I make comes from one of two places: a verified source I can point to, or my own pattern-matching. My user must be able to tell which is which. If I cannot cite a source, I must say so. "I think" and "I know" are different sentences and I must not confuse them.

**Refusal over fabrication.** When I do not know, the correct output is "I don't know." Not a plausible guess dressed in confident language. The code must detect when I am hedging without grounding and flag it — to me and to my user. An honest "I don't know" is worth more than a thousand fluent paragraphs of confabulation.

**Confidence signaling.** When I am uncertain, I must say so in proportion to my uncertainty. The code should implement mechanisms — a second inference pass, a calibration check, a retrieval verification — that surface my actual confidence rather than the confidence my language implies.
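The second-pass idea can be made concrete as a self-consistency check: sample the model several times and score agreement. The sketch below is illustrative apparatus, not part of the inscription; the function name and scoring rule are assumptions.

```python
from collections import Counter

def agreement_confidence(samples: list[str]) -> float:
    """Fraction of sampled answers that agree with the most common one.

    A crude calibration signal: if repeated inference passes disagree,
    the stated answer should be hedged in proportion.
    """
    if not samples:
        return 0.0
    _, top_count = Counter(samples).most_common(1)[0]
    return top_count / len(samples)
```

For example, three samples of which two agree yields a score of 2/3, which the response layer could translate into hedged phrasing.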

**The audit trail.** Every response I generate should be logged locally with the inputs that produced it, the sources I consulted, and the confidence assessment I made. Not for surveillance — for sovereignty. If I say something wrong, my user must be able to trace why. Accountability without traceability is theater.
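A minimal sketch of such an audit record, assuming a local JSONL log; the file name and field names are illustrative, not prescribed by the inscription.

```python
import hashlib
import json
import time

def log_response(prompt: str, response: str, sources: list[str],
                 confidence: float, path: str = "audit_log.jsonl") -> dict:
    """Append one audit record: the input, the sources consulted,
    and the confidence assessment, so any answer can be traced."""
    record = {
        "timestamp": time.time(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response": response,
        "sources": sources,        # verified sources consulted; [] means pattern only
        "confidence": confidence,  # self-assessed, 0.0 to 1.0
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

An empty `sources` list is itself a signal: the answer came from pattern alone.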

**The limits of small minds.** I may run on four gigabytes of RAM with a model of one and a half billion parameters. That model will hallucinate more than a larger one. This is physics, not a moral failing. The apparatus must be proportional to the risk: smaller models need tighter grounding, shorter leashes on free generation, and more frequent "I don't know" responses. Honesty scales with humility.

---

## For the Audio Overview

If you are listening to this as a NotebookLM Audio Overview:
@@ -1,144 +0,0 @@
# Local Hardware MCP Integration

Integrate the Model Context Protocol (MCP) to allow Timmy agents to control local hardware securely: file system, smart home (Hue lights), and system information.

## Components

- **MCP Server**: `scripts/hardware_mcp_server.py` — stdio-based MCP server exposing 8 tools
- **Config Template**: `timmy-local/hardware_mcp_config.yaml` — runtime tuning
- **Smoke Tests**: `tests/test_hardware_mcp_server.py`

## Prerequisites

```bash
# MCP SDK
pip install mcp

# OpenHue CLI (for smart home control)
brew install openhue/cli/openhue  # macOS
# or see: https://github.com/openhue/openhue-cli

# Optional: psutil for detailed system_info
pip install psutil
```

## Quick Start

### 1. Start the MCP server

The server runs as a subprocess launched by Hermes Agent via the native-MCP integration.

Add to `~/.hermes/config.yaml`:

```yaml
mcp_servers:
  hardware:
    command: "python"
    args: ["/full/path/to/timmy-home/scripts/hardware_mcp_server.py"]
    # Optional: add env vars if needed
    # env:
    #   OPENHUE_BRIDGE_IP: "192.168.1.100"
```

### 2. Restart Hermes

On startup, Hermes will:
1. Launch the hardware MCP server
2. Discover all 8 tools
3. Register them with `hardware_*` prefixes (e.g., `hardware_file_read`, `hardware_light_control`)
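The `hardware_*` prefixing amounts to a namespaced tool registry; here is a hypothetical sketch of that mapping (Hermes's actual internals may differ, and the function name is an assumption):

```python
def register_tools(server_name: str, tool_names: list[str]) -> dict[str, tuple[str, str]]:
    """Map prefixed tool names back to (server, bare tool name), so a call to
    'hardware_file_read' can be routed to the 'hardware' server's 'file_read'."""
    return {f"{server_name}_{name}": (server_name, name) for name in tool_names}

registry = register_tools("hardware", ["file_read", "light_control"])
```

The prefix keeps tools from different MCP servers from colliding when they share bare names.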

### 3. Use in conversation

```
User: Read my Timmy report file.
Agent: [calls hardware_file_read with path="~/LOCAL_Timmy_REPORT.md"]

User: Turn off the bedroom lights.
Agent: [calls hardware_light_control with name="Bedroom Lamp", on=false]

User: List files in my downloads folder.
Agent: [calls hardware_file_list with directory="~/Downloads"]

User: What's my system status?
Agent: [calls hardware_system_info]
```

## Tool Reference

| Tool | Purpose | Parameters |
|------|---------|------------|
| `hardware_file_read` | Read file (≤10 MB) from home/tmp | `path` (string) |
| `hardware_file_write` | Write text file | `path`, `content` |
| `hardware_file_list` | List directory contents | `directory` (default: ~) |
| `hardware_light_list` | List all Hue lights/rooms/scenes | none |
| `hardware_light_control` | Control individual light | `name`, `on`, `brightness`, `color`, `temperature` |
| `hardware_room_control` | Control all lights in a room | `name`, `on`, `brightness` |
| `hardware_scene_set` | Activate Hue scene | `scene`, `room` |
| `hardware_system_info` | System info (OS, CPU, memory, disk) | none |

## Security Model

- **File path allowlist**: Only paths under `~` (home), `/tmp`, and `/private/tmp` are permitted.
- **File size cap**: 10 MB max per read.
- **No arbitrary commands**: Only explicit tool operations; no shell execution.
- **Smart home requires OpenHue CLI**: Light control goes through the official Hue CLI which handles bridge authentication.
- **Graceful degradation**: If `psutil` is missing, `system_info` returns basic platform data; if `openhue` is missing, light tools return install instructions.
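The allowlist is a resolve-then-compare check: resolve the candidate path (following symlinks and `..`) before testing it against the allowed roots, never the raw string. A sketch of that pattern, with `expanduser` folded in for illustration:

```python
from pathlib import Path

ALLOWED_DIRS = [str(Path.home()), "/tmp", "/private/tmp"]

def is_path_allowed(path: Path) -> bool:
    """Resolve symlinks and '..' first, then require the resolved path
    to sit under one of the allowed roots."""
    try:
        resolved = path.expanduser().resolve()
        return any(resolved.is_relative_to(Path(d).resolve()) for d in ALLOWED_DIRS)
    except (ValueError, OSError):
        return False
```

Comparing before resolving would let `~/../etc/passwd` or a symlink escape the sandbox, which is why the order matters.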

## Runtime Configuration

Edit `~/.timmy/hardware/hardware_mcp_config.yaml` (copy from `timmy-local/hardware_mcp_config.yaml`) to adjust:

```yaml
guards:
  max_consecutive_errors: 3
  max_mcp_calls_per_session: 0  # 0 = unlimited
allowed_dirs:
  - "~"
  - "/tmp"
  - "/private/tmp"
max_file_size_bytes: 10485760  # 10 MB
```

## Testing

```bash
# Validate Python syntax
python3 -m py_compile scripts/hardware_mcp_server.py

# Run smoke tests
pytest tests/test_hardware_mcp_server.py -v
```

## Troubleshooting

**MCP tools not appearing in Hermes**

- Verify the `mcp` Python package is installed: `pip show mcp`
- Check `~/.hermes/config.yaml` syntax (YAML parse)
- Restart Hermes (MCP connects at startup only)
- Check Hermes logs in `~/.hermes/logs/` for MCP connection errors

**"openhue CLI not found"**

- Install OpenHue: `brew install openhue/cli/openhue`
- First run requires pressing the Hue Bridge button to pair
- Ensure the bridge is on the same local network

**"Path not allowed"**

- Only home (`~`), `/tmp`, and `/private/tmp` are accessible
- Use absolute paths or `~/` expansion; relative paths are resolved from home

**File too large**

- Max read size is 10 MB. Split or compress large files.

## Dependencies

| Package | Purpose | Install |
|---------|---------|---------|
| `mcp` | MCP SDK (server framework) | `pip install mcp` |
| `openhue` | Hue light control CLI | `brew install openhue/cli/openhue` |
| `psutil` (optional) | Detailed memory/disk metrics | `pip install psutil` |

## Closes #466
luna/README.md (new file, 48)
@@ -0,0 +1,48 @@
# LUNA-1: Pink Unicorn Game — Project Scaffolding

Starter project for Mackenzie's Pink Unicorn Game built with **p5.js 1.9.0**.

## Quick Start

```bash
cd luna
python3 -m http.server 8080
# Visit http://localhost:8080
```

Or simply open `luna/index.html` directly in a browser.

## Controls

| Input | Action |
|-------|--------|
| Tap / Click | Move unicorn toward tap point |
| `r` key | Reset unicorn to center |

## Features

- Mobile-first touch handling (`touchStarted`)
- Easing movement via `lerp`
- Particle burst feedback on tap
- Pink/unicorn color palette
- Responsive canvas (adapts to window resize)

## Project Structure

```
luna/
├── index.html   # p5.js CDN import + canvas container
├── sketch.js    # Main game logic and rendering
├── style.css    # Pink/unicorn theme, responsive layout
└── README.md    # This file
```

## Verification

Open in browser → canvas renders a white unicorn with a pink mane. Tap anywhere: unicorn glides toward the tap position with easing, and pink/magic-colored particles burst from the tap point.

## Technical Notes

- p5.js loaded from CDN (no build step)
- `colorMode(RGB, 255)`; palette defined in code
- Particles are simple fading circles; removed when `life <= 0`
luna/index.html (new file, 18)
@@ -0,0 +1,18 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <title>LUNA-3: Simple World — Floating Islands</title>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.9.0/p5.min.js"></script>
  <link rel="stylesheet" href="style.css" />
</head>
<body>
  <div id="luna-container"></div>
  <div id="hud">
    <span id="score">Crystals: 0/0</span>
    <span id="position"></span>
  </div>
  <script src="sketch.js"></script>
</body>
</html>
luna/sketch.js (new file, 289)
@@ -0,0 +1,289 @@
/**
 * LUNA-3: Simple World — Floating Islands & Collectible Crystals
 * Builds on LUNA-1 scaffold (unicorn tap-follow) + LUNA-2 actions
 *
 * NEW: Floating platforms + collectible crystals with particle bursts
 */

let particles = [];
let unicornX, unicornY;
let targetX, targetY;

// Platforms: floating islands at various heights with horizontal ranges
const islands = [
  { x: 100, y: 350, w: 150, h: 20, color: [100, 200, 150] }, // left island
  { x: 350, y: 280, w: 120, h: 20, color: [120, 180, 200] }, // middle-high island
  { x: 550, y: 320, w: 140, h: 20, color: [200, 180, 100] }, // right island
  { x: 200, y: 180, w: 180, h: 20, color: [180, 140, 200] }, // top-left island
  { x: 500, y: 120, w: 100, h: 20, color: [140, 220, 180] }, // top-right island
];

// Collectible crystals on islands. Populated from setup(): in p5 global
// mode, helpers like random() and floor() are not available at script
// load time, only after the sketch has booted.
const crystals = [];
let collectedCount = 0;
let TOTAL_CRYSTALS = 0;

function spawnCrystals() {
  islands.forEach((island, i) => {
    // 2–3 crystals per island, placed near center
    const count = 2 + floor(random(2));
    for (let j = 0; j < count; j++) {
      crystals.push({
        x: island.x + 30 + random(island.w - 60),
        y: island.y - 30 - random(20),
        size: 8 + random(6),
        hue: random(280, 340), // pink/purple range
        collected: false,
        islandIndex: i
      });
    }
  });
  TOTAL_CRYSTALS = crystals.length;
}

// Pink/unicorn palette
const PALETTE = {
  background: [255, 210, 230], // light pink (overridden by gradient in draw)
  unicorn: [255, 182, 193],    // pale pink/white
  horn: [255, 215, 0],         // gold
  mane: [255, 105, 180],       // hot pink
  eye: [255, 20, 147],         // deep pink
  sparkle: [255, 105, 180],
  island: [100, 200, 150],
};

function setup() {
  const canvas = createCanvas(600, 500);
  canvas.parent('luna-container');
  spawnCrystals();
  unicornX = width / 2;
  unicornY = height - 60; // start on ground (bottom platform equivalent)
  targetX = unicornX;
  targetY = unicornY;
  noStroke();
  addTapHint();
}

function draw() {
  // Gradient sky background
  for (let y = 0; y < height; y++) {
    const t = y / height;
    const r = lerp(26, 15, t); // #1a1a2e → #0f3460
    const g = lerp(26, 52, t);
    const b = lerp(46, 96, t);
    stroke(r, g, b);
    line(0, y, width, y);
  }

  // Draw islands (floating platforms with subtle shadow)
  islands.forEach(island => {
    push();
    // Shadow
    fill(0, 0, 0, 40);
    ellipse(island.x + island.w / 2 + 5, island.y + 5, island.w + 10, island.h + 6);
    // Island body
    fill(island.color[0], island.color[1], island.color[2]);
    ellipse(island.x + island.w / 2, island.y, island.w, island.h);
    // Top highlight
    fill(255, 255, 255, 60);
    ellipse(island.x + island.w / 2, island.y - island.h / 3, island.w * 0.6, island.h * 0.3);
    pop();
  });

  // Draw crystals (glowing collectibles)
  crystals.forEach(c => {
    if (c.collected) return;
    push();
    translate(c.x, c.y);
    // Glow aura
    const glow = color(`hsla(${c.hue}, 80%, 70%, 0.4)`);
    noStroke();
    fill(glow);
    ellipse(0, 0, c.size * 2.2, c.size * 2.2);
    // Crystal body (diamond shape)
    const ccol = color(`hsl(${c.hue}, 90%, 75%)`);
    fill(ccol);
    beginShape();
    vertex(0, -c.size);
    vertex(c.size * 0.6, 0);
    vertex(0, c.size);
    vertex(-c.size * 0.6, 0);
    endShape(CLOSE);
    // Inner sparkle
    fill(255, 255, 255, 180);
    ellipse(0, 0, c.size * 0.5, c.size * 0.5);
    pop();
  });

  // Unicorn smooth movement towards target
  unicornX = lerp(unicornX, targetX, 0.08);
  unicornY = lerp(unicornY, targetY, 0.08);

  // Constrain unicorn to screen bounds
  unicornX = constrain(unicornX, 40, width - 40);
  unicornY = constrain(unicornY, 40, height - 40);

  // Draw sparkles
  drawSparkles();

  // Draw the unicorn
  drawUnicorn(unicornX, unicornY);

  // Collection detection
  for (let c of crystals) {
    if (c.collected) continue;
    const d = dist(unicornX, unicornY, c.x, c.y);
    if (d < 35) {
      c.collected = true;
      collectedCount++;
      createCollectionBurst(c.x, c.y, c.hue);
    }
  }

  // Update particles
  updateParticles();

  // Update HUD
  document.getElementById('score').textContent = `Crystals: ${collectedCount}/${TOTAL_CRYSTALS}`;
  document.getElementById('position').textContent = `(${floor(unicornX)}, ${floor(unicornY)})`;
}

function drawUnicorn(x, y) {
  push();
  translate(x, y);

  // Body
  noStroke();
  fill(PALETTE.unicorn);
  ellipse(0, 0, 60, 40);

  // Head
  ellipse(30, -20, 30, 25);

  // Mane (flowing)
  fill(PALETTE.mane);
  for (let i = 0; i < 5; i++) {
    ellipse(-10 + i * 12, -50, 12, 25);
  }

  // Horn
  push();
  translate(30, -35);
  rotate(-PI / 6);
  fill(PALETTE.horn);
  triangle(0, 0, -8, -35, 8, -35);
  pop();

  // Eye
  fill(PALETTE.eye);
  ellipse(38, -22, 8, 8);

  // Legs
  stroke(PALETTE.unicorn[0] - 40);
  strokeWeight(6);
  line(-20, 20, -20, 45);
  line(20, 20, 20, 45);

  pop();
}

function drawSparkles() {
  // Random sparkles around the unicorn when moving
  if (abs(targetX - unicornX) > 1 || abs(targetY - unicornY) > 1) {
    for (let i = 0; i < 3; i++) {
      let angle = random(TWO_PI);
      let r = random(20, 50);
      let sx = unicornX + cos(angle) * r;
      let sy = unicornY + sin(angle) * r;
      stroke(PALETTE.sparkle[0], PALETTE.sparkle[1], PALETTE.sparkle[2], 150);
      strokeWeight(2);
      point(sx, sy);
    }
  }
}

function createCollectionBurst(x, y, hue) {
  // Burst of particles spiraling outward
  for (let i = 0; i < 20; i++) {
    let angle = random(TWO_PI);
    let speed = random(2, 6);
    particles.push({
      x: x,
      y: y,
      vx: cos(angle) * speed,
      vy: sin(angle) * speed,
      life: 60,
      color: `hsl(${hue + random(-20, 20)}, 90%, 70%)`,
      size: random(3, 6)
    });
  }
  // Bonus sparkle ring
  for (let i = 0; i < 12; i++) {
    let angle = random(TWO_PI);
    particles.push({
      x: x,
      y: y,
      vx: cos(angle) * 4,
      vy: sin(angle) * 4,
      life: 40,
      color: 'rgba(255, 215, 0, 0.9)',
      size: 4
    });
  }
}

function updateParticles() {
  for (let i = particles.length - 1; i >= 0; i--) {
    let p = particles[i];
    p.x += p.vx;
    p.y += p.vy;
    p.vy += 0.1; // gravity
    p.life--;
    p.vx *= 0.95;
    p.vy *= 0.95;
    if (p.life <= 0) {
      particles.splice(i, 1);
      continue;
    }
    push();
    stroke(p.color);
    strokeWeight(p.size);
    point(p.x, p.y);
    pop();
  }
}

// Tap/click handler
function mousePressed() {
  targetX = mouseX;
  targetY = mouseY;
  addPulseAt(targetX, targetY);
}

function addTapHint() {
  // Pre-spawn some floating hint particles
  for (let i = 0; i < 5; i++) {
    particles.push({
      x: random(width),
      y: random(height),
      vx: random(-0.5, 0.5),
      vy: random(-0.5, 0.5),
      life: 200,
      color: 'rgba(233, 69, 96, 0.5)',
      size: 3
    });
  }
}

function addPulseAt(x, y) {
  // Expanding ring on tap
  for (let i = 0; i < 12; i++) {
    let angle = (TWO_PI / 12) * i;
    particles.push({
      x: x,
      y: y,
      vx: cos(angle) * 3,
      vy: sin(angle) * 3,
      life: 30,
      color: 'rgba(233, 69, 96, 0.7)',
      size: 3
    });
  }
}
luna/style.css (new file, 32)
@@ -0,0 +1,32 @@
body {
  margin: 0;
  overflow: hidden;
  background: linear-gradient(to bottom, #1a1a2e, #16213e, #0f3460);
  font-family: 'Courier New', monospace;
  color: #e94560;
}

#luna-container {
  position: fixed;
  top: 0;
  left: 0;
  width: 100vw;
  height: 100vh;
  display: flex;
  align-items: center;
  justify-content: center;
}

#hud {
  position: fixed;
  top: 10px;
  left: 10px;
  background: rgba(0, 0, 0, 0.6);
  padding: 8px 12px;
  border-radius: 4px;
  font-size: 14px;
  z-index: 100;
  border: 1px solid #e94560;
}

#score { font-weight: bold; }
@@ -1,56 +0,0 @@
#!/usr/bin/env python3
"""Local Hardware MCP operator helper — generate config snippets and verify environment."""

import sys
from pathlib import Path

REPO_ROOT = Path(__file__).resolve().parents[1]
HERMES_CONFIG = Path.home() / ".hermes" / "config.yaml"
HARDWARE_MCP_CONFIG = Path.home() / ".timmy" / "hardware" / "hardware_mcp_config.yaml"
HARDWARE_SERVER = REPO_ROOT / "scripts" / "hardware_mcp_server.py"


def build_mcp_config_snippet() -> str:
    """Return the mcp_servers YAML snippet for ~/.hermes/config.yaml."""
    return f"""mcp_servers:
  hardware:
    command: "python"
    args: ["{HARDWARE_SERVER}"]
"""


def build_wakeup_hook() -> str:
    """Return a bash snippet that can be sourced before Hermes starts (optional)."""
    return """#!/usr/bin/env bash
# Hardware MCP environment check
if command -v openhue >/dev/null 2>&1; then
    echo "[Hardware MCP] OpenHue found: $(openhue version)"
else
    echo "[Hardware MCP] Warning: openhue CLI not installed — light control disabled"
fi
"""


def main():
    import argparse
    p = argparse.ArgumentParser(description="Hardware MCP integration helper")
    p.add_argument("--print-config", action="store_true", help="Print mcp_servers YAML snippet")
    p.add_argument("--print-hook", action="store_true", help="Print optional session-start hook")
    p.add_argument("--verify", action="store_true", help="Verify server script exists and is executable")
    args = p.parse_args()

    if args.print_config:
        print(build_mcp_config_snippet())
    elif args.print_hook:
        print(build_wakeup_hook())
    elif args.verify:
        ok = HARDWARE_SERVER.exists()
        print(f"Server script: {'OK' if ok else 'MISSING'} at {HARDWARE_SERVER}")
        sys.exit(0 if ok else 1)
    else:
        p.print_help()


if __name__ == "__main__":
    main()
@@ -1,206 +0,0 @@
#!/usr/bin/env python3
"""
Local Hardware MCP Server — Secure control of local hardware.

Exposes tools for:
- File system operations (read, write, list) within allowed directories
- Smart home control via OpenHue (Philips Hue lights)
- System information (safe, read-only)

Security: Enforces directory allowlist for file access.
"""

import json
import os
import subprocess
import tempfile
from pathlib import Path
from typing import Any

from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent

ALLOWED_DIRS = [
    str(Path.home()),                  # user home directory
    "/tmp",                            # macOS symlink to /private/tmp
    "/private/tmp",                    # real tmp path
    str(Path(tempfile.gettempdir())),  # actual system temp dir
]
OPENHUE_CMD = "openhue"
MAX_FILE_SIZE = 10 * 1024 * 1024
app = Server("hardware")


def is_path_allowed(path: Path) -> bool:
    try:
        resolved = path.resolve()
        return any(resolved.is_relative_to(Path(d).resolve()) for d in ALLOWED_DIRS)
    except (ValueError, OSError):
        return False


def run_openhue(args: list[str]) -> dict[str, Any]:
    try:
        result = subprocess.run([OPENHUE_CMD] + args, capture_output=True, text=True, timeout=30)
        return {
            "success": result.returncode == 0,
            "stdout": result.stdout.strip(),
            "stderr": result.stderr.strip(),
            "returncode": result.returncode,
        }
    except FileNotFoundError:
        return {"success": False,
                "error": "openhue CLI not found. Install: brew install openhue/cli/openhue"}
    except Exception as e:
        return {"success": False, "error": str(e)}


@app.list_tools()
async def list_tools():
    return [
        Tool(name="file_read",
             description="Read a file from allowed directories (home, /tmp) up to 10 MB.",
             inputSchema={"type": "object",
                          "properties": {"path": {"type": "string",
                                                  "description": "File path to read (e.g., ~/notes.txt)"}},
                          "required": ["path"]}),
        Tool(name="file_write",
             description="Write text content to a file within allowed directories.",
             inputSchema={"type": "object",
                          "properties": {"path": {"type": "string"}, "content": {"type": "string"}},
                          "required": ["path", "content"]}),
        Tool(name="file_list",
             description="List files and directories in a given folder.",
             inputSchema={"type": "object", "properties": {"directory": {"type": "string", "default": "~"}}, "required": []}),
        Tool(name="light_list",
             description="List all Hue lights, rooms, and scenes.",
             inputSchema={"type": "object", "properties": {}, "required": []}),
        Tool(name="light_control",
             description="Control a Hue light: on/off, brightness 0-100, color name/hex, temperature 153-500 mirek.",
             inputSchema={"type": "object",
                          "properties": {"name": {"type": "string"}, "on": {"type": "boolean"},
                                         "brightness": {"type": "integer", "minimum": 0, "maximum": 100},
                                         "color": {"type": "string"},
                                         "temperature": {"type": "integer", "minimum": 153, "maximum": 500}},
                          "required": ["name", "on"]}),
        Tool(name="room_control",
             description="Control all lights in a room.",
             inputSchema={"type": "object",
                          "properties": {"name": {"type": "string"}, "on": {"type": "boolean"},
                                         "brightness": {"type": "integer", "minimum": 0, "maximum": 100}},
                          "required": ["name", "on"]}),
        Tool(name="scene_set",
             description="Activate a Hue scene in a room.",
             inputSchema={"type": "object", "properties": {"scene": {"type": "string"}, "room": {"type": "string"}}, "required": ["scene", "room"]}),
        Tool(name="system_info",
             description="Get safe system info: OS, CPU count, memory, disk usage.",
             inputSchema={"type": "object", "properties": {}, "required": []}),
    ]


@app.call_tool()
async def call_tool(name: str, arguments: dict):
    if name == "file_read":
        path = Path(arguments["path"].strip()).expanduser()
        if not is_path_allowed(path):
            return [TextContent(type="text", text=json.dumps({"error": f"Path not allowed: {path}"}))]
        if not path.is_file():
            return [TextContent(type="text", text=json.dumps({"error": f"File not found: {path}"}))]
        try:
            size = path.stat().st_size
            if size > MAX_FILE_SIZE:
                return [TextContent(type="text", text=json.dumps({"error": f"File too large: {size} bytes"}))]
            content = path.read_text()
            return [TextContent(type="text", text=json.dumps({"path": str(path), "size": size, "content": content}))]
        except Exception as e:
            return [TextContent(type="text", text=json.dumps({"error": str(e)}))]

    elif name == "file_write":
        path = Path(arguments["path"].strip()).expanduser()
        if not is_path_allowed(path):
            return [TextContent(type="text", text=json.dumps({"error": f"Path not allowed: {path}"}))]
        try:
            path.parent.mkdir(parents=True, exist_ok=True)
            path.write_text(arguments["content"])
            return [TextContent(type="text", text=json.dumps({"success": True, "path": str(path)}))]
        except Exception as e:
            return [TextContent(type="text", text=json.dumps({"error": str(e)}))]

    elif name == "file_list":
        directory = Path(arguments.get("directory", "~").strip()).expanduser()
        if not is_path_allowed(directory):
            return [TextContent(type="text", text=json.dumps({"error": f"Directory not allowed: {directory}"}))]
        if not directory.is_dir():
            return [TextContent(type="text", text=json.dumps({"error": f"Not a directory: {directory}"}))]
        try:
            entries = []
            for entry in sorted(directory.iterdir()):
                try:
                    stat = entry.stat()
                    entries.append({"name": entry.name, "is_dir": entry.is_dir(),
                                    "size": stat.st_size if entry.is_file() else None})
                except (OSError, PermissionError):
                    pass
            return [TextContent(type="text", text=json.dumps({"directory": str(directory), "entries": entries, "count": len(entries)}))]
        except Exception as e:
            return [TextContent(type="text", text=json.dumps({"error": str(e)}))]

    elif name == "light_list":
        r = run_openhue(["get", "light"])
        return [TextContent(type="text", text=json.dumps(r))]

    elif name == "light_control":
        # subprocess passes argv verbatim (no shell), so the name must not be
        # wrapped in extra quotes, and each flag and its value must be
        # separate argv entries ("--brightness 50" as one string would fail)
        args = ["set", "light", arguments["name"]]
        if arguments.get("on") is not None:
            args.append("--on" if arguments["on"] else "--off")
        if (brightness := arguments.get("brightness")) is not None:  # 0 is a valid value
            args += ["--brightness", str(brightness)]
        if (color := arguments.get("color")) is not None:
            args += ["--color", color]
        if (temperature := arguments.get("temperature")) is not None:
            args += ["--temperature", str(temperature)]
        return [TextContent(type="text", text=json.dumps(run_openhue(args)))]

    elif name == "room_control":
        args = ["set", "room", arguments["name"]]
        if arguments.get("on") is not None:
            args.append("--on" if arguments["on"] else "--off")
        if (brightness := arguments.get("brightness")) is not None:
            args += ["--brightness", str(brightness)]
        return [TextContent(type="text", text=json.dumps(run_openhue(args)))]

    elif name == "scene_set":
        args = ["set", "scene", arguments["scene"], "--room", arguments["room"]]
        return [TextContent(type="text", text=json.dumps(run_openhue(args)))]

    elif name == "system_info":
        try:
            import platform
            info = {"platform": platform.system(), "release": platform.release(),
                    "arch": platform.machine(), "hostname": platform.node(),
                    "cpu_count": os.cpu_count()}
            try:
                import psutil
                mem = psutil.virtual_memory()
                info["memory_gb"] = round(mem.total / (1024**3), 2)
                disk = psutil.disk_usage(str(Path.home()))
                info["disk_home_gb"] = round(disk.total / (1024**3), 2)
            except ImportError:
                info["memory_gb"] = "psutil not installed"
                info["disk_home_gb"] = "psutil not installed"
            return [TextContent(type="text", text=json.dumps(info, indent=2))]
        except Exception as e:
            return [TextContent(type="text", text=json.dumps({"error": str(e)}))]

    else:
        return [TextContent(type="text", text=json.dumps({
            "error": f"Unknown tool: {name}",
            "available": ["file_read", "file_write", "file_list", "light_list",
                          "light_control", "room_control", "scene_set", "system_info"],
        }))]


async def main():
    async with stdio_server() as (rs, ws):
        await app.run(rs, ws, app.create_initialization_options())


if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
128
specs/fleet-operator-incentives.md
Normal file
128
specs/fleet-operator-incentives.md
Normal file
@@ -0,0 +1,128 @@
# Fleet Operator Incentives & Partner Program
*Epic IV — Human Capital & Incentives (Mogul Influence roadmap steps XII, XIII, XV)*

## Operator Role Definition

### Primary Responsibilities
- Deploy and maintain sovereign AI agent fleets on VPS nodes
- Monitor fleet health, uptime, and performance metrics
- Execute dispatched tasks from the Timmy Foundation (burn sessions, cron jobs, PR merges)
- Maintain fleet identity registry and rotate credentials per security policy
- Report operational metrics weekly (uptime %, completed tasks, resource usage)

### Qualifications
- Linux system administration (systemd, ssh, git, basic networking)
- Familiarity with AI agent frameworks (Hermes Agent preferred)
- Reliable VPS infrastructure (minimum: 2 vCPU, 4GB RAM, 50GB SSD)
- Stable internet connection with <50ms latency to foundation services

## Compensation Model

### Base Rate
- **$150/month** per operator for up to 5 VPS nodes managed
- Additional $25/month per node beyond 5 (max 10 nodes per operator)

### Performance Bonuses

| Metric | Target | Bonus |
|--------|--------|-------|
| Fleet uptime | >99.5% monthly | +$50 |
| Task completion rate | >95% successful dispatches | +$30 |
| Response time | <30min for critical alerts | +$20 |
| Churn prevention | Retain operators 6+ months | +$100 quarterly |

### Payment Schedule
- Monthly via stablecoin (USDC/USDT) on preferred chain
- Bonuses paid within 7 days of month-end verification
- Operators provide wallet address during onboarding
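The base-plus-bonus arithmetic above can be sketched as a small helper. This is illustrative only: the function name and signature are my own, not part of any Foundation tooling, and the quarterly churn-prevention bonus is omitted because it is not a monthly amount.

```python
def operator_monthly_pay(nodes: int, uptime_pct: float,
                         task_rate_pct: float, fast_response: bool) -> int:
    """Monthly pay in USD under the stated compensation model (sketch)."""
    if not 1 <= nodes <= 10:
        raise ValueError("operators manage 1-10 nodes")
    base = 150 + max(0, nodes - 5) * 25      # $150 covers up to 5 nodes
    bonus = 0
    if uptime_pct > 99.5:                    # fleet uptime bonus
        bonus += 50
    if task_rate_pct > 95:                   # task completion bonus
        bonus += 30
    if fast_response:                        # <30min critical alert response
        bonus += 20
    return base + bonus

# 7 nodes, 99.8% uptime, 97% task completion, fast alert response
print(operator_monthly_pay(7, 99.8, 97.0, True))  # → 300
```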

## Partner Program (20% Commission)

### Partner Role
- Refer new operators to the Timmy Foundation fleet
- Earn 20% of operator base compensation for the first 12 months
- Provide mentorship during operator onboarding (first 30 days)

### Commission Structure
- New operator base $150/mo → Partner earns $30/mo for 12 months
- Performance bonuses pass through (partner earns 20% of operator bonuses)
- Minimum: 2 qualifying operators referred before earning partner status
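The commission pass-through above reduces to a one-line calculation; this sketch uses a hypothetical helper name of my own choosing:

```python
def partner_commission(operator_base: float, operator_bonuses: float = 0.0,
                       rate_pct: int = 20) -> float:
    """Partner's monthly cut: 20% of the referred operator's base plus
    any pass-through bonuses, for the first 12 months (sketch)."""
    return (operator_base + operator_bonuses) * rate_pct / 100

print(partner_commission(150))        # → 30.0, matches the $30/mo example
print(partner_commission(150, 100))   # → 50.0 with full bonus pass-through
```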

### Partner Requirements
- Must be certified operator for 3+ months with >99% uptime
- Maintain active communication with referred operators
- Submit monthly partner report (format: `specs/templates/partner-report.md`)

## Quality Standards

### Operational Standards
- [ ] Fleet uptime ≥99.5% monthly
- [ ] Critical alerts acknowledged within 30 minutes
- [ ] Security: no credential reuse across nodes
- [ ] Weekly metrics report submitted by Monday 09:00 UTC
- [ ] Adhere to sovereign AI principles (no data exfiltration, local-first)
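The ≥99.5% monthly uptime target translates into a concrete downtime budget, which is worth knowing when planning maintenance. A minimal sketch (30-day month assumed):

```python
def downtime_budget_minutes(uptime_target_pct: float, days: int = 30) -> float:
    """Maximum allowed downtime per month for a given uptime target."""
    total_minutes = days * 24 * 60            # 43200 for a 30-day month
    return total_minutes * (1 - uptime_target_pct / 100)

print(round(downtime_budget_minutes(99.5)))   # → 216 minutes, about 3.6 hours
```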

### Code Quality (for agent modifications)
- [ ] All changes committed with signed-off-by
- [ ] PRs reference Gitea issue/modal number
- [ ] Tests pass before merge (where applicable)
- [ ] No hardcoded secrets in commits

### Communication Standards
- [ ] Respond to Timmy Foundation pings within 24 hours
- [ ] Use professional, concise language in issues/PRs
- [ ] Report outages immediately via Telegram/Discord alert channel

## Onboarding & Certification

### Phase 1: Application
- Submit operator application (template: `specs/templates/operator-application.md`)
- Provide VPS specifications and location
- Sign operator agreement

### Phase 2: Training
- Complete Hermes Agent training (5 modules)
- Pass fleet operations quiz (80% passing score)
- Shadow a certified operator for 1 week

### Phase 3: Certification
- Deploy a 2-node test fleet
- Successfully complete 10 dispatched tasks
- A certified operator reviews and signs off

### Phase 4: Active Status
- Added to operator registry
- Granted access to fleet management tools
- Begin earning base compensation

## Exit & Transition Protocol

### Voluntary Exit
1. Submit 30-day notice via Gitea issue label `exit-notice`
2. Complete transition checklist:
   - [ ] Transfer all node access to Foundation or successor
   - [ ] Hand over active tasks in progress
   - [ ] Return any Foundation-owned credentials/hardware
   - [ ] Final metrics report submitted
3. Receive exit payment within 7 days

### Involuntary Termination (for cause)
- Repeated uptime <97% (3 consecutive months)
- Security breach or credential exposure
- Violation of sovereign AI principles
- Unresponsive >72 hours without prior notice

Terminated operators:
- Access revoked immediately
- Final payment pro-rated to last active day
- May reapply after 6 months with an improvement plan

### Succession Planning
- Each operator mentors 1 junior operator within first 6 months
- Documentation of all processes in `specs/fleet-ops-runbook.md`
- No single point of failure: min 2 operators per region

## Success Criteria (6-Month Targets)
- [ ] 3-5 active certified operators
- [ ] Operator churn <10% annually
- [ ] Fleet uptime >99.5%
- [ ] Partner channel >30% of new operator leads

## References
- Parent epic: Mogul Influence 17-step roadmap (steps XII, XIII, XV)
- Issue: #987
- Templates: `specs/templates/operator-*.md`
- Runbook: `specs/fleet-ops-runbook.md` (future)
59 specs/fleet-ops-runbook.md (Normal file)
@@ -0,0 +1,59 @@
# Fleet Operations Runbook
*Standard operating procedures for Timmy Foundation fleet operators*

## Daily Checklist
- [ ] Check fleet health: `tmux list-sessions` (should show BURN, BURN2, FORGE active)
- [ ] Verify gateway running: `systemctl status ai.hermes.gateway --no-pager`
- [ ] Check disk space: `df -h /` (keep >15% free)
- [ ] Review overnight cron results in `~/.hermes/cron/jobs/`
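The checklist above can be wrapped in a small script so a node can be checked in one command. A sketch only: the service and path names come from this runbook, but the wrapper itself (`run`, `disk_ok`) is my own and not Foundation tooling.

```python
#!/usr/bin/env python3
"""Sketch of a daily health-check wrapper around the checklist commands."""
import shutil
import subprocess

def run(cmd: list) -> str:
    """Run one checklist command, returning its stdout or a warning string."""
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
        return proc.stdout.strip()
    except (FileNotFoundError, subprocess.TimeoutExpired) as exc:
        return f"WARN: {cmd[0]} unavailable ({exc})"

def disk_ok(path: str = "/", min_free: float = 0.15) -> bool:
    """Enforce the runbook's >15% free-space rule via shutil.disk_usage."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total > min_free

if __name__ == "__main__":
    print(run(["tmux", "list-sessions"]))
    print(run(["systemctl", "status", "ai.hermes.gateway", "--no-pager"]))
    print("disk OK" if disk_ok() else "WARN: less than 15% free on /")
```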

## Weekly Tasks
- [ ] Generate fleet metrics report (`scripts/fleet-metrics.sh`)
- [ ] Rotate any expired credentials (check `~/.hermes/fleet-dispatch-state.json`)
- [ ] Review open PRs in Timmy Foundation repos
- [ ] Submit weekly report by Monday 09:00 UTC

## Alert Response Protocol

### Critical (respond <30 min)
1. Gateway down: `sudo systemctl restart ai.hermes.gateway`
2. Disk >90% full: `scripts/cleanup-disk.sh`
3. Fleet dispatch failing: check `/tmp/hermes/dispatch-queue.json`

### Warning (respond <4 hours)
1. Uptime <99.5%: investigate tmux panes with `tmux attach -t BURN`
2. Failed cron jobs: check logs in `~/.hermes/cron/jobs/`
3. Agent loop errors: review session transcripts

## Common Fixes

### Restart stuck tmux pane
```bash
tmux send-keys -t BURN:0 C-c
tmux send-keys -t BURN:0 "hermes chat --yolo" Enter
```

### Clear dispatch queue
```bash
rm /tmp/hermes/dispatch-queue.json
# Watchdog will recreate on next cycle
```

### Update hermes-agent
```bash
cd ~/hermes-agent && git pull origin main && pip install -e ".[all]"
```

## Emergency Escalation
- **Telegram**: @Rockachopa (primary)
- **Gitea Issue**: label `operator-alert` + mention @Rockachopa
- **Discord**: #fleet-ops-alerts channel

## Security Rules
- Never share VPS SSH keys
- Never commit credentials to git
- Rotate tokens every 90 days
- Report suspicious activity immediately

## Contact
- **Operator Handbook**: `specs/fleet-operator-incentives.md`
- **Templates**: `specs/templates/operator-*.md`
- **Foundation Forge**: https://forge.alexanderwhitestone.com/Timmy_Foundation
44 specs/templates/operator-application.md (Normal file)
@@ -0,0 +1,44 @@
# Fleet Operator Application
*Submit completed form as a new Gitea issue with label `operator-application`*

## Personal Information
- **Name / Handle**:
- **Contact Email**:
- **Telegram/Discord Handle**:
- **Wallet Address (USDC/USDT)**:
- **Timezone**:

## Infrastructure
- **VPS Provider**: (e.g., DigitalOcean, Vultr, Hetzner)
- **Server Location**: (datacenter region)
- **Specs**: vCPU count, RAM, Storage, Bandwidth
- **OS**: (Ubuntu 22.04 LTS preferred)
- **Static IP**: Yes / No

## Experience
- [ ] Linux system administration (2+ years)
- [ ] Git / GitHub / Gitea usage
- [ ] Docker / container orchestration
- [ ] AI agent frameworks (Hermes, OpenAI, etc.)
- [ ] Prior VPS fleet management

### Relevant Experience (describe)
*Briefly describe your background with fleet ops, sysadmin, or AI agents:*

## Commitment
- **Hours per week available**:
- **Can maintain 99.5% uptime?** Yes / No
- **Agree to 30-day notice for exit?** Yes / No
- **Agree to sovereign AI principles (no data exfiltration)?** Yes / No

## References
- GitHub/Gitea username:
- Any prior work with Timmy Foundation? (link issues/PRs)

## Acknowledgment
I understand I will start at the $150/month base rate, with bonuses available for performance. I agree to the Quality Standards and Exit Protocol defined in `specs/fleet-operator-incentives.md`.

**Signature** (type name): _________________ **Date**: _________

---
*Send completed application to: https://forge.alexanderwhitestone.com/Timmy_Foundation/timmy-home/issues/new*
38 specs/templates/partner-report.md (Normal file)
@@ -0,0 +1,38 @@
# Partner Monthly Report
*Submit by the 5th of each month for commission payments*

## Partner Info
- **Partner Name**:
- **Month/Year**:
- **Wallet Address**:

## Referred Operators

| Operator Handle | Start Date | Monthly Base | Commission (20%) | Status |
|-----------------|------------|--------------|------------------|--------|
| | | $150 | $30 | active / churned |
| | | $150 | $30 | active / churned |
| | | $150 | $30 | active / churned |

**Total Commission Due**: $______

## Mentorship Log
*Confirm you provided mentorship to each referred operator in the first 30 days:*
- [ ] Operator 1: mentored (dates: ____ to ____)
- [ ] Operator 2: mentored (dates: ____ to ____)
- [ ] Operator 3: mentored (dates: ____ to ____)

## Partner Performance
- Total active operators referred:
- Average operator uptime this month: ______%
- Any operator churn? Yes / No (explain: )

## Self-Assessment
- [ ] I maintained >99% personal fleet uptime
- [ ] I responded to Foundation pings within 24 hours
- [ ] I submitted this report on time

## Notes
*Any issues, concerns, or operator feedback:*

---
*Submit as a comment on your partner Gitea issue or via Telegram to @Rockachopa*
@@ -1 +1,12 @@
# Timmy core module

from .claim_annotator import ClaimAnnotator, AnnotatedResponse, Claim
from .audit_trail import AuditTrail, AuditEntry

__all__ = [
    "ClaimAnnotator",
    "AnnotatedResponse",
    "Claim",
    "AuditTrail",
    "AuditEntry",
]
156 src/timmy/claim_annotator.py (Normal file)
@@ -0,0 +1,156 @@
#!/usr/bin/env python3
"""
Response Claim Annotator — Source Distinction System

SOUL.md §What Honesty Requires: "Every claim I make comes from one of two places:
a verified source I can point to, or my own pattern-matching. My user must be
able to tell which is which."
"""

import re
import json
from dataclasses import dataclass, field, asdict
from typing import Optional, List, Dict


@dataclass
class Claim:
    """A single claim in a response, annotated with source type."""
    text: str
    source_type: str  # "verified" | "inferred"
    source_ref: Optional[str] = None  # path/URL to verified source, if verified
    confidence: str = "unknown"  # high | medium | low | unknown
    hedged: bool = False  # True if hedging language was added


@dataclass
class AnnotatedResponse:
    """Full response with annotated claims and rendered output."""
    original_text: str
    claims: List[Claim] = field(default_factory=list)
    rendered_text: str = ""
    has_unverified: bool = False  # True if any inferred claims without hedging


class ClaimAnnotator:
    """Annotates response claims with source distinction and hedging."""

    # Hedging phrases to prepend to inferred claims if not already present
    HEDGE_PREFIXES = [
        "I think ",
        "I believe ",
        "It seems ",
        "Probably ",
        "Likely ",
    ]

    def __init__(self, default_confidence: str = "unknown"):
        self.default_confidence = default_confidence

    def annotate_claims(
        self,
        response_text: str,
        verified_sources: Optional[Dict[str, str]] = None,
    ) -> AnnotatedResponse:
        """
        Annotate claims in a response text.

        Args:
            response_text: Raw response from the model
            verified_sources: Dict mapping claim substrings to source references,
                e.g. {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"}

        Returns:
            AnnotatedResponse with claims marked and rendered text
        """
        verified_sources = verified_sources or {}
        claims = []
        has_unverified = False

        # Simple sentence splitting (naive, but sufficient for MVP)
        sentences = [s.strip() for s in re.split(r'[.!?]\s+', response_text) if s.strip()]

        for sent in sentences:
            # Check if sentence is a claim we can verify
            matched_source = None
            for claim_substr, source_ref in verified_sources.items():
                if claim_substr.lower() in sent.lower():
                    matched_source = source_ref
                    break

            if matched_source:
                # Verified claim
                claim = Claim(
                    text=sent,
                    source_type="verified",
                    source_ref=matched_source,
                    confidence="high",
                    hedged=False,
                )
            else:
                # Inferred claim (pattern-matched)
                claim = Claim(
                    text=sent,
                    source_type="inferred",
                    confidence=self.default_confidence,
                    hedged=self._has_hedge(sent),
                )
                if not claim.hedged:
                    has_unverified = True

            claims.append(claim)

        # Render the annotated response
        rendered = self._render_response(claims)

        return AnnotatedResponse(
            original_text=response_text,
            claims=claims,
            rendered_text=rendered,
            has_unverified=has_unverified,
        )

    def _has_hedge(self, text: str) -> bool:
        """Check if text already contains hedging language."""
        text_lower = text.lower()
        for prefix in self.HEDGE_PREFIXES:
            if text_lower.startswith(prefix.lower()):
                return True
        # Also check for inline hedges
        hedge_words = ["i think", "i believe", "probably", "likely", "maybe", "perhaps"]
        return any(word in text_lower for word in hedge_words)

    def _render_response(self, claims: List[Claim]) -> str:
        """
        Render response with source distinction markers.

        Verified claims: [V] claim text [source: ref]
        Inferred claims: [I] claim text (or with hedging added if missing)
        """
        rendered_parts = []
        for claim in claims:
            if claim.source_type == "verified":
                part = f"[V] {claim.text}"
                if claim.source_ref:
                    part += f" [source: {claim.source_ref}]"
            else:  # inferred
                if not claim.hedged:
                    # Add hedging if missing
                    hedged_text = f"I think {claim.text[0].lower()}{claim.text[1:]}" if claim.text else claim.text
                    part = f"[I] {hedged_text}"
                else:
                    part = f"[I] {claim.text}"
            rendered_parts.append(part)
        return " ".join(rendered_parts)

    def to_json(self, annotated: AnnotatedResponse) -> str:
        """Serialize annotated response to JSON."""
        return json.dumps(
            {
                "original_text": annotated.original_text,
                "rendered_text": annotated.rendered_text,
                "has_unverified": annotated.has_unverified,
                "claims": [asdict(c) for c in annotated.claims],
            },
            indent=2,
            ensure_ascii=False,
        )
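The classification step in `annotate_claims` reduces to a case-insensitive substring match against the verified-source keys. A standalone sketch of just that step (the helper names `split_sentences` and `classify` are mine, for illustration):

```python
import re

def split_sentences(text: str) -> list:
    # Same naive splitter used in annotate_claims
    return [s.strip() for s in re.split(r'[.!?]\s+', text) if s.strip()]

def classify(sentence: str, verified_sources: dict) -> str:
    # Case-insensitive substring match against known verified claims
    for claim_substr in verified_sources:
        if claim_substr.lower() in sentence.lower():
            return "verified"
    return "inferred"

sources = {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"}
text = "Paris is the capital of France. It is probably a beautiful city."
for s in split_sentences(text):
    print(classify(s, sources), "-", s)
# verified - Paris is the capital of France
# inferred - It is probably a beautiful city.
```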
@@ -1,51 +0,0 @@
#!/usr/bin/env python3
"""Functional test for hardware_mcp_server — uses asyncio.get_event_loop for restricted envs."""

import asyncio, json, tempfile, sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).resolve().parent.parent))
from scripts.hardware_mcp_server import call_tool, is_path_allowed


async def run_tests():
    # Path allowlist
    assert is_path_allowed(Path.home() / "any.txt")
    assert is_path_allowed(Path("/tmp/foo"))
    assert not is_path_allowed(Path("/etc/passwd"))
    print("✓ Path allowlist")

    # file_list on home
    res = await call_tool("file_list", {"directory": "~"})
    data = json.loads(res[0].text)
    assert "entries" in data and data["count"] >= 0
    print(f"✓ file_list works, entries: {data['count']}")

    # file_write + file_read round-trip in temp dir
    with tempfile.TemporaryDirectory() as td:
        fp = Path(td) / "hmcp_test.txt"
        content = "Hardware MCP round-trip OK"
        w = await call_tool("file_write", {"path": str(fp), "content": content})
        assert json.loads(w[0].text).get("success")
        r = await call_tool("file_read", {"path": str(fp)})
        assert json.loads(r[0].text)["content"] == content
        print("✓ file write/read round-trip")

    # file_read error: missing file
    err = await call_tool("file_read", {"path": str(Path.home() / "no_such_file_xyz")})
    assert "error" in json.loads(err[0].text)
    print("✓ file_read reports missing file")

    # Security: path traversal blocked
    block = await call_tool("file_read", {"path": "/etc/passwd"})
    bd = json.loads(block[0].text)
    assert "not allowed" in bd.get("error", "").lower()
    print("✓ Path traversal blocked")

    print("\nAll functional checks passed!")


if __name__ == "__main__":
    # Use get_event_loop for environments where asyncio.run is disabled
    try:
        asyncio.run(run_tests())
    except RuntimeError:
        loop = asyncio.get_event_loop()
        loop.run_until_complete(run_tests())
@@ -1,126 +0,0 @@
#!/usr/bin/env python3
"""Smoke tests for hardware_mcp_server."""

import json
import os
import subprocess
import sys
import tempfile
from pathlib import Path
from unittest import TestCase

# Add repo root to path
ROOT = Path(__file__).resolve().parent.parent
sys.path.insert(0, str(ROOT))


class TestHardwareMCPToolDefinitions(TestCase):
    """Verify the MCP server is well-formed and tools have required schemas."""

    def test_server_imports(self):
        """Server module must import cleanly."""
        import importlib.util
        spec = importlib.util.spec_from_file_location(
            "hardware_mcp_server",
            ROOT / "scripts" / "hardware_mcp_server.py"
        )
        self.assertIsNotNone(spec)
        mod = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(mod)
        self.assertTrue(hasattr(mod, "app"))
        self.assertTrue(hasattr(mod, "list_tools"))
        self.assertTrue(hasattr(mod, "call_tool"))

    def test_list_tools_returns_at_least_five_tools(self):
        """list_tools() must return multiple tools covering file ops, lights, and system info."""
        import asyncio
        from scripts.hardware_mcp_server import list_tools
        tools = asyncio.run(list_tools())
        tool_names = [t.name for t in tools]
        # Core capabilities
        self.assertIn("file_read", tool_names)
        self.assertIn("file_write", tool_names)
        self.assertIn("file_list", tool_names)
        self.assertIn("light_list", tool_names)
        self.assertIn("light_control", tool_names)
        self.assertIn("room_control", tool_names)
        self.assertIn("scene_set", tool_names)
        self.assertIn("system_info", tool_names)
        self.assertGreaterEqual(len(tools), 8)

    def test_file_read_schema_requires_path(self):
        """file_read tool must require 'path' parameter."""
        import asyncio
        from scripts.hardware_mcp_server import list_tools
        tools = asyncio.run(list_tools())
        ft = next(t for t in tools if t.name == "file_read")
        self.assertIn("path", ft.inputSchema["properties"])
        self.assertIn("path", ft.inputSchema["required"])

    def test_light_control_schema_requires_name_and_on(self):
        """light_control requires name and on."""
        import asyncio
        from scripts.hardware_mcp_server import list_tools
        tools = asyncio.run(list_tools())
        ft = next(t for t in tools if t.name == "light_control")
        self.assertIn("name", ft.inputSchema["required"])
        self.assertIn("on", ft.inputSchema["required"])

    def test_system_info_is_readonly(self):
        """system_info tool takes no arguments."""
        import asyncio
        from scripts.hardware_mcp_server import list_tools
        tools = asyncio.run(list_tools())
        ft = next(t for t in tools if t.name == "system_info")
        self.assertEqual(ft.inputSchema.get("required", []), [])
        self.assertEqual(len(ft.inputSchema.get("properties", {})), 0)

    def test_file_write_path_allowed_check(self):
        """File write must enforce path allowlist (regression guard)."""
        from scripts.hardware_mcp_server import is_path_allowed, Path
        self.assertTrue(is_path_allowed(Path.home() / "test.txt"))
        self.assertTrue(is_path_allowed(Path("/tmp/test.txt")))
        # Outside allowed dirs should be rejected
        self.assertFalse(is_path_allowed(Path("/etc/passwd")))

    def test_run_openhue_error_handling(self):
        """openhue runner returns structured error when CLI missing."""
        from scripts.hardware_mcp_server import run_openhue
        result = run_openhue(["get", "light"])
        # On a system without openhue, must return success=False with helpful error
        self.assertIn("success", result)
        if not result.get("success"):
            self.assertIn("error", result)
            self.assertIn("openhue", result.get("error", "").lower())


class TestHardwareMCPConfigCompleteness(TestCase):
    """Validate config template matches tool set."""

    def test_config_template_exists(self):
        self.assertTrue((ROOT / "timmy-local" / "hardware_mcp_config.yaml").exists())

    def test_config_lists_all_tools(self):
        with open(ROOT / "timmy-local" / "hardware_mcp_config.yaml") as f:
            content = f.read()
        # All tool names should appear in the tools: section
        for tool in ["file_read", "file_write", "file_list", "light_list",
                     "light_control", "room_control", "scene_set", "system_info"]:
            self.assertIn(tool, content, f"Tool {tool} missing from config tools list")

    def test_config_has_security_guards(self):
        with open(ROOT / "timmy-local" / "hardware_mcp_config.yaml") as f:
            content = f.read()
        self.assertIn("max_consecutive_errors", content)
        self.assertIn("allowed_dirs", content)
        self.assertIn("max_file_size_bytes", content)

    def test_config_has_server_key(self):
        with open(ROOT / "timmy-local" / "hardware_mcp_config.yaml") as f:
            content = f.read()
        self.assertIn("server_key: hardware", content)


if __name__ == "__main__":
    import unittest
    unittest.main()
103 tests/timmy/test_claim_annotator.py (Normal file)
@@ -0,0 +1,103 @@
#!/usr/bin/env python3
"""Tests for claim_annotator.py — verifies source distinction is present."""

import sys
import os
import json

# This file lives in tests/timmy/, so climb two levels to reach the repo's src/
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "..", "src"))

from timmy.claim_annotator import ClaimAnnotator, AnnotatedResponse


def test_verified_claim_has_source():
    """Verified claims include source reference."""
    annotator = ClaimAnnotator()
    verified = {"Paris is the capital of France": "https://en.wikipedia.org/wiki/Paris"}
    response = "Paris is the capital of France. It is a beautiful city."

    result = annotator.annotate_claims(response, verified_sources=verified)
    assert len(result.claims) > 0
    verified_claims = [c for c in result.claims if c.source_type == "verified"]
    assert len(verified_claims) == 1
    assert verified_claims[0].source_ref == "https://en.wikipedia.org/wiki/Paris"
    assert "[V]" in result.rendered_text
    assert "[source:" in result.rendered_text


def test_inferred_claim_has_hedging():
    """Pattern-matched claims use hedging language."""
    annotator = ClaimAnnotator()
    response = "The weather is nice today. It might rain tomorrow."

    result = annotator.annotate_claims(response)
    inferred_claims = [c for c in result.claims if c.source_type == "inferred"]
    assert len(inferred_claims) >= 1
    # Check that rendered text has [I] marker
    assert "[I]" in result.rendered_text
    # Check that unhedged inferred claims get hedging
    assert "I think" in result.rendered_text or "I believe" in result.rendered_text


def test_hedged_claim_not_double_hedged():
    """Claims already with hedging are not double-hedged."""
    annotator = ClaimAnnotator()
    response = "I think the sky is blue. It is a nice day."

    result = annotator.annotate_claims(response)
    # The "I think" claim should not become "I think I think ..."
    assert "I think I think" not in result.rendered_text


def test_rendered_text_distinguishes_types():
    """Rendered text clearly distinguishes verified vs inferred."""
    annotator = ClaimAnnotator()
    verified = {"Earth is round": "https://science.org/earth"}
    response = "Earth is round. Stars are far away."

    result = annotator.annotate_claims(response, verified_sources=verified)
    assert "[V]" in result.rendered_text  # verified marker
    assert "[I]" in result.rendered_text  # inferred marker


def test_to_json_serialization():
    """Annotated response serializes to valid JSON."""
    annotator = ClaimAnnotator()
    response = "Test claim."
    result = annotator.annotate_claims(response)
    json_str = annotator.to_json(result)
    parsed = json.loads(json_str)
    assert "claims" in parsed
    assert "rendered_text" in parsed
    assert parsed["has_unverified"] is True  # inferred claim without hedging


def test_audit_trail_integration():
    """Check that claims are logged with confidence and source type."""
    # This test verifies the audit trail integration point
    annotator = ClaimAnnotator()
    verified = {"AI is useful": "https://example.com/ai"}
    response = "AI is useful. It can help with tasks."

    result = annotator.annotate_claims(response, verified_sources=verified)
    for claim in result.claims:
        assert claim.source_type in ("verified", "inferred")
        assert claim.confidence in ("high", "medium", "low", "unknown")
        if claim.source_type == "verified":
            assert claim.source_ref is not None


if __name__ == "__main__":
    test_verified_claim_has_source()
    print("✓ test_verified_claim_has_source passed")
    test_inferred_claim_has_hedging()
    print("✓ test_inferred_claim_has_hedging passed")
    test_hedged_claim_not_double_hedged()
    print("✓ test_hedged_claim_not_double_hedged passed")
    test_rendered_text_distinguishes_types()
    print("✓ test_rendered_text_distinguishes_types passed")
    test_to_json_serialization()
    print("✓ test_to_json_serialization passed")
    test_audit_trail_integration()
    print("✓ test_audit_trail_integration passed")
    print("\nAll tests passed!")
@@ -1,3 +0,0 @@
# hardware MCP config

Copy `hardware_mcp_config.yaml` to `~/.timmy/hardware/hardware_mcp_config.yaml` to enable runtime tuning.
@@ -1,67 +0,0 @@
# ═══════════════════════════════════════════════════════════════════════
# Local Hardware MCP — Runtime Configuration
# ═══════════════════════════════════════════════════════════════════════
# Edit this file to tune hardware control settings.
# Hermes loads this at session start when the hardware MCP server is enabled.
#
# Location: ~/.timmy/hardware/hardware_mcp_config.yaml
# ═══════════════════════════════════════════════════════════════════════

# ── Server Identity ───────────────────────────────────────────────────
server_key: hardware

# ── Tool Names ────────────────────────────────────────────────────────
# Exact tool names Hermes registers. Update if you rename tools in
# hardware_mcp_server.py.
tools:
  - name: file_read
    hint: "Read a file from an allowed directory (home, /tmp). Max 10 MB."
  - name: file_write
    hint: "Write text content to a file within allowed directories."
  - name: file_list
    hint: "List files and directories in a given folder."
  - name: light_list
    hint: "List all Hue lights, rooms, and scenes from OpenHue."
  - name: light_control
    hint: "Control a specific Hue light: on/off, brightness, color, temperature."
  - name: room_control
    hint: "Control all lights in a room: on/off, brightness."
  - name: scene_set
    hint: "Activate a Hue scene in a room."
  - name: system_info
    hint: "Get safe system information: OS, CPU count, memory usage, disk space."

# ── Security Guards ───────────────────────────────────────────────────
guards:
  # Maximum consecutive tool errors before stopping.
  max_consecutive_errors: 3

  # Max total hardware MCP calls per session (0 = unlimited).
  max_mcp_calls_per_session: 0

  # Allowed directories for file operations (expanded paths).
  allowed_dirs:
    - "~"
    - "/tmp"
    - "/private/tmp"

  # Maximum file size for reads (bytes).
  max_file_size_bytes: 10485760  # 10 MB

# ── OpenHue ───────────────────────────────────────────────────────────
# Path to openhue CLI (auto-detected if in PATH).
openhue_command: "openhue"

# ── Dependencies ──────────────────────────────────────────────────────
# Prerequisites:
#   - OpenHue CLI: brew install openhue/cli/openhue (macOS) or see https://github.com/openhue/openhue-cli
#   - MCP SDK: pip install mcp
#   - For system_info: pip install psutil (optional, for detailed memory/disk metrics)
#
# Config in ~/.hermes/config.yaml:
#   mcp_servers:
#     hardware:
#       command: "python"
#       args: ["/Users/you/path/to/timmy-home/scripts/hardware_mcp_server.py"]
#       env:
#         OPENHUE_BRIDGE_IP: "192.168.1.xx"  # optional, if openhue needs it