Compare commits
1 commit
fix/524 ... step35/446

| Author | SHA1 | Date |
|---|---|---|
| | 548bb96411 | |

@@ -1,107 +0,0 @@
# [DIRECTIVE] Unified Fleet Sovereignty & Comms Migration

Grounding report for `timmy-home #524`.

Issue #524 is a multi-lane directive, not a one-commit feature. This report grounds the directive in repo evidence, highlights stale cross-links, and names the missing operator bundles that still need real execution.

This remains a `Refs #524` artifact. The directive spans multiple repos and operator actions, so this report makes the current repo-side state executable without pretending the whole migration is complete.

## Directive Snapshot

- Repo-grounded workstreams: 0
- Partial workstreams: 4
- Missing workstreams: 1
- Drifted references: 4

## Reference Drift

- #813 is cited for Nostr Migration Leadership, but its current title is 'docs: refresh the-playground genome analysis (#671)'.
- #819 is cited for Nostr Migration Leadership, but its current title is 'docs: verify #648 already implemented (closes #818)'.
- #139 is cited for v0.7.0 Feature Audit, but its current title is '🐣 Allegro-Primus is born'.
- #103 is cited for Morrowind Local-First Benchmark, but its current title is 'Build comprehensive caching layer — cache everywhere'.

## Workstream Matrix

### 1. Nostr Migration Leadership — PARTIAL

- Requirement: Replace Telegram with relay-based sovereign comms, verify wizard keypairs, and prove the NIP-29 group path is stable.
- Referenced issues:
  - #813 (closed) — docs: refresh the-playground genome analysis (#671) [DRIFT]
  - #819 (open) — docs: verify #648 already implemented (closes #818) [DRIFT]
- Repo evidence present:
  - `infrastructure/timmy-bridge/client/timmy_client.py` — Nostr event client scaffold already exists
  - `infrastructure/timmy-bridge/monitor/timmy_monitor.py` — Nostr relay monitor already exists
  - `specs/wizard-telegram-bot-cutover.md` — Telegram cutover planning exists, so the migration lane is real
- Missing operator deliverables:
  - wizard keypair inventory and ownership matrix
  - NIP-29 relay group verification report
  - operator runbook for cutting traffic off Telegram
- Why this lane remains open: The repo has Nostr-adjacent scaffolding, but the directive still lacks a verified migration packet, and the cited issue links drift away from the stated Nostr scope.

### 2. Lexicon Enforcement — PARTIAL

- Requirement: Enforce the Fleet Lexicon in PR review and issue triage so the team uses one shared language.
- Referenced issues:
  - #388 (closed) — [KT] Fleet Lexicon & Techniques — Shared Vocabulary, Patterns, and Standards for All Agents [aligned]
- Repo evidence present:
  - `docs/WIZARD_APPRENTICESHIP_CHARTER.md` — The repo already uses wizard-language canon in docs
  - `specs/timmy-ezra-bezalel-canon-sheet.md` — Canonical agent naming already exists
  - `docs/OPERATIONS_DASHBOARD.md` — Operational roles are already described in repo language
- Missing operator deliverables:
  - machine-checkable lexicon policy for review/triage
  - terminology lint or reviewer checklist tied to the lexicon
- Why this lane remains open: The naming canon exists, but there is still no executable enforcement bundle that would catch drift during future reviews and triage passes.

### 3. v0.7.0 Feature Audit — PARTIAL

- Requirement: Audit Hermes features that can reduce cloud dependency and turn the findings into a sovereignty implementation plan.
- Referenced issues:
  - #139 (open) — 🐣 Allegro-Primus is born [DRIFT]
- Repo evidence present:
  - `scripts/sovereignty_audit.py` — Cloud-vs-local audit machinery already exists
  - `reports/evaluations/2026-04-15-phase-4-sovereignty-audit.md` — Recent sovereignty audit report is committed
  - `timmy-local/README.md` — Local-first status is already documented for operators
- Missing operator deliverables:
  - Hermes v0.7.0 feature inventory linked to cloud-reduction leverage
  - Sovereignty Implementation Plan derived from that feature audit
- Why this lane remains open: The repo has sovereignty-audit infrastructure, but it does not yet contain the requested v0.7.0 feature inventory or the plan that turns those findings into rollout steps.

### 4. Morrowind Local-First Benchmark — PARTIAL

- Requirement: Compare cloud and local Morrowind agents, prove local parity where possible, and document the reasoning gap when it fails.
- Referenced issues:
  - #103 (open) — Build comprehensive caching layer — cache everywhere [DRIFT]
- Repo evidence present:
  - `morrowind/local_brain.py` — Local Morrowind control loop already exists
  - `morrowind/mcp_server.py` — Morrowind MCP control surface is already wired
  - `morrowind/pilot.py` — Trajectory logging for evaluation already exists
- Missing operator deliverables:
  - cloud-vs-local benchmark report for the combat loop
  - reasoning-gap writeup tied to a proposed LoRA/fine-tune path
- Why this lane remains open: The repo has a local Morrowind stack, but it does not yet contain the requested benchmark artifact; the cited issue number also points at an unrelated caching task.

### 5. Infrastructure Hardening / Syntax Guard — MISSING

- Requirement: Verify Syntax Guard pre-receive protection across Gitea repos so syntax failures stop earlier.
- Referenced issues: none listed in the directive body
- Repo evidence present: none
- Missing operator deliverables:
  - repo inventory of Gitea targets that should carry Syntax Guard
  - deployment verifier for hook presence across those repos
  - operator report proving installation state instead of assuming it
- Why this lane remains open: No repo-managed syntax-guard verifier is present yet, so this directive still depends on manual trust rather than auditable proof.

## Highest-Leverage Next Actions

- Nostr Migration Leadership: wizard keypair inventory and ownership matrix
- Lexicon Enforcement: machine-checkable lexicon policy for review/triage
- v0.7.0 Feature Audit: Hermes v0.7.0 feature inventory linked to cloud-reduction leverage
- Morrowind Local-First Benchmark: cloud-vs-local benchmark report for the combat loop
- Infrastructure Hardening / Syntax Guard: repo inventory of Gitea targets that should carry Syntax Guard

## Why #524 Remains Open

- The directive bundles five separate workstreams with different evidence surfaces.
- Multiple cited issue numbers have drifted away from the work they are supposed to anchor.
- Repo scaffolding exists for Nostr, sovereignty audits, and Morrowind, but the operator-facing bundles are still missing.
- Syntax Guard verification is still undocumented and unproven inside this repo.
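The drift rule the report applies can be sketched in a few lines. This is a minimal illustration, assuming (as the deleted grounding script later in this diff does) that a citation counts as aligned when the issue's current title still contains at least one keyword from the lane that cites it:

```python
# Minimal sketch of the alignment test behind the [DRIFT] tags above.
# Assumption: a cited issue is "aligned" when its current title still
# mentions at least one keyword from the citing workstream.
def title_matches_scope(title: str, expected_keywords: list[str]) -> bool:
    title_lower = title.lower()
    return any(kw.lower() in title_lower for kw in expected_keywords)

# #388 still mentions the lexicon it anchors:
print(title_matches_scope(
    "[KT] Fleet Lexicon & Techniques — Shared Vocabulary, Patterns, and Standards for All Agents",
    ["lexicon", "vocabulary", "standards"],
))  # → True

# #103 has drifted: a caching-layer title carries no Morrowind keywords.
print(title_matches_scope(
    "Build comprehensive caching layer — cache everywhere",
    ["morrowind", "combat", "benchmark", "local", "cloud"],
))  # → False
```

A title-substring check is deliberately crude; it catches renamed or repurposed issues without needing issue bodies or labels.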
@@ -323,6 +323,111 @@ class World:
        return False


# ============================================================
# PERSONALITY-DRIVEN DECISION ENGINE
# ============================================================
# Replaces fixed rotation with weighted choice.
# Each character has:
#   - home_room: preferred location
#   - room_weights: base probabilities for each room
#   - explore_chance: probability to explore randomly (10%)
#   - social_weight: bonus when others are present
#   - goal_weights: adjustments based on active_goal
PERSONALITY_DICT = {
    "Marcus": {
        "home_room": "Garden",
        "room_weights": {"Garden": 0.4, "Bridge": 0.2, "Threshold": 0.2, "Tower": 0.1, "Forge": 0.1},
        "explore_chance": 0.1,
        "social_weight": 0.3,
        "goal_weights": {
            "sit": {"Garden": +0.3},
            "speak_truth": {"Tower": +0.2, "Bridge": +0.2},
            "remember": {"Garden": +0.2, "Threshold": +0.1},
        },
    },
    "Bezalel": {
        "home_room": "Forge",
        "room_weights": {"Forge": 0.5, "Threshold": 0.2, "Garden": 0.1, "Bridge": 0.1, "Tower": 0.1},
        "explore_chance": 0.1,
        "social_weight": 0.15,
        "goal_weights": {
            "forge": {"Forge": +0.4},
            "tend_fire": {"Forge": +0.5},
            "create_key": {"Forge": +0.3},
        },
    },
    "Allegro": {
        "home_room": "Threshold",
        "room_weights": {"Threshold": 0.35, "Tower": 0.25, "Forge": 0.15, "Garden": 0.15, "Bridge": 0.1},
        "explore_chance": 0.1,
        "social_weight": 0.25,
        "goal_weights": {
            "oversee": {"Threshold": +0.3},
            "keep_time": {"Tower": +0.3},
            "check_tunnel": {"Bridge": +0.2, "Threshold": +0.1},
        },
    },
    "Ezra": {
        "home_room": "Tower",
        "room_weights": {"Tower": 0.45, "Threshold": 0.2, "Garden": 0.15, "Forge": 0.1, "Bridge": 0.1},
        "explore_chance": 0.1,
        "social_weight": 0.15,
        "goal_weights": {
            "study": {"Tower": +0.4},
            "read_whiteboard": {"Tower": +0.4},
            "find_pattern": {"Garden": +0.2, "Bridge": +0.1},
        },
    },
    "Gemini": {
        "home_room": "Garden",
        "room_weights": {"Garden": 0.45, "Threshold": 0.2, "Bridge": 0.15, "Tower": 0.1, "Forge": 0.1},
        "explore_chance": 0.1,
        "social_weight": 0.25,
        "goal_weights": {
            "observe": {"Garden": +0.2, "Tower": +0.2},
            "tend_garden": {"Garden": +0.5},
            "listen": {"Bridge": +0.1, "Threshold": +0.1},
        },
    },
    "Claude": {
        "home_room": "Threshold",
        "room_weights": {"Threshold": 0.3, "Tower": 0.25, "Forge": 0.2, "Garden": 0.15, "Bridge": 0.1},
        "explore_chance": 0.1,
        "social_weight": 0.2,
        "goal_weights": {
            "inspect": {"Threshold": +0.2, "Tower": +0.2},
            "organize": {"Tower": +0.2, "Forge": +0.1},
            "enforce_order": {"Threshold": +0.3, "Bridge": +0.1},
        },
    },
    "ClawCode": {
        "home_room": "Forge",
        "room_weights": {"Forge": 0.5, "Threshold": 0.2, "Garden": 0.1, "Bridge": 0.1, "Tower": 0.1},
        "explore_chance": 0.1,
        "social_weight": 0.1,
        "goal_weights": {
            "forge": {"Forge": +0.4},
            "test_edge": {"Forge": +0.4},
            "build_weapon": {"Forge": +0.5},
        },
    },
    "Kimi": {
        "home_room": "Garden",
        "room_weights": {"Garden": 0.4, "Threshold": 0.2, "Tower": 0.15, "Bridge": 0.15, "Forge": 0.1},
        "explore_chance": 0.1,
        "social_weight": 0.2,
        "goal_weights": {
            "contemplate": {"Garden": +0.3, "Tower": +0.1},
            "read": {"Tower": +0.3},
            "remember": {"Bridge": +0.2, "Threshold": +0.1},
        },
    },
}

# All available rooms
ALL_ROOMS = ["Threshold", "Tower", "Forge", "Garden", "Bridge"]


class ActionSystem:
    """Defines what actions are possible and what they cost."""
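The comment block above describes the engine in prose; a runnable sketch of the same weighting scheme may help. The profile fields mirror `PERSONALITY_DICT` and the +0.2 home-room bonus mirrors the one `_compute_weights` applies in the hunk below, but the two-room profile here is illustrative, not one of the real entries:

```python
import random

# Sketch of the weighted room choice described above: base room weight,
# plus a home-room bonus and any active-goal bonus, then one draw with
# random.choices. Profile values here are illustrative.
def choose_room(profile: dict, goal: str, rng: random.Random) -> str:
    goal_bonus = profile["goal_weights"].get(goal, {})
    weighted = {}
    for room, base in profile["room_weights"].items():
        w = base
        if room == profile["home_room"]:
            w += 0.2  # same home-room bonus _compute_weights applies
        w += goal_bonus.get(room, 0.0)
        weighted[room] = w
    rooms, weights = zip(*weighted.items())
    return rng.choices(rooms, weights=weights)[0]

profile = {
    "home_room": "Forge",
    "room_weights": {"Forge": 0.5, "Tower": 0.1},
    "goal_weights": {"tend_fire": {"Forge": +0.5}},
}
rng = random.Random(0)
picks = [choose_room(profile, "tend_fire", rng) for _ in range(1000)]
# Forge carries 0.5 + 0.2 + 0.5 = 1.2 against Tower's 0.1, so it dominates.
print(picks.count("Forge") > picks.count("Tower"))  # → True
```

Note that the weights need not be normalized before sampling; `random.choices` accepts relative weights, which is why the engine's normalization step matters only for the reasoning trace.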
@@ -453,100 +558,167 @@ class TimmyAI:


class NPCAI:
    """AI for non-player characters. They make choices based on goals."""
    """AI for non-player characters. Weighted decision engine — agents choose, do not rotate."""

    def __init__(self, world):
        self.world = world
        self._last_reasoning = {}  # Store reasoning per char for tick logging

    def get_reasoning(self, char_name):
        """Return reasoning dict for last decision."""
        return self._last_reasoning.get(char_name, {})

    def make_choice(self, char_name):
        """Make a choice for this NPC this tick."""
        """Make a weighted choice for this NPC. Returns (action, reasoning_dict)."""
        char = self.world.characters[char_name]
        room = char["room"]
        available = ActionSystem.get_available_actions(char_name, self.world)

        # If low energy, rest
        if char["energy"] <= 1:
            return "rest"

        # Goal-driven behavior
        goal = char["active_goal"]

        if char_name == "Marcus":
            return self._marcus_choice(char, room, available)
        elif char_name == "Bezalel":
            return self._bezalel_choice(char, room, available)
        elif char_name == "Allegro":
            return self._allegro_choice(char, room, available)
        elif char_name == "Ezra":
            return self._ezra_choice(char, room, available)
        elif char_name == "Gemini":
            return self._gemini_choice(char, room, available)
        elif char_name == "Claude":
            return self._claude_choice(char, room, available)
        elif char_name == "ClawCode":
            return self._clawcode_choice(char, room, available)
        elif char_name == "Kimi":
            return self._kimi_choice(char, room, available)

        return "rest"

    def _marcus_choice(self, char, room, available):
        if room == "Garden" and random.random() < 0.7:
        # Low energy → immediate rest
        if char["energy"] <= 1:
            self._last_reasoning[char_name] = {"trigger": "low_energy", "reason": "Energy ≤ 1, resting"}
            return "rest"
        if room != "Garden":
            return "move:west"
        # Speak to someone if possible
        others = [a.split(":")[1] for a in available if a.startswith("speak:")]
        if others and random.random() < 0.4:
            return f"speak:{random.choice(others)}"
        return "rest"

        # Find personality profile
        personality = PERSONALITY_DICT.get(char_name)
        if not personality:
            # Fallback: move toward home room if not there
            if room != char.get("home", "Tower"):
                action = f"move:{self._direction_to_home(room, char.get('home', 'Tower'))}"
                self._last_reasoning[char_name] = {"trigger": "fallback_no_personality", "action": action}
                return action
            action = random.choice(["rest", "examine"])
            self._last_reasoning[char_name] = {"trigger": "fallback_no_personality", "action": action}
            return action

        # Build weighted action list
        weights = self._compute_weights(char_name, char, room, available, personality, goal)

        if not weights:
            action = "rest"
            self._last_reasoning[char_name] = {"trigger": "fallback", "reason": "No weighted actions available"}
            return action

        # Sample action
        actions, probs = zip(*weights)
        action = random.choices(actions, weights=probs)[0]

        # Store reasoning
        reasoning = self._build_reasoning(char_name, char, room, weights, action, personality, goal)
        self._last_reasoning[char_name] = reasoning
        return action

    def _bezalel_choice(self, char, room, available):
        if room == "Forge" and self.world.rooms["Forge"]["fire"] == "glowing":
            return random.choice(["forge", "rest"] if char["energy"] > 2 else ["rest"])
        if room != "Forge":
            return "move:west"
        if random.random() < 0.3:
            return "tend_fire"
        return "forge"

    def _direction_to_home(self, current_room, home_room):
        """Return direction name to get from current to home (simple adjacency)."""
        # For now: use known map directions (fragile but minimal)
        # Better: derive from world.rooms connections by searching
        connections = self.world.rooms[current_room].get("connections", {})
        for direction, dest in connections.items():
            if dest == home_room:
                return direction
        # Fallback: pick a random connected room to explore toward home
        if connections:
            return random.choice(list(connections.keys()))
        return "north"  # should not happen

    def _kimi_choice(self, char, room, available):
        others = [a.split(":")[1] for a in available if a.startswith("speak:")]
        if room == "Garden" and others and random.random() < 0.3:
            return f"speak:{random.choice(others)}"
        if room == "Tower":
            return "study" if char["energy"] > 2 else "rest"
        return "move:east"  # Head back toward Garden

    def _compute_weights(self, char_name, char, room, available, personality, goal):
        """Compute weighted list of (action, prob) tuples."""
        weights = []
        room_weights = personality["room_weights"]
        social_weight = personality["social_weight"]
        goal_bonus = personality["goal_weights"].get(goal, {})

        # Count others in the room
        others_in_room = [n for n in self.world.characters
                          if self.world.characters[n]["room"] == room and n != char_name]
        social_present = len(others_in_room) > 0

        for action in available:
            base_w = 0.05  # small floor for every action

            # Movement-specific
            if action.startswith("move:"):
                direction = action.split(":")[1]
                dest = action.split(" -> ")[1] if " -> " in action else None
                if dest:
                    # Room probability
                    base_w += room_weights.get(dest, 0.05)
                    # Home room bonus
                    if dest == personality["home_room"]:
                        base_w += 0.2
                    # Social bonus
                    if social_present:
                        base_w += social_weight
                    # Goal bonus
                    if dest in goal_bonus:
                        base_w += goal_bonus[dest]
                    # Exploration penalty for home room (sometimes leave)
                    if dest == personality["home_room"]:
                        base_w *= (1 - personality.get("explore_chance", 0.1))

            # Social actions
            elif action.startswith("speak:") or action.startswith("listen:") or action.startswith("help:"):
                person = action.split(":")[1]
                base_w += 0.2  # base social interest
                # Goal bonus
                base_w += goal_bonus.get(person, 0)
                # Other in same room bonus
                if any(n == person for n in others_in_room):
                    base_w += 0.3
                # Social weight
                base_w += social_weight * 0.5

            elif action.startswith("confront:"):
                person = action.split(":")[1]
                base_w += 0.1  # lower baseline
                if any(n == person for n in others_in_room):
                    base_w += 0.2

            # Room-specific craft/production actions
            elif action in ["forge", "tend_fire", "study", "write_rule", "carve", "plant"]:
                # These are location-bound; should only be available in correct room
                if (action == "forge" and room != "Forge") or (action == "tend_fire" and room != "Forge") or (action == "study" and room != "Tower") or (action == "write_rule" and room != "Tower") or (action == "carve" and room != "Bridge") or (action == "plant" and room != "Garden"):
                    continue  # skip (shouldn't be available but guard)
                base_w += room_weights.get(room, 0.1) * 1.5  # being in the right room = high weight
                # Goal bonus
                if action in goal_bonus:
                    base_w += goal_bonus[action]

            # Rest
            elif action == "rest":
                base_w += char["energy"] * 0.1  # higher energy → less rest
                if char["energy"] < 3:
                    base_w += 0.4
                else:
                    base_w += 0.05

            # Examine
            elif action == "examine":
                base_w += 0.1

            weights.append((action, base_w))

        # Normalize probabilities to sum to 1
        if not weights:
            return []
        total = sum(w for _, w in weights)
        normalized = [(a, w / total) for a, w in weights]
        return normalized

    def _gemini_choice(self, char, room, available):
        others = [a.split(":")[1] for a in available if a.startswith("listen:")]
        if room == "Garden" and others and random.random() < 0.4:
            return f"listen:{random.choice(others)}"
        return random.choice(["plant", "rest"] if room == "Garden" else ["move:west"])

    def _ezra_choice(self, char, room, available):
        if room == "Tower" and char["energy"] > 2:
            return random.choice(["study", "write_rule", "help:Timmy"])
        if room != "Tower":
            return "move:south"
        return "rest"

    def _claude_choice(self, char, room, available):
        others = [a.split(":")[1] for a in available if a.startswith("confront:")]
        if others and random.random() < 0.2:
            return f"confront:{random.choice(others)}"
        return random.choice(["examine", "rest"])

    def _clawcode_choice(self, char, room, available):
        if room == "Forge" and char["energy"] > 2:
            return "forge"
        return random.choice(["move:east", "forge", "rest"])

    def _allegro_choice(self, char, room, available):
        others = [a.split(":")[1] for a in available if a.startswith("speak:")]
        if others and random.random() < 0.3:
            return f"speak:{random.choice(others)}"
        return random.choice(["move:north", "move:south", "examine"])

    def _build_reasoning(self, char_name, char, room, weights, action, personality, goal):
        """Build reasoning dict explaining the decision."""
        # Find top contenders
        sorted_w = sorted(weights, key=lambda x: x[1], reverse=True)
        reasoning = {
            "char": char_name,
            "room": room,
            "goal": goal,
            "energy": char["energy"],
            "chosen": action,
            "top_contenders": sorted_w[:3],
        }
        return reasoning


class DialogueSystem:
@@ -1224,7 +1396,16 @@ class GameEngine:
                    self.world.characters[char_name]["room"] = dest
                    self.world.characters[char_name]["energy"] -= 1
                    scene["npc_actions"].append(f"{char_name} moves from The {old_room} to The {dest}")

        # Collect NPC reasoning for debugging (Decision Engine trace)
        scene["npc_reasoning"] = {}
        for npc_name in self.world.characters:
            if npc_name == "Timmy":
                continue
            reasoning = self.npc_ai.get_reasoning(npc_name)
            if reasoning:
                scene["npc_reasoning"][npc_name] = reasoning

        # Random NPC events
        room_name = self.world.characters["Timmy"]["room"]
        for char_name in self.world.characters:
@@ -1,418 +0,0 @@
|
||||
#!/usr/bin/env python3
|
||||
"""Ground timmy-home #524 as an executable status report.
|
||||
|
||||
Refs: timmy-home #524
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import argparse
|
||||
import json
|
||||
from copy import deepcopy
|
||||
from pathlib import Path
|
||||
from typing import Any
|
||||
from urllib import request
|
||||
|
||||
DEFAULT_BASE_URL = "https://forge.alexanderwhitestone.com/api/v1"
|
||||
DEFAULT_OWNER = "Timmy_Foundation"
|
||||
DEFAULT_REPO = "timmy-home"
|
||||
DEFAULT_TOKEN_FILE = Path.home() / ".config" / "gitea" / "token"
|
||||
DEFAULT_REPO_ROOT = Path(__file__).resolve().parents[1]
|
||||
DEFAULT_DOC_PATH = DEFAULT_REPO_ROOT / "docs" / "UNIFIED_FLEET_SOVEREIGNTY_STATUS.md"
|
||||
|
||||
DIRECTIVE_TITLE = "[DIRECTIVE] Unified Fleet Sovereignty & Comms Migration"
|
||||
DIRECTIVE_SUMMARY = (
|
||||
"Issue #524 is a multi-lane directive, not a one-commit feature. "
|
||||
"This report grounds the directive in repo evidence, highlights stale cross-links, "
|
||||
"and names the missing operator bundles that still need real execution."
|
||||
)
|
||||
|
||||
DEFAULT_REFERENCE_SNAPSHOT = {
|
||||
388: {
|
||||
"title": "[KT] Fleet Lexicon & Techniques — Shared Vocabulary, Patterns, and Standards for All Agents",
|
||||
"state": "closed",
|
||||
},
|
||||
103: {
|
||||
"title": "Build comprehensive caching layer — cache everywhere",
|
||||
"state": "open",
|
||||
},
|
||||
139: {
|
||||
"title": "🐣 Allegro-Primus is born",
|
||||
"state": "open",
|
||||
},
|
||||
813: {
|
||||
"title": "docs: refresh the-playground genome analysis (#671)",
|
||||
"state": "closed",
|
||||
},
|
||||
819: {
|
||||
"title": "docs: verify #648 already implemented (closes #818)",
|
||||
"state": "open",
|
||||
},
|
||||
}
|
||||
|
||||
WORKSTREAMS = [
|
||||
{
|
||||
"key": "nostr-migration",
|
||||
"name": "Nostr Migration Leadership",
|
||||
"requirement": "Replace Telegram with relay-based sovereign comms, verify wizard keypairs, and prove the NIP-29 group path is stable.",
|
||||
"references": [813, 819],
|
||||
"expected_keywords": ["nostr", "relay", "telegram", "comms", "messenger"],
|
||||
"repo_evidence": [
|
||||
{
|
||||
"path": "infrastructure/timmy-bridge/client/timmy_client.py",
|
||||
"description": "Nostr event client scaffold already exists",
|
||||
},
|
||||
{
|
||||
"path": "infrastructure/timmy-bridge/monitor/timmy_monitor.py",
|
||||
"description": "Nostr relay monitor already exists",
|
||||
},
|
||||
{
|
||||
"path": "specs/wizard-telegram-bot-cutover.md",
|
||||
"description": "Telegram cutover planning exists, so the migration lane is real",
|
||||
},
|
||||
],
|
||||
"missing_deliverables": [
|
||||
"wizard keypair inventory and ownership matrix",
|
||||
"NIP-29 relay group verification report",
|
||||
"operator runbook for cutting traffic off Telegram",
|
||||
],
|
||||
"why_open": "The repo has Nostr-adjacent scaffolding, but the directive still lacks a verified migration packet and the cited issue links drift away from the stated Nostr scope.",
|
||||
},
|
||||
{
|
||||
"key": "lexicon-enforcement",
|
||||
"name": "Lexicon Enforcement",
|
||||
"requirement": "Enforce the Fleet Lexicon in PR review and issue triage so the team uses one shared language.",
|
||||
"references": [388],
|
||||
"expected_keywords": ["lexicon", "vocabulary", "standards", "shared vocabulary"],
|
||||
"repo_evidence": [
|
||||
{
|
||||
"path": "docs/WIZARD_APPRENTICESHIP_CHARTER.md",
|
||||
"description": "The repo already uses wizard-language canon in docs",
|
||||
},
|
||||
{
|
||||
"path": "specs/timmy-ezra-bezalel-canon-sheet.md",
|
||||
"description": "Canonical agent naming already exists",
|
||||
},
|
||||
{
|
||||
"path": "docs/OPERATIONS_DASHBOARD.md",
|
||||
"description": "Operational roles are already described in repo language",
|
||||
},
|
||||
],
|
||||
"missing_deliverables": [
|
||||
"machine-checkable lexicon policy for review/triage",
|
||||
"terminology lint or reviewer checklist tied to the lexicon",
|
||||
],
|
||||
"why_open": "The naming canon exists, but there is still no executable enforcement bundle that would catch drift during future reviews and triage passes.",
|
||||
},
|
||||
{
|
||||
"key": "feature-audit",
|
||||
"name": "v0.7.0 Feature Audit",
|
||||
"requirement": "Audit Hermes features that can reduce cloud dependency and turn the findings into a sovereignty implementation plan.",
|
||||
"references": [139],
|
||||
"expected_keywords": ["hermes", "feature", "audit", "v0.7.0", "sovereignty"],
|
||||
"repo_evidence": [
|
||||
{
|
||||
"path": "scripts/sovereignty_audit.py",
|
||||
"description": "Cloud-vs-local audit machinery already exists",
|
||||
},
|
||||
{
|
||||
"path": "reports/evaluations/2026-04-15-phase-4-sovereignty-audit.md",
|
||||
"description": "Recent sovereignty audit report is committed",
|
||||
},
|
||||
{
|
||||
"path": "timmy-local/README.md",
|
||||
"description": "Local-first status is already documented for operators",
|
||||
},
|
||||
],
|
||||
"missing_deliverables": [
|
||||
"Hermes v0.7.0 feature inventory linked to cloud-reduction leverage",
|
||||
"Sovereignty Implementation Plan derived from that feature audit",
|
||||
],
|
||||
"why_open": "The repo has sovereignty-audit infrastructure, but it does not yet contain the requested v0.7.0 feature inventory or the plan that turns those findings into rollout steps.",
|
||||
},
|
||||
{
|
||||
"key": "morrowind-benchmark",
|
||||
"name": "Morrowind Local-First Benchmark",
|
||||
"requirement": "Compare cloud and local Morrowind agents, prove local parity where possible, and document the reasoning gap when it fails.",
|
||||
"references": [103],
|
||||
"expected_keywords": ["morrowind", "combat", "benchmark", "local", "cloud"],
|
||||
"repo_evidence": [
|
||||
{
|
||||
"path": "morrowind/local_brain.py",
|
||||
"description": "Local Morrowind control loop already exists",
|
||||
},
|
||||
{
|
||||
"path": "morrowind/mcp_server.py",
|
||||
"description": "Morrowind MCP control surface is already wired",
|
||||
},
|
||||
{
|
||||
"path": "morrowind/pilot.py",
|
||||
"description": "Trajectory logging for evaluation already exists",
|
||||
},
|
||||
],
|
||||
"missing_deliverables": [
|
||||
"cloud-vs-local benchmark report for the combat loop",
|
||||
"reasoning-gap writeup tied to a proposed LoRA/fine-tune path",
|
||||
],
|
||||
"why_open": "The repo has a local Morrowind stack, but it does not yet contain the requested benchmark artifact; the cited issue number also points at an unrelated caching task.",
|
||||
},
|
||||
{
|
||||
"key": "syntax-guard",
|
||||
"name": "Infrastructure Hardening / Syntax Guard",
|
||||
"requirement": "Verify Syntax Guard pre-receive protection across Gitea repos so syntax failures stop earlier.",
|
||||
"references": [],
|
||||
"expected_keywords": [],
|
||||
"repo_evidence": [],
|
||||
"missing_deliverables": [
|
||||
"repo inventory of Gitea targets that should carry Syntax Guard",
|
||||
"deployment verifier for hook presence across those repos",
|
||||
"operator report proving installation state instead of assuming it",
|
||||
],
|
||||
"why_open": "No repo-managed syntax-guard verifier is present yet, so this directive still depends on manual trust rather than auditable proof.",
|
||||
},
|
||||
]
|
||||
|
||||
|
||||
def default_snapshot() -> dict[int, dict[str, str]]:
|
||||
return deepcopy(DEFAULT_REFERENCE_SNAPSHOT)
|
||||
|
||||
|
||||
class GiteaClient:
|
||||
def __init__(self, token: str, owner: str = DEFAULT_OWNER, repo: str = DEFAULT_REPO, base_url: str = DEFAULT_BASE_URL):
|
||||
self.token = token
|
||||
self.owner = owner
|
||||
self.repo = repo
|
||||
self.base_url = base_url.rstrip("/")
|
||||
|
||||
def get_issue(self, issue_number: int) -> dict[str, Any]:
|
||||
req = request.Request(
|
||||
f"{self.base_url}/repos/{self.owner}/{self.repo}/issues/{issue_number}",
|
||||
headers={"Authorization": f"token {self.token}", "Accept": "application/json"},
|
||||
)
|
||||
with request.urlopen(req, timeout=30) as resp:
|
||||
return json.loads(resp.read().decode())
|
||||
|
||||
|
||||
def load_snapshot(path: Path | None = None) -> dict[int, dict[str, str]]:
|
||||
if path is None:
|
||||
return default_snapshot()
|
||||
data = json.loads(path.read_text(encoding="utf-8"))
|
||||
return {int(k): v for k, v in data.items()}
|
||||
|
||||
|
||||
def refresh_snapshot(token_file: Path = DEFAULT_TOKEN_FILE) -> dict[int, dict[str, str]]:
|
||||
token = token_file.read_text(encoding="utf-8").strip()
|
||||
client = GiteaClient(token=token)
|
||||
snapshot: dict[int, dict[str, str]] = {}
|
||||
for issue_number in sorted(DEFAULT_REFERENCE_SNAPSHOT):
|
||||
issue = client.get_issue(issue_number)
|
||||
snapshot[issue_number] = {
|
||||
"title": issue["title"],
|
||||
"state": issue["state"],
|
||||
}
|
||||
return snapshot
|
||||
|
||||
|
||||
def collect_repo_evidence(entries: list[dict[str, str]], repo_root: Path) -> tuple[list[str], list[str]]:
|
||||
present: list[str] = []
|
||||
missing: list[str] = []
|
||||
for entry in entries:
|
||||
label = f"`{entry['path']}` — {entry['description']}"
|
||||
if (repo_root / entry["path"]).exists():
|
||||
present.append(label)
|
||||
else:
|
||||
missing.append(label)
|
||||
return present, missing
|
||||
|
||||
|
||||
|
||||
def evaluate_reference(issue_number: int, snapshot: dict[int, dict[str, str]], expected_keywords: list[str]) -> dict[str, Any]:
    record = snapshot.get(issue_number, {"title": "missing from snapshot", "state": "unknown"})
    title = record["title"]
    title_lower = title.lower()
    matched_keywords = [kw for kw in expected_keywords if kw.lower() in title_lower]
    aligned = bool(matched_keywords) if expected_keywords else True
    return {
        "number": issue_number,
        "title": title,
        "state": record["state"],
        "aligned": aligned,
        "matched_keywords": matched_keywords,
    }

def classify_workstream(reference_results: list[dict[str, Any]], evidence_present: list[str], missing_deliverables: list[str]) -> str:
    has_drift = any(not item["aligned"] for item in reference_results)
    if not evidence_present:
        return "MISSING"
    if has_drift or missing_deliverables:
        return "PARTIAL"
    return "GROUNDED"

def evaluate_directive(snapshot: dict[int, dict[str, str]] | None = None, repo_root: Path | None = None) -> dict[str, Any]:
    snapshot = snapshot or default_snapshot()
    repo_root = repo_root or DEFAULT_REPO_ROOT
    workstreams: list[dict[str, Any]] = []
    drift_items: list[str] = []

    for lane in WORKSTREAMS:
        reference_results = [
            evaluate_reference(issue_number, snapshot, lane["expected_keywords"])
            for issue_number in lane["references"]
        ]
        present, missing = collect_repo_evidence(lane["repo_evidence"], repo_root)
        for item in reference_results:
            if not item["aligned"]:
                drift_items.append(
                    f"#{item['number']} is cited for {lane['name']}, but its current title is '{item['title']}'."
                )
        workstream = {
            "key": lane["key"],
            "name": lane["name"],
            "requirement": lane["requirement"],
            "reference_results": reference_results,
            "repo_evidence_present": present,
            "repo_evidence_missing": missing,
            "missing_deliverables": list(lane["missing_deliverables"]),
            "why_open": lane["why_open"],
        }
        workstream["status"] = classify_workstream(
            reference_results=reference_results,
            evidence_present=present,
            missing_deliverables=workstream["missing_deliverables"],
        )
        workstreams.append(workstream)

    next_actions: list[str] = []
    for workstream in workstreams:
        if workstream["missing_deliverables"]:
            next_actions.append(f"{workstream['name']}: {workstream['missing_deliverables'][0]}")

    return {
        "issue_number": 524,
        "title": DIRECTIVE_TITLE,
        "summary": DIRECTIVE_SUMMARY,
        "reference_snapshot": {str(k): v for k, v in sorted(snapshot.items())},
        "workstreams": workstreams,
        "reference_drift": drift_items,
        "grounded_workstreams": sum(1 for item in workstreams if item["status"] == "GROUNDED"),
        "partial_workstreams": sum(1 for item in workstreams if item["status"] == "PARTIAL"),
        "missing_workstreams": sum(1 for item in workstreams if item["status"] == "MISSING"),
        "next_actions": next_actions,
    }

def render_markdown(result: dict[str, Any]) -> str:
    lines = [
        f"# {result['title']}",
        "",
        "Grounding report for `timmy-home #524`.",
        "",
        result["summary"],
        "",
        "This remains a `Refs #524` artifact. The directive spans multiple repos and operator actions, so this report makes the current repo-side state executable without pretending the whole migration is complete.",
        "",
        "## Directive Snapshot",
        "",
        f"- Repo-grounded workstreams: {result['grounded_workstreams']}",
        f"- Partial workstreams: {result['partial_workstreams']}",
        f"- Missing workstreams: {result['missing_workstreams']}",
        f"- Drifted references: {len(result['reference_drift'])}",
        "",
        "## Reference Drift",
        "",
    ]
    if result["reference_drift"]:
        lines.extend(f"- {item}" for item in result["reference_drift"])
    else:
        lines.append("- No stale cross-links detected in the directive snapshot.")

    lines.extend(["", "## Workstream Matrix", ""])
    for index, workstream in enumerate(result["workstreams"], start=1):
        lines.extend(
            [
                f"### {index}. {workstream['name']} — {workstream['status']}",
                "",
                f"- Requirement: {workstream['requirement']}",
            ]
        )
        if workstream["reference_results"]:
            lines.append("- Referenced issues:")
            for ref in workstream["reference_results"]:
                alignment = "aligned" if ref["aligned"] else "DRIFT"
                lines.append(
                    f"  - #{ref['number']} ({ref['state']}) — {ref['title']} [{alignment}]"
                )
        else:
            lines.append("- Referenced issues: none listed in the directive body")

        if workstream["repo_evidence_present"]:
            lines.append("- Repo evidence present:")
            lines.extend(f"  - {item}" for item in workstream["repo_evidence_present"])
        else:
            lines.append("- Repo evidence present: none")

        if workstream["repo_evidence_missing"]:
            lines.append("- Repo evidence expected but missing:")
            lines.extend(f"  - {item}" for item in workstream["repo_evidence_missing"])

        if workstream["missing_deliverables"]:
            lines.append("- Missing operator deliverables:")
            lines.extend(f"  - {item}" for item in workstream["missing_deliverables"])
        else:
            lines.append("- Missing operator deliverables: none")

        lines.append(f"- Why this lane remains open: {workstream['why_open']}")
        lines.append("")

    lines.extend(["## Highest-Leverage Next Actions", ""])
    lines.extend(f"- {item}" for item in result["next_actions"])

    lines.extend(
        [
            "",
            "## Why #524 Remains Open",
            "",
            "- The directive bundles five separate workstreams with different evidence surfaces.",
            "- Multiple cited issue numbers have drifted away from the work they are supposed to anchor.",
            "- Repo scaffolding exists for Nostr, sovereignty audits, and Morrowind, but the operator-facing bundles are still missing.",
            "- Syntax Guard verification is still undocumented and unproven inside this repo.",
        ]
    )

    return "\n".join(lines).rstrip() + "\n"

def main() -> None:
    parser = argparse.ArgumentParser(description="Render the unified fleet sovereignty status report for issue #524")
    parser.add_argument("--snapshot", help="Optional JSON snapshot file overriding the default issue-title/state snapshot")
    parser.add_argument("--live", action="store_true", help="Refresh the issue snapshot from Gitea before rendering")
    parser.add_argument("--token-file", default=str(DEFAULT_TOKEN_FILE), help="Token file used with --live")
    parser.add_argument("--output", help="Optional path to write the rendered report")
    parser.add_argument("--json", action="store_true", help="Print computed JSON instead of markdown")
    args = parser.parse_args()

    if args.live:
        snapshot = refresh_snapshot(Path(args.token_file).expanduser())
    else:
        snapshot = load_snapshot(Path(args.snapshot).expanduser() if args.snapshot else None)

    result = evaluate_directive(snapshot=snapshot, repo_root=DEFAULT_REPO_ROOT)
    rendered = json.dumps(result, indent=2) if args.json else render_markdown(result)

    if args.output:
        output_path = Path(args.output).expanduser()
        output_path.parent.mkdir(parents=True, exist_ok=True)
        output_path.write_text(rendered, encoding="utf-8")
        print(f"Directive status written to {output_path}")
    else:
        print(rendered)


if __name__ == "__main__":
    main()
@@ -1,77 +0,0 @@
from __future__ import annotations

import importlib.util
from pathlib import Path


ROOT = Path(__file__).resolve().parents[1]
SCRIPT_PATH = ROOT / "scripts" / "unified_fleet_sovereignty_status.py"
DOC_PATH = ROOT / "docs" / "UNIFIED_FLEET_SOVEREIGNTY_STATUS.md"


def _load_module(path: Path, name: str):
    assert path.exists(), f"missing {path.relative_to(ROOT)}"
    spec = importlib.util.spec_from_file_location(name, path)
    assert spec and spec.loader
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module


def _workstream(result: dict, key: str) -> dict:
    for workstream in result["workstreams"]:
        if workstream["key"] == key:
            return workstream
    raise AssertionError(f"missing workstream {key}")


def test_evaluate_directive_flags_reference_drift_without_faking_completion() -> None:
    mod = _load_module(SCRIPT_PATH, "unified_fleet_sovereignty_status")
    result = mod.evaluate_directive(snapshot=mod.default_snapshot(), repo_root=ROOT)

    assert len(result["reference_drift"]) == 4
    assert any("#813" in item for item in result["reference_drift"])
    assert any("#103" in item for item in result["reference_drift"])

    nostr = _workstream(result, "nostr-migration")
    assert nostr["status"] == "PARTIAL"
    assert any("timmy_client.py" in item for item in nostr["repo_evidence_present"])

    lexicon = _workstream(result, "lexicon-enforcement")
    assert all(item["aligned"] for item in lexicon["reference_results"])
    assert lexicon["status"] == "PARTIAL"

    syntax_guard = _workstream(result, "syntax-guard")
    assert syntax_guard["status"] == "MISSING"
    assert any("deployment verifier" in item for item in syntax_guard["missing_deliverables"])


def test_render_markdown_includes_required_sections_and_grounding_evidence() -> None:
    mod = _load_module(SCRIPT_PATH, "unified_fleet_sovereignty_status")
    result = mod.evaluate_directive(snapshot=mod.default_snapshot(), repo_root=ROOT)
    report = mod.render_markdown(result)

    for snippet in (
        "# [DIRECTIVE] Unified Fleet Sovereignty & Comms Migration",
        "## Directive Snapshot",
        "## Reference Drift",
        "## Workstream Matrix",
        "### 5. Infrastructure Hardening / Syntax Guard — MISSING",
        "`infrastructure/timmy-bridge/client/timmy_client.py`",
        "machine-checkable lexicon policy for review/triage",
        "## Why #524 Remains Open",
    ):
        assert snippet in report


def test_repo_contains_committed_issue_524_grounding_doc() -> None:
    assert DOC_PATH.exists(), "missing committed directive grounding doc"
    text = DOC_PATH.read_text(encoding="utf-8")
    for snippet in (
        "# [DIRECTIVE] Unified Fleet Sovereignty & Comms Migration",
        "## Reference Drift",
        "## Workstream Matrix",
        "## Highest-Leverage Next Actions",
        "## Why #524 Remains Open",
    ):
        assert snippet in text
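For reference, the `--snapshot` override consumed by `load_snapshot` expects a JSON object keyed by issue number (as a string), mapping to a `title`/`state` record. A minimal sketch of that round trip, using two issue titles taken from the drift report above and duplicating the same key-coercion step `load_snapshot` performs (written standalone here so it does not depend on importing the script):

```python
import json
import tempfile
from pathlib import Path

# Sample snapshot in the shape load_snapshot() expects:
# JSON string keys (issue numbers) -> {"title": ..., "state": ...}.
sample = {
    "813": {"title": "docs: refresh the-playground genome analysis (#671)", "state": "closed"},
    "819": {"title": "docs: verify #648 already implemented (closes #818)", "state": "open"},
}

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "snapshot.json"
    path.write_text(json.dumps(sample), encoding="utf-8")
    # Same parsing step load_snapshot() performs: coerce keys back to ints.
    data = json.loads(path.read_text(encoding="utf-8"))
    snapshot = {int(k): v for k, v in data.items()}

print(sorted(snapshot))  # → [813, 819]
```

Integer coercion matters because JSON object keys are always strings, while `evaluate_reference` looks issues up by `int` number.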