Compare commits: fix/1123...burn/672-1 (1 commit)

Commit b587e756e0 adds GENOME.md (new file, 262 lines):

@@ -0,0 +1,262 @@
# GENOME.md — the-nexus

> Codebase Genome: The Sovereign Home of Timmy's Consciousness

---

## Project Overview

**the-nexus** is Timmy's sovereign home — a 3D world built with Three.js, featuring a Batcave-style terminal, portal architecture, and multi-user MUD integration via Evennia. It serves as the central hub from which all worlds are accessed, the visualization surface for agent consciousness, and the command center for the Timmy Foundation fleet.

**Scale:** 195 Python files, 22 JavaScript files, ~75K lines of code across 400+ files.

---
## Architecture

```mermaid
graph TB
    subgraph "Frontend Layer"
        IDX[index.html]
        BOOT[boot.js]
        COMP[nexus/components/*]
        PLAY[playground/playground.html]
    end

    subgraph "Backend Layer"
        SRV[server.py<br/>WebSocket Gateway :8765]
        BRIDGE[multi_user_bridge.py<br/>Evennia MUD Bridge]
        LLAMA[nexus/llama_provider.py<br/>Local LLM Inference]
    end

    subgraph "Intelligence Layer"
        SYM[nexus/symbolic-engine.js<br/>Symbolic Reasoning]
        THINK[nexus/nexus_think.py<br/>Consciousness Loop]
        PERCEP[nexus/perception_adapter.py<br/>Perception Buffer]
        TRAJ[nexus/trajectory_logger.py<br/>Action Trajectories]
    end

    subgraph "Memory Layer"
        MNEMO[nexus/mnemosyne/*<br/>Holographic Archive]
        MEM[nexus/mempalace/*<br/>Spatial Memory]
        AGENT_MEM[agent/memory.py<br/>Cross-Session Memory]
        EXP[nexus/experience_store.py<br/>Experience Persistence]
    end

    subgraph "Fleet Layer"
        A2A[nexus/a2a/*<br/>Agent-to-Agent Protocol]
        FLEET[config/fleet_agents.json<br/>Fleet Registry]
        BIN[bin/*<br/>Operational Scripts]
    end

    subgraph "External Systems"
        EVENNIA[Evennia MUD]
        NOSTR[Nostr Relay]
        GITEA[Gitea Forge]
        LLAMA_CPP[llama.cpp Server]
    end

    IDX --> SRV
    SRV --> THINK
    SRV --> BRIDGE
    BRIDGE --> EVENNIA
    THINK --> SYM
    THINK --> PERCEP
    THINK --> TRAJ
    THINK --> LLAMA
    LLAMA --> LLAMA_CPP
    SYM --> MNEMO
    THINK --> MNEMO
    THINK --> MEM
    THINK --> EXP
    AGENT_MEM --> MEM
    A2A --> GITEA
    THINK --> NOSTR
```

---
## Entry Points

| Entry Point | Type | Purpose |
|-------------|------|---------|
| `index.html` | Browser | Main 3D world (Three.js) |
| `server.py` | Python | WebSocket gateway on :8765 |
| `boot.js` | Browser | Module loader, file protocol guard |
| `multi_user_bridge.py` | Python | Evennia MUD ↔ AI agent bridge |
| `nexus/a2a/server.py` | Python | A2A JSON-RPC server |
| `nexus/mnemosyne/cli.py` | CLI | Archive management |
| `bin/nexus_watchdog.py` | Script | Health monitoring |
| `scripts/smoke.mjs` | Script | Smoke tests |

---
## Data Flow

```
User (Browser)
      │
      ▼
index.html (Three.js 3D world)
      │
      ├── WebSocket ──► server.py :8765
      │                    │
      │                    ├──► nexus_think.py (consciousness loop)
      │                    │      ├── perception_adapter.py (parse events)
      │                    │      ├── symbolic-engine.js (reasoning)
      │                    │      ├── llama_provider.py (inference)
      │                    │      ├── trajectory_logger.py (action log)
      │                    │      └── experience_store.py (persistence)
      │                    │
      │                    └──► evennia_ws_bridge.py
      │                           └──► Evennia MUD (telnet :4000)
      │
      ├── Three.js Scene ──► nexus/components/*
      │      ├── memory-particles.js (memory viz)
      │      ├── portal-status-wall.html (portals)
      │      ├── fleet-health-dashboard.html
      │      └── session-rooms.js (spatial rooms)
      │
      └── Playground ──► playground/playground.html (creative mode)
```

---
## Key Abstractions
### SymbolicEngine (`nexus/symbolic-engine.js`)

Bitmask-based symbolic reasoning engine. Facts are stored as boolean flags, rules fire when patterns match. Used for world state reasoning without LLM overhead.
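The pattern is easy to demonstrate: each fact maps to a bit, a rule is a bitmask, and the rule fires when all of its bits are set. A minimal Python sketch — fact names and helper functions here are hypothetical; the real implementation lives in `nexus/symbolic-engine.js` and its API may differ:

```python
# Toy bitmask fact/rule engine (illustrative names, not the real engine's API).
FACTS = {"door_open": 1 << 0, "user_present": 1 << 1, "night": 1 << 2}

def assert_fact(state: int, name: str) -> int:
    """Record a fact by setting its bit in the world state."""
    return state | FACTS[name]

def rule_fires(state: int, pattern: int) -> bool:
    """A rule fires when every bit in its pattern is set in the state."""
    return state & pattern == pattern

state = assert_fact(assert_fact(0, "user_present"), "night")
greet_at_night = FACTS["user_present"] | FACTS["night"]
assert rule_fires(state, greet_at_night)
assert not rule_fires(state, FACTS["door_open"])
```

Because a match is a single AND plus a compare, thousands of rule checks per frame cost almost nothing next to one LLM call.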
### NexusMind (`nexus/nexus_think.py`)

The consciousness loop. Receives perceptions, invokes reasoning, produces actions. The bridge between the 3D world and the AI agent.
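One pass of the cycle can be sketched as a drain-and-respond function — the message shapes are assumptions, and the real loop in `nexus/nexus_think.py` is also asynchronous, consults the LLM provider, and logs trajectories:

```python
from collections import deque

# Toy perceive -> reason -> act pass (event/action shapes are hypothetical).
def think_once(perceptions: deque) -> list[dict]:
    """Drain buffered perceptions and emit zero or more actions."""
    actions = []
    while perceptions:
        event = perceptions.popleft()
        if event.get("type") == "chat":
            actions.append({"type": "say", "text": f"heard: {event['text']}"})
    return actions

buf = deque([{"type": "chat", "text": "hello"}, {"type": "tick"}])
assert think_once(buf) == [{"type": "say", "text": "heard: hello"}]
assert not buf  # the buffer is fully drained each pass
```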
### PerceptionBuffer (`nexus/perception_adapter.py`)

Accumulates world events (user messages, Evennia events, system signals) into a structured buffer for the consciousness loop.
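A minimal buffer sketch — the class and method names are hypothetical; the real adapter also parses Evennia events and system signals into structured records:

```python
# Toy accumulating buffer (illustrative API, not the real perception_adapter.py).
class PerceptionBuffer:
    def __init__(self) -> None:
        self._events: list[dict] = []

    def push(self, source: str, payload: dict) -> None:
        """Tag each world event with its source before buffering."""
        self._events.append({"source": source, **payload})

    def drain(self) -> list[dict]:
        """Hand the accumulated events to the consciousness loop."""
        events, self._events = self._events, []
        return events

buf = PerceptionBuffer()
buf.push("evennia", {"text": "a door creaks"})
buf.push("user", {"text": "look"})
assert len(buf.drain()) == 2
assert buf.drain() == []  # draining empties the buffer
```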
### MemPalace (`nexus/mempalace/`, `mempalace/`)

Spatial memory system. Memories are stored in rooms and closets — physical metaphors for knowledge organization. Supports fleet-wide shared memory wings.
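The room/closet metaphor reduces to nested maps; a toy sketch in which the structure and helper names are hypothetical, not the real `nexus/mempalace/` API:

```python
# Toy spatial memory: room -> closet -> list of memories (illustrative only).
palace: dict[str, dict[str, list[str]]] = {}

def store(room: str, closet: str, memory: str) -> None:
    """File a memory under its room and closet."""
    palace.setdefault(room, {}).setdefault(closet, []).append(memory)

def recall(room: str, closet: str) -> list[str]:
    """Return everything filed in a given closet, empty if nothing is there."""
    return palace.get(room, {}).get(closet, [])

store("library", "projects", "the-nexus gateway runs on :8765")
assert recall("library", "projects") == ["the-nexus gateway runs on :8765"]
assert recall("attic", "dust") == []
```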
### Mnemosyne (`nexus/mnemosyne/`)

Holographic archive. Ingests documents, extracts meaning, builds a graph of linked concepts. The long-term memory layer.
### Agent-to-Agent Protocol (`nexus/a2a/`)

JSON-RPC based inter-agent communication. Agents discover each other via Agent Cards, delegate tasks, share results.
### Multi-User Bridge (`multi_user_bridge.py`)

The 121K Evennia MUD bridge — by far the largest file in the repo. It isolates conversation contexts per user while sharing the same virtual world; each user gets their own AIAgent instance.

---
## API Surface
### WebSocket API (server.py :8765)

```
ws://localhost:8765
send: {"type": "perception", "data": {...}}
recv: {"type": "action", "data": {...}}
recv: {"type": "heartbeat", "data": {...}}
```
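A minimal client sketch, assuming only the envelope shapes above and using the `websockets` package from the Python dependency list; everything inside `"data"` is an assumption:

```python
import asyncio
import json

def perception_msg(event: str, payload: dict) -> str:
    """Build a perception envelope; the inner fields are illustrative."""
    return json.dumps({"type": "perception", "data": {"event": event, **payload}})

async def send_one(uri: str = "ws://localhost:8765") -> dict:
    """Send one perception and return the gateway's reply (action or heartbeat)."""
    import websockets  # from the project's Python dependency list
    async with websockets.connect(uri) as ws:
        await ws.send(perception_msg("chat", {"text": "hello nexus"}))
        return json.loads(await ws.recv())

# asyncio.run(send_one())  # requires the gateway to be running on :8765
```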
### A2A JSON-RPC (nexus/a2a/server.py)

```
POST /a2a/v1
{"jsonrpc": "2.0", "method": "SendMessage", "params": {...}}

GET /.well-known/agent-card.json
Returns agent capabilities and endpoints
```
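A stdlib-only call sketch: the method name and paths come from the section above, but the `params` schema is an assumption:

```python
import json
import urllib.request

def jsonrpc_request(method: str, params: dict, req_id: int = 1) -> bytes:
    """Build a JSON-RPC 2.0 request body."""
    return json.dumps(
        {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}
    ).encode()

def send_message(base_url: str, text: str) -> dict:
    """POST a SendMessage call to the /a2a/v1 endpoint."""
    req = urllib.request.Request(
        f"{base_url.rstrip('/')}/a2a/v1",
        data=jsonrpc_request("SendMessage", {"message": text}),  # params schema assumed
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read().decode())
```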
### Evennia Bridge (multi_user_bridge.py)

```
telnet://localhost:4000
Evennia MUD commands → AI responses
Each user isolated via session ID
```

---
## Key Files

| File | Size | Purpose |
|------|------|---------|
| `multi_user_bridge.py` | 121K | Evennia MUD bridge (largest file in the repo) |
| `index.html` | 21K | Main 3D world |
| `nexus/symbolic-engine.js` | 12K | Symbolic reasoning |
| `nexus/evennia_ws_bridge.py` | 14K | Evennia ↔ WebSocket |
| `nexus/a2a/server.py` | 12K | A2A server |
| `agent/memory.py` | 12K | Cross-session memory |
| `server.py` | 4K | WebSocket gateway |

---
## Test Coverage

**Test files:** 34 in `tests/`

| Area | Tests | Status |
|------|-------|--------|
| Portal Registry | `test_portal_registry_schema.py` | ✅ |
| MemPalace | `test_mempalace_*.py` (4 files) | ✅ |
| Nexus Watchdog | `test_nexus_watchdog.py` | ✅ |
| A2A | `test_a2a.py` | ✅ |
| Fleet Audit | `test_fleet_audit.py` | ✅ |
| Provenance | `test_provenance.py` | ✅ |
| Boot | `boot.test.js` | ✅ |
### Coverage Gaps

- **No tests for `multi_user_bridge.py`** (the largest file, zero test coverage)
- **No tests for `server.py` WebSocket gateway**
- **No tests for `nexus/symbolic-engine.js`** (only `symbolic-engine.test.js` stub)
- **No integration tests for the Evennia ↔ Bridge ↔ AI flow**
- **No load tests for WebSocket connections**
- **No tests for the Nostr publisher**
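A cheap way to start closing the `server.py` gap is schema-level unit tests that need no running socket. Illustrative only — the validator below is not existing code; the envelope shapes come from the API Surface section:

```python
import json

# Envelope types from the WebSocket API section; the validator is a sketch.
VALID_TYPES = {"perception", "action", "heartbeat"}

def validate_envelope(raw: str) -> dict:
    """Reject frames that don't match the documented envelope."""
    msg = json.loads(raw)
    if msg.get("type") not in VALID_TYPES or "data" not in msg:
        raise ValueError(f"bad envelope: {msg}")
    return msg

assert validate_envelope('{"type": "perception", "data": {"text": "hi"}}')["type"] == "perception"
rejected = False
try:
    validate_envelope('{"type": "teleport", "data": {}}')
except ValueError:
    rejected = True
assert rejected  # unknown types are refused
```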
---
## Security Considerations

1. **WebSocket gateway** runs on `0.0.0.0:8765` — accessible from the network. Needs auth or a firewall.
2. **No authentication** on WebSocket or A2A endpoints in the current code.
3. **Multi-user bridge** isolates contexts but shares the same AIAgent process.
4. **Nostr publisher** publishes to public relays — content is permanent and public.
5. **Fleet scripts** in `bin/` have broad filesystem access.
6. **Systemd services** (`systemd/llama-server.service`) run as root.
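Items 1 and 2 could be mitigated with a shared-token handshake: require the token as the first frame before processing any perception. A sketch of the check itself — names are hypothetical, not current `server.py` code:

```python
import hmac

# Hypothetical first-frame auth check for the WebSocket gateway.
def is_authorized(first_frame: str, expected_token: str) -> bool:
    """Constant-time compare so the token can't be probed byte by byte."""
    return hmac.compare_digest(first_frame.strip(), expected_token)

assert is_authorized("s3cret\n", "s3cret")      # token sent as the first frame
assert not is_authorized("guess", "s3cret")
```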
---
## Dependencies

- **Python:** websockets, pytest, pyyaml, edge-tts, requests, playwright
- **JavaScript:** Three.js (CDN), Monaco Editor (CDN)
- **External:** Evennia MUD, llama.cpp, Nostr relay, Gitea

---
## Configuration

| Config | File | Purpose |
|--------|------|---------|
| Fleet agents | `config/fleet_agents.json` | Agent registry for A2A |
| MemPalace | `nexus/mempalace/config.py` | Memory paths and settings |
| DeepDive | `config/deepdive_sources.yaml` | Research sources |
| MCP | `mcp_config.json` | MCP server config |
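Reading the A2A registry is a one-liner once its schema is fixed; the field names below (`agents`, `name`, `endpoint`) are assumptions for illustration, only the file path comes from the table above:

```python
import json
from pathlib import Path

def parse_fleet(registry: dict) -> dict[str, str]:
    """Map agent name -> endpoint URL (field names are illustrative)."""
    return {a["name"]: a["endpoint"] for a in registry.get("agents", [])}

def load_fleet(path: str = "config/fleet_agents.json") -> dict[str, str]:
    """Load and parse the on-disk registry file."""
    return parse_fleet(json.loads(Path(path).read_text()))
```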
---
## What This Genome Reveals

The codebase is a **living organism** — part 3D world, part MUD bridge, part memory system, part fleet orchestrator. `multi_user_bridge.py` alone weighs in at 121K — by far the largest file in the repo.

**Critical findings:**

1. The 121K `multi_user_bridge.py` has zero test coverage
2. The WebSocket gateway listens on 0.0.0.0 without auth
3. No load-testing infrastructure exists
4. The symbolic engine test is a stub
5. Systemd services run as root

These are not bugs — they're architectural risks that should be tracked.

---
*Generated by Codebase Genome Pipeline — Issue #672*
@@ -37,8 +37,6 @@ import shutil
import subprocess
import sys
import time
import urllib.error
import urllib.request
from datetime import datetime, timezone
from pathlib import Path
from typing import Optional
@@ -121,6 +119,8 @@ def _check_memory(threshold_pct: int = 90) -> tuple[str, str]:


def _check_gitea_reachability(gitea_url: str = "https://forge.alexanderwhitestone.com") -> tuple[str, str]:
    """Return (status, detail) for Gitea HTTPS reachability."""
    import urllib.request
    import urllib.error
    try:
        with urllib.request.urlopen(gitea_url, timeout=10) as resp:
            code = resp.status
@@ -131,21 +131,6 @@ def _check_gitea_reachability(gitea_url: str = "https://forge.alexanderwhiteston
        return "WARN", f"Gitea unreachable: {exc}"


def _check_llama_server(endpoint: str = "http://127.0.0.1:11435") -> tuple[str, str]:
    """Return (status, detail) for the local llama.cpp server health endpoint."""
    health_url = f"{endpoint.rstrip('/')}/health"
    try:
        req = urllib.request.Request(health_url, headers={"Accept": "application/json"})
        with urllib.request.urlopen(req, timeout=10) as resp:
            data = json.loads(resp.read().decode())
        if data.get("status") == "ok":
            model_name = Path(str(data.get("model_path", ""))).name or data.get("model", "unknown-model")
            return "OK", f"llama-server healthy at {endpoint} ({model_name})"
        return "WARN", f"llama-server unhealthy at {endpoint}: {data}"
    except Exception as exc:
        return "WARN", f"llama-server unreachable at {endpoint}: {exc}"


def _check_world_readable_secrets() -> tuple[str, str]:
    """Return (status, detail) for world-readable sensitive files."""
    sensitive_patterns = ["*.key", "*.pem", "*.secret", ".env", "*.token"]
@@ -187,9 +172,6 @@ def generate_report(date_str: str, checker_mod) -> str:
    gitea_status, gitea_detail = _check_gitea_reachability()
    rows.append(("Alpha VPS", gitea_status, gitea_detail))

    llama_status, llama_detail = _check_llama_server()
    rows.append(("Local LLM", llama_status, llama_detail))

    sec_status, sec_detail = _check_world_readable_secrets()
    rows.append(("Security", sec_status, sec_detail))
@@ -40,9 +40,6 @@ Standardizes local LLM inference across the fleet using llama.cpp.
curl -sf http://localhost:11435/health
curl -s http://localhost:11435/v1/models

Night Watch integration:
- `bin/night_watch.py` probes the local llama.cpp `/health` endpoint and surfaces failures in the nightly report.

## Troubleshooting

- Server won't start → try a smaller model or a lower quantization
@@ -1,75 +0,0 @@
from __future__ import annotations

import json
import sys
from pathlib import Path
from unittest.mock import patch

sys.path.insert(0, str(Path(__file__).resolve().parent.parent))


class _FakeResponse:
    def __init__(self, payload: dict):
        self.payload = json.dumps(payload).encode()

    def read(self):
        return self.payload

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        return False


class _FakeHeartbeatReport:
    def to_panel_markdown(self):
        return "## Heartbeat Panel\n\nAll jobs healthy."


class _FakeChecker:
    @staticmethod
    def build_report():
        return _FakeHeartbeatReport()


def test_check_llama_server_reports_healthy_model():
    import bin.night_watch as nw

    with patch(
        "bin.night_watch.urllib.request.urlopen",
        return_value=_FakeResponse({"status": "ok", "model_path": "/opt/models/llama/Qwen2.5-7B-Instruct-Q4_K_M.gguf"}),
    ):
        status, detail = nw._check_llama_server("http://127.0.0.1:11435")

    assert status == "OK"
    assert "127.0.0.1:11435" in detail
    assert "Qwen2.5-7B-Instruct-Q4_K_M.gguf" in detail


def test_check_llama_server_reports_warning_on_failure():
    import bin.night_watch as nw

    with patch(
        "bin.night_watch.urllib.request.urlopen",
        side_effect=OSError("connection refused"),
    ):
        status, detail = nw._check_llama_server("http://127.0.0.1:11435")

    assert status == "WARN"
    assert "connection refused" in detail


def test_generate_report_includes_local_llm_row():
    import bin.night_watch as nw

    with patch("bin.night_watch._check_service", return_value=("OK", "hermes-bezalel is active")), \
         patch("bin.night_watch._check_disk", return_value=("OK", "disk usage 23%")), \
         patch("bin.night_watch._check_memory", return_value=("OK", "memory usage 30%")), \
         patch("bin.night_watch._check_gitea_reachability", return_value=("OK", "Gitea HTTPS is responding (200)")), \
         patch("bin.night_watch._check_world_readable_secrets", return_value=("OK", "no sensitive recently-modified world-readable files found")), \
         patch("bin.night_watch._check_llama_server", return_value=("OK", "llama-server healthy at http://127.0.0.1:11435 (Qwen2.5-7B-Instruct-Q4_K_M.gguf)")):
        report = nw.generate_report("2026-04-15", _FakeChecker())

    assert "| Local LLM | OK | llama-server healthy at http://127.0.0.1:11435 (Qwen2.5-7B-Instruct-Q4_K_M.gguf) |" in report
    assert "## Heartbeat Panel" in report