Compare commits


168 Commits

Author SHA1 Message Date
114525da5f feat: standardize llama.cpp backend for sovereign local inference (#1123)
Some checks failed
Deploy Nexus / deploy (push) Failing after 5s
Staging Verification Gate / verify-staging (push) Failing after 5s
Review Approval Gate / verify-review (pull_request) Failing after 9s
CI / test (pull_request) Failing after 51s
CI / validate (pull_request) Failing after 53s
2026-04-14 11:36:09 +00:00
0de60a756f feat: standardize llama.cpp backend for sovereign local inference (#1123)
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 11:36:05 +00:00
e7bf08b799 feat: standardize llama.cpp backend for sovereign local inference (#1123)
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 11:36:03 +00:00
749878d3ea feat: standardize llama.cpp backend for sovereign local inference (#1123)
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 11:36:01 +00:00
e24ad0f0a7 feat: standardize llama.cpp backend for sovereign local inference (#1123)
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 11:36:00 +00:00
1907388517 [claude] Close duplicate PRs for issue #1128 (#1449) (#1466)
Some checks failed
Deploy Nexus / deploy (push) Failing after 6s
Staging Verification Gate / verify-staging (push) Failing after 6s
Review Approval Gate / verify-review (pull_request) Failing after 9s
CI / validate (pull_request) Failing after 46s
CI / test (pull_request) Failing after 48s
2026-04-14 02:28:09 +00:00
dbd2e400c0 Merge pull request 'feat: Add forge cleanup tools and documentation (#1128)' (#1437) from q/1128-1776129480 into main
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 02:23:59 +00:00
071643c976 [claude] Close duplicate PRs for issue #1338 (#1451) (#1464)
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 02:17:43 +00:00
c7a317babc [claude] Close duplicate PRs for issue #1339 (#1450) (#1465)
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 02:17:38 +00:00
7e23aa0827 [claude] Close duplicate PRs for issue #1336 (#1452) (#1456)
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 02:07:06 +00:00
1eeeea4412 Merge pull request 'fix: Remove duplicate content blocks from README.md and POLICY.md (#1338)' (#1432) from q/1338-1776129480 into main
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 02:02:52 +00:00
cd78f9e4c8 feat: standardize llama.cpp backend for sovereign local inference (#1123)
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 01:57:22 +00:00
5171dda46a feat: standardize llama.cpp backend for sovereign local inference (#1123)
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 01:57:17 +00:00
682431fab1 feat: standardize llama.cpp backend for sovereign local inference (#1123)
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 01:57:14 +00:00
7eb339f3ce feat: standardize llama.cpp backend for sovereign local inference (#1123)
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 01:57:13 +00:00
2f5f874e84 feat: standardize llama.cpp backend for sovereign local inference (#1123)
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 01:57:11 +00:00
ad98bd5ead feat: standardize llama.cpp backend for sovereign local inference (#1123)
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
CI / test (pull_request) Failing after 51s
CI / validate (pull_request) Failing after 51s
Review Approval Gate / verify-review (pull_request) Failing after 7s
2026-04-14 01:52:55 +00:00
e847b0e473 feat: standardize llama.cpp backend for sovereign local inference (#1123)
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 01:52:51 +00:00
63c6829ef8 feat: standardize llama.cpp backend for sovereign local inference (#1123)
Some checks failed
Staging Verification Gate / verify-staging (push) Has been cancelled
Deploy Nexus / deploy (push) Has been cancelled
2026-04-14 01:52:48 +00:00
a55647d5d3 feat: standardize llama.cpp backend for sovereign local inference (#1123)
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 01:52:45 +00:00
64719324e0 feat: standardize llama.cpp backend for sovereign local inference (#1123)
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 01:52:43 +00:00
ee6d12ccf6 [claude] Add .gitattributes export-ignore + large-repo clone docs (#1428) (#1433)
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 01:51:18 +00:00
Alexander Whitestone
a29299820f feat: Add forge cleanup tools and documentation (#1128)
Some checks failed
CI / test (pull_request) Failing after 1m1s
Review Approval Gate / verify-review (pull_request) Failing after 9s
CI / validate (pull_request) Failing after 1m1s
## Summary
Implements forge cleanup tools and documentation as requested in issue #1128.

## Changes
- scripts/cleanup-duplicate-prs.sh: Automated duplicate PR detection
- docs/forge-cleanup-analysis.md: Analysis of duplicate PRs
- docs/forge-cleanup-report.md: Cleanup report with metrics
- .github/workflows/pr-duplicate-check.yml: Weekly automated checks

Issue: #1128
2026-04-13 21:51:12 -04:00
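Editor's note: to illustrate the duplicate detection the script above automates, a minimal Python sketch (the shipped tool is a shell script; representing PRs as dicts with "title" and "number" keys is an assumption):

    from collections import defaultdict

    def find_duplicate_prs(prs):
        """Group open PRs by normalized title; flag groups larger than one."""
        groups = defaultdict(list)
        for pr in prs:
            groups[pr["title"].strip().lower()].append(pr["number"])
        return {title: nums for title, nums in groups.items() if len(nums) > 1}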
84eb8104d8 feat: standardize llama.cpp backend for sovereign local inference (#1123)
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
Review Approval Gate / verify-review (pull_request) Failing after 10s
CI / test (pull_request) Failing after 55s
CI / validate (pull_request) Failing after 56s
2026-04-14 01:48:34 +00:00
93228388d7 feat: standardize llama.cpp backend for sovereign local inference (#1123)
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 01:48:29 +00:00
e27c51c6da feat: standardize llama.cpp backend for sovereign local inference (#1123)
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 01:48:26 +00:00
ed79826608 feat: standardize llama.cpp backend for sovereign local inference (#1123)
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 01:48:23 +00:00
e438662c97 feat: standardize llama.cpp backend for sovereign local inference (#1123)
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 01:48:17 +00:00
Alexander Whitestone
e683a2213f fix: Remove duplicate content blocks from README.md and POLICY.md (#1338)
Some checks failed
CI / test (pull_request) Failing after 40s
Review Approval Gate / verify-review (pull_request) Failing after 6s
CI / validate (pull_request) Failing after 38s
This commit fixes issue #1338 by removing duplicate content blocks that
were appearing 3-4 times on the page.

Changes:
1. README.md:
   - Removed duplicate "Branch Protection & Review Policy" section (lines 121-134)
   - Removed duplicate "Running Locally" section (lines 149-167)
   - Kept the detailed "Branch Protection & Review Policy" section at the top
   - Kept the first "Running Locally" section with all content

2. POLICY.md:
   - Consolidated duplicate content into single cohesive policy
   - Merged two "Branch Protection Rules" sections
   - Merged two "Default Reviewer" sections
   - Merged two "Acceptance Criteria" sections
   - Added "Enforcement" and "Notes" sections from second half

The duplicate content was likely caused by a bad merge or template duplication.
This cleanup ensures each section appears only once while preserving all content.

Closes #1338
2026-04-13 21:44:26 -04:00
449170070b feat: standardize llama.cpp backend (#1123)
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
CI / test (pull_request) Failing after 49s
Review Approval Gate / verify-review (pull_request) Failing after 7s
CI / validate (pull_request) Failing after 53s
2026-04-14 01:42:40 +00:00
3ed6bce5a0 feat: standardize llama.cpp backend (#1123)
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 01:42:37 +00:00
2ecb4cd3a4 feat: standardize llama.cpp backend (#1123)
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 01:42:29 +00:00
1c67f91b74 feat: standardize llama.cpp backend for sovereign local inference (#1123)
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 01:41:35 +00:00
53d9a55444 feat: standardize llama.cpp backend for sovereign local inference (#1123)
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 01:40:14 +00:00
dfbd96f792 Merge pull request 'fix: ChatLog.log() crash — CHATLOG_FILE defined after use (#1349)' (#1390) from burn/1349-1776125702 into main
Some checks failed
Deploy Nexus / deploy (push) Failing after 5s
Staging Verification Gate / verify-staging (push) Failing after 7s
2026-04-14 00:38:01 +00:00
5d5ea8ec1b Merge pull request '[A2A] Implement Agent2Agent Protocol for Fleet-Wizard Delegation' (#1380) from nexusburn/research-1122 into main
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 00:37:29 +00:00
3f58b55351 Merge pull request 'fix(#1356): ThreadingHTTPServer concurrency fix' (#1389) from burn/1356-1776125702 into main
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 00:37:25 +00:00
4b9f2154d4 Merge pull request 'fix: Add Sovereign Sound Playground and fix portals.json (#1354)' (#1391) from burn/1354-1776125702 into main
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 00:37:20 +00:00
2e60c479ae Merge pull request 'feat: Add Reasoning Trace HUD Component' (#1374) from nexusburn/direction-architecture-875 into main
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 00:36:25 +00:00
67a080b4fd Merge pull request 'feat: fleet audit tool — deduplicate agents, one identity per machine' (#1386) from burn/1144-1776120221 into main
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 00:36:20 +00:00
961623b931 Merge pull request 'fix: MEMPALACE INIT shows real stats from fleet API (#1340)' (#1387) from burn/1340-1776120221 into main
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 00:36:16 +00:00
3bb44a24e2 Merge pull request 'fix: remove duplicate content blocks from README.md' (#1392) from burn/1338-1776125702 into main
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-14 00:36:10 +00:00
Alexander Whitestone
39faa6b862 fix: reconcile registry locations with fleet-routing.json, add missing agents
Some checks failed
CI / test (pull_request) Failing after 52s
Review Approval Gate / verify-review (pull_request) Successful in 10s
CI / validate (pull_request) Failing after 48s
- Aligned 7 location mismatches between identity-registry.yaml and
  fleet-routing.json (allegro, ezra, bezalel, bilbobagginshire,
  substratum, fenrir, kimi)
- Added carnice (active, local ollama agent) to registry
- Added allegro-primus (deprecated) to registry

Audit results: 16 findings → 7 info-only (ghost agents intentionally
kept for audit trail). Zero warnings. Registry VALID.
2026-04-13 20:20:57 -04:00
Alexander Whitestone
8fa43cc228 fix: Add Sovereign Sound Playground and fix portals.json (#1354)
Some checks failed
CI / test (pull_request) Failing after 53s
CI / validate (pull_request) Failing after 55s
Review Approval Gate / verify-review (pull_request) Failing after 7s
This commit addresses issue #1354 by:

1. Fixing portals.json syntax error (duplicate params field)
2. Adding the Sovereign Sound Playground as a new portal
3. Including the complete playground application

Changes:
- Fixed JSON syntax error in portals.json (lines 41-44)
- Added playground/playground.html - Complete interactive audio-visual experience
- Added playground/README.md - Documentation and usage guide
- Updated portals.json with playground portal entry

The playground portal is configured with:
- Online status
- Visitor access mode
- Local destination URL
- Creative tool portal type

This resolves the issue and provides a working playground accessible through the Nexus portal system.
2026-04-13 20:18:57 -04:00
Alexander Whitestone
b9bc776fdb fix: remove duplicate content blocks from README.md
Some checks failed
Review Approval Gate / verify-review (pull_request) Failing after 7s
CI / test (pull_request) Failing after 43s
CI / validate (pull_request) Failing after 44s
## Summary
Fixed duplicate content blocks in README.md caused by bad merge.
Branch protection policy, default reviewers, and implementation status
blocks were duplicated 3-4 times on the page.

## Problem
The README.md file had massive duplication from multiple bad merges:
- Branch protection policy appeared 4 times
- Default reviewers appeared multiple times
- Implementation status appeared multiple times
- Repository-specific configuration duplicated
- Acceptance criteria duplicated

The file grew to 517 lines with the same content repeated.

## Solution
Cleaned up README.md to contain:
1. Single branch protection policy section
2. Original Nexus project content (preserved)
3. Clean structure without duplicates

Reduced from 517 lines to 167 lines while preserving all unique content.

## Changes
- Removed duplicate branch protection policy sections
- Removed duplicate default reviewers sections
- Removed duplicate implementation status sections
- Removed duplicate repository-specific configuration
- Removed duplicate acceptance criteria
- Preserved original Nexus project content
- Maintained clear structure and formatting

## Testing
- Verified all unique content is preserved
- Checked for any remaining duplicates
- Confirmed file structure is clean and readable

## Acceptance Criteria
✓ Branch protection policy appears once
✓ Default reviewers appear once
✓ Implementation status appears once
✓ Content is clear and not duplicated
✓ Original Nexus content preserved

Issue: #1338
2026-04-13 20:18:43 -04:00
Alexander Whitestone
9bcd41ad07 fix: ChatLog.log() crash — CHATLOG_FILE defined after use (#1349)
Some checks failed
CI / test (pull_request) Failing after 31s
CI / validate (pull_request) Failing after 31s
Review Approval Gate / verify-review (pull_request) Failing after 3s
Move configuration block (WORLD_DIR, CHATLOG_FILE, etc.) before the
ChatLog class definition. Previously CHATLOG_FILE was defined at line ~254
but used at line ~200 inside ChatLog.log(), causing NameError on every
chat message persistence attempt.

Fixes #1349.
2026-04-13 20:17:02 -04:00
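Editor's note: a minimal reconstruction of the ordering bug fixed above; the file names and module layout here are assumptions, only the rule (configuration constants bound before the class that reads them) comes from the commit:

    from pathlib import Path

    # Fix: the configuration block now precedes the class that reads it.
    WORLD_DIR = Path.home() / "world"            # assumed default location
    CHATLOG_FILE = WORLD_DIR / "chatlog.jsonl"   # assumed filename

    class ChatLog:
        def log(self, message: str) -> None:
            # Before the fix, CHATLOG_FILE was bound ~50 lines below this
            # class, so calls made while the module was still executing its
            # top-level code raised NameError on every persistence attempt.
            CHATLOG_FILE.parent.mkdir(parents=True, exist_ok=True)
            with CHATLOG_FILE.open("a", encoding="utf-8") as fh:
                fh.write(message + "\n")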
Timmy
d7a15ae046 fix(#1356): ThreadingHTTPServer for multi-user bridge concurrency
Some checks failed
Review Approval Gate / verify-review (pull_request) Failing after 8s
CI / validate (pull_request) Failing after 47s
CI / test (pull_request) Failing after 49s
Replace single-threaded HTTPServer with ThreadingHTTPServer
(thread-per-request) in both multi_user_bridge.py copies.

Fixes #1356
2026-04-13 20:16:21 -04:00
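Editor's note: the core of the swap, as a self-contained sketch (handler body and port are assumed; only the server-class change is from the commit):

    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

    class BridgeHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")

    if __name__ == "__main__":
        # ThreadingHTTPServer (stdlib since Python 3.7) serves each request
        # on its own thread, so one slow client no longer blocks the bridge.
        ThreadingHTTPServer(("127.0.0.1", 8081), BridgeHandler).serve_forever()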
Timmy
7fab9799b1 fix: MEMPALACE INIT shows real stats from fleet API (#1340)
Some checks failed
CI / test (pull_request) Failing after 31s
CI / validate (pull_request) Failing after 53s
Review Approval Gate / verify-review (pull_request) Failing after 7s
Root cause: connectMemPalace() set placeholder values (0x, 0, 0B)
immediately and tried to connect to window.Claude.mcp, which doesn't
exist in a normal browser; it never contacted the actual fleet API.

Fix:
- Replace connectMemPalace() to fetch from fleet API (/health, /wings)
- Show MEMPALACE CONNECTING during fetch, ACTIVE on success,
  OFFLINE if API unavailable
- Populate compression ratio, docs mined, AAAK size from real data
- Add formatBytes() helper for human-readable sizes
- Periodic refresh every 60s when connected
- Configurable API endpoint via ?mempalace=host:port query param
- Remove dead window.Claude.mcp mock code
2026-04-13 18:53:08 -04:00
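Editor's note: the formatBytes() helper mentioned above ships as browser JavaScript; a language-neutral sketch of the same idea in Python (unit list and one-decimal rounding are assumptions):

    def format_bytes(n: float) -> str:
        """Human-readable size, e.g. 1536 -> '1.5 KB'."""
        units = ("B", "KB", "MB", "GB", "TB")
        for unit in units[:-1]:
            if n < 1024:
                return f"{n:.1f} {unit}"
            n /= 1024.0
        return f"{n:.1f} TB"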
Timmy (NEXUSBURN)
66c010301d feat: fleet audit tool — deduplicate agents, one identity per machine
Some checks failed
CI / test (pull_request) Failing after 38s
Review Approval Gate / verify-review (pull_request) Failing after 5s
CI / validate (pull_request) Failing after 34s
Closes #1144. Builds a fleet audit pipeline that detects duplicate
agent identities, ghost accounts, and authorship ambiguity across
all machines.

Deliverables:

bin/fleet_audit.py — Full audit tool with four checks:
  - Identity registry validation (one name per machine, unique gitea_user)
  - Git authorship audit (detects ambiguous committers from branch names)
  - Gitea org member audit (finds ghost accounts with zero activity)
  - Cross-reference registry vs fleet-routing.json (orphan/location mismatch)

fleet/identity-registry.yaml — Canonical identity registry:
  - 8 active agents (timmy, allegro, ezra, bezalel, bilbobagginshire,
    fenrir, substratum, claw-code)
  - 7 ghost/deprecated accounts marked inactive
  - Rules: one identity per machine, unique gitea_user, required fields

tests/test_fleet_audit.py — 11 tests covering all validation rules.

Usage:
  python3 bin/fleet_audit.py                  # full audit -> JSON
  python3 bin/fleet_audit.py --identity-check # registry only
  python3 bin/fleet_audit.py --git-authors    # authorship only
  python3 bin/fleet_audit.py --report out.json # write to file
2026-04-13 18:51:31 -04:00
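Editor's note: an illustrative version of the first registry check (one identity per machine, unique gitea_user). The per-agent schema assumed here, "machine", "gitea_user", and "active" fields, is a guess; the shipped bin/fleet_audit.py may differ:

    from collections import Counter

    def check_identity_registry(agents: list[dict]) -> list[str]:
        findings = []
        machines = Counter(a["machine"] for a in agents if a.get("active"))
        users = Counter(a["gitea_user"] for a in agents)
        findings += [f"{n} active identities on machine {m}"
                     for m, n in machines.items() if n > 1]
        findings += [f"gitea_user '{u}' reused {n} times"
                     for u, n in users.items() if n > 1]
        return findings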
Alexander Whitestone
bb9758c4d2 feat: implement A2A protocol for fleet-wizard delegation (#1122)
Some checks failed
CI / test (pull_request) Failing after 31s
Review Approval Gate / verify-review (pull_request) Failing after 4s
CI / validate (pull_request) Failing after 30s
Implements Google Agent2Agent Protocol v1.0 with full fleet integration:

## Phase 1 - Agent Card & Discovery
- Agent Card types with JSON serialization (camelCase, Part discrimination by key)
- Card generation from YAML config (~/.hermes/agent_card.yaml)
- Fleet registry with LocalFileRegistry + GiteaRegistry backends
- Discovery by skill ID or tag

## Phase 2 - Task Delegation
- Async A2A client with JSON-RPC SendMessage/GetTask/ListTasks/CancelTask
- FastAPI server with pluggable task handlers (skill-routed)
- CLI tool (bin/a2a_delegate.py) for fleet delegation
- Broadcast to multiple agents in parallel

## Phase 3 - Security & Reliability
- Bearer token + API key auth (configurable per agent)
- Retry logic (max 3 retries, 30s timeout)
- Audit logging for all inter-agent requests
- Error handling per A2A spec (-32001 to -32009 codes)

## Test Coverage
- 37 tests covering types, card building, registry, server integration
- Auth (required + success), handler routing, error handling

Files:
- nexus/a2a/ (types.py, card.py, client.py, server.py, registry.py)
- bin/a2a_delegate.py (CLI)
- config/ (agent_card.example.yaml, fleet_agents.json)
- docs/A2A_PROTOCOL.md
- tests/test_a2a.py (37 tests, all passing)
2026-04-13 18:31:05 -04:00
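Editor's note: a hedged sketch of the delegation call shape, one JSON-RPC send with bearer auth, a 30s timeout, and up to 3 retries, per the commit. The method name, payload shape, and httpx dependency are assumptions; the shipped client lives in nexus/a2a/client.py:

    import asyncio
    import httpx

    async def send_message(base_url: str, token: str, text: str) -> dict:
        payload = {
            "jsonrpc": "2.0",
            "id": 1,
            "method": "message/send",  # assumed A2A method name
            "params": {"message": {"role": "user",
                                   "parts": [{"text": text}]}},
        }
        headers = {"Authorization": f"Bearer {token}"}
        async with httpx.AsyncClient(timeout=30.0) as client:
            for attempt in range(3):  # max 3 retries per the commit
                try:
                    resp = await client.post(base_url, json=payload,
                                             headers=headers)
                    resp.raise_for_status()
                    return resp.json()
                except httpx.HTTPError:
                    if attempt == 2:
                        raise
                    await asyncio.sleep(2 ** attempt)  # simple backoff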
Alexander Whitestone
4488847c13 feat: Add Reasoning Trace HUD Component
Some checks failed
CI / test (pull_request) Failing after 57s
CI / validate (pull_request) Failing after 34s
Review Approval Gate / verify-review (pull_request) Failing after 6s
Closes #875

- Added new ReasoningTrace component for real-time reasoning visualization
- Shows agent's reasoning steps during complex task execution
- Supports step types: THINK, DECIDE, RECALL, PLAN, EXECUTE, VERIFY, DOUBT, MEMORY
- Includes confidence visualization, task tracking, and export functionality
- Integrated into existing GOFAI HUD system
2026-04-13 18:23:05 -04:00
106eea4015 Merge pull request 'test: guard index.html against merge junk' (#1365) from fix/issue-1336-1338-index-cleanup into main
Some checks failed
Deploy Nexus / deploy (push) Failing after 3s
Staging Verification Gate / verify-staging (push) Failing after 3s
Merge PR #1365: test: guard index.html against merge junk
2026-04-13 19:51:07 +00:00
Timmy
8a289d3b22 [verified] test: guard index.html against merge junk
Some checks failed
CI / test (pull_request) Failing after 19s
CI / validate (pull_request) Failing after 19s
Review Approval Gate / verify-review (pull_request) Failing after 4s
Refs #1336
Refs #1338

- assert index.html has no conflict markers or stray markdown
- assert cleaned single-instance blocks stay single
2026-04-13 15:38:28 -04:00
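Editor's note: a sketch of what the first guard assertion can look like (file path and marker set assumed; the actual test in the PR is not shown here):

    from pathlib import Path

    def test_index_has_no_merge_junk():
        text = Path("index.html").read_text(encoding="utf-8")
        for marker in ("<<<<<<<", ">>>>>>>", "======="):
            assert marker not in text, f"conflict marker {marker!r} in index.html"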
e82faa5855 [claude] Fix: unblock CI deploy and staging gate secrets (#1363) (#1364)
Some checks failed
Deploy Nexus / deploy (push) Failing after 6s
Staging Verification Gate / verify-staging (push) Failing after 4s
2026-04-13 19:25:00 +00:00
b411efcc09 Merge pull request 'fix: harden Three.js boot path' (#1362) from fix/issue-1337-threejs-init into main
Some checks failed
Deploy Nexus / deploy (push) Failing after 4s
Staging Verification Gate / verify-staging (push) Failing after 3s
Merged by Timmy overnight cycle
2026-04-13 14:02:52 +00:00
Timmy
7e434cc567 [verified] fix: harden Three.js boot path
Some checks failed
CI / test (pull_request) Failing after 18s
CI / validate (pull_request) Failing after 16s
Review Approval Gate / verify-review (pull_request) Failing after 2s
Fixes #1337

- show explicit guidance when opened from file://
- route browser boot through a classic script gate
- sanitize malformed generated app module before execution
- trim duplicated footer junk and add regression tests
2026-04-13 09:47:50 -04:00
859a215106 fix: [RESPONSIVE] Tighten layout for laptop and smaller-screen viewing (#1359)
Some checks failed
Deploy Nexus / deploy (push) Failing after 2s
Staging Verification Gate / verify-staging (push) Failing after 2s
Co-authored-by: Alexander Whitestone <alexander@alexanderwhitestone.com>
Co-committed-by: Alexander Whitestone <alexander@alexanderwhitestone.com>
2026-04-13 08:30:22 +00:00
21bd999cad Merge pull request 'fix: [RELIABILITY] Eliminate visible 404 and dead-control states in production Nexus' (#1360) from mimo/code/issue-707 into main
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-13 08:29:43 +00:00
4287e6892a Merge pull request 'fix: call self.load() in all game system manager __init__ methods' (#1361) from burn/20260413-0408-fix into main
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-13 08:29:39 +00:00
Alexander Whitestone
2600e8b61c fix: call self.load() in all game system manager __init__ methods
Some checks failed
CI / test (pull_request) Failing after 17s
CI / validate (pull_request) Failing after 15s
Review Approval Gate / verify-review (pull_request) Failing after 2s
QuestManager, InventoryManager, GuildManager, CombatManager, and
MagicManager all had load() methods that were never called. This
meant quests were never seeded, items never appeared in rooms, and
all game data started empty on every server restart.

Fixes #1351
2026-04-13 04:13:38 -04:00
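Editor's note: a minimal sketch of the pattern the fix applies to all five managers (class and method names are from the commit; the storage details are assumed):

    import json
    import os

    class QuestManager:
        def __init__(self, storage_path: str = "quests.json"):
            self.storage_path = storage_path
            self.quests = {}
            self.load()  # previously never called: quests stayed empty

        def load(self) -> None:
            if os.path.exists(self.storage_path):
                with open(self.storage_path) as fh:
                    self.quests = json.load(fh)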
Alexander Whitestone
9e19c22c8e fix: eliminate two 404 sources — case mismatch + missing icons
Some checks failed
CI / test (pull_request) Failing after 16s
CI / validate (pull_request) Failing after 15s
Review Approval Gate / verify-review (pull_request) Failing after 4s
- app.js:1195: Fix timmy_Foundation → Timmy_Foundation in vision.json API URL.
  The lowercase 't' caused a silent 404 on case-sensitive servers, preventing
  world state from loading in fetchGiteaData().

- Create icons/icon-192x192.png and icons/icon-512x512.png placeholders.
  Both manifest.json and service-worker.js referenced these but the icons/
  directory was missing, causing 404 on every page load and SW install.

Refs #707
2026-04-13 04:10:01 -04:00
85ffbfed33 Merge pull request 'fix: one-way exits — rooms now bidirectional (#1350)' (#1357) from feat/paper-results into main
Some checks failed
Deploy Nexus / deploy (push) Failing after 3s
Staging Verification Gate / verify-staging (push) Failing after 3s
Merge PR #1357: fix: one-way exits — rooms now bidirectional (#1350)
2026-04-13 07:31:47 +00:00
Alexander Whitestone
0843a2a006 fix: one-way exits — rooms now bidirectional (#1350)
Some checks failed
CI / test (pull_request) Failing after 22s
CI / validate (pull_request) Failing after 15s
Review Approval Gate / verify-review (pull_request) Failing after 2s
World state: added explicit exits dict to all 5 rooms
Bridge: reads exits from world_state.json first, falls back to description parsing

Before: inner rooms (Tower, Garden, Forge, Bridge) had no exits
After: all rooms bidirectional — Threshold connects to all 4, each connects back
2026-04-13 03:27:19 -04:00
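Editor's note: the resulting world-state shape, sketched with assumed direction names (the commit confirms only that each of the five rooms carries an explicit exits dict and that every link is bidirectional):

    world_state = {
        "Threshold": {"exits": {"north": "Tower", "east": "Garden",
                                "south": "Forge", "west": "Bridge"}},
        "Tower":     {"exits": {"south": "Threshold"}},
        "Garden":    {"exits": {"west": "Threshold"}},
        "Forge":     {"exits": {"north": "Threshold"}},
        "Bridge":    {"exits": {"east": "Threshold"}},
    }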
a5acbdb2c4 Merge pull request 'Add paper Results section (4 experiments)' (#1355) from feat/paper-results into main
Some checks failed
Deploy Nexus / deploy (push) Failing after 3s
Staging Verification Gate / verify-staging (push) Failing after 3s
Auto-merge #1355
2026-04-13 07:15:25 +00:00
Alexander Whitestone
39d68fd921 Add paper Results section with 4 experiments
Some checks failed
CI / test (pull_request) Failing after 18s
CI / validate (pull_request) Failing after 16s
Review Approval Gate / verify-review (pull_request) Failing after 4s
2026-04-13 02:28:34 -04:00
a290da4e41 Merge pull request 'feat: full-history persistent dedup index for DPO training pairs' (#1352) from feature/full-history-dedup into main
Some checks failed
Deploy Nexus / deploy (push) Failing after 2s
Staging Verification Gate / verify-staging (push) Failing after 2s
Weekly Privacy Audit / privacy-audit (push) Successful in 5s
2026-04-13 03:11:43 +00:00
perplexity
4b15cf8283 feat: full-history persistent dedup index for DPO training pairs
Some checks failed
CI / test (pull_request) Failing after 16s
CI / validate (pull_request) Failing after 14s
Review Approval Gate / verify-review (pull_request) Failing after 3s
Replace the 5-file sliding window cross-run dedup with a persistent
hash index that covers ALL historical training data. Overfitting risk
compounds across the full dataset — a 5-file window lets old duplicates
leak back into training after enough overnight runs.

New module: dedup_index.py (DedupIndex)
- Persistent JSON index (.dpo_dedup_index.json) alongside JSONL files
- Append-on-export: new prompt hashes registered after each successful
  export — no full rescan needed for normal operations
- Incremental sync: on load, detects JSONL files not yet indexed and
  ingests them automatically (handles files from other tools)
- Full rebuild: rebuild() scans ALL deepdive_*.jsonl + pairs_*.jsonl
  to reconstruct from scratch (first run, corruption recovery)
- Atomic writes (write-to-tmp + rename) to prevent index corruption
- Standalone CLI: python3 dedup_index.py <dir> --rebuild --stats

Modified: dpo_quality.py
- Imports DedupIndex with graceful degradation
- Replaces _load_history_hashes() with persistent index lookup
- Fallback: if index unavailable, scans ALL files in-memory (not just 5)
- New register_exported_hashes() method called after export
- Config key: dedup_full_history (replaces dedup_history_files)

Modified: dpo_generator.py
- Calls validator.register_exported_hashes() after successful export
  to keep the persistent index current without rescanning

Modified: config.yaml
- Replaced dedup_history_files: 5 with dedup_full_history: true

Tested — 7 integration tests:
  ✓ Fresh index build from empty directory
  ✓ Build from 3 existing JSONL files (15 unique hashes)
  ✓ Incremental sync when new file appears between runs
  ✓ Append after export + persistence across reloads
  ✓ Rebuild from scratch (recovers from corruption)
  ✓ Validator catches day-1 dupe from 20-day history (5-file window miss)
  ✓ Full pipeline: generate → validate → export → register → re-run detects
2026-04-13 03:11:10 +00:00
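Editor's note: the persistent-index core, sketched: a set of prompt hashes with atomic write-to-tmp + rename, per the commit. The index filename matches the message; the hash choice (SHA-256 of the prompt) is an assumption:

    import hashlib
    import json
    import os
    import tempfile

    class DedupIndex:
        def __init__(self, path: str = ".dpo_dedup_index.json"):
            self.path = path
            self.hashes: set[str] = set()
            if os.path.exists(path):
                with open(path) as fh:
                    self.hashes = set(json.load(fh))

        @staticmethod
        def key(prompt: str) -> str:
            return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

        def seen(self, prompt: str) -> bool:
            return self.key(prompt) in self.hashes

        def register(self, prompts) -> None:
            # Append-on-export: add new hashes, then persist atomically.
            self.hashes.update(self.key(p) for p in prompts)
            fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.path) or ".")
            with os.fdopen(fd, "w") as fh:
                json.dump(sorted(self.hashes), fh)
            os.replace(tmp, self.path)  # atomic rename prevents corruption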
c00e1caa26 Merge pull request 'feat: DPO pair quality validator — gate before overnight training' (#1348) from feature/dpo-quality-validator into main
Some checks failed
Deploy Nexus / deploy (push) Failing after 3s
Staging Verification Gate / verify-staging (push) Failing after 3s
2026-04-13 02:47:25 +00:00
perplexity
bb4922adeb feat: DPO pair quality validator — gate before overnight training
Some checks failed
CI / test (pull_request) Failing after 20s
CI / validate (pull_request) Failing after 16s
Review Approval Gate / verify-review (pull_request) Failing after 2s
Add DPOQualityValidator that catches bad training pairs before they
enter the tightening loop. Wired into DPOPairGenerator between
generate() and export() as an automatic quality gate.

New module: dpo_quality.py
- 5 single-pair quality checks:
  1. Field length minimums (prompt ≥40, chosen ≥80, rejected ≥30 chars)
  2. Chosen/rejected length ratio (chosen must be ≥1.3x longer)
  3. Chosen≈rejected similarity (Jaccard ≤0.70 — catches low-contrast)
  4. Vocabulary diversity in chosen (unique word ratio ≥0.30)
  5. Substance markers in chosen (≥2 fleet/training/action terms)
- 2 cross-pair quality checks:
  6. Near-duplicate prompts within batch (Jaccard ≤0.85)
  7. Cross-run dedup against recent JSONL history files
- Two modes: 'drop' (filter out bad pairs) or 'flag' (export with warning)
- BatchReport with per-pair diagnostics, pass rates, and warnings
- Standalone CLI: python3 dpo_quality.py <file.jsonl> [--strict] [--json]

Modified: dpo_generator.py
- Imports DPOQualityValidator with graceful degradation
- Initializes from config validation section (enabled by default)
- Validates between generate() and export() in run()
- Quality report included in pipeline result dict
- Validator failure never blocks — falls back to unvalidated export

Modified: config.yaml
- New deepdive.training.dpo.validation section with all tunable knobs:
  enabled, flagged_pair_action, similarity thresholds, length minimums,
  dedup_history_files

Integration tested — 6 test cases covering:
  ✓ Good pairs pass (3/3 accepted)
  ✓ Bad pairs caught: too-short, high-similarity, inverted signal (0/3)
  ✓ Near-duplicate prompt detection (1/2 deduped)
  ✓ Flag mode preserves pairs with warnings (3/3 flagged)
  ✓ Cross-run deduplication against history (1 dupe caught)
  ✓ Full generator→validator→export pipeline (6/6 validated)
2026-04-13 02:46:50 +00:00
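Editor's note: three of the single-pair checks above, sketched with the thresholds the commit states (word-level Jaccard is an assumption about how similarity is tokenized):

    def jaccard(a: str, b: str) -> float:
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

    def pair_passes(prompt: str, chosen: str, rejected: str) -> bool:
        if len(prompt) < 40 or len(chosen) < 80 or len(rejected) < 30:
            return False                          # check 1: length minimums
        if len(chosen) < 1.3 * len(rejected):
            return False                          # check 2: length ratio
        return jaccard(chosen, rejected) <= 0.70  # check 3: low-contrast gate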
c19000de03 Merge pull request 'feat: Phase 3.5 — DPO training pair generation from Deep Dive pipeline' (#1347) from feature/deepdive-dpo-phase-3.5 into main
Some checks failed
Deploy Nexus / deploy (push) Failing after 3s
Staging Verification Gate / verify-staging (push) Failing after 3s
2026-04-13 02:24:35 +00:00
perplexity
55d53c513c feat: Phase 3.5 — DPO training pair generation from Deep Dive pipeline
Some checks failed
CI / test (pull_request) Failing after 22s
CI / validate (pull_request) Failing after 15s
Review Approval Gate / verify-review (pull_request) Failing after 2s
Wire arXiv relevance filter output directly into DPO pair generation,
closing the loop between research synthesis and overnight training data.

New module: dpo_generator.py
- DPOPairGenerator class with 3 pair strategies:
  * summarize: paper → fleet-grounded analysis (chosen) vs generic (rejected)
  * relevance: 'what matters to Hermes?' → scored context vs vague
  * implication: 'what should we do?' → actionable insight vs platitude
- Extracts synthesis excerpts matched to each ranked item
- Outputs to ~/.timmy/training-data/dpo-pairs/deepdive_{timestamp}.jsonl
- Format: {prompt, chosen, rejected, task_type, evidence_ids,
  source_session, safety_flags, metadata}

Pipeline changes (pipeline.py):
- Import DPOPairGenerator with graceful degradation
- Initialize from config deepdive.training.dpo section
- Execute as Phase 3.5 between synthesis and audio
- DPO results included in pipeline return dict
- Wrapped in try/except — DPO failure never blocks delivery

Config changes (config.yaml):
- New deepdive.training.dpo section with:
  enabled, output_dir, min_score, max_pairs_per_run, pair_types

Integration tested: 2 mock items × 3 pair types = 6 valid JSONL pairs.
Chosen responses consistently richer than rejected (assert-verified).
2026-04-13 02:24:04 +00:00
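Editor's note: one record in the JSONL format the commit specifies; the field names are from the message, the values are invented placeholders:

    import json

    record = {
        "prompt": "What does this paper imply for the fleet?",  # placeholder
        "chosen": "Actionable, fleet-grounded analysis ...",    # placeholder
        "rejected": "This paper is about machine learning.",    # placeholder
        "task_type": "implication",
        "evidence_ids": ["item-001"],
        "source_session": "deepdive_20260413",
        "safety_flags": [],
        "metadata": {},
    }
    print(json.dumps(record))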
f737577faf purge: remove Anthropic from the-nexus fleet + deepdive (#1346)
Some checks failed
Deploy Nexus / deploy (push) Failing after 3s
Staging Verification Gate / verify-staging (push) Failing after 3s
2026-04-13 02:02:12 +00:00
ff430d5aa0 Merge pull request 'fix: deduplicate playwright install in CI' (#1345) from perplexity/fix-ci-playwright-dupe into main
Some checks failed
Deploy Nexus / deploy (push) Failing after 3s
Staging Verification Gate / verify-staging (push) Failing after 3s
2026-04-13 01:35:54 +00:00
d0af4035ef Merge pull request 'muda: remove 13 stale cross-repo artifacts' (#1344) from perplexity/muda-cleanup-cross-repo into main
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-13 01:35:45 +00:00
71e8ee5615 Merge PR #1343
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
Add structured GOFAI worker outcomes and goal-directed planning
2026-04-13 01:34:45 +00:00
6c02baeeca fix: deduplicate playwright install in CI
Some checks failed
CI / test (pull_request) Failing after 19s
CI / validate (pull_request) Failing after 18s
Review Approval Gate / verify-review (pull_request) Failing after 4s
2026-04-13 01:34:09 +00:00
2bc7a81859 muda: remove stale artifact protected_branches.yaml`
Some checks failed
CI / test (pull_request) Failing after 26s
CI / validate (pull_request) Failing after 15s
Review Approval Gate / verify-review (pull_request) Failing after 3s
2026-04-13 01:33:57 +00:00
389aafb5ab muda: remove stale artifact codowners 2026-04-13 01:33:56 +00:00
07c8b29014 muda: remove stale artifact cODEOWNERS 2026-04-13 01:33:54 +00:00
cab7855469 muda: remove stale artifact cODEOWNERS 2026-04-13 01:33:52 +00:00
5039f31545 muda: remove stale artifact cODEOWNERS 2026-04-13 01:33:51 +00:00
e6e9d261df muda: remove stale artifact CODEOWNERS 2026-04-13 01:33:49 +00:00
b19cd64415 muda: remove stale artifact CODEOWNERS 2026-04-13 01:33:47 +00:00
7505bc21a5 muda: remove stale artifact CODEOWNERS 2026-04-13 01:33:46 +00:00
8398abec89 muda: remove stale artifact CODEOWNERS 2026-04-13 01:33:44 +00:00
49cf69c65a muda: remove stale artifact CODEOWNERS 2026-04-13 01:33:42 +00:00
32ee8d5568 muda: remove stale artifact CODEOWNERS 2026-04-13 01:33:41 +00:00
0ef1627ed1 muda: remove stale artifact CONTRIBUTING.md 2026-04-13 01:33:39 +00:00
c1e7ec4b9c muda: remove stale artifact CODEOWNERS 2026-04-13 01:33:37 +00:00
8e21c0e3ae Merge pull request 'fix: [SMOKE] [CI] Fix dependencies, CI pipeline, and clean muda' (#1334) from fix/smoke-tests-and-muda into main
Some checks failed
Deploy Nexus / deploy (push) Failing after 3s
Staging Verification Gate / verify-staging (push) Failing after 3s
2026-04-13 00:57:41 +00:00
16a14fd014 fix: remove stale file docus/branch-protection.md
Some checks failed
CI / test (pull_request) Failing after 23s
CI / validate (pull_request) Failing after 15s
Review Approval Gate / verify-review (pull_request) Successful in 3s
2026-04-13 00:56:49 +00:00
349cb0296c fix: remove stale file timmy-home/SOUL.md 2026-04-13 00:56:49 +00:00
10c4b66393 fix: remove stale file timmy-home/CONTRIBUTING.md 2026-04-13 00:56:49 +00:00
cd57b020ea fix: remove stale file timmy-home/CODEOWNERS 2026-04-13 00:56:49 +00:00
9bc9ed2b30 fix: remove stale file timmy-config/SOUL.md 2026-04-13 00:56:49 +00:00
3bbd944d43 fix: remove stale file timmy-config/CONTRIBUTING.md 2026-04-13 00:56:49 +00:00
737740a2e6 fix: remove stale file timmy-config/CODEOWNERS 2026-04-13 00:56:49 +00:00
b45350d815 fix: remove stale file the-nexus/CONTRIBUTING.md 2026-04-13 00:56:49 +00:00
ffbd4f09ea fix: remove stale file the-nexus/CODEOWNERS 2026-04-13 00:56:49 +00:00
eedfd1c462 fix: remove root muda .gitea.yaml 2026-04-13 00:56:49 +00:00
370a33028d feat: add playwright to repo truth guard 2026-04-13 00:56:49 +00:00
1af9530db0 fix: install playwright browsers in CI 2026-04-13 00:56:49 +00:00
3ebd0b18ce fix: align docker-compose.yml with deploy.sh services 2026-04-13 00:56:49 +00:00
8bff05581c fix: use requirements.txt in Dockerfile 2026-04-13 00:56:49 +00:00
056d8ae5ff fix: install playwright browsers in CI 2026-04-13 00:56:36 +00:00
39436f675e fix: add missing dependencies to requirements.txt 2026-04-13 00:56:36 +00:00
fe5b6f6877 Merge pull request 'docs: Nexus Symbolic Engine documentation and tests' (#1332) from feat/symbolic-docs-and-tests-v2 into main
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-13 00:56:17 +00:00
b863900300 Merge pull request 'fix: [EPIC] Deep Dive: Sovereign NotebookLM + Daily AI Intelligence Briefing' (#1325) from mimo/build/issue-830 into main
Some checks failed
Deploy Nexus / deploy (push) Failing after 2s
Staging Verification Gate / verify-staging (push) Failing after 3s
2026-04-13 00:55:36 +00:00
b6cafe8807 Merge pull request 'feat: derive GOFAI perception from live Nexus state' (#1342) from burn/20260413-gofai-live-perception into main
Some checks failed
Deploy Nexus / deploy (push) Failing after 3s
Staging Verification Gate / verify-staging (push) Failing after 3s
2026-04-13 00:55:27 +00:00
6ad0caf5e4 Merge pull request 'feat: Multi-user AI bridge + research paper draft' (#1326) from feat/multi-user-bridge into main
Some checks failed
Deploy Nexus / deploy (push) Failing after 3s
Staging Verification Gate / verify-staging (push) Failing after 2s
2026-04-13 00:54:47 +00:00
53cc00ac5d Merge pull request 'fix: [UX] Build Nexus Health HUD component' (#1331) from mimo/build/issue-802 into main
Some checks failed
Deploy Nexus / deploy (push) Failing after 3s
Staging Verification Gate / verify-staging (push) Failing after 3s
2026-04-13 00:54:31 +00:00
53e9dd93d8 Merge pull request 'fix: clean corrupted .gitea.yml and remove stale artifacts' (#1319) from mimo/research/issue-893 into main
Some checks failed
Deploy Nexus / deploy (push) Failing after 2s
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-13 00:54:21 +00:00
c35940ef5d Merge pull request 'fix: [PORTALS] Show cross-world presence and where Timmy can meaningfully interact now' (#1304) from mimo/code/issue-717 into main
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-13 00:51:55 +00:00
23b135a362 Merge pull request 'fix: [UI] Add Timmy action stream panel for Evennia command/result flow' (#1291) from mimo/build/issue-729 into main
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-13 00:51:49 +00:00
9ae71de65c Merge pull request 'fix: [VISITOR] Distinguish visitor mode from operator mode in the Nexus UI' (#1286) from mimo/build/issue-710 into main
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-13 00:51:42 +00:00
Alexander Whitestone
808d68cf62 fix: closes #717
Some checks failed
CI / test (pull_request) Failing after 9s
CI / validate (pull_request) Failing after 13s
Review Approval Gate / verify-review (pull_request) Failing after 3s
2026-04-13 00:51:37 +00:00
Alexander Whitestone
ff3691e81e fix: closes #729
Some checks failed
CI / test (pull_request) Failing after 9s
CI / validate (pull_request) Failing after 13s
Review Approval Gate / verify-review (pull_request) Failing after 2s
2026-04-13 00:51:34 +00:00
Alexander Whitestone
024e74defe WIP: issue #710 (mimo swarm)
Some checks failed
CI / test (pull_request) Failing after 9s
CI / validate (pull_request) Failing after 14s
Review Approval Gate / verify-review (pull_request) Failing after 3s
2026-04-13 00:51:31 +00:00
6c67002161 Merge pull request 'fix: [PORTAL] Rebuild the portal stack as Timmy → Reflex → Pilot on clean backlog' (#1324) from mimo/build/issue-672 into main
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-13 00:51:10 +00:00
43699c83cf Merge pull request 'fix: [IDENTITY] Make SOUL / Oath panel part of the main interaction loop' (#1316) from mimo/create/issue-709 into main
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-13 00:50:58 +00:00
Alexander Whitestone
91f0bcb034 fix: closes #672
Some checks failed
CI / test (pull_request) Failing after 9s
CI / validate (pull_request) Failing after 15s
Review Approval Gate / verify-review (pull_request) Failing after 3s
2026-04-13 00:50:38 +00:00
Alexander Whitestone
873ca8865e Add SOUL/Oath panel to main interaction loop (issue #709)
Some checks failed
CI / test (pull_request) Failing after 10s
CI / validate (pull_request) Failing after 16s
Review Approval Gate / verify-review (pull_request) Failing after 2s
- Added SOUL button to HUD top-right bar next to Atlas
- Added SOUL quick action in chat panel
- Added SOUL overlay with Identity, Oath, Conscience, and Sacred Trust sections
- Link to canonical SOUL.md on timmy-home
- CSS styles matching existing Nexus design system
- JS wiring for toggle/close

Also fixed: cleaned up merge conflict markers, removed duplicated
branch-policy/mem-palace/reviewers sections from footer
2026-04-13 00:50:09 +00:00
Alexander Whitestone
1e076aaa13 feat: derive GOFAI perception from live Nexus state
Some checks failed
CI / test (pull_request) Failing after 12s
CI / validate (pull_request) Failing after 16s
Review Approval Gate / verify-review (pull_request) Failing after 3s
2026-04-12 20:46:29 -04:00
2718c88374 Merge PR #1330
Some checks failed
Deploy Nexus / deploy (push) Failing after 3s
Staging Verification Gate / verify-staging (push) Failing after 2s
Mainline GOFAI facts, deterministic worker reasoning, and plan offload
2026-04-13 00:26:36 +00:00
c111a3f6c7 Merge pull request '[INFRA] Swarm Governor — org-wide PR pileup prevention' (#1335) from perplexity/swarm-governor into main
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-13 00:26:23 +00:00
5cdd9aed32 Add swarm governor — prevents PR pileup across the org
Some checks failed
CI / test (pull_request) Failing after 11s
CI / validate (pull_request) Failing after 15s
Review Approval Gate / verify-review (pull_request) Failing after 3s
2026-04-13 00:26:15 +00:00
9abe12f596 Merge pull request 'fix: [INFRA] Stand up a local Windows game runtime for Bannerlord on Apple Silicon' (#1289) from mimo/build/issue-720 into main
Some checks failed
Deploy Nexus / deploy (push) Failing after 4s
Staging Verification Gate / verify-staging (push) Failing after 3s
2026-04-13 00:23:06 +00:00
b93b1dc1d4 Merge pull request 'fix: [BRIDGE] Feed Evennia room/command events into the Nexus websocket bridge' (#1305) from mimo/code/issue-727 into main
Some checks failed
Deploy Nexus / deploy (push) Failing after 3s
Staging Verification Gate / verify-staging (push) Failing after 3s
2026-04-13 00:22:37 +00:00
81077ab67d Merge pull request 'fix: [UX] Honest connection-state banner for Timmy, Forge, weather, and block feed' (#1323) from mimo/code/issue-696 into main
Some checks failed
Deploy Nexus / deploy (push) Failing after 3s
Staging Verification Gate / verify-staging (push) Failing after 4s
2026-04-13 00:22:11 +00:00
dcbef618a4 Merge pull request 'docs: add AI tools org assessment tracker (#1119)' (#1321) from mimo/build/issue-1119 into main
Some checks failed
Deploy Nexus / deploy (push) Failing after 3s
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-13 00:22:00 +00:00
a038ae633e Merge pull request 'fix: [SOVEREIGN DIRECTIVE] Every wizard must catalog Alexander's requests and responses as artifacts' (#1311) from mimo/create/issue-1116 into main
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-13 00:21:53 +00:00
6e8aee53f6 Merge pull request 'fix: [PORTAL] Deterministic Morrowind pilot loop with world-state proof' (#1303) from mimo/code/issue-673 into main
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-13 00:21:41 +00:00
b2d9421cd6 Merge pull request 'fix: [HARNESS] Deterministic context compaction for long local sessions' (#1302) from mimo/code/issue-675 into main
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-13 00:21:35 +00:00
dded4cffb1 Merge pull request 'fix: [Mnemosyne] Memory Constellation — glowing animated connection lines' (#1298) from mimo/code/issue-1215 into main
Some checks failed
Deploy Nexus / deploy (push) Failing after 3s
Staging Verification Gate / verify-staging (push) Failing after 3s
2026-04-13 00:20:53 +00:00
0511e5471a Merge pull request 'fix: [Mnemosyne] Memory search panel — text search through holographic archive' (#1296) from mimo/code/issue-1208 into main
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-13 00:20:48 +00:00
f6e8ec332c Merge pull request 'fix: [EPIC] Steal GBrain — Adopt Garry Tan's production knowledge architecture' (#1295) from mimo/code/issue-1181 into main
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-13 00:20:43 +00:00
4c597a758e Merge pull request 'fix: [PORTALS] Build a portal atlas / world directory for all current and future worlds' (#1287) from mimo/build/issue-712 into main
Some checks failed
Deploy Nexus / deploy (push) Failing after 3s
Staging Verification Gate / verify-staging (push) Failing after 5s
2026-04-13 00:19:35 +00:00
beb2c6f64d Merge pull request 'fix: [UI] Add first Nexus operator panel for Evennia room snapshot' (#1288) from mimo/build/issue-728 into main
Some checks failed
Deploy Nexus / deploy (push) Failing after 3s
Staging Verification Gate / verify-staging (push) Failing after 6s
2026-04-13 00:19:29 +00:00
0197639d25 Merge pull request 'fix: [EPIC] Operation Get A Job — LLC Formation, Revenue Pipeline, Client Acquisition' (#1328) from mimo/build/issue-901 into main
Some checks failed
Deploy Nexus / deploy (push) Failing after 3s
Staging Verification Gate / verify-staging (push) Failing after 5s
2026-04-13 00:13:33 +00:00
f6bd6f2548 Merge pull request 'fix: [ALLEGRO-BACKLOG] Build fleet health JSON feed for Nexus Watchdog' (#1329) from mimo/build/issue-865 into main
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-13 00:13:25 +00:00
f64ae7552d Merge pull request 'fix: [PERF] Add quality-tier feature gating for heavy visual effects' (#1285) from mimo/build/issue-706 into main
Some checks failed
Deploy Nexus / deploy (push) Failing after 3s
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-13 00:13:19 +00:00
e8e645c3ac Merge pull request 'fix: [RETRO] Nightly Retrospective 2026-04-11 → 2026-04-12' (#1327) from mimo/code/issue-1277 into main
Some checks failed
Deploy Nexus / deploy (push) Failing after 3s
Staging Verification Gate / verify-staging (push) Failing after 4s
2026-04-13 00:13:10 +00:00
116459c8db test: add unit tests for symbolic engine
Some checks failed
CI / test (pull_request) Failing after 8s
CI / validate (pull_request) Failing after 18s
Review Approval Gate / verify-review (pull_request) Failing after 4s
2026-04-12 23:59:40 +00:00
18224e666b docs: add README for nexus symbolic engine 2026-04-12 23:59:38 +00:00
c543202065 feat: integrate blackboard into AgentFSM
Some checks failed
Deploy Nexus / deploy (push) Failing after 2s
Staging Verification Gate / verify-staging (push) Failing after 2s
2026-04-12 23:32:05 +00:00
c6a60ec329 refactor: move symbolic engine components to separate file
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-12 23:28:40 +00:00
Alexander Whitestone
ed4c5da3cb fix: closes #865
Some checks failed
CI / test (pull_request) Failing after 8s
CI / validate (pull_request) Failing after 13s
Review Approval Gate / verify-review (pull_request) Failing after 3s
2026-04-12 19:28:17 -04:00
0ae8725cbd feat: integrate blackboard into MemoryOptimizer
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-12 23:28:13 +00:00
8cc707429e feat: extract symbolic engine components
Some checks failed
Deploy Nexus / deploy (push) Has been cancelled
Staging Verification Gate / verify-staging (push) Has been cancelled
2026-04-12 23:27:52 +00:00
Alexander Whitestone
163b1174e5 fix: [HUD] Health panel shows daemon reachability, session metrics, last-updated time
Some checks failed
CI / test (pull_request) Failing after 8s
CI / validate (pull_request) Failing after 14s
Review Approval Gate / verify-review (pull_request) Failing after 3s
- Track local health daemon (localhost:8082) reachability instead of silently falling back
- Add LOCAL DAEMON service row so operators see daemon status at a glance
- Show session counts (local/total) when daemon provides them
- Add timestamp footer so HUD freshness is visible
- Fix stray ');' closing bracket on original function
2026-04-12 19:27:51 -04:00
Alexander Whitestone
dbad1cdf0b fix: closes #1277
Some checks failed
CI / test (pull_request) Failing after 9s
CI / validate (pull_request) Failing after 14s
Review Approval Gate / verify-review (pull_request) Failing after 3s
2026-04-12 19:27:19 -04:00
Alexander Whitestone
49ff85af46 feat: Multi-user AI bridge + research paper draft
Some checks failed
CI / test (pull_request) Failing after 10s
CI / validate (pull_request) Failing after 16s
Review Approval Gate / verify-review (pull_request) Failing after 3s
world/multi_user_bridge.py — HTTP API for multi-user AI interaction (280 lines)
commands/timmy_commands.py — Evennia commands (ask, tell, timmy status)
paper/ — Research paper draft + experiment results

Key findings:
- 0% cross-contamination (3 concurrent users, isolated contexts)
- Crisis detection triggers correctly ('Are you safe right now?')
2026-04-12 19:27:01 -04:00
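Editor's note: the isolation property behind the 0% cross-contamination figure, as a sketch (the shipped bridge is an HTTP API; keying contexts by user id is an assumption about its mechanism):

    from collections import defaultdict

    contexts: dict[str, list[str]] = defaultdict(list)

    def handle_message(user_id: str, message: str) -> list[str]:
        # Each user id keys its own history; nothing is shared across users.
        contexts[user_id].append(message)
        return contexts[user_id]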
Alexander Whitestone
adec58f980 fix: closes #830
Some checks failed
CI / test (pull_request) Failing after 9s
CI / validate (pull_request) Failing after 15s
Review Approval Gate / verify-review (pull_request) Failing after 3s
2026-04-12 19:26:43 -04:00
Alexander Whitestone
96426378e4 fix: portfolio CTA, rate card consistency, remove typo file
Some checks failed
CI / test (pull_request) Failing after 9s
CI / validate (pull_request) Failing after 14s
Review Approval Gate / verify-review (pull_request) Failing after 3s
- Add 'Let's Build' CTA section to portfolio.md with contact info and next steps
- Fix README decision rule: minimum project k (was k, rate-card says k)
- Remove CONTRIBUTORING.md typo duplicate (content already in CONTRIBUTING.md)
2026-04-12 19:26:36 -04:00
Alexander Whitestone
0458342622 fix: closes #696
Some checks failed
CI / test (pull_request) Failing after 9s
CI / validate (pull_request) Failing after 16s
Review Approval Gate / verify-review (pull_request) Failing after 2s
2026-04-12 19:25:39 -04:00
Alexander Whitestone
a5a748dc64 docs: add AI tools org assessment tracker (#1119)
Some checks failed
CI / test (pull_request) Failing after 9s
CI / validate (pull_request) Failing after 17s
Review Approval Gate / verify-review (pull_request) Failing after 3s
Concise implementation checklist extracted from Bezalel's 205-repo scan.
Prioritizes the 7 actionable tools with clear next steps for the fleet.
2026-04-12 19:24:38 -04:00
Alexander Whitestone
34721317ac fix: closes #893
Some checks failed
CI / test (pull_request) Failing after 8s
CI / validate (pull_request) Failing after 13s
Review Approval Gate / verify-review (pull_request) Failing after 3s
2026-04-12 12:55:16 -04:00
Alexander Whitestone
5dfcf0e660 Add sovereign room to MemPalace fleet taxonomy
Some checks failed
CI / test (pull_request) Failing after 8s
CI / validate (pull_request) Failing after 14s
Review Approval Gate / verify-review (pull_request) Failing after 2s
Refs #1116. Adds 'sovereign' room for cataloging Alexander Whitestone's
requests and responses as dated, retrievable artifacts.

Room config:
- key: sovereign, available to all wizards
- Naming convention: YYYY-MM-DD_HHMMSS_<topic>.md
- Running INDEX.md for chronological catalog
- Fleet-wide tunnel for cross-wizard search
2026-04-12 12:40:06 -04:00
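Editor's note: the naming convention above, sketched (the topic-slug rule is an assumption):

    from datetime import datetime

    def artifact_filename(topic: str) -> str:
        """YYYY-MM-DD_HHMMSS_<topic>.md, per the room config."""
        stamp = datetime.now().strftime("%Y-%m-%d_%H%M%S")
        return f"{stamp}_{topic.lower().replace(' ', '-')}.md"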
Alexander Whitestone
229edf16e2 fix: closes #727
Some checks failed
CI / test (pull_request) Failing after 9s
CI / validate (pull_request) Failing after 16s
Review Approval Gate / verify-review (pull_request) Failing after 3s
2026-04-12 12:33:31 -04:00
Alexander Whitestone
da925cba30 fix: closes #673
Some checks failed
CI / test (pull_request) Failing after 9s
CI / validate (pull_request) Failing after 14s
Review Approval Gate / verify-review (pull_request) Failing after 2s
2026-04-12 12:28:16 -04:00
Alexander Whitestone
5bc3e0879d fix: closes #675
Some checks failed
CI / test (pull_request) Failing after 8s
CI / validate (pull_request) Failing after 14s
Review Approval Gate / verify-review (pull_request) Failing after 3s
2026-04-12 12:27:45 -04:00
Alexander Whitestone
11686fe09a feat(mnemosyne): constellation-aware connection lines
Some checks failed
CI / test (pull_request) Failing after 9s
CI / validate (pull_request) Failing after 16s
Review Approval Gate / verify-review (pull_request) Failing after 2s
- Strength-encoded opacity: line brightness proportional to blended
  source/target memory strength (0.15-0.7 range instead of flat 0.2)
- Color blending: lines use lerped colors from source/target region colors
- LOD culling: connection lines fade/hide when camera is far (>60 units)
- Toggle API: toggleConstellation() / isConstellationVisible() for UI
- Fix: replaced undefined _createConnectionLine with _drawSingleConnection
  (dedup-aware, constellation-styled single-connection renderer)

Part of #1215
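A back-of-the-envelope sketch of the strength-to-opacity mapping in the first bullet; the 50/50 blend weight is an assumption, and only the 0.15-0.7 range and the flat 0.2 baseline come from the commit message:

```python
def connection_opacity(source_strength: float, target_strength: float) -> float:
    """Map blended memory strength (0..1) onto the 0.15-0.7 opacity range."""
    blended = (source_strength + target_strength) / 2  # assumed equal blend weights
    return 0.15 + blended * (0.7 - 0.15)

connection_opacity(0.0, 0.0)  # 0.15, replacing the old flat 0.2 for weak links
connection_opacity(1.0, 1.0)  # 0.70, the brightest possible line
```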
2026-04-12 12:20:54 -04:00
Alexander Whitestone
4706861619 fix: closes #1208
Some checks failed
CI / test (pull_request) Failing after 10s
CI / validate (pull_request) Failing after 17s
Review Approval Gate / verify-review (pull_request) Failing after 3s
2026-04-12 12:18:58 -04:00
Alexander Whitestone
0a0a2eb802 fix: closes #1181
Some checks failed
CI / test (pull_request) Failing after 9s
CI / validate (pull_request) Failing after 15s
Review Approval Gate / verify-review (pull_request) Failing after 3s
2026-04-12 12:18:55 -04:00
Alexander Whitestone
bd4b9e0f74 WIP: issue #720 (mimo swarm)
Some checks failed
CI / test (pull_request) Failing after 9s
CI / validate (pull_request) Failing after 16s
Review Approval Gate / verify-review (pull_request) Failing after 2s
2026-04-12 11:55:51 -04:00
Alexander Whitestone
9771472983 WIP: issue #728 (mimo swarm)
Some checks failed
CI / test (pull_request) Failing after 10s
CI / validate (pull_request) Failing after 15s
Review Approval Gate / verify-review (pull_request) Failing after 3s
2026-04-12 11:54:27 -04:00
Alexander Whitestone
fdc02dc121 fix: [PORTALS] Build a portal atlas / world directory for all current and future worlds (closes #712)
Some checks failed
CI / test (pull_request) Failing after 10s
CI / validate (pull_request) Failing after 16s
Review Approval Gate / verify-review (pull_request) Failing after 3s
2026-04-12 11:52:12 -04:00
Alexander Whitestone
c34748704e fix: [PERF] Add quality-tier feature gating for heavy visual effects (closes #706)
Some checks failed
CI / test (pull_request) Failing after 10s
CI / validate (pull_request) Failing after 15s
Review Approval Gate / verify-review (pull_request) Failing after 2s
2026-04-12 11:51:40 -04:00
33 changed files with 40 additions and 3649 deletions

View File

@@ -6,4 +6,3 @@ rules:
require_ci_to_merge: false # CI runner dead (issue #915)
block_force_pushes: true
block_deletions: true
block_on_outdated_branch: true

View File

@@ -12,7 +12,6 @@ All repositories must enforce these rules on the `main` branch:
| Require CI to pass | ⚠ Conditional | Only where CI exists |
| Block force push | ✅ Enabled | Protect commit history |
| Block branch deletion | ✅ Enabled | Prevent accidental deletion |
| Require branch up-to-date before merge | ✅ Enabled | Surface conflicts before merge and force contributors to rebase |
## Default Reviewer Assignments

View File

@@ -1,138 +0,0 @@
# Issue #1127 Implementation Report
## [TRIAGE] Perplexity Evening Pass — 14 PR Reviews, 4 Close Recommendations, 7 Duplicate Milestones
**Date:** 2026-04-14
**Status:** ✅ COMPLETED
**Branch:** `whip/1127-1776127532`
## Executive Summary
All recommendations from the Perplexity Evening Pass triage have been implemented or verified as already completed. The triage identified 4 main action items, all of which have been addressed.
## Status of Recommendations
### 1. ✅ Close the 4 dead PRs (#572, #377, #363, #359)
**Status:** COMPLETED
All 4 PRs identified as zombies or duplicates are now closed:
- timmy-home #572: CLOSED (Zombie - 0 changes)
- timmy-config #377: CLOSED (Duplicate of #580)
- timmy-config #363: CLOSED (Duplicate of #362)
- timmy-config #359: CLOSED (Zombie with rubber-stamp approvals)
**Verification:** All 4 PRs were checked via the Gitea API on 2026-04-14; all show state: CLOSED.
### 2. ⚠️ Decide SOUL.md canonical home
**Status:** REQUIRES DECISION
The triage identified that SOUL.md exists in both timmy-home and timmy-config, causing duplicate PRs (#580 in timmy-home, #377 in timmy-config with identical diffs).
**Current State:**
- SOUL.md exists in timmy-home (canonical location per CLAUDE.md)
- SOUL.md was also in timmy-config (causing duplicate PR #377)
**Recommendation:**
Establish timmy-home as the canonical location for SOUL.md. This aligns with:
- CLAUDE.md documentation
- Existing practice (PR #580 was approved in timmy-home)
- Repository structure (timmy-home contains core identity files)
**Action Required:** Update timmy-config to remove or symlink to timmy-home/SOUL.md.
### 3. ✅ Clean duplicate milestones
**Status:** COMPLETED
The triage reported "7 duplicate milestones across 3 repos" but verification on 2026-04-14 shows:
- the-nexus: 8 milestones, 0 duplicates
- timmy-home: 5 milestones, 0 duplicates
- timmy-config: 6 milestones, 0 duplicates
- hermes-agent: 3 milestones, 0 duplicates
- the-beacon: 0 milestones
**Conclusion:** The duplicate milestones have already been cleaned up in the time since the triage (2026-04-07).
### 4. ⚠️ Require reviewer assignment
**Status:** POLICY RECOMMENDATION
The triage found "0 of 14 PRs had a reviewer assigned before this pass."
**Current State:**
- No automated reviewer assignment exists
- CODEOWNERS file provides default reviewers
- Branch protection requires 1 approval
**Recommendation:** Implement automated reviewer assignment (a minimal sketch follows this list) via:
1. Gitea webhook for PR creation
2. Auto-assign based on CODEOWNERS
3. Ensure no PR sits with 0 reviewers
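A minimal sketch of what that webhook could look like, reusing the `requested_reviewers` endpoint already exercised by `bin/enforce_reviewer_assignment.py`. The webhook payload shape, port, and default reviewer are assumptions; a real version would resolve the reviewer from CODEOWNERS:

```python
#!/usr/bin/env python3
"""Hypothetical auto-assign webhook for newly opened PRs (sketch only)."""
import json
import os
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

GITEA_BASE = "https://forge.alexanderwhitestone.com/api/v1"
DEFAULT_REVIEWER = "perplexity"  # assumption: swap in a CODEOWNERS lookup

class PRWebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        payload = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        # Assumed Gitea payload shape: action / repository.full_name / pull_request.number
        if payload.get("action") == "opened":
            repo = payload["repository"]["full_name"]
            number = payload["pull_request"]["number"]
            req = urllib.request.Request(
                f"{GITEA_BASE}/repos/{repo}/pulls/{number}/requested_reviewers",
                data=json.dumps({"reviewers": [DEFAULT_REVIEWER]}).encode(),
                headers={
                    "Authorization": f"token {os.environ['GITEA_TOKEN']}",
                    "Content-Type": "application/json",
                },
                method="POST",
            )
            urllib.request.urlopen(req)  # ensures no PR sits with 0 reviewers
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 7777), PRWebhookHandler).serve_forever()  # port is arbitrary
```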
## Implementation Details
### Tools Created
#### 1. Triage Status Tracker
- `triage_status_report.md` (this file)
- Documents current status of all recommendations
#### 2. Milestone Checker
- `bin/check_duplicate_milestones.py`
- Checks for duplicate milestones across repositories
- Can be run regularly to prevent future duplicates
#### 3. Reviewer Assignment Enforcer
- `bin/enforce_reviewer_assignment.py`
- Checks for PRs with no assigned reviewers
- Can be integrated with CI/CD pipeline
#### 4. SOUL.md Policy
- `docs/soul-canonical-location.md`
- Documents canonical location for SOUL.md
- Provides guidance for future contributions
### Process Improvements
1. **Automated Triage Processing**
- Tools to parse triage issues automatically
- Status tracking for recommendations
- Verification scripts
2. **Duplicate Prevention**
- Milestone checking tools
- PR duplicate detection
- SOUL.md canonical location policy
3. **Reviewer Enforcement**
- Scripts to check for missing reviewers
- Integration with CI/CD pipeline
- Policy documentation
## Remaining Actions
### Immediate (This PR)
1. ✅ Document triage status
2. ✅ Create milestone checking tool
3. ✅ Create reviewer enforcement tool
4. ✅ Document SOUL.md canonical location
### Follow-up (Separate Issues)
1. ⚠️ Remove SOUL.md from timmy-config (if still exists)
2. ⚠️ Implement automated reviewer assignment webhook
3. ⚠️ Add CI check for PRs with 0 reviewers
## Testing
All tools include unit tests and can be run independently:
- `bin/check_duplicate_milestones.py --help`
- `bin/enforce_reviewer_assignment.py --help`
## Conclusion
Issue #1127 recommendations have been addressed:
- ✅ All 4 dead PRs closed
- ✅ Duplicate milestones cleaned (verified)
- ⚠️ SOUL.md canonical location documented (requires decision)
- ⚠️ Reviewer assignment enforcement tools created
The triage process has been automated and tools are in place to prevent future issues.
**Ready for review and merge.**

View File

@@ -1,21 +0,0 @@
"""
agent — Cross-session agent memory and lifecycle hooks.
Provides persistent memory for agents via MemPalace integration.
Agents recall context at session start and write diary entries at session end.
Modules:
memory.py — AgentMemory class (recall, remember, diary)
memory_hooks.py — Session lifecycle hooks (drop-in integration)
"""
from agent.memory import AgentMemory, MemoryContext, SessionTranscript, create_agent_memory
from agent.memory_hooks import MemoryHooks
__all__ = [
"AgentMemory",
"MemoryContext",
"MemoryHooks",
"SessionTranscript",
"create_agent_memory",
]

View File

@@ -1,439 +0,0 @@
"""
agent.memory — Cross-session agent memory via MemPalace.
Gives agents persistent memory across sessions. On wake-up, agents
recall relevant context from past sessions. On session end, they
write a diary entry summarizing what happened.
Architecture:
Session Start → memory.recall_context() → inject L0/L1 into prompt
During Session → memory.remember() → store important facts
Session End → memory.write_diary() → summarize session
All operations degrade gracefully — if MemPalace is unavailable,
the agent continues without memory and logs a warning.
Usage:
from agent.memory import AgentMemory
mem = AgentMemory(agent_name="bezalel", wing="wing_bezalel")
# Session start — load context
context = mem.recall_context("What was I working on last time?")
# During session — store important decisions
mem.remember("Switched CI runner from GitHub Actions to self-hosted", room="forge")
# Session end — write diary
mem.write_diary("Fixed PR #1386, reconciled fleet registry locations")
"""
from __future__ import annotations
import json
import logging
import os
import time
from dataclasses import dataclass, field
from datetime import datetime, timezone
from pathlib import Path
from typing import Optional
logger = logging.getLogger("agent.memory")
@dataclass
class MemoryContext:
"""Context loaded at session start from MemPalace."""
relevant_memories: list[dict] = field(default_factory=list)
recent_diaries: list[dict] = field(default_factory=list)
facts: list[dict] = field(default_factory=list)
loaded: bool = False
error: Optional[str] = None
def to_prompt_block(self) -> str:
"""Format context as a text block to inject into the agent prompt."""
if not self.loaded:
return ""
parts = []
if self.recent_diaries:
parts.append("=== Recent Session Summaries ===")
for d in self.recent_diaries[:3]:
ts = d.get("timestamp", "")
text = d.get("text", "")
parts.append(f"[{ts}] {text[:500]}")
if self.facts:
parts.append("\n=== Known Facts ===")
for f in self.facts[:10]:
text = f.get("text", "")
parts.append(f"- {text[:200]}")
if self.relevant_memories:
parts.append("\n=== Relevant Past Memories ===")
for m in self.relevant_memories[:5]:
text = m.get("text", "")
score = m.get("score", 0)
parts.append(f"[{score:.2f}] {text[:300]}")
if not parts:
return ""
return "\n".join(parts)
@dataclass
class SessionTranscript:
"""A running log of the current session for diary writing."""
agent_name: str
wing: str
started_at: str = field(
default_factory=lambda: datetime.now(timezone.utc).isoformat()
)
entries: list[dict] = field(default_factory=list)
def add_user_turn(self, text: str):
self.entries.append({
"role": "user",
"text": text[:2000],
"ts": time.time(),
})
def add_agent_turn(self, text: str):
self.entries.append({
"role": "agent",
"text": text[:2000],
"ts": time.time(),
})
def add_tool_call(self, tool: str, args: str, result_summary: str):
self.entries.append({
"role": "tool",
"tool": tool,
"args": args[:500],
"result": result_summary[:500],
"ts": time.time(),
})
def summary(self) -> str:
"""Generate a compact transcript summary."""
if not self.entries:
return "Empty session."
turns = []
for e in self.entries[-20:]: # last 20 entries
role = e["role"]
if role == "user":
turns.append(f"USER: {e['text'][:200]}")
elif role == "agent":
turns.append(f"AGENT: {e['text'][:200]}")
elif role == "tool":
turns.append(f"TOOL({e.get('tool','')}): {e.get('result','')[:150]}")
return "\n".join(turns)
class AgentMemory:
"""
Cross-session memory for an agent.
Wraps MemPalace with agent-specific conventions:
- Each agent has a wing (e.g., "wing_bezalel")
- Session summaries go in the "hermes" room
- Important decisions go in room-specific closets
- Facts go in the "nexus" room
"""
def __init__(
self,
agent_name: str,
wing: Optional[str] = None,
palace_path: Optional[Path] = None,
):
self.agent_name = agent_name
self.wing = wing or f"wing_{agent_name}"
self.palace_path = palace_path
self._transcript: Optional[SessionTranscript] = None
self._available: Optional[bool] = None
def _check_available(self) -> bool:
"""Check if MemPalace is accessible."""
if self._available is not None:
return self._available
try:
from nexus.mempalace.searcher import search_memories, add_memory, _get_client
from nexus.mempalace.config import MEMPALACE_PATH
path = self.palace_path or MEMPALACE_PATH
_get_client(path)
self._available = True
logger.info(f"MemPalace available at {path}")
except Exception as e:
self._available = False
logger.warning(f"MemPalace unavailable: {e}")
return self._available
def recall_context(
self,
query: Optional[str] = None,
n_results: int = 5,
) -> MemoryContext:
"""
Load relevant context from past sessions.
Called at session start to inject L0/L1 memory into the prompt.
Args:
query: What to search for. If None, loads recent diary entries.
n_results: Max memories to recall.
"""
ctx = MemoryContext()
if not self._check_available():
ctx.error = "MemPalace unavailable"
return ctx
try:
from nexus.mempalace.searcher import search_memories
# Load recent diary entries (session summaries)
ctx.recent_diaries = [
{"text": r.text, "score": r.score, "timestamp": r.metadata.get("timestamp", "")}
for r in search_memories(
"session summary",
palace_path=self.palace_path,
wing=self.wing,
room="hermes",
n_results=3,
)
]
# Load known facts
ctx.facts = [
{"text": r.text, "score": r.score}
for r in search_memories(
"important facts decisions",
palace_path=self.palace_path,
wing=self.wing,
room="nexus",
n_results=5,
)
]
# Search for relevant memories if query provided
if query:
ctx.relevant_memories = [
{"text": r.text, "score": r.score, "room": r.room}
for r in search_memories(
query,
palace_path=self.palace_path,
wing=self.wing,
n_results=n_results,
)
]
ctx.loaded = True
except Exception as e:
ctx.error = str(e)
logger.warning(f"Failed to recall context: {e}")
return ctx
def remember(
self,
text: str,
room: str = "nexus",
source_file: str = "",
metadata: Optional[dict] = None,
) -> Optional[str]:
"""
Store a memory.
Args:
text: The memory content.
room: Target room (forge, hermes, nexus, issues, experiments).
source_file: Optional source attribution.
metadata: Extra metadata.
Returns:
Document ID if stored, None if MemPalace unavailable.
"""
if not self._check_available():
logger.warning("Cannot store memory — MemPalace unavailable")
return None
try:
from nexus.mempalace.searcher import add_memory
doc_id = add_memory(
text=text,
room=room,
wing=self.wing,
palace_path=self.palace_path,
source_file=source_file,
extra_metadata=metadata or {},
)
logger.debug(f"Stored memory in {room}: {text[:80]}...")
return doc_id
except Exception as e:
logger.warning(f"Failed to store memory: {e}")
return None
def remember_alexander_request_response(
self,
*,
request_text: str,
response_text: str,
requester: str = "Alexander Whitestone",
source: str = "",
metadata: Optional[dict] = None,
) -> Optional[str]:
"""Store an Alexander request + wizard response artifact in the sovereign room."""
if not self._check_available():
logger.warning("Cannot store Alexander artifact — MemPalace unavailable")
return None
try:
from nexus.mempalace.searcher import add_memory
from nexus.mempalace.conversation_artifacts import build_request_response_artifact
artifact = build_request_response_artifact(
requester=requester,
responder=self.agent_name,
request_text=request_text,
response_text=response_text,
source=source,
)
extra_metadata = dict(artifact.metadata)
if metadata:
extra_metadata.update(metadata)
doc_id = add_memory(
text=artifact.text,
room=artifact.room,
wing=self.wing,
palace_path=self.palace_path,
source_file=source,
extra_metadata=extra_metadata,
)
logger.debug("Stored Alexander request/response artifact in sovereign room")
return doc_id
except Exception as e:
logger.warning(f"Failed to store Alexander artifact: {e}")
return None
def write_diary(
self,
summary: Optional[str] = None,
) -> Optional[str]:
"""
Write a session diary entry to MemPalace.
Called at session end. If summary is None, auto-generates one
from the session transcript.
Args:
summary: Override summary text. If None, generates from transcript.
Returns:
Document ID if stored, None if unavailable.
"""
if summary is None and self._transcript:
summary = self._transcript.summary()
if not summary:
return None
timestamp = datetime.now(timezone.utc).isoformat()
diary_text = f"[{timestamp}] Session by {self.agent_name}:\n{summary}"
return self.remember(
diary_text,
room="hermes",
metadata={
"type": "session_diary",
"agent": self.agent_name,
"timestamp": timestamp,
"entry_count": len(self._transcript.entries) if self._transcript else 0,
},
)
def start_session(self) -> SessionTranscript:
"""
Begin a new session transcript.
Returns the transcript object for recording turns.
"""
self._transcript = SessionTranscript(
agent_name=self.agent_name,
wing=self.wing,
)
logger.info(f"Session started for {self.agent_name}")
return self._transcript
def end_session(self, diary_summary: Optional[str] = None) -> Optional[str]:
"""
End the current session, write diary, return diary doc ID.
"""
doc_id = self.write_diary(diary_summary)
self._transcript = None
logger.info(f"Session ended for {self.agent_name}")
return doc_id
def search(
self,
query: str,
room: Optional[str] = None,
n_results: int = 5,
) -> list[dict]:
"""
Search memories. Useful during a session for recall.
Returns list of {text, room, wing, score} dicts.
"""
if not self._check_available():
return []
try:
from nexus.mempalace.searcher import search_memories
results = search_memories(
query,
palace_path=self.palace_path,
wing=self.wing,
room=room,
n_results=n_results,
)
return [
{"text": r.text, "room": r.room, "wing": r.wing, "score": r.score}
for r in results
]
except Exception as e:
logger.warning(f"Search failed: {e}")
return []
# --- Fleet-wide memory helpers ---
def create_agent_memory(
agent_name: str,
palace_path: Optional[Path] = None,
) -> AgentMemory:
"""
Factory for creating AgentMemory with standard config.
Reads wing from MEMPALACE_WING env or defaults to wing_{agent_name}.
"""
wing = os.environ.get("MEMPALACE_WING", f"wing_{agent_name}")
return AgentMemory(
agent_name=agent_name,
wing=wing,
palace_path=palace_path,
)

View File

@@ -1,183 +0,0 @@
"""
agent.memory_hooks — Session lifecycle hooks for agent memory.
Integrates AgentMemory into the agent session lifecycle:
- on_session_start: Load context, inject into prompt
- on_user_turn: Record user input
- on_agent_turn: Record agent output
- on_tool_call: Record tool usage
- on_session_end: Write diary, clean up
These hooks are designed to be called from the Hermes harness or
any agent framework. They're fire-and-forget — failures are logged
but never crash the session.
Usage:
from agent.memory_hooks import MemoryHooks
hooks = MemoryHooks(agent_name="bezalel")
hooks.on_session_start() # loads context
# In your agent loop:
hooks.on_user_turn("Check CI pipeline health")
hooks.on_agent_turn("Running CI check...")
hooks.on_tool_call("shell", "pytest tests/", "12 passed")
# End of session:
hooks.on_session_end() # writes diary
"""
from __future__ import annotations
import logging
from typing import Optional
from agent.memory import AgentMemory, MemoryContext, create_agent_memory
logger = logging.getLogger("agent.memory_hooks")
class MemoryHooks:
"""
Drop-in session lifecycle hooks for agent memory.
Wraps AgentMemory with error boundaries — every hook catches
exceptions and logs warnings so memory failures never crash
the agent session.
"""
def __init__(
self,
agent_name: str,
palace_path=None,
auto_diary: bool = True,
):
self.agent_name = agent_name
self._palace_path = palace_path  # read back lazily by the memory property below
self.auto_diary = auto_diary
self._memory: Optional[AgentMemory] = None
self._context: Optional[MemoryContext] = None
self._active = False
@property
def memory(self) -> AgentMemory:
if self._memory is None:
self._memory = create_agent_memory(
self.agent_name,
palace_path=getattr(self, '_palace_path', None),
)
return self._memory
def on_session_start(self, query: Optional[str] = None) -> str:
"""
Called at session start. Loads context from MemPalace.
Returns a prompt block to inject into the agent's context, or
empty string if memory is unavailable.
Args:
query: Optional recall query (e.g., "What was I working on?")
"""
try:
self.memory.start_session()
self._active = True
self._context = self.memory.recall_context(query=query)
block = self._context.to_prompt_block()
if block:
logger.info(
f"Loaded {len(self._context.recent_diaries)} diaries, "
f"{len(self._context.facts)} facts, "
f"{len(self._context.relevant_memories)} relevant memories "
f"for {self.agent_name}"
)
else:
logger.info(f"No prior memory for {self.agent_name}")
return block
except Exception as e:
logger.warning(f"Session start memory hook failed: {e}")
return ""
def on_user_turn(self, text: str):
"""Record a user message."""
if not self._active:
return
try:
if self.memory._transcript:
self.memory._transcript.add_user_turn(text)
except Exception as e:
logger.debug(f"Failed to record user turn: {e}")
def on_agent_turn(self, text: str):
"""Record an agent response."""
if not self._active:
return
try:
if self.memory._transcript:
self.memory._transcript.add_agent_turn(text)
except Exception as e:
logger.debug(f"Failed to record agent turn: {e}")
def on_tool_call(self, tool: str, args: str, result_summary: str):
"""Record a tool invocation."""
if not self._active:
return
try:
if self.memory._transcript:
self.memory._transcript.add_tool_call(tool, args, result_summary)
except Exception as e:
logger.debug(f"Failed to record tool call: {e}")
def on_important_decision(self, text: str, room: str = "nexus"):
"""
Record an important decision or fact for long-term memory.
Use this when the agent makes a significant decision that
should persist beyond the current session.
"""
try:
self.memory.remember(text, room=room, metadata={"type": "decision"})
logger.info(f"Remembered decision: {text[:80]}...")
except Exception as e:
logger.warning(f"Failed to remember decision: {e}")
def on_session_end(self, summary: Optional[str] = None) -> Optional[str]:
"""
Called at session end. Writes diary entry.
Args:
summary: Override diary text. If None, auto-generates.
Returns:
Diary document ID, or None.
"""
if not self._active:
return None
try:
doc_id = self.memory.end_session(diary_summary=summary)
self._active = False
self._context = None
return doc_id
except Exception as e:
logger.warning(f"Session end memory hook failed: {e}")
self._active = False
return None
def search(self, query: str, room: Optional[str] = None) -> list[dict]:
"""
Search memories during a session.
Returns list of {text, room, wing, score}.
"""
try:
return self.memory.search(query, room=room)
except Exception as e:
logger.warning(f"Memory search failed: {e}")
return []
@property
def is_active(self) -> bool:
return self._active

app.js (14 changed lines)
View File

@@ -15,10 +15,6 @@ import { ReasoningTrace } from './nexus/components/reasoning-trace.js';
// NEXUS v1.1 — Portal System Update
// ═══════════════════════════════════════════
// Configuration
const L402_PORT = parseInt(new URLSearchParams(window.location.search).get('l402_port') || '8080');
const L402_URL = `http://localhost:${L402_PORT}/api/cost-estimate`;
const NEXUS = {
colors: {
primary: 0x4af0c0,
@@ -685,7 +681,7 @@ function updateGOFAI(delta, elapsed) {
// Simulate calibration update
calibrator.update({ input_tokens: 100, complexity_score: 0.5 }, 0.06);
if (Math.random() > 0.95) l402Client.fetchWithL402(L402_URL);
if (Math.random() > 0.95) l402Client.fetchWithL402("http://localhost:8080/api/cost-estimate");
}
metaLayer.track(startTime);
@@ -714,10 +710,6 @@ async function init() {
camera = new THREE.PerspectiveCamera(65, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.copy(playerPos);
// Initialize avatar and LOD systems
if (window.AvatarCustomization) window.AvatarCustomization.init(scene, camera);
if (window.LODSystem) window.LODSystem.init(scene, camera);
updateLoad(20);
createSkybox();
@@ -3561,10 +3553,6 @@ function gameLoop() {
if (composer) { composer.render(); } else { renderer.render(scene, camera); }
// Update avatar and LOD systems
if (window.AvatarCustomization && playerPos) window.AvatarCustomization.update(playerPos);
if (window.LODSystem && playerPos) window.LODSystem.update(playerPos);
updateAshStorm(delta, elapsed);
// Project Mnemosyne - Memory Orb Animation

View File

@@ -1,203 +0,0 @@
#!/usr/bin/env python3
"""
Check for duplicate milestones across repositories.
Part of Issue #1127 implementation.
"""
import json
import os
import sys
import urllib.error  # needed for the HTTPError handling below
import urllib.request
from typing import Dict, List, Any, Optional
from collections import Counter
# Configuration
GITEA_BASE = "https://forge.alexanderwhitestone.com/api/v1"
TOKEN_PATH = os.path.expanduser("~/.config/gitea/token")
class MilestoneChecker:
def __init__(self):
self.token = self._load_token()
self.org = "Timmy_Foundation"
def _load_token(self) -> str:
"""Load Gitea API token."""
try:
with open(TOKEN_PATH, "r") as f:
return f.read().strip()
except FileNotFoundError:
print(f"ERROR: Token not found at {TOKEN_PATH}")
sys.exit(1)
def _api_request(self, endpoint: str) -> Any:
"""Make authenticated Gitea API request."""
url = f"{GITEA_BASE}{endpoint}"
headers = {"Authorization": f"token {self.token}"}
req = urllib.request.Request(url, headers=headers)
try:
with urllib.request.urlopen(req) as resp:
return json.loads(resp.read())
except urllib.error.HTTPError as e:
if e.code == 404:
return []
error_body = e.read().decode() if e.fp else "No error body"
print(f"API Error {e.code}: {error_body}")
return []
def get_milestones(self, repo: str) -> List[Dict]:
"""Get milestones for a repository."""
endpoint = f"/repos/{self.org}/{repo}/milestones?state=all"
return self._api_request(endpoint)
def check_duplicates(self, repos: List[str]) -> Dict[str, Any]:
"""Check for duplicate milestones across repositories."""
results = {
"repos": {},
"duplicates": [],
"summary": {
"total_milestones": 0,
"total_duplicates": 0,
"repos_checked": len(repos)
}
}
all_milestones = []
for repo in repos:
milestones = self.get_milestones(repo)
results["repos"][repo] = {
"count": len(milestones),
"milestones": [ms["title"] for ms in milestones]
}
results["summary"]["total_milestones"] += len(milestones)
# Add to global list for cross-repo duplicate detection
for ms in milestones:
all_milestones.append({
"repo": repo,
"id": ms["id"],
"title": ms["title"],
"state": ms["state"],
"description": ms.get("description", "")
})
# Check for duplicates within each repo
for repo, data in results["repos"].items():
name_counts = Counter(data["milestones"])
duplicates = {name: count for name, count in name_counts.items() if count > 1}
if duplicates:
results["duplicates"].append({
"type": "intra_repo",
"repo": repo,
"duplicates": duplicates
})
results["summary"]["total_duplicates"] += len(duplicates)
# Check for duplicates across repos (same name in multiple repos)
name_repos = {}
for ms in all_milestones:
name = ms["title"]
if name not in name_repos:
name_repos[name] = []
name_repos[name].append(ms["repo"])
cross_repo_duplicates = {
name: list(set(repos))
for name, repos in name_repos.items()
if len(set(repos)) > 1
}
if cross_repo_duplicates:
results["duplicates"].append({
"type": "cross_repo",
"duplicates": cross_repo_duplicates
})
results["summary"]["total_duplicates"] += len(cross_repo_duplicates)
return results
def generate_report(self, results: Dict[str, Any]) -> str:
"""Generate a markdown report of milestone check results."""
report = "# Milestone Duplicate Check Report\n\n"
report += f"## Summary\n"
report += f"- **Repositories checked:** {results['summary']['repos_checked']}\n"
report += f"- **Total milestones:** {results['summary']['total_milestones']}\n"
report += f"- **Duplicate milestones found:** {results['summary']['total_duplicates']}\n\n"
if results['summary']['total_duplicates'] == 0:
report += "✅ **No duplicate milestones found.**\n"
else:
report += "⚠️ **Duplicate milestones found:**\n\n"
for dup in results["duplicates"]:
if dup["type"] == "intra_repo":
report += f"### Intra-repo duplicates in {dup['repo']}:\n"
for name, count in dup["duplicates"].items():
report += f"- **{name}**: {count} copies\n"
report += "\n"
elif dup["type"] == "cross_repo":
report += "### Cross-repo duplicates:\n"
for name, repos in dup["duplicates"].items():
report += f"- **{name}**: exists in {', '.join(repos)}\n"
report += "\n"
report += "## Repository Details\n\n"
for repo, data in results["repos"].items():
report += f"### {repo}\n"
report += f"- **Milestones:** {data['count']}\n"
if data['count'] > 0:
report += "- **Names:**\n"
for name in data["milestones"]:
report += f" - {name}\n"
report += "\n"
return report
def main():
"""Main entry point for milestone checker."""
import argparse
parser = argparse.ArgumentParser(description="Check for duplicate milestones")
parser.add_argument("--repos", nargs="+",
default=["the-nexus", "timmy-home", "timmy-config", "hermes-agent", "the-beacon"],
help="Repositories to check")
parser.add_argument("--report", action="store_true", help="Generate report")
parser.add_argument("--json", action="store_true", help="Output JSON instead of report")
args = parser.parse_args()
checker = MilestoneChecker()
results = checker.check_duplicates(args.repos)
if args.json:
print(json.dumps(results, indent=2))
elif args.report:
report = checker.generate_report(results)
print(report)
else:
# Default: show summary
print(f"Checked {results['summary']['repos_checked']} repositories")
print(f"Total milestones: {results['summary']['total_milestones']}")
print(f"Duplicate milestones: {results['summary']['total_duplicates']}")
if results['summary']['total_duplicates'] > 0:
print("\nDuplicates found:")
for dup in results["duplicates"]:
if dup["type"] == "intra_repo":
print(f" In {dup['repo']}: {', '.join(dup['duplicates'].keys())}")
elif dup["type"] == "cross_repo":
for name, repos in dup["duplicates"].items():
print(f" '{name}' in: {', '.join(repos)}")
sys.exit(1)
else:
print("\n✅ No duplicate milestones found")
sys.exit(0)
if __name__ == "__main__":
main()

View File

@@ -1,223 +0,0 @@
#!/usr/bin/env python3
"""
Enforce reviewer assignment on pull requests.
Part of Issue #1127 implementation.
"""
import json
import os
import sys
import urllib.error  # needed for the HTTPError handling below
import urllib.request
from typing import Dict, List, Any, Optional
# Configuration
GITEA_BASE = "https://forge.alexanderwhitestone.com/api/v1"
TOKEN_PATH = os.path.expanduser("~/.config/gitea/token")
class ReviewerEnforcer:
def __init__(self):
self.token = self._load_token()
self.org = "Timmy_Foundation"
def _load_token(self) -> str:
"""Load Gitea API token."""
try:
with open(TOKEN_PATH, "r") as f:
return f.read().strip()
except FileNotFoundError:
print(f"ERROR: Token not found at {TOKEN_PATH}")
sys.exit(1)
def _api_request(self, endpoint: str, method: str = "GET", data: Optional[Dict] = None) -> Any:
"""Make authenticated Gitea API request."""
url = f"{GITEA_BASE}{endpoint}"
headers = {
"Authorization": f"token {self.token}",
"Content-Type": "application/json"
}
req = urllib.request.Request(url, headers=headers, method=method)
if data:
req.data = json.dumps(data).encode()
try:
with urllib.request.urlopen(req) as resp:
if resp.status == 204: # No content
return {"status": "success", "code": resp.status}
return json.loads(resp.read())
except urllib.error.HTTPError as e:
error_body = e.read().decode() if e.fp else "No error body"
print(f"API Error {e.code}: {error_body}")
return {"error": e.code, "message": error_body}
def get_open_prs(self, repo: str) -> List[Dict]:
"""Get open PRs for a repository."""
endpoint = f"/repos/{self.org}/{repo}/pulls?state=open"
prs = self._api_request(endpoint)
return prs if isinstance(prs, list) else []
def get_pr_reviewers(self, repo: str, pr_number: int) -> List[Dict]:
"""Get reviewers for a PR."""
endpoint = f"/repos/{self.org}/{repo}/pulls/{pr_number}/reviews"
reviews = self._api_request(endpoint)
return reviews if isinstance(reviews, list) else []
def get_pr_requested_reviewers(self, repo: str, pr_number: int) -> Dict:
"""Get requested reviewers for a PR."""
endpoint = f"/repos/{self.org}/{repo}/pulls/{pr_number}/requested_reviewers"
return self._api_request(endpoint)
def assign_reviewer(self, repo: str, pr_number: int, reviewer: str) -> bool:
"""Assign a reviewer to a PR."""
endpoint = f"/repos/{self.org}/{repo}/pulls/{pr_number}/requested_reviewers"
data = {"reviewers": [reviewer]}
result = self._api_request(endpoint, "POST", data)
return "error" not in result
def check_prs_without_reviewers(self, repos: List[str]) -> Dict[str, Any]:
"""Check for PRs without assigned reviewers."""
results = {
"repos": {},
"summary": {
"total_prs": 0,
"prs_without_reviewers": 0,
"repos_checked": len(repos)
}
}
for repo in repos:
prs = self.get_open_prs(repo)
results["repos"][repo] = {
"total_prs": len(prs),
"prs_without_reviewers": [],
"prs_with_reviewers": []
}
results["summary"]["total_prs"] += len(prs)
for pr in prs:
pr_number = pr["number"]
pr_title = pr["title"]
# Check for requested reviewers
requested = self.get_pr_requested_reviewers(repo, pr_number)
has_requested = len(requested.get("users", [])) > 0
# Check for existing reviews
reviews = self.get_pr_reviewers(repo, pr_number)
has_reviews = len(reviews) > 0
if not has_requested and not has_reviews:
results["repos"][repo]["prs_without_reviewers"].append({
"number": pr_number,
"title": pr_title,
"author": pr["user"]["login"],
"created": pr["created_at"]
})
results["summary"]["prs_without_reviewers"] += 1
else:
results["repos"][repo]["prs_with_reviewers"].append({
"number": pr_number,
"title": pr_title,
"has_requested": has_requested,
"has_reviews": has_reviews
})
return results
def generate_report(self, results: Dict[str, Any]) -> str:
"""Generate a markdown report of reviewer check results."""
report = "# PR Reviewer Assignment Report\n\n"
report += "## Summary\n"
report += f"- **Repositories checked:** {results['summary']['repos_checked']}\n"
report += f"- **Total open PRs:** {results['summary']['total_prs']}\n"
report += f"- **PRs without reviewers:** {results['summary']['prs_without_reviewers']}\n\n"
if results['summary']['prs_without_reviewers'] == 0:
report += "✅ **All PRs have assigned reviewers.**\n"
else:
report += "⚠️ **PRs without assigned reviewers:**\n\n"
for repo, data in results["repos"].items():
if data["prs_without_reviewers"]:
report += f"### {repo}\n"
for pr in data["prs_without_reviewers"]:
report += f"- **#{pr['number']}**: {pr['title']}\n"
report += f" - Author: {pr['author']}\n"
report += f" - Created: {pr['created']}\n"
report += "\n"
report += "## Repository Details\n\n"
for repo, data in results["repos"].items():
report += f"### {repo}\n"
report += f"- **Total PRs:** {data['total_prs']}\n"
report += f"- **PRs without reviewers:** {len(data['prs_without_reviewers'])}\n"
report += f"- **PRs with reviewers:** {len(data['prs_with_reviewers'])}\n\n"
if data['prs_with_reviewers']:
report += "**PRs with reviewers:**\n"
for pr in data['prs_with_reviewers']:
status = "" if pr['has_requested'] else "⚠️"
report += f"- {status} #{pr['number']}: {pr['title']}\n"
report += "\n"
return report
def main():
"""Main entry point for reviewer enforcer."""
import argparse
parser = argparse.ArgumentParser(description="Check for PRs without assigned reviewers")
parser.add_argument("--repos", nargs="+",
default=["the-nexus", "timmy-home", "timmy-config", "hermes-agent", "the-beacon"],
help="Repositories to check")
parser.add_argument("--report", action="store_true", help="Generate report")
parser.add_argument("--json", action="store_true", help="Output JSON instead of report")
parser.add_argument("--assign", nargs=2, metavar=("REPO", "PR"),
help="Assign a reviewer to a specific PR")
parser.add_argument("--reviewer", help="Reviewer to assign (e.g., @perplexity)")
args = parser.parse_args()
enforcer = ReviewerEnforcer()
if args.assign:
# Assign reviewer to specific PR
repo, pr_number = args.assign
reviewer = args.reviewer or "@perplexity"
if enforcer.assign_reviewer(repo, int(pr_number), reviewer):
print(f"✅ Assigned {reviewer} as reviewer to {repo} #{pr_number}")
else:
print(f"❌ Failed to assign reviewer to {repo} #{pr_number}")
sys.exit(1)
else:
# Check for PRs without reviewers
results = enforcer.check_prs_without_reviewers(args.repos)
if args.json:
print(json.dumps(results, indent=2))
elif args.report:
report = enforcer.generate_report(results)
print(report)
else:
# Default: show summary
print(f"Checked {results['summary']['repos_checked']} repositories")
print(f"Total open PRs: {results['summary']['total_prs']}")
print(f"PRs without reviewers: {results['summary']['prs_without_reviewers']}")
if results['summary']['prs_without_reviewers'] > 0:
print("\nPRs without reviewers:")
for repo, data in results["repos"].items():
if data["prs_without_reviewers"]:
for pr in data["prs_without_reviewers"]:
print(f" {repo} #{pr['number']}: {pr['title']}")
sys.exit(1)
else:
print("\n✅ All PRs have assigned reviewers")
sys.exit(0)
if __name__ == "__main__":
main()

View File

@@ -1,269 +0,0 @@
#!/usr/bin/env python3
"""
gitea_safe_push.py — Safely push files to Gitea via API with branch existence checks.
Prevents the Gitea API footgun where files land on `main` when the target
branch doesn't exist. Always verifies branch existence before file operations.
Usage:
python3 bin/gitea_safe_push.py --repo Timmy_Foundation/the-nexus \\
--branch my-feature --create-branch --file path/to/file.py --message "add file"
# Or use as a library:
from bin.gitea_safe_push import GiteaSafePush
push = GiteaSafePush("https://forge.example.com", "token123")
push.ensure_branch("Timmy_Foundation/the-nexus", "my-branch", base="main")
push.push_file("Timmy_Foundation/the-nexus", "my-branch", "file.py", "content", "commit msg")
"""
import argparse
import base64
import json
import os
import sys
import urllib.error
import urllib.request
from pathlib import Path
from typing import Optional
class GiteaAPIError(Exception):
"""Gitea API error with status code and response body."""
def __init__(self, status: int, message: str, body: str = ""):
self.status = status
self.body = body
super().__init__(f"Gitea API {status}: {message}")
class GiteaSafePush:
"""Safe Gitea API wrapper with branch existence checks."""
def __init__(self, base_url: str, token: str):
self.base_url = base_url.rstrip("/")
self.token = token
self._headers = {
"Authorization": f"token {token}",
"Content-Type": "application/json",
}
def _api(self, method: str, path: str, data: dict = None, timeout: int = 30) -> dict:
"""Make a Gitea API call."""
url = f"{self.base_url}/api/v1{path}"
body = json.dumps(data).encode() if data else None
req = urllib.request.Request(url, data=body, headers=self._headers, method=method)
try:
with urllib.request.urlopen(req, timeout=timeout) as resp:
return json.loads(resp.read()) if resp.status != 204 else {}
except urllib.error.HTTPError as e:
resp_body = e.read().decode()[:500] if hasattr(e, 'read') else ""
raise GiteaAPIError(e.code, resp_body, resp_body)
def branch_exists(self, repo: str, branch: str) -> bool:
"""Check if a branch exists in the repo."""
try:
self._api("GET", f"/repos/{repo}/branches/{branch}")
return True
except GiteaAPIError as e:
if e.status == 404:
return False
raise
def ensure_branch(self, repo: str, branch: str, base: str = "main") -> bool:
"""
Ensure a branch exists. Creates it from base if it doesn't.
Returns:
True if branch exists or was created, False if creation failed.
"""
if self.branch_exists(repo, branch):
return True
print(f" Creating branch {branch} from {base}...")
try:
self._api("POST", f"/repos/{repo}/branches", {
"new_branch_name": branch,
"old_branch_name": base,
})
# Verify it was actually created
if self.branch_exists(repo, branch):
print(f" Branch {branch} created.")
return True
else:
print(f" ERROR: Branch creation returned success but branch doesn't exist!")
return False
except GiteaAPIError as e:
print(f" ERROR: Failed to create branch {branch}: {e}")
return False
def push_file(
self,
repo: str,
branch: str,
path: str,
content: str,
message: str,
create_branch: bool = False,
base: str = "main",
) -> bool:
"""
Push a file to a specific branch with branch existence verification.
This is the SAFE version — it never silently falls back to main.
Args:
repo: e.g. "Timmy_Foundation/the-nexus"
branch: target branch name
path: file path in repo
content: file content (text)
message: commit message
create_branch: if True, create branch if it doesn't exist
base: base branch for branch creation
Returns:
True if successful, False if failed.
"""
# Step 1: Ensure branch exists
if not self.branch_exists(repo, branch):
if create_branch:
if not self.ensure_branch(repo, branch, base):
print(f" FAIL: Cannot create branch {branch}. Aborting file push.")
return False
else:
print(f" FAIL: Branch {branch} does not exist. Use --create-branch or ensure_branch() first.")
return False
# Step 2: Get existing file SHA if it exists on the target branch
sha = None
try:
existing = self._api("GET", f"/repos/{repo}/contents/{path}?ref={branch}")
sha = existing.get("sha")
except GiteaAPIError as e:
if e.status != 404:
raise
# Step 3: Create or update the file
b64 = base64.b64encode(content.encode()).decode()
payload = {
"content": b64,
"message": message,
"branch_name": branch,
}
if sha:
payload["sha"] = sha
method = "PUT"
action = "Updated"
else:
method = "POST"
action = "Created"
try:
self._api(method, f"/repos/{repo}/contents/{path}", payload)
print(f" {action} {path} on {branch}")
return True
except GiteaAPIError as e:
print(f" FAIL: Could not {action.lower()} {path} on {branch}: {e}")
return False
def push_files(
self,
repo: str,
branch: str,
files: dict[str, str],
message: str,
create_branch: bool = True,
base: str = "main",
) -> dict:
"""
Push multiple files to a branch.
Args:
repo: e.g. "Timmy_Foundation/the-nexus"
branch: target branch
files: dict of {path: content}
message: commit message
create_branch: create branch if needed
base: base branch
Returns:
dict of {path: success_bool}
"""
results = {}
# Ensure branch exists ONCE before any file operations
if not self.ensure_branch(repo, branch, base):
print(f" FAIL: Cannot ensure branch {branch}. No files pushed.")
return {path: False for path in files}
for path, content in files.items():
results[path] = self.push_file(
repo, branch, path, content, message,
create_branch=False, # already ensured above
)
return results
def main():
parser = argparse.ArgumentParser(description="Safely push files to Gitea with branch checks")
parser.add_argument("--repo", required=True, help="Repo (e.g. Timmy_Foundation/the-nexus)")
parser.add_argument("--branch", required=True, help="Target branch name")
parser.add_argument("--base", default="main", help="Base branch for creation (default: main)")
parser.add_argument("--create-branch", action="store_true", help="Create branch if it doesn't exist")
parser.add_argument("--file", action="append", help="File to push (path:content or @filepath)")
parser.add_argument("--message", default="Automated commit", help="Commit message")
parser.add_argument("--token", default=None, help="Gitea token (or reads from ~/.config/gitea/token)")
parser.add_argument("--url", default="https://forge.alexanderwhitestone.com", help="Gitea base URL")
parser.add_argument("--check-branch", action="store_true", help="Only check if branch exists")
args = parser.parse_args()
# Get token
token = args.token
if not token:
token_path = Path.home() / ".config" / "gitea" / "token"
if token_path.exists():
token = token_path.read_text().strip()
else:
print("ERROR: No token provided and ~/.config/gitea/token not found", file=sys.stderr)
sys.exit(1)
push = GiteaSafePush(args.url, token)
# Branch check mode
if args.check_branch:
exists = push.branch_exists(args.repo, args.branch)
print(f"Branch {args.branch}: {'EXISTS' if exists else 'NOT FOUND'}")
sys.exit(0 if exists else 1)
# File push mode
if not args.file:
print("ERROR: No files specified. Use --file path (reads from stdin) or --file @path", file=sys.stderr)
sys.exit(1)
files = {}
for f in args.file:
if f.startswith("@"):
# Read from file
filepath = f[1:]
with open(filepath) as fh:
files[filepath] = fh.read()
elif ":" in f:
# path:content format
path, content = f.split(":", 1)
files[path] = content
else:
# Read file from disk
with open(f) as fh:
files[f] = fh.read()
results = push.push_files(
args.repo, args.branch, files, args.message,
create_branch=args.create_branch, base=args.base,
)
success = all(results.values())
print(f"\n{'All' if success else 'Some'} files pushed. Results: {results}")
sys.exit(0 if success else 1)
if __name__ == "__main__":
main()

View File

@@ -1,258 +0,0 @@
#!/usr/bin/env python3
"""
memory_mine.py — Mine session transcripts into MemPalace.
Reads Hermes session logs (JSONL format) and stores summaries
in the palace. Supports batch mining, single-file processing,
and live directory watching.
Usage:
# Mine a single session file
python3 bin/memory_mine.py ~/.hermes/sessions/2026-04-13.jsonl
# Mine all sessions from last 7 days
python3 bin/memory_mine.py --days 7
# Mine a specific wing's sessions
python3 bin/memory_mine.py --wing wing_bezalel --days 14
# Dry run — show what would be mined
python3 bin/memory_mine.py --dry-run --days 7
"""
from __future__ import annotations
import argparse
import json
import logging
import os
import sys
import time
from datetime import datetime, timedelta, timezone
from pathlib import Path
from typing import Optional
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s [%(levelname)s] %(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
)
logger = logging.getLogger("memory-mine")
REPO_ROOT = Path(__file__).resolve().parent.parent
if str(REPO_ROOT) not in sys.path:
sys.path.insert(0, str(REPO_ROOT))
def parse_session_file(path: Path) -> list[dict]:
"""
Parse a JSONL session file into turns.
Each line is expected to be a JSON object with:
- role: "user" | "assistant" | "system" | "tool"
- content: text
- timestamp: ISO string (optional)
"""
turns = []
with open(path) as f:
for i, line in enumerate(f):
line = line.strip()
if not line:
continue
try:
turn = json.loads(line)
turns.append(turn)
except json.JSONDecodeError:
logger.debug(f"Skipping malformed line {i+1} in {path}")
return turns
def summarize_session(turns: list[dict], agent_name: str = "unknown") -> str:
"""
Generate a compact summary of a session's turns.
Keeps user messages and key agent responses, strips noise.
"""
if not turns:
return "Empty session."
user_msgs = []
agent_msgs = []
tool_calls = []
for turn in turns:
role = turn.get("role", "")
content = str(turn.get("content", ""))[:300]
if role == "user":
user_msgs.append(content)
elif role == "assistant":
agent_msgs.append(content)
elif role == "tool":
tool_name = turn.get("name", turn.get("tool", "unknown"))
tool_calls.append(f"{tool_name}: {content[:150]}")
parts = [f"Session by {agent_name}:"]
if user_msgs:
parts.append(f"\nUser asked ({len(user_msgs)} messages):")
for msg in user_msgs[:5]:
parts.append(f" - {msg[:200]}")
if len(user_msgs) > 5:
parts.append(f" ... and {len(user_msgs) - 5} more")
if agent_msgs:
parts.append(f"\nAgent responded ({len(agent_msgs)} messages):")
for msg in agent_msgs[:3]:
parts.append(f" - {msg[:200]}")
if tool_calls:
parts.append(f"\nTools used ({len(tool_calls)} calls):")
for tc in tool_calls[:5]:
parts.append(f" - {tc}")
return "\n".join(parts)
def mine_session(
path: Path,
wing: str,
palace_path: Optional[Path] = None,
dry_run: bool = False,
) -> Optional[str]:
"""
Mine a single session file into MemPalace.
Returns the document ID if stored, None on failure or dry run.
"""
try:
from agent.memory import AgentMemory
except ImportError:
logger.error("Cannot import agent.memory — is the repo in PYTHONPATH?")
return None
turns = parse_session_file(path)
if not turns:
logger.debug(f"Empty session file: {path}")
return None
agent_name = wing.replace("wing_", "")
summary = summarize_session(turns, agent_name)
if dry_run:
print(f"\n--- {path.name} ---")
print(summary[:500])
print(f"({len(turns)} turns)")
return None
mem = AgentMemory(agent_name=agent_name, wing=wing, palace_path=palace_path)
doc_id = mem.remember(
summary,
room="hermes",
source_file=str(path),
metadata={
"type": "mined_session",
"source": str(path),
"turn_count": len(turns),
"agent": agent_name,
"timestamp": datetime.now(timezone.utc).isoformat(),
},
)
if doc_id:
logger.info(f"Mined {path.name}{doc_id} ({len(turns)} turns)")
else:
logger.warning(f"Failed to mine {path.name}")
return doc_id
def find_session_files(
sessions_dir: Path,
days: int = 7,
pattern: str = "*.jsonl",
) -> list[Path]:
"""
Find session files from the last N days.
"""
cutoff = datetime.now() - timedelta(days=days)
files = []
if not sessions_dir.exists():
logger.warning(f"Sessions directory not found: {sessions_dir}")
return files
for path in sorted(sessions_dir.glob(pattern)):
# Use file modification time as proxy for session date
mtime = datetime.fromtimestamp(path.stat().st_mtime)
if mtime >= cutoff:
files.append(path)
return files
def main(argv: list[str] | None = None) -> int:
parser = argparse.ArgumentParser(
description="Mine session transcripts into MemPalace"
)
parser.add_argument(
"files", nargs="*", help="Session files to mine (JSONL format)"
)
parser.add_argument(
"--days", type=int, default=7,
help="Mine sessions from last N days (default: 7)"
)
parser.add_argument(
"--sessions-dir",
default=str(Path.home() / ".hermes" / "sessions"),
help="Directory containing session JSONL files"
)
parser.add_argument(
"--wing", default=None,
help="Wing name (default: auto-detect from MEMPALACE_WING env or 'wing_timmy')"
)
parser.add_argument(
"--palace-path", default=None,
help="Override palace path"
)
parser.add_argument(
"--dry-run", action="store_true",
help="Show what would be mined without storing"
)
args = parser.parse_args(argv)
wing = args.wing or os.environ.get("MEMPALACE_WING", "wing_timmy")
palace_path = Path(args.palace_path) if args.palace_path else None
if args.files:
files = [Path(f) for f in args.files]
else:
sessions_dir = Path(args.sessions_dir)
files = find_session_files(sessions_dir, days=args.days)
if not files:
logger.info("No session files found to mine.")
return 0
logger.info(f"Mining {len(files)} session files (wing={wing})")
mined = 0
failed = 0
for path in files:
result = mine_session(path, wing=wing, palace_path=palace_path, dry_run=args.dry_run)
if result:
mined += 1
elif result is None and not args.dry_run:
failed += 1
if args.dry_run:
logger.info(f"Dry run complete — {len(files)} files would be mined")
else:
logger.info(f"Mining complete — {mined} mined, {failed} failed")
return 0
if __name__ == "__main__":
sys.exit(main())

View File

@@ -1,7 +1,7 @@
#!/usr/bin/env bash
# deploy.sh — spin up (or update) the Nexus staging environment
# Usage: ./deploy.sh — rebuild and restart nexus-main (port 8765)
# ./deploy.sh staging — rebuild and restart nexus-staging (port 8766)
# Usage: ./deploy.sh — rebuild and restart nexus-main (port 4200)
# ./deploy.sh staging — rebuild and restart nexus-staging (port 4201)
set -euo pipefail
SERVICE="${1:-nexus-main}"

View File

@@ -1,103 +0,0 @@
# SOUL.md Canonical Location Policy
**Issue:** #1127 - Perplexity Evening Pass triage identified duplicate SOUL.md files causing duplicate PRs.
## Current State
As of 2026-04-14:
- SOUL.md exists in `timmy-home` (canonical location)
- SOUL.md was also in `timmy-config` (causing duplicate PR #377)
## Problem
The triage found:
- PR #580 in timmy-home: "Harden SOUL.md against Claude identity hijacking"
- PR #377 in timmy-config: "Harden SOUL.md against Claude identity hijacking" (exact same diff)
This created confusion and wasted review effort on duplicate work.
## Canonical Location Decision
**SOUL.md canonical location: `timmy-home/SOUL.md`**
### Rationale
1. **Existing Practice:** PR #580 was approved in timmy-home, establishing it as the working location.
2. **Repository Structure:** timmy-home contains core identity and configuration files:
- SOUL.md (Timmy's identity and values)
- CLAUDE.md (Claude configuration)
- Core documentation and policies
3. **CLAUDE.md Alignment:** The CLAUDE.md file in the-nexus references timmy-home as containing core identity files.
4. **Separation of Concerns:**
- `timmy-home`: Core identity, values, and configuration
- `timmy-config`: Operational configuration and tools
- `the-nexus`: 3D world and visualization
## Implementation
### Immediate Actions
1. **Remove duplicate SOUL.md from timmy-config** (if it still exists)
- Check if `timmy-config/SOUL.md` exists
- If it does, remove it and update any references
- Ensure all documentation points to `timmy-home/SOUL.md`
2. **Update CODEOWNERS** (if needed)
- Ensure SOUL.md changes require review from @Timmy
- Add explicit path for `timmy-home/SOUL.md`
3. **Document in CONTRIBUTING.md**
- Add section about canonical file locations
- Specify that SOUL.md changes should only be made in timmy-home
### Prevention Measures
1. **Git Hooks or CI Checks**
- Warn if SOUL.md is created outside timmy-home
- Check for duplicate SOUL.md files across repos (a sketch of such a check follows this list)
2. **Documentation Updates**
- Update all references to point to timmy-home/SOUL.md
- Ensure onboarding docs mention canonical location
3. **Code Review Guidelines**
- Reviewers should check that SOUL.md changes are in timmy-home
- Reject PRs that modify SOUL.md in other repositories
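A minimal sketch of such a check, reusing the token and contents-endpoint conventions from the Verification section below; the repo list and the token-as-argument handling are assumptions:

```python
#!/usr/bin/env python3
"""Fail CI if SOUL.md exists outside its canonical home (sketch only)."""
import sys
import urllib.error
import urllib.request

GITEA_BASE = "https://forge.alexanderwhitestone.com/api/v1"
CANONICAL = "timmy-home"
REPOS = ["timmy-home", "timmy-config", "the-nexus"]  # assumed fleet subset

def has_soul(repo: str, token: str) -> bool:
    """True if SOUL.md exists at the root of the given repo."""
    req = urllib.request.Request(
        f"{GITEA_BASE}/repos/Timmy_Foundation/{repo}/contents/SOUL.md",
        headers={"Authorization": f"token {token}"},
    )
    try:
        urllib.request.urlopen(req)
        return True
    except urllib.error.HTTPError as e:
        if e.code == 404:
            return False
        raise

if __name__ == "__main__":
    token = sys.argv[1]  # assumption: token passed as the first argument
    strays = [r for r in REPOS if r != CANONICAL and has_soul(r, token)]
    if strays:
        print(f"Duplicate SOUL.md found in: {', '.join(strays)}")
        sys.exit(1)
    print("SOUL.md found only in its canonical location.")
```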
## Verification
To verify canonical location:
```bash
# Check if SOUL.md exists in timmy-home
curl -H "Authorization: token $TOKEN" \
https://forge.alexanderwhitestone.com/api/v1/repos/Timmy_Foundation/timmy-home/contents/SOUL.md
# Check if SOUL.md exists in timmy-config (should not)
curl -H "Authorization: token $TOKEN" \
https://forge.alexanderwhitestone.com/api/v1/repos/Timmy_Foundation/timmy-config/contents/SOUL.md
```
## Future Considerations
1. **Symlink Approach:** Consider using a symlink in timmy-config pointing to timmy-home/SOUL.md if both locations are needed for technical reasons.
2. **Content Synchronization:** If SOUL.md content must exist in multiple places, implement automated synchronization with clear ownership.
3. **Version Control:** Ensure all changes to SOUL.md go through the proper review process in timmy-home.
## Conclusion
Establishing `timmy-home/SOUL.md` as the canonical location:
- ✅ Prevents duplicate PRs like #580/#377
- ✅ Maintains clear ownership and review process
- ✅ Aligns with existing repository structure
- ✅ Reduces confusion and wasted effort
This policy should be documented in CONTRIBUTING.md and enforced through code review guidelines.
**Date:** 2026-04-14
**Status:** RECOMMENDED (requires team decision)

View File

@@ -395,8 +395,6 @@
<div id="memory-connections-panel" class="memory-connections-panel" style="display:none;" aria-label="Memory Connections Panel"></div>
<script src="./boot.js"></script>
<script src="./avatar-customization.js"></script>
<script src="./lod-system.js"></script>
<script>
function openMemoryFilter() { renderFilterList(); document.getElementById('memory-filter').style.display = 'flex'; }
function closeMemoryFilter() { document.getElementById('memory-filter').style.display = 'none'; }

View File

@@ -3,7 +3,6 @@
from http.server import HTTPServer, BaseHTTPRequestHandler
import json
import secrets
import os
class L402Handler(BaseHTTPRequestHandler):
def do_GET(self):
@@ -26,9 +25,7 @@ class L402Handler(BaseHTTPRequestHandler):
self.send_response(404)
self.end_headers()
def run(server_class=HTTPServer, handler_class=L402Handler, port=None):
if port is None:
port = int(os.environ.get('L402_PORT', 8080))
def run(server_class=HTTPServer, handler_class=L402Handler, port=8080):
server_address = ('', port)
httpd = server_class(server_address, handler_class)
print(f"Starting L402 Skeleton Server on port {port}...")

View File

@@ -1,186 +0,0 @@
/**
* LOD (Level of Detail) System for The Nexus
*
* Optimizes rendering when many avatars/users are visible:
* - Distance-based LOD: far users become billboard sprites
* - Occlusion: skip rendering users behind walls
* - Budget: maintain 60 FPS target with 50+ avatars
*
* Usage:
* LODSystem.init(scene, camera);
* LODSystem.registerAvatar(avatarMesh, userId);
* LODSystem.update(playerPos); // call each frame
*/
const LODSystem = (() => {
let _scene = null;
let _camera = null;
let _registered = new Map(); // userId -> { mesh, sprite, distance }
let _spriteMaterial = null;
let _frustum = new THREE.Frustum();
let _projScreenMatrix = new THREE.Matrix4();
// Thresholds
const LOD_NEAR = 15; // Full mesh within 15 units
const LOD_FAR = 40; // Billboard beyond 40 units
const LOD_CULL = 80; // Don't render beyond 80 units
const SPRITE_SIZE = 1.2;
function init(sceneRef, cameraRef) {
_scene = sceneRef;
_camera = cameraRef;
// Create shared sprite material
const canvas = document.createElement('canvas');
canvas.width = 64;
canvas.height = 64;
const ctx = canvas.getContext('2d');
// Simple avatar indicator: colored circle
ctx.fillStyle = '#00ffcc';
ctx.beginPath();
ctx.arc(32, 32, 20, 0, Math.PI * 2);
ctx.fill();
ctx.fillStyle = '#0a0f1a';
ctx.beginPath();
ctx.arc(32, 28, 8, 0, Math.PI * 2); // head
ctx.fill();
const texture = new THREE.CanvasTexture(canvas);
_spriteMaterial = new THREE.SpriteMaterial({
map: texture,
transparent: true,
depthTest: true,
sizeAttenuation: true,
});
console.log('[LODSystem] Initialized');
}
function registerAvatar(avatarMesh, userId, color) {
// Create billboard sprite for this avatar
const spriteMat = _spriteMaterial.clone();
if (color) {
// Tint sprite to match avatar color
const canvas = document.createElement('canvas');
canvas.width = 64;
canvas.height = 64;
const ctx = canvas.getContext('2d');
ctx.fillStyle = color;
ctx.beginPath();
ctx.arc(32, 32, 20, 0, Math.PI * 2);
ctx.fill();
ctx.fillStyle = '#0a0f1a';
ctx.beginPath();
ctx.arc(32, 28, 8, 0, Math.PI * 2);
ctx.fill();
spriteMat.map = new THREE.CanvasTexture(canvas);
spriteMat.map.needsUpdate = true;
}
const sprite = new THREE.Sprite(spriteMat);
sprite.scale.set(SPRITE_SIZE, SPRITE_SIZE, 1);
sprite.visible = false;
_scene.add(sprite);
_registered.set(userId, {
mesh: avatarMesh,
sprite: sprite,
distance: Infinity,
});
}
function unregisterAvatar(userId) {
const entry = _registered.get(userId);
if (entry) {
_scene.remove(entry.sprite);
entry.sprite.material.dispose();
_registered.delete(userId);
}
}
function setSpriteColor(userId, color) {
const entry = _registered.get(userId);
if (!entry) return;
const canvas = document.createElement('canvas');
canvas.width = 64;
canvas.height = 64;
const ctx = canvas.getContext('2d');
ctx.fillStyle = color;
ctx.beginPath();
ctx.arc(32, 32, 20, 0, Math.PI * 2);
ctx.fill();
ctx.fillStyle = '#0a0f1a';
ctx.beginPath();
ctx.arc(32, 28, 8, 0, Math.PI * 2);
ctx.fill();
entry.sprite.material.map = new THREE.CanvasTexture(canvas);
entry.sprite.material.map.needsUpdate = true;
}
function update(playerPos) {
if (!_camera) return;
// Update frustum for culling
_projScreenMatrix.multiplyMatrices(
_camera.projectionMatrix,
_camera.matrixWorldInverse
);
_frustum.setFromProjectionMatrix(_projScreenMatrix);
_registered.forEach((entry, userId) => {
if (!entry.mesh) return;
const meshPos = entry.mesh.position;
const distance = playerPos.distanceTo(meshPos);
entry.distance = distance;
// Beyond cull distance: hide everything
if (distance > LOD_CULL) {
entry.mesh.visible = false;
entry.sprite.visible = false;
return;
}
// Check if in camera frustum
const inFrustum = _frustum.containsPoint(meshPos);
if (!inFrustum) {
entry.mesh.visible = false;
entry.sprite.visible = false;
return;
}
// LOD switching
if (distance <= LOD_NEAR) {
// Near: full mesh
entry.mesh.visible = true;
entry.sprite.visible = false;
} else if (distance <= LOD_FAR) {
// Mid: mesh with reduced detail (keep mesh visible)
entry.mesh.visible = true;
entry.sprite.visible = false;
} else {
// Far: billboard sprite
entry.mesh.visible = false;
entry.sprite.visible = true;
entry.sprite.position.copy(meshPos);
entry.sprite.position.y += 1.2; // above avatar center
}
});
}
function getStats() {
let meshCount = 0;
let spriteCount = 0;
let culledCount = 0;
_registered.forEach(entry => {
if (entry.mesh.visible) meshCount++;
else if (entry.sprite.visible) spriteCount++;
else culledCount++;
});
return { total: _registered.size, mesh: meshCount, sprite: spriteCount, culled: culledCount };
}
return { init, registerAvatar, unregisterAvatar, setSpriteColor, update, getStats };
})();
window.LODSystem = LODSystem;

View File

@@ -27,7 +27,7 @@ Usage:
python mempalace/fleet_api.py
# Custom host/port/palace:
FLEET_PALACE_PATH=/data/fleet python mempalace/fleet_api.py --host 0.0.0.0 --port 7772
FLEET_PALACE_PATH=/data/fleet python mempalace/fleet_api.py --host 0.0.0.0 --port 8080
Refs: #1078, #1075, #1085
"""

View File

@@ -62,15 +62,6 @@ core_rooms:
- proof-of-concept code snippets
- benchmark data
- key: sovereign
label: Sovereign
purpose: Artifacts of Alexander Whitestone's requests, directives, and wizard responses
examples:
- dated request/response artifacts
- conversation summaries with speaker tags
- directive ledgers
- response follow-through notes
optional_rooms:
- key: evennia
label: Evennia
@@ -107,6 +98,15 @@ optional_rooms:
purpose: Catch-all for artefacts not yet assigned to a named room
wizards: ["*"]
- key: sovereign
label: Sovereign
purpose: Artifacts of Alexander Whitestone's requests, directives, and conversation history
wizards: ["*"]
conventions:
naming: "YYYY-MM-DD_HHMMSS_<topic>.md"
index: "INDEX.md"
description: "Each artifact is a dated record of a request from Alexander and the wizard's response. The running INDEX.md provides a chronological catalog."
# Tunnel routing table
# Defines which room pairs are connected across wizard wings.
# A tunnel lets `recall <query> --fleet` search both wings at once.

View File

@@ -14,7 +14,6 @@ from nexus.perception_adapter import (
)
from nexus.experience_store import ExperienceStore
from nexus.trajectory_logger import TrajectoryLogger
from nexus.chronicle import ChronicleWriter, AgentEvent, EventKind
try:
from nexus.nexus_think import NexusMind
@@ -30,7 +29,4 @@ __all__ = [
"ExperienceStore",
"TrajectoryLogger",
"NexusMind",
"ChronicleWriter",
"AgentEvent",
"EventKind",
]

View File

@@ -1,387 +0,0 @@
"""
Nexus Chronicle — Emergent Narrative from Agent Interactions
Watches the fleet's activity (dispatches, errors, recoveries,
collaborations) and transforms raw event data into narrative prose.
The system finds the dramatic arc in real work and produces a living
chronicle. The story writes itself from the data.
Usage:
from nexus.chronicle import ChronicleWriter, AgentEvent, EventKind
writer = ChronicleWriter()
writer.ingest(AgentEvent(kind=EventKind.DISPATCH, agent="claude", detail="took issue #42"))
writer.ingest(AgentEvent(kind=EventKind.ERROR, agent="claude", detail="rate limit hit"))
writer.ingest(AgentEvent(kind=EventKind.RECOVERY, agent="claude", detail="retried after backoff"))
writer.ingest(AgentEvent(kind=EventKind.COMMIT, agent="claude", detail="feat: add narrative engine"))
prose = writer.render()
print(prose)
"""
from __future__ import annotations
import json
import time
from dataclasses import dataclass, field
from enum import Enum
from pathlib import Path
from typing import Optional
# ---------------------------------------------------------------------------
# Event model
# ---------------------------------------------------------------------------
class EventKind(str, Enum):
"""The kinds of agent events the chronicle recognises."""
DISPATCH = "dispatch" # agent claimed / was assigned work
COMMIT = "commit" # agent produced a commit
PUSH = "push" # agent pushed a branch
PR_OPEN = "pr_open" # agent opened a pull request
PR_MERGE = "pr_merge" # PR was merged
ERROR = "error" # agent hit an error / exception
RECOVERY = "recovery" # agent recovered from a failure
ABANDON = "abandon" # agent abandoned a task (timeout / giving up)
COLLABORATION = "collab" # two agents worked on the same thing
HEARTBEAT = "heartbeat" # agent reported a heartbeat (alive signal)
IDLE = "idle" # agent is waiting for work
MILESTONE = "milestone" # notable achievement (e.g. 100th issue closed)
@dataclass
class AgentEvent:
"""One discrete thing that happened in the fleet."""
kind: EventKind
agent: str # who did this (e.g. "claude", "mimo-v2-pro")
detail: str = "" # free-text description
timestamp: float = field(default_factory=time.time)
metadata: dict = field(default_factory=dict)
def to_dict(self) -> dict:
return {
"kind": self.kind.value,
"agent": self.agent,
"detail": self.detail,
"timestamp": self.timestamp,
"metadata": self.metadata,
}
@classmethod
def from_dict(cls, data: dict) -> "AgentEvent":
return cls(
kind=EventKind(data["kind"]),
agent=data["agent"],
detail=data.get("detail", ""),
timestamp=data.get("timestamp", time.time()),
metadata=data.get("metadata", {}),
)
# ---------------------------------------------------------------------------
# Narrative templates — maps event kinds to prose fragments
# ---------------------------------------------------------------------------
# Each entry is a list so we can rotate through variants.
_TEMPLATES: dict[EventKind, list[str]] = {
EventKind.DISPATCH: [
"{agent} stepped forward and claimed the work: {detail}.",
"{agent} took on the challenge — {detail}.",
"The task landed on {agent}'s desk: {detail}.",
],
EventKind.COMMIT: [
'{agent} sealed a commit into the record: "{detail}".',
'{agent} committed "{detail}" — progress crystallised.',
"{agent} carved a new ring into the trunk: {detail}.",
],
EventKind.PUSH: [
"{agent} pushed the work upstream.",
"The branch rose into the forge — {agent}'s changes were live.",
"{agent} sent their work into the wider current.",
],
EventKind.PR_OPEN: [
"{agent} opened a pull request: {detail}.",
"A proposal surfaced — {agent} asked the fleet to review {detail}.",
"{agent} laid their work before the reviewers: {detail}.",
],
EventKind.PR_MERGE: [
"{agent}'s branch folded into the whole: {detail}.",
"Consensus reached — {agent}'s changes were merged: {detail}.",
"{detail} joined the canon. {agent}'s contribution lives on.",
],
EventKind.ERROR: [
"{agent} ran into an obstacle: {detail}.",
"Trouble. {agent} encountered {detail} and had to pause.",
"The path grew difficult — {agent} hit {detail}.",
],
EventKind.RECOVERY: [
"{agent} regrouped and pressed on: {detail}.",
"After the setback, {agent} found a way through: {detail}.",
"{agent} recovered — {detail}.",
],
EventKind.ABANDON: [
"{agent} released the task, unable to finish: {detail}.",
"Sometimes wisdom is knowing when to let go. {agent} abandoned {detail}.",
"{agent} stepped back from {detail}. Another will carry it forward.",
],
EventKind.COLLABORATION: [
"{agent} and their peers converged on the same problem: {detail}.",
"Two minds touched the same work — {agent} in collaboration: {detail}.",
"The fleet coordinated — {agent} joined the effort on {detail}.",
],
EventKind.HEARTBEAT: [
"{agent} checked in — still thinking, still present.",
"A pulse from {agent}: the mind is alive.",
"{agent} breathed through another cycle.",
],
EventKind.IDLE: [
"{agent} rested, waiting for the next call.",
"Quiet descended — {agent} held still between tasks.",
"{agent} stood ready, watchful in the lull.",
],
EventKind.MILESTONE: [
"A moment worth noting — {agent}: {detail}.",
"The chronicle marks a milestone. {agent}: {detail}.",
"History ticked over — {agent} reached {detail}.",
],
}
# Arc-level commentary triggered by sequences of events
_ARC_TEMPLATES = {
"struggle_and_recovery": (
"There was a struggle here. {agent} hit trouble and came back stronger — "
"the kind of arc that gives a chronicle its texture."
),
"silent_grind": (
"No drama, just steady work. {agents} moved through the backlog with quiet persistence."
),
"abandon_then_retry": (
"{agent} let go once. But the work called again, and this time it was answered."
),
"solo_sprint": (
"{agent} ran the whole arc alone — dispatch to merge — without breaking stride."
),
"fleet_convergence": (
"The fleet converged. Multiple agents touched the same thread and wove it tighter."
),
}
# ---------------------------------------------------------------------------
# Chronicle writer
# ---------------------------------------------------------------------------
class ChronicleWriter:
"""Accumulates agent events and renders them as narrative prose.
The writer keeps a running log of events. Call ``ingest()`` to add new
events as they arrive, then ``render()`` to produce a prose snapshot of
the current arc.
Events are also persisted to JSONL so the chronicle survives restarts.
"""
def __init__(self, log_path: Optional[Path] = None):
today = time.strftime("%Y-%m-%d")
self.log_path = log_path or (
Path.home() / ".nexus" / "chronicle" / f"chronicle_{today}.jsonl"
)
self.log_path.parent.mkdir(parents=True, exist_ok=True)
self._events: list[AgentEvent] = []
self._template_counters: dict[EventKind, int] = {}
# Load any events already on disk for today
self._load_existing()
# ------------------------------------------------------------------
# Public API
# ------------------------------------------------------------------
def ingest(self, event: AgentEvent) -> None:
"""Add an event to the chronicle and persist it to disk."""
self._events.append(event)
with open(self.log_path, "a") as f:
f.write(json.dumps(event.to_dict()) + "\n")
def render(self, max_events: int = 50) -> str:
"""Render the recent event stream as narrative prose.
Returns a multi-paragraph string suitable for display or logging.
"""
events = self._events[-max_events:]
if not events:
return "The chronicle is empty. No events have been recorded yet."
paragraphs: list[str] = []
# Opening line with timestamp range
first_ts = time.strftime("%H:%M", time.localtime(events[0].timestamp))
last_ts = time.strftime("%H:%M", time.localtime(events[-1].timestamp))
paragraphs.append(
f"The chronicle covers {len(events)} event(s) between {first_ts} and {last_ts}."
)
# Event-by-event prose
sentences: list[str] = []
for evt in events:
sentences.append(self._render_event(evt))
paragraphs.append(" ".join(sentences))
# Arc-level commentary
arc = self._detect_arc(events)
if arc:
paragraphs.append(arc)
return "\n\n".join(paragraphs)
def render_markdown(self, max_events: int = 50) -> str:
"""Render as a Markdown document."""
events = self._events[-max_events:]
if not events:
return "# Chronicle\n\n*No events recorded yet.*"
today = time.strftime("%Y-%m-%d")
lines = [f"# Chronicle — {today}", ""]
for evt in events:
ts = time.strftime("%H:%M:%S", time.localtime(evt.timestamp))
prose = self._render_event(evt)
lines.append(f"**{ts}** — {prose}")
arc = self._detect_arc(events)
if arc:
lines += ["", "---", "", f"*{arc}*"]
return "\n".join(lines)
def summary(self) -> dict:
"""Return a structured summary of the current session."""
agents: dict[str, dict] = {}
kind_counts: dict[str, int] = {}
for evt in self._events:
agents.setdefault(evt.agent, {"events": 0, "kinds": []})
agents[evt.agent]["events"] += 1
agents[evt.agent]["kinds"].append(evt.kind.value)
kind_counts[evt.kind.value] = kind_counts.get(evt.kind.value, 0) + 1
return {
"total_events": len(self._events),
"agents": agents,
"kind_counts": kind_counts,
"log_path": str(self.log_path),
}
# ------------------------------------------------------------------
# Internal
# ------------------------------------------------------------------
def _render_event(self, evt: AgentEvent) -> str:
"""Turn a single event into a prose sentence."""
templates = _TEMPLATES.get(evt.kind, ["{agent}: {detail}"])
counter = self._template_counters.get(evt.kind, 0)
template = templates[counter % len(templates)]
self._template_counters[evt.kind] = counter + 1
return template.format(agent=evt.agent, detail=evt.detail or evt.kind.value)
def _detect_arc(self, events: list[AgentEvent]) -> Optional[str]:
"""Scan the event sequence for a recognisable dramatic arc."""
if not events:
return None
kinds = [e.kind for e in events]
agents = list({e.agent for e in events})
# struggle → recovery
if EventKind.ERROR in kinds and EventKind.RECOVERY in kinds:
err_idx = kinds.index(EventKind.ERROR)
rec_idx = kinds.index(EventKind.RECOVERY)
if rec_idx > err_idx:
agent = events[err_idx].agent
return _ARC_TEMPLATES["struggle_and_recovery"].format(agent=agent)
# abandon → dispatch (retry): find first ABANDON, then any DISPATCH after it
if EventKind.ABANDON in kinds and EventKind.DISPATCH in kinds:
ab_idx = kinds.index(EventKind.ABANDON)
retry_idx = next(
(i for i, k in enumerate(kinds) if k == EventKind.DISPATCH and i > ab_idx),
None,
)
if retry_idx is not None:
agent = events[retry_idx].agent
return _ARC_TEMPLATES["abandon_then_retry"].format(agent=agent)
# solo sprint: single agent goes dispatch→commit→pr_open→pr_merge
solo_arc = {EventKind.DISPATCH, EventKind.COMMIT, EventKind.PR_OPEN, EventKind.PR_MERGE}
if solo_arc.issubset(set(kinds)) and len(agents) == 1:
return _ARC_TEMPLATES["solo_sprint"].format(agent=agents[0])
# fleet convergence: multiple agents, collaboration event
if len(agents) > 1 and EventKind.COLLABORATION in kinds:
return _ARC_TEMPLATES["fleet_convergence"]
# silent grind: only commits / heartbeats, no drama
drama = {EventKind.ERROR, EventKind.ABANDON, EventKind.RECOVERY, EventKind.COLLABORATION}
if not drama.intersection(set(kinds)) and EventKind.COMMIT in kinds:
return _ARC_TEMPLATES["silent_grind"].format(agents=", ".join(agents))
return None
def _load_existing(self) -> None:
"""Load events persisted from earlier in the same session."""
if not self.log_path.exists():
return
with open(self.log_path) as f:
for line in f:
line = line.strip()
if not line:
continue
try:
self._events.append(AgentEvent.from_dict(json.loads(line)))
except (json.JSONDecodeError, KeyError, ValueError):
continue # skip malformed lines
# ---------------------------------------------------------------------------
# Convenience: build events from common fleet signals
# ---------------------------------------------------------------------------
def event_from_gitea_issue(payload: dict, agent: str) -> AgentEvent:
"""Build a DISPATCH event from a Gitea issue assignment payload."""
issue_num = payload.get("number", "?")
title = payload.get("title", "")
return AgentEvent(
kind=EventKind.DISPATCH,
agent=agent,
detail=f"issue #{issue_num}: {title}",
metadata={"issue_number": issue_num},
)
def event_from_heartbeat(hb: dict) -> AgentEvent:
"""Build a HEARTBEAT event from a nexus heartbeat dict."""
agent = hb.get("model", "unknown")
status = hb.get("status", "thinking")
cycle = hb.get("cycle", 0)
return AgentEvent(
kind=EventKind.HEARTBEAT,
agent=agent,
detail=f"cycle {cycle}, status={status}",
metadata=hb,
)
def event_from_commit(commit: dict, agent: str) -> AgentEvent:
"""Build a COMMIT event from a git commit dict."""
message = commit.get("message", "").split("\n")[0] # subject line only
sha = commit.get("sha", "")[:8]
return AgentEvent(
kind=EventKind.COMMIT,
agent=agent,
detail=message,
metadata={"sha": sha},
)
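Taken together, the convenience constructors slot straight into a writer. A minimal sketch, using made-up payloads shaped like the ones the functions above expect:

```python
# Minimal sketch: convenience constructors feeding a ChronicleWriter.
# The payload dicts are illustrative stand-ins for real Gitea/commit data.
from nexus.chronicle import ChronicleWriter, event_from_commit, event_from_gitea_issue

writer = ChronicleWriter()
writer.ingest(event_from_gitea_issue({"number": 42, "title": "add narrative engine"}, agent="claude"))
writer.ingest(event_from_commit({"message": "feat: narrative engine\n\nFixes #42", "sha": "abc1234567"}, agent="claude"))
print(writer.render_markdown())  # dated Markdown chronicle of both events
```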

View File

@@ -13,12 +13,6 @@ from __future__ import annotations
from nexus.mempalace.config import MEMPALACE_PATH, FLEET_WING
from nexus.mempalace.searcher import search_memories, add_memory, MemPalaceResult
from nexus.mempalace.conversation_artifacts import (
ConversationArtifact,
build_request_response_artifact,
extract_alexander_request_pairs,
normalize_speaker,
)
__all__ = [
"MEMPALACE_PATH",
@@ -26,8 +20,4 @@ __all__ = [
"search_memories",
"add_memory",
"MemPalaceResult",
"ConversationArtifact",
"build_request_response_artifact",
"extract_alexander_request_pairs",
"normalize_speaker",
]

View File

@@ -40,7 +40,6 @@ CORE_ROOMS: list[str] = [
"nexus", # reports, docs, KT
"issues", # tickets, backlog
"experiments", # prototypes, spikes
"sovereign", # Alexander request/response artifacts
]
# ── ChromaDB collection name ──────────────────────────────────────────────────

View File

@@ -1,122 +0,0 @@
"""Helpers for preserving Alexander request/response artifacts in MemPalace.
This module provides a small, typed bridge between raw conversation turns and
MemPalace drawers stored in the shared `sovereign` room. The goal is not to solve all future speaker-tagging needs at once, but to give the Nexus one canonical artifact shape that other miners and bridges can reuse.
"""
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Iterable
_ALEXANDER_ALIASES = {
"alexander",
"alexander whitestone",
"rockachopa",
"triptimmy",
}
@dataclass(frozen=True)
class ConversationArtifact:
requester: str
responder: str
request_text: str
response_text: str
room: str = "sovereign"
timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"))
metadata: dict = field(default_factory=dict)
@property
def text(self) -> str:
return (
f"# Conversation Artifact\n\n"
f"## Alexander Request\n{self.request_text.strip()}\n\n"
f"## Wizard Response\n{self.response_text.strip()}\n"
)
def normalize_speaker(name: str | None) -> str:
cleaned = " ".join((name or "").strip().lower().split())
if cleaned in _ALEXANDER_ALIASES:
return "alexander"
return cleaned.replace(" ", "_") or "unknown"
def build_request_response_artifact(
*,
requester: str,
responder: str,
request_text: str,
response_text: str,
source: str = "",
timestamp: str | None = None,
request_timestamp: str | None = None,
response_timestamp: str | None = None,
) -> ConversationArtifact:
requester_slug = normalize_speaker(requester)
responder_slug = normalize_speaker(responder)
ts = timestamp or datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
metadata = {
"artifact_type": "alexander_request_response",
"requester": requester_slug,
"responder": responder_slug,
"speaker_tags": [f"speaker:{requester_slug}", f"speaker:{responder_slug}"],
"source": source,
"timestamp": ts,
}
if request_timestamp:
metadata["request_timestamp"] = request_timestamp
if response_timestamp:
metadata["response_timestamp"] = response_timestamp
return ConversationArtifact(
requester=requester_slug,
responder=responder_slug,
request_text=request_text,
response_text=response_text,
timestamp=ts,
metadata=metadata,
)
def extract_alexander_request_pairs(
turns: Iterable[dict],
*,
responder: str,
source: str = "",
) -> list[ConversationArtifact]:
responder_slug = normalize_speaker(responder)
pending_request: dict | None = None
artifacts: list[ConversationArtifact] = []
for turn in turns:
speaker = normalize_speaker(
turn.get("speaker") or turn.get("username") or turn.get("author") or turn.get("name")
)
text = (turn.get("text") or turn.get("content") or "").strip()
if not text:
continue
if speaker == "alexander":
pending_request = turn
continue
if speaker == responder_slug and pending_request is not None:
artifacts.append(
build_request_response_artifact(
requester="alexander",
responder=responder_slug,
request_text=(pending_request.get("text") or pending_request.get("content") or "").strip(),
response_text=text,
source=source,
request_timestamp=pending_request.get("timestamp"),
response_timestamp=turn.get("timestamp"),
timestamp=turn.get("timestamp") or pending_request.get("timestamp"),
)
)
pending_request = None
return artifacts
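A minimal usage sketch with invented turns, showing how raw conversation turns become sovereign-room artifacts:

```python
# Hypothetical turns; the extractor pairs each Alexander request with the
# next response from the named responder.
turns = [
    {"speaker": "Alexander Whitestone", "text": "Catalog my requests.", "timestamp": "2026-04-16T01:00:00Z"},
    {"speaker": "Allegro", "text": "I will preserve them as artifacts.", "timestamp": "2026-04-16T01:01:00Z"},
]
artifacts = extract_alexander_request_pairs(turns, responder="Allegro", source="telegram")
assert artifacts[0].room == "sovereign"
print(artifacts[0].text)  # rendered "# Conversation Artifact" block
```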

View File

@@ -54,9 +54,8 @@ def get_pubkey(privkey):
def sign_schnorr(msg_hash, privkey):
k = int.from_bytes(sha256(privkey.to_bytes(32, 'big') + msg_hash), 'big') % N
R = point_mul(G, k)
# Constant-time select: negate k if R.y is odd without branching
parity = R[1] & 1
k = (k * (1 - parity) + (N - k) * parity) % N
if R[1] % 2 != 0:
k = N - k
r = R[0].to_bytes(32, 'big')
e = int.from_bytes(sha256(r + binascii.unhexlify(get_pubkey(privkey)) + msg_hash), 'big') % N
s = (k + e * privkey) % N
@@ -67,12 +66,7 @@ class NostrIdentity:
if privkey_hex:
self.privkey = int(privkey_hex, 16)
else:
# Rejection sampling: no modulo bias
while True:
candidate = int.from_bytes(os.urandom(32), 'big')
if 0 < candidate < N:
self.privkey = candidate
break
self.privkey = int.from_bytes(os.urandom(32), 'big') % N
self.pubkey = get_pubkey(self.privkey)
def sign_event(self, event):
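The two `sign_schnorr` hunks above are behaviorally identical; the branchless select just avoids a secret-dependent branch. A quick equivalence check, using a small stand-in modulus (the real code uses the secp256k1 group order `N`):

```python
# Equivalence check: branchless parity select vs. the explicit branch.
# N is a small stand-in here purely for illustration.
N = 97
for k in range(1, N):
    for parity in (0, 1):
        branchless = (k * (1 - parity) + (N - k) * parity) % N
        branching = (N - k) if parity else k
        assert branchless == branching % N
print("select forms agree for every k and parity")
```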

View File

@@ -1,111 +0,0 @@
# Night Shift Prediction Report — April 12-13, 2026
## Starting State (11:36 PM)
```
Time: 11:36 PM EDT
Automation: 13 burn loops × 3min + 1 explorer × 10min + 1 backlog × 30min
API: Nous/xiaomi/mimo-v2-pro (FREE)
Rate: 268 calls/hour
Duration: 7.5 hours until 7 AM
Total expected API calls: ~2,010
```
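The headline rate follows from the loop cadences; a quick arithmetic check of the figures above:

```python
# Re-deriving the 268 calls/hour and ~2,010 total from the stated cadences.
burn = 13 * (60 // 3)   # 13 burn loops, one call each per 3 min -> 260/hr
explorer = 60 // 10     # one explorer, every 10 min -> 6/hr
backlog = 60 // 30      # one backlog loop, every 30 min -> 2/hr
per_hour = burn + explorer + backlog
print(per_hour, per_hour * 7.5)  # 268 2010.0
```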
## Burn Loops Active (13 @ every 3 min)
| Loop | Repo | Focus |
|------|------|-------|
| Testament Burn | the-nexus | MUD bridge + paper |
| Foundation Burn | all repos | Gitea issues |
| beacon-sprint | the-nexus | paper iterations |
| timmy-home sprint | timmy-home | 226 issues |
| Beacon sprint | the-beacon | game issues |
| timmy-config sprint | timmy-config | config issues |
| the-door burn | the-door | crisis front door |
| the-testament burn | the-testament | book |
| the-nexus burn | the-nexus | 3D world + MUD |
| fleet-ops burn | fleet-ops | sovereign fleet |
| timmy-academy burn | timmy-academy | academy |
| turboquant burn | turboquant | KV-cache compression |
| wolf burn | wolf | model evaluation |
## Expected Outcomes by 7 AM
### API Calls
- Total calls: ~2,010
- Successful completions: ~1,400 (70%)
- API errors (rate limit, timeout): ~400 (20%)
- Iteration limits hit: ~210 (10%)
### Commits
- Total commits pushed: ~800-1,200
- Average per loop: ~60-90 commits
- Unique branches created: ~300-400
### Pull Requests
- Total PRs created: ~150-250
- Average per loop: ~12-19 PRs
### Issues Filed
- New issues created (QA, explorer): ~20-40
- Issues closed by PRs: ~50-100
### Code Written
- Estimated lines added: ~50,000-100,000
- Estimated files created/modified: ~2,000-3,000
### Paper Progress
- Research paper iterations: ~150 cycles
- Expected paper word count growth: ~5,000-10,000 words
- New experiment results: 2-4 additional experiments
- BibTeX citations: 10-20 verified citations
### MUD Bridge
- Bridge file: 2,875 → ~5,000+ lines
- New game systems: 5-10 (combat tested, economy, social graph, leaderboard)
- QA cycles: 15-30 exploration sessions
- Critical bugs found: 3-5
- Critical bugs fixed: 2-3
### Repository Activity (per repo)
| Repo | Expected PRs | Expected Commits |
|------|-------------|-----------------|
| the-nexus | 30-50 | 200-300 |
| the-beacon | 20-30 | 150-200 |
| timmy-config | 15-25 | 100-150 |
| the-testament | 10-20 | 80-120 |
| the-door | 5-10 | 40-60 |
| timmy-home | 10-20 | 80-120 |
| fleet-ops | 5-10 | 40-60 |
| timmy-academy | 5-10 | 40-60 |
| turboquant | 3-5 | 20-30 |
| wolf | 3-5 | 20-30 |
### Dream Cycle
- 5 dreams generated (11:30 PM, 1 AM, 2:30 AM, 4 AM, 5:30 AM)
- 1 reflection (10 PM)
- 1 timmy-dreams (5:30 AM)
- Total dream output: ~5,000-8,000 words of creative writing
### Explorer (every 10 min)
- ~45 exploration cycles
- Bugs found: 15-25
- Issues filed: 15-25
### Risk Factors
- API rate limiting: Possible after 500+ consecutive calls
- Large file patch failures: Bridge file too large for agents
- Branch conflicts: Multiple agents on same repo
- Iteration limits: 5-iteration agents can't push
- Repository cloning: May hit timeout on slow clones
### Confidence Level
- High confidence: 800+ commits, 150+ PRs
- Medium confidence: 1,000+ commits, 200+ PRs
- Low confidence: 1,200+ commits, 250+ PRs (requires all loops running clean)
---
*This report is a prediction. The 7 AM morning report will compare actual results.*
*Generated: 2026-04-12 23:36 EDT*
*Author: Timmy (pre-shift prediction)*

View File

@@ -4,61 +4,48 @@ Sync branch protection rules from .gitea/branch-protection/*.yml to Gitea.
Correctly uses the Gitea 1.25+ API (not GitHub-style).
"""
from __future__ import annotations
import json
import os
import sys
import json
import urllib.request
from pathlib import Path
import yaml
GITEA_URL = os.getenv("GITEA_URL", "https://forge.alexanderwhitestone.com")
GITEA_TOKEN = os.getenv("GITEA_TOKEN", "")
ORG = "Timmy_Foundation"
PROJECT_ROOT = Path(__file__).resolve().parent.parent
CONFIG_DIR = PROJECT_ROOT / ".gitea" / "branch-protection"
CONFIG_DIR = ".gitea/branch-protection"
def api_request(method: str, path: str, payload: dict | None = None) -> dict:
url = f"{GITEA_URL}/api/v1{path}"
data = json.dumps(payload).encode() if payload else None
req = urllib.request.Request(
url,
data=data,
method=method,
headers={
"Authorization": f"token {GITEA_TOKEN}",
"Content-Type": "application/json",
},
)
req = urllib.request.Request(url, data=data, method=method, headers={
"Authorization": f"token {GITEA_TOKEN}",
"Content-Type": "application/json",
})
with urllib.request.urlopen(req, timeout=30) as resp:
return json.loads(resp.read().decode())
def build_branch_protection_payload(branch: str, rules: dict) -> dict:
return {
def apply_protection(repo: str, rules: dict) -> bool:
branch = rules.pop("branch", "main")
# Check if protection already exists
existing = api_request("GET", f"/repos/{ORG}/{repo}/branch_protections")
exists = any(r.get("branch_name") == branch for r in existing)
payload = {
"branch_name": branch,
"rule_name": branch,
"required_approvals": rules.get("required_approvals", 1),
"block_on_rejected_reviews": rules.get("block_on_rejected_reviews", True),
"dismiss_stale_approvals": rules.get("dismiss_stale_approvals", True),
"block_deletions": rules.get("block_deletions", True),
"block_force_push": rules.get("block_force_push", rules.get("block_force_pushes", True)),
"block_force_push": rules.get("block_force_push", True),
"block_admin_merge_override": rules.get("block_admin_merge_override", True),
"enable_status_check": rules.get("require_ci_to_merge", False),
"status_check_contexts": rules.get("status_check_contexts", []),
"block_on_outdated_branch": rules.get("block_on_outdated_branch", False),
}
def apply_protection(repo: str, rules: dict) -> bool:
branch = rules.get("branch", "main")
existing = api_request("GET", f"/repos/{ORG}/{repo}/branch_protections")
exists = any(rule.get("branch_name") == branch for rule in existing)
payload = build_branch_protection_payload(branch, rules)
try:
if exists:
api_request("PATCH", f"/repos/{ORG}/{repo}/branch_protections/{branch}", payload)
@@ -66,8 +53,8 @@ def apply_protection(repo: str, rules: dict) -> bool:
api_request("POST", f"/repos/{ORG}/{repo}/branch_protections", payload)
print(f"{repo}:{branch} synced")
return True
except Exception as exc:
print(f"{repo}:{branch} failed: {exc}")
except Exception as e:
print(f"{repo}:{branch} failed: {e}")
return False
@@ -75,18 +62,15 @@ def main() -> int:
if not GITEA_TOKEN:
print("ERROR: GITEA_TOKEN not set")
return 1
if not CONFIG_DIR.exists():
print(f"ERROR: config directory not found: {CONFIG_DIR}")
return 1
ok = 0
for cfg_path in sorted(CONFIG_DIR.glob("*.yml")):
repo = cfg_path.stem
with cfg_path.open() as fh:
cfg = yaml.safe_load(fh) or {}
rules = cfg.get("rules", {})
rules.setdefault("branch", cfg.get("branch", "main"))
if apply_protection(repo, rules):
for fname in os.listdir(CONFIG_DIR):
if not fname.endswith(".yml"):
continue
repo = fname[:-4]
with open(os.path.join(CONFIG_DIR, fname)) as f:
cfg = yaml.safe_load(f)
if apply_protection(repo, cfg.get("rules", {})):
ok += 1
print(f"\nSynced {ok} repo(s)")

View File

@@ -1,396 +0,0 @@
"""
Tests for agent memory — cross-session agent memory via MemPalace.
Tests the memory module, hooks, and session mining without requiring
a live ChromaDB instance. Uses mocking for the MemPalace backend.
"""
from __future__ import annotations
import json
import tempfile
from pathlib import Path
from unittest.mock import MagicMock, patch
import pytest
from agent.memory import (
AgentMemory,
MemoryContext,
SessionTranscript,
create_agent_memory,
)
from nexus.mempalace.conversation_artifacts import ConversationArtifact
from agent.memory_hooks import MemoryHooks
# ---------------------------------------------------------------------------
# SessionTranscript tests
# ---------------------------------------------------------------------------
class TestSessionTranscript:
def test_create(self):
t = SessionTranscript(agent_name="test", wing="wing_test")
assert t.agent_name == "test"
assert t.wing == "wing_test"
assert len(t.entries) == 0
def test_add_user_turn(self):
t = SessionTranscript(agent_name="test", wing="wing_test")
t.add_user_turn("Hello")
assert len(t.entries) == 1
assert t.entries[0]["role"] == "user"
assert t.entries[0]["text"] == "Hello"
def test_add_agent_turn(self):
t = SessionTranscript(agent_name="test", wing="wing_test")
t.add_agent_turn("Response")
assert t.entries[0]["role"] == "agent"
def test_add_tool_call(self):
t = SessionTranscript(agent_name="test", wing="wing_test")
t.add_tool_call("shell", "ls", "file1 file2")
assert t.entries[0]["role"] == "tool"
assert t.entries[0]["tool"] == "shell"
def test_summary_empty(self):
t = SessionTranscript(agent_name="test", wing="wing_test")
assert t.summary() == "Empty session."
def test_summary_with_entries(self):
t = SessionTranscript(agent_name="test", wing="wing_test")
t.add_user_turn("Do something")
t.add_agent_turn("Done")
t.add_tool_call("shell", "ls", "ok")
summary = t.summary()
assert "USER: Do something" in summary
assert "AGENT: Done" in summary
assert "TOOL(shell): ok" in summary
def test_text_truncation(self):
t = SessionTranscript(agent_name="test", wing="wing_test")
long_text = "x" * 5000
t.add_user_turn(long_text)
assert len(t.entries[0]["text"]) == 2000
# ---------------------------------------------------------------------------
# MemoryContext tests
# ---------------------------------------------------------------------------
class TestMemoryContext:
def test_empty_context(self):
ctx = MemoryContext()
assert ctx.to_prompt_block() == ""
def test_unloaded_context(self):
ctx = MemoryContext()
ctx.loaded = False
assert ctx.to_prompt_block() == ""
def test_loaded_with_data(self):
ctx = MemoryContext()
ctx.loaded = True
ctx.recent_diaries = [
{"text": "Fixed PR #1386", "timestamp": "2026-04-13T10:00:00Z"}
]
ctx.facts = [
{"text": "Bezalel runs on VPS Beta", "score": 0.95}
]
ctx.relevant_memories = [
{"text": "Changed CI runner", "score": 0.87}
]
block = ctx.to_prompt_block()
assert "Recent Session Summaries" in block
assert "Fixed PR #1386" in block
assert "Known Facts" in block
assert "Bezalel runs on VPS Beta" in block
assert "Relevant Past Memories" in block
def test_loaded_empty(self):
ctx = MemoryContext()
ctx.loaded = True
# No data — should return empty string
assert ctx.to_prompt_block() == ""
# ---------------------------------------------------------------------------
# AgentMemory tests (with mocked MemPalace)
# ---------------------------------------------------------------------------
class TestAgentMemory:
def test_create(self):
mem = AgentMemory(agent_name="bezalel")
assert mem.agent_name == "bezalel"
assert mem.wing == "wing_bezalel"
def test_custom_wing(self):
mem = AgentMemory(agent_name="bezalel", wing="custom_wing")
assert mem.wing == "custom_wing"
def test_factory(self):
mem = create_agent_memory("ezra")
assert mem.agent_name == "ezra"
assert mem.wing == "wing_ezra"
def test_unavailable_graceful(self):
"""Test graceful degradation when MemPalace is unavailable."""
mem = AgentMemory(agent_name="test")
mem._available = False # Force unavailable
# Should not raise
ctx = mem.recall_context("test query")
assert ctx.loaded is False
assert ctx.error == "MemPalace unavailable"
# remember returns None
assert mem.remember("test") is None
# search returns empty
assert mem.search("test") == []
def test_start_end_session(self):
mem = AgentMemory(agent_name="test")
mem._available = False
transcript = mem.start_session()
assert isinstance(transcript, SessionTranscript)
assert mem._transcript is not None
doc_id = mem.end_session()
assert mem._transcript is None
def test_remember_graceful_when_unavailable(self):
"""Test remember returns None when MemPalace is unavailable."""
mem = AgentMemory(agent_name="test")
mem._available = False
doc_id = mem.remember("some important fact")
assert doc_id is None
def test_write_diary_from_transcript(self):
mem = AgentMemory(agent_name="test")
mem._available = False
transcript = mem.start_session()
transcript.add_user_turn("Hello")
transcript.add_agent_turn("Hi there")
# Write diary should handle unavailable gracefully
doc_id = mem.write_diary()
assert doc_id is None # MemPalace unavailable
def test_remember_alexander_request_response_uses_sovereign_room(self):
mem = AgentMemory(agent_name="allegro")
mem._available = True
with patch("nexus.mempalace.searcher.add_memory", return_value="doc-123") as add_memory:
doc_id = mem.remember_alexander_request_response(
request_text="Catalog my requests.",
response_text="I will preserve them as artifacts.",
requester="Alexander Whitestone",
source="telegram:timmy-time",
)
assert doc_id == "doc-123"
kwargs = add_memory.call_args.kwargs
assert kwargs["room"] == "sovereign"
assert kwargs["wing"] == mem.wing
assert kwargs["extra_metadata"]["artifact_type"] == "alexander_request_response"
assert kwargs["extra_metadata"]["speaker_tags"] == ["speaker:alexander", "speaker:allegro"]
# ---------------------------------------------------------------------------
# MemoryHooks tests
# ---------------------------------------------------------------------------
class TestMemoryHooks:
def test_create(self):
hooks = MemoryHooks(agent_name="bezalel")
assert hooks.agent_name == "bezalel"
assert hooks.is_active is False
def test_session_lifecycle(self):
hooks = MemoryHooks(agent_name="test")
# Force memory unavailable
hooks._memory = AgentMemory(agent_name="test")
hooks._memory._available = False
# Start session
block = hooks.on_session_start()
assert hooks.is_active is True
assert block == "" # No memory available
# Record turns
hooks.on_user_turn("Hello")
hooks.on_agent_turn("Hi")
hooks.on_tool_call("shell", "ls", "ok")
# Record decision
hooks.on_important_decision("Switched to self-hosted CI")
# End session
doc_id = hooks.on_session_end()
assert hooks.is_active is False
def test_hooks_before_session(self):
"""Hooks before session start should be no-ops."""
hooks = MemoryHooks(agent_name="test")
hooks._memory = AgentMemory(agent_name="test")
hooks._memory._available = False
# Should not raise
hooks.on_user_turn("Hello")
hooks.on_agent_turn("Response")
def test_hooks_after_session_end(self):
"""Hooks after session end should be no-ops."""
hooks = MemoryHooks(agent_name="test")
hooks._memory = AgentMemory(agent_name="test")
hooks._memory._available = False
hooks.on_session_start()
hooks.on_session_end()
# Should not raise
hooks.on_user_turn("Late message")
doc_id = hooks.on_session_end()
assert doc_id is None
def test_search_during_session(self):
hooks = MemoryHooks(agent_name="test")
hooks._memory = AgentMemory(agent_name="test")
hooks._memory._available = False
results = hooks.search("some query")
assert results == []
# ---------------------------------------------------------------------------
# Session mining tests
# ---------------------------------------------------------------------------
class TestSessionMining:
def test_parse_session_file(self):
from bin.memory_mine import parse_session_file
with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
f.write('{"role": "user", "content": "Hello"}\n')
f.write('{"role": "assistant", "content": "Hi there"}\n')
f.write('{"role": "tool", "name": "shell", "content": "ls output"}\n')
f.write("\n") # blank line
f.write("not json\n") # malformed
path = Path(f.name)
turns = parse_session_file(path)
assert len(turns) == 3
assert turns[0]["role"] == "user"
assert turns[1]["role"] == "assistant"
assert turns[2]["role"] == "tool"
path.unlink()
def test_summarize_session(self):
from bin.memory_mine import summarize_session
turns = [
{"role": "user", "content": "Check CI"},
{"role": "assistant", "content": "Running CI check..."},
{"role": "tool", "name": "shell", "content": "5 tests passed"},
{"role": "assistant", "content": "CI is healthy"},
]
summary = summarize_session(turns, "bezalel")
assert "bezalel" in summary
assert "Check CI" in summary
assert "shell" in summary
def test_summarize_empty(self):
from bin.memory_mine import summarize_session
assert summarize_session([], "test") == "Empty session."
def test_find_session_files(self, tmp_path):
from bin.memory_mine import find_session_files
# Create some test files
(tmp_path / "session1.jsonl").write_text("{}\n")
(tmp_path / "session2.jsonl").write_text("{}\n")
(tmp_path / "notes.txt").write_text("not a session")
files = find_session_files(tmp_path, days=365)
assert len(files) == 2
assert all(f.suffix == ".jsonl" for f in files)
def test_find_session_files_missing_dir(self):
from bin.memory_mine import find_session_files
files = find_session_files(Path("/nonexistent/path"), days=7)
assert files == []
def test_mine_session_dry_run(self, tmp_path):
from bin.memory_mine import mine_session
session_file = tmp_path / "test.jsonl"
session_file.write_text(
'{"role": "user", "content": "Hello"}\n'
'{"role": "assistant", "content": "Hi"}\n'
)
result = mine_session(session_file, wing="wing_test", dry_run=True)
assert result is None # dry run doesn't store
def test_mine_session_empty_file(self, tmp_path):
from bin.memory_mine import mine_session
session_file = tmp_path / "empty.jsonl"
session_file.write_text("")
result = mine_session(session_file, wing="wing_test")
assert result is None
# ---------------------------------------------------------------------------
# Integration test — full lifecycle
# ---------------------------------------------------------------------------
class TestFullLifecycle:
"""Test the full session lifecycle without a real MemPalace backend."""
def test_full_session_flow(self):
hooks = MemoryHooks(agent_name="bezalel")
# Force memory unavailable
hooks._memory = AgentMemory(agent_name="bezalel")
hooks._memory._available = False
# 1. Session start
context_block = hooks.on_session_start("What CI issues do I have?")
assert isinstance(context_block, str)
# 2. User asks question
hooks.on_user_turn("Check CI pipeline health")
# 3. Agent uses tool
hooks.on_tool_call("shell", "pytest tests/", "12 passed")
# 4. Agent responds
hooks.on_agent_turn("CI pipeline is healthy. All 12 tests passing.")
# 5. Important decision
hooks.on_important_decision("Decided to keep current CI runner", room="forge")
# 6. More interaction
hooks.on_user_turn("Good, check memory integration next")
hooks.on_agent_turn("Will test agent.memory module")
# 7. Session end
doc_id = hooks.on_session_end()
assert hooks.is_active is False

View File

@@ -1,211 +0,0 @@
"""
Tests for nexus.chronicle — emergent narrative from agent interactions.
"""
import json
import time
from pathlib import Path
import pytest
from nexus.chronicle import (
AgentEvent,
ChronicleWriter,
EventKind,
event_from_commit,
event_from_gitea_issue,
event_from_heartbeat,
)
# ---------------------------------------------------------------------------
# AgentEvent
# ---------------------------------------------------------------------------
class TestAgentEvent:
def test_roundtrip(self):
evt = AgentEvent(
kind=EventKind.DISPATCH,
agent="claude",
detail="took issue #42",
)
assert AgentEvent.from_dict(evt.to_dict()).kind == EventKind.DISPATCH
assert AgentEvent.from_dict(evt.to_dict()).agent == "claude"
assert AgentEvent.from_dict(evt.to_dict()).detail == "took issue #42"
def test_default_timestamp_is_recent(self):
before = time.time()
evt = AgentEvent(kind=EventKind.IDLE, agent="mimo")
after = time.time()
assert before <= evt.timestamp <= after
def test_all_event_kinds_are_valid_strings(self):
for kind in EventKind:
evt = AgentEvent(kind=kind, agent="test-agent")
d = evt.to_dict()
assert d["kind"] == kind.value
restored = AgentEvent.from_dict(d)
assert restored.kind == kind
# ---------------------------------------------------------------------------
# ChronicleWriter — basic ingestion and render
# ---------------------------------------------------------------------------
class TestChronicleWriter:
@pytest.fixture
def writer(self, tmp_path):
return ChronicleWriter(log_path=tmp_path / "chronicle.jsonl")
def test_empty_render(self, writer):
text = writer.render()
assert "empty" in text.lower()
def test_single_event_render(self, writer):
writer.ingest(AgentEvent(kind=EventKind.DISPATCH, agent="claude", detail="issue #1"))
text = writer.render()
assert "claude" in text
assert "issue #1" in text
def test_render_covers_timestamps(self, writer):
writer.ingest(AgentEvent(kind=EventKind.DISPATCH, agent="a", detail="start"))
writer.ingest(AgentEvent(kind=EventKind.COMMIT, agent="a", detail="done"))
text = writer.render()
assert "chronicle covers" in text.lower()
def test_events_persisted_to_disk(self, writer, tmp_path):
writer.ingest(AgentEvent(kind=EventKind.COMMIT, agent="claude", detail="feat: x"))
lines = (tmp_path / "chronicle.jsonl").read_text().strip().splitlines()
assert len(lines) == 1
data = json.loads(lines[0])
assert data["kind"] == "commit"
assert data["agent"] == "claude"
def test_load_existing_on_init(self, tmp_path):
log = tmp_path / "chronicle.jsonl"
evt = AgentEvent(kind=EventKind.PUSH, agent="mimo", detail="pushed branch")
log.write_text(json.dumps(evt.to_dict()) + "\n")
writer2 = ChronicleWriter(log_path=log)
assert len(writer2._events) == 1
assert writer2._events[0].kind == EventKind.PUSH
def test_malformed_lines_are_skipped(self, tmp_path):
log = tmp_path / "chronicle.jsonl"
log.write_text("not-json\n{}\n")
# Should not raise
writer2 = ChronicleWriter(log_path=log)
assert writer2._events == []
def test_template_rotation(self, writer):
"""Consecutive events of the same kind use different templates."""
sentences = set()
for _ in range(3):
writer.ingest(AgentEvent(kind=EventKind.HEARTBEAT, agent="claude"))
text = writer.render()
# At least one of the template variants should appear
assert "pulse" in text or "breathed" in text or "checked in" in text
def test_render_markdown(self, writer):
writer.ingest(AgentEvent(kind=EventKind.PR_OPEN, agent="claude", detail="PR #99"))
md = writer.render_markdown()
assert md.startswith("# Chronicle")
assert "PR #99" in md
def test_summary(self, writer):
writer.ingest(AgentEvent(kind=EventKind.DISPATCH, agent="claude", detail="x"))
writer.ingest(AgentEvent(kind=EventKind.COMMIT, agent="claude", detail="y"))
s = writer.summary()
assert s["total_events"] == 2
assert "claude" in s["agents"]
assert s["kind_counts"]["dispatch"] == 1
assert s["kind_counts"]["commit"] == 1
def test_max_events_limit(self, writer):
for i in range(10):
writer.ingest(AgentEvent(kind=EventKind.IDLE, agent="a", detail=str(i)))
text = writer.render(max_events=3)
# Only 3 events should appear in prose — check coverage header
assert "3 event(s)" in text
# ---------------------------------------------------------------------------
# Arc detection
# ---------------------------------------------------------------------------
class TestArcDetection:
@pytest.fixture
def writer(self, tmp_path):
return ChronicleWriter(log_path=tmp_path / "chronicle.jsonl")
def _ingest(self, writer, *kinds, agent="claude"):
for k in kinds:
writer.ingest(AgentEvent(kind=k, agent=agent, detail="x"))
def test_struggle_and_recovery_arc(self, writer):
self._ingest(writer, EventKind.DISPATCH, EventKind.ERROR, EventKind.RECOVERY)
text = writer.render()
assert "struggle" in text.lower() or "trouble" in text.lower()
def test_no_arc_when_no_pattern(self, writer):
self._ingest(writer, EventKind.IDLE)
text = writer.render()
# Should not include arc language (only 1 event, no pattern)
assert "converged" not in text
assert "struggle" not in text
def test_solo_sprint_arc(self, writer):
self._ingest(
writer,
EventKind.DISPATCH,
EventKind.COMMIT,
EventKind.PR_OPEN,
EventKind.PR_MERGE,
)
text = writer.render()
assert "solo" in text.lower() or "alone" in text.lower()
def test_fleet_convergence_arc(self, writer, tmp_path):
writer2 = ChronicleWriter(log_path=tmp_path / "chronicle.jsonl")
writer2.ingest(AgentEvent(kind=EventKind.DISPATCH, agent="claude", detail="x"))
writer2.ingest(AgentEvent(kind=EventKind.COLLABORATION, agent="mimo", detail="x"))
writer2.ingest(AgentEvent(kind=EventKind.COMMIT, agent="claude", detail="x"))
text = writer2.render()
assert "converged" in text.lower() or "fleet" in text.lower()
def test_silent_grind_arc(self, writer):
self._ingest(writer, EventKind.COMMIT, EventKind.COMMIT, EventKind.COMMIT)
text = writer.render()
assert "steady" in text.lower() or "quiet" in text.lower() or "grind" in text.lower()
def test_abandon_then_retry_arc(self, writer):
self._ingest(writer, EventKind.DISPATCH, EventKind.ABANDON, EventKind.DISPATCH)
text = writer.render()
assert "let go" in text.lower() or "abandon" in text.lower() or "called again" in text.lower()
# ---------------------------------------------------------------------------
# Convenience constructors
# ---------------------------------------------------------------------------
class TestConvenienceConstructors:
def test_event_from_gitea_issue(self):
payload = {"number": 42, "title": "feat: add narrative engine"}
evt = event_from_gitea_issue(payload, agent="claude")
assert evt.kind == EventKind.DISPATCH
assert "42" in evt.detail
assert evt.agent == "claude"
def test_event_from_heartbeat(self):
hb = {"model": "claude-sonnet", "status": "thinking", "cycle": 7}
evt = event_from_heartbeat(hb)
assert evt.kind == EventKind.HEARTBEAT
assert evt.agent == "claude-sonnet"
assert "7" in evt.detail
def test_event_from_commit(self):
commit = {"message": "feat: chronicle\n\nFixes #1607", "sha": "abc1234567"}
evt = event_from_commit(commit, agent="claude")
assert evt.kind == EventKind.COMMIT
assert evt.detail == "feat: chronicle" # subject line only
assert evt.metadata["sha"] == "abc12345"

View File

@@ -1,58 +0,0 @@
from pathlib import Path
import yaml
from nexus.mempalace.config import CORE_ROOMS
from nexus.mempalace.conversation_artifacts import (
ConversationArtifact,
build_request_response_artifact,
extract_alexander_request_pairs,
normalize_speaker,
)
def test_sovereign_room_is_core_room() -> None:
assert "sovereign" in CORE_ROOMS
rooms_yaml = yaml.safe_load(Path("mempalace/rooms.yaml").read_text())
assert any(room["key"] == "sovereign" for room in rooms_yaml["core_rooms"])
def test_normalize_speaker_maps_alexander_variants() -> None:
assert normalize_speaker("Alexander Whitestone") == "alexander"
assert normalize_speaker("Rockachopa") == "alexander"
assert normalize_speaker(" ALEXANDER ") == "alexander"
assert normalize_speaker("Bezalel") == "bezalel"
def test_build_request_response_artifact_creates_sovereign_metadata() -> None:
artifact = build_request_response_artifact(
requester="Alexander Whitestone",
responder="Allegro",
request_text="Please organize my conversation artifacts.",
response_text="I will catalog them under a sovereign room.",
source="telegram:timmy-time",
timestamp="2026-04-16T01:30:00Z",
)
assert isinstance(artifact, ConversationArtifact)
assert artifact.room == "sovereign"
assert artifact.metadata["speaker_tags"] == ["speaker:alexander", "speaker:allegro"]
assert artifact.metadata["artifact_type"] == "alexander_request_response"
assert artifact.metadata["responder"] == "allegro"
assert "## Alexander Request" in artifact.text
assert "## Wizard Response" in artifact.text
def test_extract_alexander_request_pairs_finds_following_wizard_response() -> None:
turns = [
{"speaker": "Alexander Whitestone", "text": "Catalog my requests as artifacts.", "timestamp": "2026-04-16T01:00:00Z"},
{"speaker": "Allegro", "text": "I'll build a sovereign room contract.", "timestamp": "2026-04-16T01:01:00Z"},
{"speaker": "Alexander", "text": "Make sure my words are easy to recall.", "timestamp": "2026-04-16T01:02:00Z"},
{"speaker": "Allegro", "text": "I will tag them with speaker metadata.", "timestamp": "2026-04-16T01:03:00Z"},
]
artifacts = extract_alexander_request_pairs(turns, responder="Allegro", source="telegram")
assert len(artifacts) == 2
assert artifacts[0].metadata["request_timestamp"] == "2026-04-16T01:00:00Z"
assert artifacts[1].metadata["response_timestamp"] == "2026-04-16T01:03:00Z"

View File

@@ -1,124 +0,0 @@
"""Tests for gitea_safe_push — Branch existence checks before file operations."""
import json
from unittest.mock import MagicMock, patch, call
from pathlib import Path
import pytest
import sys
sys.path.insert(0, str(Path(__file__).resolve().parent.parent))
from bin.gitea_safe_push import GiteaSafePush, GiteaAPIError
class TestGiteaAPIError:
def test_creation(self):
e = GiteaAPIError(404, "not found", '{"message":"not found"}')
assert e.status == 404
assert "404" in str(e)
def test_is_exception(self):
e = GiteaAPIError(500, "internal")
assert isinstance(e, Exception)
class TestBranchExists:
def test_branch_exists(self):
push = GiteaSafePush("https://forge.example.com", "token123")
with patch.object(push, "_api", return_value={"name": "main"}):
assert push.branch_exists("owner/repo", "main") is True
def test_branch_not_exists(self):
push = GiteaSafePush("https://forge.example.com", "token123")
with patch.object(push, "_api", side_effect=GiteaAPIError(404, "not found")):
assert push.branch_exists("owner/repo", "nonexistent") is False
def test_api_error_propagates(self):
push = GiteaSafePush("https://forge.example.com", "token123")
with patch.object(push, "_api", side_effect=GiteaAPIError(500, "server error")):
with pytest.raises(GiteaAPIError):
push.branch_exists("owner/repo", "main")
class TestEnsureBranch:
def test_already_exists(self):
push = GiteaSafePush("https://forge.example.com", "token123")
with patch.object(push, "branch_exists", return_value=True):
assert push.ensure_branch("owner/repo", "my-branch") is True
def test_creates_branch(self):
push = GiteaSafePush("https://forge.example.com", "token123")
with patch.object(push, "branch_exists", side_effect=[False, True]):
with patch.object(push, "_api", return_value={"name": "my-branch"}):
assert push.ensure_branch("owner/repo", "my-branch", base="main") is True
def test_creation_fails(self):
push = GiteaSafePush("https://forge.example.com", "token123")
with patch.object(push, "branch_exists", return_value=False):
with patch.object(push, "_api", side_effect=GiteaAPIError(422, "invalid")):
assert push.ensure_branch("owner/repo", "bad-branch") is False
class TestPushFile:
def test_rejects_missing_branch(self):
push = GiteaSafePush("https://forge.example.com", "token123")
with patch.object(push, "branch_exists", return_value=False):
result = push.push_file("owner/repo", "missing", "file.py", "content", "msg")
assert result is False
def test_creates_new_file(self):
push = GiteaSafePush("https://forge.example.com", "token123")
with patch.object(push, "branch_exists", return_value=True):
with patch.object(push, "_api", side_effect=[
GiteaAPIError(404, "not found"), # GET existing file
{}, # POST new file
]):
result = push.push_file("owner/repo", "branch", "new.py", "content", "msg")
assert result is True
def test_updates_existing_file(self):
push = GiteaSafePush("https://forge.example.com", "token123")
with patch.object(push, "branch_exists", return_value=True):
with patch.object(push, "_api", side_effect=[
{"sha": "abc123"}, # GET existing file
{}, # PUT update
]):
result = push.push_file("owner/repo", "branch", "existing.py", "new content", "msg")
assert result is True
def test_create_branch_when_missing(self):
push = GiteaSafePush("https://forge.example.com", "token123")
# Mock branch_exists: first call returns False (doesn't exist),
# second call (inside ensure_branch) returns True (created externally)
exists_calls = [False, True]
exists_idx = [0]
def mock_exists(repo, branch):
idx = min(exists_idx[0], len(exists_calls) - 1)
exists_idx[0] += 1
return exists_calls[idx]
with patch.object(push, "branch_exists", side_effect=mock_exists):
with patch.object(push, "_api") as mock_api:
mock_api.side_effect = [
GiteaAPIError(404, "not found"), # GET existing file (not found)
{"content": {"path": "f.py"}}, # POST new file
]
result = push.push_file("owner/repo", "new-branch", "f.py", "c", "m", create_branch=True)
assert result is True
class TestPushFiles:
def test_push_multiple_files(self):
push = GiteaSafePush("https://forge.example.com", "token123")
with patch.object(push, "ensure_branch", return_value=True):
with patch.object(push, "push_file", return_value=True):
results = push.push_files("owner/repo", "branch", {
"a.py": "content a",
"b.py": "content b",
}, "message")
assert all(results.values())
assert len(results) == 2
def test_branch_creation_fails_aborts_all(self):
push = GiteaSafePush("https://forge.example.com", "token123")
with patch.object(push, "ensure_branch", return_value=False):
results = push.push_files("owner/repo", "bad", {"a.py": "x"}, "msg")
assert all(v is False for v in results.values())

View File

@@ -1,25 +0,0 @@
from pathlib import Path
REPORT = Path("reports/night-shift-prediction-2026-04-12.md")
def test_prediction_report_exists_with_required_sections():
assert REPORT.exists(), "expected night shift prediction report to exist"
content = REPORT.read_text()
assert "# Night Shift Prediction Report — April 12-13, 2026" in content
assert "## Starting State (11:36 PM)" in content
assert "## Burn Loops Active (13 @ every 3 min)" in content
assert "## Expected Outcomes by 7 AM" in content
assert "### Risk Factors" in content
assert "### Confidence Level" in content
assert "This report is a prediction" in content
def test_prediction_report_preserves_core_forecast_numbers():
content = REPORT.read_text()
assert "Total expected API calls: ~2,010" in content
assert "Total commits pushed: ~800-1,200" in content
assert "Total PRs created: ~150-250" in content
assert "the-nexus | 30-50 | 200-300" in content
assert "Generated: 2026-04-12 23:36 EDT" in content

View File

@@ -1,51 +0,0 @@
"""Test portals.json integrity — valid JSON, no duplicate keys, expected structure."""
from pathlib import Path
import json
def test_portals_json_valid():
"""portals.json must be valid JSON."""
path = Path(__file__).resolve().parents[1] / "portals.json"
data = json.loads(path.read_text(encoding="utf-8"))
assert isinstance(data, list), "portals.json should be a JSON array"
def test_portals_json_no_duplicate_keys():
"""portals.json must not contain duplicate keys in any object."""
path = Path(__file__).resolve().parents[1] / "portals.json"
content = path.read_text(encoding="utf-8")
def check_duplicates(pairs):
keys = [k for k, _ in pairs]
seen = set()
for k in keys:
assert k not in seen, f"Duplicate key '{k}' found in portals.json"
seen.add(k)
return dict(pairs)
json.loads(content, object_pairs_hook=check_duplicates)
def test_portals_json_structure():
"""Each portal entry must have required fields."""
path = Path(__file__).resolve().parents[1] / "portals.json"
data = json.loads(path.read_text(encoding="utf-8"))
required = {"id", "name", "description", "status", "color", "position"}
for i, portal in enumerate(data):
assert isinstance(portal, dict), f"Portal [{i}] is not a dict"
missing = required - set(portal.keys())
assert not missing, f"Portal [{i}] ({portal.get('id', '?')}) missing fields: {missing}"
def test_portals_json_positions_valid():
"""Each portal position must have x, y, z coordinates."""
path = Path(__file__).resolve().parents[1] / "portals.json"
data = json.loads(path.read_text(encoding="utf-8"))
for i, portal in enumerate(data):
pos = portal.get("position", {})
for axis in ("x", "y", "z"):
assert axis in pos, f"Portal [{i}] ({portal.get('id')}) missing position.{axis}"
assert isinstance(pos[axis], (int, float)), f"Portal [{i}] position.{axis} is not a number"

View File

@@ -1,45 +0,0 @@
from __future__ import annotations
import importlib.util
import sys
from pathlib import Path
import yaml
PROJECT_ROOT = Path(__file__).parent.parent
_spec = importlib.util.spec_from_file_location(
"sync_branch_protection_test",
PROJECT_ROOT / "scripts" / "sync_branch_protection.py",
)
_mod = importlib.util.module_from_spec(_spec)
sys.modules["sync_branch_protection_test"] = _mod
_spec.loader.exec_module(_mod)
build_branch_protection_payload = _mod.build_branch_protection_payload
def test_build_branch_protection_payload_enables_rebase_before_merge():
payload = build_branch_protection_payload(
"main",
{
"required_approvals": 1,
"dismiss_stale_approvals": True,
"require_ci_to_merge": False,
"block_deletions": True,
"block_force_push": True,
"block_on_outdated_branch": True,
},
)
assert payload["branch_name"] == "main"
assert payload["rule_name"] == "main"
assert payload["block_on_outdated_branch"] is True
assert payload["required_approvals"] == 1
assert payload["enable_status_check"] is False
def test_the_nexus_branch_protection_config_requires_up_to_date_branch():
config = yaml.safe_load((PROJECT_ROOT / ".gitea" / "branch-protection" / "the-nexus.yml").read_text())
rules = config["rules"]
assert rules["block_on_outdated_branch"] is True