Compare commits

7 commits: a617ef83f4, b634beb704, 3016e012cc, 60b9b90f34, c818a30522, 89dfa1e5de, d791c087cb
docs/KNOW_THY_FATHER_MULTIMODAL_PIPELINE.md (new file, 61 lines)
@@ -0,0 +1,61 @@
# Know Thy Father — Multimodal Media Consumption Pipeline

Refs #582

This document makes the epic operational by naming the current source-of-truth scripts, their handoff artifacts, and the one-command runner that coordinates them.

## Why this exists

The epic is already decomposed into four implemented phases, but the implementation truth is split across two script roots:

- `scripts/know_thy_father/` owns Phases 1, 3, and 4
- `scripts/twitter_archive/analyze_media.py` owns Phase 2
- `twitter-archive/know-thy-father/tracker.py report` owns the operator-facing status rollup

The new runner `scripts/know_thy_father/epic_pipeline.py` does not replace those scripts. It stitches them together into one explicit, reviewable plan.

## Phase map

| Phase | Script | Primary output |
|-------|--------|----------------|
| 1. Media Indexing | `scripts/know_thy_father/index_media.py` | `twitter-archive/know-thy-father/media_manifest.jsonl` |
| 2. Multimodal Analysis | `scripts/twitter_archive/analyze_media.py --batch 10` | `twitter-archive/know-thy-father/analysis.jsonl` + `meaning-kernels.jsonl` + `pipeline-status.json` |
| 3. Holographic Synthesis | `scripts/know_thy_father/synthesize_kernels.py` | `twitter-archive/knowledge/fathers_ledger.jsonl` |
| 4. Cross-Reference Audit | `scripts/know_thy_father/crossref_audit.py` | `twitter-archive/notes/crossref_report.md` |
| 5. Processing Log | `twitter-archive/know-thy-father/tracker.py report` | `twitter-archive/know-thy-father/REPORT.md` |

## One command per phase

```bash
python3 scripts/know_thy_father/index_media.py --tweets twitter-archive/extracted/tweets.jsonl --output twitter-archive/know-thy-father/media_manifest.jsonl
python3 scripts/twitter_archive/analyze_media.py --batch 10
python3 scripts/know_thy_father/synthesize_kernels.py --input twitter-archive/media/manifest.jsonl --output twitter-archive/knowledge/fathers_ledger.jsonl --summary twitter-archive/knowledge/fathers_ledger.summary.json
python3 scripts/know_thy_father/crossref_audit.py --soul SOUL.md --kernels twitter-archive/notes/know_thy_father_crossref.md --output twitter-archive/notes/crossref_report.md
python3 twitter-archive/know-thy-father/tracker.py report
```

## Runner commands

```bash
# Print the orchestrated plan
python3 scripts/know_thy_father/epic_pipeline.py

# JSON status snapshot of scripts + known artifact paths
python3 scripts/know_thy_father/epic_pipeline.py --status --json

# Execute one concrete step
python3 scripts/know_thy_father/epic_pipeline.py --run-step phase2_multimodal_analysis --batch-size 10
```
## Source-truth notes

- Phase 2 already contains its own kernel extraction path (`--extract-kernels`) and status output. The epic runner does not reimplement that logic.
- Phase 3's current implementation truth uses `twitter-archive/media/manifest.jsonl` as its default input. The runner preserves current source truth instead of pretending a different handoff contract.
- The processing log in `twitter-archive/know-thy-father/PROCESSING_LOG.md` can drift from current code reality. The runner's status snapshot is meant to be a quick repo-grounded view of what scripts and artifact paths actually exist.
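The status snapshot produced by `--status --json` has one entry per phase, each with an `outputs` list of `{path, exists}` records (the shape built by `build_status_snapshot` in `epic_pipeline.py`). That makes it easy to pull out the artifacts that do not exist yet. A minimal sketch — the sample data below is illustrative, not real repo state:

```python
# Minimal sketch: list artifact paths the status snapshot reports as missing.
# The shape mirrors build_status_snapshot() in epic_pipeline.py; the sample
# data here is illustrative only.
sample_snapshot = {
    "phase1_media_indexing": {
        "name": "Phase 1 — Media Indexing",
        "script": "scripts/know_thy_father/index_media.py",
        "script_exists": True,
        "outputs": [
            {"path": "twitter-archive/know-thy-father/media_manifest.jsonl", "exists": False},
        ],
    },
}

def missing_artifacts(snapshot):
    """Collect every declared output path whose file does not exist yet."""
    return [
        out["path"]
        for phase in snapshot.values()
        for out in phase["outputs"]
        if not out["exists"]
    ]

print(missing_artifacts(sample_snapshot))
```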
## What this PR does not claim

- It does not claim the local archive has been fully consumed.
- It does not claim the halted processing log has been resumed.
- It does not claim fact_store ingestion has been fully wired end-to-end.

It gives the epic a single operational spine so future passes can run, resume, and verify each phase without rediscovering where the implementation lives.
docs/MEMPALACE_EZRA_INTEGRATION.md (new file, 92 lines)
@@ -0,0 +1,92 @@
# MemPalace v3.0.0 — Ezra Integration Packet

This packet turns issue #570 into an executable, reviewable integration plan for Ezra's Hermes home.
It is a repo-side scaffold: no live Ezra host changes are claimed in this artifact.

## Commands

```bash
pip install mempalace==3.0.0
mempalace init ~/.hermes/ --yes
cat > ~/.hermes/mempalace.yaml <<'YAML'
wing: ezra_home
palace: ~/.mempalace/palace
rooms:
  - name: sessions
    description: Conversation history and durable agent transcripts
    globs:
      - "*.json"
      - "*.jsonl"
  - name: config
    description: Hermes configuration and runtime settings
    globs:
      - "*.yaml"
      - "*.yml"
      - "*.toml"
  - name: docs
    description: Notes, markdown docs, and operating reports
    globs:
      - "*.md"
      - "*.txt"
people: []
projects: []
YAML
echo "" | mempalace mine ~/.hermes/
echo "" | mempalace mine ~/.hermes/sessions/ --mode convos
mempalace search "your common queries"
mempalace wake-up
hermes mcp add mempalace -- python -m mempalace.mcp_server
```
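One detail worth noting in the commands above: the heredoc delimiter is quoted (`<<'YAML'`), so the shell writes the body literally — the `~` in `palace: ~/.mempalace/palace` reaches the file unexpanded, matching the manual config template below. A quick standalone check of that shell behavior (the `/tmp` path is just for the demo):

```shell
# Quoted heredoc delimiter => no expansion inside the body.
cat > /tmp/mempalace-demo.yaml <<'YAML'
palace: ~/.mempalace/palace
YAML
# The tilde is stored literally, not expanded to $HOME.
grep 'palace:' /tmp/mempalace-demo.yaml
```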
## Manual config template

```yaml
wing: ezra_home
palace: ~/.mempalace/palace
rooms:
  - name: sessions
    description: Conversation history and durable agent transcripts
    globs:
      - "*.json"
      - "*.jsonl"
  - name: config
    description: Hermes configuration and runtime settings
    globs:
      - "*.yaml"
      - "*.yml"
      - "*.toml"
  - name: docs
    description: Notes, markdown docs, and operating reports
    globs:
      - "*.md"
      - "*.txt"
people: []
projects: []
```
## Why this shape

- `wing: ezra_home` matches the issue's Ezra-specific integration target.
- `rooms` split the mined material into sessions, config, and docs to keep retrieval interpretable.
- Mining commands pipe empty stdin to avoid the interactive entity-detector hang noted in the evaluation.
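The room split can be pictured as simple glob routing over filenames. This is an illustrative sketch of the intent, not MemPalace's actual matching code; the `room_for` helper and its `"unrouted"` fallback are hypothetical:

```python
from fnmatch import fnmatch

# Glob-to-room mapping taken from the config template above.
ROOMS = {
    "sessions": ["*.json", "*.jsonl"],
    "config": ["*.yaml", "*.yml", "*.toml"],
    "docs": ["*.md", "*.txt"],
}

def room_for(filename: str) -> str:
    """Return the first room whose globs match; 'unrouted' is a hypothetical fallback."""
    for room, globs in ROOMS.items():
        if any(fnmatch(filename, pattern) for pattern in globs):
            return room
    return "unrouted"

print(room_for("transcript.jsonl"))  # sessions
print(room_for("hermes.toml"))       # config
print(room_for("REPORT.md"))         # docs
```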
## Gotchas

- `mempalace init` is still interactive in room approval flow; write mempalace.yaml manually if the init output stalls.
- The yaml key is `wing:` not `wings:`. Using the wrong key causes mine/setup failures.
- Pipe empty stdin into mining commands (`echo "" | ...`) to avoid the entity-detector stdin hang on larger directories.
- First mine downloads the ChromaDB embedding model cache (~79MB).
- Report Ezra's before/after metrics back to issue #568 after live installation and retrieval tests.
## Report back to #568

After live execution on Ezra's actual environment, post back to #568 with:

- install result
- mine duration and corpus size
- 2-3 real search queries + retrieved results
- wake-up context token count
- whether MCP wiring succeeded

## Honest scope boundary

This repo artifact does **not** prove live installation on Ezra's host. It makes the work reproducible and testable so the next pass can execute it without guesswork.
docs/laptop-fleet-manifest.example.yaml (new file, 62 lines)
@@ -0,0 +1,62 @@
fleet_name: timmy-laptop-fleet
machines:
  - hostname: timmy-anchor-a
    machine_type: laptop
    ram_gb: 16
    cpu_cores: 8
    os: macOS
    adapter_condition: good
    idle_watts: 11
    always_on_capable: true
    notes: candidate 24/7 anchor agent

  - hostname: timmy-anchor-b
    machine_type: laptop
    ram_gb: 8
    cpu_cores: 4
    os: Linux
    adapter_condition: good
    idle_watts: 13
    always_on_capable: true
    notes: candidate 24/7 anchor agent

  - hostname: timmy-daylight-a
    machine_type: laptop
    ram_gb: 32
    cpu_cores: 10
    os: macOS
    adapter_condition: ok
    idle_watts: 22
    always_on_capable: true
    notes: higher-performance daylight compute

  - hostname: timmy-daylight-b
    machine_type: laptop
    ram_gb: 16
    cpu_cores: 8
    os: Linux
    adapter_condition: ok
    idle_watts: 19
    always_on_capable: true
    notes: daylight compute node

  - hostname: timmy-daylight-c
    machine_type: laptop
    ram_gb: 8
    cpu_cores: 4
    os: Windows
    adapter_condition: needs_replacement
    idle_watts: 17
    always_on_capable: false
    notes: repair power adapter before production duty

  - hostname: timmy-desktop-nas
    machine_type: desktop
    ram_gb: 64
    cpu_cores: 12
    os: Linux
    adapter_condition: good
    idle_watts: 58
    always_on_capable: false
    has_4tb_ssd: true
    notes: desktop plus 4TB SSD NAS and heavy compute during peak sun
docs/laptop-fleet-plan.example.md (new file, 30 lines)
@@ -0,0 +1,30 @@
# Laptop Fleet Deployment Plan

Fleet: timmy-laptop-fleet
Machine count: 6
24/7 anchor agents: timmy-anchor-a, timmy-anchor-b
Desktop/NAS: timmy-desktop-nas
Daylight schedule: 10:00-16:00

## Role mapping

| Hostname | Role | Schedule | Duty cycle |
|---|---|---|---|
| timmy-anchor-a | anchor_agent | 24/7 | continuous |
| timmy-anchor-b | anchor_agent | 24/7 | continuous |
| timmy-daylight-a | daylight_agent | 10:00-16:00 | peak_solar |
| timmy-daylight-b | daylight_agent | 10:00-16:00 | peak_solar |
| timmy-daylight-c | daylight_agent | 10:00-16:00 | peak_solar |
| timmy-desktop-nas | desktop_nas | 10:00-16:00 | daylight_only |

## Machine inventory

| Hostname | Type | RAM (GB) | CPU cores | OS | Adapter | Idle watts | Notes |
|---|---|---:|---:|---|---|---:|---|
| timmy-anchor-a | laptop | 16 | 8 | macOS | good | 11 | candidate 24/7 anchor agent |
| timmy-anchor-b | laptop | 8 | 4 | Linux | good | 13 | candidate 24/7 anchor agent |
| timmy-daylight-a | laptop | 32 | 10 | macOS | ok | 22 | higher-performance daylight compute |
| timmy-daylight-b | laptop | 16 | 8 | Linux | ok | 19 | daylight compute node |
| timmy-daylight-c | laptop | 8 | 4 | Windows | needs_replacement | 17 | repair power adapter before production duty |
| timmy-desktop-nas | desktop | 64 | 12 | Linux | good | 58 | desktop plus 4TB SSD NAS and heavy compute during peak sun |
docs/nh-broadband-install-packet.example.md (new file, 70 lines)
@@ -0,0 +1,70 @@
# NH Broadband install packet — Lempster cabin

## Verified provider contacts

- Website: https://nhbroadband.com/
- Check availability portal: https://www.connectsignup.com/?client=118
- Customer support: (866) 431-1928
- Technical support: (866) 431-7617
- Support hours: 7 days a week, 8 a.m. – 8 p.m. EST
- Residential agreement: https://nhbroadband.com/assets/Documents/NH-Broadband-Residential-Agreement-Form.pdf
- Conduit guidelines: https://nhbroadband.com/assets/Documents/NHBroadband_ConduitGuidelines.pdf

## Service address packet

- Site label: Lempster cabin
- Street address: 123 Example Road
- City/State/ZIP: Lempster, NH 03605
- Contact: Cabin operator | 603-555-0100 | operator@example.com
- Desired install window: Weekday mornings
- Driveway note: Long gravel driveway; call ahead before arrival.

## Selected plan

- Plan: Premier 1 Gig (1,000 Mbps) Internet
- Speed: 1 Gbps
- Base monthly price: $79.95
- Estimated monthly total: $79.95
- Router policy: Customer must supply a wireless router unless Managed Wi-Fi Service is added.
- Add-ons: none selected

## Live actions still required

- Availability status: pending_exact_address_check
- Availability note: Use the official availability portal with the exact service address, then confirm by phone before scheduling.
- Payment method status: not_ready
- Payment note: Prepare first-month charges plus any install fee before the live scheduling step.
- Run the exact address through the official availability portal or with customer support.
- Call customer support and confirm the installer can reach the cabin using the driveway note above.
- Confirm the install fee and first-month charges before scheduling.
- Prepare a payment method before the scheduling call.
- Book the earliest acceptable installation appointment and record the confirmation number.
- Post a speed test to the issue after the installation is complete.
## Call log

| Date | Channel | Contact | Outcome | Follow-up |
|---|---|---|---|---|
| TBD | portal | exact-address lookup | pending | record portal result |
| TBD | phone | NH Broadband customer support | pending | confirm availability + install fee + appointment |

## Appointment checklist

- [ ] Exact address entered into the official portal
- [ ] Fiber availability confirmed for the exact cabin address
- [ ] Monthly price and any install fee recorded
- [ ] Driveway access note relayed to the installer
- [ ] Appointment date/time and confirmation number captured
- [ ] Payment method ready for first month + install fee

## Post-install verification

- [ ] Installation completed
- [ ] Speed test posted back to Gitea issue #533
- [ ] Router / Wi-Fi setup confirmed inside the cabin

## Notes

Placeholder request only. Replace the example address and contact details before using the portal or calling NH Broadband.
docs/nh-broadband-install-request.example.yaml (new file, 18 lines)
@@ -0,0 +1,18 @@
site_label: Lempster cabin
street_address: 123 Example Road
city: Lempster
state: NH
zip: "03605"
contact_name: Cabin operator
contact_phone: 603-555-0100
contact_email: operator@example.com
preferred_plan: premier_1_gig
managed_wifi: false
safe_secure: false
extended_wifi_units: 0
payment_ready: false
desired_install_window: Weekday mornings
notes: |
  Placeholder request only. Replace the example address and contact details before
  using the portal or calling NH Broadband.
driveway_note: Long gravel driveway; call ahead before arrival.
@@ -0,0 +1,81 @@
# NH Broadband fiber install public research packet

Refs #533

## Scope of this packet

This packet moves LAB-008 forward without pretending that a real-world service order or appointment was completed in-session.

The verified public-source inputs used here were:

- https://nhbroadband.com/
- https://nhbroadband.com/contact/
- https://www.connectsignup.com/?client=118

## What was verified from official public sources

1. NH Broadband publicly markets 100% fiber internet and says the residential offer has:
   - no contracts
   - no gimmicks
   - no hidden fees
2. The residential pricing shown on the NH Broadband home page is:
   - Basic 100 Mbps Internet — $49.95/month
   - Premier 1 Gig (1,000 Mbps) Internet — $79.95/month
   - Ultimate 2 Gig (2,000 Mbps) Internet — $99.95/month
3. The public availability flow requires an exact service address through the official portal:
   - https://www.connectsignup.com/?client=118
4. The availability portal states: "Serviceability is subject to areas where the Conexon Connect network is being or will be constructed."
5. The publicly listed support contacts are:
   - Customer support: (866) 431-1928
   - Technical support: (866) 431-7617
   - Support hours: 7 days a week, 8 a.m. – 8 p.m. EST
   - General inquiries / billing email: info@nhbroadband.com
   - Technical support email: support@nhbroadband.com
6. NH Broadband also publishes the following operator-useful documents:
   - Residential agreement form: https://nhbroadband.com/assets/Documents/NH-Broadband-Residential-Agreement-Form.pdf
   - Conduit guidelines: https://nhbroadband.com/assets/Documents/NHBroadband_ConduitGuidelines.pdf

## Most useful public-doc answer

Public documentation is enough to confirm that NH Broadband offers fiber plans in this service ecosystem and that the next required action is an exact-address availability check through the official portal or with customer support.

Public documentation is not enough to prove that the specific cabin address is serviceable today or to book an installation without a live interaction.

## What was not verified in-session

- Exact-address availability at the cabin was not verified in-session.
- Actual installation appointment was not scheduled in-session.
- No confirmation number was obtained in-session.
- No payment method was submitted in-session.
- No public install fee was confirmed from the pages reviewed in-session.
- No post-install speed test exists yet because the installation is not complete.

## Recommended next live actions

1. Enter the exact cabin address into the official availability portal:
   - https://www.connectsignup.com/?client=118
2. If the portal result is unclear or negative, call NH Broadband customer support at (866) 431-1928 and verify the exact address manually.
3. During the live call, confirm all of the following before accepting an appointment:
   - exact monthly price for the selected tier
   - whether there is any install fee
   - earliest available installation slot
   - whether the installer needs any special driveway or conduit prep details
4. Record the appointment date/time and confirmation number in the issue thread.
5. After installation, post a speed test result back to issue #533.

## Deliverables in this PR

- `scripts/plan_nh_broadband_install.py` — deterministic packet builder for the live scheduling step
- `docs/nh-broadband-install-request.example.yaml` — placeholder request manifest to fill in before the real call
- `docs/nh-broadband-install-packet.example.md` — rendered install packet with call log + appointment checklist
- `tests/test_nh_broadband_install_planner.py` — regression coverage locking the packet and the public research facts

## Why the issue stays open

LAB-008 still requires real-world execution:

- an exact-address availability lookup
- a live scheduling step
- a payment decision
- a real appointment confirmation
- a post-install speed test

This PR should therefore advance the issue, not close it.
scripts/know_thy_father/epic_pipeline.py (new file, 127 lines)
@@ -0,0 +1,127 @@
#!/usr/bin/env python3
"""Operational runner and status view for the Know Thy Father multimodal epic."""

import argparse
import json
from pathlib import Path
from subprocess import run


PHASES = [
    {
        "id": "phase1_media_indexing",
        "name": "Phase 1 — Media Indexing",
        "script": "scripts/know_thy_father/index_media.py",
        "command_template": "python3 scripts/know_thy_father/index_media.py --tweets twitter-archive/extracted/tweets.jsonl --output twitter-archive/know-thy-father/media_manifest.jsonl",
        "outputs": ["twitter-archive/know-thy-father/media_manifest.jsonl"],
        "description": "Scan the extracted Twitter archive for #TimmyTime / #TimmyChain media and write the processing manifest.",
    },
    {
        "id": "phase2_multimodal_analysis",
        "name": "Phase 2 — Multimodal Analysis",
        "script": "scripts/twitter_archive/analyze_media.py",
        "command_template": "python3 scripts/twitter_archive/analyze_media.py --batch {batch_size}",
        "outputs": [
            "twitter-archive/know-thy-father/analysis.jsonl",
            "twitter-archive/know-thy-father/meaning-kernels.jsonl",
            "twitter-archive/know-thy-father/pipeline-status.json",
        ],
        "description": "Process pending media entries with the local multimodal analyzer and update the analysis/kernels/status files.",
    },
    {
        "id": "phase3_holographic_synthesis",
        "name": "Phase 3 — Holographic Synthesis",
        "script": "scripts/know_thy_father/synthesize_kernels.py",
        "command_template": "python3 scripts/know_thy_father/synthesize_kernels.py --input twitter-archive/media/manifest.jsonl --output twitter-archive/knowledge/fathers_ledger.jsonl --summary twitter-archive/knowledge/fathers_ledger.summary.json",
        "outputs": [
            "twitter-archive/knowledge/fathers_ledger.jsonl",
            "twitter-archive/knowledge/fathers_ledger.summary.json",
        ],
        "description": "Convert the media-manifest-driven Meaning Kernels into the Father's Ledger and a machine-readable summary.",
    },
    {
        "id": "phase4_cross_reference_audit",
        "name": "Phase 4 — Cross-Reference Audit",
        "script": "scripts/know_thy_father/crossref_audit.py",
        "command_template": "python3 scripts/know_thy_father/crossref_audit.py --soul SOUL.md --kernels twitter-archive/notes/know_thy_father_crossref.md --output twitter-archive/notes/crossref_report.md",
        "outputs": ["twitter-archive/notes/crossref_report.md"],
        "description": "Compare Know Thy Father kernels against SOUL.md and related canon, then emit a Markdown audit report.",
    },
    {
        "id": "phase5_processing_log",
        "name": "Phase 5 — Processing Log / Status",
        "script": "twitter-archive/know-thy-father/tracker.py",
        "command_template": "python3 twitter-archive/know-thy-father/tracker.py report",
        "outputs": ["twitter-archive/know-thy-father/REPORT.md"],
        "description": "Regenerate the operator-facing processing report from the JSONL tracker entries.",
    },
]


def build_pipeline_plan(batch_size: int = 10):
    plan = []
    for phase in PHASES:
        plan.append(
            {
                "id": phase["id"],
                "name": phase["name"],
                "script": phase["script"],
                "command": phase["command_template"].format(batch_size=batch_size),
                "outputs": list(phase["outputs"]),
                "description": phase["description"],
            }
        )
    return plan


def build_status_snapshot(repo_root: Path):
    snapshot = {}
    for phase in build_pipeline_plan():
        script_path = repo_root / phase["script"]
        snapshot[phase["id"]] = {
            "name": phase["name"],
            "script": phase["script"],
            "script_exists": script_path.exists(),
            "outputs": [
                {
                    "path": output,
                    "exists": (repo_root / output).exists(),
                }
                for output in phase["outputs"]
            ],
        }
    return snapshot


def run_step(repo_root: Path, step_id: str, batch_size: int = 10):
    plan = {step["id"]: step for step in build_pipeline_plan(batch_size=batch_size)}
    if step_id not in plan:
        raise SystemExit(f"Unknown step: {step_id}")
    step = plan[step_id]
    return run(step["command"], cwd=repo_root, shell=True, check=False)


def main():
    parser = argparse.ArgumentParser(description="Know Thy Father epic orchestration helper")
    parser.add_argument("--batch-size", type=int, default=10)
    parser.add_argument("--status", action="store_true")
    parser.add_argument("--run-step", default=None)
    parser.add_argument("--json", action="store_true")
    args = parser.parse_args()

    repo_root = Path(__file__).resolve().parents[2]

    if args.run_step:
        result = run_step(repo_root, args.run_step, batch_size=args.batch_size)
        raise SystemExit(result.returncode)

    payload = build_status_snapshot(repo_root) if args.status else build_pipeline_plan(batch_size=args.batch_size)
    if args.json or args.status:
        print(json.dumps(payload, indent=2))
    else:
        for step in payload:
            print(f"[{step['id']}] {step['command']}")


if __name__ == "__main__":
    main()
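One subtlety in `build_pipeline_plan` above: every `command_template` goes through `str.format(batch_size=...)`, but only the Phase 2 template declares the `{batch_size}` placeholder; the other templates contain no braces and pass through unchanged. A quick standalone illustration of that behavior:

```python
# Only templates that declare {batch_size} are affected by .format();
# placeholder-free templates come back verbatim.
templated = "python3 scripts/twitter_archive/analyze_media.py --batch {batch_size}"
plain = "python3 twitter-archive/know-thy-father/tracker.py report"

assert templated.format(batch_size=25).endswith("--batch 25")
assert plain.format(batch_size=25) == plain
print("format behavior confirmed")
```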
scripts/mempalace_ezra_integration.py (new file, 159 lines)
@@ -0,0 +1,159 @@
#!/usr/bin/env python3
"""Prepare a MemPalace v3.0.0 integration packet for Ezra's Hermes home."""

import argparse
import json
from pathlib import Path

PACKAGE_SPEC = "mempalace==3.0.0"
DEFAULT_HERMES_HOME = "~/.hermes/"
DEFAULT_SESSIONS_DIR = "~/.hermes/sessions/"
DEFAULT_PALACE_PATH = "~/.mempalace/palace"
DEFAULT_WING = "ezra_home"


def build_yaml_template(wing: str, palace_path: str) -> str:
    return (
        f"wing: {wing}\n"
        f"palace: {palace_path}\n"
        "rooms:\n"
        "  - name: sessions\n"
        "    description: Conversation history and durable agent transcripts\n"
        "    globs:\n"
        "      - \"*.json\"\n"
        "      - \"*.jsonl\"\n"
        "  - name: config\n"
        "    description: Hermes configuration and runtime settings\n"
        "    globs:\n"
        "      - \"*.yaml\"\n"
        "      - \"*.yml\"\n"
        "      - \"*.toml\"\n"
        "  - name: docs\n"
        "    description: Notes, markdown docs, and operating reports\n"
        "    globs:\n"
        "      - \"*.md\"\n"
        "      - \"*.txt\"\n"
        "people: []\n"
        "projects: []\n"
    )


def build_plan(overrides: dict | None = None) -> dict:
    overrides = overrides or {}
    hermes_home = overrides.get("hermes_home", DEFAULT_HERMES_HOME)
    sessions_dir = overrides.get("sessions_dir", DEFAULT_SESSIONS_DIR)
    palace_path = overrides.get("palace_path", DEFAULT_PALACE_PATH)
    wing = overrides.get("wing", DEFAULT_WING)
    yaml_template = build_yaml_template(wing=wing, palace_path=palace_path)

    config_home = hermes_home[:-1] if hermes_home.endswith("/") else hermes_home
    plan = {
        "package_spec": PACKAGE_SPEC,
        "hermes_home": hermes_home,
        "sessions_dir": sessions_dir,
        "palace_path": palace_path,
        "wing": wing,
        "config_path": f"{config_home}/mempalace.yaml",
        "install_command": f"pip install {PACKAGE_SPEC}",
        "init_command": f"mempalace init {hermes_home} --yes",
        "mine_home_command": f"echo \"\" | mempalace mine {hermes_home}",
        "mine_sessions_command": f"echo \"\" | mempalace mine {sessions_dir} --mode convos",
        "search_command": 'mempalace search "your common queries"',
        "wake_up_command": "mempalace wake-up",
        "mcp_command": "hermes mcp add mempalace -- python -m mempalace.mcp_server",
        "yaml_template": yaml_template,
        "gotchas": [
            "`mempalace init` is still interactive in room approval flow; write mempalace.yaml manually if the init output stalls.",
            "The yaml key is `wing:` not `wings:`. Using the wrong key causes mine/setup failures.",
            "Pipe empty stdin into mining commands (`echo \"\" | ...`) to avoid the entity-detector stdin hang on larger directories.",
            "First mine downloads the ChromaDB embedding model cache (~79MB).",
            "Report Ezra's before/after metrics back to issue #568 after live installation and retrieval tests.",
        ],
    }
    return plan


def render_markdown(plan: dict) -> str:
    gotchas = "\n".join(f"- {item}" for item in plan["gotchas"])
    return f"""# MemPalace v3.0.0 — Ezra Integration Packet

This packet turns issue #570 into an executable, reviewable integration plan for Ezra's Hermes home.
It is a repo-side scaffold: no live Ezra host changes are claimed in this artifact.

## Commands

```bash
{plan['install_command']}
{plan['init_command']}
cat > {plan['config_path']} <<'YAML'
{plan['yaml_template'].rstrip()}
YAML
{plan['mine_home_command']}
{plan['mine_sessions_command']}
{plan['search_command']}
{plan['wake_up_command']}
{plan['mcp_command']}
```

## Manual config template

```yaml
{plan['yaml_template'].rstrip()}
```

## Why this shape

- `wing: {plan['wing']}` matches the issue's Ezra-specific integration target.
- `rooms` split the mined material into sessions, config, and docs to keep retrieval interpretable.
- Mining commands pipe empty stdin to avoid the interactive entity-detector hang noted in the evaluation.

## Gotchas

{gotchas}

## Report back to #568

After live execution on Ezra's actual environment, post back to #568 with:
- install result
- mine duration and corpus size
- 2-3 real search queries + retrieved results
- wake-up context token count
- whether MCP wiring succeeded

## Honest scope boundary

This repo artifact does **not** prove live installation on Ezra's host. It makes the work reproducible and testable so the next pass can execute it without guesswork.
"""


def main() -> None:
    parser = argparse.ArgumentParser(description="Prepare the MemPalace Ezra integration packet")
    parser.add_argument("--hermes-home", default=DEFAULT_HERMES_HOME)
    parser.add_argument("--sessions-dir", default=DEFAULT_SESSIONS_DIR)
    parser.add_argument("--palace-path", default=DEFAULT_PALACE_PATH)
    parser.add_argument("--wing", default=DEFAULT_WING)
    parser.add_argument("--output", default=None)
    parser.add_argument("--json", action="store_true")
    args = parser.parse_args()

    plan = build_plan(
        {
            "hermes_home": args.hermes_home,
            "sessions_dir": args.sessions_dir,
            "palace_path": args.palace_path,
            "wing": args.wing,
        }
    )
    rendered = json.dumps(plan, indent=2) if args.json else render_markdown(plan)

    if args.output:
        output_path = Path(args.output).expanduser()
        output_path.parent.mkdir(parents=True, exist_ok=True)
        output_path.write_text(rendered, encoding="utf-8")
        print(f"MemPalace integration packet written to {output_path}")
    else:
        print(rendered)


if __name__ == "__main__":
    main()
scripts/plan_laptop_fleet.py (new file, 155 lines)
@@ -0,0 +1,155 @@
#!/usr/bin/env python3
from __future__ import annotations

import argparse
import json
from pathlib import Path
from typing import Any

import yaml

DAYLIGHT_START = "10:00"
DAYLIGHT_END = "16:00"


def load_manifest(path: str | Path) -> dict[str, Any]:
    data = yaml.safe_load(Path(path).read_text()) or {}
    data.setdefault("machines", [])
    return data


def validate_manifest(data: dict[str, Any]) -> None:
    machines = data.get("machines", [])
    if not machines:
        raise ValueError("manifest must contain at least one machine")

    seen: set[str] = set()
    for machine in machines:
        hostname = machine.get("hostname", "").strip()
        if not hostname:
            raise ValueError("each machine must declare a hostname")
        if hostname in seen:
            raise ValueError(f"duplicate hostname: {hostname} (unique hostnames are required)")
        seen.add(hostname)

        for field in ("machine_type", "ram_gb", "cpu_cores", "os", "adapter_condition"):
            if field not in machine:
                raise ValueError(f"machine {hostname} missing required field: {field}")


def _laptops(machines: list[dict[str, Any]]) -> list[dict[str, Any]]:
    return [m for m in machines if m.get("machine_type") == "laptop"]


def _desktop(machines: list[dict[str, Any]]) -> dict[str, Any] | None:
    for machine in machines:
        if machine.get("machine_type") == "desktop":
            return machine
    return None


def choose_anchor_agents(machines: list[dict[str, Any]], count: int = 2) -> list[dict[str, Any]]:
    eligible = [
        m for m in _laptops(machines)
        if m.get("adapter_condition") in {"good", "ok"} and m.get("always_on_capable", True)
    ]
    eligible.sort(key=lambda m: (m.get("idle_watts", 9999), -m.get("ram_gb", 0), -m.get("cpu_cores", 0), m["hostname"]))
    return eligible[:count]


def assign_roles(machines: list[dict[str, Any]]) -> dict[str, Any]:
    anchors = choose_anchor_agents(machines, count=2)
    anchor_names = {m["hostname"] for m in anchors}
    desktop = _desktop(machines)

    mapping: dict[str, dict[str, Any]] = {}
    for machine in machines:
        hostname = machine["hostname"]
        if desktop and hostname == desktop["hostname"]:
            mapping[hostname] = {
                "role": "desktop_nas",
                "schedule": f"{DAYLIGHT_START}-{DAYLIGHT_END}",
                "duty_cycle": "daylight_only",
            }
        elif hostname in anchor_names:
            mapping[hostname] = {
                "role": "anchor_agent",
                "schedule": "24/7",
                "duty_cycle": "continuous",
            }
        else:
            mapping[hostname] = {
                "role": "daylight_agent",
                "schedule": f"{DAYLIGHT_START}-{DAYLIGHT_END}",
                "duty_cycle": "peak_solar",
            }
    return {
        "anchor_agents": [m["hostname"] for m in anchors],
        "desktop_nas": desktop["hostname"] if desktop else None,
        "role_mapping": mapping,
    }


def build_plan(data: dict[str, Any]) -> dict[str, Any]:
    validate_manifest(data)
    machines = data["machines"]
    role_plan = assign_roles(machines)
    return {
        "fleet_name": data.get("fleet_name", "timmy-laptop-fleet"),
        "machine_count": len(machines),
        "anchor_agents": role_plan["anchor_agents"],
        "desktop_nas": role_plan["desktop_nas"],
        "daylight_window": f"{DAYLIGHT_START}-{DAYLIGHT_END}",
        "role_mapping": role_plan["role_mapping"],
    }


def render_markdown(plan: dict[str, Any], data: dict[str, Any]) -> str:
    lines = [
        "# Laptop Fleet Deployment Plan",
        "",
        f"Fleet: {plan['fleet_name']}",
        f"Machine count: {plan['machine_count']}",
        f"24/7 anchor agents: {', '.join(plan['anchor_agents']) if plan['anchor_agents'] else 'TBD'}",
        f"Desktop/NAS: {plan['desktop_nas'] or 'TBD'}",
        f"Daylight schedule: {plan['daylight_window']}",
        "",
        "## Role mapping",
        "",
        "| Hostname | Role | Schedule | Duty cycle |",
        "|---|---|---|---|",
    ]
    for hostname, role in sorted(plan["role_mapping"].items()):
        lines.append(f"| {hostname} | {role['role']} | {role['schedule']} | {role['duty_cycle']} |")

    lines.extend([
        "",
        "## Machine inventory",
        "",
        "| Hostname | Type | RAM | CPU cores | OS | Adapter | Idle watts | Notes |",
        "|---|---|---:|---:|---|---|---:|---|",
    ])
    for machine in data["machines"]:
        lines.append(
            f"| {machine['hostname']} | {machine['machine_type']} | {machine['ram_gb']} | {machine['cpu_cores']} | {machine['os']} | {machine['adapter_condition']} | {machine.get('idle_watts', 'n/a')} | {machine.get('notes', '')} |"
        )
    return "\n".join(lines) + "\n"


def main() -> int:
    parser = argparse.ArgumentParser(description="Plan LAB-005 laptop fleet deployment.")
    parser.add_argument("manifest", help="Path to laptop fleet manifest YAML")
    parser.add_argument("--markdown", action="store_true", help="Render a markdown deployment plan instead of JSON")
    args = parser.parse_args()

    data = load_manifest(args.manifest)
    plan = build_plan(data)
    if args.markdown:
        print(render_markdown(plan, data))
    else:
        print(json.dumps(plan, indent=2))
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
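The anchor-selection policy in `choose_anchor_agents` can be exercised standalone: laptops with a healthy adapter are ranked by lowest idle wattage, then most RAM and cores, with hostname as a deterministic tiebreaker. A minimal sketch with made-up fleet data (hostnames and wattages are illustrative only):

```python
def choose_anchor_agents(machines, count=2):
    # Same selection rule as plan_laptop_fleet.py: only always-on-capable
    # laptops with a good/ok adapter are eligible; lowest idle watts win,
    # with RAM, cores, and hostname breaking ties deterministically.
    eligible = [
        m for m in machines
        if m.get("machine_type") == "laptop"
        and m.get("adapter_condition") in {"good", "ok"}
        and m.get("always_on_capable", True)
    ]
    eligible.sort(key=lambda m: (m.get("idle_watts", 9999), -m.get("ram_gb", 0), -m.get("cpu_cores", 0), m["hostname"]))
    return eligible[:count]


fleet = [
    {"hostname": "a", "machine_type": "laptop", "adapter_condition": "good", "idle_watts": 6, "ram_gb": 8, "cpu_cores": 4},
    {"hostname": "b", "machine_type": "laptop", "adapter_condition": "good", "idle_watts": 4, "ram_gb": 16, "cpu_cores": 8},
    {"hostname": "c", "machine_type": "laptop", "adapter_condition": "worn", "idle_watts": 3, "ram_gb": 8, "cpu_cores": 4},
    {"hostname": "d", "machine_type": "desktop", "adapter_condition": "good", "idle_watts": 40, "ram_gb": 32, "cpu_cores": 12},
]
anchors = [m["hostname"] for m in choose_anchor_agents(fleet)]
print(anchors)  # → ['b', 'a']
```

Note that "c" is excluded despite the lowest wattage because its adapter condition is neither good nor ok.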
313
scripts/plan_nh_broadband_install.py
Normal file
@@ -0,0 +1,313 @@
#!/usr/bin/env python3
from __future__ import annotations

import argparse
import json
import sys
from decimal import Decimal, ROUND_HALF_UP
from pathlib import Path
from typing import Any

import yaml

AVAILABILITY_URL = "https://www.connectsignup.com/?client=118"
WEBSITE_URL = "https://nhbroadband.com/"
RESIDENTIAL_AGREEMENT_URL = "https://nhbroadband.com/assets/Documents/NH-Broadband-Residential-Agreement-Form.pdf"
CONDUIT_GUIDELINES_URL = "https://nhbroadband.com/assets/Documents/NHBroadband_ConduitGuidelines.pdf"

PLAN_CATALOG: dict[str, dict[str, str | Decimal]] = {
    "basic_100": {
        "name": "Basic 100 Mbps Internet",
        "speed": "100 Mbps",
        "base_monthly": Decimal("49.95"),
        "router_policy": "Customer must supply a wireless router unless Managed Wi-Fi Service is added.",
    },
    "premier_1_gig": {
        "name": "Premier 1 Gig (1,000 Mbps) Internet",
        "speed": "1 Gbps",
        "base_monthly": Decimal("79.95"),
        "router_policy": "Customer must supply a wireless router unless Managed Wi-Fi Service is added.",
    },
    "ultimate_2_gig": {
        "name": "Ultimate 2 Gig (2,000 Mbps) Internet",
        "speed": "2 Gbps",
        "base_monthly": Decimal("99.95"),
        "router_policy": "Managed Wi-Fi router and Safe & Secure package are included with this tier.",
    },
}

ADD_ON_PRICES = {
    "managed_wifi": Decimal("4.95"),
    "safe_secure": Decimal("3.00"),
    "extended_wifi": Decimal("3.00"),
}

REQUIRED_FIELDS = (
    "site_label",
    "street_address",
    "city",
    "state",
    "zip",
    "contact_name",
    "contact_phone",
    "contact_email",
    "preferred_plan",
    "driveway_note",
)


def load_request(path: str | Path) -> dict[str, Any]:
    return yaml.safe_load(Path(path).read_text(encoding="utf-8")) or {}


def _clean_str(value: Any) -> str:
    return str(value or "").strip()


def _as_bool(value: Any) -> bool:
    if isinstance(value, bool):
        return value
    text = _clean_str(value).lower()
    if text in {"1", "true", "yes", "y", "on"}:
        return True
    if text in {"0", "false", "no", "n", "off", ""}:
        return False
    raise ValueError(f"cannot interpret boolean value: {value!r}")


def _as_non_negative_int(value: Any, field_name: str) -> int:
    if value in (None, ""):
        return 0
    try:
        parsed = int(value)
    except (TypeError, ValueError) as exc:
        raise ValueError(f"{field_name} must be an integer") from exc
    if parsed < 0:
        raise ValueError(f"{field_name} must be >= 0")
    return parsed


def _money(value: Decimal) -> str:
    return str(value.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))


def validate_request(data: dict[str, Any]) -> None:
    for field in REQUIRED_FIELDS:
        if not _clean_str(data.get(field)):
            raise ValueError(f"{field} is required")

    preferred_plan = _clean_str(data.get("preferred_plan"))
    if preferred_plan not in PLAN_CATALOG:
        raise ValueError(f"preferred_plan must be one of: {', '.join(sorted(PLAN_CATALOG))}")

    managed_wifi = _as_bool(data.get("managed_wifi", False))
    safe_secure = _as_bool(data.get("safe_secure", False))
    extended_wifi_units = _as_non_negative_int(data.get("extended_wifi_units", 0), "extended_wifi_units")

    if safe_secure and not managed_wifi and preferred_plan != "ultimate_2_gig":
        raise ValueError("safe_secure requires managed_wifi unless the 2 Gig tier is selected")

    if preferred_plan == "ultimate_2_gig":
        if data.get("managed_wifi") not in (None, "", True):
            raise ValueError("ultimate_2_gig already includes managed Wi-Fi")
        if data.get("safe_secure") not in (None, "", True):
            raise ValueError("ultimate_2_gig already includes Safe & Secure")

    # force parsing so invalid boolean values fail early
    _ = managed_wifi, safe_secure, extended_wifi_units


def build_install_packet(data: dict[str, Any]) -> dict[str, Any]:
    validate_request(data)

    plan_id = _clean_str(data["preferred_plan"])
    plan = PLAN_CATALOG[plan_id]
    managed_wifi = _as_bool(data.get("managed_wifi", False))
    safe_secure = _as_bool(data.get("safe_secure", False))
    extended_wifi_units = _as_non_negative_int(data.get("extended_wifi_units", 0), "extended_wifi_units")
    payment_ready = _as_bool(data.get("payment_ready", False))

    add_ons: list[dict[str, str]] = []
    monthly_total = Decimal(plan["base_monthly"])

    if plan_id == "ultimate_2_gig":
        add_ons.append({"name": "Managed Wi-Fi Service", "monthly_usd": _money(Decimal("0.00")), "note": "Included with the 2 Gig tier."})
        add_ons.append({"name": "Safe & Secure Package", "monthly_usd": _money(Decimal("0.00")), "note": "Included with the 2 Gig tier."})
    else:
        if managed_wifi:
            monthly_total += ADD_ON_PRICES["managed_wifi"]
            add_ons.append({"name": "Managed Wi-Fi Service", "monthly_usd": _money(ADD_ON_PRICES["managed_wifi"]), "note": "Includes Conexon Connect router."})
        if safe_secure:
            monthly_total += ADD_ON_PRICES["safe_secure"]
            add_ons.append({"name": "Safe & Secure Package", "monthly_usd": _money(ADD_ON_PRICES["safe_secure"]), "note": "Only available with Managed Wi-Fi."})

    if extended_wifi_units:
        ext_total = ADD_ON_PRICES["extended_wifi"] * extended_wifi_units
        monthly_total += ext_total
        add_ons.append({
            "name": "Extended Wi-Fi Service",
            "monthly_usd": _money(ext_total),
            "note": f"{extended_wifi_units} extender(s) at $3.00/month each.",
        })

    install_window = _clean_str(data.get("desired_install_window") or "TBD")
    notes = _clean_str(data.get("notes"))

    next_actions = [
        "Run the exact address through the official availability portal or with customer support.",
        "Call customer support and confirm the installer can reach the cabin using the driveway note below.",
        "Confirm install fee and first-month charges before scheduling.",
        "Book the earliest acceptable installation appointment and record the confirmation number.",
        "Post a speed test to the issue after the installation is complete.",
    ]
    if not payment_ready:
        next_actions.insert(3, "Prepare a payment method before the scheduling call.")

    return {
        "provider": {
            "name": "NH Broadband",
            "website": WEBSITE_URL,
            "availability_url": AVAILABILITY_URL,
            "customer_support_phone": "(866) 431-1928",
            "technical_support_phone": "(866) 431-7617",
            "support_hours": "7 days a week, 8 a.m. – 8 p.m. EST",
            "residential_agreement_url": RESIDENTIAL_AGREEMENT_URL,
            "conduit_guidelines_url": CONDUIT_GUIDELINES_URL,
        },
        "site": {
            "label": _clean_str(data["site_label"]),
            "street_address": _clean_str(data["street_address"]),
            "city": _clean_str(data["city"]),
            "state": _clean_str(data["state"]),
            "zip": _clean_str(data["zip"]),
            "driveway_note": _clean_str(data["driveway_note"]),
            "desired_install_window": install_window,
        },
        "contact": {
            "name": _clean_str(data["contact_name"]),
            "phone": _clean_str(data["contact_phone"]),
            "email": _clean_str(data["contact_email"]),
        },
        "selected_plan": {
            "id": plan_id,
            "name": str(plan["name"]),
            "speed": str(plan["speed"]),
            "base_monthly_usd": _money(Decimal(plan["base_monthly"])),
            "monthly_estimate_usd": _money(monthly_total),
            "router_policy": str(plan["router_policy"]),
            "add_ons": add_ons,
        },
        "availability_check": {
            "status": "pending_exact_address_check",
            "exact_address_required": True,
            "note": "Use the official availability portal with the exact service address, then confirm by phone before scheduling.",
        },
        "payment": {
            "payment_ready": payment_ready,
            "status": "ready" if payment_ready else "not_ready",
            "note": "Prepare first-month charges plus any install fee before the live scheduling step.",
        },
        "next_actions": next_actions,
        "notes": notes,
    }


def render_markdown(packet: dict[str, Any]) -> str:
    lines = [
        f"# NH Broadband install packet — {packet['site']['label']}",
        "",
        "## Verified provider contacts",
        "",
        f"- Website: {packet['provider']['website']}",
        f"- Check availability portal: {packet['provider']['availability_url']}",
        f"- Customer support: {packet['provider']['customer_support_phone']}",
        f"- Technical support: {packet['provider']['technical_support_phone']}",
        f"- Support hours: {packet['provider']['support_hours']}",
        f"- Residential agreement: {packet['provider']['residential_agreement_url']}",
        f"- Conduit guidelines: {packet['provider']['conduit_guidelines_url']}",
        "",
        "## Service address packet",
        "",
        f"- Site label: {packet['site']['label']}",
        f"- Street address: {packet['site']['street_address']}",
        f"- City/State/ZIP: {packet['site']['city']}, {packet['site']['state']} {packet['site']['zip']}",
        f"- Contact: {packet['contact']['name']} | {packet['contact']['phone']} | {packet['contact']['email']}",
        f"- Desired install window: {packet['site']['desired_install_window']}",
        f"- Driveway note: {packet['site']['driveway_note']}",
        "",
        "## Selected plan",
        "",
        f"- Plan: {packet['selected_plan']['name']}",
        f"- Speed: {packet['selected_plan']['speed']}",
        f"- Base monthly price: ${packet['selected_plan']['base_monthly_usd']}",
        f"- Estimated monthly total: ${packet['selected_plan']['monthly_estimate_usd']}",
        f"- Router policy: {packet['selected_plan']['router_policy']}",
    ]

    if packet["selected_plan"]["add_ons"]:
        lines.extend(["- Add-ons:"])
        for addon in packet["selected_plan"]["add_ons"]:
            lines.append(f"  - {addon['name']}: ${addon['monthly_usd']} — {addon['note']}")
    else:
        lines.append("- Add-ons: none selected")

    lines.extend([
        "",
        "## Live actions still required",
        "",
        f"- Availability status: {packet['availability_check']['status']}",
        f"- Availability note: {packet['availability_check']['note']}",
        f"- Payment method status: {packet['payment']['status']}",
        f"- Payment note: {packet['payment']['note']}",
        "",
    ])
    for action in packet["next_actions"]:
        lines.append(f"- {action}")

    lines.extend([
        "",
        "## Call log",
        "",
        "| Date | Channel | Contact | Outcome | Follow-up |",
        "|---|---|---|---|---|",
        "| TBD | portal | exact-address lookup | pending | record portal result |",
        "| TBD | phone | NH Broadband customer support | pending | confirm availability + install fee + appointment |",
        "",
        "## Appointment checklist",
        "",
        "- [ ] Exact address entered into the official portal",
        "- [ ] Fiber availability confirmed for the exact cabin address",
        "- [ ] Monthly price and any install fee recorded",
        "- [ ] Driveway access note relayed to the installer",
        "- [ ] Appointment date/time and confirmation number captured",
        "- [ ] Payment method ready for first month + install fee",
        "",
        "## Post-install verification",
        "",
        "- [ ] Installation completed",
        "- [ ] Speed test posted back to Gitea issue #533",
        "- [ ] Router / Wi-Fi setup confirmed inside the cabin",
    ])

    if packet["notes"]:
        lines.extend(["", "## Notes", "", packet["notes"]])

    return "\n".join(lines) + "\n"


def main() -> int:
    parser = argparse.ArgumentParser(description="Build an NH Broadband install packet for a cabin or lab site.")
    parser.add_argument("request", help="Path to the YAML request manifest")
    parser.add_argument("--markdown", action="store_true", help="Render markdown instead of JSON")
    args = parser.parse_args()

    packet = build_install_packet(load_request(args.request))
    if args.markdown:
        sys.stdout.write(render_markdown(packet))
    else:
        print(json.dumps(packet, indent=2))
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
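The pricing math above stays in `Decimal` until the final `_money` quantization, which avoids binary float drift on currency sums. A minimal standalone sketch of that pattern (the tier and add-on prices match the catalog above; the combination chosen is illustrative):

```python
from decimal import Decimal, ROUND_HALF_UP


def money(value: Decimal) -> str:
    # Same quantization the planner uses: two decimal places, half-up rounding.
    return str(value.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))


# basic_100 base price plus Managed Wi-Fi and Safe & Secure add-ons.
total = Decimal("49.95") + Decimal("4.95") + Decimal("3.00")
print(money(total))  # → 57.90
```

With floats, `49.95 + 4.95 + 3.00` can land a hair off `57.90`; `Decimal` arithmetic keeps each cent exact.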
76
tests/test_know_thy_father_pipeline.py
Normal file
@@ -0,0 +1,76 @@
from pathlib import Path
import importlib.util
import unittest


ROOT = Path(__file__).resolve().parent.parent
SCRIPT_PATH = ROOT / "scripts" / "know_thy_father" / "epic_pipeline.py"
DOC_PATH = ROOT / "docs" / "KNOW_THY_FATHER_MULTIMODAL_PIPELINE.md"


def load_module(path: Path, name: str):
    assert path.exists(), f"missing {path.relative_to(ROOT)}"
    spec = importlib.util.spec_from_file_location(name, path)
    assert spec and spec.loader
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module


class TestKnowThyFatherEpicPipeline(unittest.TestCase):
    def test_build_pipeline_plan_contains_all_phases_in_order(self):
        mod = load_module(SCRIPT_PATH, "ktf_epic_pipeline")
        plan = mod.build_pipeline_plan(batch_size=10)

        self.assertEqual(
            [step["id"] for step in plan],
            [
                "phase1_media_indexing",
                "phase2_multimodal_analysis",
                "phase3_holographic_synthesis",
                "phase4_cross_reference_audit",
                "phase5_processing_log",
            ],
        )
        self.assertIn("scripts/know_thy_father/index_media.py", plan[0]["command"])
        self.assertIn("scripts/twitter_archive/analyze_media.py --batch 10", plan[1]["command"])
        self.assertIn("scripts/know_thy_father/synthesize_kernels.py", plan[2]["command"])
        self.assertIn("scripts/know_thy_father/crossref_audit.py", plan[3]["command"])
        self.assertIn("twitter-archive/know-thy-father/tracker.py report", plan[4]["command"])

    def test_status_snapshot_reports_key_artifact_paths(self):
        mod = load_module(SCRIPT_PATH, "ktf_epic_pipeline")
        status = mod.build_status_snapshot(ROOT)

        self.assertIn("phase1_media_indexing", status)
        self.assertIn("phase2_multimodal_analysis", status)
        self.assertIn("phase3_holographic_synthesis", status)
        self.assertIn("phase4_cross_reference_audit", status)
        self.assertIn("phase5_processing_log", status)
        self.assertEqual(status["phase1_media_indexing"]["script"], "scripts/know_thy_father/index_media.py")
        self.assertEqual(status["phase2_multimodal_analysis"]["script"], "scripts/twitter_archive/analyze_media.py")
        self.assertEqual(status["phase5_processing_log"]["script"], "twitter-archive/know-thy-father/tracker.py")
        self.assertTrue(status["phase1_media_indexing"]["script_exists"])
        self.assertTrue(status["phase2_multimodal_analysis"]["script_exists"])
        self.assertTrue(status["phase3_holographic_synthesis"]["script_exists"])
        self.assertTrue(status["phase4_cross_reference_audit"]["script_exists"])
        self.assertTrue(status["phase5_processing_log"]["script_exists"])

    def test_repo_contains_multimodal_pipeline_doc(self):
        self.assertTrue(DOC_PATH.exists(), "missing committed Know Thy Father pipeline doc")
        text = DOC_PATH.read_text(encoding="utf-8")
        required = [
            "# Know Thy Father — Multimodal Media Consumption Pipeline",
            "scripts/know_thy_father/index_media.py",
            "scripts/twitter_archive/analyze_media.py --batch 10",
            "scripts/know_thy_father/synthesize_kernels.py",
            "scripts/know_thy_father/crossref_audit.py",
            "twitter-archive/know-thy-father/tracker.py report",
            "Refs #582",
        ]
        for snippet in required:
            self.assertIn(snippet, text)


if __name__ == "__main__":
    unittest.main()
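The `load_module` helper above imports a script by file path via `importlib.util`, so the test can exercise a script that is not a package and not on `sys.path`. A standalone sketch of the pattern; `demo_script.py` and `ANSWER` are throwaway names created here for illustration:

```python
import importlib.util
import tempfile
from pathlib import Path


def load_module(path: Path, name: str):
    # Load a Python file by path: build a module spec from the location,
    # materialize a module object, then execute the file into it.
    spec = importlib.util.spec_from_file_location(name, path)
    assert spec and spec.loader
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module


with tempfile.TemporaryDirectory() as tmp:
    script = Path(tmp) / "demo_script.py"
    script.write_text("ANSWER = 42\n", encoding="utf-8")
    mod = load_module(script, "demo_script")
    print(mod.ANSWER)  # → 42
```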
52
tests/test_laptop_fleet_planner.py
Normal file
@@ -0,0 +1,52 @@
from pathlib import Path

import yaml

from scripts.plan_laptop_fleet import build_plan, load_manifest, render_markdown, validate_manifest


def test_laptop_fleet_planner_script_exists() -> None:
    assert Path("scripts/plan_laptop_fleet.py").exists()


def test_laptop_fleet_manifest_template_exists() -> None:
    assert Path("docs/laptop-fleet-manifest.example.yaml").exists()


def test_build_plan_selects_two_lowest_idle_watt_laptops_as_anchors() -> None:
    data = load_manifest("docs/laptop-fleet-manifest.example.yaml")
    plan = build_plan(data)
    assert plan["anchor_agents"] == ["timmy-anchor-a", "timmy-anchor-b"]
    assert plan["desktop_nas"] == "timmy-desktop-nas"
    assert plan["role_mapping"]["timmy-daylight-a"]["schedule"] == "10:00-16:00"


def test_validate_manifest_requires_unique_hostnames() -> None:
    data = {
        "machines": [
            {"hostname": "dup", "machine_type": "laptop", "ram_gb": 8, "cpu_cores": 4, "os": "Linux", "adapter_condition": "good"},
            {"hostname": "dup", "machine_type": "laptop", "ram_gb": 16, "cpu_cores": 8, "os": "Linux", "adapter_condition": "good"},
        ]
    }
    try:
        validate_manifest(data)
    except ValueError as exc:
        assert "duplicate hostname" in str(exc)
        assert "unique hostnames" in str(exc)
    else:
        raise AssertionError("validate_manifest should reject duplicate hostname")


def test_markdown_contains_anchor_agents_and_daylight_schedule() -> None:
    data = load_manifest("docs/laptop-fleet-manifest.example.yaml")
    plan = build_plan(data)
    content = render_markdown(plan, data)
    assert "24/7 anchor agents: timmy-anchor-a, timmy-anchor-b" in content
    assert "Daylight schedule: 10:00-16:00" in content
    assert "desktop_nas" in content


def test_manifest_template_is_valid_yaml() -> None:
    data = yaml.safe_load(Path("docs/laptop-fleet-manifest.example.yaml").read_text())
    assert data["fleet_name"] == "timmy-laptop-fleet"
    assert len(data["machines"]) == 6
68
tests/test_mempalace_ezra_integration.py
Normal file
@@ -0,0 +1,68 @@
from pathlib import Path
import importlib.util
import unittest


ROOT = Path(__file__).resolve().parent.parent
SCRIPT_PATH = ROOT / "scripts" / "mempalace_ezra_integration.py"
DOC_PATH = ROOT / "docs" / "MEMPALACE_EZRA_INTEGRATION.md"


def load_module(path: Path, name: str):
    assert path.exists(), f"missing {path.relative_to(ROOT)}"
    spec = importlib.util.spec_from_file_location(name, path)
    assert spec and spec.loader
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module


class TestMempalaceEzraIntegration(unittest.TestCase):
    def test_build_plan_contains_issue_required_steps_and_gotchas(self):
        mod = load_module(SCRIPT_PATH, "mempalace_ezra_integration")
        plan = mod.build_plan({})

        self.assertEqual(plan["package_spec"], "mempalace==3.0.0")
        self.assertIn("pip install mempalace==3.0.0", plan["install_command"])
        self.assertEqual(plan["wing"], "ezra_home")
        self.assertIn('echo "" | mempalace mine ~/.hermes/', plan["mine_home_command"])
        self.assertIn('--mode convos', plan["mine_sessions_command"])
        self.assertIn('mempalace wake-up', plan["wake_up_command"])
        self.assertIn('hermes mcp add mempalace -- python -m mempalace.mcp_server', plan["mcp_command"])
        self.assertIn('wing:', plan["yaml_template"])
        self.assertTrue(any('stdin' in item.lower() for item in plan["gotchas"]))
        self.assertTrue(any('wing:' in item for item in plan["gotchas"]))

    def test_build_plan_accepts_path_and_wing_overrides(self):
        mod = load_module(SCRIPT_PATH, "mempalace_ezra_integration")
        plan = mod.build_plan(
            {
                "hermes_home": "/root/wizards/ezra/home",
                "sessions_dir": "/root/wizards/ezra/home/sessions",
                "wing": "ezra_archive",
            }
        )

        self.assertEqual(plan["wing"], "ezra_archive")
        self.assertIn('/root/wizards/ezra/home', plan["mine_home_command"])
        self.assertIn('/root/wizards/ezra/home/sessions', plan["mine_sessions_command"])
        self.assertIn('wing: ezra_archive', plan["yaml_template"])

    def test_repo_contains_mem_palace_ezra_doc(self):
        self.assertTrue(DOC_PATH.exists(), "missing committed MemPalace Ezra integration doc")
        text = DOC_PATH.read_text(encoding="utf-8")
        required = [
            "# MemPalace v3.0.0 — Ezra Integration Packet",
            "pip install mempalace==3.0.0",
            'echo "" | mempalace mine ~/.hermes/',
            "mempalace mine ~/.hermes/sessions/ --mode convos",
            "mempalace wake-up",
            "hermes mcp add mempalace -- python -m mempalace.mcp_server",
            "Report back to #568",
        ]
        for snippet in required:
            self.assertIn(snippet, text)


if __name__ == "__main__":
    unittest.main()
82
tests/test_nh_broadband_install_planner.py
Normal file
@@ -0,0 +1,82 @@
from pathlib import Path

import pytest

from scripts.plan_nh_broadband_install import build_install_packet, load_request, render_markdown, validate_request


REQUEST_PATH = Path("docs/nh-broadband-install-request.example.yaml")
PACKET_PATH = Path("docs/nh-broadband-install-packet.example.md")
REPORT_PATH = Path("reports/operations/2026-04-15-nh-broadband-public-research.md")


def test_example_request_and_packet_artifacts_exist() -> None:
    assert REQUEST_PATH.exists()
    assert PACKET_PATH.exists()
    assert REPORT_PATH.exists()


def test_build_install_packet_calculates_monthly_estimate_and_live_steps() -> None:
    data = load_request(REQUEST_PATH)
    packet = build_install_packet(data)

    assert packet["provider"]["name"] == "NH Broadband"
    assert packet["provider"]["availability_url"] == "https://www.connectsignup.com/?client=118"
    assert packet["selected_plan"]["id"] == "premier_1_gig"
    assert packet["selected_plan"]["monthly_estimate_usd"] == "79.95"
    assert packet["availability_check"]["status"] == "pending_exact_address_check"
    assert packet["payment"]["payment_ready"] is False
    assert any(
        "Confirm install fee and first-month charges before scheduling" in action
        for action in packet["next_actions"]
    )


def test_validate_request_requires_exact_address_fields() -> None:
    data = {
        "site_label": "Cabin",
        "city": "Lempster",
        "state": "NH",
        "zip": "03605",
        "contact_name": "Operator",
        "contact_phone": "603-555-0100",
        "contact_email": "operator@example.com",
        "preferred_plan": "basic_100",
        "driveway_note": "Call ahead.",
    }
    with pytest.raises(ValueError, match="street_address"):
        validate_request(data)


def test_example_markdown_matches_rendered_packet() -> None:
    data = load_request(REQUEST_PATH)
    packet = build_install_packet(data)
    rendered = render_markdown(packet)
    committed = PACKET_PATH.read_text(encoding="utf-8")
    assert committed == rendered
    assert "Check availability portal: https://www.connectsignup.com/?client=118" in committed
    assert "Customer support: (866) 431-1928" in committed
    assert "Driveway note: Long gravel driveway; call ahead before arrival." in committed
    assert "## Call log" in committed
    assert "## Post-install verification" in committed


def test_public_research_report_locks_verified_facts_and_remaining_live_work() -> None:
    text = REPORT_PATH.read_text(encoding="utf-8")
    required = [
        "# NH Broadband fiber install public research packet",
        "https://nhbroadband.com/",
        "https://www.connectsignup.com/?client=118",
        "(866) 431-1928",
        "8 a.m. – 8 p.m. EST",
        "$49.95/month",
        "$79.95/month",
        "$99.95/month",
        "What was verified from official public sources",
        "What was not verified in-session",
        "Exact-address availability at the cabin was not verified in-session.",
        "Actual installation appointment was not scheduled in-session.",
        "Refs #533",
    ]
    for snippet in required:
        assert snippet in text