# Deep Dive — Operational Readiness Checklist

**Issue:** #830 — Deep Dive: Sovereign NotebookLM + Daily AI Intelligence Briefing
**Location:** `intelligence/deepdive/OPERATIONAL_READINESS.md`
**Created:** 2026-04-05 by Ezra, Archivist
**Purpose:** Bridge the gap between "code complete" and "daily briefing delivered." This is the pre-flight checklist for making the Deep Dive pipeline operational on the Hermes VPS.


## Executive Summary

The Deep Dive pipeline is code-complete and tested (9/9 tests pass). This document defines the exact steps to move it into daily production.

| Phase | Status | Blocker |
|---|---|---|
| Code & tests | ✅ Complete | None |
| Documentation | ✅ Complete | None |
| Environment config | 🟡 Needs verification | Secrets, endpoints, Gitea URL |
| TTS engine | 🟡 Needs install | Piper model or ElevenLabs key |
| LLM endpoint | 🟡 Needs running server | `localhost:4000` or alternative |
| Systemd timer | 🟡 Needs install | `make install-systemd` |
| Live delivery | 🔴 Not yet run | Complete checklist below |

## Step 1: Environment Prerequisites

Run these checks on the host that will execute the pipeline (Hermes VPS):

```bash
# Python 3.11+
python3 --version

# Git
git --version

# Network outbound (arXiv, blogs, Telegram, Gitea)
curl -sI http://export.arxiv.org/api/query | head -1
curl -sI https://api.telegram.org | head -1
curl -sI https://forge.alexanderwhitestone.com | head -1
```

Each command must succeed; the `curl` checks should return an HTTP 200 (a redirect such as 301/302 also confirms outbound connectivity).
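If you prefer a single scripted preflight, the same reachability checks can be sketched with the Python standard library. The `reachable` helper below is illustrative, not part of the pipeline:

```python
from urllib.error import HTTPError, URLError
from urllib.request import urlopen

ENDPOINTS = [
    "http://export.arxiv.org/api/query",
    "https://api.telegram.org",
    "https://forge.alexanderwhitestone.com",
]

def reachable(url: str, timeout: float = 5.0) -> bool:
    """True if the URL answers with any HTTP response short of a server error."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status < 400
    except HTTPError as e:
        return e.code < 500  # a 4xx still proves the host is reachable
    except (URLError, OSError):
        return False

if __name__ == "__main__":
    for url in ENDPOINTS:
        print(f"{'OK ' if reachable(url) else 'FAIL'} {url}")
```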


## Step 2: Clone & Enter Repository

```bash
cd /root/wizards/the-nexus/intelligence/deepdive
```

If the repo is not present:

```bash
git clone https://forge.alexanderwhitestone.com/Timmy_Foundation/the-nexus.git /root/wizards/the-nexus
cd /root/wizards/the-nexus/intelligence/deepdive
```

## Step 3: Install Dependencies

```bash
make install
```

This creates `~/.venvs/deepdive/` and installs:

- `feedparser`, `httpx`, `pyyaml`
- `sentence-transformers` + the `all-MiniLM-L6-v2` embedding model (~80 MB)

Verify:

```bash
~/.venvs/deepdive/bin/python -c "import feedparser, httpx, sentence_transformers; print('OK')"
```

## Step 4: Configure Secrets

Export these environment variables (add them to `~/.bashrc`, or to a `.env` file loaded by the systemd unit):

```bash
export GITEA_TOKEN="<your_gitea_api_token>"
export TELEGRAM_BOT_TOKEN="<your_telegram_bot_token>"
# Optional, for cloud TTS fallback:
export ELEVENLABS_API_KEY="<your_elevenlabs_key>"
export OPENAI_API_KEY="<your_openai_key>"
```

Verify Gitea connectivity:

```bash
curl -s -H "Authorization: token $GITEA_TOKEN" \
  https://forge.alexanderwhitestone.com/api/v1/user | jq -r '.login'
```

This must print a valid username (e.g., `ezra`).

Verify the Telegram bot:

```bash
curl -s "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/getMe" | jq -r '.result.username'
```

This must print the bot's username.
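A small preflight helper can fail fast when a required secret is unset. This is a hypothetical sketch (the pipeline's own startup checks may differ), using only the standard library:

```python
import os

REQUIRED = ["GITEA_TOKEN", "TELEGRAM_BOT_TOKEN"]
OPTIONAL = ["ELEVENLABS_API_KEY", "OPENAI_API_KEY"]  # cloud TTS fallbacks

def missing_secrets(env=None):
    """Return the names of required secrets that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED if not env.get(name)]

if __name__ == "__main__":
    missing = missing_secrets()
    if missing:
        raise SystemExit(f"Missing secrets: {', '.join(missing)}")
    print("All required secrets present")
```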


## Step 5: TTS Engine Setup

### Option A: Piper (sovereign, local)

```bash
# Install the piper binary (example for Linux x86_64).
# Download the release tarball, then extract it: it unpacks to a
# piper/ directory containing the binary and its bundled libraries.
mkdir -p ~/.local
curl -L -o /tmp/piper.tar.gz \
  https://github.com/rhasspy/piper/releases/download/v1.2.0/piper_linux_x86_64.tar.gz
tar -xzf /tmp/piper.tar.gz -C ~/.local/
export PATH="$HOME/.local/piper:$PATH"

# Download voice model (~2GB)
python3 -c "
from tts_engine import PiperTTS
tts = PiperTTS('en_US-lessac-medium')
print('Piper ready')
"
```

### Option B: ElevenLabs (cloud, premium quality)

Ensure `ELEVENLABS_API_KEY` is exported. No local binary is needed.

### Option C: OpenAI TTS (cloud, balanced)

Update `config.yaml`:

```yaml
tts:
  engine: "openai"
  voice: "alloy"
```

Ensure `OPENAI_API_KEY` is exported.
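If you want a launcher to pick an engine automatically, one possible heuristic is "prefer local Piper, then fall back to cloud engines by API key." This is a hypothetical sketch; the real selection is driven by `config.yaml` and `tts_engine`:

```python
import os
import shutil

def pick_tts_engine(env=None, which=shutil.which):
    """Heuristic engine pick: local Piper first, then cloud fallbacks."""
    env = os.environ if env is None else env
    if which("piper"):               # piper binary on PATH -> sovereign TTS
        return "piper"
    if env.get("ELEVENLABS_API_KEY"):
        return "elevenlabs"
    if env.get("OPENAI_API_KEY"):
        return "openai"
    return None                      # audio phase disabled
```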


## Step 6: LLM Endpoint Verification

The default config points to `http://localhost:4000/v1` (LiteLLM or a local `llama-server`).

Verify the endpoint is listening:

```bash
curl http://localhost:4000/v1/models
```

If the endpoint is down, either:

1. Start it: `llama-server -m model.gguf --port 4000 -ngl 999 --jinja`
2. Change `synthesis.llm_endpoint` in `config.yaml` to an alternative provider (e.g., OpenRouter, Kimi, Anthropic).
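The same probe can run from Python, which is handy inside a preflight script. A minimal stdlib sketch (the `llm_available` helper is illustrative, not part of the pipeline):

```python
from urllib.error import URLError
from urllib.request import urlopen

def llm_available(endpoint: str = "http://localhost:4000/v1",
                  timeout: float = 3.0) -> bool:
    """Probe the OpenAI-compatible /models route; True only on HTTP 200."""
    try:
        with urlopen(f"{endpoint}/models", timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError, ValueError):
        return False  # endpoint down, unreachable, or malformed URL
```

If this returns `False`, start the server or repoint `synthesis.llm_endpoint` before attempting a live run.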

## Step 7: Dry-Run Verification

```bash
make run-dry
```

Expected output includes:

- Phase 1: Source Aggregation with >0 items fetched
- Phase 2: Relevance Scoring with >0 items ranked
- Phase 0: Fleet Context Grounding with 4 repos, commits, issues
- Phase 3: Synthesis with briefing saved to `~/.cache/deepdive/`
- Phase 4: Audio disabled (if TTS is not configured) or an audio file path
- Phase 5: `DRY RUN - delivery skipped`

If any phase errors, fix it before proceeding.


## Step 8: First Live Run

⚠️ This will send a Telegram message to the configured channel.

```bash
make run-live
# Type 'y' when prompted
```

Watch for:

- Telegram text summary delivery
- Telegram voice message delivery (if TTS + audio are enabled)
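If the live run sends nothing, it can help to send a message by hand to isolate delivery from the rest of the pipeline. A minimal stdlib sketch against the Telegram Bot API `sendMessage` method (the helper names are hypothetical; the pipeline has its own delivery code):

```python
import json
from urllib import parse, request

def build_telegram_payload(chat_id: str, text: str) -> bytes:
    """Form-encode a sendMessage payload for the Telegram Bot API."""
    return parse.urlencode({"chat_id": chat_id, "text": text}).encode()

def send_summary(token: str, chat_id: str, text: str) -> dict:
    """POST a plain-text message; returns the Bot API JSON response."""
    url = f"https://api.telegram.org/bot{token}/sendMessage"
    req = request.Request(url, data=build_telegram_payload(chat_id, text))
    with request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```

A successful response has `"ok": true`; an error response carries a `description` field explaining the failure (bad token, bot not in channel, etc.).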

## Step 9: Install Systemd Timer (Daily 06:00)

```bash
make install-systemd
```

Verify:

```bash
systemctl --user status deepdive.timer
systemctl --user list-timers --all | grep deepdive
```

To trigger a manual run via systemd:

```bash
systemctl --user start deepdive.service
journalctl --user -u deepdive.service -f
```
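For orientation, a user-level daily 06:00 timer generally takes the following shape. This is illustrative only; the authoritative unit files are the ones `make install-systemd` installs:

```ini
# ~/.config/systemd/user/deepdive.timer (illustrative shape, not the repo's file)
[Unit]
Description=Run Deep Dive briefing daily at 06:00

[Timer]
OnCalendar=*-*-* 06:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

`Persistent=true` makes systemd run a missed 06:00 trigger at the next boot, which matters if the VPS was down overnight.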

## Step 10: Monitoring & Rollback

Monitor daily runs:

```bash
journalctl --user -u deepdive.service --since today
```

Check the latest briefing:

```bash
ls -lt ~/.cache/deepdive/briefing_*.json | head -1
```

Disable the timer (rollback):

```bash
systemctl --user stop deepdive.timer
systemctl --user disable deepdive.timer
```

Clean reinstall:

```bash
make clean
make install
make test
```

## Known Gaps & Mitigations

| Gap | Impact | Mitigation |
|---|---|---|
| arXiv RSS empty on weekends | Empty briefing Sat/Sun | arXiv API fallback is implemented |
| `feedparser` missing | RSS skipped | API fallback activates automatically |
| `localhost:4000` down | Synthesis falls back to template | Start LLM endpoint or update config |
| Piper model ~2GB download | First TTS run slow | Pre-download during `make install` |
| Telegram rate limits | Delivery delayed | Retry is manual; add backoff if needed |
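Until automatic retry lands, the "add backoff if needed" mitigation can be sketched as a small exponential-backoff wrapper. This is a hypothetical helper, not code from the pipeline:

```python
import time

def send_with_backoff(send, retries: int = 4, base_delay: float = 1.0,
                      sleep=time.sleep):
    """Call send(); on failure, retry with exponential backoff (1s, 2s, 4s...).

    Re-raises the last exception once retries are exhausted. `sleep` is
    injectable so tests can capture delays instead of waiting.
    """
    for attempt in range(retries):
        try:
            return send()
        except Exception:
            if attempt == retries - 1:
                raise
            sleep(base_delay * 2 ** attempt)
```

Wrapping the Telegram delivery call in `send_with_backoff` turns a transient 429 into a short delay instead of a failed briefing.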

## Sign-Off

| Check | Verified By | Date |
|---|---|---|
| Dependencies installed | | |
| Secrets configured | | |
| TTS engine ready | | |
| LLM endpoint responding | | |
| Dry-run successful | | |
| Live run successful | | |
| Systemd timer active | | |

Created by Ezra, Archivist | 2026-04-05