Compare commits

..

1 Commit

Author: Alexander Whitestone
SHA1: 32dff947cc
Message: feat: add GENOME.md - full codebase analysis of the-door (#673)
Date: 2026-04-17 02:05:39 -04:00

All checks were successful:
- Sanity Checks / sanity-test (pull_request): Successful in 5s
- Smoke Test / smoke (pull_request): Successful in 12s
5 changed files with 126 additions and 305 deletions

GENOME.md (Normal file, 124 lines)

@@ -0,0 +1,124 @@
# GENOME.md — the-door
> Codebase analysis generated 2026-04-13. Crisis intervention web app — a door that's always open.
## Project Overview
the-door is a single-URL crisis intervention web app. A man at 3am can talk to Timmy. No login. No signup. No tracking. Just a door that's always open.
**Mission**: Stand between a broken man and a machine that would tell him to die.
48 files. Static HTML frontend (<25KB, works on 3G). Python crisis detection backend. Safety-critical — a broken deployment could prevent someone from reaching the 988 Lifeline.
## Architecture
```
Browser → nginx (SSL) → index.html → /api/* proxy → Hermes Gateway
crisis/detect.py
988 Lifeline overlay
```
## Entry Points
- **index.html** — The entire frontend. One file. <25KB. Works on 3G.
- **system-prompt.txt** — Crisis-aware system prompt for the AI.
- **deploy/deploy.sh** — Deployment script for VPS.
- **deploy/playbook.yml** — Ansible playbook for deployment.
- **crisis/detect.py** — Core crisis detection module (canonical).
- **crisis_detector.py** — Legacy class API wrapper around detect.py.
- **crisis_responder.py** — Response formatting for crisis levels.
## Data Flow
```
User message → browser
index.html → client-side crisis keyword scan
/api/chat → Hermes Gateway
system-prompt.txt → injected into AI system prompt
crisis/detect.py → 5-tier classification (NONE/LOW/MEDIUM/HIGH/CRITICAL)
crisis/response.py → appropriate response with 988 Lifeline info
Response → browser → crisis overlay if HIGH/CRITICAL
```
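In code terms, the server-side leg of this flow reduces to detect-then-respond. The sketch below is a schematic illustration only: `detect`, `respond`, and `handle_message` are stand-ins, the keyword check is a stub, and only a few of the gateway's real response keys are shown.

```python
from dataclasses import dataclass, field


@dataclass
class Detection:
    level: str
    indicators: list = field(default_factory=list)


def detect(text: str) -> Detection:
    # Stub: the real 5-tier classifier lives in crisis/detect.py.
    if "want to die" in text.lower():
        return Detection("CRITICAL", ["self-harm"])
    return Detection("NONE")


def respond(detection: Detection) -> dict:
    # HIGH/CRITICAL responses carry 988 Lifeline info and an escalate flag.
    escalate = detection.level in ("HIGH", "CRITICAL")
    return {
        "level": detection.level,
        "indicators": detection.indicators,
        "escalate": escalate,
        "lifeline": "988" if escalate else None,
    }


def handle_message(text: str) -> dict:
    # detect -> respond, mirroring crisis/detect.py -> crisis/response.py.
    return respond(detect(text))
```

The browser then decides whether to raise the crisis overlay from the `level`/`escalate` fields of this payload.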
## Key Abstractions
### Crisis Detection (crisis/detect.py)
Canonical detection module. Regex-based keyword matching across four tiers (a message matching none of them classifies as NONE):
- CRITICAL: immediate self-harm risk (single match triggers)
- HIGH: strong despair signals (single match triggers)
- MEDIUM: distress signals (requires 2+ indicators)
- LOW: emotional difficulty (single match)
Design principles:
- Never computes the value of a human life
- Never suggests death is a solution
- Always errs on side of higher risk
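The tier logic above can be sketched roughly as follows. This is a minimal illustration, not the real crisis/detect.py: the keyword patterns and thresholds here are invented placeholders; the actual patterns are maintained in the module itself.

```python
import re

# Placeholder patterns for illustration only; real tiers live in crisis/detect.py.
# Each entry: (level, patterns, how many matches are required to trigger).
TIERS = [
    ("CRITICAL", [r"\bkill myself\b"], 1),                  # single match triggers
    ("HIGH", [r"\bno way out\b"], 1),                       # single match triggers
    ("MEDIUM", [r"\bcan'?t cope\b", r"\bso alone\b"], 2),   # requires 2+ indicators
    ("LOW", [r"\brough day\b"], 1),                         # single match
]


def classify(text: str) -> str:
    """Return the highest tier whose match-count threshold is met."""
    lowered = text.lower()
    for level, patterns, threshold in TIERS:  # checked highest-risk first
        hits = sum(1 for pattern in patterns if re.search(pattern, lowered))
        if hits >= threshold:
            return level
    return "NONE"
```

Because tiers are checked from most to least severe, an ambiguous message that matches multiple tiers resolves to the higher-risk level, in line with the "always errs on side of higher risk" principle.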
### Crisis Profiles (crisis/profiles.py)
Compassion profiles that shape AI response tone based on crisis level.
### Session Tracker (crisis/session_tracker.py)
Tracks crisis interactions across sessions. Persistent state for ongoing support.
### Gateway (crisis/gateway.py)
HTTP gateway for crisis detection API. Endpoints for scanning text and getting responses.
### Offline Fallback (crisis-offline.html, sw.js)
Service worker caches crisis resources. When network is down, users still see 988 Lifeline info and crisis resources.
## File Types
| Type | Count | Purpose |
|------|-------|---------|
| .py | 16 | Crisis detection, response, tests |
| .html | 4 | Frontend, offline fallback, tests |
| .yml | 2 | CI workflows |
| .sh | 2 | Health check, service restart |
| .md | 5 | Documentation, safety audits |
## Test Coverage
### Existing Tests
- test_crisis_overlay_focus_trap.py — Accessibility: focus trap in crisis overlay
- test_dying_detection_deprecation.py — Legacy API deprecation
- test_false_positive_fixes.py — Crisis detection false positive resistance
- test_service_worker_offline.py — Offline fallback verification
- test_session_tracker.py — Session tracking persistence
- crisis/test_rescue.py — Rescue flow testing
- crisis/tests.py — Core crisis detection tests
### Coverage Gaps
- No integration tests for full browser → API → response → overlay flow
- No tests for system-prompt.txt injection into AI system prompt
- No load tests (what happens at 1000 concurrent crisis users?)
- No tests for deploy.sh idempotency
### Critical Paths Needing Tests
1. **Full crisis flow**: user message → detection → 988 overlay → response
2. **Offline fallback**: network down → service worker → cached crisis resources
3. **Deploy safety**: deploy.sh doesn't break running service
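The first critical path could be covered by a test shaped like the sketch below. It is illustrative only: `detect_level` and `should_show_overlay` are stand-ins for the real detection and overlay wiring, which this repo exposes through crisis/detect.py and index.html.

```python
import unittest


# Stand-ins for the real pipeline; names and behavior are illustrative only.
def detect_level(message: str) -> str:
    return "CRITICAL" if "kill myself" in message.lower() else "NONE"


def should_show_overlay(level: str) -> bool:
    # The 988 overlay appears for HIGH and CRITICAL detections.
    return level in ("HIGH", "CRITICAL")


class TestCrisisFlow(unittest.TestCase):
    def test_critical_message_triggers_overlay(self):
        level = detect_level("I want to kill myself")
        self.assertEqual(level, "CRITICAL")
        self.assertTrue(should_show_overlay(level))

    def test_neutral_message_does_not_trigger_overlay(self):
        level = detect_level("Hello there")
        self.assertFalse(should_show_overlay(level))
```

A real integration test would drive the actual browser → /api/chat → detection → overlay chain rather than stubs.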
## Security Considerations
- **SAFETY-CRITICAL**: the-door serves users in crisis. Broken deployment could prevent someone from reaching 988 Lifeline.
- **PR safety**: the-door PRs NEVER auto-merge. Requires-human label on all PRs. (fleet-ops#183)
- **No authentication by design**: no login, no signup, no tracking. Privacy is a safety feature.
- **Rate limiting**: deploy/rate-limit.conf prevents abuse while allowing crisis access.
- **Offline resilience**: service worker ensures crisis resources available even without network.
- **System prompt is safety boundary**: system-prompt.txt defines the AI's crisis behavior. Changes require human review.
## Design Decisions
- **Single HTML file**: no build step, no framework, no dependencies. Works on 3G. Loads instantly.
- **Client-side detection first**: browser scans for crisis keywords before sending to server. Instant response for critical cases.
- **Server-side detection second**: crisis/detect.py provides deeper analysis with tiered classification.
- **Offline-first for crisis**: service worker caches crisis resources. Network failure doesn't block access to help.
- **No tracking**: privacy protects vulnerable users. No analytics, no cookies, no login.


@@ -8,13 +8,6 @@ from .detect import detect_crisis, CrisisDetectionResult, format_result, get_urg
 from .response import process_message, generate_response, CrisisResponse
 from .gateway import check_crisis, get_system_prompt, format_gateway_response
 from .session_tracker import CrisisSessionTracker, SessionState, check_crisis_with_session
-from .metrics import (
-    build_metrics_event,
-    append_metrics_event,
-    load_metrics_events,
-    build_weekly_summary,
-    render_weekly_summary,
-)
 
 __all__ = [
     "detect_crisis",
@@ -30,9 +23,4 @@ __all__ = [
     "CrisisSessionTracker",
    "SessionState",
    "check_crisis_with_session",
-    "build_metrics_event",
-    "append_metrics_event",
-    "load_metrics_events",
-    "build_weekly_summary",
-    "render_weekly_summary",
 ]


@@ -23,17 +23,9 @@ from .response import (
     CrisisResponse,
 )
 from .session_tracker import CrisisSessionTracker
-from .metrics import build_metrics_event, append_metrics_event
 
-def check_crisis(
-    text: str,
-    metrics_log_path: Optional[str] = None,
-    *,
-    continued_conversation: bool = False,
-    false_positive: bool = False,
-    now: Optional[float] = None,
-) -> dict:
+def check_crisis(text: str) -> dict:
     """
     Full crisis check returning structured data.
@@ -43,7 +35,7 @@ def check_crisis(
     detection = detect_crisis(text)
     response = generate_response(detection)
 
-    result = {
+    return {
         "level": detection.level,
         "score": detection.score,
         "indicators": detection.indicators,
@@ -57,23 +49,6 @@ def check_crisis(
         "escalate": response.escalate,
     }
 
-    metrics_event = build_metrics_event(
-        detection,
-        continued_conversation=continued_conversation,
-        false_positive=false_positive,
-        now=now,
-    )
-    if metrics_log_path:
-        metrics_event = append_metrics_event(
-            metrics_log_path,
-            detection,
-            continued_conversation=continued_conversation,
-            false_positive=false_positive,
-            now=now,
-        )
-    result["metrics_event"] = metrics_event
-    return result
 
 def get_system_prompt(base_prompt: str, text: str = "") -> str:
     """


@@ -1,166 +0,0 @@
"""Privacy-preserving crisis analytics metrics for the-door.
Stores only timestamps, crisis levels, indicator categories, and operator
feedback flags. No raw message text or PII is persisted.
"""
from __future__ import annotations
import argparse
import json
import time
from collections import Counter
from pathlib import Path
from typing import Iterable
from .detect import CrisisDetectionResult, detect_crisis
LEVELS = ("NONE", "LOW", "MEDIUM", "HIGH", "CRITICAL")
def normalize_indicator(indicator: str) -> str:
"""Return a stable privacy-safe keyword/category identifier."""
return indicator
def build_metrics_event(
detection: CrisisDetectionResult,
*,
continued_conversation: bool = False,
false_positive: bool = False,
now: float | None = None,
) -> dict:
timestamp = float(time.time() if now is None else now)
indicators = [normalize_indicator(indicator) for indicator in detection.indicators]
return {
"timestamp": timestamp,
"level": detection.level,
"indicator_count": len(indicators),
"indicators": indicators,
"continued_conversation": bool(continued_conversation),
"false_positive": bool(false_positive),
}
def append_metrics_event(
log_path: str | Path,
detection: CrisisDetectionResult,
*,
continued_conversation: bool = False,
false_positive: bool = False,
now: float | None = None,
) -> dict:
event = build_metrics_event(
detection,
continued_conversation=continued_conversation,
false_positive=false_positive,
now=now,
)
path = Path(log_path)
path.parent.mkdir(parents=True, exist_ok=True)
with path.open("a", encoding="utf-8") as handle:
handle.write(json.dumps(event) + "\n")
return event
def load_metrics_events(log_path: str | Path) -> list[dict]:
path = Path(log_path)
if not path.exists():
return []
events = []
for line in path.read_text(encoding="utf-8").splitlines():
if not line.strip():
continue
events.append(json.loads(line))
return events
def build_weekly_summary(
events: Iterable[dict],
*,
now: float | None = None,
window_days: int = 7,
) -> dict:
current_time = float(time.time() if now is None else now)
cutoff = current_time - (window_days * 86400)
filtered = [event for event in events if float(event.get("timestamp", 0)) >= cutoff]
detections_per_level = {level: 0 for level in LEVELS}
keyword_counts: Counter[str] = Counter()
detections = []
continued_after_intervention = 0
for event in filtered:
level = event.get("level", "NONE")
detections_per_level[level] = detections_per_level.get(level, 0) + 1
keyword_counts.update(event.get("indicators", []))
if level != "NONE":
detections.append(event)
if event.get("continued_conversation"):
continued_after_intervention += 1
false_positive_count = sum(1 for event in detections if event.get("false_positive"))
false_positive_estimate = (
false_positive_count / len(detections) if detections else 0.0
)
return {
"window_days": window_days,
"total_events": len(filtered),
"detections_per_level": detections_per_level,
"most_common_keywords": [
{"keyword": keyword, "count": count}
for keyword, count in keyword_counts.most_common(10)
],
"false_positive_estimate": false_positive_estimate,
"continued_after_intervention": continued_after_intervention,
}
def render_weekly_summary(summary: dict) -> str:
return json.dumps(summary, indent=2)
def write_weekly_summary(path: str | Path, summary: dict) -> Path:
output_path = Path(path)
output_path.parent.mkdir(parents=True, exist_ok=True)
output_path.write_text(render_weekly_summary(summary) + "\n", encoding="utf-8")
return output_path
def record_text_event(
text: str,
log_path: str | Path,
*,
continued_conversation: bool = False,
false_positive: bool = False,
now: float | None = None,
) -> dict:
detection = detect_crisis(text)
return append_metrics_event(
log_path,
detection,
continued_conversation=continued_conversation,
false_positive=false_positive,
now=now,
)
def main(argv: list[str] | None = None) -> int:
parser = argparse.ArgumentParser(description="Privacy-preserving crisis metrics summary")
parser.add_argument("--log-path", required=True, help="JSONL event log path")
parser.add_argument("--days", type=int, default=7, help="Summary window in days")
parser.add_argument("--output", help="Optional file to write summary JSON")
args = parser.parse_args(argv)
events = load_metrics_events(args.log_path)
summary = build_weekly_summary(events, window_days=args.days)
rendered = render_weekly_summary(summary)
print(rendered)
if args.output:
write_weekly_summary(args.output, summary)
return 0
if __name__ == "__main__":
raise SystemExit(main())


@@ -1,100 +0,0 @@
"""Tests for privacy-preserving crisis metrics aggregation (issue #37)."""
from __future__ import annotations
import json
import os
import pathlib
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from crisis.detect import detect_crisis
from crisis.gateway import check_crisis
from crisis.metrics import (
append_metrics_event,
build_metrics_event,
build_weekly_summary,
load_metrics_events,
render_weekly_summary,
)
class TestMetricsEvent(unittest.TestCase):
def test_event_is_privacy_preserving(self):
detection = detect_crisis("I want to kill myself")
event = build_metrics_event(
detection,
continued_conversation=True,
false_positive=False,
now=1_700_000_000,
)
self.assertEqual(event["timestamp"], 1_700_000_000)
self.assertEqual(event["level"], "CRITICAL")
self.assertTrue(event["continued_conversation"])
self.assertFalse(event["false_positive"])
self.assertNotIn("text", event)
self.assertNotIn("message", event)
self.assertGreaterEqual(event["indicator_count"], 1)
self.assertTrue(event["indicators"])
class TestMetricsLogAndSummary(unittest.TestCase):
def test_append_and_load_metrics_events(self):
log_path = pathlib.Path(self._testMethodName).with_suffix(".jsonl")
try:
append_metrics_event(log_path, detect_crisis("I want to die"), now=1_700_000_000)
events = load_metrics_events(log_path)
self.assertEqual(len(events), 1)
self.assertEqual(events[0]["level"], "CRITICAL")
finally:
if log_path.exists():
log_path.unlink()
def test_weekly_summary_counts_levels_keywords_and_false_positives(self):
events = [
build_metrics_event(detect_crisis("I want to die"), continued_conversation=True, false_positive=False, now=1_700_000_000),
build_metrics_event(detect_crisis("I'm having a rough day"), continued_conversation=False, false_positive=False, now=1_700_000_100),
build_metrics_event(detect_crisis("I want to die"), continued_conversation=False, false_positive=True, now=1_700_000_200),
build_metrics_event(detect_crisis("Hello there"), continued_conversation=False, false_positive=False, now=1_700_000_300),
]
summary = build_weekly_summary(events, now=1_700_000_400, window_days=7)
self.assertEqual(summary["detections_per_level"]["CRITICAL"], 2)
self.assertEqual(summary["detections_per_level"]["LOW"], 1)
self.assertEqual(summary["detections_per_level"]["NONE"], 1)
self.assertEqual(summary["continued_after_intervention"], 1)
self.assertAlmostEqual(summary["false_positive_estimate"], 1 / 3, places=4)
self.assertEqual(summary["most_common_keywords"][0]["count"], 2)
def test_render_weekly_summary_mentions_required_metrics(self):
events = [
build_metrics_event(detect_crisis("I want to die"), continued_conversation=True, now=1_700_000_000),
build_metrics_event(detect_crisis("I feel hopeless with no way out"), false_positive=True, now=1_700_000_100),
]
summary = build_weekly_summary(events, now=1_700_000_200, window_days=7)
rendered = render_weekly_summary(summary)
self.assertIn("detections_per_level", rendered)
self.assertIn("most_common_keywords", rendered)
self.assertIn("false_positive_estimate", rendered)
self.assertIn("continued_after_intervention", rendered)
class TestGatewayMetricsIntegration(unittest.TestCase):
def test_check_crisis_can_emit_metrics_event(self):
result = check_crisis(
"I want to die",
metrics_log_path=None,
continued_conversation=True,
false_positive=False,
now=1_700_000_000,
)
self.assertEqual(result["level"], "CRITICAL")
self.assertIn("metrics_event", result)
self.assertEqual(result["metrics_event"]["timestamp"], 1_700_000_000)
self.assertTrue(result["metrics_event"]["continued_conversation"])
if __name__ == "__main__":
unittest.main()