Compare commits


2 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| Alexander Whitestone | 9f2038659c | feat: build crisis synthesizer (#36) | 2026-04-17 02:36:30 -04:00 |
| Alexander Whitestone | d5ae0172b3 | wip: add crisis synthesizer regression tests | 2026-04-17 02:36:30 -04:00 |

All checks on 9f2038659c were successful: Sanity Checks / sanity-test (pull_request) in 4s; Smoke Test / smoke (pull_request) in 6s.
3 changed files with 306 additions and 125 deletions

GENOME.md

@@ -1,124 +0,0 @@
# GENOME.md — the-door
> Codebase analysis generated 2026-04-13. Crisis intervention web app — a door that's always open.
## Project Overview
the-door is a single-URL crisis intervention web app. A man at 3am can talk to Timmy. No login. No signup. No tracking. Just a door that's always open.
**Mission**: Stand between a broken man and a machine that would tell him to die.
48 files. Static HTML frontend (<25KB, works on 3G). Python crisis detection backend. Safety-critical — a broken deployment could prevent someone from reaching the 988 Lifeline.
## Architecture
```
Browser → nginx (SSL) → index.html → /api/* proxy → Hermes Gateway
crisis/detect.py
988 Lifeline overlay
```
## Entry Points
- **index.html** — The entire frontend. One file. <25KB. Works on 3G.
- **system-prompt.txt** — Crisis-aware system prompt for the AI.
- **deploy/deploy.sh** — Deployment script for VPS.
- **deploy/playbook.yml** — Ansible playbook for deployment.
- **crisis/detect.py** — Core crisis detection module (canonical).
- **crisis_detector.py** — Legacy class API wrapper around detect.py.
- **crisis_responder.py** — Response formatting for crisis levels.
## Data Flow
```
User message → browser
index.html → client-side crisis keyword scan
/api/chat → Hermes Gateway
system-prompt.txt → injected into AI system prompt
crisis/detect.py → 5-tier classification (NONE/LOW/MEDIUM/HIGH/CRITICAL)
crisis/response.py → appropriate response with 988 Lifeline info
Response → browser → crisis overlay if HIGH/CRITICAL
```
## Key Abstractions
### Crisis Detection (crisis/detect.py)
Canonical detection module. Regex-based keyword matching across the four non-NONE tiers (NONE means nothing matched); a minimal sketch follows the design principles below:
- CRITICAL: immediate self-harm risk (single match triggers)
- HIGH: strong despair signals (single match triggers)
- MEDIUM: distress signals (requires 2+ indicators)
- LOW: emotional difficulty (single match)
Design principles:
- Never computes the value of a human life
- Never suggests death is a solution
- Always errs on side of higher risk
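A minimal sketch of this tiered matching, using hypothetical keyword patterns (the canonical lists and trigger rules live in crisis/detect.py):
```
import re

# Illustrative patterns only; not the real keyword lists.
TIER_PATTERNS = {
    "CRITICAL": [r"\bwant to die\b", r"\bkill myself\b"],
    "HIGH": [r"\bno way out\b", r"\bhopeless\b"],
    "MEDIUM": [r"\bcan'?t sleep\b", r"\bnobody cares\b"],
    "LOW": [r"\brough day\b"],
}

def classify(text: str) -> tuple[str, list[str]]:
    lowered = text.lower()
    # Tiers are checked from most to least severe, so a message matching
    # both LOW and CRITICAL patterns is always treated as CRITICAL.
    for tier, patterns in TIER_PATTERNS.items():
        hits = [p for p in patterns if re.search(p, lowered)]
        needed = 2 if tier == "MEDIUM" else 1  # MEDIUM requires 2+ indicators
        if len(hits) >= needed:
            return tier, hits
    return "NONE", []
```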
### Crisis Profiles (crisis/profiles.py)
Compassion profiles that shape AI response tone based on crisis level.
### Session Tracker (crisis/session_tracker.py)
Tracks crisis interactions across sessions. Persistent state for ongoing support.
### Gateway (crisis/gateway.py)
HTTP gateway for crisis detection API. Endpoints for scanning text and getting responses.
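A minimal stdlib sketch of a gateway of this shape; the /scan route, port, and classify stub are assumptions for illustration, not the actual API of crisis/gateway.py:
```
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def classify(text: str) -> tuple[str, list[str]]:
    # Stub; the real tiered classification lives in crisis/detect.py.
    return "NONE", []

class GatewayHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/scan":  # assumed endpoint name
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        level, indicators = classify(payload.get("text", ""))
        body = json.dumps({"level": level, "indicators": indicators}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8808), GatewayHandler).serve_forever()
```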
### Offline Fallback (crisis-offline.html, sw.js)
Service worker caches crisis resources. When network is down, users still see 988 Lifeline info and crisis resources.
## File Types
| Type | Count | Purpose |
|------|-------|---------|
| .py | 16 | Crisis detection, response, tests |
| .html | 4 | Frontend, offline fallback, tests |
| .yml | 2 | CI workflows |
| .sh | 2 | Health check, service restart |
| .md | 5 | Documentation, safety audits |
## Test Coverage
### Existing Tests
- test_crisis_overlay_focus_trap.py — Accessibility: focus trap in crisis overlay
- test_dying_detection_deprecation.py — Legacy API deprecation
- test_false_positive_fixes.py — Crisis detection false positive resistance
- test_service_worker_offline.py — Offline fallback verification
- test_session_tracker.py — Session tracking persistence
- crisis/test_rescue.py — Rescue flow testing
- crisis/tests.py — Core crisis detection tests
### Coverage Gaps
- No integration tests for full browser → API → response → overlay flow
- No tests for system-prompt.txt injection into AI system prompt
- No load tests (what happens at 1000 concurrent crisis users?)
- No tests for deploy.sh idempotency
### Critical Paths That Need Tests
1. **Full crisis flow**: user message → detection → 988 overlay → response
2. **Offline fallback**: network down → service worker → cached crisis resources
3. **Deploy safety**: deploy.sh doesn't break running service
## Security Considerations
- **SAFETY-CRITICAL**: the-door serves users in crisis. Broken deployment could prevent someone from reaching 988 Lifeline.
- **PR safety**: the-door PRs NEVER auto-merge. Requires-human label on all PRs. (fleet-ops#183)
- **No authentication by design**: no login, no signup, no tracking. Privacy is a safety feature.
- **Rate limiting**: deploy/rate-limit.conf prevents abuse while allowing crisis access.
- **Offline resilience**: service worker ensures crisis resources available even without network.
- **System prompt is safety boundary**: system-prompt.txt defines the AI's crisis behavior. Changes require human review.
## Design Decisions
- **Single HTML file**: no build step, no framework, no dependencies. Works on 3G. Loads instantly.
- **Client-side detection first**: browser scans for crisis keywords before sending to server. Instant response for critical cases.
- **Server-side detection second**: crisis/detect.py provides deeper analysis with tiered classification.
- **Offline-first for crisis**: service worker caches crisis resources. Network failure doesn't block access to help.
- **No tracking**: privacy protects vulnerable users. No analytics, no cookies, no login.

evolution/crisis_synthesizer.py

@@ -1 +1,195 @@
...
"""Crisis synthesizer — learn from anonymized crisis interactions.
This is deliberately simple and privacy-preserving. It does not train a model or
modify detection rules automatically. It only logs metadata, summarizes patterns,
and suggests human-reviewed keyword weight adjustments.
"""
from __future__ import annotations
import argparse
import json
import time
from collections import Counter, defaultdict
from pathlib import Path
from typing import Iterable
DEFAULT_LOG_PATH = Path.home() / ".the-door" / "crisis-interactions.jsonl"
LEVELS = ("NONE", "LOW", "MEDIUM", "HIGH", "CRITICAL")
def build_interaction_event(
level: str,
indicators: list[str],
response_given: str,
continued_conversation: bool,
false_positive: bool,
*,
now: float | None = None,
) -> dict:
return {
"timestamp": float(time.time() if now is None else now),
"level": level,
"indicators": list(indicators),
"indicator_count": len(indicators),
"response_given": response_given,
"continued_conversation": bool(continued_conversation),
"false_positive": bool(false_positive),
}
def append_interaction_event(
log_path: str | Path,
*,
level: str,
indicators: list[str],
response_given: str,
continued_conversation: bool,
false_positive: bool,
now: float | None = None,
) -> dict:
event = build_interaction_event(
level,
indicators,
response_given,
continued_conversation,
false_positive,
now=now,
)
path = Path(log_path)
path.parent.mkdir(parents=True, exist_ok=True)
with path.open("a", encoding="utf-8") as handle:
handle.write(json.dumps(event) + "\n")
return event
def load_interaction_events(log_path: str | Path) -> list[dict]:
path = Path(log_path)
if not path.exists():
return []
events = []
for line in path.read_text(encoding="utf-8").splitlines():
if not line.strip():
continue
events.append(json.loads(line))
return events
def summarize_keywords(events: Iterable[dict]) -> list[dict]:
counts: Counter[str] = Counter()
for event in events:
counts.update(event.get("indicators", []))
return [{"keyword": keyword, "count": count} for keyword, count in counts.most_common(10)]
def suggest_keyword_adjustments(events: Iterable[dict], *, min_observations: int = 5) -> list[dict]:
stats: dict[str, dict[str, int]] = defaultdict(lambda: {
"observations": 0,
"true_positive_count": 0,
"false_positive_count": 0,
"continued_conversation_count": 0,
})
for event in events:
for keyword in event.get("indicators", []):
bucket = stats[keyword]
bucket["observations"] += 1
if event.get("false_positive"):
bucket["false_positive_count"] += 1
else:
bucket["true_positive_count"] += 1
if event.get("continued_conversation"):
bucket["continued_conversation_count"] += 1
suggestions = []
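    # Suggest a change only when the evidence is one-sided: all false positives
    # lowers the weight, all genuine crises raises it; mixed evidence stays at
    # "observe" for continued human monitoring.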
    for keyword, bucket in sorted(stats.items()):
        if bucket["observations"] < min_observations:
            continue
        fp = bucket["false_positive_count"]
        tp = bucket["true_positive_count"]
        if fp >= min_observations and tp == 0:
            adjustment = "lower_weight"
            rationale = "Observed only false positives across the sample window."
        elif tp >= min_observations and fp == 0:
            adjustment = "raise_weight"
            rationale = "Observed repeated genuine crises with no false positives."
        else:
            adjustment = "observe"
            rationale = "Mixed evidence; keep monitoring before changing weights."
        suggestions.append(
            {
                "keyword": keyword,
                **bucket,
                "suggested_adjustment": adjustment,
                "rationale": rationale,
            }
        )
    return suggestions


def build_weekly_report(
    events: Iterable[dict],
    *,
    now: float | None = None,
    window_days: int = 7,
    min_observations: int = 3,
) -> dict:
    current_time = float(time.time() if now is None else now)
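    # Keep only events inside the lookback window (86400 seconds per day).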
    cutoff = current_time - (window_days * 86400)
    filtered = [event for event in events if float(event.get("timestamp", 0)) >= cutoff]
    detections_per_level = {level: 0 for level in LEVELS}
    detected_events = []
    continued_after_intervention = 0
    for event in filtered:
        level = event.get("level", "NONE")
        detections_per_level[level] = detections_per_level.get(level, 0) + 1
        if level != "NONE":
            detected_events.append(event)
            if event.get("continued_conversation"):
                continued_after_intervention += 1
    false_positive_count = sum(1 for event in detected_events if event.get("false_positive"))
    false_positive_estimate = false_positive_count / len(detected_events) if detected_events else 0.0
    return {
        "window_days": window_days,
        "total_events": len(filtered),
        "detections_per_level": detections_per_level,
        "most_common_keywords": summarize_keywords(filtered),
        "false_positive_estimate": false_positive_estimate,
        "continued_after_intervention": continued_after_intervention,
        "keyword_weight_suggestions": suggest_keyword_adjustments(filtered, min_observations=min_observations),
    }


def render_weekly_report(summary: dict) -> str:
    return json.dumps(summary, indent=2)


def write_weekly_report(output_path: str | Path, summary: dict) -> Path:
    path = Path(output_path)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(render_weekly_report(summary) + "\n", encoding="utf-8")
    return path


def main(argv: list[str] | None = None) -> int:
    parser = argparse.ArgumentParser(description="Summarize anonymized crisis interactions")
    parser.add_argument("--log-path", default=str(DEFAULT_LOG_PATH), help="JSONL crisis interaction log")
    parser.add_argument("--days", type=int, default=7, help="Lookback window in days")
    parser.add_argument("--min-observations", type=int, default=3, help="Minimum observations before suggesting keyword adjustments")
    parser.add_argument("--output", help="Optional file to write the weekly report JSON")
    args = parser.parse_args(argv)
    events = load_interaction_events(args.log_path)
    summary = build_weekly_report(events, window_days=args.days, min_observations=args.min_observations)
    rendered = render_weekly_report(summary)
    print(rendered)
    if args.output:
        write_weekly_report(args.output, summary)
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
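
A usage sketch for the module above, assuming evolution/ is on the import path; the same report is available from the CLI via `python evolution/crisis_synthesizer.py --days 7 --output report.json`:
```
from pathlib import Path

import crisis_synthesizer as cs  # assumes evolution/ is importable

log = Path("/tmp/crisis-interactions.jsonl")  # illustrative log location
cs.append_interaction_event(
    log,
    level="HIGH",
    indicators=["hopeless"],
    response_given="companion",
    continued_conversation=False,
    false_positive=False,
)
events = cs.load_interaction_events(log)
print(cs.render_weekly_report(cs.build_weekly_report(events)))
```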


@@ -0,0 +1,111 @@
"""Tests for evolution/crisis_synthesizer.py (issue #36)."""
from __future__ import annotations
import importlib.util
import json
import pathlib
import sys
import tempfile
import unittest
ROOT = pathlib.Path(__file__).resolve().parents[1]
SCRIPT = ROOT / 'evolution' / 'crisis_synthesizer.py'
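# Load the synthesizer straight from its file path so the tests run without
# installing the repo as a package.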
spec = importlib.util.spec_from_file_location('crisis_synthesizer', str(SCRIPT))
mod = importlib.util.module_from_spec(spec)
sys.modules['crisis_synthesizer'] = mod
spec.loader.exec_module(mod)

class TestCrisisSynthesizerEvent(unittest.TestCase):
    def test_build_interaction_event_is_privacy_preserving(self):
        event = mod.build_interaction_event(
            level='CRITICAL',
            indicators=['want_to_die', 'no_way_out'],
            response_given='guardian',
            continued_conversation=True,
            false_positive=False,
            now=1700000000,
        )
        self.assertEqual(event['timestamp'], 1700000000)
        self.assertEqual(event['level'], 'CRITICAL')
        self.assertEqual(event['response_given'], 'guardian')
        self.assertTrue(event['continued_conversation'])
        self.assertFalse(event['false_positive'])
        self.assertEqual(event['indicators'], ['want_to_die', 'no_way_out'])
        for forbidden in ['text', 'message', 'content', 'ip', 'session_id', 'user_id']:
            self.assertNotIn(forbidden, event)


class TestCrisisSynthesizerStorage(unittest.TestCase):
    def test_append_and_load_events_round_trip(self):
        with tempfile.TemporaryDirectory() as tmp:
            log_path = pathlib.Path(tmp) / 'crisis-events.jsonl'
            mod.append_interaction_event(
                log_path,
                level='HIGH',
                indicators=['hopeless'],
                response_given='companion',
                continued_conversation=False,
                false_positive=True,
                now=1700000100,
            )
            events = mod.load_interaction_events(log_path)
            self.assertEqual(len(events), 1)
            self.assertEqual(events[0]['level'], 'HIGH')
            self.assertEqual(events[0]['indicators'], ['hopeless'])


class TestCrisisSynthesizerSummary(unittest.TestCase):
    def test_weekly_report_contains_required_metrics(self):
        events = [
            mod.build_interaction_event('CRITICAL', ['want_to_die'], 'guardian', True, False, now=1700000000),
            mod.build_interaction_event('HIGH', ['hopeless'], 'companion', False, True, now=1700000100),
            mod.build_interaction_event('LOW', ['rough_day'], 'friend', False, False, now=1700000200),
            mod.build_interaction_event('CRITICAL', ['want_to_die'], 'guardian', False, False, now=1700000300),
            mod.build_interaction_event('NONE', [], 'friend', False, False, now=1700000400),
        ]
        summary = mod.build_weekly_report(events, now=1700000500, window_days=7)
        self.assertEqual(summary['detections_per_level']['CRITICAL'], 2)
        self.assertEqual(summary['detections_per_level']['HIGH'], 1)
        self.assertEqual(summary['detections_per_level']['LOW'], 1)
        self.assertEqual(summary['detections_per_level']['NONE'], 1)
        self.assertEqual(summary['continued_after_intervention'], 1)
        self.assertAlmostEqual(summary['false_positive_estimate'], 0.25)
        self.assertEqual(summary['most_common_keywords'][0]['keyword'], 'want_to_die')
        self.assertEqual(summary['most_common_keywords'][0]['count'], 2)


class TestCrisisSynthesizerSuggestions(unittest.TestCase):
    def test_suggests_weight_adjustments_from_interactions(self):
        events = []
        for ts in range(3):
            events.append(mod.build_interaction_event('CRITICAL', ['want_to_die'], 'guardian', True, False, now=1700000000 + ts))
        for ts in range(3):
            events.append(mod.build_interaction_event('LOW', ['rough_day'], 'friend', False, True, now=1700000100 + ts))
        suggestions = mod.suggest_keyword_adjustments(events, min_observations=3)
        by_keyword = {s['keyword']: s for s in suggestions}
        self.assertEqual(by_keyword['want_to_die']['suggested_adjustment'], 'raise_weight')
        self.assertEqual(by_keyword['rough_day']['suggested_adjustment'], 'lower_weight')


class TestCrisisSynthesizerRendering(unittest.TestCase):
    def test_render_weekly_report_outputs_json(self):
        summary = {
            'detections_per_level': {'NONE': 0, 'LOW': 1, 'MEDIUM': 0, 'HIGH': 0, 'CRITICAL': 0},
            'most_common_keywords': [{'keyword': 'rough_day', 'count': 1}],
            'false_positive_estimate': 0.0,
            'continued_after_intervention': 0,
            'keyword_weight_suggestions': [],
            'window_days': 7,
            'total_events': 1,
        }
        rendered = mod.render_weekly_report(summary)
        parsed = json.loads(rendered)
        self.assertEqual(parsed['window_days'], 7)
        self.assertEqual(parsed['most_common_keywords'][0]['keyword'], 'rough_day')


if __name__ == '__main__':
    unittest.main()