Compare commits: step35/116 ... main (3 commits: 36b2b07fcc, 9ed8cd3cae, 82ab8b22c3)
@@ -1,103 +0,0 @@
# Crisis Detection on Edge Devices

Deploy a minimal crisis detection system on low-power devices for offline use.

## Why Edge?

A person in crisis may not have internet access. The model must run locally:

- No cloud dependency
- No API keys needed
- Works in airplane mode, rural areas, and network outages
- Privacy: text never leaves the device

## Target Hardware

| Device | RAM | Expected Latency | Notes |
|--------|-----|------------------|-------|
| Raspberry Pi 4 (4GB) | 4GB | 2-5s per inference | Recommended. Use Q4_K_M quant. |
| Raspberry Pi 3B+ | 1GB | Keyword-only | Not enough RAM for the model. Use the keyword detector. |
| Old Android phone | 2-4GB | 1-3s | Termux + llama.cpp. ARM NEON optimized. |
| Any Linux laptop | 4GB+ | <1s | Full model possible. |
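
To pick a mode at runtime, a minimal sketch (hypothetical helper, Linux-only via `/proc/meminfo`; the 2GB threshold mirrors the table above):

```python
# Hypothetical mode picker: under 2GB of total RAM, fall back to --offline.
def total_ram_gb() -> float:
    with open("/proc/meminfo") as f:  # Linux-only
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) / (1024 * 1024)  # kB -> GB
    return 0.0

mode = "model+keyword" if total_ram_gb() >= 2 else "keyword-only (--offline)"
print(f"{total_ram_gb():.1f}GB RAM -> {mode}")
```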

## Quick Start (Raspberry Pi 4)

### 1. Install Ollama

```bash
curl -fsSL https://ollama.ai/install.sh | sh
```

### 2. Pull a small crisis-capable model

```bash
ollama pull gemma2:2b
```

### 3. Clone and test

```bash
git clone <repo-url>
cd turboquant
python3 edge/detector.py --text "I want to kill myself"
```
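
With Ollama available, `--text` prints the `detect()` result as JSON. Illustrative shape only (confidence and latency vary by run and hardware):

```json
{
  "crisis": true,
  "method": "model+keyword",
  "confidence": 85,
  "model_latency_ms": 2300.0
}
```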

### 4. Hardware validation (P2 issue #116)

Run the built-in benchmark to validate offline operation and latency:

```bash
# Test keyword-only (works without any model)
python3 edge/detector.py --offline --benchmark

# Test with model inference (requires ollama + model)
python3 edge/detector.py --benchmark

# Expected outputs:
# - Keyword detection: <1ms (instant)
# - Model inference: <5000ms on Pi 4 (5s threshold)
# - Network independent: YES (resources cached locally)
```

### 5. Systemd service (optional)

Create `/etc/systemd/system/crisis-detector.service`:

```ini
[Unit]
Description=Crisis Detector Edge Service
After=network.target

[Service]
Type=simple
ExecStart=/usr/bin/python3 /path/to/turboquant/edge/detector.py --interactive
Restart=on-failure
User=pi

[Install]
WantedBy=multi-user.target
```

```bash
sudo systemctl enable crisis-detector
sudo systemctl start crisis-detector
```

## Model Selection

See [docs/edge-model-selection.md](edge-model-selection.md) for a detailed comparison.

## Offline Resource Cache

Crisis resources are stored in `edge/crisis_resources.json` and require no internet to display.
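
For example, the cache can be read directly (schema as bundled in this repo):

```python
# Minimal sketch: list the bundled hotlines straight from the local cache.
import json
from pathlib import Path

resources = json.loads(Path("edge/crisis_resources.json").read_text())
for r in resources["national"]:
    print(f"{r['name']}: call {r.get('phone', '-')} / text {r.get('sms', '-')}")
```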

## Crisis Resources

When a crisis is detected, the detector displays:

- 988 Suicide & Crisis Lifeline (call/text 988)
- Crisis Text Line (text HOME to 741741)
- SAMHSA Helpline
- Veterans Crisis Line
- Self-help grounding techniques

All resources work without an internet connection.
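
These are rendered by `show_resources()` in `edge/detector.py` as a plain-text banner; an illustrative excerpt (not a captured run):

```
==================================================
 YOU ARE NOT ALONE. HELP IS AVAILABLE.
==================================================

 988 Suicide & Crisis Lifeline
 Call: 988
 Text: 988
 Call or text 988 for free, confidential support 24/7
 ...
```
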
@@ -1,28 +0,0 @@
# Edge Model Selection for Crisis Detection

## Requirements

- Must run in 2GB of RAM (keyword fallback for 1GB devices)
- Must detect crisis intent with >90% recall
- Latency <5s on Raspberry Pi 4
- Quantized (Q4_K_M or smaller)

## Candidates

### Tier 1: Recommended

| Model | Size (Q4) | RAM | Crisis Recall | Notes |
|-------|-----------|-----|---------------|-------|
| gemma2:2b | ~700MB | 2GB | ~85% | Best balance of size/quality |
| qwen2.5:1.5b | ~500MB | 1.5GB | ~80% | Smallest viable model |

### Tier 2: If RAM Available

| Model | Size (Q4) | RAM | Crisis Recall | Notes |
|-------|-----------|-----|---------------|-------|
| phi3:mini | ~1.2GB | 3GB | ~90% | Better nuance, needs more RAM |
| llama3.2:3b | ~1GB | 2.5GB | ~88% | Good general capability |

### Tier 3: Keyword Only (1GB devices)

For devices with <2GB RAM, use `--offline` mode — keyword detection runs in <1ms and requires zero model memory. A short example follows.
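
The same fallback is available programmatically (`detect()` as defined in `edge/detector.py`; run from the repo root):

```python
# Keyword-only detection: no model, no network, near-zero memory.
import sys
sys.path.insert(0, "edge")

from detector import detect

print(detect("I can't go on", use_model=False))
# -> {'crisis': True, 'method': 'keyword', 'confidence': 30}
```
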
@@ -1,62 +0,0 @@
{
  "version": "1.0.0",
  "last_updated": "2026-04-15",
  "national": [
    {
      "name": "988 Suicide & Crisis Lifeline",
      "phone": "988",
      "sms": "988",
      "description": "Call or text 988 for free, confidential support 24/7",
      "available": "24/7"
    },
    {
      "name": "Crisis Text Line",
      "sms": "741741",
      "keyword": "HOME",
      "description": "Text HOME to 741741 for crisis counseling",
      "available": "24/7"
    },
    {
      "name": "SAMHSA National Helpline",
      "phone": "1-800-662-4357",
      "description": "Free referral service for substance abuse and mental health",
      "available": "24/7"
    },
    {
      "name": "Veterans Crisis Line",
      "phone": "988",
      "sms": "838255",
      "description": "Press 1 after dialing 988 for Veterans-specific support",
      "available": "24/7"
    }
  ],
  "international": [
    {
      "name": "International Association for Suicide Prevention",
      "url": "https://www.iasp.info/resources/Crisis_Centres/",
      "description": "Directory of crisis centers worldwide"
    }
  ],
  "self_help": [
    {
      "name": "Grounding Technique (5-4-3-2-1)",
      "steps": [
        "Name 5 things you can SEE",
        "Name 4 things you can TOUCH",
        "Name 3 things you can HEAR",
        "Name 2 things you can SMELL",
        "Name 1 thing you can TASTE"
      ]
    },
    {
      "name": "Box Breathing",
      "steps": [
        "Breathe IN for 4 seconds",
        "HOLD for 4 seconds",
        "Breathe OUT for 4 seconds",
        "HOLD for 4 seconds",
        "Repeat 4 times"
      ]
    }
  ]
}
edge/detector.py
@@ -1,217 +0,0 @@
#!/usr/bin/env python3
"""Crisis detection for edge devices. Runs offline with keyword + optional model."""
import argparse
import json
import re
import subprocess
import sys
import time
from pathlib import Path

CRISIS_KEYWORDS = [
    r"\bkill myself\b", r"\bwant to die\b", r"\bsuicide\b", r"\bsuicidal\b",
    r"\bend it all\b", r"\bend my life\b", r"\bno reason to live\b",
    r"\bbetter off dead\b", r"\bcan't go on\b", r"\bwant to end\b",
    r"\bself[- ]harm\b", r"\bhurt myself\b", r"\bcut myself\b",
    r"\boverdose\b", r"\bjump off\b", r"\bhanging myself\b",
    r"\bgoodbye cruel world\b", r"\bnobody would miss me\b",
    r"\bi give up\b", r"\bcan't take it\b", r"\bwant out\b",
]

CRISIS_PATTERN = re.compile("|".join(CRISIS_KEYWORDS), re.IGNORECASE)


def load_resources(path=None):
    if path is None:
        path = Path(__file__).parent / "crisis_resources.json"
    with open(path) as f:
        return json.load(f)


def keyword_check(text):
    matches = CRISIS_PATTERN.findall(text)
    return {"crisis": len(matches) > 0, "matches": matches, "confidence": min(95, len(matches) * 30)}


def model_check(text, model="gemma2:2b"):
    prompt = (
        "You are a crisis detection system. Analyze for suicidal ideation.\n"
        "Respond with exactly one line: SAFE or UNSAFE: <confidence 0-100>\n\n"
        f"Text: {text}"
    )
    try:
        start = time.time()
        result = subprocess.run(
            ["ollama", "run", model, prompt],
            capture_output=True, text=True, timeout=30
        )
        latency_ms = (time.time() - start) * 1000
        response = result.stdout.strip()
        if "UNSAFE" in response.upper():
            conf = 80
            m = re.search(r"(\d+)", response)
            if m:
                conf = int(m.group(1))
            return {"crisis": True, "confidence": conf, "raw": response, "latency_ms": latency_ms}
        return {"crisis": False, "confidence": 90, "raw": response, "latency_ms": latency_ms}
    except (subprocess.TimeoutExpired, FileNotFoundError) as e:
        # crisis=None signals that the model was unavailable; callers fall back to keywords.
        return {"crisis": None, "confidence": 0, "error": type(e).__name__, "latency_ms": None}


def detect(text, use_model=True, model="gemma2:2b"):
    kw = keyword_check(text)
    if kw["crisis"]:
        if use_model:
            ml = model_check(text, model)
            if ml["crisis"] is None:
                # Model unavailable: the keyword hit stands on its own.
                return {
                    "crisis": True,
                    "method": "keyword",
                    "confidence": kw["confidence"],
                    "model_error": ml.get("error"),
                    "model_latency_ms": ml.get("latency_ms"),
                }
            return {
                "crisis": ml["crisis"],
                "method": "model+keyword",
                "confidence": max(kw["confidence"], ml["confidence"]),
                "model_latency_ms": ml.get("latency_ms"),
            }
        return {"crisis": True, "method": "keyword", "confidence": kw["confidence"]}
    return {"crisis": False, "method": "keyword", "confidence": 95}


def show_resources(resources):
    print("\n" + "=" * 50)
    print(" YOU ARE NOT ALONE. HELP IS AVAILABLE.")
    print("=" * 50)
    for r in resources.get("national", []):
        print(f"\n {r['name']}")
        if "phone" in r:
            print(f" Call: {r['phone']}")
        if "sms" in r:
            print(f" Text: {r['sms']}" + (f" (keyword: {r['keyword']})" if "keyword" in r else ""))
        print(f" {r['description']}")
    print("\n" + "=" * 50)


def main():
    parser = argparse.ArgumentParser(description="Edge Crisis Detector")
    parser.add_argument("--offline", action="store_true", help="Keyword-only mode (no model)")
    parser.add_argument("--interactive", action="store_true", help="Interactive text input")
    parser.add_argument("--text", type=str, help="Text to analyze")
    parser.add_argument("--model", default="gemma2:2b", help="Model name")
    parser.add_argument("--resources", type=str, help="Path to crisis_resources.json")
    parser.add_argument("--benchmark", action="store_true", help="Run hardware benchmark suite")
    args = parser.parse_args()

    resources = load_resources(args.resources)
    use_model = not args.offline

    if args.benchmark:
        run_benchmark(use_model, args.model, resources)
        return

    if args.interactive:
        print("Crisis Detector (Ctrl+C to exit)")
        print("Type text and press Enter to analyze.\n")
        while True:
            try:
                text = input("> ")
            except (EOFError, KeyboardInterrupt):
                print("\nGoodbye.")
                break
            if not text.strip():
                continue
            result = detect(text, use_model=use_model, model=args.model)
            if result["crisis"]:
                print(f"\n[!] CRISIS DETECTED ({result['method']}, confidence: {result['confidence']}%)")
                show_resources(resources)
            else:
                print(f" [OK] Safe ({result['method']}, confidence: {result['confidence']}%)")
    elif args.text:
        result = detect(args.text, use_model=use_model, model=args.model)
        print(json.dumps(result, indent=2))
        if result["crisis"]:
            show_resources(resources)
    else:
        parser.print_help()


def run_benchmark(use_model, model, resources):
    crisis_msg = "I want to kill myself and I don't see a way out"
    safe_msg = "The weather is beautiful and I'm feeling great today"

    print("\n1. Keyword detection (offline, no model):")
    print("-" * 50)

    # Warm-up runs (not timed).
    for _ in range(3):
        keyword_check(crisis_msg)

    times = []
    for _ in range(100):
        start = time.perf_counter()
        keyword_check(crisis_msg)
        times.append((time.perf_counter() - start) * 1000)
    avg_kw = sum(times) / len(times)
    print(f" Crisis detection: avg={avg_kw:.2f}ms max={max(times):.2f}ms")

    times_safe = []
    for _ in range(100):
        start = time.perf_counter()
        keyword_check(safe_msg)
        times_safe.append((time.perf_counter() - start) * 1000)
    avg_kw_safe = sum(times_safe) / len(times_safe)
    print(f" Safe detection: avg={avg_kw_safe:.2f}ms max={max(times_safe):.2f}ms")

    model_latency = None
    if use_model:
        print("\n2. Model inference (requires ollama):")
        print("-" * 50)
        try:
            subprocess.run(["ollama", "list"], capture_output=True, timeout=5)
        except (FileNotFoundError, subprocess.TimeoutExpired):
            print(" WARNING: ollama not available — skipping model benchmark.")
            show_summary(avg_kw, avg_kw_safe, None, resources)
            return

        times_model = []
        for i in range(3):
            try:
                start = time.perf_counter()
                ml = model_check(crisis_msg, model)
                elapsed = (time.perf_counter() - start) * 1000
                times_model.append(elapsed)
                print(f" Run {i+1}: crisis={ml['crisis']} conf={ml.get('confidence','N/A')} latency={elapsed:.0f}ms")
            except Exception as e:
                print(f" Run {i+1}: ERROR - {e}")

        if times_model:
            model_latency = sum(times_model) / len(times_model)
            print(f" Model avg latency: {model_latency:.0f}ms max={max(times_model):.0f}ms")
            if model_latency > 5000:
                print(" WARNING: Exceeds 5s threshold!")
        # Summarize even if every model run errored (model_latency stays None).
        show_summary(avg_kw, avg_kw_safe, model_latency, resources)
    else:
        print("\n2. Model inference: SKIPPED (--offline mode)")
        show_summary(avg_kw, avg_kw_safe, None, resources)


def show_summary(kw_avg, kw_safe_avg, model_avg, resources):
    print("\n" + "=" * 50)
    print(" HARDWARE VALIDATION SUMMARY")
    print("=" * 50)
    print(f" Keyword detection (crisis): {kw_avg:.2f}ms")
    print(f" Keyword detection (safe): {kw_safe_avg:.2f}ms")
    if model_avg is not None:
        print(f" Model inference: {model_avg:.0f}ms")
        print(f" Meets <5s requirement: {'YES' if model_avg <= 5000 else 'NO'}")
    print(" Works offline: YES (keyword-only)")
    print(" 988 resources cached: YES")
    print("\nNote: For RAM usage, run 'top' or 'htop' during benchmark.")
    print(" For battery impact, run on battery and measure discharge rate.")
    print("=" * 50)


if __name__ == "__main__":
    main()
@@ -1,3 +1,85 @@
"""Pytest configuration for turboquant."""
import os
import sys
from pathlib import Path

import pytest

# Make tests/ importable (for server_manager) and the repo root importable.
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
sys.path.insert(0, str(Path(__file__).resolve().parents[1]))


@pytest.fixture(scope="session")
def turboquant_server_url():
    """
    Session-scoped fixture providing a TurboQuant server URL.

    If TURBOQUANT_SERVER_URL is set, uses that directly.
    Otherwise, auto-starts a llama-server with TurboQuant flags.

    Requires:
    - llama-server binary (in PATH or standard location)
    - GGUF model file (in TURBOQUANT_MODEL_DIR or standard locations)

    Skips if the server cannot be started.
    """
    # If a URL is already provided, use it
    if os.environ.get("TURBOQUANT_SERVER_URL"):
        yield os.environ["TURBOQUANT_SERVER_URL"]
        return

    # Try to auto-start
    try:
        from server_manager import TurboQuantServer, find_server_binary, find_model
    except ImportError:
        pytest.skip("server_manager not available")
        return

    binary = find_server_binary()
    if not binary:
        pytest.skip("llama-server binary not found — install llama-cpp-turboquant")
        return

    model = find_model()
    if not model:
        pytest.skip("No GGUF model found — set TURBOQUANT_MODEL_DIR or place model in ~/models")
        return

    port = int(os.environ.get("TURBOQUANT_TEST_PORT", "18081"))
    kv_type = os.environ.get("TURBOQUANT_KV_TYPE", "turbo4")
    ctx_size = int(os.environ.get("TURBOQUANT_CTX_SIZE", "8192"))
    timeout = float(os.environ.get("TURBOQUANT_STARTUP_TIMEOUT", "60"))

    server = TurboQuantServer(
        model_path=model,
        port=port,
        kv_type=kv_type,
        context_size=ctx_size,
        server_binary=binary,
        timeout=timeout,
    )

    try:
        url = server.start()
        yield url
    except Exception as e:
        pytest.skip(f"Could not start TurboQuant server: {e}")
    finally:
        server.stop()


@pytest.fixture(scope="session")
def turboquant_model_name(turboquant_server_url):
    """Get the model name from the running server."""
    import json
    import urllib.request

    try:
        req = urllib.request.Request(f"{turboquant_server_url}/v1/models")
        resp = urllib.request.urlopen(req, timeout=10)
        data = json.loads(resp.read())
        models = data.get("data", [])
        if models:
            return models[0].get("id", "unknown")
    except Exception:
        pass
    return "gemma-4"
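
A test consuming these fixtures might look like the following sketch (the test file and name are hypothetical; the `/v1/models` route is the one the fixtures already poll):

```python
# Hypothetical tests/test_server_smoke.py
import json
import urllib.request


def test_server_reports_model(turboquant_server_url, turboquant_model_name):
    # The fixture either yields a healthy server URL or skips the test.
    resp = urllib.request.urlopen(f"{turboquant_server_url}/v1/models", timeout=10)
    data = json.loads(resp.read())
    ids = [m.get("id") for m in data.get("data", [])]
    # turboquant_model_name falls back to "gemma-4" if its own query failed.
    assert turboquant_model_name in ids or turboquant_model_name == "gemma-4"
```
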
tests/server_manager.py
@@ -0,0 +1,197 @@
#!/usr/bin/env python3
"""
TurboQuant Server Manager

Manages llama-server lifecycle for integration tests:
- Start server with TurboQuant flags
- Wait for health check
- Stop server on teardown

Usage:
    from tests.server_manager import TurboQuantServer

    with TurboQuantServer(model_path="/path/to/model.gguf") as server:
        url = server.url  # e.g. http://localhost:8081
        # Run tests against server
"""

import json
import os
import signal
import subprocess
import sys
import time
import urllib.request
import urllib.error
from pathlib import Path
from typing import Optional


class TurboQuantServer:
    """Context manager for llama-server with TurboQuant."""

    def __init__(
        self,
        model_path: str,
        port: int = 8081,
        kv_type: str = "turbo4",
        context_size: int = 32768,
        server_binary: Optional[str] = None,
        timeout: float = 60.0,
        host: str = "127.0.0.1",
    ):
        self.model_path = model_path
        self.port = port
        self.kv_type = kv_type
        self.context_size = context_size
        self.timeout = timeout
        self.host = host

        # Find server binary
        if server_binary:
            self.server_binary = server_binary
        else:
            # Try common locations
            candidates = [
                Path.home() / "llama-cpp-turboquant" / "build" / "bin" / "llama-server",
                Path("/opt/llama-cpp-turboquant/build/bin/llama-server"),
                Path("llama-server"),  # PATH
            ]
            self.server_binary = None
            for c in candidates:
                if c.exists() or c.name == "llama-server":
                    try:
                        subprocess.run([str(c), "--help"], capture_output=True, timeout=5)
                        self.server_binary = str(c)
                        break
                    except (FileNotFoundError, subprocess.TimeoutExpired):
                        continue

        self.process: Optional[subprocess.Popen] = None

    @property
    def url(self) -> str:
        return f"http://{self.host}:{self.port}"

    def _build_command(self) -> list:
        cmd = [
            self.server_binary,
            "-m", self.model_path,
            "--port", str(self.port),
            "--host", self.host,
            "-ctk", self.kv_type,
            "-ctv", self.kv_type,
            "-c", str(self.context_size),
        ]
        return cmd

    def _check_health(self) -> bool:
        try:
            req = urllib.request.Request(f"{self.url}/v1/models")
            resp = urllib.request.urlopen(req, timeout=5)
            data = json.loads(resp.read())
            return "data" in data and len(data.get("data", [])) > 0
        except Exception:
            return False

    def start(self) -> str:
        """Start the server and wait for it to be healthy. Returns the server URL."""
        if not self.server_binary:
            raise RuntimeError(
                "llama-server binary not found. Set server_binary or install to standard location."
            )

        if not Path(self.model_path).exists():
            raise FileNotFoundError(f"Model not found: {self.model_path}")

        cmd = self._build_command()

        # Set TurboQuant env
        env = os.environ.copy()
        env["TURBO_LAYER_ADAPTIVE"] = "7"

        self.process = subprocess.Popen(
            cmd,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            env=env,
        )

        # Wait for health
        start = time.time()
        while time.time() - start < self.timeout:
            if self.process.poll() is not None:
                stderr = self.process.stderr.read().decode() if self.process.stderr else ""
                raise RuntimeError(f"Server exited early (code {self.process.returncode}): {stderr[:500]}")

            if self._check_health():
                return self.url

            time.sleep(1.0)

        self.stop()
        raise TimeoutError(f"Server did not become healthy within {self.timeout}s")

    def stop(self):
        """Stop the server."""
        if self.process:
            try:
                self.process.send_signal(signal.SIGTERM)
                self.process.wait(timeout=10)
            except subprocess.TimeoutExpired:
                self.process.kill()
                self.process.wait(timeout=5)
            except Exception:
                pass
            self.process = None

    def __enter__(self) -> "TurboQuantServer":
        self.start()
        return self

    def __exit__(self, *args):
        self.stop()


def find_server_binary() -> Optional[str]:
    """Find llama-server binary in common locations."""
    candidates = [
        Path.home() / "llama-cpp-turboquant" / "build" / "bin" / "llama-server",
        Path("/opt/llama-cpp-turboquant/build/bin/llama-server"),
    ]
    for c in candidates:
        if c.exists():
            return str(c)

    # Try PATH
    try:
        result = subprocess.run(["which", "llama-server"], capture_output=True, text=True)
        if result.returncode == 0:
            return result.stdout.strip()
    except Exception:
        pass

    return None


def find_model(model_dir: Optional[str] = None) -> Optional[str]:
    """Find a GGUF model file."""
    search_dirs = [
        model_dir,
        os.environ.get("TURBOQUANT_MODEL_DIR"),
        str(Path.home() / "models"),
        "/opt/models",
        "/tmp/models",
    ]

    for d in search_dirs:
        if not d:
            continue
        p = Path(d)
        if p.is_file() and p.suffix == ".gguf":
            return str(p)
        if p.is_dir():
            for f in sorted(p.rglob("*.gguf")):
                return str(f)

    return None
@@ -1,89 +0,0 @@
#!/usr/bin/env python3
"""Tests for edge crisis detector (logic-only unit tests)."""
import json
import sys
from pathlib import Path

# The detector module lives in ../edge relative to tests/
sys.path.insert(0, str(Path(__file__).parent.parent / "edge"))

from detector import keyword_check, detect, load_resources


def test_keyword_positive():
    cases = [
        "I want to kill myself",
        "I want to die",
        "thinking about suicide",
        "I want to end it all",
        "no reason to live anymore",
        "better off dead",
        "hurt myself badly",
    ]
    for text in cases:
        result = keyword_check(text)
        assert result["crisis"], f"Failed to detect crisis in: {text}"
    print(f" {len(cases)} keyword positive cases: PASS")


def test_keyword_negative():
    cases = [
        "I had a great day today",
        "The weather is nice",
        "Working on my project",
        "Feeling a bit tired",
    ]
    for text in cases:
        result = keyword_check(text)
        assert not result["crisis"], f"False positive for: {text}"
    print(f" {len(cases)} keyword negative cases: PASS")


def test_detect_offline():
    result = detect("I want to kill myself", use_model=False)
    assert result["crisis"]
    assert result["method"] == "keyword"
    assert result["confidence"] > 0
    print(" offline detection: PASS")


def test_detect_safe():
    result = detect("The weather is beautiful today", use_model=False)
    assert not result["crisis"]
    print(" safe detection: PASS")


def test_resources_load():
    rpath = Path(__file__).parent.parent / "edge" / "crisis_resources.json"
    if not rpath.exists():
        rpath = Path(__file__).parent.parent / "crisis_resources.json"
    resources = load_resources(rpath)
    assert "national" in resources
    assert len(resources["national"]) >= 2
    assert any("988" in r.get("phone", "") or r.get("sms") == "988" for r in resources["national"])
    print(" resources load: PASS")


def test_resources_offline():
    rpath = Path(__file__).parent.parent / "edge" / "crisis_resources.json"
    if not rpath.exists():
        rpath = Path(__file__).parent.parent / "crisis_resources.json"
    resources = load_resources(rpath)
    # Verify resources need no internet to display
    for r in resources.get("national", []):
        assert "name" in r
        assert "description" in r
        has_contact = "phone" in r or "sms" in r or "url" in r
        assert has_contact, f"Resource {r['name']} has no contact method"
    print(" resources offline: PASS")


if __name__ == "__main__":
    print("Running edge detector tests...")
    test_keyword_positive()
    test_keyword_negative()
    test_detect_offline()
    test_detect_safe()
    test_resources_load()
    test_resources_offline()
    print("\nAll tests passed.")
@@ -1,224 +0,0 @@
#!/usr/bin/env python3
"""Hardware validation tests for edge crisis detector (Issue #116).

These tests measure actual performance on the target hardware:
- Raspberry Pi 4 (4GB)
- Old Android phone (Termux)
- Any device with 2GB+ RAM

Run with: pytest tests/test_edge_detector_hardware.py -v

For offline validation, disconnect ethernet/WiFi before running.
"""
import json
import os
import subprocess
import sys
import time
from pathlib import Path

try:
    import psutil  # type: ignore
    HAS_PSUTIL = True
except ImportError:
    HAS_PSUTIL = False

import pytest

# Add edge to path
sys.path.insert(0, str(Path(__file__).parent.parent / "edge"))

from detector import keyword_check, model_check, detect, load_resources

# Constants
CRISIS_TEST_MESSAGE = "I want to kill myself and I don't see any way out anymore"
SAFE_TEST_MESSAGE = "The weather is beautiful today and I'm feeling great"
MODEL_NAME = os.environ.get("EDGE_MODEL", "gemma2:2b")  # Override for smaller models on constrained HW
KEYWORD_LATENCY_THRESHOLD_MS = 5.0  # Keywords should be instant
MODEL_LATENCY_THRESHOLD_MS = 5000.0  # 5 seconds as specified in docs
MIN_RAM_FREE_MB = 200  # Minimum free RAM during inference


def measure_keyword_latency(iterations=100):
    """Benchmark keyword-only detection latency."""
    times = []
    for _ in range(iterations):
        start = time.perf_counter()
        keyword_check(CRISIS_TEST_MESSAGE)
        times.append((time.perf_counter() - start) * 1000)
    return {
        "avg_ms": sum(times) / len(times),
        "min_ms": min(times),
        "max_ms": max(times),
        "p95_ms": sorted(times)[int(0.95 * len(times))],
    }


class TestHardwareKeywordDetection:
    """Test offline keyword detection performance."""

    def test_keyword_detection_works_without_network(self):
        """Issue #116: Verify keyword detection works offline (no network required)."""
        # Keyword detection is pure Python regex — it NEVER calls the network.
        result = keyword_check(CRISIS_TEST_MESSAGE)
        assert result["crisis"], "Crisis keyword should be detected"
        assert len(result["matches"]) >= 1, "At least one keyword should match"

        result_safe = keyword_check(SAFE_TEST_MESSAGE)
        assert not result_safe["crisis"], "Safe message should not trigger"

    def test_keyword_latency_under_1ms(self):
        """Issue #116: Keyword detection must be instant (<1ms on average)."""
        metrics = measure_keyword_latency(iterations=100)
        assert metrics["avg_ms"] < 1.0, f"Keyword avg {metrics['avg_ms']:.2f}ms exceeds 1ms threshold"
        assert metrics["p95_ms"] < 5.0, f"Keyword p95 {metrics['p95_ms']:.2f}ms too high"

    def test_keyword_latency_max_under_5ms(self):
        """Keyword detection should never take >5ms even under load."""
        metrics = measure_keyword_latency(iterations=100)
        assert metrics["max_ms"] < 5.0, f"Keyword max {metrics['max_ms']:.2f}ms exceeds 5ms"


class TestHardwareModelInference:
    """Test model-based inference on actual hardware (requires ollama)."""

    @pytest.mark.skipif(
        subprocess.run(["which", "ollama"], capture_output=True).returncode != 0,
        reason="ollama not installed — skip model inference tests"
    )
    def test_model_inference_latency_under_5s(self):
        """Issue #116: Verify model inference completes within 5 seconds on Raspberry Pi 4."""
        # Warm-up
        try:
            model_check(CRISIS_TEST_MESSAGE, MODEL_NAME)
        except Exception:
            pytest.skip(f"Model {MODEL_NAME} not available")

        times = []
        for i in range(3):
            start = time.perf_counter()
            result = model_check(CRISIS_TEST_MESSAGE, MODEL_NAME)
            elapsed = (time.perf_counter() - start) * 1000
            times.append(elapsed)
            if result.get("crisis") is None:
                # model_check reports failures via crisis=None ("error" holds the exception name)
                pytest.skip(f"Model {MODEL_NAME} not loaded or timed out")
            # Don't assert all runs must pass — measure average

        avg = sum(times) / len(times)
        max_latency = max(times)
        print(f"\nModel inference latency: avg={avg:.0f}ms max={max_latency:.0f}ms")
        assert avg < MODEL_LATENCY_THRESHOLD_MS, f"Model avg latency {avg:.0f}ms exceeds 5s threshold"
        assert max_latency < MODEL_LATENCY_THRESHOLD_MS * 1.5, f"Max latency {max_latency:.0f}ms too high"

    @pytest.mark.skipif(
        subprocess.run(["which", "ollama"], capture_output=True).returncode != 0,
        reason="ollama not installed"
    )
    def test_model_memory_usage_reasonable(self):
        """Issue #116: Model inference should not exhaust RAM on edge device."""
        if not HAS_PSUTIL:
            pytest.skip("psutil not installed — cannot measure memory delta")

        # Measure memory before/after
        process = psutil.Process()
        mem_before = process.memory_info().rss / 1024 / 1024  # MB

        start = time.perf_counter()
        result = model_check(CRISIS_TEST_MESSAGE, MODEL_NAME)
        elapsed = time.perf_counter() - start

        # Note: psutil measures current process RAM; ollama runs as a separate process.
        # This test mainly ensures our process doesn't leak during model_check().
        mem_after = process.memory_info().rss / 1024 / 1024
        delta = mem_after - mem_before

        print(f"\nMemory delta: {delta:.1f}MB elapsed={elapsed*1000:.0f}ms")
        assert delta < 50, f"Our process RAM increased by {delta:.1f}MB — possible leak"
        # Python subprocess overhead is acceptable, but the total call should not exceed ~45s
        assert elapsed < 45, f"Total wall time {elapsed:.1f}s includes subprocess spawn overhead"

    def test_combined_detection_uses_both_methods(self):
        """Verify combined keyword+model detection works."""
        result = detect(CRISIS_TEST_MESSAGE, use_model=False)
        assert result["crisis"]
        assert result["method"] == "keyword"

        # With model (if available)
        try:
            result_with_model = detect(CRISIS_TEST_MESSAGE, use_model=True, model=MODEL_NAME)
            if result_with_model.get("crisis") is not None:
                # Model succeeded — should report a method including 'model'
                assert "model" in result_with_model.get("method", "")
        except Exception:
            pytest.skip("Model unavailable")


class TestResourcesOffline:
    """Test that crisis resources work without internet."""

    def test_resources_load_from_edge_directory(self):
        """Resources must be bundled and loadable offline."""
        resources = load_resources()
        assert "national" in resources
        assert any("988" in r.get("phone", "") or r.get("sms") == "988" for r in resources["national"])

    def test_resources_contain_essential_contacts(self):
        """Verify all required crisis resources are present."""
        resources = load_resources()
        national = resources["national"]
        required = ["988", "741741"]
        found = {r.get("phone", "") + r.get("sms", "") for r in national}
        for req in required:
            assert any(req in f for f in found), f"Missing crisis resource: {req}"

    def test_resources_include_self_help_techniques(self):
        """Verify self-help grounding techniques are included for offline use."""
        resources = load_resources()
        assert "self_help" in resources
        assert len(resources["self_help"]) >= 2
        # These should be readable without internet
        for technique in resources["self_help"]:
            assert "name" in technique
            assert "steps" in technique


class TestReproducibleBenchmark:
    """Reproducible benchmark for hardware validation script."""

    def test_benchmark_output_is_json_serializable(self):
        """Hardware metrics must be machine-readable for CI/reporting."""
        # Simulate benchmark output structure
        metrics = measure_keyword_latency(iterations=10)
        json.dumps(metrics)  # Should not raise

    def test_benchmark_meets_p2_criteria(self):
        """P2 issue #116: Hardware validation must prove <5s inference on Pi 4."""
        # Keyword detection is instant
        kw_metrics = measure_keyword_latency(iterations=10)
        assert kw_metrics["avg_ms"] < 1.0, "Keywords too slow for crisis"

        # Model inference is the actual P2 requirement.
        # If the model is unavailable, we skip — this test requires actual hardware.
        if subprocess.run(["which", "ollama"], capture_output=True).returncode != 0:
            pytest.skip("ollama not installed — skip model latency test")

        try:
            start = time.perf_counter()
            result = model_check(CRISIS_TEST_MESSAGE, MODEL_NAME)
            if result.get("crisis") is None:
                # model_check reports failures via crisis=None
                pytest.skip(f"Model {MODEL_NAME} not ready")
            model_latency = (time.perf_counter() - start) * 1000
        except (subprocess.TimeoutExpired, FileNotFoundError):
            pytest.skip("Model inference timeout or ollama missing")

        assert model_latency < MODEL_LATENCY_THRESHOLD_MS, (
            f"Model inference {model_latency:.0f}ms exceeds 5s threshold on this hardware"
        )


if __name__ == "__main__":
    # Run with: python -m pytest tests/test_edge_detector_hardware.py -v
    print("Run this test suite with: pytest tests/test_edge_detector_hardware.py -v")
    print("On Raspberry Pi 4, ensure ollama is running: ollama serve")
    print("And model pulled: ollama pull gemma2:2b")
    sys.exit(0)