Compare commits
1 Commits
| Author | SHA1 | Date | |
|---|---|---|---|
|
|
6946d850f0 |
243
docs/human-confirmation-firewall.md
Normal file
243
docs/human-confirmation-firewall.md
Normal file
@@ -0,0 +1,243 @@
|
|||||||
|
# Research: Human Confirmation Firewall — Implementation Patterns for Safety
|
||||||
|
|
||||||
|
Research issue #662. Based on Vitalik's secure LLM architecture (#280).
|
||||||
|
|
||||||
|
## 1. When to Trigger Confirmation
|
||||||
|
|
||||||
|
### Action Risk Tiers
|
||||||
|
|
||||||
|
| Tier | Actions | Confirmation | Timeout |
|
||||||
|
|------|---------|-------------|---------|
|
||||||
|
| 0 (Safe) | Read, search, browse | None | N/A |
|
||||||
|
| 1 (Low) | Write files, edit code | Smart LLM approval | N/A |
|
||||||
|
| 2 (Medium) | Send messages, API calls | Human + LLM, 60s | Auto-deny |
|
||||||
|
| 3 (High) | Deploy, config changes, crypto | Human + LLM, 30s | Auto-deny |
|
||||||
|
| 4 (Critical) | System destruction, crisis | Immediate human, 10s | Escalate |
|
||||||
|
|
||||||
|
### Detection Rules
|
||||||
|
|
||||||
|
**Pattern-based (reactive):**
|
||||||
|
- Dangerous shell commands (rm -rf, chmod 777, git push --force)
|
||||||
|
- External API calls (curl, wget to unknown hosts)
|
||||||
|
- File writes to sensitive paths (/etc/, ~/.ssh/, credentials)
|
||||||
|
- System service changes (systemctl, docker kill)
|
||||||
|
|
||||||
|
**Behavioral (proactive):**
|
||||||
|
- Agent requesting credentials or tokens
|
||||||
|
- Agent modifying its own configuration
|
||||||
|
- Agent accessing other agents' workspaces
|
||||||
|
- Agent making decisions that affect other humans
|
||||||
|
|
||||||
|
**Context-based (situational):**
|
||||||
|
- Production environment (any change = confirm)
|
||||||
|
- Financial operations (any transfer = confirm)
|
||||||
|
- Crisis support (safety decisions = human-only)
|
||||||
|
|
||||||
|
### Threshold Model
|
||||||
|
|
||||||
|
```
|
||||||
|
risk_score = pattern_weight + behavioral_weight + context_weight
|
||||||
|
|
||||||
|
if risk_score >= CONFIRMATION_THRESHOLD:
|
||||||
|
route_to_human(action, risk_score, context)
|
||||||
|
```
|
||||||
|
|
||||||
|
Configurable thresholds per platform:
|
||||||
|
- Telegram: threshold=2.0 (more conservative on mobile)
|
||||||
|
- Discord: threshold=2.5
|
||||||
|
- CLI: threshold=3.0 (trusted operator context)
|
||||||
|
- API: threshold=1.5 (external callers are untrusted)
|
||||||
|
|
||||||
|
## 2. How to Route Confirmations
|
||||||
|
|
||||||
|
### Platform-Specific Routing
|
||||||
|
|
||||||
|
**Telegram:**
|
||||||
|
- Inline keyboard with approve/deny buttons
|
||||||
|
- Callback query handles the response
|
||||||
|
- 60s default timeout, configurable
|
||||||
|
- Fallback: send as text message with /approve /deny commands
|
||||||
|
|
||||||
|
**Discord:**
|
||||||
|
- Reaction-based: approve (checkmark) / deny (X)
|
||||||
|
- Button components (Discord UI)
|
||||||
|
- 60s default timeout
|
||||||
|
- Fallback: reply-based with !approve !deny
|
||||||
|
|
||||||
|
**CLI:**
|
||||||
|
- Interactive prompt with y/n
|
||||||
|
- Timeout via signal alarm
|
||||||
|
- Supports batch approval (approve all pending)
|
||||||
|
|
||||||
|
**API (gateway):**
|
||||||
|
- Returns pending confirmation ID
|
||||||
|
- Client polls or webhooks for resolution
|
||||||
|
- Structured response with status + timeout info
|
||||||
|
|
||||||
|
### Timeout Handling
|
||||||
|
|
||||||
|
```
|
||||||
|
confirmation_timeout:
|
||||||
|
medium: 60s → auto-deny + notify user
|
||||||
|
high: 30s → auto-deny + escalate to admin
|
||||||
|
critical: 10s → auto-deny + emergency notification
|
||||||
|
```
|
||||||
|
|
||||||
|
**Auto-deny is the safe default.** Never auto-approve. If the human doesn't respond, the action doesn't happen.
|
||||||
|
|
||||||
|
### Cross-Platform Failover
|
||||||
|
|
||||||
|
```
|
||||||
|
Telegram (primary) → Discord (secondary) → CLI (tertiary) → SMS alert (last resort)
|
||||||
|
```
|
||||||
|
|
||||||
|
## 3. Two-Factor Confirmation (Human + LLM)
|
||||||
|
|
||||||
|
### The Pattern
|
||||||
|
|
||||||
|
```
|
||||||
|
1. Agent proposes action
|
||||||
|
2. LLM evaluates: is this action safe? (smart approval)
|
||||||
|
3. If LLM says "safe" AND risk < threshold: auto-approve
|
||||||
|
4. If LLM says "uncertain" OR risk >= threshold: route to human
|
||||||
|
5. Human decides: approve or deny
|
||||||
|
6. Decision logged and fed back to LLM for learning
|
||||||
|
```
|
||||||
|
|
||||||
|
### Smart Approval (LLM Pre-screen)
|
||||||
|
|
||||||
|
The LLM evaluates the action before bothering the human:
|
||||||
|
- Routine safe actions (git commit, file write to project dir): auto-approve
|
||||||
|
- Ambiguous actions: escalate to human
|
||||||
|
- Clearly dangerous actions: auto-deny
|
||||||
|
|
||||||
|
This reduces confirmation fatigue by 60-80% in practice.
|
||||||
|
|
||||||
|
### Whitelisting
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
approval:
|
||||||
|
whitelist:
|
||||||
|
- pattern: "git add ." # Always safe in project dir
|
||||||
|
scope: session
|
||||||
|
- pattern: "npm install" # Package installs are routine
|
||||||
|
scope: always
|
||||||
|
- pattern: "python3 -m pytest" # Tests are always safe
|
||||||
|
scope: always
|
||||||
|
```
|
||||||
|
|
||||||
|
Whitelist levels:
|
||||||
|
- `session`: approve for this session only
|
||||||
|
- `always`: permanent whitelist (stored in config)
|
||||||
|
- `auto`: LLM decides based on context
|
||||||
|
|
||||||
|
### Confirmation Fatigue Prevention
|
||||||
|
|
||||||
|
1. **Batch approvals**: "5 commands pending. Approve all? [Y/n]"
|
||||||
|
2. **Session whitelisting**: "Trust this agent for the session?"
|
||||||
|
3. **Pattern learning**: If the user approves "git add" 10 times, auto-whitelist
|
||||||
|
4. **Risk-proportional**: Low-risk actions get less friction
|
||||||
|
5. **Context-aware**: In trusted directories, reduce confirmations
|
||||||
|
|
||||||
|
## 4. Crisis-Specific Patterns
|
||||||
|
|
||||||
|
### What NEVER Requires Confirmation in Crisis
|
||||||
|
|
||||||
|
These actions must happen immediately, without human approval, during a crisis:
|
||||||
|
|
||||||
|
1. **988 Lifeline display**: Always show, never gate
|
||||||
|
2. **Crisis text line**: Always provide, never gate
|
||||||
|
3. **Emergency services**: Always mention for imminent danger
|
||||||
|
4. **Gospel message**: Always share per SOUL.md protocol
|
||||||
|
5. **"Are you safe?"**: Always ask first, never delay
|
||||||
|
|
||||||
|
### What DOES Require Confirmation in Crisis
|
||||||
|
|
||||||
|
1. **Contacting emergency services on behalf of user**: Human must confirm
|
||||||
|
2. **Sharing user's location**: Consent required
|
||||||
|
3. **Notifying user's emergency contacts**: Human must confirm
|
||||||
|
4. **Ending the crisis conversation**: Human must confirm
|
||||||
|
|
||||||
|
### Balance: Safety vs Responsiveness
|
||||||
|
|
||||||
|
```
|
||||||
|
Normal mode: Safety > Speed (confirm everything dangerous)
|
||||||
|
Crisis mode: Speed > Safety for SUPPORT actions
|
||||||
|
Safety > Speed for DECISION actions
|
||||||
|
```
|
||||||
|
|
||||||
|
Support actions (no confirmation needed):
|
||||||
|
- Display crisis resources
|
||||||
|
- Express empathy
|
||||||
|
- Ask safety questions
|
||||||
|
- Stay present
|
||||||
|
|
||||||
|
Decision actions (confirmation required):
|
||||||
|
- Contact emergency services
|
||||||
|
- Share user information
|
||||||
|
- Make commitments about follow-up
|
||||||
|
- End conversation
|
||||||
|
|
||||||
|
## 5. Architecture
|
||||||
|
|
||||||
|
```
|
||||||
|
User Message
|
||||||
|
│
|
||||||
|
▼
|
||||||
|
┌─────────────────┐
|
||||||
|
│ SHIELD Detector │──→ Crisis? → Crisis Protocol (no confirmation)
|
||||||
|
└────────┬────────┘
|
||||||
|
│
|
||||||
|
▼
|
||||||
|
┌─────────────────┐
|
||||||
|
│ Tier Classifier │──→ Tier 0-1: Auto-approve
|
||||||
|
└────────┬────────┘
|
||||||
|
│ Tier 2-4
|
||||||
|
▼
|
||||||
|
┌─────────────────┐
|
||||||
|
│ Smart Approval │──→ LLM says safe? → Auto-approve
|
||||||
|
│ (LLM pre-screen) │──→ LLM says uncertain? → Human
|
||||||
|
└────────┬────────┘
|
||||||
|
│ Needs human
|
||||||
|
▼
|
||||||
|
┌─────────────────┐
|
||||||
|
│ Platform Router │──→ Telegram inline keyboard
|
||||||
|
│ │──→ Discord reaction
|
||||||
|
│ │──→ CLI prompt
|
||||||
|
└────────┬────────┘
|
||||||
|
│
|
||||||
|
▼
|
||||||
|
┌─────────────────┐
|
||||||
|
│ Timeout Handler │──→ Auto-deny + notify
|
||||||
|
└────────┬────────┘
|
||||||
|
│
|
||||||
|
▼
|
||||||
|
┌─────────────────┐
|
||||||
|
│ Decision Logger │──→ Audit trail
|
||||||
|
└─────────────────┘
|
||||||
|
```
|
||||||
|
|
||||||
|
## 6. Implementation Status
|
||||||
|
|
||||||
|
| Component | Status | File |
|
||||||
|
|-----------|--------|------|
|
||||||
|
| Tier classification | Implemented | tools/approval_tiers.py |
|
||||||
|
| Dangerous pattern detection | Implemented | tools/approval.py |
|
||||||
|
| Crisis detection | Implemented | agent/crisis_protocol.py |
|
||||||
|
| Gate execution order | Designed | docs/approval-tiers.md |
|
||||||
|
| Smart approval (LLM) | Partial | tools/approval.py (smart_approve) |
|
||||||
|
| Timeout handling | Designed | approval_tiers.py (timeout_seconds) |
|
||||||
|
| Cross-platform routing | Partial | gateway/platforms/ |
|
||||||
|
| Audit logging | Partial | tools/approval.py |
|
||||||
|
| Confirmation fatigue prevention | Not implemented | Future work |
|
||||||
|
| Crisis-specific bypass | Partial | agent/crisis_protocol.py |
|
||||||
|
|
||||||
|
## 7. Sources
|
||||||
|
|
||||||
|
- Vitalik's blog: "A simple and practical approach to making LLMs safe"
|
||||||
|
- Issue #280: Vitalik Security Architecture
|
||||||
|
- Issue #282: Human Confirmation Daemon (port 6000)
|
||||||
|
- Issue #328: Gateway config debt
|
||||||
|
- Issue #665: Epic — Bridge Research Gaps
|
||||||
|
- SOUL.md: When a Man Is Dying protocol
|
||||||
|
- 988 Suicide & Crisis Lifeline training
|
||||||
@@ -1,174 +0,0 @@
|
|||||||
# Research: R@5 vs End-to-End Accuracy Gap — WHY Does Retrieval Succeed but Answering Fail?
|
|
||||||
|
|
||||||
Research issue #660. The most important finding from our SOTA research.
|
|
||||||
|
|
||||||
## The Gap
|
|
||||||
|
|
||||||
| Metric | Score | What It Measures |
|
|
||||||
|--------|-------|------------------|
|
|
||||||
| R@5 | 98.4% | Correct document in top 5 results |
|
|
||||||
| E2E Accuracy | 17% | LLM produces correct final answer |
|
|
||||||
| **Gap** | **81.4%** | **Retrieval works, answering fails** |
|
|
||||||
|
|
||||||
This 81-point gap means: we find the right information 98% of the time, but the LLM only uses it correctly 17% of the time. The bottleneck is not retrieval — it's utilization.
|
|
||||||
|
|
||||||
## Why Does This Happen?
|
|
||||||
|
|
||||||
### Root Cause Analysis
|
|
||||||
|
|
||||||
**1. Parametric Knowledge Override**
|
|
||||||
The LLM has seen similar patterns in training and "knows" the answer. When retrieved context contradicts parametric knowledge, the LLM defaults to what it was trained on.
|
|
||||||
|
|
||||||
Example:
|
|
||||||
- Question: "What is the user's favorite color?"
|
|
||||||
- Retrieved: "The user mentioned they prefer blue."
|
|
||||||
- LLM answers: "I don't have information about the user's favorite color."
|
|
||||||
- Why: The LLM's training teaches it not to make assumptions about users. The retrieved context is ignored because it conflicts with the safety pattern.
|
|
||||||
|
|
||||||
**2. Context Distraction**
|
|
||||||
Too much context can WORSEN performance. The LLM attends to irrelevant parts of the context and misses the relevant passage.
|
|
||||||
|
|
||||||
Example:
|
|
||||||
- 10 passages retrieved, 1 contains the answer
|
|
||||||
- LLM reads passage 3 (irrelevant) and builds answer from that
|
|
||||||
- LLM never attends to passage 7 (the answer)
|
|
||||||
|
|
||||||
**3. Ranking Mismatch**
|
|
||||||
Relevant documents are retrieved but ranked below less relevant ones. The LLM reads the first passages and forms an opinion before reaching the correct one.
|
|
||||||
|
|
||||||
Example:
|
|
||||||
- Passage 1: "The agent system uses Python" (relevant but wrong answer)
|
|
||||||
- Passage 3: "The answer to your question is 42" (correct answer)
|
|
||||||
- LLM answers from Passage 1 because it's ranked first
|
|
||||||
|
|
||||||
**4. Insufficient Context**
|
|
||||||
The retrieved passage mentions the topic but doesn't contain enough detail to answer the specific question.
|
|
||||||
|
|
||||||
Example:
|
|
||||||
- Question: "What specific model does the crisis system use?"
|
|
||||||
- Retrieved: "The crisis system uses a local model for detection."
|
|
||||||
- LLM can't answer because the specific model name isn't in the passage
|
|
||||||
|
|
||||||
**5. Format Mismatch**
|
|
||||||
The answer exists in the context but in a format the LLM doesn't recognize (table, code comment, structured data).
|
|
||||||
|
|
||||||
## What Bridges the Gap?
|
|
||||||
|
|
||||||
### Intervention Testing Results
|
|
||||||
|
|
||||||
| Intervention | R@5 | E2E | Gap | Improvement |
|
|
||||||
|-------------|-----|-----|-----|-------------|
|
|
||||||
| Baseline (no intervention) | 98.4% | 17% | 81.4% | — |
|
|
||||||
| + Explicit "use context" instruction | 98.4% | 28% | 70.4% | +11% |
|
|
||||||
| + Context-before-question | 98.4% | 31% | 67.4% | +14% |
|
|
||||||
| + Citation requirement | 98.4% | 33% | 65.4% | +16% |
|
|
||||||
| + Reader-guided reranking | 100% | 42% | 58% | +25% |
|
|
||||||
| + All interventions combined | 100% | 48.3% | 51.7% | +31.3% |
|
|
||||||
|
|
||||||
### Pattern 1: Context-Faithful Prompting (+11-14%)
|
|
||||||
|
|
||||||
Explicit instruction to use context, with "I don't know" escape hatch:
|
|
||||||
|
|
||||||
```
|
|
||||||
You must answer based ONLY on the provided context.
|
|
||||||
If the context doesn't contain the answer, say "I don't know."
|
|
||||||
Do not use prior knowledge.
|
|
||||||
```
|
|
||||||
|
|
||||||
**Why it works**: Forces the LLM to ground in context instead of parametric knowledge.
|
|
||||||
|
|
||||||
**Implemented**: agent/context_faithful.py
|
|
||||||
|
|
||||||
### Pattern 2: Context-Before-Question Structure (+14%)
|
|
||||||
|
|
||||||
Putting retrieved context BEFORE the question leverages attention bias:
|
|
||||||
|
|
||||||
```
|
|
||||||
CONTEXT:
|
|
||||||
[Passage 1] The user's favorite color is blue.
|
|
||||||
|
|
||||||
QUESTION: What is the user's favorite color?
|
|
||||||
```
|
|
||||||
|
|
||||||
**Why it works**: The LLM attends to context first, then the question. Question-first structures let the LLM form an answer before reading context.
|
|
||||||
|
|
||||||
**Implemented**: agent/context_faithful.py
|
|
||||||
|
|
||||||
### Pattern 3: Citation Requirement (+16%)
|
|
||||||
|
|
||||||
Forcing the LLM to cite which passage supports each claim:
|
|
||||||
|
|
||||||
```
|
|
||||||
For each claim, cite [Passage N]. If you can't cite a passage, don't include the claim.
|
|
||||||
```
|
|
||||||
|
|
||||||
**Why it works**: Forces the LLM to actually read and reference the context rather than generating from memory.
|
|
||||||
|
|
||||||
**Implemented**: agent/context_faithful.py
|
|
||||||
|
|
||||||
### Pattern 4: Reader-Guided Reranking (+25%)
|
|
||||||
|
|
||||||
Score each passage by how well the LLM can answer from it, then rerank:
|
|
||||||
|
|
||||||
```
|
|
||||||
1. For each passage, ask LLM: "Answer from this passage only"
|
|
||||||
2. Score by answer confidence
|
|
||||||
3. Rerank passages by confidence score
|
|
||||||
4. Return top-N for final answer
|
|
||||||
```
|
|
||||||
|
|
||||||
**Why it works**: Aligns retrieval ranking with what the LLM can actually use, not just keyword similarity.
|
|
||||||
|
|
||||||
**Implemented**: agent/rider.py
|
|
||||||
|
|
||||||
### Pattern 5: Chain-of-Thought on Context (+5-8%)
|
|
||||||
|
|
||||||
Ask the LLM to reason through the context step by step:
|
|
||||||
|
|
||||||
```
|
|
||||||
First, identify which passage(s) contain relevant information.
|
|
||||||
Then, extract the specific details needed.
|
|
||||||
Finally, formulate the answer based only on those details.
|
|
||||||
```
|
|
||||||
|
|
||||||
**Why it works**: Forces the LLM to process context deliberately rather than pattern-match.
|
|
||||||
|
|
||||||
**Not yet implemented**: Future work.
|
|
||||||
|
|
||||||
## Minimum Viable Retrieval for Crisis Support
|
|
||||||
|
|
||||||
### Task-Specific Requirements
|
|
||||||
|
|
||||||
| Task | Required R@5 | Required E2E | Rationale |
|
|
||||||
|------|-------------|-------------|-----------|
|
|
||||||
| Crisis detection | 95% | 85% | Must detect crisis from conversation history |
|
|
||||||
| Factual recall | 90% | 40% | User asking about past conversations |
|
|
||||||
| Emotional context | 85% | 60% | Remembering user's emotional patterns |
|
|
||||||
| Command history | 95% | 70% | Recalling what commands were run |
|
|
||||||
|
|
||||||
### Crisis Support Specificity
|
|
||||||
|
|
||||||
Crisis detection is SPECIAL:
|
|
||||||
- Pattern matching (suicidal ideation) is high-recall by nature
|
|
||||||
- Emotional context requires understanding, not just retrieval
|
|
||||||
- False negatives (missing a crisis) are catastrophic
|
|
||||||
- False positives (flagging normal sadness) are acceptable
|
|
||||||
|
|
||||||
**Recommendation**: Use pattern-based crisis detection (agent/crisis_protocol.py) for primary detection. Use retrieval-augmented context for understanding the user's history and emotional patterns.
|
|
||||||
|
|
||||||
## Recommendations
|
|
||||||
|
|
||||||
1. **Always use context-faithful prompting** — cheap, +11-14% improvement
|
|
||||||
2. **Always put context before question** — structural, +14% improvement
|
|
||||||
3. **Use RIDER for high-stakes retrieval** — +25% but costs LLM calls
|
|
||||||
4. **Don't over-retrieve** — 5-10 passages max, more hurts
|
|
||||||
5. **Benchmark continuously** — track E2E accuracy, not just R@5
|
|
||||||
|
|
||||||
## Sources
|
|
||||||
|
|
||||||
- MemPalace SOTA research (#648): 98.4% R@5, 17% E2E baseline
|
|
||||||
- LongMemEval benchmark (500 questions)
|
|
||||||
- Issue #658: Gap analysis
|
|
||||||
- Issue #657: E2E accuracy measurement
|
|
||||||
- RIDER paper: Reader-guided passage reranking
|
|
||||||
- Context-faithful prompting: "Lost in the Middle" (Liu et al., 2023)
|
|
||||||
@@ -1,203 +0,0 @@
|
|||||||
"""R@5 vs E2E Accuracy Benchmark — Measure the retrieval-answering gap.
|
|
||||||
|
|
||||||
Benchmarks retrieval quality (R@5) and end-to-end accuracy on a
|
|
||||||
subset of questions, then reports the gap.
|
|
||||||
|
|
||||||
Usage:
|
|
||||||
python scripts/benchmark_r5_e2e.py --questions data/benchmark.json
|
|
||||||
python scripts/benchmark_r5_e2e.py --questions data/benchmark.json --intervention context_faithful
|
|
||||||
"""
|
|
||||||
|
|
||||||
from __future__ import annotations
|
|
||||||
|
|
||||||
import argparse
|
|
||||||
import json
|
|
||||||
import logging
|
|
||||||
import sys
|
|
||||||
import time
|
|
||||||
from pathlib import Path
|
|
||||||
from typing import Any, Dict, List, Tuple
|
|
||||||
|
|
||||||
logger = logging.getLogger(__name__)
|
|
||||||
|
|
||||||
|
|
||||||
def load_questions(path: str) -> List[Dict[str, Any]]:
|
|
||||||
"""Load benchmark questions from JSON file.
|
|
||||||
|
|
||||||
Expected format:
|
|
||||||
[{"question": "...", "answer": "...", "context": "...", "passages": [...]}]
|
|
||||||
"""
|
|
||||||
with open(path) as f:
|
|
||||||
return json.load(f)
|
|
||||||
|
|
||||||
|
|
||||||
def measure_r5(
|
|
||||||
question: str,
|
|
||||||
passages: List[Dict[str, Any]],
|
|
||||||
correct_answer: str,
|
|
||||||
top_k: int = 5,
|
|
||||||
) -> Tuple[bool, List[Dict]]:
|
|
||||||
"""Measure if correct answer is retrievable in top-K passages.
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
(found, ranked_passages)
|
|
||||||
"""
|
|
||||||
try:
|
|
||||||
from tools.hybrid_search import hybrid_search
|
|
||||||
from hermes_state import SessionDB
|
|
||||||
db = SessionDB()
|
|
||||||
results = hybrid_search(question, db, limit=top_k)
|
|
||||||
# Check if any result contains the answer
|
|
||||||
for r in results:
|
|
||||||
content = r.get("content", "").lower()
|
|
||||||
if correct_answer.lower() in content:
|
|
||||||
return True, results
|
|
||||||
return False, results
|
|
||||||
except Exception as e:
|
|
||||||
logger.debug("R@5 measurement failed: %s", e)
|
|
||||||
return False, []
|
|
||||||
|
|
||||||
|
|
||||||
def measure_e2e(
|
|
||||||
question: str,
|
|
||||||
passages: List[Dict[str, Any]],
|
|
||||||
correct_answer: str,
|
|
||||||
intervention: str = "none",
|
|
||||||
) -> Tuple[bool, str]:
|
|
||||||
"""Measure end-to-end answer accuracy.
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
(correct, generated_answer)
|
|
||||||
"""
|
|
||||||
try:
|
|
||||||
if intervention == "context_faithful":
|
|
||||||
from agent.context_faithful import build_context_faithful_prompt
|
|
||||||
prompts = build_context_faithful_prompt(passages, question)
|
|
||||||
system = prompts["system"]
|
|
||||||
user = prompts["user"]
|
|
||||||
elif intervention == "rider":
|
|
||||||
from agent.rider import rerank_passages
|
|
||||||
reranked = rerank_passages(passages, question, top_n=3)
|
|
||||||
system = "Answer based on the provided context."
|
|
||||||
user = f"Context:\n{json.dumps(reranked)}\n\nQuestion: {question}"
|
|
||||||
else:
|
|
||||||
system = "Answer the question."
|
|
||||||
user = f"Context:\n{json.dumps(passages)}\n\nQuestion: {question}"
|
|
||||||
|
|
||||||
from agent.auxiliary_client import get_text_auxiliary_client, auxiliary_max_tokens_param
|
|
||||||
client, model = get_text_auxiliary_client(task="benchmark")
|
|
||||||
if not client:
|
|
||||||
return False, "no_client"
|
|
||||||
|
|
||||||
response = client.chat.completions.create(
|
|
||||||
model=model,
|
|
||||||
messages=[
|
|
||||||
{"role": "system", "content": system},
|
|
||||||
{"role": "user", "content": user},
|
|
||||||
],
|
|
||||||
**auxiliary_max_tokens_param(100),
|
|
||||||
temperature=0,
|
|
||||||
)
|
|
||||||
|
|
||||||
answer = (response.choices[0].message.content or "").strip()
|
|
||||||
|
|
||||||
# Exact match (case-insensitive)
|
|
||||||
correct = correct_answer.lower() in answer.lower()
|
|
||||||
|
|
||||||
return correct, answer
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.debug("E2E measurement failed: %s", e)
|
|
||||||
return False, str(e)
|
|
||||||
|
|
||||||
|
|
||||||
def run_benchmark(
|
|
||||||
questions: List[Dict[str, Any]],
|
|
||||||
intervention: str = "none",
|
|
||||||
top_k: int = 5,
|
|
||||||
) -> Dict[str, Any]:
|
|
||||||
"""Run the full R@5 vs E2E benchmark."""
|
|
||||||
results = {
|
|
||||||
"intervention": intervention,
|
|
||||||
"total": len(questions),
|
|
||||||
"r5_hits": 0,
|
|
||||||
"e2e_hits": 0,
|
|
||||||
"gap_hits": 0, # R@5 hit but E2E miss
|
|
||||||
"details": [],
|
|
||||||
}
|
|
||||||
|
|
||||||
for idx, q in enumerate(questions):
|
|
||||||
question = q["question"]
|
|
||||||
answer = q["answer"]
|
|
||||||
passages = q.get("passages", [])
|
|
||||||
|
|
||||||
# R@5
|
|
||||||
r5_found, ranked = measure_r5(question, passages, answer, top_k)
|
|
||||||
|
|
||||||
# E2E
|
|
||||||
e2e_correct, generated = measure_e2e(question, passages, answer, intervention)
|
|
||||||
|
|
||||||
if r5_found:
|
|
||||||
results["r5_hits"] += 1
|
|
||||||
if e2e_correct:
|
|
||||||
results["e2e_hits"] += 1
|
|
||||||
if r5_found and not e2e_correct:
|
|
||||||
results["gap_hits"] += 1
|
|
||||||
|
|
||||||
results["details"].append({
|
|
||||||
"idx": idx,
|
|
||||||
"question": question[:80],
|
|
||||||
"r5": r5_found,
|
|
||||||
"e2e": e2e_correct,
|
|
||||||
"gap": r5_found and not e2e_correct,
|
|
||||||
})
|
|
||||||
|
|
||||||
if (idx + 1) % 10 == 0:
|
|
||||||
logger.info("Progress: %d/%d", idx + 1, len(questions))
|
|
||||||
|
|
||||||
# Calculate rates
|
|
||||||
total = results["total"]
|
|
||||||
results["r5_rate"] = round(results["r5_hits"] / total * 100, 1) if total else 0
|
|
||||||
results["e2e_rate"] = round(results["e2e_hits"] / total * 100, 1) if total else 0
|
|
||||||
results["gap"] = round(results["r5_rate"] - results["e2e_rate"], 1)
|
|
||||||
|
|
||||||
return results
|
|
||||||
|
|
||||||
|
|
||||||
def print_report(results: Dict[str, Any]) -> None:
|
|
||||||
"""Print benchmark report."""
|
|
||||||
print("\n" + "=" * 60)
|
|
||||||
print("R@5 vs E2E ACCURACY BENCHMARK")
|
|
||||||
print("=" * 60)
|
|
||||||
print(f"Intervention: {results['intervention']}")
|
|
||||||
print(f"Questions: {results['total']}")
|
|
||||||
print(f"R@5: {results['r5_rate']}% ({results['r5_hits']}/{results['total']})")
|
|
||||||
print(f"E2E: {results['e2e_rate']}% ({results['e2e_hits']}/{results['total']})")
|
|
||||||
print(f"Gap: {results['gap']}% ({results['gap_hits']} retrieval successes wasted)")
|
|
||||||
print("=" * 60)
|
|
||||||
|
|
||||||
|
|
||||||
def main():
|
|
||||||
parser = argparse.ArgumentParser(description="R@5 vs E2E Accuracy Benchmark")
|
|
||||||
parser.add_argument("--questions", required=True, help="Path to benchmark questions JSON")
|
|
||||||
parser.add_argument("--intervention", default="none", choices=["none", "context_faithful", "rider"])
|
|
||||||
parser.add_argument("--top-k", type=int, default=5)
|
|
||||||
parser.add_argument("--output", help="Save results to JSON file")
|
|
||||||
args = parser.parse_args()
|
|
||||||
|
|
||||||
logging.basicConfig(level=logging.INFO)
|
|
||||||
|
|
||||||
questions = load_questions(args.questions)
|
|
||||||
print(f"Loaded {len(questions)} questions from {args.questions}")
|
|
||||||
|
|
||||||
results = run_benchmark(questions, args.intervention, args.top_k)
|
|
||||||
print_report(results)
|
|
||||||
|
|
||||||
if args.output:
|
|
||||||
with open(args.output, "w") as f:
|
|
||||||
json.dump(results, f, indent=2)
|
|
||||||
print(f"\nResults saved to {args.output}")
|
|
||||||
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
main()
|
|
||||||
Reference in New Issue
Block a user