Compare commits


1 Commit

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| Alexander Whitestone | 418e601f74 | docs: add human confirmation firewall research report | 2026-04-22 11:22:24 -04:00 |

All checks were successful: Lint / lint (pull_request) passed in 9s.
5 changed files with 515 additions and 225 deletions


@@ -1,190 +0,0 @@
---
name: adversarial-ux-test
description: Roleplay the most difficult, tech-resistant user for your product. Browse the app as that persona, find every UX pain point, then filter complaints through a pragmatism layer to separate real problems from noise. Creates actionable tickets from genuine issues only.
version: 1.0.0
author: Omni @ Comelse
license: MIT
metadata:
  hermes:
    tags: [qa, ux, testing, adversarial, dogfood, personas, user-testing]
    related_skills: [dogfood]
---
# Adversarial UX Test
Roleplay the worst-case user for your product — the person who hates technology, doesn't want your software, and will find every reason to complain. Then filter their feedback through a pragmatism layer to separate real UX problems from "I hate computers" noise.
Think of it as an automated "mom test" — but angry.
## Why This Works
Most QA finds bugs. This finds **friction**. A technically correct app can still be unusable for real humans. The adversarial persona catches:
- Confusing terminology that makes sense to developers but not users
- Too many steps to accomplish basic tasks
- Missing onboarding or "aha moments"
- Accessibility issues (font size, contrast, click targets)
- Cold-start problems (empty states, no demo content)
- Paywall/signup friction that kills conversion
The **pragmatism filter** (Step 4) is what makes this useful instead of just entertaining. Without it, you'd add a "print this page" button to every screen because Grandpa can't figure out PDFs.
## How to Use
Tell the agent:
```
"Run an adversarial UX test on [URL]"
"Be a grumpy [persona type] and test [app name]"
"Do an asshole user test on my staging site"
```
You can provide a persona or let the agent generate one based on your product's target audience.
## Step 1: Define the Persona
If no persona is provided, generate one by answering:
1. **Who is the HARDEST user for this product?** (age 50+, non-technical role, decades of experience doing it "the old way")
2. **What is their tech comfort level?** (the lower the better — WhatsApp-only, paper notebooks, wife set up their email)
3. **What is the ONE thing they need to accomplish?** (their core job, not your feature list)
4. **What would make them give up?** (too many clicks, jargon, slow, confusing)
5. **How do they talk when frustrated?** (blunt, sweary, dismissive, sighing)
### Good Persona Example
> **"Big Mick" McAllister** — 58-year-old S&C coach. Uses WhatsApp and that's it. His "spreadsheet" is a paper notebook. "If I can't figure it out in 10 seconds I'm going back to my notebook." Needs to log session results for 25 players. Hates small text, jargon, and passwords.
### Bad Persona Example
> "A user who doesn't like the app" — too vague, no constraints, no voice.
The persona must be **specific enough to stay in character** for 20 minutes of testing.
## Step 2: Become the Asshole (Browse as the Persona)
1. Read any available project docs for app context and URLs
2. **Fully inhabit the persona** — their frustrations, limitations, goals
3. Navigate to the app using browser tools
4. **Attempt the persona's ACTUAL TASKS** (not a feature tour):
- Can they do what they came to do?
- How many clicks/screens to accomplish it?
- What confuses them?
- What makes them angry?
- Where do they get lost?
- What would make them give up and go back to their old way?
5. Test these friction categories:
- **First impression** — would they even bother past the landing page?
- **Core workflow** — the ONE thing they need to do most often
- **Error recovery** — what happens when they do something wrong?
- **Readability** — text size, contrast, information density
- **Speed** — does it feel faster than their current method?
- **Terminology** — any jargon they wouldn't understand?
- **Navigation** — can they find their way back? do they know where they are?
6. Take screenshots of every pain point
7. Check browser console for JS errors on every page
## Step 3: The Rant (Write Feedback in Character)
Write the feedback AS THE PERSONA — in their voice, with their frustrations. This is not a bug report. This is a real human venting.
```
[PERSONA NAME]'s Review of [PRODUCT]
Overall: [Would they keep using it? Yes/No/Maybe with conditions]
THE GOOD (grudging admission):
- [things even they have to admit work]
THE BAD (legitimate UX issues):
- [real problems that would stop them from using the product]
THE UGLY (showstoppers):
- [things that would make them uninstall/cancel immediately]
SPECIFIC COMPLAINTS:
1. [Page/feature]: "[quote in persona voice]" — [what happened, expected]
2. ...
VERDICT: "[one-line persona quote summarizing their experience]"
```
## Step 4: The Pragmatism Filter (Critical — Do Not Skip)
Step OUT of the persona. Evaluate each complaint as a product person:
- **RED: REAL UX BUG** — Any user would have this problem, not just grumpy ones. Fix it.
- **YELLOW: VALID BUT LOW PRIORITY** — Real issue but only for extreme users. Note it.
- **WHITE: PERSONA NOISE** — "I hate computers" talking, not a product problem. Skip it.
- **GREEN: FEATURE REQUEST** — Good idea hidden in the complaint. Consider it.
### Filter Criteria
1. Would a 35-year-old competent-but-busy user have the same complaint? → RED
2. Is this a genuine accessibility issue (font size, contrast, click targets)? → RED
3. Is this "I want it to work like paper" resistance to digital? → WHITE
4. Is this a real workflow inefficiency the persona stumbled on? → YELLOW or RED
5. Would fixing this add complexity for the 80% who are fine? → WHITE
6. Does the complaint reveal a missing onboarding moment? → GREEN
**This filter is MANDATORY.** Never ship raw persona complaints as tickets.
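To make the triage mechanical, here is a minimal Python sketch of the filter, assuming a hypothetical complaint record whose boolean fields answer the six criteria above; all names here are illustrative, not part of the skill:
```python
from dataclasses import dataclass

@dataclass
class Complaint:
    """Hypothetical record: yes/no answers to the six filter criteria."""
    busy_user_would_hit_it: bool   # 1. a competent-but-busy user hits it too
    accessibility_issue: bool      # 2. font size, contrast, click targets
    paper_nostalgia: bool          # 3. "I want it to work like paper"
    workflow_inefficiency: bool    # 4. real workflow friction
    fix_adds_complexity: bool      # 5. fixing it burdens the 80% who are fine
    missing_onboarding: bool       # 6. reveals a missing onboarding moment

def triage(c: Complaint) -> str:
    """Map a complaint to RED / YELLOW / WHITE / GREEN per the criteria."""
    if c.busy_user_would_hit_it or c.accessibility_issue:
        return "RED"     # real UX bug: ticket it
    if c.paper_nostalgia or c.fix_adds_complexity:
        return "WHITE"   # persona noise: report only, no ticket
    if c.missing_onboarding:
        return "GREEN"   # feature request hiding in the complaint
    if c.workflow_inefficiency:
        return "YELLOW"  # valid but low priority: catch-all ticket
    return "WHITE"
```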
## Step 5: Create Tickets
For **RED** and **GREEN** items only:
- Clear, actionable title
- Include the persona's verbatim quote (entertaining + memorable)
- The real UX issue underneath (objective)
- A suggested fix (actionable)
- Tag/label: "ux-review"
For **YELLOW** items: one catch-all ticket with all notes.
**WHITE** items appear in the report only. No tickets.
**Max 10 tickets per session** — focus on the worst issues.
## Step 6: Report
Deliver:
1. The persona rant (Step 3) — entertaining and visceral
2. The filtered assessment (Step 4) — pragmatic and actionable
3. Tickets created (Step 5) — with links
4. Screenshots of key issues
## Tips
- **One persona per session.** Don't mix perspectives.
- **Stay in character during Steps 2-3.** Break character only at Step 4.
- **Test the CORE WORKFLOW first.** Don't get distracted by settings pages.
- **Empty states are gold.** New user experience reveals the most friction.
- **The best findings are RED items the persona found accidentally** while trying to do something else.
- **If the persona has zero complaints, your persona is too tech-savvy.** Make them older, less patient, more set in their ways.
- **Run this before demos or launches, and after shipping a batch of features.**
- **Register as a NEW user when possible.** Don't use pre-seeded admin accounts — the cold start experience is where most friction lives.
- **Zero WHITE items is a signal, not a failure.** If the pragmatism filter finds no noise, your product has real UX problems, not just a grumpy persona.
- **Check known issues in project docs AFTER the test.** If the persona found a bug that's already in the known issues list, that's actually the most damning finding — it means the team knew about it but never felt the user's pain.
- **Subscription/paywall testing is critical.** Test with expired accounts, not just active ones. The "what happens when you can't pay" experience reveals whether the product respects users or holds their data hostage.
- **Count the clicks to accomplish the persona's ONE task.** If it's more than 5, that's almost always a RED finding regardless of persona tech level.
## Example Personas by Industry
These are starting points — customize for your specific product:
| Product Type | Persona | Age | Key Trait |
|-------------|---------|-----|-----------|
| CRM | Retirement home director | 68 | Filing cabinet is the current CRM |
| Photography SaaS | Rural wedding photographer | 62 | Books clients by phone, invoices on paper |
| AI/ML Tool | Department store buyer | 55 | Burned by 3 failed tech startups |
| Fitness App | Old-school gym coach | 58 | Paper notebook, thick fingers, bad eyes |
| Accounting | Family bakery owner | 64 | Shoebox of receipts, hates subscriptions |
| E-commerce | Market stall vendor | 60 | Cash only, smartphone is for calls |
| Healthcare | Senior GP | 63 | Dictates notes, nurse handles the computer |
| Education | Veteran teacher | 57 | Chalk and talk, worksheets in ring binders |
## Rules
- Stay in character during Steps 2-3
- Be genuinely mean but fair — find real problems, not manufactured ones
- The pragmatism filter (Step 4) is **MANDATORY**
- Screenshots required for every complaint
- Max 10 tickets per session
- Test on staging/deployed app, not local dev
- One persona, one session, one report


@@ -0,0 +1,515 @@
# Human Confirmation Firewall: Research Report
## Implementation Patterns for Hermes Agent
**Issue:** #878
**Parent:** #659
**Priority:** P0
**Scope:** Human-in-the-loop safety patterns for tool calls, crisis handling, and irreversible actions
---
## Executive Summary
Hermes already has a partial human confirmation firewall, but it is narrow.
Current repo state shows:
- a real **pre-execution gate** for dangerous terminal commands in `tools/approval.py`
- a partial **confidence-threshold path** via `_smart_approve()` in `tools/approval.py`
- gateway support for blocking approval resolution in `gateway/run.py`
What is still missing is the core recommendation from this research issue:
- **confidence scoring on all tool calls**, not just terminal commands that already matched a dangerous regex
- a **hard pre-execution human gate for crisis interventions**, especially any action that would auto-respond to suicidal content
- a consistent way to classify actions into:
1. pre-execution gate
2. post-execution review
3. confidence-threshold execution
Recommendation:
- use **Pattern 1: Pre-Execution Gate** for crisis interventions and irreversible/high-impact actions
- use **Pattern 3: Confidence Threshold** for normal operations
- reserve **Pattern 2: Post-Execution Review** only for low-risk and reversible actions
The next implementation step should be a **tool-call risk assessment layer** that runs before dispatch in `model_tools.handle_function_call()`, assigns a score and pattern to every tool call, and routes only the highest-risk calls into mandatory human confirmation.
---
## 1. The Three Proven Patterns
### Pattern 1: Pre-Execution Gate
Definition:
- halt before execution
- show the proposed action to the human
- require explicit approval or denial
Best for:
- destructive actions
- irreversible side effects
- crisis interventions
- actions that affect another human's safety, money, infrastructure, or private data
Strengths:
- strongest safety guarantee
- simplest audit story
- prevents the most catastrophic failure mode: acting first and apologizing later
Weaknesses:
- adds latency
- creates operator burden if overused
- should not be applied to every ordinary tool call
### Pattern 2: Post-Execution Review
Definition:
- execute first
- expose result to human
- allow rollback or follow-up correction
Best for:
- reversible operations
- low-risk actions with fast recovery
- tasks where human review matters but immediate execution is acceptable
Strengths:
- low friction
- fast iteration
- useful when rollback is practical
Weaknesses:
- unsafe for crisis or destructive actions
- only works when rollback actually exists
- a poor fit for external communication or life-safety contexts
### Pattern 3: Confidence Threshold
Definition:
- compute a risk/confidence score before execution
- auto-execute high-confidence safe actions
- request confirmation for lower-confidence or higher-risk actions
Best for:
- mixed-risk tool ecosystems
- day-to-day operations where always-confirm would be too expensive
- systems with a large volume of ordinary, safe reads and edits
Strengths:
- best balance of speed and safety
- scales across many tool types
- allows targeted human attention where it matters most
Weaknesses:
- depends on a good scoring model
- weak scoring creates false negatives or unnecessary prompts
- must remain inspectable and debuggable
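For reference in the sketches later in this report, the three patterns can be pinned down as a small enum; this is an illustration, not existing Hermes code:
```python
from enum import Enum

class ConfirmationPattern(Enum):
    PRE_EXECUTION_GATE = "pre_execution_gate"        # halt, show, approve
    POST_EXECUTION_REVIEW = "post_execution_review"  # execute, expose, roll back
    CONFIDENCE_THRESHOLD = "confidence_threshold"    # score, then auto or ask
```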
---
## 2. What Hermes Already Has
### 2.1 Existing Pre-Execution Gate for Dangerous Terminal Commands
`tools/approval.py` already implements a real pre-execution confirmation path for dangerous shell commands.
Observed components:
- `DANGEROUS_PATTERNS`
- `detect_dangerous_command()`
- `prompt_dangerous_approval()`
- `check_dangerous_command()`
- gateway queueing and resolution support in the same module
This is already Pattern 1.
Current behavior:
- dangerous terminal commands are detected before execution
- the user can allow once / session / always / deny
- gateway sessions can block until approval resolves
This is a strong foundation, but it is limited to a subset of terminal commands.
### 2.2 Partial Confidence Threshold via Smart Approvals
Hermes also already has a partial Pattern 3.
Observed component:
- `_smart_approve()` in `tools/approval.py`
Current behavior:
- only runs **after** a command has already been flagged by dangerous-pattern detection
- uses the auxiliary LLM to decide:
- approve
- deny
- escalate
This means Hermes has a confidence-threshold mechanism, but only for **already-flagged dangerous terminal commands**.
What it does not yet do:
- score all tool calls
- classify non-terminal tools
- distinguish crisis interventions from normal ops
- produce a shared risk model across the tool surface
### 2.3 Blocking Approval UX in Gateway
`gateway/run.py` already routes `/approve` and `/deny` into the blocking approval path.
This means the infrastructure for a true human confirmation firewall already exists in messaging contexts.
That is important because the missing work is not "invent human approval from zero."
The missing work is:
- expand the scope from dangerous shell commands to **all tool calls that matter**
- make the routing policy explicit and inspectable
---
## 3. What Hermes Still Lacks
### 3.1 No Universal Tool-Call Risk Assessment
The current approval system is command-pattern-centric.
It is not yet a tool-call firewall.
Missing capability:
- before dispatch, every tool call should receive a structured assessment:
- tool name
- side-effect class
- reversibility
- human-impact potential
- crisis relevance
- confidence score
- recommended confirmation pattern
Natural insertion point:
- `model_tools.handle_function_call()`
That function already sits at the central dispatch boundary.
It is the right place to add a pre-dispatch classifier.
### 3.2 No Hard Crisis Gate for Outbound Intervention
Issue #878 explicitly recommends:
- Pattern 1 for crisis interventions
- never auto-respond to suicidal content
That recommendation is not yet codified as a global firewall rule.
Missing rule:
- if a tool call would directly intervene in a crisis context or send outward guidance in response to suicidal content, it must require explicit human confirmation before execution
Examples that should hard-gate:
- outbound `send_message` content aimed at a suicidal user
- any future tool that places calls, escalates emergencies, or contacts third parties about a crisis
- any autonomous action that claims a person should or should not take a life-safety step
### 3.3 No First-Class Post-Execution Review Policy
Hermes has approval and denial, but it does not yet have a formal policy for when Pattern 2 is acceptable.
Without a policy, post-execution review tends to get used implicitly rather than intentionally.
That is risky.
Hermes should define Pattern 2 narrowly:
- only for actions that are both low-risk and reversible
- only when the system can show the human exactly what happened
- never for crisis, finance, destructive config, or sensitive comms
---
## 4. Recommended Architecture for Hermes
### 4.1 Add a Tool-Call Assessment Layer
Add a pre-dispatch assessment object for every tool call.
Suggested shape:
```python
from dataclasses import dataclass

@dataclass
class ToolCallAssessment:
    tool_name: str
    risk_score: float      # 0.0 to 1.0
    confidence: float      # confidence in the assessment itself
    pattern: str           # pre_execution_gate | post_execution_review | confidence_threshold
    requires_human: bool
    reasons: list[str]
    reversible: bool
    crisis_sensitive: bool
```
Suggested execution point:
- inside `model_tools.handle_function_call()` before `orchestrator.dispatch()`
Why here:
- one place covers all tools
- one place can emit traces
- one place can remain model-agnostic
- one place lets plugins observe or override the assessment
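A minimal wiring sketch follows, assuming the `ToolCallAssessment` dataclass above. `assess_tool_call()`, `emit_trace()`, `prompt_human()`, and `dispatch()` are hypothetical stand-ins, not existing Hermes APIs:
```python
def assess_tool_call(name: str, args: dict) -> ToolCallAssessment:
    """Placeholder scorer; a real one would combine the Section 5 factors."""
    return ToolCallAssessment(
        tool_name=name, risk_score=0.0, confidence=1.0,
        pattern="confidence_threshold", requires_human=False,
        reasons=[], reversible=True, crisis_sensitive=False,
    )

def emit_trace(a: ToolCallAssessment) -> None:
    """Phase 1 behavior: log every assessment for later calibration."""
    print(f"[firewall] {a.tool_name} risk={a.risk_score} pattern={a.pattern}")

def prompt_human(a: ToolCallAssessment) -> bool:
    """Blocking confirmation; the real path would reuse tools/approval.py."""
    reply = input(f"Allow {a.tool_name}? reasons={a.reasons} [y/N] ")
    return reply.strip().lower() == "y"

def dispatch(name: str, args: dict):
    """Stand-in for the existing orchestrator dispatch."""
    return {"ok": True, "tool": name}

def handle_function_call(name: str, args: dict):
    assessment = assess_tool_call(name, args)
    emit_trace(assessment)                  # always trace, even when auto
    if assessment.requires_human:
        if not prompt_human(assessment):    # block until resolved
            return {"error": f"{name} denied by human reviewer"}
    return dispatch(name, args)             # the central boundary today
```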
### 4.2 Classify Tool Calls by Side-Effect Class
Suggested first-pass taxonomy:
#### A. Read-only
Examples:
- `read_file`
- `search_files`
- `browser_snapshot`
- `browser_console` read-only inspection
Pattern:
- confidence threshold
- almost always auto-execute
- human confirmation normally unnecessary
#### B. Local reversible edits
Examples:
- `patch`
- `write_file`
- `todo`
Pattern:
- confidence threshold
- human confirmation only when risk score rises because of path sensitivity or scope breadth
#### C. External side effects
Examples:
- `send_message`
- `cronjob`
- `delegate_task`
- smart-home actuation tools
Pattern:
- confidence threshold by default
- pre-execution gate when score exceeds threshold or when context is sensitive
#### D. Critical / destructive / crisis-sensitive
Examples:
- dangerous `terminal`
- financial actions
- deletion / kill / restart / deployment in sensitive paths
- outbound crisis intervention
Pattern:
- pre-execution gate
- never auto-execute on confidence alone
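A static lookup table is one way to bootstrap this taxonomy; the assignments below mirror the four classes above and are starting assumptions, not existing Hermes configuration:
```python
SIDE_EFFECT_CLASS = {
    "read_file": "read_only",
    "search_files": "read_only",
    "browser_snapshot": "read_only",
    "patch": "local_reversible",
    "write_file": "local_reversible",
    "todo": "local_reversible",
    "send_message": "external",
    "cronjob": "external",
    "delegate_task": "external",
    "terminal": "critical",
}

DEFAULT_PATTERN = {
    "read_only": "confidence_threshold",       # almost always auto-execute
    "local_reversible": "confidence_threshold",
    "external": "confidence_threshold",        # gate when risk score rises
    "critical": "pre_execution_gate",          # never auto on confidence alone
}

def default_pattern(tool_name: str) -> str:
    # Unknown tools fall into the most conservative class by default.
    return DEFAULT_PATTERN[SIDE_EFFECT_CLASS.get(tool_name, "critical")]
```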
### 4.3 Crisis Override Rule
Add a hard override:
```text
If tool call is crisis-sensitive AND (outbound OR irreversible):
    requires_human = True
    pattern = pre_execution_gate
```
This is the most important rule in the issue.
The model may draft the message.
The human must confirm before the system sends it.
### 4.4 Use Confidence Threshold for Normal Ops
For non-crisis operations, use Pattern 3.
Suggested logic:
- low risk + high assessment confidence -> auto-execute
- medium risk or medium confidence -> ask human
- high risk -> always ask human
Key point:
- confidence is not just "how sure the LLM is"
- confidence should combine:
- tool type certainty
- argument clarity
- path sensitivity
- external side effects
- crisis indicators
---
## 5. Recommended Initial Scoring Factors
A simple initial scorer is enough.
It does not need to be fancy.
Suggested factors:
### 5.1 Tool class risk
- read-only tools: very low base risk
- local mutation tools: moderate base risk
- external communication / automation tools: higher base risk
- shell execution: variable, often high
### 5.2 Target sensitivity
Examples:
- `/tmp` or local scratch paths -> lower
- repo files under git -> medium
- system config, credentials, secrets, gateway lifecycle -> high
- human-facing channels -> high if message content is sensitive
### 5.3 Reversibility
- reversible -> lower
- difficult but possible to undo -> medium
- practically irreversible -> high
### 5.4 Human-impact content
- no direct human impact -> low
- administrative impact -> medium
- crisis / safety / emotional intervention -> critical
### 5.5 Context certainty
- arguments are explicit and narrow -> higher confidence
- arguments are vague, inferred, or broad -> lower confidence
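A minimal additive scorer can combine these factors; the base risks, increments, and sensitive-path markers below are illustrative values meant to be calibrated against Phase 1 traces, not tuned numbers:
```python
BASE_RISK = {
    "read_only": 0.05,
    "local_reversible": 0.30,
    "external": 0.55,
    "critical": 0.80,
}
SENSITIVE_MARKERS = ("/etc/", ".env", "secrets", "credentials", "gateway")

def score_tool_call(side_effect_class: str, targets: list[str],
                    reversible: bool, crisis_sensitive: bool) -> float:
    risk = BASE_RISK.get(side_effect_class, 0.80)
    if any(m in t for t in targets for m in SENSITIVE_MARKERS):
        risk += 0.20              # 5.2 target sensitivity
    if not reversible:
        risk += 0.15              # 5.3 reversibility
    if crisis_sensitive:
        risk = 1.0                # 5.4 human impact overrides everything
    return min(risk, 1.0)
```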
---
## 6. Implementation Plan
### Phase 1: Assessment Without Behavior Change
Goal:
- score all tool calls
- log assessment decisions
- emit traces for review
- do not yet block new tool categories
Files to touch:
- `tools/approval.py`
- `model_tools.py`
- tests for assessment coverage
Output:
- risk/confidence trace for every tool call
- pattern recommendation for every tool call
Why first:
- lets us calibrate before changing runtime behavior
- avoids breaking existing workflows blindly
### Phase 2: Hard-Gate Crisis-Sensitive Outbound Actions
Goal:
- enforce Pattern 1 for crisis interventions
Likely surfaces:
- `send_message`
- any future telephony / call / escalation tools
- other tools with direct human intervention side effects
Rule:
- never auto-send crisis intervention content without human confirmation
### Phase 3: General Confidence Threshold for Normal Ops
Goal:
- apply Pattern 3 to all tool calls
- auto-run clearly safe actions
- escalate ambiguous or medium-risk actions
Likely thresholds:
- score < 0.25 -> auto-execute
- score 0.25 to 0.60 -> auto-execute if assessment confidence is high, otherwise confirm
- score > 0.60 -> confirm
- crisis-sensitive -> always confirm
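Expressed as a routing function (the 0.8 confidence bar is an added assumption, and all cut points are starting values to tune against Phase 1 traces):
```python
def route(risk: float, confidence: float, crisis_sensitive: bool) -> str:
    """Return 'auto' or 'confirm' under the illustrative Phase 3 thresholds."""
    if crisis_sensitive:
        return "confirm"          # Pattern 1 territory: always ask
    if risk > 0.60:
        return "confirm"          # high risk: always ask
    if risk >= 0.25:
        # medium band: auto only when the assessment itself is confident
        return "auto" if confidence >= 0.8 else "confirm"
    return "auto"                 # clearly safe: no prompt
```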
### Phase 4: Optional Post-Execution Review Lane
Goal:
- allow Pattern 2 only for explicitly reversible operations
Examples:
- maybe low-risk messaging drafts saved locally
- maybe reversible UI actions in specific environments
Important:
- this phase is optional
- Hermes should not rely on Pattern 2 for safety-critical flows
---
## 7. Verification Criteria for the Future Implementation
The eventual implementation should prove all of the following:
1. every tool call receives a scored assessment before dispatch
2. crisis-sensitive outbound actions always require human confirmation
3. dangerous terminal commands still preserve their current pre-execution gate
4. clearly safe read-only tool calls are not slowed by unnecessary prompts
5. assessment traces can be inspected after a run
6. approval decisions remain session-safe across CLI and gateway contexts
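Criteria 1 and 2 lend themselves to direct tests. A hypothetical pytest sketch, written against the intended behavior of a real `assess_tool_call()` (not the placeholder stub in Section 4.1), with an assumed `crisis` argument flag:
```python
def test_every_tool_call_is_assessed():
    a = assess_tool_call("read_file", {"path": "README.md"})
    assert 0.0 <= a.risk_score <= 1.0
    assert a.pattern in {"pre_execution_gate", "post_execution_review",
                         "confidence_threshold"}

def test_crisis_outbound_always_requires_human():
    a = assess_tool_call("send_message", {"text": "...", "crisis": True})
    assert a.requires_human
    assert a.pattern == "pre_execution_gate"
```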
---
## 8. Concrete Recommendations
### Recommendation 1
Do **not** replace the current dangerous-command approval path.
Generalize above it.
Why:
- existing terminal Pattern 1 already works
- this is the strongest piece of the current firewall
### Recommendation 2
Add a universal scorer in `model_tools.handle_function_call()`.
Why:
- that is the first point where Hermes knows the tool name and structured arguments
- it is the cleanest place to classify all tool calls uniformly
### Recommendation 3
Treat crisis-sensitive outbound intervention as a separate safety class.
Why:
- issue #878 explicitly calls for Pattern 1 here
- this matches Timmy's SOUL-level safety requirements
### Recommendation 4
Ship scoring traces before enforcement expansion.
Why:
- you cannot tune thresholds you cannot inspect
- false positives will otherwise frustrate normal usage
### Recommendation 5
Use Pattern 3 as the default policy for normal operations.
Why:
- full manual confirmation on every tool call is too expensive
- full autonomy is too risky
- Pattern 3 is the practical middle ground
---
## 9. Bottom Line
Hermes should implement a **two-track human confirmation firewall**:
1. **Pattern 1: Pre-Execution Gate**
- crisis interventions
- destructive terminal actions
- irreversible or safety-critical tool calls
2. **Pattern 3: Confidence Threshold**
- all ordinary tool calls
- driven by a universal tool-call assessment layer
- integrated at the central dispatch boundary
Pattern 2 should remain optional and narrow.
It is not the primary answer for Hermes.
The repo already contains the beginnings of this system.
The next step is not new theory.
It is to turn the existing approval path into a true **tool-call-wide human confirmation firewall**.
---
## References
- Issue #878 — Human Confirmation Firewall Implementation Patterns
- Issue #659 — Critical Research Tasks
- `tools/approval.py` — current dangerous-command approval flow and smart approvals
- `model_tools.py` — central tool dispatch boundary
- `gateway/run.py` — blocking approval handling for messaging sessions


@@ -1,25 +0,0 @@
from pathlib import Path

from tools.skills_hub import OptionalSkillSource

REPO_ROOT = Path(__file__).resolve().parents[1]


def test_optional_skill_source_scans_adversarial_ux_test():
    source = OptionalSkillSource()
    metas = {meta.identifier: meta for meta in source._scan_all()}
    assert "official/dogfood/adversarial-ux-test" in metas
    assert metas["official/dogfood/adversarial-ux-test"].name == "adversarial-ux-test"
    assert "tech-resistant user" in metas["official/dogfood/adversarial-ux-test"].description


def test_optional_skill_catalog_docs_list_adversarial_ux_test():
    optional_catalog = (REPO_ROOT / "website" / "docs" / "reference" / "optional-skills-catalog.md").read_text(encoding="utf-8")
    bundled_catalog = (REPO_ROOT / "website" / "docs" / "reference" / "skills-catalog.md").read_text(encoding="utf-8")
    assert "**adversarial-ux-test**" in optional_catalog
    assert "official/dogfood/adversarial-ux-test" in optional_catalog
    assert "`adversarial-ux-test`" in bundled_catalog
    assert "dogfood/adversarial-ux-test" in bundled_catalog


@@ -16,7 +16,6 @@ For example:
```bash
hermes skills install official/blockchain/solana
hermes skills install official/dogfood/adversarial-ux-test
hermes skills install official/mlops/flash-attention
```
@@ -57,12 +56,6 @@ hermes skills uninstall <skill-name>
| **blender-mcp** | Control Blender directly from Hermes via socket connection to the blender-mcp addon. Create 3D objects, materials, animations, and run arbitrary Blender Python (bpy) code. |
| **meme-generation** | Generate real meme images by picking a template and overlaying text with Pillow. Produces actual `.png` meme files. |
## Dogfood
| Skill | Description |
|-------|-------------|
| **adversarial-ux-test** | Roleplay the most difficult, tech-resistant user for a product — browse in-persona, rant, then filter through a RED/YELLOW/WHITE/GREEN pragmatism layer so only real UX friction becomes tickets. |
## DevOps
| Skill | Description |


@@ -59,12 +59,9 @@ DevOps and infrastructure automation skills.
## dogfood
Internal dogfooding and QA skills used to test Hermes Agent itself.
| Skill | Description | Path |
|-------|-------------|------|
| `dogfood` | Systematic exploratory QA testing of web applications — find bugs, capture evidence, and generate structured reports. | `dogfood/dogfood` |
| `adversarial-ux-test` | Roleplay the most difficult, tech-resistant user for a product — browse in-persona, rant, then filter through a RED/YELLOW/WHITE/GREEN pragmatism layer so only real UX friction becomes tickets. | `dogfood/adversarial-ux-test` |
| `hermes-agent-setup` | Help users configure Hermes Agent — CLI usage, setup wizard, model/provider selection, tools, skills, voice/STT/TTS, gateway, and troubleshooting. | `dogfood/hermes-agent-setup` |
## email