Compare commits

1 commit

| Author | SHA1 | Date |
|---|---|---|
|  | 3ce1f829a2 |  |

cli.py (16 lines changed)
@@ -589,6 +589,7 @@ from tools.terminal_tool import set_sudo_password_callback, set_approval_callbac
 from tools.skills_tool import set_secret_capture_callback
 from hermes_cli.callbacks import prompt_for_secret
 from tools.browser_tool import _emergency_cleanup_all_sessions as _cleanup_all_browsers
+from utils import repair_and_load_json

 # Guard to prevent cleanup from running multiple times on exit
 _cleanup_done = False
@@ -3569,7 +3570,11 @@ class HermesCLI:
                 result_json = _asyncio.run(
                     vision_analyze_tool(image_url=str(img_path), user_prompt=analysis_prompt)
                 )
-                result = _json.loads(result_json)
+                result = repair_and_load_json(
+                    result_json,
+                    default={},
+                    context="cli_image_analysis",
+                ) if isinstance(result_json, str) else {}
                 if result.get("success"):
                     description = result.get("analysis", "")
                     enriched_parts.append(
@@ -4960,7 +4965,14 @@ class HermesCLI:
         from tools.cronjob_tools import cronjob as cronjob_tool

         def _cron_api(**kwargs):
-            return json.loads(cronjob_tool(**kwargs))
+            result = repair_and_load_json(
+                cronjob_tool(**kwargs),
+                default=None,
+                context="cli_cron_command",
+            )
+            if isinstance(result, dict):
+                return result
+            return {"success": False, "error": "Invalid JSON from cronjob tool"}

         def _normalize_skills(values):
             normalized = []
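The diff above routes every `json.loads` call through `utils.repair_and_load_json`. The real implementation is not shown in this diff, so the following is only a minimal sketch of what such a helper might do, assuming the common failure modes the tests feed it (single-quoted keys, trailing commas); the repair heuristics here are assumptions, not the project's actual code:

```python
import json
import re


def repair_and_load_json(text, default=None, context=""):
    """Parse JSON leniently: try strict parsing first, then simple repairs.

    Returns `default` if the text cannot be salvaged. `context` is only a
    label for logging/diagnostics in the real helper; it is unused here.
    """
    try:
        return json.loads(text)
    except (json.JSONDecodeError, TypeError):
        pass
    try:
        # Naive repairs: swap single quotes for double quotes (breaks on
        # apostrophes inside strings), then drop trailing commas before a
        # closing bracket or brace.
        repaired = text.replace("'", '"')
        repaired = re.sub(r",\s*([\]}])", r"\1", repaired)
        return json.loads(repaired)
    except (json.JSONDecodeError, TypeError, AttributeError):
        return default
```

With this sketch, `repair_and_load_json("{'success': true,}")` recovers `{"success": True}`, while unparseable input falls back to `default` instead of raising.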
@@ -1,190 +0,0 @@
---
name: adversarial-ux-test
description: Roleplay the most difficult, tech-resistant user for your product. Browse the app as that persona, find every UX pain point, then filter complaints through a pragmatism layer to separate real problems from noise. Creates actionable tickets from genuine issues only.
version: 1.0.0
author: Omni @ Comelse
license: MIT
metadata:
  hermes:
    tags: [qa, ux, testing, adversarial, dogfood, personas, user-testing]
    related_skills: [dogfood]
---

# Adversarial UX Test

Roleplay the worst-case user for your product — the person who hates technology, doesn't want your software, and will find every reason to complain. Then filter their feedback through a pragmatism layer to separate real UX problems from "I hate computers" noise.

Think of it as an automated "mom test" — but angry.

## Why This Works

Most QA finds bugs. This finds **friction**. A technically correct app can still be unusable for real humans. The adversarial persona catches:

- Confusing terminology that makes sense to developers but not users
- Too many steps to accomplish basic tasks
- Missing onboarding or "aha moments"
- Accessibility issues (font size, contrast, click targets)
- Cold-start problems (empty states, no demo content)
- Paywall/signup friction that kills conversion

The **pragmatism filter** (Step 4) is what makes this useful instead of just entertaining. Without it, you'd add a "print this page" button to every screen because Grandpa can't figure out PDFs.

## How to Use

Tell the agent:

```
"Run an adversarial UX test on [URL]"
"Be a grumpy [persona type] and test [app name]"
"Do an asshole user test on my staging site"
```

You can provide a persona or let the agent generate one based on your product's target audience.

## Step 1: Define the Persona

If no persona is provided, generate one by answering:

1. **Who is the HARDEST user for this product?** (age 50+, non-technical role, decades of experience doing it "the old way")
2. **What is their tech comfort level?** (the lower the better — WhatsApp-only, paper notebooks, wife set up their email)
3. **What is the ONE thing they need to accomplish?** (their core job, not your feature list)
4. **What would make them give up?** (too many clicks, jargon, slow, confusing)
5. **How do they talk when frustrated?** (blunt, sweary, dismissive, sighing)

### Good Persona Example

> **"Big Mick" McAllister** — 58-year-old S&C coach. Uses WhatsApp and that's it. His "spreadsheet" is a paper notebook. "If I can't figure it out in 10 seconds I'm going back to my notebook." Needs to log session results for 25 players. Hates small text, jargon, and passwords.

### Bad Persona Example

> "A user who doesn't like the app" — too vague, no constraints, no voice.

The persona must be **specific enough to stay in character** for 20 minutes of testing.

## Step 2: Become the Asshole (Browse as the Persona)

1. Read any available project docs for app context and URLs
2. **Fully inhabit the persona** — their frustrations, limitations, goals
3. Navigate to the app using browser tools
4. **Attempt the persona's ACTUAL TASKS** (not a feature tour):
   - Can they do what they came to do?
   - How many clicks/screens to accomplish it?
   - What confuses them?
   - What makes them angry?
   - Where do they get lost?
   - What would make them give up and go back to their old way?
5. Test these friction categories:
   - **First impression** — would they even bother past the landing page?
   - **Core workflow** — the ONE thing they need to do most often
   - **Error recovery** — what happens when they do something wrong?
   - **Readability** — text size, contrast, information density
   - **Speed** — does it feel faster than their current method?
   - **Terminology** — any jargon they wouldn't understand?
   - **Navigation** — can they find their way back? Do they know where they are?
6. Take screenshots of every pain point
7. Check browser console for JS errors on every page

## Step 3: The Rant (Write Feedback in Character)

Write the feedback AS THE PERSONA — in their voice, with their frustrations. This is not a bug report. This is a real human venting.

```
[PERSONA NAME]'s Review of [PRODUCT]

Overall: [Would they keep using it? Yes/No/Maybe with conditions]

THE GOOD (grudging admission):
- [things even they have to admit work]

THE BAD (legitimate UX issues):
- [real problems that would stop them from using the product]

THE UGLY (showstoppers):
- [things that would make them uninstall/cancel immediately]

SPECIFIC COMPLAINTS:
1. [Page/feature]: "[quote in persona voice]" — [what happened, expected]
2. ...

VERDICT: "[one-line persona quote summarizing their experience]"
```

## Step 4: The Pragmatism Filter (Critical — Do Not Skip)

Step OUT of the persona. Evaluate each complaint as a product person:

- **RED: REAL UX BUG** — Any user would have this problem, not just grumpy ones. Fix it.
- **YELLOW: VALID BUT LOW PRIORITY** — Real issue but only for extreme users. Note it.
- **WHITE: PERSONA NOISE** — "I hate computers" talking, not a product problem. Skip it.
- **GREEN: FEATURE REQUEST** — Good idea hidden in the complaint. Consider it.

### Filter Criteria

1. Would a 35-year-old competent-but-busy user have the same complaint? → RED
2. Is this a genuine accessibility issue (font size, contrast, click targets)? → RED
3. Is this "I want it to work like paper" resistance to digital? → WHITE
4. Is this a real workflow inefficiency the persona stumbled on? → YELLOW or RED
5. Would fixing this add complexity for the 80% who are fine? → WHITE
6. Does the complaint reveal a missing onboarding moment? → GREEN

**This filter is MANDATORY.** Never ship raw persona complaints as tickets.
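The filter criteria above amount to a small decision procedure. As an illustration only, they can be sketched as a triage function — the field names here are hypothetical and not part of the skill or any Hermes API:

```python
def triage(complaint):
    """Map yes/no answers about a persona complaint to a filter bucket.

    `complaint` is a dict of booleans answering the filter criteria,
    e.g. {"accessibility": True}. Hypothetical keys, for illustration.
    """
    # Criteria 1-2: problems any user would hit, or genuine accessibility
    # issues, are real UX bugs.
    if complaint.get("any_user_affected") or complaint.get("accessibility"):
        return "RED"
    # Criterion 6: a missing onboarding moment is a feature idea.
    if complaint.get("missing_onboarding"):
        return "GREEN"
    # Criterion 4: a real workflow inefficiency, but persona-specific.
    if complaint.get("workflow_inefficiency"):
        return "YELLOW"
    # Criteria 3 and 5 (paper resistance, complexity for the happy 80%)
    # fall through to noise.
    return "WHITE"
```

The ordering matters: accessibility and any-user problems win over everything else, matching the rule that RED items always become tickets.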
## Step 5: Create Tickets

For **RED** and **GREEN** items only:
- Clear, actionable title
- Include the persona's verbatim quote (entertaining + memorable)
- The real UX issue underneath (objective)
- A suggested fix (actionable)
- Tag/label: "ux-review"

For **YELLOW** items: one catch-all ticket with all notes.

**WHITE** items appear in the report only. No tickets.

**Max 10 tickets per session** — focus on the worst issues.

## Step 6: Report

Deliver:
1. The persona rant (Step 3) — entertaining and visceral
2. The filtered assessment (Step 4) — pragmatic and actionable
3. Tickets created (Step 5) — with links
4. Screenshots of key issues

## Tips

- **One persona per session.** Don't mix perspectives.
- **Stay in character during Steps 2-3.** Break character only at Step 4.
- **Test the CORE WORKFLOW first.** Don't get distracted by settings pages.
- **Empty states are gold.** New user experience reveals the most friction.
- **The best findings are RED items the persona found accidentally** while trying to do something else.
- **If the persona has zero complaints, your persona is too tech-savvy.** Make them older, less patient, more set in their ways.
- **Run this before demos, launches, or after shipping a batch of features.**
- **Register as a NEW user when possible.** Don't use pre-seeded admin accounts — the cold-start experience is where most friction lives.
- **Zero WHITE items is a signal, not a failure.** If the pragmatism filter finds no noise, your product has real UX problems, not just a grumpy persona.
- **Check known issues in project docs AFTER the test.** If the persona found a bug that's already in the known issues list, that's actually the most damning finding — it means the team knew about it but never felt the user's pain.
- **Subscription/paywall testing is critical.** Test with expired accounts, not just active ones. The "what happens when you can't pay" experience reveals whether the product respects users or holds their data hostage.
- **Count the clicks to accomplish the persona's ONE task.** If it's more than 5, that's almost always a RED finding regardless of persona tech level.

## Example Personas by Industry

These are starting points — customize for your specific product:

| Product Type | Persona | Age | Key Trait |
|-------------|---------|-----|-----------|
| CRM | Retirement home director | 68 | Filing cabinet is the current CRM |
| Photography SaaS | Rural wedding photographer | 62 | Books clients by phone, invoices on paper |
| AI/ML Tool | Department store buyer | 55 | Burned by 3 failed tech startups |
| Fitness App | Old-school gym coach | 58 | Paper notebook, thick fingers, bad eyes |
| Accounting | Family bakery owner | 64 | Shoebox of receipts, hates subscriptions |
| E-commerce | Market stall vendor | 60 | Cash only, smartphone is for calls |
| Healthcare | Senior GP | 63 | Dictates notes, nurse handles the computer |
| Education | Veteran teacher | 57 | Chalk and talk, worksheets in ring binders |

## Rules

- Stay in character during Steps 2-3
- Be genuinely mean but fair — find real problems, not manufactured ones
- The pragmatism filter (Step 4) is **MANDATORY**
- Screenshots required for every complaint
- Max 10 tickets per session
- Test on staging/deployed app, not local dev
- One persona, one session, one report
tests/cli/test_cli_json_repair.py (new file, 62 lines)
@@ -0,0 +1,62 @@
import sys
import types
from unittest.mock import patch


def _stub_auxiliary_client():
    stub = types.ModuleType("agent.auxiliary_client")
    stub.call_llm = lambda *args, **kwargs: None
    stub.resolve_provider_client = lambda *args, **kwargs: (None, None)
    stub.get_text_auxiliary_client = lambda *args, **kwargs: (None, None)
    stub.async_call_llm = lambda *args, **kwargs: None
    stub.extract_content_or_reasoning = lambda *args, **kwargs: ""
    stub._OR_HEADERS = {}
    stub._get_task_timeout = lambda *args, **kwargs: 30
    sys.modules["agent.auxiliary_client"] = stub


def _stub_vision_tools(vision_analyze_tool):
    stub = types.ModuleType("tools.vision_tools")
    stub.vision_analyze_tool = vision_analyze_tool
    sys.modules["tools.vision_tools"] = stub


def test_preprocess_images_with_vision_repairs_malformed_json(tmp_path):
    _stub_auxiliary_client()
    from cli import HermesCLI

    cli_obj = HermesCLI.__new__(HermesCLI)
    image_path = tmp_path / "test.png"
    image_path.write_bytes(b"fake-image-bytes")

    async def fake_vision(**kwargs):
        return "{'success': true, 'analysis': 'Recovered image description',}"

    _stub_vision_tools(fake_vision)
    result = HermesCLI._preprocess_images_with_vision(
        cli_obj,
        "Describe this",
        [image_path],
        announce=False,
    )

    assert "Recovered image description" in result
    assert "Describe this" in result
    assert str(image_path) in result


def test_handle_cron_command_repairs_malformed_json(capsys):
    _stub_auxiliary_client()
    from cli import HermesCLI

    cli_obj = HermesCLI.__new__(HermesCLI)
    malformed_result = """{'success': true, 'jobs': [{'job_id': 'job-1234567890ab', 'name': 'Nightly Check', 'state': 'scheduled', 'schedule': 'every 1h', 'repeat': 'forever', 'prompt_preview': 'Check server status', 'skills': ['blogwatcher',], 'next_run_at': '2026-04-22T01:00:00Z',},],}"""

    with patch("tools.cronjob_tools.cronjob", return_value=malformed_result):
        HermesCLI._handle_cron_command(cli_obj, "/cron list")

    out = capsys.readouterr().out
    assert "Scheduled Jobs:" in out
    assert "job-1234567890ab" in out
    assert "Nightly Check" in out
    assert "blogwatcher" in out
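A note on why these tests feed single-quoted payloads: Python's strict JSON parser rejects them outright, so only the repair path can recover the data. A quick demonstration (standard library only):

```python
import json

# The exact payload the vision test above uses: single-quoted keys/values
# and a trailing comma, which strict JSON forbids.
malformed = "{'success': true, 'analysis': 'Recovered image description',}"

try:
    json.loads(malformed)
    outcome = "parsed"
except json.JSONDecodeError:
    # json.loads raises because property names must be double-quoted.
    outcome = "rejected"
```

Without a repair layer, every one of these tool responses would surface as a hard parse failure instead of a usable result.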
@@ -1,25 +0,0 @@
|
||||
from pathlib import Path
|
||||
|
||||
from tools.skills_hub import OptionalSkillSource
|
||||
|
||||
|
||||
REPO_ROOT = Path(__file__).resolve().parents[1]
|
||||
|
||||
|
||||
def test_optional_skill_source_scans_adversarial_ux_test():
|
||||
source = OptionalSkillSource()
|
||||
metas = {meta.identifier: meta for meta in source._scan_all()}
|
||||
|
||||
assert "official/dogfood/adversarial-ux-test" in metas
|
||||
assert metas["official/dogfood/adversarial-ux-test"].name == "adversarial-ux-test"
|
||||
assert "tech-resistant user" in metas["official/dogfood/adversarial-ux-test"].description
|
||||
|
||||
|
||||
def test_optional_skill_catalog_docs_list_adversarial_ux_test():
|
||||
optional_catalog = (REPO_ROOT / "website" / "docs" / "reference" / "optional-skills-catalog.md").read_text(encoding="utf-8")
|
||||
bundled_catalog = (REPO_ROOT / "website" / "docs" / "reference" / "skills-catalog.md").read_text(encoding="utf-8")
|
||||
|
||||
assert "**adversarial-ux-test**" in optional_catalog
|
||||
assert "official/dogfood/adversarial-ux-test" in optional_catalog
|
||||
assert "`adversarial-ux-test`" in bundled_catalog
|
||||
assert "dogfood/adversarial-ux-test" in bundled_catalog
|
||||
tests/tools/test_browser_json_repair.py (new file, 108 lines)
@@ -0,0 +1,108 @@
import io
import json
import sys
import types
from unittest.mock import MagicMock, patch


def _stub_auxiliary_client():
    stub = types.ModuleType("agent.auxiliary_client")
    stub.call_llm = lambda *args, **kwargs: None
    stub.resolve_provider_client = lambda *args, **kwargs: (None, None)
    stub.get_text_auxiliary_client = lambda *args, **kwargs: (None, None)
    stub.async_call_llm = lambda *args, **kwargs: None
    stub.extract_content_or_reasoning = lambda *args, **kwargs: ""
    stub._OR_HEADERS = {}
    stub._get_task_timeout = lambda *args, **kwargs: 30
    sys.modules["agent.auxiliary_client"] = stub


def test_run_browser_command_repairs_malformed_stdout_envelope(tmp_path):
    _stub_auxiliary_client()
    from tools.browser_tool import _run_browser_command

    mock_proc = MagicMock()
    mock_proc.returncode = 0
    mock_proc.wait.return_value = 0
    fake_session = {
        "session_name": "test-session",
        "session_id": "test-id",
        "cdp_url": None,
    }
    malformed_stdout = "{'success': true, 'data': {'url': 'https://example.com',},}"

    def fake_open(path, mode="r", *args, **kwargs):
        path = str(path)
        if path.endswith("_stdout_navigate"):
            return io.StringIO(malformed_stdout)
        if path.endswith("_stderr_navigate"):
            return io.StringIO("")
        raise FileNotFoundError(path)

    with (
        patch("tools.browser_tool._find_agent_browser", return_value="/usr/bin/agent-browser"),
        patch("tools.browser_tool._get_session_info", return_value=fake_session),
        patch("tools.browser_tool._socket_safe_tmpdir", return_value=str(tmp_path)),
        patch("tools.browser_tool._merge_browser_path", side_effect=lambda p: p),
        patch("tools.interrupt.is_interrupted", return_value=False),
        patch("subprocess.Popen", return_value=mock_proc),
        patch("os.open", return_value=99),
        patch("os.close"),
        patch("os.unlink"),
        patch("builtins.open", side_effect=fake_open),
    ):
        result = _run_browser_command("task-1", "navigate", ["https://example.com"])

    assert result["success"] is True
    assert result["data"]["url"] == "https://example.com"


def test_agent_browser_eval_repairs_malformed_json_result():
    _stub_auxiliary_client()
    from tools.browser_tool import _browser_eval

    with patch(
        "tools.browser_tool._run_browser_command",
        return_value={"success": True, "data": {"result": "{'items': ['a', 'b',],}"}},
    ):
        result = json.loads(_browser_eval("document.body.innerText", task_id="test"))

    assert result["success"] is True
    assert result["result"] == {"items": ["a", "b"]}
    assert result["result_type"] == "dict"


def test_camofox_eval_repairs_malformed_json_result():
    _stub_auxiliary_client()
    from tools.browser_tool import _camofox_eval

    with (
        patch("tools.browser_camofox._ensure_tab", return_value={"tab_id": "tab-1", "user_id": "user-1"}),
        patch("tools.browser_camofox._post", return_value={"result": "{'count': 3,}"}),
    ):
        result = json.loads(_camofox_eval("2+1", task_id="test"))

    assert result["success"] is True
    assert result["result"] == {"count": 3}
    assert result["result_type"] == "dict"


def test_browser_get_images_repairs_malformed_json_result():
    _stub_auxiliary_client()
    from tools.browser_tool import browser_get_images

    with patch(
        "tools.browser_tool._run_browser_command",
        return_value={
            "success": True,
            "data": {
                "result": "[{\"src\": \"https://example.com/cat.png\", \"alt\": \"cat\",}]"
            },
        },
    ):
        result = json.loads(browser_get_images(task_id="test"))

    assert result["success"] is True
    assert result["count"] == 1
    assert result["images"] == [{"src": "https://example.com/cat.png", "alt": "cat"}]
    assert "warning" not in result
@@ -67,6 +67,7 @@ from typing import Dict, Any, Optional, List
 from pathlib import Path
 from agent.auxiliary_client import call_llm
 from hermes_constants import get_hermes_home
+from utils import repair_and_load_json

 try:
     from tools.website_policy import check_website_access
@@ -1171,8 +1172,12 @@ def _run_browser_command(
         return {"success": False, "error": f"Browser command '{command}' returned no output"}

     if stdout_text:
-        try:
-            parsed = json.loads(stdout_text)
+        parsed = repair_and_load_json(
+            stdout_text,
+            default=None,
+            context=f"browser_{command}_stdout",
+        )
+        if isinstance(parsed, dict):
             # Warn if snapshot came back empty (common sign of daemon/CDP issues)
             if command == "snapshot" and parsed.get("success"):
                 snap_data = parsed.get("data", {})
@@ -1181,35 +1186,35 @@
                         "Possible stale daemon or CDP connection issue. "
                         "returncode=%s", returncode)
             return parsed
-        except json.JSONDecodeError:
-            raw = stdout_text[:2000]
-            logger.warning("browser '%s' returned non-JSON output (rc=%s): %s",
-                           command, returncode, raw[:500])
-
-            if command == "screenshot":
-                stderr_text = (stderr or "").strip()
-                combined_text = "\n".join(
-                    part for part in [stdout_text, stderr_text] if part
-                )
-                recovered_path = _extract_screenshot_path_from_text(combined_text)
-
-                if recovered_path and Path(recovered_path).exists():
-                    logger.info(
-                        "browser 'screenshot' recovered file from non-JSON output: %s",
-                        recovered_path,
-                    )
-                    return {
-                        "success": True,
-                        "data": {
-                            "path": recovered_path,
-                            "raw": raw,
-                        },
-                    }
-
-                return {
-                    "success": False,
-                    "error": f"Non-JSON output from agent-browser for '{command}': {raw}"
-                }
+
+        raw = stdout_text[:2000]
+        logger.warning("browser '%s' returned non-JSON output (rc=%s): %s",
+                       command, returncode, raw[:500])
+
+        if command == "screenshot":
+            stderr_text = (stderr or "").strip()
+            combined_text = "\n".join(
+                part for part in [stdout_text, stderr_text] if part
+            )
+            recovered_path = _extract_screenshot_path_from_text(combined_text)
+
+            if recovered_path and Path(recovered_path).exists():
+                logger.info(
+                    "browser 'screenshot' recovered file from non-JSON output: %s",
+                    recovered_path,
+                )
+                return {
+                    "success": True,
+                    "data": {
+                        "path": recovered_path,
+                        "raw": raw,
+                    },
+                }
+
+            return {
+                "success": False,
+                "error": f"Non-JSON output from agent-browser for '{command}': {raw}"
+            }

     # Check for errors
     if returncode != 0:
@@ -1777,10 +1782,11 @@ def _browser_eval(expression: str, task_id: Optional[str] = None) -> str:
     # is valid JSON, parse it so the model gets structured data.
     parsed = raw_result
     if isinstance(raw_result, str):
-        try:
-            parsed = json.loads(raw_result)
-        except (json.JSONDecodeError, ValueError):
-            pass  # keep as string
+        parsed = repair_and_load_json(
+            raw_result,
+            default=raw_result,
+            context="browser_eval_result",
+        )

     return json.dumps({
         "success": True,
@@ -1801,10 +1807,11 @@ def _camofox_eval(expression: str, task_id: Optional[str] = None) -> str:
     raw_result = resp.get("result") if isinstance(resp, dict) else resp
     parsed = raw_result
     if isinstance(raw_result, str):
-        try:
-            parsed = json.loads(raw_result)
-        except (json.JSONDecodeError, ValueError):
-            pass
+        parsed = repair_and_load_json(
+            raw_result,
+            default=raw_result,
+            context="camofox_eval_result",
+        )

     return json.dumps({
         "success": True,
@@ -1904,26 +1911,29 @@ def browser_get_images(task_id: Optional[str] = None) -> str:
     if result.get("success"):
         data = result.get("data", {})
         raw_result = data.get("result", "[]")

-        try:
-            # Parse the JSON string returned by JavaScript
-            if isinstance(raw_result, str):
-                images = json.loads(raw_result)
-            else:
-                images = raw_result
-
-            return json.dumps({
-                "success": True,
-                "images": images,
-                "count": len(images)
-            }, ensure_ascii=False)
-        except json.JSONDecodeError:
-            return json.dumps({
-                "success": True,
-                "images": [],
-                "count": 0,
-                "warning": "Could not parse image data"
-            }, ensure_ascii=False)
+        warning = None
+        if isinstance(raw_result, str):
+            images = repair_and_load_json(
+                raw_result,
+                default=None,
+                context="browser_get_images_result",
+            )
+        else:
+            images = raw_result
+
+        if not isinstance(images, list):
+            images = []
+            warning = "Could not parse image data"
+
+        payload = {
+            "success": True,
+            "images": images,
+            "count": len(images),
+        }
+        if warning:
+            payload["warning"] = warning
+        return json.dumps(payload, ensure_ascii=False)
     else:
         return json.dumps({
             "success": False,
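The screenshot fallback in `_run_browser_command` leans on `_extract_screenshot_path_from_text`, whose implementation is not shown in this diff. As a purely hypothetical sketch of the idea — pulling the first image-file path out of free-form tool output — something like this would suffice (the real helper in `tools.browser_tool` may differ):

```python
import re


def extract_screenshot_path(text):
    """Return the first absolute image-file path found in `text`, or None.

    Hypothetical stand-in for tools.browser_tool's
    _extract_screenshot_path_from_text; matches paths ending in a common
    raster-image extension.
    """
    match = re.search(r"(/[^\s'\"]+\.(?:png|jpg|jpeg))", text)
    return match.group(1) if match else None
```

This is why the non-JSON screenshot branch can still report success: even when the CLI prints a human-readable message instead of a JSON envelope, a file path embedded in that message is often recoverable.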
@@ -16,7 +16,6 @@ For example:

 ```bash
 hermes skills install official/blockchain/solana
-hermes skills install official/dogfood/adversarial-ux-test
 hermes skills install official/mlops/flash-attention
 ```

@@ -57,12 +56,6 @@ hermes skills uninstall <skill-name>
 | **blender-mcp** | Control Blender directly from Hermes via socket connection to the blender-mcp addon. Create 3D objects, materials, animations, and run arbitrary Blender Python (bpy) code. |
 | **meme-generation** | Generate real meme images by picking a template and overlaying text with Pillow. Produces actual `.png` meme files. |
-
-## Dogfood
-
-| Skill | Description |
-|-------|-------------|
-| **adversarial-ux-test** | Roleplay the most difficult, tech-resistant user for a product — browse in-persona, rant, then filter through a RED/YELLOW/WHITE/GREEN pragmatism layer so only real UX friction becomes tickets. |

 ## DevOps

 | Skill | Description |

@@ -59,12 +59,9 @@ DevOps and infrastructure automation skills.

 ## dogfood

 Internal dogfooding and QA skills used to test Hermes Agent itself.

 | Skill | Description | Path |
 |-------|-------------|------|
 | `dogfood` | Systematic exploratory QA testing of web applications — find bugs, capture evidence, and generate structured reports. | `dogfood/dogfood` |
-| `adversarial-ux-test` | Roleplay the most difficult, tech-resistant user for a product — browse in-persona, rant, then filter through a RED/YELLOW/WHITE/GREEN pragmatism layer so only real UX friction becomes tickets. | `dogfood/adversarial-ux-test` |
 | `hermes-agent-setup` | Help users configure Hermes Agent — CLI usage, setup wizard, model/provider selection, tools, skills, voice/STT/TTS, gateway, and troubleshooting. | `dogfood/hermes-agent-setup` |

 ## email