**Priority:** High -- this is foundational for scaling to complex tasks
The main agent becomes an orchestrator that delegates context-heavy tasks to subagents with isolated context. Each subagent returns a summary, keeping the orchestrator's context clean.
**Core design:**
- `delegate_task(goal, context, toolsets=[])` tool -- spawns a fresh AIAgent with its own conversation, limited toolset, and task-specific system prompt (see the sketch after this list)
- Parent passes a goal string + optional context blob; child returns a structured summary
- Configurable depth limit (e.g., max 2 levels) to prevent runaway recursion
- Subagent inherits the same terminal/browser session (task_id) but gets a fresh message history
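A minimal sketch of the tool handler, assuming a hypothetical `AIAgent` constructor and `run()` result shape (the real interface may differ):
```python
def delegate_task(goal: str, context: str = "", toolsets: list = None,
                  task_id: str = "", depth: int = 0, max_depth: int = 2) -> dict:
    """Spawn a fresh subagent with its own history, bounded by a depth limit."""
    if depth >= max_depth:
        return {"error": f"delegation depth limit ({max_depth}) reached"}
    subagent = AIAgent(                                   # constructor args are assumptions
        system_prompt=f"You are a focused subagent.\nGoal: {goal}\nContext: {context}",
        toolsets=toolsets or ["terminal", "file"],        # limited toolset
        task_id=task_id,                                  # shared terminal/browser session
        messages=[],                                      # fresh conversation history
    )
    result = subagent.run(goal)
    return {"summary": result.summary, "tool_calls": result.tool_call_count}
```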
**What other agents do:**
- **OpenClaw**: `sessions_spawn` + `subagents` tool with list/kill/steer actions, depth limits, rate limiting. Cross-session agent-to-agent coordination via `sessions_send`.
- **Codex**: `spawn_agent` / `send_input` / `close_agent` / `wait_for_agent` with configurable timeouts. Thread manager for concurrent agents.
- **Cline**: Up to 5 parallel subagents per invocation. Subagents get restricted tool access (read, list, search, bash, skill, attempt). Progress tracking with stats (tool calls, tokens, cost, context usage).
- **OpenCode**: `TaskTool` creates subagent sessions with permission inheritance. Resumable tasks via `task_id`. Parent-child session relationships.
**Our approach:**
- Start with a single `delegate_task` tool (like Cline's model -- simple, bounded)
**Priority:** High -- every serious agent has this now
A `todo` tool the agent uses to decompose complex tasks, track progress, and recover from failures. Must be **cache-friendly** -- no system prompt mutation, no injected messages that invalidate the KV cache prefix.
**What other agents do:**
- **Cursor (Claude)**: `TodoWrite` tool. Flat list, 4 states, `merge` flag. Todos injected into context every turn. Effective but **cache-hostile** -- mutating the system prompt on every turn invalidates the entire prefix cache for all prior tokens.
- **OpenCode**: `todowrite` / `todoread` as separate tools. State lives server-side. Agent reads it back when needed. **Cache-friendly** -- no context mutation; the todo state appears only as tool call/response pairs in the normal conversation flow.
- **Cline**: `focus_chain` tool for task management with sequential focus
- **OpenClaw**: `/mesh <goal>` command auto-plans and runs multi-step workflows
### The caching problem
With OpenRouter / vLLM / any prefix-caching provider, the prompt is cached from the start of the sequence. If we inject a changing todo list into the system prompt (or anywhere before the most recent messages), we invalidate the cache for the entire conversation. On a 100K-token conversation, this means re-processing the full prefix on every turn instead of just the new tokens. That's a massive cost and latency penalty.
**Rule: Never modify anything before the latest assistant/user turn.** Anything we add must go at the *end* of the conversation (as a tool response or appended to the latest message), or live entirely inside tool call/response pairs that the agent initiated.
### Design: Pure tool-based, no context mutation
**Two tools: `todo_write` and `todo_read`**
`todo_write` -- create or update the task list:
```
Parameters:
todos: [{id, content, status}] # the items to write
merge: bool (default false) # false = replace list, true = update by id
```
Returns: the full current todo list (so the agent immediately sees the result)
`todo_read` -- retrieve the current task list:
```
Parameters: (none)
```
Returns: the full current todo list, plus metadata (items completed, items remaining, tool calls since last update)
This is how OpenCode does it and it's the right call. The agent's own tool call history is the "memory" -- the `todo_write` response containing the full list is right there in the conversation. No injection needed.
**Item schema:**
```
{
"id": "1", # unique string identifier
"content": "...", # description of the task
"status": "pending" # one of: pending, in_progress, completed, cancelled
}
```
**No priority field.** Order in the list is the priority.
**Only 4 states.** No "failed" or "blocked" -- if something fails, cancel it and add a revised item.
### Where the behavior rules live: the tool description
Instead of adding instructions to the system prompt, we put all the usage guidance directly in the tool description. This is part of the tool schema, which is sent once at the start and **is cached perfectly** -- it never changes mid-conversation.
The `todo_write` tool description teaches the agent everything:
```
Manage your task list for the current session. Use this to plan and track
multi-step work.
When to use: Complex tasks with 3+ steps, when the user gives multiple tasks,
or when you need to plan before acting. Skip for simple single-step requests.
Behavior:
- Set merge=false to create a fresh plan (replaces existing todos)
- Set merge=true to update status or add follow-up items
- Mark the first item in_progress immediately and start working on it
- Only keep ONE item in_progress at a time
- Mark items completed as soon as you finish them
- If something fails, mark it cancelled and add a revised item
- Don't add "test" or "verify" items unless the user asks for them
- Update todos silently alongside your other work
Returns the full current list after every write.
```
This is ~150 tokens in the tool schema. It's static, cached, and teaches the agent the same behavioral rules without touching the system prompt.
### How it survives context compression
The state lives server-side on the AIAgent instance (in-memory dict). The conversation history just has the tool call/response pairs as a trail of what happened.
**On context compression, re-inject the current todo state.** Compression already invalidates the cache (the middle of the conversation is being rewritten anyway), so there's zero additional cache cost to appending the todo state once at that moment. When `run_agent.py` runs compression, if `_todo_state` is non-empty, it appends a synthetic tool response at the end of the compressed history:
```
[System: Your current task list was preserved across context compression]
- [x] 1. Set up project structure (completed)
- [>] 2. Implement the search endpoint (in_progress)
- [ ] 3. Add error handling (pending)
```
This is the one place we do inject -- but only on compression events, which are rare (once every ~50-100 turns). The cache was already blown by compression itself, so it costs nothing extra.
The agent does NOT need to "know" to call `todo_read` after compression -- it just sees its plan right there in the history. `todo_read` still exists as a tool for any time the agent wants to double-check, but it's not load-bearing for the compression case.
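A small sketch of that re-injection step; the message shape and helper names are assumptions about the history format, not existing code:
```python
STATUS_MARK = {"completed": "x", "in_progress": ">", "pending": " ", "cancelled": "-"}

def render_todo_block(todos: list) -> str:
    """Render the preserved plan in the format shown above."""
    lines = ["[System: Your current task list was preserved across context compression]"]
    for t in todos:
        lines.append(f"- [{STATUS_MARK[t['status']]}] {t['id']}. {t['content']} ({t['status']})")
    return "\n".join(lines)

def reinject_after_compression(compressed_history: list, todo_state: list) -> list:
    """Append the plan once, only when compression runs and a plan exists."""
    if todo_state:
        # Shown as a plain message here; an OpenAI-style history would use a
        # tool-role message tied to a synthetic tool_call_id instead.
        compressed_history.append({"role": "user", "content": render_todo_block(todo_state)})
    return compressed_history
```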
### Checkpoint nudges without context mutation
Instead of injecting system messages (which mutate the context), we **piggyback on existing tool responses.** When any tool returns its result and the tool call counter has crossed a threshold, we append a one-line note to that tool's response:
```
[10+ tool calls since last todo update -- consider calling todo_read to review your plan]
```
This is a tiny addition to a response that's already at the end of the conversation -- zero cache impact on previous turns. It costs ~20 tokens and only appears once per threshold crossing.
Similarly, on tool errors, we can append to the error response:
```
[This error may affect your plan -- call todo_read to review]
```
These hints go in the tool response, not in a new system message. The agent processes them naturally as part of reading the tool output.
### Server-side state
```python
# On AIAgent class
_todo_state: Dict[str, List[dict]] # session_key -> list of todo items
_todo_tool_calls_since_update: int # counter for checkpoint nudges
```
- `todo_write` updates `_todo_state` and resets the counter
- `todo_read` reads `_todo_state` and resets the counter
- Every `handle_function_call` increments the counter
- When the counter exceeds the threshold (default 10), the next tool response gets the checkpoint hint appended, then the hint flag is cleared until the next threshold crossing (see the sketch below)
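A sketch of the nudge wiring; the attribute names match the fields above, the rest is illustrative:
```python
TODO_NUDGE_THRESHOLD = 10  # tool calls since the last todo_write/todo_read

def maybe_append_todo_nudge(agent, tool_response: str) -> str:
    """Append the checkpoint hint exactly once per threshold crossing."""
    agent._todo_tool_calls_since_update += 1
    crossed = agent._todo_tool_calls_since_update == TODO_NUDGE_THRESHOLD + 1
    if agent._todo_state and crossed:
        tool_response += ("\n[10+ tool calls since last todo update -- "
                          "consider calling todo_read to review your plan]")
    return tool_response
```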
### Summary: what we took from Cursor, what we changed
| Aspect | Cursor's approach | Our approach |
|--------|------------------|--------------|
| State visibility | Injected into context every turn | Tool call/response pairs in normal conversation flow |
Extend the skills system so the agent can create, edit, and delete skills at runtime. Skill acquisition from successful task patterns.
**What other agents do:**
- **OpenClaw**: ClawHub registry -- bundled, managed, and workspace skills with install gating. Agent can auto-search and pull skills from a remote hub.
- **OpenCode**: SKILL.md format with URL download support via `Discovery.pull`. Compatible with Claude Code's skill format.
- **Pi**: Skills as npm-publishable packages. Prompt templates and themes alongside skills.
- **Cline**: Global + project-level skill directories. `use_skill` tool + `new_rule` tool for creating rules.
**Our approach:**
- Add `skill_create`, `skill_edit`, `skill_delete` actions to the existing `skill_view` tool (or a new `skill_manage` tool)
- New skills saved to `~/.hermes/skills/` (user-created, separate from bundled)
- SKILL.md format stays the same (YAML frontmatter + markdown body)
- Skill acquisition: after a successful multi-step task, offer to save the approach as a new skill (agent-initiated, user-confirmed)
- Later: skill chaining with dependency graphs, parameterized templates
- Later: remote skill registry (like ClawHub) for community-shared skills
Allow the agent to present structured choices to the user when it needs clarification. Rich terminal UI in CLI mode, graceful fallback on messaging platforms.
**What other agents do:**
- **Codex**: `request_user_input` tool for open-ended questions
- **Cline**: `ask_followup_question` tool with structured options
- **OpenCode**: `question` tool for asking the user
**Our approach:**
- `clarify` tool with parameters: `question` (string), `choices` (list of up to 6 strings), `allow_freetext` (bool) -- schema sketched below
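A possible schema, written in OpenAI function-calling style (the exact wrapper Hermes uses may differ):
```python
CLARIFY_TOOL = {
    "type": "function",
    "function": {
        "name": "clarify",
        "description": "Ask the user a clarifying question, optionally with structured choices.",
        "parameters": {
            "type": "object",
            "properties": {
                "question": {"type": "string", "description": "The question to present"},
                "choices": {"type": "array", "items": {"type": "string"}, "maxItems": 6},
                "allow_freetext": {"type": "boolean", "default": True},
            },
            "required": ["question"],
        },
    },
}
```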
**Priority:** High -- biggest gap vs. OpenClaw and Dash
Persistent memory that survives across sessions. The agent remembers what it learned, who the user is, and what worked before.
**What other agents do:**
- **OpenClaw**: 78+ file memory subsystem. SQLite + sqlite-vec for vector search. Embeddings via OpenAI/Voyage/Gemini. Hybrid search (vector + keyword). MMR for diversity. Temporal decay. File watcher for auto-indexing. Session-scoped memory with citations.
- **Dash**: 6-layer context system. LearningMachine with agentic mode -- errors get diagnosed, fixed, and saved as learnings that are never repeated. Business rules injection.
- **Codex**: 2-phase memory pipeline -- extract memories from rollouts with a dedicated model, then consolidate with global locking.
**Our approach (phased):**
### Phase 1: File-based memory (MVP)
- `~/.hermes/MEMORY.md` -- curated long-term memory, agent can read/append/edit
- `~/.hermes/USER.md` -- user profile the agent maintains (preferences, context, projects, communication style)
- `memory` tool with actions: `read`, `append`, `search` (simple text search within the file)
- Both files injected into system prompt (or summarized if too large)
- Agent prompted to update memory at session end or before context compression
### Phase 2: Learning store (inspired by Dash)
- `~/.hermes/learnings.jsonl` -- structured error patterns and discovered fixes (example entry sketched below)
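For illustration, one entry might look like this; the field names are assumptions, not a fixed schema:
```python
import json
import time

learning = {
    "error_pattern": "ModuleNotFoundError: No module named 'requests'",
    "diagnosis": "Script was run with the system Python instead of the project virtualenv",
    "fix": "Invoke scripts with .venv/bin/python (or activate the venv first)",
    "times_used": 1,
    "created_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
}
with open("learnings.jsonl", "a") as f:  # ~/.hermes/learnings.jsonl in practice
    f.write(json.dumps(learning) + "\n")
```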
**Status:** Not started (currently Browserbase cloud only)
**Priority:** Medium
Support local Chrome/Chromium via Chrome DevTools Protocol alongside existing Browserbase cloud backend.
**What other agents do:**
- **OpenClaw**: Full CDP-based Chrome control with snapshots, actions, uploads, profiles, file chooser, PDF save, console messages, tab management. Uses local Chrome for persistent login sessions.
- **Cline**: Headless browser with Computer Use (click, type, scroll, screenshot, console logs)
**Our approach:**
- Add a `local` backend option to `browser_tool.py` using Playwright or raw CDP
- Config toggle: `browser.backend: local | browserbase | auto`
- `auto` mode: try local first, fall back to Browserbase (selection logic sketched after this list)
- Local advantages: free, persistent login sessions, no API key needed
- Local disadvantages: no CAPTCHA solving, no stealth mode, requires Chrome installed
- Reuse the same 10-tool interface -- just swap the backend
- Later: Chrome profile management for persistent sessions across restarts
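A minimal sketch of the `auto` selection logic described above (function names are illustrative):
```python
import shutil

def local_chrome_available() -> bool:
    """Is a Chrome/Chromium binary on PATH?"""
    return any(shutil.which(b) for b in ("google-chrome", "chromium", "chromium-browser"))

def pick_browser_backend(config: dict) -> str:
    """Resolve browser.backend: 'local' | 'browserbase' | 'auto' (auto prefers local)."""
    backend = config.get("browser", {}).get("backend", "auto")
    if backend == "auto":
        return "local" if local_chrome_available() else "browserbase"
    return backend
```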
**Reference:** OpenClaw has Signal support via signal-cli.
---
## 9. Plugin/Extension System 🔌
**Status:** Partially implemented (event hooks exist in `gateway/hooks.py`)
**Priority:** Medium
Full Python plugin interface that goes beyond the current hook system.
**What other agents do:**
- **OpenClaw**: Plugin SDK with tool-send capabilities, lifecycle phase hooks (before-agent-start, after-tool-call, model-override), plugin registry with install/uninstall.
- **Pi**: Extensions are TypeScript modules that can register tools, commands, keyboard shortcuts, custom UI widgets, overlays, status lines, dialogs, compaction hooks, raw terminal input listeners. Extremely comprehensive.
- **OpenCode**: MCP client support (stdio, SSE, StreamableHTTP), OAuth auth for MCP servers. Also has Copilot/Codex plugins.
- **Codex**: Full MCP integration with skill dependencies.
- **Cline**: MCP integration + lifecycle hooks with cancellation support.
**Our approach (phased):**
### Phase 1: Enhanced hooks
- Expand the existing `gateway/hooks.py` to support more events: `before-tool-call`, `after-tool-call`, `before-response`, `context-compress`, `session-end`
- Each pack is a set of SKILL.md files installable via `hermes skills install <pack>`
---
## NEW: Items Discovered from Agent Codebases Analysis
---
## 14. MCP (Model Context Protocol) Support 🔗
**Status:** Not started
**Priority:** High -- this is becoming an industry standard
MCP is the protocol that Codex, Cline, and OpenCode all support for connecting to external tool servers. Supporting MCP would instantly give Hermes access to hundreds of community tool servers.
**What other agents do:**
- **Codex**: Full MCP integration with skill dependencies
**Our approach:**
- Implement an MCP client that can connect to external MCP servers
- Config: list of MCP servers in `~/.hermes/config.yaml` with transport type and connection details (example shape after this list)
- Each MCP server's tools auto-registered as a dynamic toolset
- Start with stdio transport (most common), then add SSE and HTTP
- Could also be part of the Plugin system (item 9, Phase 3) since MCP is essentially a plugin protocol
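For illustration, the parsed config might look something like this (server entries and field names are assumptions, shown as a Python dict rather than YAML):
```python
# Hypothetical parsed form of an mcp_servers section in ~/.hermes/config.yaml
MCP_SERVERS = {
    "github": {
        "transport": "stdio",
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-github"],
    },
    "internal-docs": {
        "transport": "sse",
        "url": "https://mcp.example.com/sse",
    },
}

def namespaced_tool_name(server: str, tool: str) -> str:
    """Prefix MCP tool names per server so they don't collide with built-in tools."""
    return f"mcp_{server}_{tool}"
```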
---
## 15. Permission / Safety System 🛡️
**Status:** Partially implemented (dangerous command approval in gateway)
**Priority:** Medium
Formalize the tool permission system beyond the current ad-hoc dangerous command checks.
**What other agents do:**
- **OpenCode**: Sophisticated permission system with wildcards, deny/allow/ask per tool pattern. Configurable `primary_tools` that bypass restrictions.
- **Codex**: Exec policy system for allowed/denied commands. Seatbelt (macOS) / Landlock (Linux) sandboxing. Network proxy for controlling outbound access.
- **OpenClaw**: Tool policy pipeline with allowlist/denylist per sandbox. Docker per-session sandboxing.
**Our approach:**
- Config-based permission rules: `permissions.yaml` with patterns like `terminal.*: ask`, `file.write: allow`, `browser.*: deny` (matching sketched after this list)
- Per-platform overrides (stricter on Discord groups than Telegram DMs)
- Tool call logging with audit trail (who ran what, when, approved by whom)
- Later: sandboxing options (Docker per-session, like OpenClaw)
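A minimal sketch of the pattern matching, assuming first-match-wins ordering (the rule format is illustrative):
```python
import fnmatch

# Illustrative rule table -- most specific rules first, first match wins
PERMISSION_RULES = [
    ("file.write", "allow"),
    ("terminal.*", "ask"),
    ("browser.*", "deny"),
]

def resolve_permission(tool_name: str, default: str = "ask") -> str:
    """Return 'allow' | 'ask' | 'deny' for a tool name like 'terminal.run'."""
    for pattern, decision in PERMISSION_RULES:
        if fnmatch.fnmatch(tool_name, pattern):
            return decision
    return default
```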
---
## 16. Self-Learning from Errors 📖
**Status:** Not started
**Priority:** Medium-High -- unique differentiator from Dash
Automatic learning loop: when tool calls fail, the agent diagnoses the error, fixes it, and saves the pattern so it never repeats the same mistake.
**Our approach:**
- Before executing operations that have failed before, auto-inject relevant learnings
- Agent prompted: "This is similar to a previous error. Here's what worked last time: ..."
- Consolidation: periodically merge similar learnings and increment `times_used`
- Could be triggered automatically on tool call errors or manually by the agent
---
## 17. Session Branching / Checkpoints 🌿
**Status:** Not started
**Priority:** Low-Medium
Save and restore conversation state at any point. Branch off to explore alternatives without losing progress.
**What other agents do:**
- **Pi**: Full branching -- create branches from any point in conversation. Branch summary entries. Parent session tracking for tree-like session structures.
- **Cline**: Checkpoints -- workspace snapshots at each step with Compare/Restore UI
- **OpenCode**: Git-backed workspace snapshots per step, with weekly gc
**Our approach:**
- `checkpoint` tool: saves current message history + working directory state as a named snapshot (format sketched after this list)
- `restore` tool: rolls back to a named checkpoint
- Stored in `~/.hermes/checkpoints/<session_id>/<name>.json`
- For file changes: git stash or tar snapshot of working directory
- Useful for: "let me try approach A, and if it doesn't work, roll back and try B"
- Later: full branching with tree visualization
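A sketch of the checkpoint save path; the JSON fields are assumptions, and `git stash create` is used because it snapshots uncommitted changes without touching the working tree:
```python
import json
import pathlib
import subprocess
import time

def save_checkpoint(session_id: str, name: str, messages: list, workdir: str) -> pathlib.Path:
    """Write message history plus a git snapshot reference to the checkpoint file."""
    path = pathlib.Path.home() / ".hermes" / "checkpoints" / session_id / f"{name}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    stash = subprocess.run(["git", "-C", workdir, "stash", "create"],
                           capture_output=True, text=True)
    path.write_text(json.dumps({
        "created_at": time.time(),
        "workdir": workdir,
        "git_stash_commit": stash.stdout.strip() or None,  # None if nothing to snapshot
        "messages": messages,
    }, indent=2))
    return path
```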
---
## 18. File Watcher / Project Awareness 👁️
**Status:** Not started
**Priority:** Low
Monitor the working directory for changes and notify the agent of relevant updates.
**What other agents do:**
- **Codex**: File watcher for live project change detection
- **OpenCode**: Git info integration, worktree support
**Our approach:**
- Watchdog-based file watcher on the working directory (see the sketch after this list)
- Inject a system message when relevant files change (e.g., "Note: `src/main.py` was modified externally")
- Filter noise: ignore `.git/`, `node_modules/`, `__pycache__/`, etc.
- Useful for pair-programming scenarios where the user is also editing files
- Could also track git status changes
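A minimal sketch using the `watchdog` library; the notify callback is a placeholder for however the note gets attached to the agent's next turn:
```python
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

IGNORED = (".git/", "node_modules/", "__pycache__/")

class ProjectWatcher(FileSystemEventHandler):
    def __init__(self, notify):
        self.notify = notify  # e.g. queues a note for the agent's next turn

    def on_modified(self, event):
        if event.is_directory or any(part in event.src_path for part in IGNORED):
            return
        self.notify(f"Note: `{event.src_path}` was modified externally")

observer = Observer()
observer.schedule(ProjectWatcher(print), path=".", recursive=True)
observer.start()
```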
---
## 19. Heartbeat System 💓
**Status:** Not started
**Priority:** Low-Medium
Periodic agent wake-up for checking reminders, monitoring tasks, and running scheduled introspection.
**What other agents do:**
- **OpenClaw**: Heartbeat runner with configurable intervals per-agent (e.g., every 30m, 10m, 15m). Ghost reminder system. Transcript pruning on heartbeat. Dynamic config update without restart. Wake-on-demand.
**Our approach:**
- `HEARTBEAT.md` file in `~/.hermes/` -- agent reads this on each heartbeat
- Configurable interval (default: 30 minutes when gateway is running)
- Runs inside an existing session (not a new one) to maintain context
- Can be used for: checking on background processes, sending reminders, running periodic tasks
- `HEARTBEAT_OK` response suppresses output when nothing needs attention
- Integrates with the existing cronjob system for scheduling
**Priority:** High -- potentially the single biggest efficiency win for agent loops
Instead of the LLM making one tool call, reading the result, deciding what to do next, making another tool call (N round trips), the LLM writes a Python script that calls multiple tools, processes results, branches on conditions, and returns a final summary -- all in one turn.
**What Anthropic just shipped (Feb 2026):**
Anthropic's new `web_search_20260209` and `web_fetch_20260209` tools use "dynamic filtering" -- Claude writes and executes Python code that calls the search/fetch tools, filters the HTML, cross-references results, retries with different queries, and returns only what's relevant. Results: **+11% accuracy, -24% input tokens** on average across BrowseComp and DeepsearchQA. Quora/Poe found it "achieved the highest accuracy on our internal evals" and described it as behaving "like an actual researcher, writing Python to parse, filter, and cross-reference results rather than reasoning over raw HTML in context."
The standard agent loop looks like this:
```
LLM call -> tool call -> result -> LLM call -> tool call -> result -> LLM call -> ...
```
Every round trip costs: a full LLM inference (prompt + generation), network latency, and the growing context window carrying all previous tool results. For a 10-step task, that's 10+ LLM calls with increasingly large contexts.
With programmatic tool calling, the loop becomes:
```
LLM call -> writes script -> script calls tools, processes results,
branches on conditions, retries on failure -> returns summary -> LLM call
```
One LLM call replaces many. The intermediate tool results never enter the context window -- they're processed in the code sandbox and only the final summary comes back. The LLM pre-plans its decision tree in code rather than making decisions one-at-a-time in the conversation.
**Which of our tools benefit most:**
| Tool | Current pattern (N round trips) | With programmatic calling (1 round trip) |
|------|---------------------------------|------------------------------------------|
| **file ops (read, search, patch)** | Search for pattern -> read matching files -> decide which to patch -> patch each | Script: search, read all matches, filter by criteria, apply patches, verify, return diff summary |
| **session_search** | Search -> read results -> search again with refined query -> ... | Script: search with multiple queries, deduplicate, rank by relevance, return top N |
| **terminal** | Run command -> check output -> run follow-up -> check again -> ... | Script: run command, parse output, branch on exit code, run follow-ups, return final state |
### The hard problem: where does the code run?
Our tools don't all live in one place. The terminal backend can be local, Docker, Singularity, SSH, or Modal. Browser runs on Browserbase cloud. Web search/extract are Firecrawl API calls. File ops go through whatever terminal backend is active. Vision, image gen, TTS are all remote APIs.
If we just "run Python in the terminal," we hit a wall:
- **Docker/Modal/SSH backends**: The remote environment doesn't have our Python tool code, our API keys, our `handle_function_call` dispatcher, or any of the hermes-agent packages. It's a bare sandbox.
- **Local backend**: Could import our code directly, but that couples execution to local-only and creates a security mess (LLM-generated code running in the agent process).
- **API-based tools** (web, browser, vision): These need API keys and specific client libraries that aren't in the terminal backend.
**The code sandbox is NOT the terminal backend.** This is the key insight. The sandbox runs on the **agent host machine** (where `run_agent.py` lives), separate from both the LLM and the terminal backend. It calls tools through the same `handle_function_call` dispatcher that the normal agent loop uses. No inbound network connections needed -- everything is local IPC on the agent host.
### Architecture: Local subprocess with Unix domain socket RPC
The sandbox is a **child process** on the same machine. RPC goes over a **Unix domain socket** (not stdin/stdout/stderr -- those stay free for their natural purposes). The parent dispatches each tool call through the existing `handle_function_call` -- the exact same codepath the normal agent loop uses. Works with every terminal backend because the sandbox doesn't touch the terminal backend directly.
- **stdout** is the script's natural output channel (`print()`). Multiplexing RPC and output on the same stream requires fragile marker parsing. Keep it clean: stdout = final output for the LLM.
- **stderr** is for Python errors, tracebacks, warnings, and `logging` output. Multiplexing RPC here means any stray `logging.warning()` or exception traceback corrupts the RPC stream.
- **Extra file descriptors (fd 3/4)** work on Linux/macOS but are clunky with subprocess.Popen.
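A minimal sketch of the parent side, assuming newline-delimited JSON over the socket and a synchronous `handle_function_call(tool_name, arguments)` signature (the real dispatcher signature may differ):
```python
import json
import os
import socket

def serve_sandbox_rpc(sock_path, handle_function_call, allowed_tools, max_calls=50):
    """Accept one sandbox connection and dispatch its tool calls to the existing dispatcher."""
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(sock_path)
    server.listen(1)
    conn, _ = server.accept()
    calls = 0
    with conn, conn.makefile("rw") as stream:
        for line in stream:
            req = json.loads(line)  # {"tool": "...", "arguments": {...}}
            calls += 1
            if req["tool"] not in allowed_tools:
                resp = {"error": f"Tool '{req['tool']}' is not available in execute_code."}
            elif calls > max_calls:
                resp = {"error": "tool call limit exceeded (max 50 per execution)"}
            else:
                resp = handle_function_call(req["tool"], req["arguments"])
            stream.write(json.dumps(resp) + "\n")
            stream.flush()
    server.close()
    os.unlink(sock_path)
```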
The sandbox script imports a generated `hermes_tools` module -- one RPC stub function per allowed tool:
```python
# hermes_tools.py -- RPC stubs for the tools enabled in this session
# ... generated for each enabled tool in the session
```
This module is generated dynamically because the available tools vary per session and per toolset configuration. The generator reads the session's enabled tools and emits a function for each one that's on the sandbox allow-list.
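What the generator might emit, as a sketch; the environment variable name and response shapes are assumptions:
```python
# hermes_tools.py (generated)
import json
import os
import socket

_sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
_sock.connect(os.environ["HERMES_SANDBOX_SOCKET"])
_stream = _sock.makefile("rw")

def _rpc(tool: str, arguments: dict) -> dict:
    """Send one tool call to the parent and block until its JSON response arrives."""
    _stream.write(json.dumps({"tool": tool, "arguments": arguments}) + "\n")
    _stream.flush()
    return json.loads(_stream.readline())

def web_search(query: str, max_results: int = 5) -> dict:
    return _rpc("web_search", {"query": query, "max_results": max_results})

def read_file(path: str) -> dict:
    return _rpc("read_file", {"path": path})
```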
### What Python libraries are available in the sandbox
The sandbox runs the same Python interpreter as the agent. Available imports:
- `hermes_tools` -- the generated RPC stubs for the allowed tools
- The Python standard library: `json`, `re`, `math`, `csv`, `datetime`, `collections`, etc.
**Not restricted but discouraged via tool description:**
`subprocess`, `socket`, `requests`, `urllib.request`, `os.system` -- the tool description says "use hermes_tools for all I/O." We don't hard-block these because the user already trusts the agent with `terminal()`, which is unrestricted shell access. Soft-guiding the LLM via the description is sufficient. If it occasionally uses `import os; os.listdir()` instead of `hermes_tools.search()`, no real harm done.
**The tool description tells the LLM:**
```
Available imports:
- from hermes_tools import web_search, web_extract, read_file, terminal, ...
- Python standard library: json, re, math, csv, datetime, collections, etc.
Use hermes_tools for all I/O (web, files, commands, browser).
Use stdlib for processing between tool calls (parsing, filtering, formatting).
Print your final result to stdout.
```
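For illustration, a hypothetical script the LLM might submit -- the tool names follow the `hermes_tools` import list above, but the result field names are assumptions:
```python
from hermes_tools import web_extract, web_search

# Research task collapsed into one turn: search, fetch, filter, summarize
results = web_search("python asyncio structured concurrency", max_results=8)
relevant = []
for hit in results.get("results", [])[:5]:
    page = web_extract(hit["url"])
    text = page.get("content", "")
    if "TaskGroup" in text:  # filtering happens here, not in the LLM's context
        relevant.append({"url": hit["url"], "snippet": text[:400]})

# Only this summary enters the conversation
print(f"{len(relevant)} of 5 pages discuss TaskGroup:")
for item in relevant:
    print(f"- {item['url']}: {item['snippet'][:120]}...")
```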
### Platform support
**Linux / macOS**: Fully supported. Unix domain sockets work natively.
**Windows**: Not supported. `AF_UNIX` nominally exists on Windows 10 17063+ but is unreliable in practice, and Hermes-Agent's primary target is Linux/macOS (bash-based install, systemd gateway, etc.). The `execute_code` tool is **disabled at startup on Windows**:
```python
import sys
SANDBOX_AVAILABLE = sys.platform != "win32"
def check_sandbox_requirements():
return SANDBOX_AVAILABLE
```
If the LLM tries to use `execute_code` on Windows, it gets: `{"error": "execute_code is not available on Windows. Use normal tool calls instead."}`. The tool is excluded from the tool schema entirely on Windows so the LLM never sees it.
### Which tools to expose in the sandbox (full audit)
The purpose of the sandbox is **reading, filtering, and processing data across multiple tool calls in code**, collapsing what would be many LLM round trips into one. Every tool needs to justify its inclusion against that purpose. The parent only generates RPC stubs for tools that pass this filter AND are enabled in the session.
| Tool | In sandbox? | Rationale |
|------|-------------|-----------|
| `web_extract` | **YES** | Core use case. Fetch N pages, parse, keep only relevant sections. |
| `read_file` | **YES** | Core use case. Bulk read + filter. Note: reads files on the terminal backend (Docker/SSH/Modal), not the agent host -- this is correct and intentional. |
| `search` | **YES** | Core use case. Find files/content, then process matching results. |
| `terminal` | **YES (restricted)** | Command chains with branching on exit codes. Foreground only -- `background`, `check_interval`, and `pty` parameters are stripped/blocked. |
| `write_file` | **YES (with caution)** | Scripts need to write computed outputs (generated configs, processed data). Partial-write risk if script fails midway, but same risk as normal tool calls. |
| `patch` | **YES (with caution)** | Bulk search-and-replace across files. Powerful but risky if the script's patch logic has bugs. The upside: script can read → patch → verify in a loop, which is actually safer than blind patching. |
| `browser_navigate` | **YES** | Browser automation loops are one of the biggest wins. |
| `browser_snapshot` | **YES** | Needed for reading page state in browser loops. Parent passes `user_task` from the session context. |
| `browser_close` | **NO** | Ends the entire browser session. If the script errors out after closing, the agent has no browser to recover with. Too destructive for unsupervised code. |
| `browser_get_images` | **NO** | Niche. Usually paired with vision analysis, which is excluded. |
| `browser_vision` | **NO** | This calls the vision LLM API -- expensive per call and requires LLM reasoning on the result. Defeats the purpose of avoiding LLM round trips. |
| `vision_analyze` | **NO** | Expensive API call per invocation. The LLM needs to SEE and reason about images directly, not filter them in code. One-shot nature. |
| `mixture_of_agents` | **NO** | This IS multiple LLM calls. Defeats the entire purpose. |
| `image_generate` | **NO** | Media generation. One-shot, no filtering logic benefits. |
| `text_to_speech` | **NO** | Media generation. One-shot. |
| `process` | **NO** | Background process management from an ephemeral script is incoherent. The script exits, but the process lives on -- who monitors it? |
| `skills_list` | **NO** | Skills are knowledge for the LLM to read and reason about. Loading a skill inside code that can't reason about it is pointless. |
| `skill_view` | **NO** | Same as above. |
| `schedule_cronjob` | **NO** | Side effect. Should not happen silently inside a script. |
| `list_cronjobs` | **NO** | Read-only but not useful in a code-mediation context. |
| `remove_cronjob` | **NO** | Side effect. |
| `send_message` | **NO** | Cross-platform side effect. Must not happen unsupervised. |
The allow-list is a constant in `code_execution_tool.py`, not derived from the session's enabled toolsets. Even if the session has `vision_analyze` enabled, it won't appear in the sandbox. The intersection of the allow-list and the session's enabled tools determines what's generated.
### Error handling
| Scenario | What happens |
|----------|-------------|
| **Syntax error in script** | Child exits immediately, traceback on stderr. Parent returns stderr as the tool response so the LLM sees the error and can retry. |
| **Runtime exception** | Same -- traceback on stderr, parent returns it. |
| **Tool call fails** | RPC returns the error JSON (same as normal tool errors). Script decides: retry, skip, or raise. |
| **Slow tool call (e.g., a long `make` run via `terminal`)** | Overall timeout covers total execution. One slow call is fine if total is under limit. |
| **Tool response too large in memory** | `web_extract` can return 500KB per page. If the script fetches 10 pages, that's 5MB in the child's memory. Not a problem on modern machines, and the whole point is the script FILTERS this down before printing. |
| **User interrupt (new message on gateway)** | Parent catches the interrupt event (same as existing `_interrupt_event` in terminal_tool), sends SIGTERM to child, returns `{"status": "interrupted"}`. |
| **Script tries to call excluded tool** | RPC returns `{"error": "Tool 'vision_analyze' is not available in execute_code. Use it as a normal tool call instead."}` |
| **Script calls terminal with background=True** | RPC strips the parameter and runs foreground, or returns an error. Background processes from ephemeral scripts are not supported. |
### Resource limits
- **Tool call limit**: max 50 RPC tool calls per execution. After 50, further calls return an error. Prevents infinite tool-call loops.
- **Output size**: stdout capped at 50KB. Truncated with `[output truncated at 50KB]`. Prevents the script from flooding the LLM's context with a huge result (which would defeat the purpose).
- **Stderr capture**: capped at 10KB for error reporting.
- **No recursive sandboxing**: `execute_code` is not in the sandbox's tool list.
- **Interrupt support**: respects the same `_interrupt_event` mechanism as terminal_tool. If the user sends a new message while the sandbox is running, the child is killed and the agent can process the interrupt.
### Tool call logging and observability
Each tool call made inside the sandbox is **logged to the session transcript** for debugging, but **NOT added to the LLM conversation history** (that's the whole point -- keeping intermediate results out of context).
These appear in the JSONL transcript and in verbose logging, but the LLM only sees the final `execute_code` response containing the script's stdout.
For the gateway (messaging platforms): show one typing indicator + notification for the entire `execute_code` duration. Internal tool calls are silent. Later enhancement: progress updates like "execute_code (3/7 tool calls)".
### Stateful tools work correctly
Tools like `terminal` (working directory, env vars) and `browser_*` (page state, cookies) maintain state per `task_id`. The parent passes the session's `task_id` to every RPC-dispatched `handle_function_call`. So if the script runs:
```python
terminal("cd /tmp")
terminal("pwd") # returns /tmp -- state persists between calls
```
This works because both calls go through the same terminal environment, same as normal tool calls.
### Each `execute_code` invocation is stateless
The sandbox subprocess is fresh each time. No Python state carries over between `execute_code` calls. If the agent needs state across multiple `execute_code` calls, it should:
- Output the state as part of the result, then pass it back in the next script as a variable
- Or use the tools themselves for persistence (write to a file, then read it in the next script)
The underlying *tools* are stateful (same terminal session, same browser session), but the *Python sandbox* is not.
### When should the LLM use `execute_code` vs normal tool calls?
This goes in the tool description:
**Use `execute_code` when:**
- You need 3+ tool calls with processing logic between them
- You need to filter/reduce large tool outputs before they enter your context
- You need conditional branching (if X then do Y, else do Z)
- You need to loop (fetch N pages, process N files, retry on failure)
**Use normal tool calls when:**
- Single tool call with no processing needed
- You need to see the full result and apply complex reasoning (the LLM is better at reasoning than the code it writes)
- The task requires human interaction (clarify tool)
### Open questions for implementation
1. **Should the parent block its main thread while the sandbox runs?** Currently `handle_function_call` is synchronous, so yes -- same as any other tool call. For long sandbox runs (up to 120s), the gateway's typing indicator stays active. The agent can't process new messages during this time, but it can't during any long tool call either. Interrupt support (above) handles the "user sends a new message" case.
2. **Should `browser_snapshot` pass `user_task`?** `handle_browser_function_call` accepts `user_task` for task-aware content extraction. The parent should pass the user's original query from the session context when dispatching sandbox browser calls.
3. **Terminal parameter restrictions**: The sandbox version of `terminal` should strip/ignore: `background=True` (no background processes from ephemeral scripts), `check_interval` (gateway-only feature for background watchers), `pty=True` (interactive PTY makes no sense in a script). Only `command`, `timeout`, and `workdir` are passed through.
### Future enhancements (not MVP)
- **Concurrent tool calls via threading**: The script could use `ThreadPoolExecutor` to fetch 5 URLs in parallel. Requires the UDS client to be thread-safe (add a `threading.Lock` around socket send/receive). Significant speedup for I/O-bound workflows like multi-page web extraction.
- **Streaming progress to gateway**: Instead of one notification for the entire run, send periodic progress updates ("execute_code: 3/7 tool calls, 12s elapsed").
- **Persistent sandbox sessions**: Keep the subprocess alive between `execute_code` calls so Python variables carry over. Adds complexity but enables iterative multi-step workflows where the agent refines its script.
- **RL/batch integration**: Atropos RL environments use ToolContext instead of handle_function_call. Would need an adapter so the RPC bridge dispatches through the right mechanism per execution context.
- **Windows support**: If there's demand, fall back to TCP localhost (127.0.0.1:random_port) instead of UDS. Same protocol, different transport. Security concern: localhost port is accessible to other processes on the machine. Could mitigate with a random auth token in the RPC handshake.
### Relationship to other items
- **Subagent Architecture (#1)**: A code sandbox that calls tools IS a lightweight subagent without its own LLM inference. This handles many of the "mechanical multi-step" cases (search+filter, bulk file ops, browser loops) at near-zero LLM cost. Full subagents are still needed for tasks requiring LLM reasoning at each step.
- **Browser automation (#7)**: Biggest win. Browser workflows are 10+ round trips today. A script that navigates, clicks, extracts, paginates in a loop collapses that to 1 LLM turn.