* feat(memory): add pluggable memory provider interface with profile isolation

Introduces a pluggable MemoryProvider ABC so external memory backends can
integrate with Hermes without modifying core files. Each backend becomes a
plugin implementing a standard interface, orchestrated by MemoryManager.

Key architecture:
- agent/memory_provider.py — ABC with core + optional lifecycle hooks
- agent/memory_manager.py — single integration point in the agent loop
- agent/builtin_memory_provider.py — wraps existing MEMORY.md/USER.md

Profile isolation fixes applied to all 6 shipped plugins:
- Cognitive Memory: use get_hermes_home() instead of raw env var
- Hindsight Memory: check $HERMES_HOME/hindsight/config.json first, fall back to legacy ~/.hindsight/ for backward compat
- Hermes Memory Store: replace hardcoded ~/.hermes paths with get_hermes_home() for config loading and DB path defaults
- Mem0 Memory: use get_hermes_home() instead of raw env var
- RetainDB Memory: auto-derive profile-scoped project name from hermes_home path (hermes-<profile>), explicit env var overrides
- OpenViking Memory: read-only, no local state, isolation via .env

MemoryManager.initialize_all() now injects hermes_home into kwargs so every
provider can resolve profile-scoped storage without importing
get_hermes_home() themselves.

Plugin system: adds register_memory_provider() to PluginContext and
get_plugin_memory_providers() accessor.

Based on PR #3825. 46 tests (37 unit + 5 E2E + 4 plugin registration).

* refactor(memory): drop cognitive plugin, rewrite OpenViking as full provider

Remove cognitive-memory plugin (#727) — core mechanics are broken: decay
runs 24x too fast (hourly not daily), prefetch uses row ID as timestamp,
search limited by importance not similarity.
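A minimal sketch of how such a provider ABC with required core methods and no-op optional hooks might look; the class and method names here are illustrative, not the actual Hermes interface:

```python
from abc import ABC, abstractmethod


class MemoryProvider(ABC):
    """Illustrative sketch of a pluggable memory-provider interface."""

    name = "base"

    @abstractmethod
    def initialize(self, hermes_home: str, **kwargs) -> None:
        """Resolve profile-scoped storage under hermes_home."""

    @abstractmethod
    def is_available(self) -> bool:
        """Cheap availability check -- no network calls."""

    # Optional lifecycle hooks default to no-ops so providers
    # only implement what they actually need.
    def prefetch(self, query: str) -> str:
        return ""

    def sync_turn(self, user_msg: str, assistant_msg: str) -> None:
        pass

    def on_session_end(self) -> None:
        pass


class EchoProvider(MemoryProvider):
    """Trivial concrete provider used only to demonstrate the contract."""

    name = "echo"

    def initialize(self, hermes_home: str, **kwargs) -> None:
        self.home = hermes_home  # profile-scoped root injected by the manager

    def is_available(self) -> bool:
        return True


p = EchoProvider()
p.initialize("/tmp/hermes-default")
print(p.is_available(), p.home)
```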
Rewrite openviking-memory plugin from a read-only search wrapper into a full
bidirectional memory provider using the complete OpenViking session
lifecycle API:
- sync_turn: records user/assistant messages to OpenViking session (threaded, non-blocking)
- on_session_end: commits session to trigger automatic memory extraction into 6 categories (profile, preferences, entities, events, cases, patterns)
- prefetch: background semantic search via find() endpoint
- on_memory_write: mirrors built-in memory writes to the session
- is_available: checks env var only, no network calls (ABC compliance)

Tools expanded from 3 to 5:
- viking_search: semantic search with mode/scope/limit
- viking_read: tiered content (abstract ~100tok / overview ~2k / full)
- viking_browse: filesystem-style navigation (list/tree/stat)
- viking_remember: explicit memory storage via session
- viking_add_resource: ingest URLs/docs into knowledge base

Uses direct HTTP via httpx (no openviking SDK dependency needed). Response
truncation on viking_read to prevent context flooding.

* fix(memory): harden Mem0 plugin — thread safety, non-blocking sync, circuit breaker

- Remove redundant mem0_context tool (identical to mem0_search with rerank=true, top_k=5 — wastes a tool slot and confuses the model)
- Thread sync_turn so it's non-blocking — Mem0's server-side LLM extraction can take 5-10s, was stalling the agent after every turn
- Add threading.Lock around _get_client() for thread-safe lazy init (prefetch and sync threads could race on first client creation)
- Add circuit breaker: after 5 consecutive API failures, pause calls for 120s instead of hammering a down server every turn. Auto-resets after cooldown. Logs a warning when tripped.
- Track success/failure in prefetch, sync_turn, and all tool calls
- Wait for previous sync to finish before starting a new one (prevents unbounded thread accumulation on rapid turns)
- Clean up shutdown to join both prefetch and sync threads

* fix(memory): enforce single external memory provider limit

MemoryManager now rejects a second non-builtin provider with a warning.
Built-in memory (MEMORY.md/USER.md) is always accepted. Only ONE external
plugin provider is allowed at a time.

This prevents tool schema bloat (some providers add 3-5 tools each) and
conflicting memory backends. The warning message directs users to configure
memory.provider in config.yaml to select which provider to activate.

Updated all 47 tests to use the builtin + one external pattern instead of
multiple externals. Added test_second_external_rejected to verify the
enforcement.

* feat(memory): add ByteRover memory provider plugin

Implements the ByteRover integration (from PR #3499 by hieuntg81) as a
MemoryProvider plugin instead of direct run_agent.py modifications.
ByteRover provides persistent memory via the brv CLI — a hierarchical
knowledge tree with tiered retrieval (fuzzy text then LLM-driven search).
Local-first with optional cloud sync.

Plugin capabilities:
- prefetch: background brv query for relevant context
- sync_turn: curate conversation turns (threaded, non-blocking)
- on_memory_write: mirror built-in memory writes to brv
- on_pre_compress: extract insights before context compression

Tools (3):
- brv_query: search the knowledge tree
- brv_curate: store facts/decisions/patterns
- brv_status: check CLI version and context tree state

Profile isolation: working directory at $HERMES_HOME/byterover/ (scoped per
profile). Binary resolution cached with thread-safe double-checked locking.
All write operations threaded to avoid blocking the agent (curate can take
120s with LLM processing).
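The circuit-breaker behavior described for the Mem0 plugin above (five consecutive failures trip it, 120s cooldown, auto-reset) can be sketched roughly like this; the class and method names are hypothetical, not the plugin's actual API:

```python
import threading
import time


class CircuitBreaker:
    """After `threshold` consecutive failures, skip calls for `cooldown` seconds."""

    def __init__(self, threshold: int = 5, cooldown: float = 120.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self._failures = 0
        self._tripped_at = 0.0
        self._lock = threading.Lock()

    def allow(self) -> bool:
        """Return True if a call should be attempted right now."""
        with self._lock:
            if self._failures < self.threshold:
                return True
            # Tripped: allow calls again only once the cooldown has elapsed
            if time.monotonic() - self._tripped_at >= self.cooldown:
                self._failures = 0  # auto-reset after cooldown
                return True
            return False

    def record(self, success: bool) -> None:
        """Track the outcome of a call; trip the breaker on the Nth failure."""
        with self._lock:
            if success:
                self._failures = 0
            else:
                self._failures += 1
                if self._failures == self.threshold:
                    self._tripped_at = time.monotonic()


cb = CircuitBreaker(threshold=5, cooldown=120.0)
for _ in range(5):
    cb.record(success=False)
print(cb.allow())  # breaker tripped: calls are paused
```

A real plugin would call `allow()` before every API request and `record()` after, logging a warning the moment the breaker trips.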
* fix(memory): thread remaining sync_turns, fix holographic, add config key

Plugin fixes:
- Hindsight: thread sync_turn (was blocking up to 30s via _run_in_thread)
- RetainDB: thread sync_turn (was blocking on HTTP POST)
- Both: shutdown now joins sync threads alongside prefetch threads

Holographic retrieval fixes:
- reason(): removed dead intersection_key computation (bundled but never used in scoring). Now reuses pre-computed entity_residuals directly, moved role_content encoding outside the inner loop.
- contradict(): added _MAX_CONTRADICT_FACTS=500 scaling guard. Above 500 facts, only checks the most recently updated ones to avoid O(n^2) explosion (~125K comparisons at 500 is acceptable).

Config:
- Added memory.provider key to DEFAULT_CONFIG ("" = builtin only). No version bump needed (deep_merge handles new keys automatically).

* feat(memory): extract Honcho as a MemoryProvider plugin

Creates plugins/honcho-memory/ as a thin adapter over the existing
honcho_integration/ package. All 4 Honcho tools (profile, search, context,
conclude) move from the normal tool registry to the MemoryProvider
interface.

The plugin delegates all work to HonchoSessionManager — no Honcho logic is
reimplemented. It uses the existing config chain:
$HERMES_HOME/honcho.json -> ~/.honcho/config.json -> env vars.

Lifecycle hooks:
- initialize: creates HonchoSessionManager via existing client factory
- prefetch: background dialectic query
- sync_turn: records messages + flushes to API (threaded)
- on_memory_write: mirrors user profile writes as conclusions
- on_session_end: flushes all pending messages

This is a prerequisite for the MemoryManager wiring in run_agent.py. Once
wired, Honcho goes through the same provider interface as all other memory
plugins, and the scattered Honcho code in run_agent.py can be consolidated
into the single MemoryManager integration point.
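The non-blocking sync pattern used by several of these plugins (run each turn's sync in a background thread, wait for the previous sync before starting the next, join on shutdown) might look like this sketch; the class name and callback shape are assumptions, not the shipped code:

```python
import threading


class ThreadedSync:
    """Run each turn's sync in a background thread so the agent never blocks."""

    def __init__(self, sync_fn):
        self._sync_fn = sync_fn
        self._thread = None

    def sync_turn(self, user_msg: str, assistant_msg: str) -> None:
        # Wait for the previous sync before starting a new one
        # (prevents unbounded thread accumulation on rapid turns).
        if self._thread is not None:
            self._thread.join()
        self._thread = threading.Thread(
            target=self._sync_fn, args=(user_msg, assistant_msg), daemon=True
        )
        self._thread.start()

    def shutdown(self) -> None:
        # Join the in-flight sync so no writes are lost on exit.
        if self._thread is not None:
            self._thread.join()
            self._thread = None


synced = []
s = ThreadedSync(lambda u, a: synced.append((u, a)))
s.sync_turn("hi", "hello")
s.shutdown()
print(synced)
```

Because `sync_turn` only joins the *previous* thread, the agent returns to the user immediately while the backend write proceeds; `shutdown()` is what the plugins' shutdown hooks call to flush the last turn.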
* feat(memory): wire MemoryManager into run_agent.py

Adds 8 integration points for the external memory provider plugin, all
purely additive (zero existing code modified):

1. Init (~L1130): Create MemoryManager, find matching plugin provider from memory.provider config, initialize with session context
2. Tool injection (~L1160): Append provider tool schemas to self.tools and self.valid_tool_names after memory_manager init
3. System prompt (~L2705): Add external provider's system_prompt_block alongside existing MEMORY.md/USER.md blocks
4. Tool routing (~L5362): Route provider tool calls through memory_manager.handle_tool_call() before the catchall handler
5. Memory write bridge (~L5353): Notify external provider via on_memory_write() when the built-in memory tool writes
6. Pre-compress (~L5233): Call on_pre_compress() before context compression discards messages
7. Prefetch (~L6421): Inject provider prefetch results into the current-turn user message (same pattern as Honcho turn context)
8. Turn sync + session end (~L8161, ~L8172): sync_all() after each completed turn, queue_prefetch_all() for next turn, on_session_end() + shutdown_all() at conversation end

All hooks are wrapped in try/except — a failing provider never breaks the
agent. The existing memory system, Honcho integration, and all other code
paths are completely untouched.

Full suite: 7222 passed, 4 pre-existing failures.

* refactor(memory): remove legacy Honcho integration from core

Extracts all Honcho-specific code from run_agent.py, model_tools.py,
toolsets.py, and gateway/run.py. Honcho is now exclusively available as a
memory provider plugin (plugins/honcho-memory/).

Removed from run_agent.py (-457 lines):
- Honcho init block (session manager creation, activation, config)
- 8 Honcho methods: _honcho_should_activate, _strip_honcho_tools, _activate_honcho, _register_honcho_exit_hook, _queue_honcho_prefetch, _honcho_prefetch, _honcho_save_user_observation, _honcho_sync
- _inject_honcho_turn_context module-level function
- Honcho system prompt block (tool descriptions, CLI commands)
- Honcho context injection in api_messages building
- Honcho params from __init__ (honcho_session_key, honcho_manager, honcho_config)
- HONCHO_TOOL_NAMES constant
- All honcho-specific tool dispatch forwarding

Removed from other files:
- model_tools.py: honcho_tools import, honcho params from handle_function_call
- toolsets.py: honcho toolset definition, honcho tools from core tools list
- gateway/run.py: honcho params from AIAgent constructor calls

Removed tests (-339 lines):
- 9 Honcho-specific test methods from test_run_agent.py
- TestHonchoAtexitFlush class from test_exit_cleanup_interrupt.py

Restored two regex constants (_SURROGATE_RE, _BUDGET_WARNING_RE) that were
accidentally removed during the honcho function extraction.

The honcho_integration/ package is kept intact — the plugin delegates to
it. tools/honcho_tools.py registry entries are now dead code (import
commented out in model_tools.py) but the file is preserved for reference.

Full suite: 7207 passed, 4 pre-existing failures. Zero regressions.
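The "a failing provider never breaks the agent" contract from the wiring commit could be implemented with a small guard helper along these lines; the helper name and provider shape are hypothetical:

```python
import logging

logger = logging.getLogger(__name__)


def safe_hook(provider, hook_name: str, *args, **kwargs):
    """Invoke an optional provider hook; a failing provider never breaks the agent."""
    hook = getattr(provider, hook_name, None)
    if hook is None:
        return None  # optional hook not implemented by this provider
    try:
        return hook(*args, **kwargs)
    except Exception as e:
        # Swallow and log -- the agent loop must keep running
        logger.warning("memory provider %s.%s failed: %s",
                       getattr(provider, "name", "?"), hook_name, e)
        return None


class Flaky:
    """Provider whose backend is down: every call raises."""
    name = "flaky"

    def prefetch(self, query):
        raise RuntimeError("backend down")


print(safe_hook(Flaky(), "prefetch", "q"))    # swallowed, agent keeps running
print(safe_hook(Flaky(), "on_session_end"))   # hook absent, no-op
```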
* refactor(memory): restructure plugins, add CLI, clean gateway, migration notice

Plugin restructure:
- Move all memory plugins from plugins/<name>-memory/ to plugins/memory/<name>/ (byterover, hindsight, holographic, honcho, mem0, openviking, retaindb)
- New plugins/memory/__init__.py discovery module that scans the directory directly, loading providers by name without the general plugin system
- run_agent.py uses load_memory_provider() instead of get_plugin_memory_providers()

CLI wiring:
- hermes memory setup — interactive curses picker + config wizard
- hermes memory status — show active provider, config, availability
- hermes memory off — disable external provider (built-in only)
- hermes honcho — now shows migration notice pointing to hermes memory setup

Gateway cleanup:
- Remove _get_or_create_gateway_honcho (already removed in prev commit)
- Remove _shutdown_gateway_honcho and _shutdown_all_gateway_honcho methods
- Remove all calls to shutdown methods (4 call sites)
- Remove _honcho_managers/_honcho_configs dict references

Dead code removal:
- Delete tools/honcho_tools.py (279 lines, import was already commented out)
- Delete tests/gateway/test_honcho_lifecycle.py (131 lines, tested removed methods)
- Remove if False placeholder from run_agent.py

Migration:
- Honcho migration notice on startup: detects existing honcho.json or ~/.honcho/config.json, prints guidance to run hermes memory setup. Only fires when memory.provider is not set and not in quiet mode.

Full suite: 7203 passed, 4 pre-existing failures. Zero regressions.
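A directory-scan discovery module of the kind described for plugins/memory/ might be sketched as follows; the function name and on-disk layout are assumptions, not the shipped code:

```python
from pathlib import Path


def discover_memory_plugins(root: Path) -> list[str]:
    """List provider names by scanning plugins/memory/<name>/ directories."""
    if not root.is_dir():
        return []
    return sorted(
        p.name for p in root.iterdir()
        if p.is_dir() and (p / "__init__.py").exists()
    )


# Build a throwaway plugin tree to demonstrate the scan
import tempfile

base = Path(tempfile.mkdtemp()) / "plugins" / "memory"
for name in ("honcho", "mem0"):
    d = base / name
    d.mkdir(parents=True)
    (d / "__init__.py").touch()
(base / "not-a-plugin").mkdir()  # no __init__.py -> skipped

print(discover_memory_plugins(base))
```

Loading by name then reduces to importing `plugins.memory.<name>` and asking it for its provider class, so no general plugin registry is needed.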
* feat(memory): standardize plugin config + add per-plugin documentation

Config architecture:
- Add save_config(values, hermes_home) to MemoryProvider ABC
- Honcho: writes to $HERMES_HOME/honcho.json (SDK native)
- Mem0: writes to $HERMES_HOME/mem0.json
- Hindsight: writes to $HERMES_HOME/hindsight/config.json
- Holographic: writes to config.yaml under plugins.hermes-memory-store
- OpenViking/RetainDB/ByteRover: env-var only (default no-op)

Setup wizard (hermes memory setup):
- Now calls provider.save_config() for non-secret config
- Secrets still go to .env via env vars
- Only the memory.provider activation key goes to config.yaml

Documentation:
- README.md for each of the 7 providers in plugins/memory/<name>/
- Requirements, setup (wizard + manual), config reference, tools table
- Consistent format across all providers

The contract for new memory plugins:
- get_config_schema() declares all fields (REQUIRED)
- save_config() writes native config (REQUIRED if not env-var-only)
- Secrets use the env_var field in the schema, written to .env by the wizard
- README.md in the plugin directory

* docs: add memory providers user guide + developer guide

New pages:
- user-guide/features/memory-providers.md — comprehensive guide covering all 7 shipped providers (Honcho, OpenViking, Mem0, Hindsight, Holographic, RetainDB, ByteRover). Each with setup, config, tools, cost, and unique features. Includes comparison table and profile isolation notes.
- developer-guide/memory-provider-plugin.md — how to build a new memory provider plugin. Covers ABC, required methods, config schema, save_config, threading contract, profile isolation, testing.

Updated pages:
- user-guide/features/memory.md — replaced Honcho section with link to new Memory Providers page
- user-guide/features/honcho.md — replaced with migration redirect to the new Memory Providers page
- sidebars.ts — added both new pages to navigation

* fix(memory): auto-migrate Honcho users to memory provider plugin

When honcho.json or ~/.honcho/config.json exists but memory.provider is not
set, automatically set memory.provider: honcho in config.yaml and activate
the plugin. The plugin reads the same config files, so all data and
credentials are preserved. Zero user action needed.

Persists the migration to config.yaml so it only fires once. Prints a
one-line confirmation in non-quiet mode.

* fix(memory): only auto-migrate Honcho when enabled + credentialed

Check HonchoClientConfig.enabled AND (api_key OR base_url) before
auto-migrating — not just file existence. Prevents false activation for
users who disabled Honcho, stopped using it (config lingers), or have
~/.honcho/ from a different tool.

* feat(memory): auto-install pip dependencies during hermes memory setup

Reads pip_dependencies from plugin.yaml, checks which are missing, installs
them via pip before the config walkthrough. Also shows install guidance for
external_dependencies (e.g. brv CLI for ByteRover).

Updated all 7 plugin.yaml files with pip_dependencies:
- honcho: honcho-ai
- mem0: mem0ai
- openviking: httpx
- hindsight: hindsight-client
- holographic: (none)
- retaindb: requests
- byterover: (external_dependencies for brv CLI)

* fix: remove remaining Honcho crash risks from cli.py and gateway

cli.py: removed the Honcho session re-mapping block (would crash importing
the deleted tools/honcho_tools.py), Honcho flush on compress, Honcho
session display on startup, Honcho shutdown on exit, and the
honcho_session_key AIAgent param.

gateway/run.py: removed honcho_session_key params from helper methods, the
sync_honcho param, and the _honcho.shutdown() block.
tests: fixed test_cron_session_with_honcho_key_skipped (was passing the
removed honcho_key param to _flush_memories_for_session).

* fix: include plugins/ in pyproject.toml package list

Without this, plugins/memory/ wouldn't be included in non-editable
installs. Hermes always runs from the repo checkout so this is
belt-and-suspenders, but it prevents breakage if the install method
changes.

* fix(memory): correct pip-to-import name mapping for dep checks

The heuristic dep.replace('-', '_') fails for packages where the pip name
differs from the import name: honcho-ai→honcho, mem0ai→mem0,
hindsight-client→hindsight_client. Added an explicit mapping table so
hermes memory setup doesn't try to reinstall already-installed packages.

* chore: remove dead code from old plugin memory registration path

- hermes_cli/plugins.py: removed register_memory_provider(), _memory_providers list, get_plugin_memory_providers() — memory providers now use plugins/memory/ discovery, not the general plugin system
- hermes_cli/main.py: stripped 74 lines of dead honcho argparse subparsers (setup, status, sessions, map, peer, mode, tokens, identity, migrate) — kept only the migration redirect
- agent/memory_provider.py: updated docstring to reflect new registration path
- tests: replaced TestPluginMemoryProviderRegistration with TestPluginMemoryDiscovery, which tests the actual plugins/memory/ discovery system. Added 3 new tests (discover, load, nonexistent).

* chore: delete dead honcho_integration/cli.py and its tests

cli.py (794 lines) was the old 'hermes honcho' command handler — nobody
calls it since cmd_honcho was replaced with a migration redirect.
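The explicit pip-to-import mapping plus missing-dependency check could look like this sketch; the mapping entries come from the commit message above, while the helper names are hypothetical:

```python
import importlib.util

# Explicit mapping for packages whose import name differs from the pip name;
# everything else falls back to the dash-to-underscore heuristic.
PIP_TO_IMPORT = {
    "honcho-ai": "honcho",
    "mem0ai": "mem0",
    "hindsight-client": "hindsight_client",
}


def import_name(pip_name: str) -> str:
    """Resolve the module name to probe for a given pip package name."""
    return PIP_TO_IMPORT.get(pip_name, pip_name.replace("-", "_"))


def is_missing(pip_name: str) -> bool:
    """True if the package is not importable and should be pip-installed."""
    return importlib.util.find_spec(import_name(pip_name)) is None


print(import_name("honcho-ai"))   # honcho
print(import_name("requests"))    # requests (heuristic fallback)
print(is_missing("json"))         # stdlib module is present -> False
```

Without the table, `find_spec("honcho_ai")` returns None even when `honcho` is installed, which is exactly the reinstall loop the commit fixes.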
Deleted tests that imported from removed code:
- tests/honcho_integration/test_cli.py (tested _resolve_api_key)
- tests/honcho_integration/test_config_isolation.py (tested CLI config paths)
- tests/tools/test_honcho_tools.py (tested the deleted tools/honcho_tools.py)

Remaining honcho_integration/ files (actively used by the plugin):
- client.py (445 lines) — config loading, SDK client creation
- session.py (991 lines) — session management, queries, flush

* refactor: move honcho_integration/ into the honcho plugin

Moves client.py (445 lines) and session.py (991 lines) from the top-level
honcho_integration/ package into plugins/memory/honcho/. No Honcho code
remains in the main codebase.

- plugins/memory/honcho/client.py — config loading, SDK client creation
- plugins/memory/honcho/session.py — session management, queries, flush
- Updated all imports: run_agent.py (auto-migration), hermes_cli/doctor.py, plugin __init__.py, session.py cross-import, all tests
- Removed honcho_integration/ package and pyproject.toml entry
- Renamed tests/honcho_integration/ → tests/honcho_plugin/

* docs: update architecture + gateway-internals for memory provider system

- architecture.md: replaced honcho_integration/ with plugins/memory/
- gateway-internals.md: replaced Honcho-specific session routing and flush lifecycle docs with generic memory provider interface docs

* fix: update stale mock path for resolve_active_host after honcho plugin migration

* fix(memory): address review feedback — P0 lifecycle, ABC contract, honcho CLI restore

Review feedback from Honcho devs (erosika):

P0 — Provider lifecycle:
- Remove on_session_end() + shutdown_all() from run_conversation() tail (was killing providers after every turn in multi-turn sessions)
- Add shutdown_memory_provider() method on AIAgent for callers
- Wire shutdown into CLI atexit, reset_conversation, gateway stop/expiry

Bug fixes:
- Remove sync_honcho=False kwarg from /btw callsites (TypeError crash)
- Fix doctor.py references to the dead 'hermes honcho setup' command
- Cache prefetch_all() before the tool loop (was re-calling every iteration)

ABC contract hardening (all backwards-compatible):
- Add session_id kwarg to prefetch/sync_turn/queue_prefetch
- Make on_pre_compress() return str (provider insights in compression)
- Add **kwargs to on_turn_start() for runtime context
- Add on_delegation() hook for parent-side subagent observation
- Document agent_context/agent_identity/agent_workspace kwargs on initialize() (prevents cron corruption, enables profile scoping)
- Fix docstring: single external provider, not multiple

Honcho CLI restoration:
- Add plugins/memory/honcho/cli.py (from main's honcho_integration/cli.py with imports adapted to the plugin path)
- Restore the full hermes honcho command with all subcommands (status, peer, mode, tokens, identity, enable/disable, sync, peers, --target-profile)
- Restore auto-clone on profile creation + sync on hermes update
- hermes honcho setup now redirects to hermes memory setup

* fix(memory): wire on_delegation, skip_memory for cron/flush, fix ByteRover return type

- Wire on_delegation() in delegate_tool.py — the parent's memory provider is notified with task+result after each subagent completes
- Add skip_memory=True to cron scheduler (prevents cron system prompts from corrupting user representations — closes #4052)
- Add skip_memory=True to gateway flush agent (a throwaway agent shouldn't activate the memory provider)
- Fix ByteRover on_pre_compress() return type: None -> str

* fix(honcho): port profile isolation fixes from PR #4632

Ports 5 bug fixes found during profile testing (erosika's PR #4632):

1. 3-tier config resolution — resolve_config_path() now checks $HERMES_HOME/honcho.json → ~/.hermes/honcho.json → ~/.honcho/config.json (non-default profiles couldn't find shared host blocks)
2. Thread host=_host_key() through from_global_config() in cmd_setup, cmd_status, cmd_identity (--target-profile was being ignored)
3. Use the bare profile name as aiPeer (not the host key with dots) — Honcho's peer ID pattern is ^[a-zA-Z0-9_-]+$, so dots are invalid
4. Wrap add_peers() in try/except — it was fatal on new AI peers and killed all message uploads for the session
5. Gate Honcho clone behind --clone/--clone-all on profile create (bare create should be blank-slate)

Also: sanitize assistant_peer_id via _sanitize_id()

* fix(tests): add module cleanup fixture to test_cli_provider_resolution

test_cli_provider_resolution._import_cli() wipes tools.*, cli, and
run_agent from sys.modules to force fresh imports, but had no cleanup. This
poisoned all subsequent tests on the same xdist worker — mocks targeting
tools.file_tools, tools.send_message_tool, etc. patched the NEW module
object while already-imported functions still referenced the OLD one.

Caused ~25 cascade failures: send_message KeyError, process_registry
FileNotFoundError, file_read_guards timeouts, read_loop_detection
file-not-found, mcp_oauth None port, and provider_parity/codex_execution
stale tool lists.

Fix: an autouse fixture saves all affected modules before each test and
restores them after, matching the pattern in
test_managed_browserbase_and_modal.py.
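The save-and-restore pattern for sys.modules described in that test fix can be sketched as a context manager (the real fix is a pytest autouse fixture; the names and affected-module predicate here are illustrative):

```python
import sys
from contextlib import contextmanager


@contextmanager
def preserved_modules(prefixes=("tools.",), names=("tools", "cli", "run_agent")):
    """Snapshot the affected sys.modules entries, restore them afterwards.

    Tests that wipe modules to force fresh imports can then no longer
    poison later tests running on the same worker.
    """
    def affected(name):
        return name in names or any(name.startswith(p) for p in prefixes)

    saved = {n: m for n, m in sys.modules.items() if affected(n)}
    try:
        yield
    finally:
        # Drop any fresh copies the test imported, then restore the originals
        for n in [n for n in sys.modules if affected(n)]:
            del sys.modules[n]
        sys.modules.update(saved)


# Demo with a throwaway entry instead of real project modules
sys.modules["tools.demo"] = object()
original = sys.modules["tools.demo"]
with preserved_modules():
    del sys.modules["tools.demo"]          # simulate the wiping import helper
    sys.modules["tools.demo"] = object()   # fresh, different module object
print(sys.modules["tools.demo"] is original)  # original restored
del sys.modules["tools.demo"]
```

Restoring the original module objects is the key point: mocks patched onto a fresh copy of a module never reach functions that were bound against the old copy, which is exactly the cascade-failure mechanism the commit describes.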
#!/usr/bin/env python3
"""
Delegate Tool -- Subagent Architecture

Spawns child AIAgent instances with isolated context, restricted toolsets,
and their own terminal sessions. Supports single-task and batch (parallel)
modes. The parent blocks until all children complete.

Each child gets:
- A fresh conversation (no parent history)
- Its own task_id (own terminal session, file ops cache)
- A restricted toolset (configurable, with blocked tools always stripped)
- A focused system prompt built from the delegated goal + context

The parent's context only sees the delegation call and the summary result,
never the child's intermediate tool calls or reasoning.
"""

import json
import logging
import os
import time
from concurrent.futures import ThreadPoolExecutor, as_completed
from typing import Any, Dict, List, Optional

logger = logging.getLogger(__name__)


# Tools that children must never have access to
DELEGATE_BLOCKED_TOOLS = frozenset([
    "delegate_task",  # no recursive delegation
    "clarify",        # no user interaction
    "memory",         # no writes to shared MEMORY.md
    "send_message",   # no cross-platform side effects
    "execute_code",   # children should reason step-by-step, not write scripts
])

MAX_CONCURRENT_CHILDREN = 3
MAX_DEPTH = 2  # parent (0) -> child (1) -> grandchild rejected (2)
DEFAULT_MAX_ITERATIONS = 50
DEFAULT_TOOLSETS = ["terminal", "file", "web"]

def check_delegate_requirements() -> bool:
    """Delegation has no external requirements -- always available."""
    return True

def _build_child_system_prompt(goal: str, context: Optional[str] = None) -> str:
    """Build a focused system prompt for a child agent."""
    parts = [
        "You are a focused subagent working on a specific delegated task.",
        "",
        f"YOUR TASK:\n{goal}",
    ]
    if context and context.strip():
        parts.append(f"\nCONTEXT:\n{context}")
    parts.append(
        "\nComplete this task using the tools available to you. "
        "When finished, provide a clear, concise summary of:\n"
        "- What you did\n"
        "- What you found or accomplished\n"
        "- Any files you created or modified\n"
        "- Any issues encountered\n\n"
        "Be thorough but concise -- your response is returned to the "
        "parent agent as a summary."
    )
    return "\n".join(parts)

def _strip_blocked_tools(toolsets: List[str]) -> List[str]:
    """Remove toolsets that contain only blocked tools."""
    blocked_toolset_names = {
        "delegation", "clarify", "memory", "code_execution",
    }
    return [t for t in toolsets if t not in blocked_toolset_names]

def _build_child_progress_callback(task_index: int, parent_agent, task_count: int = 1) -> Optional[callable]:
    """Build a callback that relays child agent tool calls to the parent display.

    Two display paths:
      CLI: prints tree-view lines above the parent's delegation spinner
      Gateway: batches tool names and relays to parent's progress callback

    Returns None if no display mechanism is available, in which case the
    child agent runs with no progress callback (identical to current behavior).
    """
    spinner = getattr(parent_agent, '_delegate_spinner', None)
    parent_cb = getattr(parent_agent, 'tool_progress_callback', None)

    if not spinner and not parent_cb:
        return None  # No display → no callback → zero behavior change

    # Show 1-indexed prefix only in batch mode (multiple tasks)
    prefix = f"[{task_index + 1}] " if task_count > 1 else ""

    # Gateway: batch tool names, flush periodically
    _BATCH_SIZE = 5
    _batch: List[str] = []

    def _callback(tool_name: str, preview: str = None):
        # Special "_thinking" event: model produced text content (reasoning)
        if tool_name == "_thinking":
            if spinner:
                short = (preview[:55] + "...") if preview and len(preview) > 55 else (preview or "")
                try:
                    spinner.print_above(f" {prefix}├─ 💭 \"{short}\"")
                except Exception as e:
                    logger.debug("Spinner print_above failed: %s", e)
            # Don't relay thinking to gateway (too noisy for chat)
            return

        # Regular tool call event
        if spinner:
            short = (preview[:35] + "...") if preview and len(preview) > 35 else (preview or "")
            from agent.display import get_tool_emoji
            emoji = get_tool_emoji(tool_name)
            line = f" {prefix}├─ {emoji} {tool_name}"
            if short:
                line += f" \"{short}\""
            try:
                spinner.print_above(line)
            except Exception as e:
                logger.debug("Spinner print_above failed: %s", e)

        if parent_cb:
            _batch.append(tool_name)
            if len(_batch) >= _BATCH_SIZE:
                summary = ", ".join(_batch)
                try:
                    parent_cb("subagent_progress", f"🔀 {prefix}{summary}")
                except Exception as e:
                    logger.debug("Parent callback failed: %s", e)
                _batch.clear()

    def _flush():
        """Flush remaining batched tool names to gateway on completion."""
        if parent_cb and _batch:
            summary = ", ".join(_batch)
            try:
                parent_cb("subagent_progress", f"🔀 {prefix}{summary}")
            except Exception as e:
                logger.debug("Parent callback flush failed: %s", e)
            _batch.clear()

    _callback._flush = _flush
    return _callback

def _build_child_agent(
    task_index: int,
    goal: str,
    context: Optional[str],
    toolsets: Optional[List[str]],
    model: Optional[str],
    max_iterations: int,
    parent_agent,
    # Credential overrides from delegation config (provider:model resolution)
    override_provider: Optional[str] = None,
    override_base_url: Optional[str] = None,
    override_api_key: Optional[str] = None,
    override_api_mode: Optional[str] = None,
):
    """
    Build a child AIAgent on the main thread (thread-safe construction).
    Returns the constructed child agent without running it.

    When override_* params are set (from delegation config), the child uses
    those credentials instead of inheriting from the parent. This enables
    routing subagents to a different provider:model pair (e.g. cheap/fast
    model on OpenRouter while the parent runs on Nous Portal).
    """
    from run_agent import AIAgent

    # When no explicit toolsets given, inherit from parent's enabled toolsets
    # so disabled tools (e.g. web) don't leak to subagents.
    parent_toolsets = set(getattr(parent_agent, "enabled_toolsets", None) or DEFAULT_TOOLSETS)
    if toolsets:
        # Intersect with parent — subagent must not gain tools the parent lacks
        child_toolsets = _strip_blocked_tools([t for t in toolsets if t in parent_toolsets])
    elif parent_agent and getattr(parent_agent, "enabled_toolsets", None):
        child_toolsets = _strip_blocked_tools(parent_agent.enabled_toolsets)
    else:
        child_toolsets = _strip_blocked_tools(DEFAULT_TOOLSETS)

    child_prompt = _build_child_system_prompt(goal, context)

    # Extract parent's API key so subagents inherit auth (e.g. Nous Portal).
    parent_api_key = getattr(parent_agent, "api_key", None)
    if (not parent_api_key) and hasattr(parent_agent, "_client_kwargs"):
        parent_api_key = parent_agent._client_kwargs.get("api_key")

    # Build progress callback to relay tool calls to parent display
    child_progress_cb = _build_child_progress_callback(task_index, parent_agent)

    # Each subagent gets its own iteration budget capped at max_iterations
    # (configurable via delegation.max_iterations, default 50). This means
    # total iterations across parent + subagents can exceed the parent's
    # max_iterations. The user controls the per-subagent cap in config.yaml.

    # Resolve effective credentials: config override > parent inherit
    effective_model = model or parent_agent.model
    effective_provider = override_provider or getattr(parent_agent, "provider", None)
    effective_base_url = override_base_url or parent_agent.base_url
    effective_api_key = override_api_key or parent_api_key
    effective_api_mode = override_api_mode or getattr(parent_agent, "api_mode", None)
    effective_acp_command = getattr(parent_agent, "acp_command", None)
    effective_acp_args = list(getattr(parent_agent, "acp_args", []) or [])

    child = AIAgent(
        base_url=effective_base_url,
        api_key=effective_api_key,
        model=effective_model,
        provider=effective_provider,
        api_mode=effective_api_mode,
        acp_command=effective_acp_command,
        acp_args=effective_acp_args,
        max_iterations=max_iterations,
        max_tokens=getattr(parent_agent, "max_tokens", None),
        reasoning_config=getattr(parent_agent, "reasoning_config", None),
        prefill_messages=getattr(parent_agent, "prefill_messages", None),
        enabled_toolsets=child_toolsets,
        quiet_mode=True,
        ephemeral_system_prompt=child_prompt,
        log_prefix=f"[subagent-{task_index}]",
        platform=parent_agent.platform,
        skip_context_files=True,
        skip_memory=True,
        clarify_callback=None,
        session_db=getattr(parent_agent, '_session_db', None),
        providers_allowed=parent_agent.providers_allowed,
        providers_ignored=parent_agent.providers_ignored,
        providers_order=parent_agent.providers_order,
        provider_sort=parent_agent.provider_sort,
        tool_progress_callback=child_progress_cb,
        iteration_budget=None,  # fresh budget per subagent
    )
    # Set delegation depth so children can't spawn grandchildren
    child._delegate_depth = getattr(parent_agent, '_delegate_depth', 0) + 1

    # Register child for interrupt propagation
    if hasattr(parent_agent, '_active_children'):
        lock = getattr(parent_agent, '_active_children_lock', None)
        if lock:
            with lock:
                parent_agent._active_children.append(child)
        else:
            parent_agent._active_children.append(child)

    return child

def _run_single_child(
    task_index: int,
    goal: str,
    child=None,
    parent_agent=None,
    **_kwargs,
) -> Dict[str, Any]:
    """
    Run a pre-built child agent. Called from within a thread.

    Returns a structured result dict.
    """
    child_start = time.monotonic()

    # Get the progress callback from the child agent
    child_progress_cb = getattr(child, 'tool_progress_callback', None)

    # The parent's tool names were saved on the child (_delegate_saved_tool_names)
    # before construction mutated the process-global. The finally block below
    # restores them once the child finishes.
    import model_tools

    try:
        result = child.run_conversation(user_message=goal)

        # Flush any remaining batched progress to gateway
        if child_progress_cb and hasattr(child_progress_cb, '_flush'):
            try:
                child_progress_cb._flush()
            except Exception as e:
                logger.debug("Progress callback flush failed: %s", e)

        duration = round(time.monotonic() - child_start, 2)

        summary = result.get("final_response") or ""
        completed = result.get("completed", False)
        interrupted = result.get("interrupted", False)
        api_calls = result.get("api_calls", 0)

        if interrupted:
            status = "interrupted"
        elif summary:
            # A summary means the subagent produced usable output.
            # exit_reason ("completed" vs "max_iterations") already
            # tells the parent *how* the task ended.
            status = "completed"
        else:
            status = "failed"

        # Build tool trace from conversation messages (already in memory).
        # Uses tool_call_id to correctly pair parallel tool calls with results.
        tool_trace: list[Dict[str, Any]] = []
        trace_by_id: Dict[str, Dict[str, Any]] = {}
        messages = result.get("messages") or []
        if isinstance(messages, list):
            for msg in messages:
                if not isinstance(msg, dict):
                    continue
                if msg.get("role") == "assistant":
                    for tc in (msg.get("tool_calls") or []):
                        fn = tc.get("function", {})
                        entry_t = {
                            "tool": fn.get("name", "unknown"),
                            "args_bytes": len(fn.get("arguments", "")),
                        }
                        tool_trace.append(entry_t)
                        tc_id = tc.get("id")
                        if tc_id:
                            trace_by_id[tc_id] = entry_t
                elif msg.get("role") == "tool":
                    content = msg.get("content", "")
                    is_error = bool(
                        content and "error" in content[:80].lower()
                    )
                    result_meta = {
                        "result_bytes": len(content),
                        "status": "error" if is_error else "ok",
                    }
                    # Match by tool_call_id for parallel calls
                    tc_id = msg.get("tool_call_id")
                    target = trace_by_id.get(tc_id) if tc_id else None
                    if target is not None:
                        target.update(result_meta)
                    elif tool_trace:
                        # Fallback for messages without tool_call_id
                        tool_trace[-1].update(result_meta)

        # Determine exit reason
        if interrupted:
            exit_reason = "interrupted"
        elif completed:
            exit_reason = "completed"
        else:
            exit_reason = "max_iterations"

        # Extract token counts (safe for mock objects)
        _input_tokens = getattr(child, "session_prompt_tokens", 0)
        _output_tokens = getattr(child, "session_completion_tokens", 0)
        _model = getattr(child, "model", None)

        entry: Dict[str, Any] = {
            "task_index": task_index,
            "status": status,
            "summary": summary,
            "api_calls": api_calls,
            "duration_seconds": duration,
            "model": _model if isinstance(_model, str) else None,
            "exit_reason": exit_reason,
            "tokens": {
                "input": _input_tokens if isinstance(_input_tokens, (int, float)) else 0,
                "output": _output_tokens if isinstance(_output_tokens, (int, float)) else 0,
            },
            "tool_trace": tool_trace,
        }
        if status == "failed":
            entry["error"] = result.get("error", "Subagent did not produce a response.")

        return entry

    except Exception as exc:
        duration = round(time.monotonic() - child_start, 2)
        logger.exception(f"[subagent-{task_index}] failed")
        return {
            "task_index": task_index,
            "status": "error",
            "summary": None,
            "error": str(exc),
            "api_calls": 0,
            "duration_seconds": duration,
        }

    finally:
        # Restore the parent's tool names so the process-global is correct
        # for any subsequent execute_code calls or other consumers.
        saved_tool_names = getattr(child, "_delegate_saved_tool_names", None)
        if isinstance(saved_tool_names, list):
            model_tools._last_resolved_tool_names = list(saved_tool_names)

        # Unregister child from interrupt propagation
        if hasattr(parent_agent, '_active_children'):
            try:
                lock = getattr(parent_agent, '_active_children_lock', None)
                if lock:
                    with lock:
                        parent_agent._active_children.remove(child)
                else:
                    parent_agent._active_children.remove(child)
            except (ValueError, UnboundLocalError) as e:
                logger.debug("Could not remove child from active_children: %s", e)


def delegate_task(
    goal: Optional[str] = None,
    context: Optional[str] = None,
    toolsets: Optional[List[str]] = None,
    tasks: Optional[List[Dict[str, Any]]] = None,
    max_iterations: Optional[int] = None,
    parent_agent=None,
) -> str:
    """
    Spawn one or more child agents to handle delegated tasks.

    Supports two modes:
    - Single: provide goal (+ optional context, toolsets)
    - Batch: provide tasks array [{goal, context, toolsets}, ...]

    Returns JSON with results array, one entry per task.
    """
    if parent_agent is None:
        return json.dumps({"error": "delegate_task requires a parent agent context."})

    # Depth limit
    depth = getattr(parent_agent, '_delegate_depth', 0)
    if depth >= MAX_DEPTH:
        return json.dumps({
            "error": (
                f"Delegation depth limit reached ({MAX_DEPTH}). "
                "Subagents cannot spawn further subagents."
            )
        })

    # Load config
    cfg = _load_config()
    default_max_iter = cfg.get("max_iterations", DEFAULT_MAX_ITERATIONS)
    effective_max_iter = max_iterations or default_max_iter

    # Resolve delegation credentials (provider:model pair).
    # When delegation.provider is configured, this resolves the full credential
    # bundle (base_url, api_key, api_mode) via the same runtime provider system
    # used by CLI/gateway startup. When unconfigured, returns None values so
    # children inherit from the parent.
    try:
        creds = _resolve_delegation_credentials(cfg, parent_agent)
    except ValueError as exc:
        return json.dumps({"error": str(exc)})

    # Normalize to task list
    if tasks and isinstance(tasks, list):
        task_list = tasks[:MAX_CONCURRENT_CHILDREN]
    elif goal and isinstance(goal, str) and goal.strip():
        task_list = [{"goal": goal, "context": context, "toolsets": toolsets}]
    else:
        return json.dumps({"error": "Provide either 'goal' (single task) or 'tasks' (batch)."})

    if not task_list:
        return json.dumps({"error": "No tasks provided."})

    # Validate each task has a goal
    for i, task in enumerate(task_list):
        if not task.get("goal", "").strip():
            return json.dumps({"error": f"Task {i} is missing a 'goal'."})

    overall_start = time.monotonic()
    results = []

    n_tasks = len(task_list)
    # Track goal labels for progress display (truncated for readability)
    task_labels = [t["goal"][:40] for t in task_list]

    # Save parent tool names BEFORE any child construction mutates the global.
    # _build_child_agent() calls AIAgent() which calls get_tool_definitions(),
    # which overwrites model_tools._last_resolved_tool_names with the child's toolset.
    import model_tools as _model_tools
    _parent_tool_names = list(_model_tools._last_resolved_tool_names)

    # Build all child agents on the main thread (thread-safe construction).
    # Wrapped in try/finally so the global is always restored even if a
    # child build raises (otherwise _last_resolved_tool_names stays corrupted).
    children = []
    try:
        for i, t in enumerate(task_list):
            child = _build_child_agent(
                task_index=i, goal=t["goal"], context=t.get("context"),
                toolsets=t.get("toolsets") or toolsets, model=creds["model"],
                max_iterations=effective_max_iter, parent_agent=parent_agent,
                override_provider=creds["provider"], override_base_url=creds["base_url"],
                override_api_key=creds["api_key"],
                override_api_mode=creds["api_mode"],
            )
            # Stash the parent tool names captured before child construction mutated the global
            child._delegate_saved_tool_names = _parent_tool_names
            children.append((i, t, child))
    finally:
        # Authoritative restore: reset global to parent's tool names after all children built
        _model_tools._last_resolved_tool_names = _parent_tool_names

    if n_tasks == 1:
        # Single task -- run directly (no thread pool overhead)
        _i, _t, child = children[0]
        result = _run_single_child(0, _t["goal"], child, parent_agent)
        results.append(result)
    else:
        # Batch -- run in parallel with per-task progress lines
        completed_count = 0
        spinner_ref = getattr(parent_agent, '_delegate_spinner', None)

        with ThreadPoolExecutor(max_workers=MAX_CONCURRENT_CHILDREN) as executor:
            futures = {}
            for i, t, child in children:
                future = executor.submit(
                    _run_single_child,
                    task_index=i,
                    goal=t["goal"],
                    child=child,
                    parent_agent=parent_agent,
                )
                futures[future] = i

            for future in as_completed(futures):
                try:
                    entry = future.result()
                except Exception as exc:
                    idx = futures[future]
                    entry = {
                        "task_index": idx,
                        "status": "error",
                        "summary": None,
                        "error": str(exc),
                        "api_calls": 0,
                        "duration_seconds": 0,
                    }
                results.append(entry)
                completed_count += 1

                # Print per-task completion line above the spinner
                idx = entry["task_index"]
                label = task_labels[idx] if idx < len(task_labels) else f"Task {idx}"
                dur = entry.get("duration_seconds", 0)
                status = entry.get("status", "?")
                icon = "✓" if status == "completed" else "✗"
                remaining = n_tasks - completed_count
                completion_line = f"{icon} [{idx+1}/{n_tasks}] {label} ({dur}s)"
                if spinner_ref:
                    try:
                        spinner_ref.print_above(completion_line)
                    except Exception:
                        print(f" {completion_line}")
                else:
                    print(f" {completion_line}")

                # Update spinner text to show remaining count
                if spinner_ref and remaining > 0:
                    try:
                        spinner_ref.update_text(f"🔀 {remaining} task{'s' if remaining != 1 else ''} remaining")
                    except Exception as e:
                        logger.debug("Spinner update_text failed: %s", e)

    # Sort by task_index so results match input order
    results.sort(key=lambda r: r["task_index"])

    # Notify parent's memory provider of delegation outcomes.
    # Note: index into task_list (the normalized list), not tasks, which is
    # None in single-goal mode.
    if parent_agent and hasattr(parent_agent, '_memory_manager') and parent_agent._memory_manager:
        for entry in results:
            try:
                idx = entry["task_index"]
                _task_goal = task_list[idx]["goal"] if idx < len(task_list) else ""
                parent_agent._memory_manager.on_delegation(
                    task=_task_goal,
                    result=entry.get("summary", "") or "",
                    child_session_id=getattr(children[idx][2], "session_id", "") if idx < len(children) else "",
                )
            except Exception as e:
                logger.debug("Memory on_delegation hook failed: %s", e)

    total_duration = round(time.monotonic() - overall_start, 2)

    return json.dumps({
        "results": results,
        "total_duration_seconds": total_duration,
    }, ensure_ascii=False)


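# Illustrative shape of the JSON payload delegate_task returns. The fields
# mirror the entry dict built in _run_single_child; the values below are
# made up for illustration:
#
#   {
#     "results": [
#       {"task_index": 0, "status": "completed", "summary": "...",
#        "api_calls": 4, "duration_seconds": 12.3, "model": null,
#        "exit_reason": "completed", "tokens": {"input": 0, "output": 0},
#        "tool_trace": [{"tool": "some_tool", "args_bytes": 42,
#                        "result_bytes": 1024, "status": "ok"}]}
#     ],
#     "total_duration_seconds": 12.3
#   }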
def _resolve_delegation_credentials(cfg: dict, parent_agent) -> dict:
    """Resolve credentials for subagent delegation.

    If ``delegation.base_url`` is configured, subagents use that direct
    OpenAI-compatible endpoint. Otherwise, if ``delegation.provider`` is
    configured, the full credential bundle (base_url, api_key, api_mode,
    provider) is resolved via the runtime provider system — the same path used
    by CLI/gateway startup. This lets subagents run on a completely different
    provider:model pair.

    If neither base_url nor provider is configured, returns None values so the
    child inherits everything from the parent agent.

    Raises ValueError with a user-friendly message on credential failure.
    """
    configured_model = str(cfg.get("model") or "").strip() or None
    configured_provider = str(cfg.get("provider") or "").strip() or None
    configured_base_url = str(cfg.get("base_url") or "").strip() or None
    configured_api_key = str(cfg.get("api_key") or "").strip() or None

    if configured_base_url:
        api_key = (
            configured_api_key
            or os.getenv("OPENAI_API_KEY", "").strip()
        )
        if not api_key:
            raise ValueError(
                "Delegation base_url is configured but no API key was found. "
                "Set delegation.api_key or OPENAI_API_KEY."
            )

        base_lower = configured_base_url.lower()
        provider = "custom"
        api_mode = "chat_completions"
        if "chatgpt.com/backend-api/codex" in base_lower:
            provider = "openai-codex"
            api_mode = "codex_responses"
        elif "api.anthropic.com" in base_lower:
            provider = "anthropic"
            api_mode = "anthropic_messages"

        return {
            "model": configured_model,
            "provider": provider,
            "base_url": configured_base_url,
            "api_key": api_key,
            "api_mode": api_mode,
        }

    if not configured_provider:
        # No provider override — child inherits everything from parent
        return {
            "model": configured_model,
            "provider": None,
            "base_url": None,
            "api_key": None,
            "api_mode": None,
        }

    # Provider is configured — resolve full credentials
    try:
        from hermes_cli.runtime_provider import resolve_runtime_provider
        runtime = resolve_runtime_provider(requested=configured_provider)
    except Exception as exc:
        raise ValueError(
            f"Cannot resolve delegation provider '{configured_provider}': {exc}. "
            f"Check that the provider is configured (API key set, valid provider name), "
            f"or set delegation.base_url/delegation.api_key for a direct endpoint. "
            f"Available providers: openrouter, nous, zai, kimi-coding, minimax."
        ) from exc

    api_key = runtime.get("api_key", "")
    if not api_key:
        raise ValueError(
            f"Delegation provider '{configured_provider}' resolved but has no API key. "
            f"Set the appropriate environment variable or run 'hermes login'."
        )

    return {
        "model": configured_model,
        "provider": runtime.get("provider"),
        "base_url": runtime.get("base_url"),
        "api_key": api_key,
        "api_mode": runtime.get("api_mode"),
        "command": runtime.get("command"),
        "args": list(runtime.get("args") or []),
    }


def _load_config() -> dict:
    """Load delegation config from CLI_CONFIG or persistent config.

    Checks the runtime config (cli.py CLI_CONFIG) first, then falls back
    to the persistent config (hermes_cli/config.py load_config()) so that
    ``delegation.model`` / ``delegation.provider`` are picked up regardless
    of the entry point (CLI, gateway, cron).
    """
    try:
        from cli import CLI_CONFIG
        cfg = CLI_CONFIG.get("delegation", {})
        if cfg:
            return cfg
    except Exception:
        pass
    try:
        from hermes_cli.config import load_config
        full = load_config()
        return full.get("delegation", {})
    except Exception:
        return {}


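# Illustrative ``delegation`` section of config.yaml, inferred from the keys
# read by _load_config() and _resolve_delegation_credentials(); the exact
# layout of a real config file may differ:
#
#   delegation:
#     max_iterations: 50
#     provider: openrouter         # resolve credentials via the provider system
#     model: provider/model-id     # hypothetical model identifier
#     # -- or, instead of a provider, a direct OpenAI-compatible endpoint --
#     # base_url: https://api.example.com/v1
#     # api_key: <key>             # falls back to OPENAI_API_KEY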
# ---------------------------------------------------------------------------
# OpenAI Function-Calling Schema
# ---------------------------------------------------------------------------

DELEGATE_TASK_SCHEMA = {
    "name": "delegate_task",
    "description": (
        "Spawn one or more subagents to work on tasks in isolated contexts. "
        "Each subagent gets its own conversation, terminal session, and toolset. "
        "Only the final summary is returned -- intermediate tool results "
        "never enter your context window.\n\n"
        "TWO MODES (one of 'goal' or 'tasks' is required):\n"
        "1. Single task: provide 'goal' (+ optional context, toolsets)\n"
        "2. Batch (parallel): provide 'tasks' array with up to 3 items. "
        "All run concurrently and results are returned together.\n\n"
        "WHEN TO USE delegate_task:\n"
        "- Reasoning-heavy subtasks (debugging, code review, research synthesis)\n"
        "- Tasks that would flood your context with intermediate data\n"
        "- Parallel independent workstreams (research A and B simultaneously)\n\n"
        "WHEN NOT TO USE (use these instead):\n"
        "- Mechanical multi-step work with no reasoning needed -> use execute_code\n"
        "- Single tool call -> just call the tool directly\n"
        "- Tasks needing user interaction -> subagents cannot use clarify\n\n"
        "IMPORTANT:\n"
        "- Subagents have NO memory of your conversation. Pass all relevant "
        "info (file paths, error messages, constraints) via the 'context' field.\n"
        "- Subagents CANNOT call: delegate_task, clarify, memory, send_message, "
        "execute_code.\n"
        "- Each subagent gets its own terminal session (separate working directory and state).\n"
        "- Results are always returned as an array, one entry per task."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "goal": {
                "type": "string",
                "description": (
                    "What the subagent should accomplish. Be specific and "
                    "self-contained -- the subagent knows nothing about your "
                    "conversation history."
                ),
            },
            "context": {
                "type": "string",
                "description": (
                    "Background information the subagent needs: file paths, "
                    "error messages, project structure, constraints. The more "
                    "specific you are, the better the subagent performs."
                ),
            },
            "toolsets": {
                "type": "array",
                "items": {"type": "string"},
                "description": (
                    "Toolsets to enable for this subagent. "
                    "Default: inherits your enabled toolsets. "
                    "Common patterns: ['terminal', 'file'] for code work, "
                    "['web'] for research, ['terminal', 'file', 'web'] for "
                    "full-stack tasks."
                ),
            },
            "tasks": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "goal": {"type": "string", "description": "Task goal"},
                        "context": {"type": "string", "description": "Task-specific context"},
                        "toolsets": {
                            "type": "array",
                            "items": {"type": "string"},
                            "description": "Toolsets for this specific task",
                        },
                    },
                    "required": ["goal"],
                },
                "maxItems": 3,
                "description": (
                    "Batch mode: up to 3 tasks to run in parallel. Each gets "
                    "its own subagent with isolated context and terminal session. "
                    "When provided, top-level goal/context/toolsets are ignored."
                ),
            },
            "max_iterations": {
                "type": "integer",
                "description": (
                    "Max tool-calling turns per subagent (default: 50). "
                    "Only set lower for simple tasks."
                ),
            },
        },
        "required": [],
    },
}


# --- Registry ---
from tools.registry import registry

registry.register(
    name="delegate_task",
    toolset="delegation",
    schema=DELEGATE_TASK_SCHEMA,
    handler=lambda args, **kw: delegate_task(
        goal=args.get("goal"),
        context=args.get("context"),
        toolsets=args.get("toolsets"),
        tasks=args.get("tasks"),
        max_iterations=args.get("max_iterations"),
        parent_agent=kw.get("parent_agent")),
    check_fn=check_delegate_requirements,
    emoji="🔀",
)