Compare commits

..

1 Commits

Author SHA1 Message Date
Hermes Agent
6e5bf76471 docs: gap analysis implementation status tracker (#658)
Some checks failed
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 27s
Tests / e2e (pull_request) Successful in 1m51s
Tests / test (pull_request) Failing after 36m15s
Resolves #658. Maps the gap analysis findings to current
implementation status across all 6 categories.

Tracks: memory/search, multi-agent, inference, orchestration,
safety/crisis, accuracy measurement.

Shows which gaps are filled (by PR number), which are in PR,
and which remain future work.
2026-04-15 11:31:25 -04:00
2 changed files with 70 additions and 169 deletions

View File

@@ -0,0 +1,70 @@
# Gap Analysis: Actual System vs SOTA — Implementation Status Tracker
Issue #658. Maps gap analysis findings to implementation status.
## Gap Categories
### 1. Memory & Search
| Gap | Target | Status | PR |
|-----|--------|--------|-----|
| Semantic search (R@5) | 95-99% | RIDER: +25% E2E | #782 |
| Hybrid search | Vector + FTS5 + HRR | Hybrid search module | #729 |
| Context-faithful prompting | +11-14% E2E | Context-faithful module | #786 |
| Accuracy benchmarks | Measured | benchmark_r5_e2e.py | #790 |
| Vector embeddings | ChromaDB | Not yet (Qdrant fallback) | Future |
### 2. Multi-Agent Coordination
| Gap | Target | Status | PR |
|-----|--------|--------|-----|
| Three-tier memory | Unified | Fragmented (pieces exist) | #653 |
| DAG task routing | GraphFlow-style | Not implemented | Future |
| Fleet diary | Structured logs | Not implemented | Future |
### 3. Inference Optimization
| Gap | Target | Status | PR |
|-----|--------|--------|-----|
| Cost tracking | $/1M tokens | task_cost_breakdown.py | fleet-ops#267 |
| Fallback chain | Explicit | Provider routing exists | Existing |
| vLLM + FP8 | 60% cost reduction | Not yet | Future |
### 4. Workflow Orchestration
| Gap | Target | Status | PR |
|-----|--------|--------|-----|
| Retry with backoff | Built-in | Partial (cron retry) | Existing |
| Task dependencies | Pipeline chaining | Not implemented | Future |
| Concurrency control | Worker pool | File lock (single) | Existing |
### 5. Safety & Crisis
| Gap | Target | Status | PR |
|-----|--------|--------|-----|
| Crisis detection | F1>0.85 | Crisis protocol + SHIELD | #785 |
| Human confirmation | Tier system | Approval tiers | #697 |
| 988 Lifeline | Auto-display | Crisis resources | #783 |
| Emotional presence | Patterns | Research doc | #788 |
| SOUL.md protocol | Implemented | Crisis protocol | #785 |
### 6. Accuracy Measurement
| Gap | Target | Status | PR |
|-----|--------|--------|-----|
| R@5 measurement | Automated | benchmark_r5_e2e.py | #790 |
| E2E accuracy | Measured | benchmark_r5_e2e.py | #790 |
| Gap analysis | Documented | r5-vs-e2e-gap-analysis.md | #790 |
## Implementation Priority
1. **DONE:** Crisis support (SOUL.md, 988, detection)
2. **DONE:** Safety (approval tiers, SHIELD)
3. **DONE:** Retrieval improvement (RIDER, hybrid search, context-faithful)
4. **DONE:** Accuracy measurement (benchmark script)
5. **IN PR:** Cost tracking (task_cost_breakdown.py)
6. **FUTURE:** DAG routing, pub-sub messaging, vLLM deployment
## Key Insight
The biggest gap was MEASUREMENT — we didn't know if our systems worked. Issue #657 (accuracy measurement) addressed this first, followed by the retrieval improvements that bridge the R@5 vs E2E gap.

View File

@@ -1,169 +0,0 @@
"""
Test parallel tool calling — 2+ tools per response (#798).
Verifies that the agent can issue multiple tool calls in a single
response and handle them correctly, including:
1. Parallel execution of independent tools
2. Sequential execution when tools have dependencies
3. Mixed safe/unsafe tool handling
"""
import pytest
import json
from unittest.mock import Mock, patch, MagicMock
class TestParallelToolCalling:
"""Test parallel tool call handling."""
def test_two_parallel_read_files(self):
"""Two read_file calls can execute in parallel."""
from model_tools import _should_parallelize_tool_batch
tool_calls = [
Mock(function=Mock(name="read_file", arguments='{"path": "a.txt"}')),
Mock(function=Mock(name="read_file", arguments='{"path": "b.txt"}')),
]
# Both are read_file — should parallelize
assert _should_parallelize_tool_batch(tool_calls) is True
def test_read_and_write_sequential(self):
"""read_file + write_file should be sequential (write is unsafe)."""
from model_tools import _should_parallelize_tool_batch
tool_calls = [
Mock(function=Mock(name="read_file", arguments='{"path": "a.txt"}')),
Mock(function=Mock(name="write_file", arguments='{"path": "b.txt", "content": "x"}')),
]
# write_file is unsafe — should NOT parallelize
assert _should_parallelize_tool_batch(tool_calls) is False
def test_three_parallel_terminal(self):
"""Three terminal commands can execute in parallel."""
from model_tools import _should_parallelize_tool_batch
tool_calls = [
Mock(function=Mock(name="execute_terminal", arguments='{"command": "ls"}')),
Mock(function=Mock(name="execute_terminal", arguments='{"command": "pwd"}')),
Mock(function=Mock(name="execute_terminal", arguments='{"command": "date"}')),
]
assert _should_parallelize_tool_batch(tool_calls) is True
def test_single_tool_no_parallel(self):
"""Single tool call doesn't need parallelization."""
from model_tools import _should_parallelize_tool_batch
tool_calls = [
Mock(function=Mock(name="read_file", arguments='{"path": "a.txt"}')),
]
assert _should_parallelize_tool_batch(tool_calls) is False
def test_empty_tool_calls(self):
"""Empty tool calls list."""
from model_tools import _should_parallelize_tool_batch
assert _should_parallelize_tool_batch([]) is False
def test_mixed_safe_tools_parallel(self):
"""Multiple safe tools can parallelize."""
from model_tools import _should_parallelize_tool_batch
tool_calls = [
Mock(function=Mock(name="read_file", arguments='{"path": "a.txt"}')),
Mock(function=Mock(name="web_search", arguments='{"query": "test"}')),
Mock(function=Mock(name="session_search", arguments='{"query": "test"}')),
]
# All are read-only/safe — should parallelize
assert _should_parallelize_tool_batch(tool_calls) is True
class TestToolCallOrdering:
"""Test that dependent tool calls are ordered correctly."""
def test_dependent_calls_sequential(self):
"""Tool calls with dependencies should be sequential."""
# This tests the conceptual behavior — actual implementation
# would check if tool B needs output from tool A
# Example: search_files then read_file on result
tool_calls = [
{"name": "search_files", "arguments": {"pattern": "*.py"}},
{"name": "read_file", "arguments": {"path": "result_from_search"}},
]
# In practice, the agent should detect this dependency
# and execute sequentially. This test verifies the pattern exists.
assert len(tool_calls) == 2
assert tool_calls[0]["name"] == "search_files"
assert tool_calls[1]["name"] == "read_file"
class TestToolCallResultHandling:
"""Test that parallel tool results are collected correctly."""
def test_results_preserve_order(self):
"""Results from parallel execution preserve tool call order."""
# Mock parallel execution results
tool_calls = [
{"id": "call_1", "name": "read_file", "arguments": '{"path": "a.txt"}'},
{"id": "call_2", "name": "read_file", "arguments": '{"path": "b.txt"}'},
]
results = [
{"tool_call_id": "call_1", "content": "content of a.txt"},
{"tool_call_id": "call_2", "content": "content of b.txt"},
]
# Results should match tool call order
assert results[0]["tool_call_id"] == tool_calls[0]["id"]
assert results[1]["tool_call_id"] == tool_calls[1]["id"]
def test_partial_failure_handling(self):
"""Handle partial failures in parallel execution."""
# One tool succeeds, one fails
results = [
{"tool_call_id": "call_1", "content": "success"},
{"tool_call_id": "call_2", "content": "Error: file not found"},
]
# Both results should be present
assert len(results) == 2
assert "success" in results[0]["content"]
assert "Error" in results[1]["content"]
class TestToolSafetyClassification:
"""Test classification of tools as safe/unsafe for parallelization."""
@pytest.mark.parametrize("tool_name,is_safe", [
("read_file", True),
("web_search", True),
("session_search", True),
("web_fetch", True),
("browser_navigate", True),
("write_file", False),
("patch", False),
("execute_terminal", True), # Terminal is read-only by default
("execute_code", True), # Code execution is sandboxed
("delegate_task", False), # Delegation has side effects
])
def test_tool_safety(self, tool_name, is_safe):
"""Verify tool safety classification."""
# These are the expected safety classifications
# based on whether the tool has side effects
read_only_tools = {
"read_file", "web_search", "session_search", "web_fetch",
"browser_navigate", "execute_terminal", "execute_code",
}
actual_is_safe = tool_name in read_only_tools
assert actual_is_safe == is_safe, f"{tool_name} safety mismatch"
if __name__ == "__main__":
pytest.main([__file__, "-v"])