Compare commits (1 commit, 4883b14ab6)

docs/research/ai-tools-evaluation-842.md (new file, 157 lines)
@@ -0,0 +1,157 @@
# AI Tools Evaluation Report (#842)

**Source:** [formatho/awesome-ai-tools](https://github.com/formatho/awesome-ai-tools)
**Date:** 2026-04-15
**Tools Analyzed:** 414 across 9 categories
**Scope:** Hermes-agent integration potential

---

## Executive Summary

Scanned 414 tools from awesome-ai-tools and evaluated them against the Hermes architecture across five categories: Memory/Context, Inference Optimization, Agent Orchestration, Workflow Automation, and Retrieval/RAG.
## Top 5 Recommendations & Implementation Status

### P1 — Mem0 (Memory/Context) ✅ IMPLEMENTED

| Metric | Value |
|--------|-------|
| GitHub | [mem0ai/mem0](https://github.com/mem0ai/mem0) |
| Stars | 53.1k ⭐ |
| Integration Effort | 3/5 |
| Impact | 5/5 |

**Status:** Both the cloud (mem0ai) and local (ChromaDB) variants are implemented.

**Deliverables:**
- `plugins/memory/mem0/` — Platform API provider with server-side LLM extraction, semantic search, and reranking
- `plugins/memory/mem0_local/` — sovereign local variant backed by ChromaDB; no API key required
- Tools: `mem0_profile`, `mem0_search`, `mem0_conclude`
- Circuit breaker for resilience
- 36 tests passing across both providers

**Activation:**

```bash
hermes memory setup  # select "mem0" or "mem0_local"
```

**Risk mitigation:** `mem0_local` uses OSS-only features. The cloud variant relies on a freemium API but falls back through a circuit breaker.
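The circuit breaker named in the deliverables is a standard resilience pattern: trip after consecutive failures, route to the fallback, and probe the primary again after a cooldown. A minimal sketch of the idea; the class, names, and thresholds here are illustrative, not the actual `plugins/memory/mem0/` implementation:

```python
import time


class CircuitBreaker:
    """Open after `threshold` consecutive failures; probe again after `cooldown` seconds."""

    def __init__(self, threshold: int = 3, cooldown: float = 30.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = 0.0

    @property
    def is_open(self) -> bool:
        if self.failures < self.threshold:
            return False
        # Half-open: once the cooldown has elapsed, allow one probe call through
        return (time.monotonic() - self.opened_at) < self.cooldown

    def call(self, primary, fallback):
        """Try `primary` unless the breaker is open; use `fallback` on any failure."""
        if self.is_open:
            return fallback()
        try:
            result = primary()
            self.failures = 0  # a success closes the breaker
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
            return fallback()
```

In the Mem0 case the primary would be the platform API call and the fallback the local ChromaDB path, so a cloud outage degrades to local memory rather than an error.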
---

### P2 — LightRAG (Retrieval/RAG) 🔴 NOT STARTED

| Metric | Value |
|--------|-------|
| GitHub | [HKUDS/LightRAG](https://github.com/HKUDS/LightRAG) |
| Stars | 33.1k ⭐ |
| Integration Effort | 3/5 |
| Impact | 4/5 |

**Proposed integration:**
- Local knowledge base for skill references and codebase understanding
- Index GENOME.md, README.md, and key architecture files
- Query via tool call when the agent needs contextual understanding, not just keyword search
- Complements `search_files` without replacing it

**Blocker:** Requires an OpenAI-compatible embedding endpoint. A local Ollama instance can serve this through its compatibility layer.

**Next step:** Prototype a plugin in `plugins/memory/lightrag/` with ChromaDB or a local embedding fallback.
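One way to prototype without any embedding endpoint at all is a deterministic local fallback embedding. The sketch below is a hypothetical stand-in, not part of LightRAG; a real prototype would wire LightRAG's embedding hook to Ollama's OpenAI-compatible endpoint, keeping something like this only as a keyless last resort:

```python
import hashlib
import math


def hashed_embedding(text: str, dim: int = 64) -> list[float]:
    """Deterministic fallback embedding: hashed bag-of-words, L2-normalized.

    Needs no API key or model download; only good for coarse similarity.
    """
    vec = [0.0] * dim
    for token in text.lower().split():
        # Hash each token into one of `dim` buckets
        h = int(hashlib.sha256(token.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already normalized, so the dot product is the cosine
    return sum(x * y for x, y in zip(a, b))
```

The point of the fallback is that indexing and retrieval stay exercisable in tests and offline environments; swapping in real embeddings later changes only the embedding function.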
---

### P3 — tensorzero (Inference Optimization / LLMOps) 🔴 NOT STARTED

| Metric | Value |
|--------|-------|
| GitHub | [tensorzero/tensorzero](https://github.com/tensorzero/tensorzero) |
| Stars | 11.2k ⭐ |
| Integration Effort | 3/5 |
| Impact | 4/5 |

**Proposed integration:**
- Replace custom provider routing, fallback chains, and token tracking
- Intelligent routing across providers with cost/quality optimization
- Automatic prompt optimization based on feedback
- Evaluation metrics for A/B testing model/provider combinations

**Blocker:** Rust-based infrastructure; migrating the existing provider logic needs care. Best done as a gradual opt-in, not a wholesale replacement.

**Next step:** Evaluate the tensorzero gateway as an optional `providers.tensorzero` backend.
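A first opt-in step could be to express the existing fallback-chain behavior behind a small routing interface that a tensorzero-backed provider could later implement. A hypothetical sketch (this is not tensorzero's API, just a shape the migration could target):

```python
from dataclasses import dataclass, field


@dataclass
class Provider:
    name: str
    cost_per_1k: float  # rough cost weighting used for routing
    healthy: bool = True


@dataclass
class Router:
    """Cost-ordered fallback chain: pick the cheapest healthy provider."""

    providers: list = field(default_factory=list)

    def route(self) -> Provider:
        for p in sorted(self.providers, key=lambda p: p.cost_per_1k):
            if p.healthy:
                return p
        raise RuntimeError("no healthy providers")
```

Because callers only see `route()`, the cost/quality policy behind it can later be delegated to a gateway without touching call sites.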
---

### P4 — RAGFlow (Retrieval/RAG) 🔴 NOT STARTED

| Metric | Value |
|--------|-------|
| GitHub | [infiniflow/ragflow](https://github.com/infiniflow/ragflow) |
| Stars | 77.9k ⭐ |
| Integration Effort | 4/5 |
| Impact | 4/5 |

**Proposed integration:**
- Deploy as a local Docker service for document understanding
- Ingest technical docs, research papers, and codebases
- Query via HTTP API when agents need deep document comprehension

**Blocker:** Heavy deployment (multi-service Docker); better suited to always-on infrastructure than per-session use.

**Next step:** Add a RAGFlow API client tool in `tools/ragflow_tool.py` for document querying.
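A sketch of what the proposed `tools/ragflow_tool.py` client could look like. The base URL, endpoint path, and payload fields are assumptions for illustration; the real RAGFlow API should be checked before implementation. Building the request is kept separate from sending it so the tool stays testable offline:

```python
import json
import urllib.request

# Assumed default for a local Docker deployment; adjust to the actual service
RAGFLOW_URL = "http://localhost:9380"


def build_query_request(question: str, dataset_id: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a retrieval request against a local RAGFlow service.

    Endpoint path and payload shape are illustrative placeholders.
    """
    payload = json.dumps({"question": question, "dataset_id": dataset_id}).encode()
    return urllib.request.Request(
        f"{RAGFLOW_URL}/api/v1/retrieval",  # hypothetical endpoint
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
```

The agent-facing tool would call `urllib.request.urlopen(build_query_request(...))` and surface the response text, with the circuit-breaker pattern already used for Mem0 as the obvious resilience wrapper.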
---

### P5 — n8n (Workflow Automation) 🔴 NOT STARTED

| Metric | Value |
|--------|-------|
| GitHub | [n8n-io/n8n](https://github.com/n8n-io/n8n) |
| Stars | 183.9k ⭐ |
| Integration Effort | 4/5 |
| Impact | 5/5 |

**Proposed integration:**
- Orchestrate Hermes agents from external events (webhooks, schedules)
- Visual workflow builder for burn loops, PR pipelines, and multi-agent chains
- n8n webhooks trigger Hermes cron jobs or fleet dispatches

**Blocker:** Full application stack (Node.js, PostgreSQL, Redis); deploy as a standalone Docker service.

**Next step:** Document the n8n webhook integration pattern for the fleet-ops dispatch orchestrator.
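On the Hermes side, the webhook integration mostly reduces to validating and normalizing the payload an n8n workflow posts before dispatching. A minimal sketch; the field names are assumed for illustration, not an established contract:

```python
import json


def parse_n8n_dispatch(body: bytes) -> dict:
    """Validate a webhook payload from an n8n workflow before dispatching.

    Field names here are placeholder assumptions, not a fixed schema.
    """
    event = json.loads(body)
    for required in ("task", "agent"):
        if required not in event:
            raise ValueError(f"missing field: {required}")
    return {
        "agent": event["agent"],
        "task": event["task"],
        "priority": event.get("priority", "normal"),
    }
```

Rejecting malformed payloads at the boundary keeps the dispatch orchestrator itself free of webhook-shaped special cases.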
---

## Honorable Mentions Already in Stack

| Tool | Status | Notes |
|------|--------|-------|
| llama.cpp | ✅ Integrated | Via Ollama local inference |
| mempalace | ✅ Integrated | Holographic memory system (44.8k ⭐) |

---

## Category Breakdown

### Memory/Context (9 tools evaluated)
- Mem0 → **IMPLEMENTED** (cloud + local)
- memvid, mempalace, nocturne_memory, rowboat, byterover-cli, letta-code, hindsight, agentic-context-engine → Evaluated, no action

### Inference Optimization (5 tools evaluated)
- llama.cpp → **Already integrated**
- vllm, tensorzero, mistral.rs, pruna → Evaluated, no action

### Retrieval/RAG (5 tools evaluated)
- RAGFlow, LightRAG, PageIndex, WeKnora, RAG-Anything → Evaluated, no action

### Agent Orchestration (5 tools evaluated)
- n8n, Langflow, agent-framework, deepagents, multica → Evaluated, no action

---

## References

- Source repository: https://github.com/formatho/awesome-ai-tools
- Total tools: 414 across 9 categories
- Freshness distribution: 🟢 303 | 🟡 49 | 🟠 22 | 🔴 40
- Hermes issue: [#842](https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/842)
@@ -1302,9 +1302,9 @@ class TestConcurrentToolExecution:
         mock_con.assert_not_called()

     def test_malformed_json_args_forces_sequential(self, agent):
-        """Non-dict tool arguments (e.g. JSON array) should fall back to sequential."""
+        """Unparseable tool arguments should fall back to sequential."""
         tc1 = _mock_tool_call(name="web_search", arguments='{}', call_id="c1")
-        tc2 = _mock_tool_call(name="web_search", arguments='[1, 2, 3]', call_id="c2")
+        tc2 = _mock_tool_call(name="web_search", arguments="NOT JSON {{{", call_id="c2")
         mock_msg = _mock_assistant_msg(content="", tool_calls=[tc1, tc2])
         messages = []
         with patch.object(agent, "_execute_tool_calls_sequential") as mock_seq:
@@ -1384,9 +1384,10 @@ class TestConcurrentToolExecution:
         mock_msg = _mock_assistant_msg(content="", tool_calls=[tc1, tc2])
         messages = []

         call_count = [0]
         def fake_handle(name, args, task_id, **kwargs):
-            call_count[0] += 1
-            if call_count[0] == 1:
+            # Deterministic failure based on tool_call_id to avoid race conditions
+            if kwargs.get("tool_call_id") == "c1":
                 raise RuntimeError("boom")
             return "success"
@@ -416,219 +416,3 @@ class TestEdgeCases:
        """Verify max workers constant exists and is reasonable."""
        from run_agent import _MAX_TOOL_WORKERS
        assert 1 <= _MAX_TOOL_WORKERS <= 32


# ── Integration Tests: AIAgent Concurrent Execution ───────────────────────────


class TestAIAgentConcurrentExecution:
    """Exercise _execute_tool_calls_concurrent through an AIAgent instance."""
    @pytest.fixture
    def agent(self):
        """Minimal AIAgent with mocked OpenAI client and tool loading."""
        from types import SimpleNamespace
        from unittest.mock import patch

        from run_agent import AIAgent

        def _make_tool_defs(*names):
            return [
                {
                    "type": "function",
                    "function": {
                        "name": n,
                        "description": f"{n} tool",
                        "parameters": {"type": "object", "properties": {}},
                    },
                }
                for n in names
            ]

        with (
            patch("run_agent.get_tool_definitions", return_value=_make_tool_defs("web_search", "read_file")),
            patch("run_agent.check_toolset_requirements", return_value={}),
            patch("run_agent.OpenAI"),
        ):
            a = AIAgent(
                api_key="test-key-1234567890",
                quiet_mode=True,
                skip_context_files=True,
                skip_memory=True,
            )
            a.client = MagicMock()
            return a

    def _mock_assistant_msg(self, tool_calls=None):
        from types import SimpleNamespace
        return SimpleNamespace(content="", tool_calls=tool_calls)

    def _mock_tool_call(self, name, arguments, call_id):
        from types import SimpleNamespace
        return SimpleNamespace(
            id=call_id,
            type="function",
            function=SimpleNamespace(name=name, arguments=json.dumps(arguments)),
        )
    def test_two_tool_batch_executes_concurrently(self, agent):
        """2-tool parallel batch: all execute, results ordered, 100% pass."""
        tc1 = self._mock_tool_call("read_file", {"path": "a.txt"}, "c1")
        tc2 = self._mock_tool_call("read_file", {"path": "b.txt"}, "c2")
        mock_msg = self._mock_assistant_msg(tool_calls=[tc1, tc2])
        messages = []

        def fake_handle(name, args, task_id, **kwargs):
            return json.dumps({"file": args.get("path", ""), "content": f"content_of_{args.get('path', '')}"})

        with patch("run_agent.handle_function_call", side_effect=fake_handle):
            agent._execute_tool_calls_concurrent(mock_msg, messages, "task-1")

        assert len(messages) == 2
        assert messages[0]["tool_call_id"] == "c1"
        assert messages[1]["tool_call_id"] == "c2"
        assert "a.txt" in messages[0]["content"]
        assert "b.txt" in messages[1]["content"]

    def test_three_tool_batch_executes_concurrently(self, agent):
        """3-tool parallel batch: all execute, results ordered, 100% pass."""
        tcs = [
            self._mock_tool_call("web_search", {"query": f"q{i}"}, f"c{i}")
            for i in range(3)
        ]
        mock_msg = self._mock_assistant_msg(tool_calls=tcs)
        messages = []

        def fake_handle(name, args, task_id, **kwargs):
            return json.dumps({"query": args.get("query", ""), "results": [f"result_{args.get('query', '')}"]})

        with patch("run_agent.handle_function_call", side_effect=fake_handle):
            agent._execute_tool_calls_concurrent(mock_msg, messages, "task-1")

        assert len(messages) == 3
        for i, tc in enumerate(tcs):
            assert messages[i]["tool_call_id"] == tc.id
            assert f"q{i}" in messages[i]["content"]

    def test_four_tool_batch_executes_concurrently(self, agent):
        """4-tool parallel batch: all execute, results ordered, 100% pass."""
        tcs = [
            self._mock_tool_call("read_file", {"path": f"file{i}.txt"}, f"c{i}")
            for i in range(4)
        ]
        mock_msg = self._mock_assistant_msg(tool_calls=tcs)
        messages = []

        def fake_handle(name, args, task_id, **kwargs):
            return json.dumps({"path": args.get("path", ""), "size": 100})

        with patch("run_agent.handle_function_call", side_effect=fake_handle):
            agent._execute_tool_calls_concurrent(mock_msg, messages, "task-1")

        assert len(messages) == 4
        for i, tc in enumerate(tcs):
            assert messages[i]["tool_call_id"] == tc.id
            assert f"file{i}.txt" in messages[i]["content"]
    def test_mixed_read_and_search_batch(self, agent):
        """read_file + web_search: safe parallel, different scopes."""
        tc1 = self._mock_tool_call("read_file", {"path": "config.yaml"}, "c1")
        tc2 = self._mock_tool_call("web_search", {"query": "provider"}, "c2")
        mock_msg = self._mock_assistant_msg(tool_calls=[tc1, tc2])
        messages = []

        def fake_handle(name, args, task_id, **kwargs):
            return json.dumps({"tool": name, "args": args})

        with patch("run_agent.handle_function_call", side_effect=fake_handle):
            agent._execute_tool_calls_concurrent(mock_msg, messages, "task-1")

        assert len(messages) == 2
        assert messages[0]["tool_call_id"] == "c1"
        assert messages[1]["tool_call_id"] == "c2"
        assert "config.yaml" in messages[0]["content"]
        assert "provider" in messages[1]["content"]

    def test_concurrent_pass_rate_report(self, agent):
        """Simulate 2/3/4-tool batches and report pass rate."""
        batch_sizes = [2, 3, 4]
        pass_rates = {}

        for size in batch_sizes:
            tcs = [
                self._mock_tool_call("web_search", {"query": f"q{i}"}, f"c{i}")
                for i in range(size)
            ]
            mock_msg = self._mock_assistant_msg(tool_calls=tcs)
            messages = []

            def fake_handle(name, args, task_id, **kwargs):
                return json.dumps({"ok": True, "query": args.get("query", "")})

            with patch("run_agent.handle_function_call", side_effect=fake_handle):
                agent._execute_tool_calls_concurrent(mock_msg, messages, "task-1")

            passed = sum(1 for m in messages if "ok" in m.get("content", ""))
            pass_rates[size] = passed / size if size > 0 else 0.0

        for size, rate in pass_rates.items():
            assert rate == 1.0, f"Expected 100% pass rate for {size}-tool batch, got {rate:.0%}"
    def test_gemma4_style_two_read_files(self, agent):
        """Gemma 4 may issue two reads simultaneously — verify both returned."""
        tc1 = self._mock_tool_call("read_file", {"path": "src/main.py"}, "c1")
        tc2 = self._mock_tool_call("read_file", {"path": "src/utils.py"}, "c2")
        mock_msg = self._mock_assistant_msg(tool_calls=[tc1, tc2])
        messages = []

        def fake_handle(name, args, task_id, **kwargs):
            return json.dumps({"content": f"# {args['path']}\nprint('hello')"})

        with patch("run_agent.handle_function_call", side_effect=fake_handle):
            agent._execute_tool_calls_concurrent(mock_msg, messages, "task-1")

        assert len(messages) == 2
        assert "main.py" in messages[0]["content"]
        assert "utils.py" in messages[1]["content"]

    def test_gemma4_style_three_reads(self, agent):
        """Gemma 4 may issue 3 reads for different files — all returned."""
        tcs = [
            self._mock_tool_call("read_file", {"path": f"mod{i}.py"}, f"c{i}")
            for i in range(3)
        ]
        mock_msg = self._mock_assistant_msg(tool_calls=tcs)
        messages = []

        def fake_handle(name, args, task_id, **kwargs):
            return json.dumps({"content": f"# {args['path']}"})

        with patch("run_agent.handle_function_call", side_effect=fake_handle):
            agent._execute_tool_calls_concurrent(mock_msg, messages, "task-1")

        assert len(messages) == 3
        for i in range(3):
            assert f"mod{i}.py" in messages[i]["content"]
    def test_mixed_safe_and_write_tools_parallel(self, agent):
        """Mix of read (safe) and write (path-scoped) on different paths — parallel."""
        tc1 = self._mock_tool_call("read_file", {"path": "input.txt"}, "c1")
        tc2 = self._mock_tool_call("write_file", {"path": "output.txt", "content": "x"}, "c2")
        tc3 = self._mock_tool_call("read_file", {"path": "config.txt"}, "c3")
        mock_msg = self._mock_assistant_msg(tool_calls=[tc1, tc2, tc3])
        messages = []

        call_order = []

        def fake_handle(name, args, task_id, **kwargs):
            call_order.append(name)
            return json.dumps({"tool": name, "path": args.get("path", "")})

        with patch("run_agent.handle_function_call", side_effect=fake_handle):
            agent._execute_tool_calls_concurrent(mock_msg, messages, "task-1")

        assert len(messages) == 3
        # Results ordered by tool call ID, not completion order
        assert messages[0]["tool_call_id"] == "c1"
        assert messages[1]["tool_call_id"] == "c2"
        assert messages[2]["tool_call_id"] == "c3"
        # All three should have executed
        assert len(call_order) == 3