Compare commits


1 Commit

Author SHA1 Message Date
Alexander Whitestone
73984ca72f feat: Add queue health check script
Some checks failed
Docker Build and Publish / build-and-push (pull_request) Has been skipped
Contributor Attribution Check / check-attribution (pull_request) Failing after 29s
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 31s
Tests / e2e (pull_request) Successful in 2m13s
Tests / test (pull_request) Failing after 28m10s
2026-04-17 01:26:07 -04:00
3 changed files with 147 additions and 96 deletions


@@ -1,68 +0,0 @@
# Tool Investigation Report: Top 5 Recommendations from awesome-ai-tools
**Generated:** 2026-04-20 | **Source:** [formatho/awesome-ai-tools](https://github.com/formatho/awesome-ai-tools)
---
## Methodology
Scanned 795 tools across 10 categories from the awesome-ai-tools repository. Evaluated each tool against Hermes Agent's architecture and needs:
- **Memory/Context**: Persistent memory, conversation history, knowledge graphs
- **Inference Optimization**: Token efficiency, local model serving, routing
- **Agent Orchestration**: Multi-agent coordination, fleet management
- **Workflow Automation**: Task decomposition, scheduling, pipelines
- **Retrieval/RAG**: Semantic search, document understanding, context injection
Each tool was scored on GitHub stars, development activity (freshness), integration potential, and impact on Hermes.
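The report does not give the exact weighting, so the following is a minimal sketch of how such a rubric could be expressed; the weights, thresholds, and function name are illustrative assumptions, not values taken from the actual scan.

```python
# Hypothetical rubric: weights, thresholds, and field names are assumptions.
def score_tool(stars: int, days_since_commit: int, integration: int, impact: int) -> float:
    """Collapse the four criteria into a single 0-5 score."""
    star_score = min(stars / 20_000, 5.0)  # saturates at 100k+ stars
    freshness = 5.0 if days_since_commit <= 7 else 3.0 if days_since_commit <= 30 else 1.0
    return 0.25 * star_score + 0.25 * freshness + 0.2 * integration + 0.3 * impact
```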
---
## Top 5 Recommended Tools
| Rank | Tool | Stars | Category | Integration Effort | Impact | Why It Fits Hermes |
|------|------|-------|----------|-------------------|--------|---------------------|
| 1 | **[LiteLLM](https://github.com/BerriAI/litellm)** | 76k+ | Inference Optimization | 2/5 | 5/5 | Unified API gateway for 100+ LLM providers with cost tracking, guardrails, load balancing, and logging. Hermes already routes through multiple providers — LiteLLM could replace custom provider routing with battle-tested load balancing and automatic fallback. A direct drop-in for the `provider` abstraction layer, with native support for Bedrock, Azure, OpenAI, VertexAI, Anthropic, Ollama, and vLLM. Would reduce Hermes's provider management code by an estimated ~60%. A minimal call sketch follows this table. |
| 2 | **[Mem0](https://github.com/mem0ai/mem0)** | 53k+ | Memory/Context | 3/5 | 5/5 | Universal memory layer for AI agents with persistent, searchable memory across sessions. Hermes has session memory but lacks a structured long-term memory system. Mem0 provides automatic memory extraction from conversations, semantic search over memories, and memory decay/pruning. Could replace/enhance the current memory tool with a purpose-built agent memory infrastructure. Supports Pinecone, Qdrant, ChromaDB backends. |
| 3 | **[RAGFlow](https://github.com/infiniflow/ragflow)** | 77k+ | Retrieval/RAG | 4/5 | 4/5 | Open-source RAG engine with deep document understanding, OCR, and agent capabilities. Hermes's current retrieval is limited to web search and file reading. RAGFlow adds visual document parsing (PDF/Word/PPT with tables, charts, formulas), chunk-level citation, and configurable retrieval strategies. Would massively upgrade Hermes's document processing capabilities. Docker-deployable, compatible with local models. |
| 4 | **[LiteRT-LM](https://github.com/google-ai-edge/LiteRT-LM)** | 3.7k | Inference Optimization | 3/5 | 4/5 | C++ implementation of Google's LiteRT for efficient on-device language model inference. Hermes supports local models via Ollama but lacks optimized on-device inference for edge/mobile. LiteRT-LM provides sub-second inference on commodity hardware with minimal memory footprint. Could power a "Hermes lite" mode for offline/edge deployments. Active development (Fresh status), backed by Google AI Edge team. |
| 5 | **[Claude-Mem](https://github.com/thedotmack/claude-mem)** | 61k+ | Memory/Context | 2/5 | 3/5 | Automatic session capture and context injection for coding agents. Compresses session history with AI and injects relevant context into future sessions. Pattern directly applicable to Hermes's cross-session persistence problem. Uses agent SDK for intelligent compression — could enhance Hermes's session_search with automatic relevance-weighted recall. Lightweight integration, focused on the exact pain point of context loss between sessions. |
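To make the rank-1 recommendation concrete, below is a minimal sketch of routing a chat call through LiteLLM. The model names and fallback order are placeholders rather than Hermes configuration, and the `fallbacks` argument comes from LiteLLM's reliability features, so both should be verified against the current LiteLLM docs before use.

```python
# Sketch only: model identifiers and fallback order are placeholders,
# not Hermes's actual provider configuration.
import litellm

def route_completion(messages: list[dict]) -> str:
    """Send one chat request through LiteLLM's unified, OpenAI-compatible API."""
    response = litellm.completion(
        model="openai/gpt-4o",  # primary provider (placeholder)
        messages=messages,
        fallbacks=["anthropic/claude-3-5-sonnet-20241022", "ollama/llama3"],
    )
    return response.choices[0].message.content

# Example usage:
# print(route_completion([{"role": "user", "content": "ping"}]))
```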
---
## Category Coverage Analysis
| Category | Tools Scanned | Top Pick | Coverage Gap |
|----------|--------------|----------|-------------|
| Memory/Context | 45+ | Mem0 (53k⭐) | Hermes lacks structured long-term memory — Mem0 or Claude-Mem would fill this |
| Inference Optimization | 80+ | LiteLLM (76k⭐) | Provider routing is custom-built; LiteLLM standardizes it |
| Agent Orchestration | 120+ | langgraph (29k⭐) | Hermes's fleet model is unique — langgraph patterns could improve DAG workflows |
| Workflow Automation | 90+ | n8n (183k⭐) | Cron system exists but n8n patterns could improve visual pipeline design |
| Retrieval/RAG | 60+ | RAGFlow (77k⭐) | Document processing is weak; RAGFlow adds OCR + visual parsing |
---
## Implementation Priority
**Phase 1 (Immediate):** LiteLLM integration — highest impact, lowest effort. Replace custom provider routing with LiteLLM's unified API. Estimated: 2-3 days.
**Phase 2 (Short-term):** Mem0 memory layer — critical for agent maturity. Add structured memory extraction and retrieval. Estimated: 1 week.
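As a rough sketch of what the Phase 2 integration could look like, the snippet below assumes Mem0's open-source `Memory` class with `add` and `search` methods; the exact signatures, return shapes, and the session-to-`user_id` mapping are assumptions to verify against Mem0's documentation.

```python
# Sketch only: assumes mem0's Memory.add / Memory.search interface; the
# session-to-user_id mapping is an assumption, not Hermes's actual design.
from mem0 import Memory

memory = Memory()

def remember_exchange(user_msg: str, assistant_msg: str, session_id: str) -> None:
    """Persist one conversational exchange for recall in later sessions."""
    memory.add(
        [{"role": "user", "content": user_msg},
         {"role": "assistant", "content": assistant_msg}],
        user_id=session_id,
    )

def recall(query: str, session_id: str):
    """Semantic search over memories stored for this session."""
    return memory.search(query, user_id=session_id)
```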
**Phase 3 (Medium-term):** RAGFlow document engine — significant capability upgrade. Requires Docker setup and integration with existing file tools. Estimated: 1-2 weeks.
---
## Honorable Mentions
- **[GPTCache](https://github.com/zilliztech/GPTCache)** (8k⭐): Semantic cache for LLMs — could reduce API costs by 30-50% for repeated queries
- **[promptfoo](https://github.com/promptfoo/promptfoo)** (20k⭐): LLM testing/evaluation framework — essential for quality assurance
- **[PageIndex](https://github.com/VectifyAI/PageIndex)** (25k⭐): Vectorless reasoning-based RAG — next-gen retrieval without embeddings
- **[rtk](https://github.com/rtk-ai/rtk)** (28k⭐): CLI proxy that reduces token consumption 60-90% — directly relevant to cost optimization
---
## Data Sources
- Repository: https://github.com/formatho/awesome-ai-tools
- Total tools cataloged: 795
- Categories analyzed: Agents & Automation, Developer Tools, LLMs & Chatbots, Research & Data, Productivity
- Freshness filter: Prioritized tools with Fresh (≤7d) or Recent (≤30d) status

scripts/queue_health_check.py Executable file

@@ -0,0 +1,147 @@
#!/usr/bin/env python3
"""
Queue Health Check — Verify the dispatch queue is operational.

Checks:
1. Queue file exists and is readable
2. Queue has pending items
3. Queue is not stuck (items processing)
4. Queue age (stale items)

Usage:
    python scripts/queue_health_check.py
    python scripts/queue_health_check.py --json
"""
import json
import sys
from datetime import datetime, timedelta, timezone
from pathlib import Path
def check_queue_health(queue_path: str = "~/.hermes/queue.json") -> dict:
    """Check queue health status."""
    path = Path(queue_path).expanduser()
    result = {
        "healthy": True,
        "checks": {},
        "warnings": [],
        "errors": []
    }

    # Check 1: File exists
    if not path.exists():
        result["healthy"] = False
        result["errors"].append(f"Queue file not found: {path}")
        result["checks"]["file_exists"] = False
        return result
    result["checks"]["file_exists"] = True

    # Check 2: File is readable
    try:
        with open(path, "r") as f:
            data = json.load(f)
    except Exception as e:
        result["healthy"] = False
        result["errors"].append(f"Cannot read queue: {e}")
        result["checks"]["readable"] = False
        return result
    result["checks"]["readable"] = True

    # Check 3: Queue structure
    if not isinstance(data, dict):
        result["healthy"] = False
        result["errors"].append("Queue is not a dict")
        result["checks"]["valid_structure"] = False
        return result
    result["checks"]["valid_structure"] = True

    # Check 4: Pending items
    pending = data.get("pending", [])
    processing = data.get("processing", [])
    completed = data.get("completed", [])
    result["checks"]["pending_count"] = len(pending)
    result["checks"]["processing_count"] = len(processing)
    result["checks"]["completed_count"] = len(completed)
    if len(pending) == 0 and len(processing) == 0:
        result["warnings"].append("Queue is empty")

    # Check 5: Stale processing items
    # Use an aware "now" so comparisons with ISO timestamps carrying a "Z"
    # offset do not raise TypeError.
    now = datetime.now(timezone.utc)
    stale_threshold = timedelta(hours=1)
    for item in processing:
        started = item.get("started_at")
        if started:
            try:
                started_time = datetime.fromisoformat(started.replace("Z", "+00:00"))
                if started_time.tzinfo is None:
                    started_time = started_time.replace(tzinfo=timezone.utc)
                if now - started_time > stale_threshold:
                    result["warnings"].append(f"Stale item: {item.get('id', 'unknown')} (started {started})")
            except ValueError:
                result["warnings"].append(f"Unparseable started_at timestamp: {started}")

    # Check 6: Queue age
    if pending:
        oldest = min(pending, key=lambda x: x.get("added_at", ""))
        added = oldest.get("added_at")
        if added:
            try:
                added_time = datetime.fromisoformat(added.replace("Z", "+00:00"))
                if added_time.tzinfo is None:
                    added_time = added_time.replace(tzinfo=timezone.utc)
                age = now - added_time
                if age > timedelta(hours=24):
                    result["warnings"].append(f"Old item in queue: {oldest.get('id', 'unknown')} (added {added})")
            except ValueError:
                result["warnings"].append(f"Unparseable added_at timestamp: {added}")

    return result
def main():
    """Parse CLI arguments and run the health check."""
    import argparse
    parser = argparse.ArgumentParser(description="Queue health check")
    parser.add_argument("--queue", default="~/.hermes/queue.json", help="Queue file path")
    parser.add_argument("--json", action="store_true", help="Output as JSON")
    args = parser.parse_args()

    result = check_queue_health(args.queue)

    if args.json:
        print(json.dumps(result, indent=2))
    else:
        print("Queue Health Check")
        print("=" * 50)
        print(f"Healthy: {'✓' if result['healthy'] else '✗'}")
        print()
        print("Checks:")
        for check, value in result["checks"].items():
            if isinstance(value, bool):
                print(f"  {check}: {'✓' if value else '✗'}")
            else:
                print(f"  {check}: {value}")
        if result["warnings"]:
            print()
            print("Warnings:")
            for warning in result["warnings"]:
                print(f"  - {warning}")
        if result["errors"]:
            print()
            print("Errors:")
            for error in result["errors"]:
                print(f"  - {error}")

    sys.exit(0 if result["healthy"] else 1)


if __name__ == "__main__":
    main()
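For reference, the minimal queue shape the script expects can be inferred from the fields it reads (`pending[].added_at`, `processing[].started_at`, `completed`). The snippet below builds such a fixture and runs the check against it; the item IDs, timestamps, and import path are illustrative.

```python
# Illustrative fixture: field names mirror what queue_health_check.py reads.
import json
import tempfile
from pathlib import Path

from queue_health_check import check_queue_health  # assumes scripts/ is on sys.path

sample = {
    "pending": [{"id": "task-1", "added_at": "2026-04-17T01:00:00Z"}],
    "processing": [{"id": "task-2", "started_at": "2026-04-17T01:10:00Z"}],
    "completed": [],
}
queue_file = Path(tempfile.mkdtemp()) / "queue.json"
queue_file.write_text(json.dumps(sample))

report = check_queue_health(str(queue_file))
print(report["healthy"], report["checks"])  # stale/old-item warnings expected for past timestamps
```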


@@ -44,34 +44,6 @@ from typing import Dict, Any, Optional, Tuple
logger = logging.getLogger(__name__)
def _format_error(
    message: str,
    skill_name: str = None,
    file_path: str = None,
    suggestion: str = None,
    context: dict = None,
) -> Dict[str, Any]:
    """Format an error with rich context for better debugging."""
    parts = [message]
    if skill_name:
        parts.append(f"Skill: {skill_name}")
    if file_path:
        parts.append(f"File: {file_path}")
    if suggestion:
        parts.append(f"Suggestion: {suggestion}")
    if context:
        for key, value in context.items():
            parts.append(f"{key}: {value}")
    return {
        "success": False,
        "error": " | ".join(parts),
        "skill_name": skill_name,
        "file_path": file_path,
        "suggestion": suggestion,
    }
# Import security scanner — agent-created skills get the same scrutiny as
# community hub installs.
try: