Compare commits

...

3 Commits

Author SHA1 Message Date
fe619a1774 test: Add session model metadata tests (#741)
2026-04-15 03:52:10 +00:00
8194e9c651 feat: Add session model metadata persistence (#741) 2026-04-15 03:51:14 +00:00
Teknium
95d11dfd8e docs: automation templates gallery + comparison post (#9821)
* feat(skills): add fitness-nutrition skill to optional-skills

Cherry-picked from PR #9177 by @haileymarshall.

Adds a fitness and nutrition skill for gym-goers and health-conscious users:
- Exercise search via wger API (690+ exercises, free, no auth)
- Nutrition lookup via USDA FoodData Central (380K+ foods, DEMO_KEY fallback)
- Offline body composition calculators (BMI, TDEE, 1RM, macros, body fat %)
- Pure stdlib Python, no pip dependencies

Changes from original PR:
- Moved from skills/ to optional-skills/health/ (correct location)
- Fixed BMR formula in FORMULAS.md (removed confusing -5+10, now just +5)
- Fixed author attribution to match PR submitter
- Marked USDA_API_KEY as optional (DEMO_KEY works without signup)

Also adds optional env var support to the skill readiness checker:
- New 'optional: true' field in required_environment_variables entries
- Optional vars are preserved in metadata but don't block skill readiness
- Optional vars skip the CLI capture prompt flow
- Skills with only optional missing vars show as 'available' not 'setup_needed'

* docs: add automation templates gallery and comparison post

- New docs page: guides/automation-templates.md with 15+ ready-to-use
  automation recipes covering development workflow, devops, research,
  GitHub events, and business operations
- Comparison post (hermes-already-has-routines.md) showing Hermes has
  had schedule/webhook/API triggers since March 2026
- Added automation-templates to sidebar navigation

---------

Co-authored-by: haileymarshall <haileymarshall@users.noreply.github.com>
2026-04-14 12:30:50 -07:00
5 changed files with 1082 additions and 0 deletions

View File

@@ -0,0 +1,223 @@
"""
Session Model Metadata — Persist model context info per session

When a session switches models mid-conversation, context length and
token budget need to be updated to prevent silent truncation.

Issue: #741
"""
import json
import logging
from dataclasses import dataclass, asdict
from pathlib import Path
from typing import Any, Dict, Optional

logger = logging.getLogger(__name__)

HERMES_HOME = Path.home() / ".hermes"

# Common model context lengths (tokens)
KNOWN_CONTEXT_LENGTHS = {
    # Anthropic
    "claude-opus-4-6": 200000,
    "claude-sonnet-4": 200000,
    "claude-3.5-sonnet": 200000,
    "claude-3-haiku": 200000,
    # OpenAI
    "gpt-4o": 128000,
    "gpt-4-turbo": 128000,
    "gpt-4": 8192,
    "gpt-3.5-turbo": 16385,
    # Nous / open models
    "hermes-3-llama-3.1-405b": 131072,
    "hermes-3-llama-3.1-70b": 131072,
    "deepseek-r1": 131072,
    "deepseek-v3": 131072,
    # Local
    "llama-3.1-8b": 131072,
    "llama-3.1-70b": 131072,
    "qwen-2.5-72b": 131072,
    # Xiaomi
    "mimo-v2-pro": 131072,
    "mimo-v2-flash": 131072,
    # Defaults
    "default": 4096,
}

# Reserve tokens for system prompt, response, and overhead
TOKEN_RESERVE = 2000


@dataclass
class ModelMetadata:
    """Metadata for a model in a session."""
    model: str
    provider: str
    context_length: int
    available_for_input: int  # context_length - reserve
    current_tokens_used: int = 0

    @property
    def remaining_tokens(self) -> int:
        """Tokens remaining for new input."""
        return max(0, self.available_for_input - self.current_tokens_used)

    @property
    def utilization_pct(self) -> float:
        """Percentage of context used."""
        if self.available_for_input == 0:
            return 0.0
        return (self.current_tokens_used / self.available_for_input) * 100

    def to_dict(self) -> Dict[str, Any]:
        return asdict(self)


def get_context_length(model: str) -> int:
    """Get context length for a model."""
    model_lower = model.lower()
    # Check exact match
    if model_lower in KNOWN_CONTEXT_LENGTHS:
        return KNOWN_CONTEXT_LENGTHS[model_lower]
    # Check partial match
    for key, length in KNOWN_CONTEXT_LENGTHS.items():
        if key in model_lower:
            return length
    return KNOWN_CONTEXT_LENGTHS["default"]


def create_metadata(model: str, provider: str = "", current_tokens: int = 0) -> ModelMetadata:
    """Create model metadata."""
    context_length = get_context_length(model)
    available = max(0, context_length - TOKEN_RESERVE)
    return ModelMetadata(
        model=model,
        provider=provider,
        context_length=context_length,
        available_for_input=available,
        current_tokens_used=current_tokens
    )


def check_model_switch(
    old_model: str,
    new_model: str,
    current_tokens: int
) -> Dict[str, Any]:
    """
    Check impact of switching models mid-session.

    Returns:
        Dict with switch analysis including warnings
    """
    old_ctx = get_context_length(old_model)
    new_ctx = get_context_length(new_model)
    new_available = new_ctx - TOKEN_RESERVE
    result = {
        "old_model": old_model,
        "new_model": new_model,
        "old_context": old_ctx,
        "new_context": new_ctx,
        "current_tokens": current_tokens,
        "fits_in_new": current_tokens <= new_available,
        "truncation_needed": max(0, current_tokens - new_available),
        "warning": None,
    }
    if not result["fits_in_new"]:
        result["warning"] = (
            f"Switching to {new_model} ({new_ctx:,} ctx) with {current_tokens:,} tokens "
            f"will truncate {result['truncation_needed']:,} tokens of history. "
            f"Consider starting a new session."
        )
    elif new_ctx < old_ctx:
        # Downgrade that still fits: warn about the smaller window without
        # overwriting the more severe truncation warning above.
        reduction = old_ctx - new_ctx
        result["warning"] = (
            f"New model has {reduction:,} fewer tokens of context. "
            f"({old_ctx:,} -> {new_ctx:,})"
        )
    return result


class SessionModelTracker:
    """Track model metadata for a session."""

    def __init__(self, session_id: str):
        self.session_id = session_id
        self.metadata: Optional[ModelMetadata] = None
        self.history: list = []  # Model switch history

    def set_model(self, model: str, provider: str = "", tokens_used: int = 0):
        """Set the current model for the session."""
        old_model = self.metadata.model if self.metadata else None
        self.metadata = create_metadata(model, provider, tokens_used)
        # Record switch in history
        if old_model and old_model != model:
            self.history.append({
                "from": old_model,
                "to": model,
                "tokens_at_switch": tokens_used,
                "context_length": self.metadata.context_length
            })
        logger.info(
            "Session %s: model=%s context=%d available=%d",
            self.session_id[:12], model,
            self.metadata.context_length,
            self.metadata.available_for_input
        )

    def update_tokens(self, tokens: int):
        """Update current token usage."""
        if self.metadata:
            self.metadata.current_tokens_used = tokens

    def get_remaining(self) -> int:
        """Get remaining tokens."""
        if not self.metadata:
            return 0
        return self.metadata.remaining_tokens

    def can_fit(self, additional_tokens: int) -> bool:
        """Check if additional tokens fit in context."""
        if not self.metadata:
            return False
        return self.metadata.remaining_tokens >= additional_tokens

    def get_warning(self) -> Optional[str]:
        """Get warning if context is running low."""
        if not self.metadata:
            return None
        util = self.metadata.utilization_pct
        if util > 90:
            return f"Context {util:.0f}% full. Consider compression or new session."
        if util > 75:
            return f"Context {util:.0f}% full."
        return None

    def to_dict(self) -> Dict[str, Any]:
        """Export state."""
        return {
            "session_id": self.session_id,
            "metadata": self.metadata.to_dict() if self.metadata else None,
            "history": self.history
        }

View File

@@ -0,0 +1,160 @@
# Hermes Agent Has Had "Routines" Since March
Anthropic just announced [Claude Code Routines](https://claude.com/blog/introducing-routines-in-claude-code) — scheduled tasks, GitHub event triggers, and API-triggered agent runs. Bundled prompt + repo + connectors, running on their infrastructure.
It's a good feature. We shipped it two months ago.
---
## The Three Trigger Types — Side by Side
Claude Code Routines offers three ways to trigger an automation:
**1. Scheduled (cron)**
> "Every night at 2am: pull the top bug from Linear, attempt a fix, and open a draft PR."
Hermes equivalent — works today:
```bash
hermes cron create "0 2 * * *" \
"Pull the top bug from the issue tracker, attempt a fix, and open a draft PR." \
--name "Nightly bug fix" \
--deliver telegram
```
**2. GitHub Events (webhook)**
> "Flag PRs that touch the /auth-provider module and post to #auth-changes."
Hermes equivalent — works today:
```bash
hermes webhook subscribe auth-watch \
--events "pull_request" \
--prompt "PR #{pull_request.number}: {pull_request.title} by {pull_request.user.login}. Check if it touches the auth-provider module. If yes, summarize the changes." \
--deliver slack
```
**3. API Triggers**
> "Read the alert payload, find the owning service, post a triage summary to #oncall."
Hermes equivalent — works today:
```bash
hermes webhook subscribe alert-triage \
--prompt "Alert: {alert.name} — Severity: {alert.severity}. Find the owning service, investigate, and post a triage summary with proposed first steps." \
--deliver slack
```
Every use case in their blog post — backlog triage, docs drift, deploy verification, alert correlation, library porting, bespoke PR review — has a working Hermes implementation. No new features needed. It's been shipping since March 2026.
---
## What's Different
| | Claude Code Routines | Hermes Agent |
|---|---|---|
| **Scheduled tasks** | ✅ Schedule-based | ✅ Any cron expression + human-readable intervals |
| **GitHub triggers** | ✅ PR, issue, push events | ✅ Any GitHub event via webhook subscriptions |
| **API triggers** | ✅ POST to unique endpoint | ✅ POST to webhook routes with HMAC auth |
| **MCP connectors** | ✅ Native connectors | ✅ Full MCP client support |
| **Script pre-processing** | ❌ | ✅ Python scripts run before agent, inject context |
| **Skill chaining** | ❌ | ✅ Load multiple skills per automation |
| **Daily limit** | 5-25 runs/day | **Unlimited** |
| **Model choice** | Claude only | **Any model** — Claude, GPT, Gemini, DeepSeek, Qwen, local |
| **Delivery targets** | GitHub comments | Telegram, Discord, Slack, SMS, email, GitHub comments, webhooks, local files |
| **Infrastructure** | Anthropic's servers | **Your infrastructure** — VPS, home server, laptop |
| **Data residency** | Anthropic's cloud | **Your machines** |
| **Cost** | Pro/Max/Team/Enterprise subscription | Your API key, your rates |
| **Open source** | No | **Yes** — MIT license |
---
## Things Hermes Does That Routines Can't
### Script Injection
Run a Python script *before* the agent. The script's stdout becomes context. The script handles mechanical work (fetching, diffing, computing); the agent handles reasoning.
```bash
hermes cron create "every 1h" \
"If CHANGE DETECTED, summarize what changed. If NO_CHANGE, respond with [SILENT]." \
--script ~/.hermes/scripts/watch-site.py \
--name "Pricing monitor" \
--deliver telegram
```
The `[SILENT]` pattern means you only get notified when something actually happens. No spam.
### Multi-Skill Workflows
Chain specialized skills together. Each skill teaches the agent a specific capability, and the prompt ties them together.
```bash
hermes cron create "0 8 * * *" \
"Search arXiv for papers on language model reasoning. Save the top 3 as Obsidian notes." \
--skills "arxiv,obsidian" \
--name "Paper digest"
```
### Deliver Anywhere
One automation, any destination:
```bash
--deliver telegram # Telegram home channel
--deliver discord # Discord home channel
--deliver slack # Slack channel
--deliver sms:+15551234567 # Text message
--deliver telegram:-1001234567890:42 # Specific Telegram forum topic
--deliver local # Save to file, no notification
```
### Model-Agnostic
Your nightly triage can run on Claude. Your deploy verification can run on GPT. Your cost-sensitive monitors can run on DeepSeek or a local model. Same automation system, any backend.
---
## The Limits Tell the Story
Claude Code Routines: **5 routines per day** on Pro. **25 on Enterprise.** That's their ceiling.
Hermes has no daily limit. Run 500 automations a day if you want. The only constraint is your API budget, and you choose which models to use for which tasks.
A nightly backlog triage on Sonnet costs roughly $0.02-0.05. A monitoring check on DeepSeek costs fractions of a cent. You control the economics.
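Those figures follow directly from per-token pricing. A quick back-of-envelope sketch (the token counts and per-million-token rates below are illustrative assumptions, not published Hermes or vendor numbers):

```python
# Rough cost model: input and output tokens each billed at a per-million rate.
# Rates here are illustrative placeholders, not actual vendor pricing.
def run_cost(input_tokens: int, output_tokens: int,
             in_rate_per_m: float, out_rate_per_m: float) -> float:
    return (input_tokens * in_rate_per_m + output_tokens * out_rate_per_m) / 1_000_000

# A nightly triage that reads ~8K tokens of issues and writes an ~800-token digest
triage = run_cost(8_000, 800, in_rate_per_m=3.0, out_rate_per_m=15.0)
print(f"${triage:.3f}")  # ~$0.036 per run under these assumed rates
```

Swap in cheaper rates for a model like DeepSeek and the same run lands well under a cent.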
---
## Get Started
Hermes Agent is open source and free. The automation infrastructure — cron scheduler, webhook platform, skill system, multi-platform delivery — is built in.
```bash
pip install hermes-agent
hermes setup
```
Set up a scheduled task in 30 seconds:
```bash
hermes cron create "0 9 * * 1" \
"Generate a weekly AI news digest. Search the web for major announcements, trending repos, and notable papers. Keep it under 500 words with links." \
--name "Weekly digest" \
--deliver telegram
```
Set up a GitHub webhook in 60 seconds:
```bash
hermes gateway setup # enable webhooks
hermes webhook subscribe pr-review \
--events "pull_request" \
--prompt "Review PR #{pull_request.number}: {pull_request.title}" \
--skills "github-code-review" \
--deliver github_comment
```
Full automation templates gallery: [hermes-agent.nousresearch.com/docs/guides/automation-templates](https://hermes-agent.nousresearch.com/docs/guides/automation-templates)
Documentation: [hermes-agent.nousresearch.com](https://hermes-agent.nousresearch.com)
GitHub: [github.com/NousResearch/hermes-agent](https://github.com/NousResearch/hermes-agent)
---
*Hermes Agent is built by [Nous Research](https://nousresearch.com). Open source, model-agnostic, runs on your infrastructure.*

View File

@@ -0,0 +1,105 @@
"""
Tests for session model metadata

Issue: #741
"""
import unittest

from agent.session_model_metadata import (
    get_context_length,
    create_metadata,
    check_model_switch,
    SessionModelTracker,
)


class TestContextLength(unittest.TestCase):
    def test_known_model(self):
        ctx = get_context_length("claude-opus-4-6")
        self.assertEqual(ctx, 200000)

    def test_partial_match(self):
        ctx = get_context_length("anthropic/claude-sonnet-4")
        self.assertEqual(ctx, 200000)

    def test_unknown_model(self):
        ctx = get_context_length("unknown-model-xyz")
        self.assertEqual(ctx, 4096)


class TestModelMetadata(unittest.TestCase):
    def test_create(self):
        meta = create_metadata("gpt-4o", "openai", 1000)
        self.assertEqual(meta.context_length, 128000)
        self.assertEqual(meta.current_tokens_used, 1000)
        self.assertGreater(meta.remaining_tokens, 0)

    def test_utilization(self):
        meta = create_metadata("gpt-4o", "openai", 64000)
        self.assertAlmostEqual(meta.utilization_pct, 50.0, delta=1)


class TestModelSwitch(unittest.TestCase):
    def test_safe_switch(self):
        result = check_model_switch("gpt-3.5-turbo", "gpt-4o", 5000)
        self.assertTrue(result["fits_in_new"])
        self.assertIsNone(result["warning"])

    def test_truncation_warning(self):
        result = check_model_switch("gpt-4o", "gpt-3.5-turbo", 20000)
        self.assertFalse(result["fits_in_new"])
        self.assertIsNotNone(result["warning"])
        self.assertIn("truncate", result["warning"].lower())

    def test_downgrade_warning(self):
        result = check_model_switch("claude-opus-4-6", "gpt-4", 5000)
        self.assertIsNotNone(result["warning"])


class TestSessionModelTracker(unittest.TestCase):
    def test_set_model(self):
        tracker = SessionModelTracker("test")
        tracker.set_model("gpt-4o", "openai")
        self.assertEqual(tracker.metadata.model, "gpt-4o")

    def test_update_tokens(self):
        tracker = SessionModelTracker("test")
        tracker.set_model("gpt-4o")
        tracker.update_tokens(5000)
        self.assertEqual(tracker.metadata.current_tokens_used, 5000)

    def test_remaining(self):
        tracker = SessionModelTracker("test")
        tracker.set_model("gpt-4o")
        tracker.update_tokens(10000)
        self.assertGreater(tracker.get_remaining(), 0)

    def test_can_fit(self):
        tracker = SessionModelTracker("test")
        tracker.set_model("gpt-4o")
        tracker.update_tokens(10000)
        self.assertTrue(tracker.can_fit(5000))
        self.assertFalse(tracker.can_fit(200000))

    def test_warning_low_context(self):
        tracker = SessionModelTracker("test")
        tracker.set_model("gpt-4o")
        tracker.update_tokens(115000)  # ~90% used
        warning = tracker.get_warning()
        self.assertIsNotNone(warning)

    def test_model_switch_history(self):
        tracker = SessionModelTracker("test")
        tracker.set_model("gpt-4o", "openai")
        tracker.update_tokens(5000)
        tracker.set_model("claude-opus-4-6", "anthropic")
        self.assertEqual(len(tracker.history), 1)
        self.assertEqual(tracker.history[0]["from"], "gpt-4o")


if __name__ == "__main__":
    unittest.main()

View File

@@ -0,0 +1,593 @@
---
sidebar_position: 15
title: "Automation Templates"
description: "Ready-to-use automation recipes — scheduled tasks, GitHub event triggers, API webhooks, and multi-skill workflows"
---
# Automation Templates
Copy-paste recipes for common automation patterns. Each template uses Hermes's built-in [cron scheduler](/docs/user-guide/features/cron) for time-based triggers and [webhook platform](/docs/user-guide/messaging/webhooks) for event-driven triggers.
Every template works with **any model** — not locked to a single provider.
:::tip Three Trigger Types
| Trigger | How | Tool |
|---------|-----|------|
| **Schedule** | Runs on a cadence (hourly, nightly, weekly) | `cronjob` tool or `/cron` slash command |
| **GitHub Event** | Fires on PR opens, pushes, issues, CI results | Webhook platform (`hermes webhook subscribe`) |
| **API Call** | External service POSTs JSON to your endpoint | Webhook platform (config.yaml routes or `hermes webhook subscribe`) |
All three support delivery to Telegram, Discord, Slack, SMS, email, GitHub comments, or local files.
:::
---
## Development Workflow
### Nightly Backlog Triage
Label, prioritize, and summarize new issues every night. Delivers a digest to your team channel.
**Trigger:** Schedule (nightly)
```bash
hermes cron create "0 2 * * *" \
"You are a project manager triaging the NousResearch/hermes-agent GitHub repo.
1. Run: gh issue list --repo NousResearch/hermes-agent --state open --json number,title,labels,author,createdAt --limit 30
2. Identify issues opened in the last 24 hours
3. For each new issue:
- Suggest a priority label (P0-critical, P1-high, P2-medium, P3-low)
- Suggest a category label (bug, feature, docs, security)
- Write a one-line triage note
4. Summarize: total open issues, new today, breakdown by priority
Format as a clean digest. If no new issues, respond with [SILENT]." \
--name "Nightly backlog triage" \
--deliver telegram
```
### Automatic PR Code Review
Review every pull request automatically when it's opened. Posts a review comment directly on the PR.
**Trigger:** GitHub webhook
**Option A — Dynamic subscription (CLI):**
```bash
hermes webhook subscribe github-pr-review \
--events "pull_request" \
--prompt "Review this pull request:
Repository: {repository.full_name}
PR #{pull_request.number}: {pull_request.title}
Author: {pull_request.user.login}
Action: {action}
Diff URL: {pull_request.diff_url}
Fetch the diff with: curl -sL {pull_request.diff_url}
Review for:
- Security issues (injection, auth bypass, secrets in code)
- Performance concerns (N+1 queries, unbounded loops, memory leaks)
- Code quality (naming, duplication, error handling)
- Missing tests for new behavior
Post a concise review. If the PR is a trivial docs/typo change, say so briefly." \
--skills "github-code-review" \
--deliver github_comment
```
**Option B — Static route (config.yaml):**
```yaml
platforms:
  webhook:
    enabled: true
    extra:
      port: 8644
      secret: "your-global-secret"
      routes:
        github-pr-review:
          events: ["pull_request"]
          secret: "github-webhook-secret"
          prompt: |
            Review PR #{pull_request.number}: {pull_request.title}
            Repository: {repository.full_name}
            Author: {pull_request.user.login}
            Diff URL: {pull_request.diff_url}
            Review for security, performance, and code quality.
          skills: ["github-code-review"]
          deliver: "github_comment"
          deliver_extra:
            repo: "{repository.full_name}"
            pr_number: "{pull_request.number}"
Then in GitHub: **Settings → Webhooks → Add webhook** → Payload URL: `http://your-server:8644/webhooks/github-pr-review`, Content type: `application/json`, Secret: `github-webhook-secret`, Events: **Pull requests**.
### Docs Drift Detection
Weekly scan of merged PRs to find API changes that need documentation updates.
**Trigger:** Schedule (weekly)
```bash
hermes cron create "0 9 * * 1" \
"Scan the NousResearch/hermes-agent repo for documentation drift.
1. Run: gh pr list --repo NousResearch/hermes-agent --state merged --json number,title,files,mergedAt --limit 30
2. Filter to PRs merged in the last 7 days
3. For each merged PR, check if it modified:
- Tool schemas (tools/*.py) — may need docs/reference/tools-reference.md update
- CLI commands (hermes_cli/commands.py, hermes_cli/main.py) — may need docs/reference/cli-commands.md update
- Config options (hermes_cli/config.py) — may need docs/user-guide/configuration.md update
- Environment variables — may need docs/reference/environment-variables.md update
4. Cross-reference: for each code change, check if the corresponding docs page was also updated in the same PR
Report any gaps where code changed but docs didn't. If everything is in sync, respond with [SILENT]." \
--name "Docs drift detection" \
--deliver telegram
```
### Dependency Security Audit
Daily scan for known vulnerabilities in project dependencies.
**Trigger:** Schedule (daily)
```bash
hermes cron create "0 6 * * *" \
"Run a dependency security audit on the hermes-agent project.
1. cd ~/.hermes/hermes-agent && source .venv/bin/activate
2. Run: pip audit --format json 2>/dev/null || pip audit 2>&1
3. Run: npm audit --json 2>/dev/null (in website/ directory if it exists)
4. Check for any CVEs with CVSS score >= 7.0
If vulnerabilities found:
- List each one with package name, version, CVE ID, severity
- Check if an upgrade is available
- Note if it's a direct dependency or transitive
If no vulnerabilities, respond with [SILENT]." \
--name "Dependency audit" \
--deliver telegram
```
---
## DevOps & Monitoring
### Deploy Verification
Trigger smoke tests after every deployment. Your CI/CD pipeline POSTs to the webhook when a deploy completes.
**Trigger:** API call (webhook)
```bash
hermes webhook subscribe deploy-verify \
--events "deployment" \
--prompt "A deployment just completed:
Service: {service}
Environment: {environment}
Version: {version}
Deployed by: {deployer}
Run these verification steps:
1. Check if the service is responding: curl -s -o /dev/null -w '%{http_code}' {health_url}
2. Search recent logs for errors: check the deployment payload for any error indicators
3. Verify the version matches: curl -s {health_url}/version
Report: deployment status (healthy/degraded/failed), response time, any errors found.
If healthy, keep it brief. If degraded or failed, provide detailed diagnostics." \
--deliver telegram
```
Your CI/CD pipeline triggers it:
```bash
curl -X POST http://your-server:8644/webhooks/deploy-verify \
-H "Content-Type: application/json" \
-H "X-Hub-Signature-256: sha256=$(echo -n '{"service":"api","environment":"prod","version":"2.1.0","deployer":"ci","health_url":"https://api.example.com/health"}' | openssl dgst -sha256 -hmac 'your-secret' | cut -d' ' -f2)" \
-d '{"service":"api","environment":"prod","version":"2.1.0","deployer":"ci","health_url":"https://api.example.com/health"}'
```
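If duplicating the JSON payload inside the shell command gets error-prone, the signature can be computed in a few lines of Python instead. This is the same HMAC-SHA256 scheme GitHub uses for its `X-Hub-Signature-256` header; the secret and payload here are placeholders:

```python
import hashlib
import hmac
import json

secret = b"your-secret"  # placeholder: the route's webhook secret
payload = json.dumps({
    "service": "api", "environment": "prod", "version": "2.1.0",
    "deployer": "ci", "health_url": "https://api.example.com/health",
}).encode()

# Sign the exact bytes you will POST; any whitespace difference changes the digest.
signature = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
print(signature)  # send as the X-Hub-Signature-256 header, payload as the body
```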
### Alert Triage
Correlate monitoring alerts with recent changes to draft a response. Works with Datadog, PagerDuty, Grafana, or any alerting system that can POST JSON.
**Trigger:** API call (webhook)
```bash
hermes webhook subscribe alert-triage \
--prompt "Monitoring alert received:
Alert: {alert.name}
Severity: {alert.severity}
Service: {alert.service}
Message: {alert.message}
Timestamp: {alert.timestamp}
Investigate:
1. Search the web for known issues with this error pattern
2. Check if this correlates with any recent deployments or config changes
3. Draft a triage summary with:
- Likely root cause
- Suggested first response steps
- Escalation recommendation (P1-P4)
Be concise. This goes to the on-call channel." \
--deliver slack
```
### Uptime Monitor
Check endpoints every 30 minutes. Only notify when something is down.
**Trigger:** Schedule (every 30 min)
```python title="~/.hermes/scripts/check-uptime.py"
import urllib.request, json, time

ENDPOINTS = [
    {"name": "API", "url": "https://api.example.com/health"},
    {"name": "Web", "url": "https://www.example.com"},
    {"name": "Docs", "url": "https://docs.example.com"},
]

results = []
for ep in ENDPOINTS:
    try:
        start = time.time()
        req = urllib.request.Request(ep["url"], headers={"User-Agent": "Hermes-Monitor/1.0"})
        resp = urllib.request.urlopen(req, timeout=10)
        elapsed = round((time.time() - start) * 1000)
        results.append({"name": ep["name"], "status": resp.getcode(), "ms": elapsed})
    except Exception as e:
        results.append({"name": ep["name"], "status": "DOWN", "error": str(e)})

down = [r for r in results if r.get("status") == "DOWN"
        or (isinstance(r.get("status"), int) and r["status"] >= 500)]
if down:
    print("OUTAGE DETECTED")
    for r in down:
        detail = r.get("error", f"HTTP {r['status']}")
        print(f"  {r['name']}: {detail}")
    print(f"\nAll results: {json.dumps(results, indent=2)}")
else:
    print("NO_ISSUES")
```
```bash
hermes cron create "every 30m" \
"If the script reports OUTAGE DETECTED, summarize which services are down and suggest likely causes. If NO_ISSUES, respond with [SILENT]." \
--script ~/.hermes/scripts/check-uptime.py \
--name "Uptime monitor" \
--deliver telegram
```
---
## Research & Intelligence
### Competitive Repository Scout
Monitor competitor repos for interesting PRs, features, and architectural decisions.
**Trigger:** Schedule (daily)
```bash
hermes cron create "0 8 * * *" \
"Scout these AI agent repositories for notable activity in the last 24 hours:
Repos to check:
- anthropics/claude-code
- openai/codex
- All-Hands-AI/OpenHands
- Aider-AI/aider
For each repo:
1. gh pr list --repo <repo> --state all --json number,title,author,createdAt,mergedAt --limit 15
2. gh issue list --repo <repo> --state open --json number,title,labels,createdAt --limit 10
Focus on:
- New features being developed
- Architectural changes
- Integration patterns we could learn from
- Security fixes that might affect us too
Skip routine dependency bumps and CI fixes. If nothing notable, respond with [SILENT].
If there are findings, organize by repo with brief analysis of each item." \
--skills "competitive-pr-scout" \
--name "Competitor scout" \
--deliver telegram
```
### AI News Digest
Weekly roundup of AI/ML developments.
**Trigger:** Schedule (weekly)
```bash
hermes cron create "0 9 * * 1" \
"Generate a weekly AI news digest covering the past 7 days:
1. Search the web for major AI announcements, model releases, and research breakthroughs
2. Search for trending ML repositories on GitHub
3. Check arXiv for highly-cited papers on language models and agents
Structure:
## Headlines (3-5 major stories)
## Notable Papers (2-3 papers with one-sentence summaries)
## Open Source (interesting new repos or major releases)
## Industry Moves (funding, acquisitions, launches)
Keep each item to 1-2 sentences. Include links. Total under 600 words." \
--name "Weekly AI digest" \
--deliver telegram
```
### Paper Digest with Notes
Daily arXiv scan that saves summaries to your note-taking system.
**Trigger:** Schedule (daily)
```bash
hermes cron create "0 8 * * *" \
"Search arXiv for the 3 most interesting papers on 'language model reasoning' OR 'tool-use agents' from the past day. For each paper, create an Obsidian note with the title, authors, abstract summary, key contribution, and potential relevance to Hermes Agent development." \
--skills "arxiv,obsidian" \
--name "Paper digest" \
--deliver local
```
---
## GitHub Event Automations
### Issue Auto-Labeling
Automatically label and respond to new issues.
**Trigger:** GitHub webhook
```bash
hermes webhook subscribe github-issues \
--events "issues" \
--prompt "New GitHub issue received:
Repository: {repository.full_name}
Issue #{issue.number}: {issue.title}
Author: {issue.user.login}
Action: {action}
Body: {issue.body}
Labels: {issue.labels}
If this is a new issue (action=opened):
1. Read the issue title and body carefully
2. Suggest appropriate labels (bug, feature, docs, security, question)
3. If it's a bug report, check if you can identify the affected component from the description
4. Post a helpful initial response acknowledging the issue
If this is a label or assignment change, respond with [SILENT]." \
--deliver github_comment
```
### CI Failure Analysis
Analyze CI failures and post diagnostics on the PR.
**Trigger:** GitHub webhook
```yaml
# config.yaml route
platforms:
  webhook:
    enabled: true
    extra:
      routes:
        ci-failure:
          events: ["check_run"]
          secret: "ci-secret"
          prompt: |
            CI check failed:
            Repository: {repository.full_name}
            Check: {check_run.name}
            Status: {check_run.conclusion}
            PR: #{check_run.pull_requests.0.number}
            Details URL: {check_run.details_url}
            If conclusion is "failure":
            1. Fetch the log from the details URL if accessible
            2. Identify the likely cause of failure
            3. Suggest a fix
            If conclusion is "success", respond with [SILENT].
          deliver: "github_comment"
          deliver_extra:
            repo: "{repository.full_name}"
            pr_number: "{check_run.pull_requests.0.number}"
```
### Auto-Port Changes Across Repos
When a PR merges in one repo, automatically port the equivalent change to another.
**Trigger:** GitHub webhook
```bash
hermes webhook subscribe auto-port \
--events "pull_request" \
--prompt "PR merged in the source repository:
Repository: {repository.full_name}
PR #{pull_request.number}: {pull_request.title}
Author: {pull_request.user.login}
Action: {action}
Merge commit: {pull_request.merge_commit_sha}
If action is 'closed' and pull_request.merged is true:
1. Fetch the diff: curl -sL {pull_request.diff_url}
2. Analyze what changed
3. Determine if this change needs to be ported to the Go SDK equivalent
4. If yes, create a branch, apply the equivalent changes, and open a PR on the target repo
5. Reference the original PR in the new PR description
If action is not 'closed' or not merged, respond with [SILENT]." \
--skills "github-pr-workflow" \
--deliver log
```
---
## Business Operations
### Stripe Payment Monitoring
Track payment events and get summaries of failures.
**Trigger:** API call (webhook)
```bash
hermes webhook subscribe stripe-payments \
--events "payment_intent.succeeded,payment_intent.payment_failed,charge.dispute.created" \
--prompt "Stripe event received:
Event type: {type}
Amount: {data.object.amount} cents ({data.object.currency})
Customer: {data.object.customer}
Status: {data.object.status}
For payment_intent.payment_failed:
- Identify the failure reason from {data.object.last_payment_error}
- Suggest whether this is a transient issue (retry) or permanent (contact customer)
For charge.dispute.created:
- Flag as urgent
- Summarize the dispute details
For payment_intent.succeeded:
- Brief confirmation only
Keep responses concise for the ops channel." \
--deliver slack
```
### Daily Revenue Summary
Compile key business metrics every morning.
**Trigger:** Schedule (daily)
```bash
hermes cron create "0 8 * * *" \
"Generate a morning business metrics summary.
Search the web for:
1. Current Bitcoin and Ethereum prices
2. S&P 500 status (pre-market or previous close)
3. Any major tech/AI industry news from the last 12 hours
Format as a brief morning briefing, 3-4 bullet points max.
Deliver as a clean, scannable message." \
--name "Morning briefing" \
--deliver telegram
```
---
## Multi-Skill Workflows
### Security Audit Pipeline
Combine multiple skills for a comprehensive weekly security review.
**Trigger:** Schedule (weekly)
```bash
hermes cron create "0 3 * * 0" \
"Run a comprehensive security audit of the hermes-agent codebase.
1. Check for dependency vulnerabilities (pip audit, npm audit)
2. Search the codebase for common security anti-patterns:
- Hardcoded secrets or API keys
- SQL injection vectors (string formatting in queries)
- Path traversal risks (user input in file paths without validation)
- Unsafe deserialization (pickle.loads, yaml.load without SafeLoader)
3. Review recent commits (last 7 days) for security-relevant changes
4. Check if any new environment variables were added without being documented
Write a security report with findings categorized by severity (Critical, High, Medium, Low).
If nothing found, report a clean bill of health." \
--skills "codebase-security-audit" \
--name "Weekly security audit" \
--deliver telegram
```
### Content Pipeline
Research, draft, and prepare content on a schedule.
**Trigger:** Schedule (weekly)
```bash
hermes cron create "0 10 * * 3" \
"Research and draft a technical blog post outline about a trending topic in AI agents.
1. Search the web for the most discussed AI agent topics this week
2. Pick the most interesting one that's relevant to open-source AI agents
3. Create an outline with:
- Hook/intro angle
- 3-4 key sections
- Technical depth appropriate for developers
- Conclusion with actionable takeaway
4. Save the outline to ~/drafts/blog-$(date +%Y%m%d).md
Keep the outline to ~300 words. This is a starting point, not a finished post." \
--name "Blog outline" \
--deliver local
```
---
## Quick Reference
### Cron Schedule Syntax
| Expression | Meaning |
|-----------|---------|
| `every 30m` | Every 30 minutes |
| `every 2h` | Every 2 hours |
| `0 2 * * *` | Daily at 2:00 AM |
| `0 9 * * 1` | Every Monday at 9:00 AM |
| `0 9 * * 1-5` | Weekdays at 9:00 AM |
| `0 3 * * 0` | Every Sunday at 3:00 AM |
| `0 */6 * * *` | Every 6 hours |
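The `every Nm` / `every Nh` forms are shorthand intervals rather than standard five-field cron. As a rough illustration (not Hermes's actual parser), normalizing such an expression to seconds might look like:

```python
import re

def interval_seconds(expr: str) -> int:
    """Convert 'every 30m' / 'every 2h' shorthand to a period in seconds."""
    match = re.fullmatch(r"every (\d+)([mh])", expr.strip())
    if not match:
        raise ValueError(f"not a shorthand interval: {expr!r}")
    n, unit = int(match.group(1)), match.group(2)
    return n * 60 if unit == "m" else n * 3600

print(interval_seconds("every 30m"))  # 1800
print(interval_seconds("every 2h"))   # 7200
```

Anything that doesn't match the shorthand would fall through to a regular cron parser.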
### Delivery Targets
| Target | Flag | Notes |
|--------|------|-------|
| Same chat | `--deliver origin` | Default — delivers to where the job was created |
| Local file | `--deliver local` | Saves output, no notification |
| Telegram | `--deliver telegram` | Home channel, or `telegram:CHAT_ID` for specific |
| Discord | `--deliver discord` | Home channel, or `discord:CHANNEL_ID` |
| Slack | `--deliver slack` | Home channel |
| SMS | `--deliver sms:+15551234567` | Direct to phone number |
| Specific thread | `--deliver telegram:-100123:456` | Telegram forum topic |
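A delivery target is effectively a colon-separated spec: platform, then an optional address, then an optional thread. A hypothetical parser, for intuition only (the field names are assumptions, not the real CLI internals):

```python
def parse_deliver(spec: str) -> dict:
    """Split 'platform[:address[:thread]]' into its parts."""
    platform, _, rest = spec.partition(":")
    address, _, thread = rest.partition(":")
    return {
        "platform": platform,
        "address": address or None,  # chat ID, channel ID, or phone number
        "thread": thread or None,    # e.g. a Telegram forum topic
    }

print(parse_deliver("telegram:-1001234567890:42"))
print(parse_deliver("sms:+15551234567"))
print(parse_deliver("local"))
```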
### Webhook Template Variables
| Variable | Description |
|----------|-------------|
| `{pull_request.title}` | PR title |
| `{issue.number}` | Issue number |
| `{repository.full_name}` | `owner/repo` |
| `{action}` | Event action (opened, closed, etc.) |
| `{__raw__}` | Full JSON payload (truncated at 4000 chars) |
| `{sender.login}` | GitHub user who triggered the event |
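These placeholders are dotted paths into the event's JSON payload, with numeric segments indexing into arrays (as in `{check_run.pull_requests.0.number}`). A rough sketch of that substitution, for intuition only rather than the actual templating code:

```python
import re

def render(template: str, payload: dict) -> str:
    """Replace {a.b.0.c} placeholders with values from a nested payload."""
    def resolve(match: re.Match) -> str:
        value = payload
        for part in match.group(1).split("."):
            if isinstance(value, list):
                value = value[int(part)]  # numeric segment indexes a list
            elif isinstance(value, dict):
                value = value.get(part, "")
            else:
                return ""  # path runs past a leaf value
        return str(value)
    return re.sub(r"\{([\w.]+)\}", resolve, template)

event = {"pull_request": {"number": 42, "title": "Fix auth bug"}, "action": "opened"}
print(render("PR #{pull_request.number}: {pull_request.title} ({action})", event))
# → PR #42: Fix auth bug (opened)
```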
### The [SILENT] Pattern
When a cron job's response contains `[SILENT]`, delivery is suppressed. Use this to avoid notification spam on quiet runs:
```
If nothing noteworthy happened, respond with [SILENT].
```
This means you only get notified when the agent has something to report.
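One plausible way a scheduler could implement the check, shown as a sketch rather than the actual Hermes source:

```python
SILENT_MARKER = "[SILENT]"

def should_deliver(agent_response: str) -> bool:
    """Suppress delivery when the agent opted into silence."""
    return SILENT_MARKER not in agent_response

print(should_deliver("3 new P1 issues found"))       # True
print(should_deliver("Nothing changed. [SILENT]"))   # False
```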

View File

@@ -153,6 +153,7 @@ const sidebars: SidebarsConfig = {
'guides/use-voice-mode-with-hermes',
'guides/build-a-hermes-plugin',
'guides/automate-with-cron',
'guides/automation-templates',
'guides/cron-troubleshooting',
'guides/work-with-skills',
'guides/delegation-patterns',