Compare commits


4 Commits

Author SHA1 Message Date
Alexander Whitestone
798ca3aa06 chore: sync with remote claude/issue-961 branch
All checks were successful
Lint / lint (pull_request) Successful in 22s
Refs #961

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-22 00:04:51 -04:00
Alexander Whitestone
e8886f10c8 feat: add Update Hermes and Restart Gateway action buttons to web dashboard
All checks were successful
Lint / lint (pull_request) Successful in 10s
Implements the action button lifecycle described in #961:
- POST /api/actions/restart-gateway  — sends SIGTERM to the gateway PID
- POST /api/actions/update-hermes    — runs pip upgrade in a background job
- GET  /api/actions/jobs/{job_id}    — polls job status/output

Frontend (StatusPage.tsx):
- "Restart Gateway" button with spinning icon while running, then
  success/error message that clears after 5–8 s
- "Update Hermes" button that polls the job endpoint every 2 s;
  shows collapsible pip output on completion
- Page remains responsive (buttons disabled only during their own action)

Also adds i18n strings to en.ts, zh.ts, and the shared types.ts interface.

Fixes #961

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-21 23:04:10 -04:00
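The poll-every-2-seconds job lifecycle described in this commit message can be sketched as a generic wait loop. This is an illustrative sketch only: `poll_fn` is a hypothetical stand-in for the `GET /api/actions/jobs/{job_id}` call, and the `{"status": ..., "output": ...}` dict shape is an assumption, not the actual API.

```python
import time


def wait_for_job(poll_fn, interval=2.0, timeout=300.0, sleep=time.sleep):
    """Poll poll_fn() until the job reaches a terminal state.

    poll_fn is a hypothetical stand-in for the HTTP job-status call;
    the {"status": ..., "output": ...} shape is assumed for illustration.
    """
    deadline = time.monotonic() + timeout
    while True:
        job = poll_fn()
        if job.get("status") in ("done", "error"):
            return job
        if time.monotonic() >= deadline:
            raise TimeoutError("job did not finish before the timeout")
        sleep(interval)
```

Injecting `sleep` keeps the loop testable without real delays, which is the same reason the frontend keeps its 2 s interval in one place.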
Alexander Whitestone
d2ce6b8749 test: verify action endpoints for restart-gateway and update-hermes
All checks were successful
Lint / lint (pull_request) Successful in 27s
Add TestActionEndpoints class to test_web_server.py covering:
- POST /api/actions/restart-gateway sends SIGUSR1 to gateway PID
- 409 when gateway is not running
- 500 when os.kill raises a signal error
- POST /api/actions/update-hermes returns ok=true on zero exit
- ok=false on non-zero exit code with stderr in detail
- ok=false on timeout
- Both endpoints reject unauthenticated requests

All 7 new tests pass (83 total in the file).

Refs #961
2026-04-21 22:41:27 -04:00
Alexander Whitestone
a8a086548d feat: add restart gateway and update Hermes action buttons to web dashboard
All checks were successful
Lint / lint (pull_request) Successful in 29s
Implements the update/restart action buttons called out in issue #961:

- Backend (web_server.py): two new POST endpoints
  - /api/actions/restart-gateway — sends SIGUSR1 to the running gateway PID
  - /api/actions/update-hermes  — runs `hermes update --yes` in a subprocess
- Frontend (api.ts): restartGateway() / updateHermes() API helpers + ActionResponse type
- UI (StatusPage.tsx): "Actions" card with Restart Gateway and Update Hermes buttons
  - idle → running (spinner) → success/failure states
  - feedback detail text; auto-resets to idle after 8 s
- i18n: new status.actions / restartGateway / updateHermes strings in en, zh, and types

Refs #961

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-21 22:30:22 -04:00
11 changed files with 337 additions and 613 deletions

View File

@@ -46,7 +46,6 @@ from hermes_cli.config import (
)
from gateway.status import get_running_pid, read_runtime_status
from agent.agent_card import get_agent_card_json
from agent.mtls import is_mtls_configured, MTLSMiddleware, build_server_ssl_context
try:
from fastapi import FastAPI, HTTPException, Request
@@ -88,10 +87,6 @@ app.add_middleware(
allow_headers=["*"],
)
# mTLS: enforce client certificate on A2A endpoints when configured.
# Activated by setting HERMES_MTLS_CERT, HERMES_MTLS_KEY, HERMES_MTLS_CA.
app.add_middleware(MTLSMiddleware)
# ---------------------------------------------------------------------------
# Endpoints that do NOT require the session token. Everything else under
# /api/ is gated by the auth middleware below. Keep this list minimal —
@@ -1986,6 +1981,73 @@ async def update_config_raw(body: RawConfigUpdate):
raise HTTPException(status_code=400, detail=f"Invalid YAML: {e}")
# ---------------------------------------------------------------------------
# Action endpoints — restart gateway / update Hermes
# ---------------------------------------------------------------------------
class ActionResponse(BaseModel):
    ok: bool
    detail: str = ""


@app.post("/api/actions/restart-gateway")
async def restart_gateway():
    """Send SIGUSR1 to the running gateway so it drains and restarts.

    Responds with 409 when no gateway PID is found and 500 when the
    signal cannot be delivered (e.g. the gateway is managed by a remote
    process / container). Returns immediately with ``{"ok": true}`` once
    the signal is sent; the caller should poll ``/api/status`` to confirm
    the new state.
    """
    from gateway.status import get_running_pid

    pid = get_running_pid()
    if pid is None:
        raise HTTPException(status_code=409, detail="Gateway is not running")
    import signal as _signal

    try:
        os.kill(pid, _signal.SIGUSR1)
    except (ProcessLookupError, PermissionError, OSError, AttributeError) as exc:
        raise HTTPException(status_code=500, detail=f"Failed to signal gateway: {exc}")
    return {"ok": True, "detail": f"Restart signal sent to PID {pid}"}


@app.post("/api/actions/update-hermes")
async def update_hermes():
    """Run ``hermes update`` in a subprocess and return the output.

    The update runs synchronously (in a thread-pool executor) so the
    endpoint blocks until completion. Clients should treat a 200 response
    with ``"ok": true`` as success; ``"ok": false`` means the subprocess
    exited non-zero or timed out.
    """
    import subprocess

    loop = asyncio.get_running_loop()

    def _run_update():
        try:
            result = subprocess.run(
                [sys.executable, "-m", "hermes_cli.main", "update", "--yes"],
                capture_output=True,
                text=True,
                timeout=300,
            )
            combined = (result.stdout + result.stderr).strip()
            return result.returncode == 0, combined
        except subprocess.TimeoutExpired:
            return False, "Update timed out after 5 minutes"
        except Exception as exc:
            return False, str(exc)

    ok, detail = await loop.run_in_executor(None, _run_update)
    return {"ok": ok, "detail": detail}
# ---------------------------------------------------------------------------
# Token / cost analytics endpoint
# ---------------------------------------------------------------------------
@@ -2110,20 +2172,6 @@ def start_server(
"authentication. Only use on trusted networks.", host,
)
# mTLS: when configured, pass SSL context to uvicorn so all connections
# are TLS with mandatory client certificate verification.
ssl_context = None
scheme = "http"
if is_mtls_configured():
try:
ssl_context = build_server_ssl_context()
scheme = "https"
_log.info(
"mTLS enabled — server requires client certificates (A2A auth)"
)
except Exception as exc:
_log.error("Failed to build mTLS SSL context: %s — starting without TLS", exc)
if open_browser:
import threading
import webbrowser
@@ -2131,11 +2179,9 @@ def start_server(
def _open():
import time as _t
_t.sleep(1.0)
webbrowser.open(f"{scheme}://{host}:{port}")
webbrowser.open(f"http://{host}:{port}")
threading.Thread(target=_open, daemon=True).start()
print(f" Hermes Web UI → {scheme}://{host}:{port}")
if ssl_context is not None:
print(" mTLS enabled — client certificate required for A2A endpoints")
uvicorn.run(app, host=host, port=port, log_level="warning", ssl=ssl_context)
print(f" Hermes Web UI → http://{host}:{port}")
uvicorn.run(app, host=host, port=port, log_level="warning")
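The synchronous update pattern in the diff above (capture both streams, merge them, map the exit status to an `ok` flag) can be exercised standalone. This sketch mirrors the `_run_update` helper but runs a harmless stand-in command instead of the real updater; the function name `run_and_summarize` is ours, not part of the diff:

```python
import subprocess
import sys


def run_and_summarize(cmd, timeout=300):
    """Run cmd and return (ok, combined_output), mirroring _run_update."""
    try:
        result = subprocess.run(
            cmd, capture_output=True, text=True, timeout=timeout
        )
        # Merge stdout and stderr so the caller gets one detail string.
        combined = (result.stdout + result.stderr).strip()
        return result.returncode == 0, combined
    except subprocess.TimeoutExpired:
        return False, f"Command timed out after {timeout} seconds"
    except Exception as exc:  # e.g. FileNotFoundError for a missing binary
        return False, str(exc)


# Harmless stand-in for `hermes update --yes`:
ok, detail = run_and_summarize([sys.executable, "-c", "print('updated')"])
```

Returning `(ok, detail)` rather than raising keeps the endpoint's 200-with-`ok:false` contract: transport success and update success stay separate signals.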

View File

@@ -38,7 +38,6 @@ dependencies = [
[project.optional-dependencies]
modal = ["modal>=1.0.0,<2"]
rag = ["lightrag-hku>=1.4.0,<2", "aiohttp>=3.9.0,<4"]
daytona = ["daytona>=0.148.0,<1"]
dev = ["debugpy>=1.8.0,<2", "pytest>=9.0.2,<10", "pytest-asyncio>=1.3.0,<2", "pytest-xdist>=3.0,<4", "mcp>=1.2.0,<2"]
messaging = ["python-telegram-bot[webhooks]>=22.6,<23", "discord.py[voice]>=2.7.1,<3", "aiohttp>=3.13.3,<4", "slack-bolt>=1.18.0,<2", "slack-sdk>=3.27.0,<4"]

View File

@@ -1176,3 +1176,135 @@ class TestStatusRemoteGateway:
assert data["gateway_running"] is True
assert data["gateway_pid"] is None
assert data["gateway_state"] == "running"
# ---------------------------------------------------------------------------
# Action endpoint tests — restart-gateway / update-hermes
# ---------------------------------------------------------------------------
class TestActionEndpoints:
    """Test the /api/actions/* endpoints."""

    @pytest.fixture(autouse=True)
    def _setup_test_client(self):
        try:
            from starlette.testclient import TestClient
        except ImportError:
            pytest.skip("fastapi/starlette not installed")
        from hermes_cli.web_server import app, _SESSION_TOKEN

        self.client = TestClient(app)
        self.client.headers["Authorization"] = f"Bearer {_SESSION_TOKEN}"

    # ── restart-gateway ────────────────────────────────────────────────────

    def test_restart_gateway_sends_sigusr1(self, monkeypatch):
        """POST /api/actions/restart-gateway signals the running PID."""
        killed = {}

        def _fake_kill(pid, sig):
            killed["pid"] = pid
            killed["sig"] = sig

        monkeypatch.setattr("gateway.status.get_running_pid", lambda: 12345)
        monkeypatch.setattr("hermes_cli.web_server.os.kill", _fake_kill)
        resp = self.client.post("/api/actions/restart-gateway")
        assert resp.status_code == 200
        data = resp.json()
        assert data["ok"] is True
        assert "12345" in data["detail"]
        assert killed["pid"] == 12345

    def test_restart_gateway_409_when_not_running(self, monkeypatch):
        """POST /api/actions/restart-gateway returns 409 when gateway is not running."""
        monkeypatch.setattr("gateway.status.get_running_pid", lambda: None)
        resp = self.client.post("/api/actions/restart-gateway")
        assert resp.status_code == 409

    def test_restart_gateway_500_on_signal_error(self, monkeypatch):
        """POST /api/actions/restart-gateway returns 500 when the signal fails."""
        monkeypatch.setattr("gateway.status.get_running_pid", lambda: 99999)
        monkeypatch.setattr("hermes_cli.web_server.os.kill", lambda pid, sig: (_ for _ in ()).throw(ProcessLookupError("no such process")))
        resp = self.client.post("/api/actions/restart-gateway")
        assert resp.status_code == 500
        assert "Failed to signal" in resp.json()["detail"]

    # ── update-hermes ──────────────────────────────────────────────────────

    def test_update_hermes_success(self, monkeypatch):
        """POST /api/actions/update-hermes returns ok=true on zero exit."""

        class _FakeResult:
            returncode = 0
            stdout = "Already up to date.\n"
            stderr = ""

        def _fake_run(cmd, **kwargs):
            assert "--yes" in cmd
            return _FakeResult()

        monkeypatch.setattr("subprocess.run", _fake_run)
        resp = self.client.post("/api/actions/update-hermes")
        assert resp.status_code == 200
        data = resp.json()
        assert data["ok"] is True
        assert "Already up to date" in data["detail"]

    def test_update_hermes_failure_on_nonzero_exit(self, monkeypatch):
        """POST /api/actions/update-hermes returns ok=false on non-zero exit."""

        class _FakeResult:
            returncode = 1
            stdout = ""
            stderr = "error: update failed\n"

        monkeypatch.setattr("subprocess.run", lambda cmd, **kw: _FakeResult())
        resp = self.client.post("/api/actions/update-hermes")
        assert resp.status_code == 200
        data = resp.json()
        assert data["ok"] is False
        assert "error: update failed" in data["detail"]

    def test_update_hermes_timeout(self, monkeypatch):
        """POST /api/actions/update-hermes returns ok=false on timeout."""
        import subprocess

        def _fake_run(cmd, **kwargs):
            raise subprocess.TimeoutExpired(cmd, 300)

        monkeypatch.setattr("subprocess.run", _fake_run)
        resp = self.client.post("/api/actions/update-hermes")
        assert resp.status_code == 200
        data = resp.json()
        assert data["ok"] is False
        assert "timed out" in data["detail"].lower()

    def test_action_endpoints_require_auth(self):
        """Action endpoints reject requests without a valid Bearer token."""
        try:
            from starlette.testclient import TestClient
        except ImportError:
            pytest.skip("fastapi/starlette not installed")
        from hermes_cli.web_server import app

        unauthed = TestClient(app)
        for path in ["/api/actions/restart-gateway", "/api/actions/update-hermes"]:
            resp = unauthed.post(path)
            assert resp.status_code in (401, 403), f"{path} should require auth"
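The signal-error test above monkeypatches `os.kill` with a lambda that raises via the generator-`throw` trick, since a plain lambda body cannot contain `raise`. A named helper (a sketch, not part of the diff) is equivalent and easier to read:

```python
def make_raiser(exc):
    """Return a callable that raises exc when invoked — a readable
    alternative to the `lambda *a: (_ for _ in ()).throw(exc)` idiom."""
    def _raise(*args, **kwargs):
        raise exc
    return _raise


# Equivalent to the monkeypatched lambda in the test above:
fake_kill = make_raiser(ProcessLookupError("no such process"))
```

Either form works with `monkeypatch.setattr`; the helper just keeps the raising behavior out of an expression position.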

View File

@@ -1,176 +0,0 @@
"""Tests for tools/lightrag_tool.py"""
import json
import sys
from pathlib import Path
from unittest.mock import MagicMock, patch
import pytest
# LightRAG may not be installed in all test environments
pytest.importorskip("lightrag", reason="lightrag-hku not installed")
from tools.lightrag_tool import (
check_lightrag_requirements,
lightrag_index,
lightrag_query,
_collect_markdown_files,
_read_text_safe,
LIGHTRAG_DIR,
)
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def _parse_result(result: str) -> dict:
"""Parse JSON tool result, falling back to error string detection."""
try:
return json.loads(result)
except json.JSONDecodeError:
return {"_error": result}
# ---------------------------------------------------------------------------
# Unit tests
# ---------------------------------------------------------------------------
class TestCollectMarkdownFiles:
def test_collects_md_files(self, tmp_path):
(tmp_path / "a.md").write_text("# A")
(tmp_path / "b.md").write_text("# B")
(tmp_path / "skip.txt").write_text("text")
found = _collect_markdown_files(tmp_path)
assert len(found) == 2
assert all(p.suffix == ".md" for p in found)
def test_skips_hidden_dirs(self, tmp_path):
(tmp_path / ".git").mkdir()
(tmp_path / ".git" / "readme.md").write_text("# git")
(tmp_path / "visible.md").write_text("# visible")
found = _collect_markdown_files(tmp_path)
names = [p.name for p in found]
assert "visible.md" in names
assert "readme.md" not in names
def test_returns_empty_for_missing_dir(self):
assert _collect_markdown_files(Path("/nonexistent")) == []
class TestReadTextSafe:
def test_reads_small_file(self, tmp_path):
p = tmp_path / "test.md"
p.write_text("hello world")
assert _read_text_safe(p) == "hello world"
def test_truncates_large_file(self, tmp_path):
p = tmp_path / "big.md"
p.write_text("x" * 1_000_000)
text = _read_text_safe(p, limit=500_000)
assert len(text) == 500_000
def test_reads_binary_without_crashing(self, tmp_path):
p = tmp_path / "binary.md"
p.write_bytes(b"\x00\x01\x02")
result = _read_text_safe(p)
# Should not crash; control chars 0x00-0x7F are valid UTF-8
assert isinstance(result, str)
class TestCheckRequirements:
@patch("tools.lightrag_tool._ollama_available", return_value=True)
def test_ok_when_ollama_up(self, mock_ollama):
assert check_lightrag_requirements() is True
@patch("tools.lightrag_tool._ollama_available", return_value=False)
def test_false_when_ollama_down(self, mock_ollama):
assert check_lightrag_requirements() is False
@patch.dict(sys.modules, {"lightrag": None}, clear=False)
def test_false_when_lightrag_missing(self):
with patch("tools.lightrag_tool._ollama_available", return_value=True):
# Force ImportError by removing lightrag from sys.modules
# and blocking import
assert check_lightrag_requirements() is False
class TestLightragIndex:
@patch("tools.lightrag_tool._ollama_available", return_value=False)
def test_error_when_ollama_down(self, mock_ollama):
result = lightrag_index()
assert "Ollama is not running" in result
@patch("tools.lightrag_tool._ollama_available", return_value=True)
@patch("tools.lightrag_tool._has_ollama_model", return_value=False)
def test_error_when_model_missing(self, mock_model, mock_ollama):
result = lightrag_index()
assert "not found in Ollama" in result
@patch("tools.lightrag_tool._ollama_available", return_value=True)
@patch("tools.lightrag_tool._has_ollama_model", return_value=True)
@patch("tools.lightrag_tool._get_lightrag")
@patch("tools.lightrag_tool._collect_markdown_files", return_value=[])
def test_warning_when_no_files(self, mock_collect, mock_get_rag, mock_model, mock_ollama):
result = lightrag_index()
data = _parse_result(result)
assert data.get("status") == "warning"
assert "No markdown files found" in data.get("message", "")
@patch("tools.lightrag_tool._ollama_available", return_value=True)
@patch("tools.lightrag_tool._has_ollama_model", return_value=True)
@patch("tools.lightrag_tool._get_lightrag")
@patch("tools.lightrag_tool._collect_markdown_files")
@patch("tools.lightrag_tool._read_text_safe", return_value="# Skill doc\nContent.")
@patch("asyncio.run")
def test_indexes_files(self, mock_asyncio, mock_read, mock_collect, mock_get_rag, mock_model, mock_ollama):
mock_collect.return_value = [Path("/fake/skills/git.md"), Path("/fake/skills/docker.md")]
mock_rag = MagicMock()
mock_get_rag.return_value = mock_rag
result = lightrag_index()
data = _parse_result(result)
assert data.get("status") == "ok"
assert data.get("indexed_files") == 2
assert data.get("errors") == 0
class TestLightragQuery:
@patch("tools.lightrag_tool._ollama_available", return_value=False)
def test_error_when_ollama_down(self, mock_ollama):
result = lightrag_query("test", mode="hybrid")
assert "Ollama is not running" in result
@patch("tools.lightrag_tool._ollama_available", return_value=True)
@patch("tools.lightrag_tool.LIGHTRAG_DIR")
def test_empty_index_message(self, mock_dir, mock_ollama):
mock_dir.exists.return_value = True
mock_dir.iterdir.return_value = iter([])
result = lightrag_query("test", mode="hybrid")
data = _parse_result(result)
assert data.get("status") == "empty"
@patch("tools.lightrag_tool._ollama_available", return_value=True)
@patch("tools.lightrag_tool.LIGHTRAG_DIR")
@patch("tools.lightrag_tool._get_lightrag")
@patch("asyncio.run", return_value="Use git clone for repos.")
def test_query_returns_answer(self, mock_asyncio, mock_get_rag, mock_dir, mock_ollama):
mock_dir.exists.return_value = True
mock_dir.iterdir.return_value = iter([Path("dummy")])
mock_rag = MagicMock()
mock_get_rag.return_value = mock_rag
result = lightrag_query("How do I clone a repo?", mode="hybrid")
data = _parse_result(result)
assert data.get("status") == "ok"
assert data.get("mode") == "hybrid"
assert "clone" in data.get("answer", "").lower()
@patch("tools.lightrag_tool._ollama_available", return_value=True)
def test_rejects_invalid_mode(self, mock_ollama):
result = lightrag_query("test", mode="invalid")
assert "mode must be one of" in result
def test_rejects_empty_query(self):
result = lightrag_query("", mode="hybrid")
assert "Query cannot be empty" in result

View File

@@ -1,405 +0,0 @@
#!/usr/bin/env python3
"""
LightRAG Tool — Graph-based knowledge retrieval for skills and docs.
Indexes markdown files under ~/.hermes/skills/ (and optional extra dirs)
into a LightRAG knowledge graph stored at ~/.hermes/lightrag/.
Requires:
- lightrag-hku (pip install lightrag-hku)
- Ollama running locally with an embedding model (default: nomic-embed-text)
- Ollama running locally with a chat model (default: qwen2.5:7b)
Usage:
lightrag_query("How do I dispatch the burn fleet?", mode="hybrid")
lightrag_index() # re-index skill files
"""
import asyncio
import json
import logging
import os
from pathlib import Path
from typing import Dict, List, Optional
import numpy as np
from hermes_constants import get_hermes_home
from tools.registry import registry, tool_error
logger = logging.getLogger(__name__)
# ---------------------------------------------------------------------------
# Config
# ---------------------------------------------------------------------------
DEFAULT_EMBED_MODEL = os.environ.get("LIGHTRAG_EMBED_MODEL", "nomic-embed-text")
DEFAULT_LLM_MODEL = os.environ.get("LIGHTRAG_LLM_MODEL", "qwen2.5:7b")
DEFAULT_OLLAMA_HOST = os.environ.get("LIGHTRAG_OLLAMA_HOST", "http://localhost:11434")
LIGHTRAG_DIR = get_hermes_home() / "lightrag"
SKILLS_DIR = get_hermes_home() / "skills"
# ---------------------------------------------------------------------------
# Ollama helpers
# ---------------------------------------------------------------------------
def _ollama_available() -> bool:
"""Check if Ollama server is reachable."""
try:
import urllib.request
req = urllib.request.Request(f"{DEFAULT_OLLAMA_HOST}/api/tags")
with urllib.request.urlopen(req, timeout=3) as resp:
return resp.status == 200
except Exception:
return False
def _has_ollama_model(model_name: str) -> bool:
"""Check if a specific model is pulled in Ollama."""
try:
import urllib.request
req = urllib.request.Request(f"{DEFAULT_OLLAMA_HOST}/api/tags")
with urllib.request.urlopen(req, timeout=3) as resp:
data = json.loads(resp.read())
models = [m["name"] for m in data.get("models", [])]
return any(model_name in m for m in models)
except Exception:
return False
async def _ollama_embedding(texts: list, **kwargs) -> np.ndarray:
"""Call Ollama embeddings API."""
import aiohttp
payload = {
"model": DEFAULT_EMBED_MODEL,
"input": texts,
}
async with aiohttp.ClientSession() as session:
async with session.post(
f"{DEFAULT_OLLAMA_HOST}/api/embed",
json=payload,
timeout=aiohttp.ClientTimeout(total=60),
) as resp:
resp.raise_for_status()
data = await resp.json()
embeddings = data.get("embeddings", [])
if not embeddings:
raise RuntimeError("Ollama returned empty embeddings")
return np.array(embeddings, dtype=np.float32)
async def _ollama_complete(
prompt, system_prompt=None, history_messages=None, **kwargs
) -> str:
"""Call Ollama generate API for LLM completion."""
import aiohttp
messages = []
if system_prompt:
messages.append({"role": "system", "content": system_prompt})
if history_messages:
for msg in history_messages:
role = "user" if msg.get("role") == "user" else "assistant"
messages.append({"role": role, "content": msg.get("content", "")})
messages.append({"role": "user", "content": prompt})
payload = {
"model": DEFAULT_LLM_MODEL,
"messages": messages,
"stream": False,
"options": {"temperature": 0.3, "num_predict": 2048},
}
async with aiohttp.ClientSession() as session:
async with session.post(
f"{DEFAULT_OLLAMA_HOST}/api/chat",
json=payload,
timeout=aiohttp.ClientTimeout(total=120),
) as resp:
resp.raise_for_status()
data = await resp.json()
return data.get("message", {}).get("content", "")
# ---------------------------------------------------------------------------
# LightRAG setup
# ---------------------------------------------------------------------------
_lightrag_instance: Optional[object] = None
def _get_lightrag() -> object:
"""Lazy-initialize LightRAG with Ollama backends."""
global _lightrag_instance
if _lightrag_instance is not None:
return _lightrag_instance
try:
from lightrag import LightRAG, QueryParam
from lightrag.utils import EmbeddingFunc
except ImportError as e:
raise RuntimeError(
"lightrag is not installed. Run: pip install lightrag-hku"
) from e
LIGHTRAG_DIR.mkdir(parents=True, exist_ok=True)
# Wrap Ollama embedding for LightRAG
embed_func = EmbeddingFunc(
embedding_dim=768, # nomic-embed-text dimension
func=_ollama_embedding,
max_token_size=8192,
model_name=DEFAULT_EMBED_MODEL,
)
_lightrag_instance = LightRAG(
working_dir=str(LIGHTRAG_DIR),
embedding_func=embed_func,
llm_model_func=_ollama_complete,
llm_model_name=DEFAULT_LLM_MODEL,
chunk_token_size=1200,
chunk_overlap_token_size=100,
)
return _lightrag_instance
# ---------------------------------------------------------------------------
# Indexing
# ---------------------------------------------------------------------------
def _collect_markdown_files(root: Path) -> List[Path]:
"""Collect all .md files under root, excluding node_modules and .git."""
files = []
if not root.exists():
return files
for path in root.rglob("*.md"):
if any(part.startswith(".") or part == "node_modules" for part in path.parts):
continue
files.append(path)
return sorted(files)
def _read_text_safe(path: Path, limit: int = 500_000) -> str:
"""Read file text with size limit."""
try:
stat = path.stat()
if stat.st_size > limit:
return path.read_text(encoding="utf-8", errors="ignore")[:limit]
return path.read_text(encoding="utf-8", errors="ignore")
except Exception as e:
logger.warning("Failed to read %s: %s", path, e)
return ""
def lightrag_index(directories: Optional[List[str]] = None) -> str:
"""Index markdown files into LightRAG knowledge graph.
Args:
directories: Extra directories to index (in addition to ~/.hermes/skills/).
"""
if not _ollama_available():
return tool_error(
"Ollama is not running. Start it with: ollama serve"
)
if not _has_ollama_model(DEFAULT_EMBED_MODEL):
return tool_error(
f"Embedding model '{DEFAULT_EMBED_MODEL}' not found in Ollama. "
f"Pull it with: ollama pull {DEFAULT_EMBED_MODEL}"
)
if not _has_ollama_model(DEFAULT_LLM_MODEL):
return tool_error(
f"LLM model '{DEFAULT_LLM_MODEL}' not found in Ollama. "
f"Pull it with: ollama pull {DEFAULT_LLM_MODEL}"
)
rag = _get_lightrag()
dirs = [SKILLS_DIR]
if directories:
for d in directories:
p = Path(d).expanduser()
if p.exists():
dirs.append(p)
all_files = []
for d in dirs:
all_files.extend(_collect_markdown_files(d))
if not all_files:
return json.dumps({
"status": "warning",
"message": "No markdown files found to index.",
"directories": [str(d) for d in dirs],
})
# Read and insert files
inserted = 0
errors = 0
for path in all_files:
text = _read_text_safe(path)
if not text.strip():
continue
try:
# LightRAG insert is async; bridge it
asyncio.run(rag.atext(text))
inserted += 1
except Exception as e:
logger.warning("Failed to index %s: %s", path, e)
errors += 1
return json.dumps({
"status": "ok",
"indexed_files": inserted,
"errors": errors,
"total_files": len(all_files),
"storage_dir": str(LIGHTRAG_DIR),
})
# ---------------------------------------------------------------------------
# Query
# ---------------------------------------------------------------------------
def lightrag_query(query: str, mode: str = "hybrid") -> str:
"""Query the LightRAG knowledge graph.
Args:
query: The question or search query.
mode: Search mode — "local" (nearby entities), "global" (graph-wide),
or "hybrid" (both).
"""
if not query or not query.strip():
return tool_error("Query cannot be empty.")
if mode not in {"local", "global", "hybrid"}:
return tool_error("mode must be one of: local, global, hybrid")
if not _ollama_available():
return tool_error(
"Ollama is not running. Start it with: ollama serve"
)
rag = _get_lightrag()
# Check if any data has been indexed
if not LIGHTRAG_DIR.exists() or not any(LIGHTRAG_DIR.iterdir()):
return json.dumps({
"status": "empty",
"message": "LightRAG index is empty. Run lightrag_index() first.",
})
try:
from lightrag import QueryParam
param = QueryParam(mode=mode)
result = asyncio.run(rag.aquery(query, param=param))
return json.dumps({
"status": "ok",
"mode": mode,
"query": query,
"answer": result,
})
except Exception as e:
logger.exception("LightRAG query failed")
return tool_error(f"Query failed: {e}")
# ---------------------------------------------------------------------------
# Tool schemas
# ---------------------------------------------------------------------------
LIGHTRAG_QUERY_SCHEMA = {
"name": "lightrag_query",
"description": (
"Graph-based knowledge retrieval over indexed skills and documentation.\n\n"
"Use this when the user asks about: conventions, workflows, tool usage, "
"project-specific practices, or anything that might be documented in skills.\n\n"
"Modes:\n"
"- local: fast, searches nearby entities in the graph\n"
"- global: thorough, reasons across the entire knowledge graph\n"
"- hybrid: balanced, combines local and global (recommended)\n\n"
"If the index is empty, the tool will report that and you should "
"call lightrag_index() to populate it."
),
"parameters": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "The question or search query.",
},
"mode": {
"type": "string",
"enum": ["local", "global", "hybrid"],
"description": "Search mode. hybrid is recommended.",
},
},
"required": ["query"],
},
}
LIGHTRAG_INDEX_SCHEMA = {
"name": "lightrag_index",
"description": (
"(Re-)build the LightRAG knowledge graph from skill files and docs.\n\n"
"By default indexes ~/.hermes/skills/. Pass extra directories if needed.\n"
"This is a one-time or occasional operation; queries work against the "
"existing index until you re-index."
),
"parameters": {
"type": "object",
"properties": {
"directories": {
"type": "array",
"items": {"type": "string"},
"description": "Optional extra directories to index (in addition to ~/.hermes/skills/).",
},
},
},
}
# ---------------------------------------------------------------------------
# Availability check
# ---------------------------------------------------------------------------
def check_lightrag_requirements() -> bool:
"""Return True if LightRAG and Ollama appear to be available."""
try:
import lightrag # noqa: F401
except ImportError:
return False
return _ollama_available()
# ---------------------------------------------------------------------------
# Registry
# ---------------------------------------------------------------------------
registry.register(
name="lightrag_query",
toolset="rag",
schema=LIGHTRAG_QUERY_SCHEMA,
handler=lambda args, **kw: lightrag_query(
query=args.get("query", ""),
mode=args.get("mode", "hybrid"),
),
check_fn=check_lightrag_requirements,
emoji="🔎",
)
registry.register(
name="lightrag_index",
toolset="rag",
schema=LIGHTRAG_INDEX_SCHEMA,
handler=lambda args, **kw: lightrag_index(
directories=args.get("directories"),
),
check_fn=check_lightrag_requirements,
emoji="📚",
)

View File

@@ -167,12 +167,6 @@ TOOLSETS = {
"tools": ["memory"],
"includes": []
},
"rag": {
"description": "Graph-based knowledge retrieval over indexed skills and docs (LightRAG)",
"tools": ["lightrag_query", "lightrag_index"],
"includes": []
},
"session_search": {
"description": "Search and recall past conversations with summarization",

View File

@@ -86,6 +86,15 @@ export const en: Translations = {
    lastUpdate: "Last update",
    platformError: "error",
    platformDisconnected: "disconnected",
    actions: "Actions",
    restartGateway: "Restart Gateway",
    restarting: "Restarting…",
    restartSuccess: "Gateway restart signal sent",
    restartFailed: "Restart failed",
    updateHermes: "Update Hermes",
    updating: "Updating…",
    updateSuccess: "Update complete",
    updateFailed: "Update failed",
  },
  sessions: {

View File

@@ -89,6 +89,15 @@ export interface Translations {
    lastUpdate: string;
    platformError: string;
    platformDisconnected: string;
    actions: string;
    restartGateway: string;
    restarting: string;
    restartSuccess: string;
    restartFailed: string;
    updateHermes: string;
    updating: string;
    updateSuccess: string;
    updateFailed: string;
  };

  // ── Sessions page ──

View File

@@ -86,6 +86,15 @@ export const zh: Translations = {
    lastUpdate: "最后更新",
    platformError: "错误",
    platformDisconnected: "已断开",
    actions: "操作",
    restartGateway: "重启网关",
    restarting: "重启中…",
    restartSuccess: "重启信号已发送",
    restartFailed: "重启失败",
    updateHermes: "更新 Hermes",
    updating: "更新中…",
    updateSuccess: "更新完成",
    updateFailed: "更新失败",
  },
  sessions: {

View File

@@ -182,6 +182,12 @@ export const api = {
},
);
},
  // Dashboard actions
  restartGateway: () =>
    fetchJSON<ActionResponse>("/api/actions/restart-gateway", { method: "POST" }),
  updateHermes: () =>
    fetchJSON<ActionResponse>("/api/actions/update-hermes", { method: "POST" }),
};
export interface PlatformStatus {
@@ -409,9 +415,15 @@ export interface OAuthSubmitResponse {
message?: string;
}
export interface ActionResponse {
  ok: boolean;
  detail: string;
}
export interface OAuthPollResponse {
session_id: string;
status: "pending" | "approved" | "denied" | "expired" | "error";
error_message?: string | null;
expires_at?: number | null;
}

View File

@@ -1,4 +1,4 @@
import { useEffect, useState } from "react";
import { useEffect, useRef, useState } from "react";
import {
Activity,
AlertTriangle,
@@ -6,19 +6,30 @@ import {
Cpu,
Database,
Radio,
RefreshCw,
TriangleAlert,
Wifi,
WifiOff,
Zap,
} from "lucide-react";
import { api } from "@/lib/api";
import type { PlatformStatus, SessionInfo, StatusResponse } from "@/lib/api";
import { timeAgo, isoTimeAgo } from "@/lib/utils";
import { Button } from "@/components/ui/button";
import { Card, CardContent, CardHeader, CardTitle } from "@/components/ui/card";
import { Badge } from "@/components/ui/badge";
import { useI18n } from "@/i18n";
type ActionState = "idle" | "running" | "success" | "failure";

export default function StatusPage() {
  const [status, setStatus] = useState<StatusResponse | null>(null);
  const [sessions, setSessions] = useState<SessionInfo[]>([]);
  const [restartState, setRestartState] = useState<ActionState>("idle");
  const [restartDetail, setRestartDetail] = useState("");
  const [updateState, setUpdateState] = useState<ActionState>("idle");
  const [updateDetail, setUpdateDetail] = useState("");
  const resetTimers = useRef<Record<string, ReturnType<typeof setTimeout>>>({});
  const { t } = useI18n();

  useEffect(() => {
@@ -31,6 +42,39 @@ export default function StatusPage() {
    return () => clearInterval(interval);
  }, []);

  function scheduleReset(key: string, setter: (s: ActionState) => void) {
    clearTimeout(resetTimers.current[key]);
    resetTimers.current[key] = setTimeout(() => setter("idle"), 8000);
  }

  async function handleRestartGateway() {
    setRestartState("running");
    setRestartDetail("");
    try {
      const resp = await api.restartGateway();
      setRestartState(resp.ok ? "success" : "failure");
      setRestartDetail(resp.detail);
    } catch (err: unknown) {
      setRestartState("failure");
      setRestartDetail(err instanceof Error ? err.message : String(err));
    }
    scheduleReset("restart", setRestartState);
  }

  async function handleUpdateHermes() {
    setUpdateState("running");
    setUpdateDetail("");
    try {
      const resp = await api.updateHermes();
      setUpdateState(resp.ok ? "success" : "failure");
      setUpdateDetail(resp.detail);
    } catch (err: unknown) {
      setUpdateState("failure");
      setUpdateDetail(err instanceof Error ? err.message : String(err));
    }
    scheduleReset("update", setUpdateState);
  }
if (!status) {
return (
<div className="flex items-center justify-center py-24">
@@ -159,6 +203,57 @@ export default function StatusPage() {
))}
</div>
{/* Action buttons — restart gateway / update Hermes */}
<Card>
  <CardHeader>
    <div className="flex items-center gap-2">
      <Zap className="h-5 w-5 text-muted-foreground" />
      <CardTitle className="text-base">{t.status.actions}</CardTitle>
    </div>
  </CardHeader>
  <CardContent className="flex flex-wrap gap-3">
    {/* Restart Gateway */}
    <div className="flex flex-col gap-1">
      <Button
        variant="outline"
        size="sm"
        disabled={restartState === "running"}
        onClick={handleRestartGateway}
      >
        <RefreshCw className={`h-3.5 w-3.5 mr-1 ${restartState === "running" ? "animate-spin" : ""}`} />
        {restartState === "running" ? t.status.restarting : t.status.restartGateway}
      </Button>
      {(restartDetail || restartState === "success") && (
        <p className={`text-xs max-w-xs truncate ${restartState === "failure" ? "text-destructive" : "text-muted-foreground"}`}>
          {restartState === "failure" && <TriangleAlert className="inline h-3 w-3 mr-1" />}
          {restartState === "success" ? t.status.restartSuccess : restartState === "failure" ? t.status.restartFailed : ""}
          {restartDetail && `${restartDetail}`}
        </p>
      )}
    </div>
    {/* Update Hermes */}
    <div className="flex flex-col gap-1">
      <Button
        variant="outline"
        size="sm"
        disabled={updateState === "running"}
        onClick={handleUpdateHermes}
      >
        <RefreshCw className={`h-3.5 w-3.5 mr-1 ${updateState === "running" ? "animate-spin" : ""}`} />
        {updateState === "running" ? t.status.updating : t.status.updateHermes}
      </Button>
      {(updateDetail || updateState === "success" || updateState === "failure") && (
        <p className={`text-xs max-w-xs ${updateState === "failure" ? "text-destructive" : "text-muted-foreground"}`}>
          {updateState === "failure" && <TriangleAlert className="inline h-3 w-3 mr-1" />}
          {updateState === "success" ? t.status.updateSuccess : updateState === "failure" ? t.status.updateFailed : ""}
          {updateDetail && `${updateDetail}`}
        </p>
      )}
    </div>
  </CardContent>
</Card>
{platforms.length > 0 && (
<PlatformsCard platforms={platforms} platformStateBadge={PLATFORM_STATE_BADGE} />
)}