Compare commits

13 commits: perplexity...burn/579-1

| SHA1 |
|------|
| 1bb9ea9cdd |
| c64eb5e571 |
| c73dc96d70 |
| 07a9b91a6f |
| 9becaa65e7 |
| b51a27ff22 |
| 8e91e114e6 |
| cb95b2567c |
| dcf97b5d8f |
| 4beae6e6c6 |
| 9aaabb7d37 |
| ac812179bf |
| 0cc91443ab |
@@ -20,5 +20,5 @@ jobs:
           echo "PASS: All files parse"
       - name: Secret scan
         run: |
-          if grep -rE 'sk-or-|sk-ant-|ghp_|AKIA' . --include='*.yml' --include='*.py' --include='*.sh' 2>/dev/null | grep -v .gitea; then exit 1; fi
+          if grep -rE 'sk-or-|sk-ant-|ghp_|AKIA' . --include='*.yml' --include='*.py' --include='*.sh' 2>/dev/null | grep -v '.gitea' | grep -v 'detect_secrets' | grep -v 'test_trajectory_sanitize'; then exit 1; fi
           echo "PASS: No secrets"
@@ -209,7 +209,7 @@ skills:
 #
 # fallback_model:
 #   provider: openrouter
-#   model: anthropic/claude-sonnet-4
+#   model: google/gemini-2.5-pro  # was anthropic/claude-sonnet-4 — BANNED
 #
 # ── Smart Model Routing ────────────────────────────────────────────────
 # Optional cheap-vs-strong routing for simple turns.
docs/HERMES_MAXI_MANIFESTO.md (new file, 75 lines)
@@ -0,0 +1,75 @@
# Hermes Maxi Manifesto

_Adopted 2026-04-12. This document is the canonical statement of the Timmy Foundation's infrastructure philosophy._

## The Decision

We are Hermes maxis. One harness. One truth. No intermediary gateway layers.

Hermes handles everything:
- **Cognitive core** — reasoning, planning, tool use
- **Channels** — Telegram, Discord, Nostr, Matrix (direct, not via gateway)
- **Dispatch** — task routing, agent coordination, swarm management
- **Memory** — MemPalace, sovereign SQLite+FTS5 store, trajectory export
- **Cron** — heartbeat, morning reports, nightly retros
- **Health** — process monitoring, fleet status, self-healing

## What This Replaces

OpenClaw was evaluated as a gateway layer (March–April 2026). The assessment:

| Capability | OpenClaw | Hermes Native |
|-----------|----------|---------------|
| Multi-channel comms | Built-in | Direct integration per channel |
| Persistent memory | SQLite (basic) | MemPalace + FTS5 + trajectory export |
| Cron/scheduling | Native cron | Huey task queue + launchd |
| Multi-agent sessions | Session routing | Wizard fleet + dispatch router |
| Procedural memory | None | Sovereign Memory Store |
| Model sovereignty | Requires external provider | Ollama local-first |
| Identity | Configurable persona | SOUL.md + Bitcoin inscription |

The governance concern (founder joined OpenAI, Feb 2026) sealed the decision, but the technical case was already clear: OpenClaw adds a layer without adding any capability that Hermes doesn't already have or couldn't build natively.

## The Principle

Every external dependency is temporary falsework. If it can be built locally, it must be built locally. The target is a $0 cloud bill with full operational capability.

This applies to:
- **Agent harness** — Hermes, not OpenClaw/Claude Code/Cursor
- **Inference** — Ollama + local models, not cloud APIs
- **Data** — SQLite + FTS5, not managed databases
- **Hosting** — Hermes VPS + Mac M3 Max, not cloud platforms
- **Identity** — Bitcoin inscription + SOUL.md, not OAuth providers

## Exceptions

Cloud services are permitted as temporary scaffolding when:
1. The local alternative doesn't exist yet
2. There's a concrete plan (with a Gitea issue) to bring it local
3. The dependency is isolated and can be swapped without architectural changes

Every cloud dependency must have a `[FALSEWORK]` label in the issue tracker.

## Enforcement

- `BANNED_PROVIDERS.md` lists permanently banned providers (Anthropic)
- Pre-commit hooks scan for banned provider references (a minimal sketch follows this list)
- The Swarm Governor enforces PR discipline
- The Conflict Detector catches sibling collisions
- All of these are stdlib-only Python with zero external dependencies
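A minimal sketch of what such a pre-commit scan can look like, assuming a stdlib-only implementation; the patterns, file extensions, and skip list here are illustrative, not the actual `banned_provider_scan.py`:

```python
# Hypothetical banned-provider scan (stdlib only). Patterns are assumptions.
import pathlib
import re
import sys

BANNED = re.compile(r"anthropic|claude-", re.IGNORECASE)
SKIP_PARTS = {".git", ".gitea"}
EXTENSIONS = {".py", ".yml", ".yaml", ".sh", ".md"}

def scan(root: str = ".") -> int:
    hits = 0
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file() or SKIP_PARTS & set(path.parts):
            continue
        if path.suffix not in EXTENSIONS:
            continue
        for lineno, line in enumerate(
            path.read_text(errors="ignore").splitlines(), start=1
        ):
            if BANNED.search(line):
                print(f"{path}:{lineno}: {line.strip()}")
                hits += 1
    return hits

if __name__ == "__main__":
    sys.exit(1 if scan() else 0)  # nonzero exit blocks the commit
```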
## History

- 2026-03-28: OpenClaw evaluation spike filed (timmy-home #19)
- 2026-03-28: OpenClaw Bootstrap epic created (timmy-config #51–#63)
- 2026-03-28: Governance concern flagged (founder → OpenAI)
- 2026-04-09: Anthropic banned (timmy-config PR #440)
- 2026-04-12: OpenClaw purged — Hermes maxi directive adopted
  - timmy-config PR #487 (7 files, merged)
  - timmy-home PR #595 (3 files, merged)
  - the-nexus PRs #1278, #1279 (merged)
  - 2 issues closed, 27 historical issues preserved

---

_"The clean pattern is to separate identity, routing, live task state, durable memory, reusable procedure, and artifact truth. Hermes does all six."_
docs/RUNBOOK_INDEX.md (new file, 70 lines)
@@ -0,0 +1,70 @@
# Operational Runbook Index

Last updated: 2026-04-13

Quick-reference index for common operational tasks across the Timmy Foundation infrastructure.

## Fleet Operations

| Task | Location | Command/Procedure |
|------|----------|-------------------|
| Deploy fleet update | fleet-ops | `ansible-playbook playbooks/provision_and_deploy.yml --ask-vault-pass` |
| Check fleet health | fleet-ops | `python3 scripts/fleet_readiness.py` |
| Agent scorecard | fleet-ops | `python3 scripts/agent_scorecard.py` |
| View fleet manifest | fleet-ops | `cat manifest.yaml` |

## the-nexus (Frontend + Brain)

| Task | Location | Command/Procedure |
|------|----------|-------------------|
| Run tests | the-nexus | `pytest tests/` |
| Validate repo integrity | the-nexus | `python3 scripts/repo_truth_guard.py` |
| Check swarm governor | the-nexus | `python3 bin/swarm_governor.py --status` |
| Start dev server | the-nexus | `python3 server.py` |
| Run deep dive pipeline | the-nexus | `cd intelligence/deepdive && python3 pipeline.py` |

## timmy-config (Control Plane)

| Task | Location | Command/Procedure |
|------|----------|-------------------|
| Run Ansible deploy | timmy-config | `cd ansible && ansible-playbook playbooks/site.yml` |
| Scan for banned providers | timmy-config | `python3 bin/banned_provider_scan.py` |
| Check merge conflicts | timmy-config | `python3 bin/conflict_detector.py` |
| Muda audit | timmy-config | `bash fleet/muda-audit.sh` |

## hermes-agent (Agent Framework)

| Task | Location | Command/Procedure |
|------|----------|-------------------|
| Start agent | hermes-agent | `python3 run_agent.py` |
| Check provider allowlist | hermes-agent | `python3 tools/provider_allowlist.py --check` |
| Run test suite | hermes-agent | `pytest` |

## Incident Response

### Agent Down
1. Check health endpoint: `curl http://<host>:<port>/health`
2. Check systemd: `systemctl status hermes-<agent>`
3. Check logs: `journalctl -u hermes-<agent> --since "1 hour ago"`
4. Restart: `systemctl restart hermes-<agent>`

### Banned Provider Detected
1. Run scanner: `python3 bin/banned_provider_scan.py`
2. Check golden state: `cat ansible/inventory/group_vars/wizards.yml`
3. Verify BANNED_PROVIDERS.yml is current
4. Fix config and redeploy

### Merge Conflict Cascade
1. Run conflict detector: `python3 bin/conflict_detector.py`
2. Rebase oldest conflicting PR first
3. Merge, then repeat — cascade resolves naturally

## Key Files

| File | Repo | Purpose |
|------|------|---------|
| `manifest.yaml` | fleet-ops | Fleet service definitions |
| `config.yaml` | timmy-config | Agent runtime config |
| `ansible/BANNED_PROVIDERS.yml` | timmy-config | Provider ban enforcement |
| `portals.json` | the-nexus | Portal registry |
| `vision.json` | the-nexus | Vision system config |
docs/WASTE_AUDIT_2026-04-13.md (new file, 94 lines)
@@ -0,0 +1,94 @@
# Waste Audit — 2026-04-13

Author: perplexity (automated review agent)
Scope: All Timmy Foundation repos, PRs from April 12–13, 2026

## Purpose

This audit identifies recurring waste patterns across the foundation's recent PR activity. The goal is to focus agent and contributor effort on high-value work and stop repeating costly mistakes.

## Waste Patterns Identified

### 1. Merging Over "Request Changes" Reviews

**Severity: Critical**

the-door#23 (crisis detection and response system) was merged despite both Rockachopa and Perplexity requesting changes. The blockers included:
- Zero tests for code described as "the most important code in the foundation"
- Non-deterministic `random.choice` in safety-critical response selection
- False-positive risk on common words ("alone", "lost", "down", "tired")
- Early-return logic that loses lower-tier keyword matches

This is safety-critical code that scans for suicide and self-harm signals. Merging untested, non-deterministic code in this domain is the highest-risk misstep the foundation can make. A sketch of the requested fixes follows.
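A minimal sketch, assuming illustrative names rather than the real crisis_detector.py API, of the two fixes reviewers asked for: deterministic selection instead of `random.choice`, and word-boundary matching that does not early-return past lower tiers.

```python
# Illustrative sketch only; tier keywords and function names are assumptions.
import hashlib
import re

TIERS = {
    3: ["suicide", "self-harm"],   # highest severity
    2: ["hopeless"],
    1: ["alone", "lost"],          # common words: boundary-matched, low tier
}

def detect(text: str) -> list[tuple[int, str]]:
    """Return ALL tier matches, highest tier first (no early return)."""
    hits = []
    for tier in sorted(TIERS, reverse=True):
        for word in TIERS[tier]:
            if re.search(rf"\b{re.escape(word)}\b", text, re.IGNORECASE):
                hits.append((tier, word))
    return hits

def pick_response(text: str, responses: list[str]) -> str:
    """Deterministic: the same input always selects the same response."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    return responses[digest[0] % len(responses)]
```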
**Corrective action:** Enforce branch protection requiring at least 1 approval with no outstanding change requests before merge. No exceptions for safety-critical code.

### 2. Mega-PRs That Become Unmergeable

**Severity: High**

hermes-agent#307 accumulated 569 commits, 650 files changed, +75,361/-14,666 lines. It was closed without merge due to 10 conflicting files. The actual feature (profile-scoped cron) was then rescued into a smaller PR (#335).

This pattern wastes reviewer time, creates merge conflicts, and delays feature delivery.

**Corrective action:** PRs must stay under 500 lines changed. If a feature requires more, break it into stacked PRs. Branches older than 3 days without merge should be rebased or split.

### 3. Pervasive CI Failures Ignored

**Severity: High**

Nearly every PR reviewed in the last 24 hours has failing CI (smoke tests, sanity checks, accessibility audits). PRs are being merged despite red CI. This undermines the entire purpose of having CI.

**Corrective action:** CI must pass before merge. If CI is flaky or misconfigured, fix the CI — do not bypass it. The "Create merge commit (When checks succeed)" button exists for a reason.

### 4. Applying Fixes to Wrong Code Locations

**Severity: Medium**

the-beacon#96 fix #3 changed `G.totalClicks++` to `G.totalAutoClicks++` in `writeCode()` (the manual click handler) instead of `autoType()` (the auto-click handler). This inverts the tracking entirely. Rockachopa caught this in review.

This pattern suggests agents are pattern-matching on variable names rather than understanding call-site context.

**Corrective action:** Every bug-fix PR must include the reasoning for WHY the fix is in that specific location. Include a before/after trace showing the bug is actually fixed.

### 5. Duplicated Effort Across Agents

**Severity: Medium**

the-testament#45 was closed with 7 conflicting files and replaced by a rescue PR #46. The original work was largely discarded. Multiple PRs across repos show similar patterns of rework: submit, get changes requested, close, resubmit.

**Corrective action:** Before opening a PR, check whether another agent already has a branch touching the same files. Coordinate via issues, not competing PRs.

### 6. `wip:` Commit Prefixes Shipped to Main

**Severity: Low**

the-door#22 shipped 5 commits all prefixed `wip:` to main. This clutters git history and makes bisecting harder.

**Corrective action:** Squash or rewrite commit messages before merge. No `wip:` prefixes in main-branch history.

## Priority Actions (Ranked)

1. **Immediately add tests to the-door crisis_detector.py and crisis_responder.py** — this code is live on main with zero test coverage and known false-positive issues
2. **Enable branch protection on all repos** — require 1 approval, no outstanding change requests, CI passing
3. **Fix CI across all repos** — smoke tests and sanity checks are failing everywhere; this must be the baseline
4. **Enforce PR size limits** — reject PRs over 500 lines changed at the CI level (a sketch follows this list)
5. **Require bug-fix reasoning** — every fix PR must explain why the change is at that specific location
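A minimal CI-side sketch of action 4; it assumes the job has fetched an `origin/main` base ref, and the 500-line limit comes from section 2:

```python
# Hypothetical CI gate: fail the job when a PR changes more than 500 lines.
import re
import subprocess
import sys

out = subprocess.run(
    ["git", "diff", "--shortstat", "origin/main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout  # e.g. " 3 files changed, 120 insertions(+), 40 deletions(-)"

changed = sum(int(n) for n in re.findall(r"(\d+) (?:insertion|deletion)", out))
if changed > 500:
    sys.exit(f"PR too large: {changed} lines changed (limit 500)")
print(f"PR size OK: {changed} lines changed")
```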
## Metrics

| Metric | Value |
|--------|-------|
| Open PRs reviewed | 6 |
| PRs merged this run | 1 (the-testament#41) |
| PRs blocked | 2 (the-door#22, timmy-config#600) |
| Repos with failing CI | 3+ |
| PRs with zero test coverage | 4+ |
| Estimated rework hours from waste | 20-40h |

## Conclusion

The project is moving fast but bleeding quality. The biggest risk is untested code on main — one bad deploy of crisis_detector.py could cause real harm. The priority actions above are ranked by blast radius. Start at #1 and don't skip ahead.

---

*Generated by Perplexity review sweep, 2026-04-13*
@@ -45,7 +45,8 @@ def append_event(session_id: str, event: dict, base_dir: str | Path = DEFAULT_BA
     path.parent.mkdir(parents=True, exist_ok=True)
     payload = dict(event)
     payload.setdefault("timestamp", datetime.now(timezone.utc).isoformat())
-    # Optimized for <50ms latency\n    with path.open("a", encoding="utf-8", buffering=1024) as f:
+    # Optimized for <50ms latency
+    with path.open("a", encoding="utf-8", buffering=1024) as f:
         f.write(json.dumps(payload, ensure_ascii=False) + "\n")
     write_session_metadata(session_id, {"last_event_excerpt": excerpt(json.dumps(payload, ensure_ascii=False), 400)}, base_dir)
     return path
@@ -1,7 +1,7 @@
 #!/bin/bash
-# Let Gemini-Timmy configure itself as Anthropic fallback.
-# Hermes CLI won't accept --provider custom, so we use hermes setup flow.
-# But first: prove Gemini works, then manually add fallback_model.
+# Configure Gemini 2.5 Pro as fallback provider.
+# Anthropic BANNED per BANNED_PROVIDERS.yml (2026-04-09).
+# Sets up Google Gemini as custom_provider + fallback_model for Hermes.

 # Add Google Gemini as custom_provider + fallback_model in one shot
 python3 << 'PYEOF'
@@ -39,7 +39,7 @@ else:
     with open(config_path, "w") as f:
         yaml.dump(config, f, default_flow_style=False, sort_keys=False)

-print("\nDone. When Anthropic quota exhausts, Hermes will failover to Gemini 2.5 Pro.")
-print("Primary: claude-opus-4-6 (Anthropic)")
-print("Fallback: gemini-2.5-pro (Google AI)")
+print("\nDone. Gemini 2.5 Pro configured as fallback. Anthropic is banned.")
+print("Primary: kimi-k2.5 (Kimi Coding)")
+print("Fallback: gemini-2.5-pro (Google AI via OpenRouter)")
 PYEOF
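For reference, a sketch of what the heredoc effectively does, assuming a config at `~/.hermes/config.yaml`; the path and key names are assumptions, not the verified Hermes schema:

```python
# Hypothetical sketch: load config.yaml, set the Gemini fallback, write back.
from pathlib import Path

import yaml  # PyYAML, as the heredoc itself uses

config_path = Path.home() / ".hermes" / "config.yaml"  # assumed location
config = yaml.safe_load(config_path.read_text()) or {}
config["fallback_model"] = {  # mirrors the commented example in config.yaml
    "provider": "openrouter",
    "model": "google/gemini-2.5-pro",
}
config_path.write_text(
    yaml.dump(config, default_flow_style=False, sort_keys=False)
)
```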
@@ -271,7 +271,7 @@ Period: Last {hours} hours
 {chr(10).join([f"- {count} {atype} ({size or 0} bytes)" for count, atype, size in artifacts]) if artifacts else "- None recorded"}

 ## Recommendations
-{""" + self._generate_recommendations(hb_count, avg_latency, uptime_pct)
+""" + self._generate_recommendations(hb_count, avg_latency, uptime_pct)

         return report
rcas/RCA-579-ezra-bezalel-mention.md (new file, 51 lines)
@@ -0,0 +1,51 @@
# RCA-579: Ezra and Bezalel do not respond to Gitea @mention

**Issue:** timmy-home#579
**Date:** 2026-04-07
**Filed by:** Timmy

## What Broke

Tagging @ezra or @bezalel in a Gitea issue comment produces no response. The agents do not pick up the work or acknowledge the mention.

## Root Causes (two, compounding)

### RC-1: Ezra and Bezalel were not in AGENT_USERS

`~/.hermes/bin/gitea-event-watcher.py` had two sets:

- `KNOWN_AGENTS` — used to *detect* mentions (ezra/bezalel were present)
- `AGENT_USERS` — used to *dispatch* work (ezra/bezalel were missing)

When they were tagged, the watcher saw the mention but had no dispatch handler — the event was silently dropped, as the sketch below illustrates.
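An illustrative reconstruction of the RC-1 failure mode; the set names match the RCA, but the handler logic and `dispatch` helper are assumptions:

```python
# Names match the RCA; the control flow and dispatch() stub are assumptions.
KNOWN_AGENTS = {"kimi", "ezra", "bezalel"}    # used to *detect* mentions
AGENT_USERS = {"kimi": {"vps": False}}        # used to *dispatch*; ezra/bezalel missing

def dispatch(agent: str, config: dict, event: dict) -> None:
    """Stand-in for the real queue writer."""

def handle_mention(agent: str, event: dict) -> None:
    if agent not in KNOWN_AGENTS:
        return                                # not an agent mention
    config = AGENT_USERS.get(agent)
    if config is None:
        return                                # RC-1: mention seen, silently dropped
    dispatch(agent, config, event)
```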
**Status:** FIXED (2026-04-08) — ezra/bezalel added to AGENT_USERS with `"vps": True` markers.

### RC-2: Dispatch queue is Mac-local, VPS agents have no reader

Even after RC-1 was fixed, the dispatch queue (`~/.hermes/burn-logs/dispatch-queue.json`) lives on the Mac. The agent loops that consume this queue (claude-loop.sh, gemini-loop.sh) also run on the Mac. Ezra and Bezalel run `hermes gateway` on separate VPS boxes with no process polling the Mac-local queue.

## Fix

### 1. VPS-native heartbeat (scripts/vps-agent-heartbeat.sh)

New script that runs directly on each VPS agent's box. Polls Gitea for issues/comments mentioning the agent, dispatches locally via `hermes chat`. Follows the proven kimi-heartbeat.sh pattern.

- No SSH tunnel required
- No Mac dependency
- Polls every 5 minutes via crontab
- Tracks processed items to avoid duplicates

### 2. Mac-side VPS dispatch worker (scripts/vps-dispatch-worker.py)

Complementary Mac-side worker that reads the dispatch queue and SSHes work to VPS agents. Lower latency for active sessions when the Mac watcher detects mentions before the VPS heartbeat polls.

### 3. Deployment script (scripts/deploy-vps-heartbeat.sh)

One-command deployment to the Ezra and Bezalel VPS boxes. Copies the heartbeat, configures .env, sets up crontab.

## Verification

1. Tag @ezra on a test issue → response within 15 minutes
2. Tag @bezalel on a test issue → response within 15 minutes
3. Check VPS logs: `ssh root@143.198.27.163 'tail -5 /tmp/vps-heartbeat-ezra.log'`
research/03-rag-vs-context-framework.md (new file, 63 lines)
@@ -0,0 +1,63 @@
# Research: Long Context vs RAG Decision Framework

**Date**: 2026-04-13
**Research Backlog Item**: 4.3 (Impact: 4, Effort: 1, Ratio: 4.0)
**Status**: Complete

## Current State of the Fleet

### Context Windows by Model/Provider

| Model | Context Window | Our Usage |
|-------|---------------|-----------|
| xiaomi/mimo-v2-pro (Nous) | 128K | Primary workhorse (Hermes) |
| gpt-4o (OpenAI) | 128K | Fallback, complex reasoning |
| claude-3.5-sonnet (Anthropic) | 200K | Heavy analysis tasks |
| gemma-3 (local/Ollama) | 8K | Local inference |
| gemma-3-27b (RunPod) | 128K | Sovereign inference |

### How We Currently Inject Context

1. **Hermes Agent**: System prompt (~2K tokens) + memory injection + skill docs + session history. We're doing **hybrid** — the system prompt is stuffed, but past sessions are selectively searched via `session_search`.
2. **Memory System**: holographic fact_store with SQLite FTS5 — pure keyword search, no embeddings. Effectively RAG without the vector part (sketched after this list).
3. **Skill Loading**: Skills are loaded on demand based on task relevance — this IS a form of RAG.
4. **Session Search**: FTS5-backed keyword search across session transcripts.
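A minimal sketch of the keyword-only search in item 2; the table and column names are illustrative, not the real fact_store schema:

```python
# Keyword-only FTS5 lookup. Requires an SQLite build with FTS5 enabled
# (typically the case for CPython's bundled SQLite).
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE facts USING fts5(body)")
db.execute("INSERT INTO facts VALUES ('Ezra runs hermes gateway on a VPS')")

# The exact keyword 'gateway' matches; a paraphrase like 'reverse proxy'
# would not -- the semantic gap the recommendations below call out.
rows = db.execute(
    "SELECT body FROM facts WHERE facts MATCH ? ORDER BY rank", ("gateway",)
).fetchall()
print(rows)  # [('Ezra runs hermes gateway on a VPS',)]
```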
### Analysis: Are We Over-Retrieving?

**YES for some workloads.** Our models support 128K+ context, but:
- Session transcripts are typically 2-8K tokens each
- Memory entries are <500 chars each
- Skills are 1-3K tokens each
- Total typical context: ~8-15K tokens

We could fit 6-16x more context before needing RAG. But stuffing everything in:
- Increases cost (input tokens are billed)
- Increases latency
- Can actually hurt quality (the lost-in-the-middle effect)

### Decision Framework

```
IF task requires factual accuracy from specific sources:
    → Use RAG (retrieve exact docs, cite sources)
ELIF total relevant context < 32K tokens:
    → Stuff it all (simplest, best quality)
ELIF 32K < context < model_limit * 0.5:
    → Hybrid: key docs in context, RAG for rest
ELIF context > model_limit * 0.5:
    → Pure RAG with reranking
```
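The same framework as a runnable sketch; the 32K cutoff and half-the-model-limit fraction come from the pseudocode above, while the function and flag names are illustrative:

```python
# Runnable rendering of the decision framework; names are assumptions.
def context_strategy(needs_cited_sources: bool, relevant_tokens: int,
                     model_limit: int) -> str:
    if needs_cited_sources:
        return "rag"            # retrieve exact docs, cite sources
    if relevant_tokens < 32_000:
        return "stuff"          # simplest, best quality
    if relevant_tokens < model_limit * 0.5:
        return "hybrid"         # key docs in context, RAG for the rest
    return "rag+rerank"         # pure RAG with reranking

assert context_strategy(False, 12_000, 128_000) == "stuff"
assert context_strategy(False, 90_000, 128_000) == "rag+rerank"
```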
### Key Insight: We're Mostly Fine

Our current approach is actually reasonable:
- **Hermes**: System prompt stuffed + selective skill loading + session search = hybrid approach. OK.
- **Memory**: FTS5 keyword search works but lacks semantic understanding. Upgrade candidate.
- **Session recall**: Keyword search is limiting. Embedding-based retrieval would find semantically similar sessions.

### Recommendations (Priority Order)

1. **Keep the current hybrid approach** — it's working well for 90% of tasks
2. **Add semantic search to memory** — replace pure FTS5 with sqlite-vss or similar for the fact_store
3. **Don't stuff sessions** — continue using selective retrieval for session history (saves cost)
4. **Add context budget tracking** — log how many tokens each context injection uses

### Conclusion

We are NOT over-retrieving in most cases. The main improvement opportunity is upgrading memory from keyword search to semantic search, not changing the overall RAG-vs-stuffing strategy.
scripts/deploy-vps-heartbeat.sh (new executable file, 102 lines)
@@ -0,0 +1,102 @@
#!/bin/bash
# deploy-vps-heartbeat.sh — Deploy the VPS agent heartbeat to Ezra and Bezalel VPS boxes.
#
# Usage: bash scripts/deploy-vps-heartbeat.sh [ezra|bezalel|all]
#
# Prerequisites:
#   - SSH access to VPS boxes (key-based)
#   - Gitea tokens on the VPS (passed via env or copied)
#   - hermes installed on the VPS

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
HEARTBEAT_SCRIPT="${SCRIPT_DIR}/vps-agent-heartbeat.sh"

# VPS configurations
declare -A VPS_HOSTS=(
    ["ezra"]="root@143.198.27.163"
    ["bezalel"]="root@159.203.146.185"
)

# Gitea tokens (read from local config)
EZRA_TOKEN_FILE="$HOME/.config/gitea/ezra-token"
BEZALEL_TOKEN_FILE="$HOME/.config/gitea/bezalel-token"
TIMMY_TOKEN_FILE="$HOME/.config/gitea/timmy-token"

TARGET="${1:-all}"

deploy_agent() {
    local agent="$1"
    local host="${VPS_HOSTS[$agent]}"

    echo "=== Deploying heartbeat to ${agent} (${host}) ==="

    # Determine token file
    local token_file=""
    case "$agent" in
        ezra) token_file="$EZRA_TOKEN_FILE" ;;
        bezalel) token_file="$BEZALEL_TOKEN_FILE" ;;
    esac

    # Fall back to timmy token if agent-specific token doesn't exist
    if [ ! -f "$token_file" ]; then
        echo "WARN: ${agent}-specific token not found, using timmy token"
        token_file="$TIMMY_TOKEN_FILE"
    fi

    if [ ! -f "$token_file" ]; then
        echo "ERROR: No Gitea token found for ${agent}"
        return 1
    fi

    local token
    token=$(tr -d '[:space:]' < "$token_file")

    # Ensure target dir exists before copying, then copy heartbeat script
    ssh "$host" "mkdir -p /opt/timmy"
    scp "$HEARTBEAT_SCRIPT" "${host}:/opt/timmy/vps-agent-heartbeat.sh"

    # Create .env file on VPS
    ssh "$host" "cat > /opt/timmy/vps-agent-heartbeat.env" <<EOF
AGENT_NAME=${agent}
GITEA_TOKEN=${token}
GITEA_BASE=https://forge.alexanderwhitestone.com/api/v1
HERMES_BIN=hermes
HERMES_PROFILE=${agent}
MAX_DISPATCH=5
EOF

    # Make script executable
    ssh "$host" "chmod +x /opt/timmy/vps-agent-heartbeat.sh"

    # Set up crontab (every 5 minutes, if not already present)
    ssh "$host" "
        crontab -l 2>/dev/null | grep -v 'vps-agent-heartbeat' > /tmp/crontab.tmp || true
        echo '*/5 * * * * cd /opt/timmy && source vps-agent-heartbeat.env && bash vps-agent-heartbeat.sh >> /tmp/vps-heartbeat-${agent}.log 2>&1' >> /tmp/crontab.tmp
        crontab /tmp/crontab.tmp
        rm /tmp/crontab.tmp
    "

    echo "  ✓ Script deployed to /opt/timmy/vps-agent-heartbeat.sh"
    echo "  ✓ Env configured at /opt/timmy/vps-agent-heartbeat.env"
    echo "  ✓ Crontab set: every 5 minutes"
    echo ""
}

if [ "$TARGET" = "all" ]; then
    for agent in ezra bezalel; do
        deploy_agent "$agent"
    done
elif [ -n "${VPS_HOSTS[$TARGET]+x}" ]; then
    deploy_agent "$TARGET"
else
    echo "Usage: $0 [ezra|bezalel|all]"
    echo "Available: ${!VPS_HOSTS[*]}"
    exit 1
fi

echo "=== Deployment complete ==="
echo ""
echo "Verify with:"
echo "  ssh ${VPS_HOSTS[ezra]} 'cat /tmp/vps-heartbeat-ezra.log | tail -5'"
echo "  ssh ${VPS_HOSTS[bezalel]} 'cat /tmp/vps-heartbeat-bezalel.log | tail -5'"
@@ -108,7 +108,7 @@ async def call_tool(name: str, arguments: dict):
     if name == "bind_session":
         bound = _save_bound_session_id(arguments.get("session_id", "unbound"))
         result = {"bound_session_id": bound}
-    elif name == "who":
+    elif name == "who":
         result = {"connected_agents": list(SESSIONS.keys())}
     elif name == "status":
         result = {"connected_sessions": sorted(SESSIONS.keys()), "bound_session_id": _load_bound_session_id()}
scripts/vps-agent-heartbeat.sh (new executable file, 194 lines)
@@ -0,0 +1,194 @@
#!/bin/bash
# vps-agent-heartbeat.sh — VPS-native Gitea mention/assignment watcher for VPS agents.
#
# Polls Gitea for issues/comments mentioning a specific agent (Ezra, Bezalel, etc.),
# dispatches locally via hermes chat. Follows the kimi-heartbeat.sh pattern.
#
# This solves timmy-home#579: Ezra/Bezalel were detected by the Mac watcher but
# had no VPS-side consumer. This script runs directly on each VPS, polling Gitea
# and dispatching hermes locally — no SSH tunnel, no Mac dependency.
#
# Setup on VPS:
#   1. Copy this script and the .env file to the VPS
#   2. Source .env or set AGENT_NAME, GITEA_TOKEN, GITEA_BASE
#   3. Add to crontab: */5 * * * * /path/to/vps-agent-heartbeat.sh
#
# Config via env vars (or .env file alongside this script):
#   AGENT_NAME     — lowercase agent name (ezra, bezalel)
#   GITEA_TOKEN    — Gitea API token with repo access
#   GITEA_BASE     — Gitea base URL (default: https://forge.alexanderwhitestone.com/api/v1)
#   HERMES_BIN     — path to hermes binary (default: hermes)
#   HERMES_PROFILE — hermes profile to use (default: same as AGENT_NAME)

set -euo pipefail

# --- Config from env ---
AGENT_NAME="${AGENT_NAME:?AGENT_NAME is required}"
GITEA_TOKEN="${GITEA_TOKEN:?GITEA_TOKEN is required}"
GITEA_BASE="${GITEA_BASE:-https://forge.alexanderwhitestone.com/api/v1}"
HERMES_BIN="${HERMES_BIN:-hermes}"
HERMES_PROFILE="${HERMES_PROFILE:-$AGENT_NAME}"

# --- Paths ---
LOG="/tmp/vps-heartbeat-${AGENT_NAME}.log"
LOCKFILE="/tmp/vps-heartbeat-${AGENT_NAME}.lock"
PROCESSED="/tmp/vps-heartbeat-${AGENT_NAME}-processed.txt"
MAX_DISPATCH="${MAX_DISPATCH:-5}"

touch "$PROCESSED"

# --- Repos to watch ---
REPOS=(
    "Timmy_Foundation/timmy-home"
    "Timmy_Foundation/timmy-config"
    "Timmy_Foundation/the-nexus"
    "Timmy_Foundation/hermes-agent"
    "Timmy_Foundation/the-beacon"
)

# --- Helpers ---
log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$AGENT_NAME] $*" | tee -a "$LOG"; }

gitea_api() {
    curl -sf -H "Authorization: token $GITEA_TOKEN" \
        -H "Content-Type: application/json" \
        "${GITEA_BASE}$1" 2>/dev/null
}

is_processed() { grep -qF "$1" "$PROCESSED" 2>/dev/null; }
mark_processed() { echo "$1" >> "$PROCESSED"; }

# Prevent overlapping runs
if [ -f "$LOCKFILE" ]; then
    lock_age=$(( $(date +%s) - $(stat -c %Y "$LOCKFILE" 2>/dev/null || stat -f %m "$LOCKFILE" 2>/dev/null || echo 0) ))
    if [ "$lock_age" -lt 300 ]; then
        log "SKIP: previous run still active (lock age: ${lock_age}s)"
        exit 0
    else
        log "WARN: stale lock (${lock_age}s), removing"
        rm -f "$LOCKFILE"
    fi
fi
trap 'rm -f "$LOCKFILE"' EXIT
touch "$LOCKFILE"

# --- Main ---
dispatched=0
log "Heartbeat starting. Watching ${#REPOS[@]} repos."

for repo in "${REPOS[@]}"; do
    [ "$dispatched" -ge "$MAX_DISPATCH" ] && break

    IFS='/' read -r owner repo_name <<< "$repo"

    # Fetch recent open issues
    issues=$(gitea_api "/repos/${owner}/${repo_name}/issues?state=open&limit=30&sort=recentupdate") || continue
    [ -z "$issues" ] || [ "$issues" = "null" ] && continue

    # Feed the loop via process substitution (not a pipe) so $dispatched
    # updates survive the loop instead of dying in a subshell.
    while IFS='|' read -r issue_num assignee title updated; do
        [ "$dispatched" -ge "$MAX_DISPATCH" ] && break
        [ -z "$issue_num" ] && continue

        # Check if this issue mentions or is assigned to us
        mention_key="${repo}#${issue_num}"
        is_assigned=false
        is_mentioned=false

        if [ "$assignee" = "$AGENT_NAME" ]; then
            is_assigned=true
        fi

        # Check comments for @mention
        comments=$(gitea_api "/repos/${owner}/${repo_name}/issues/${issue_num}/comments?limit=10&sort=created") || continue
        mention_found=$(echo "$comments" | python3 -c "
import json, sys
agent = '${AGENT_NAME}'
comments = json.load(sys.stdin)
for c in comments:
    body = (c.get('body', '') or '').lower()
    commenter = (c.get('user') or {}).get('login', '').lower()
    cid = c.get('id', 0)
    if f'@{agent}' in body and commenter != agent:
        print(f'{cid}')
        break
" 2>/dev/null || echo "")

        if [ -n "$mention_found" ]; then
            mention_key="${mention_key}/comment-${mention_found}"
            is_mentioned=true
        fi

        # Skip if already processed
        if is_processed "$mention_key"; then
            continue
        fi

        # Skip if neither assigned nor mentioned
        if [ "$is_assigned" = false ] && [ "$is_mentioned" = false ]; then
            continue
        fi

        # Build context for hermes
        log "FOUND: ${repo}#${issue_num} — ${title} (assigned=$is_assigned, mentioned=$is_mentioned)"

        # Fetch issue body
        issue_detail=$(gitea_api "/repos/${owner}/${repo_name}/issues/${issue_num}") || continue
        issue_body=$(echo "$issue_detail" | python3 -c "import json,sys; print(json.load(sys.stdin).get('body','')[:2000])" 2>/dev/null || echo "")

        # Fetch recent comment context
        comment_context=$(echo "$comments" | python3 -c "
import json, sys
agent = '${AGENT_NAME}'
comments = json.load(sys.stdin)
for c in reversed(comments):
    body = c.get('body', '') or ''
    commenter = (c.get('user') or {}).get('login', 'unknown')
    if f'@{agent}' in body.lower():
        print(f'--- Comment by @{commenter} ---')
        print(body[:1000])
        break
" 2>/dev/null || echo "")

        # Build the hermes prompt
        prompt="You are ${AGENT_NAME^} on the Timmy Foundation. A Gitea issue needs your attention.

REPO: ${repo}
ISSUE: #${issue_num} — ${title}

ISSUE BODY:
${issue_body}

MENTION CONTEXT:
${comment_context:-No specific mention context.}

YOUR TASK:
Respond to this issue. If someone mentioned you, acknowledge the mention and address what they asked.
If the issue is assigned to you, work on it — read the body, implement what's needed, and push changes.
Post your response as a comment on the issue via Gitea API.
Gitea: ${GITEA_BASE}, token from environment."

        # Dispatch via hermes chat
        log "DISPATCHING: hermes chat (profile=$HERMES_PROFILE) for ${repo}#${issue_num}"
        if command -v "$HERMES_BIN" &>/dev/null; then
            echo "$prompt" | timeout 600 "$HERMES_BIN" chat --profile "$HERMES_PROFILE" --stdin > "/tmp/vps-dispatch-${AGENT_NAME}-${issue_num}.log" 2>&1 &
            dispatched=$((dispatched + 1))
            mark_processed "$mention_key"
            log "DISPATCHED: ${repo}#${issue_num} (${dispatched}/${MAX_DISPATCH})"
        else
            log "ERROR: hermes binary not found at $HERMES_BIN"
        fi
    done < <(echo "$issues" | python3 -c "
import json, sys
issues = json.load(sys.stdin)
for i in issues:
    if i.get('pull_request'):
        continue
    assignee = (i.get('assignee') or {}).get('login', '').lower()
    title = i.get('title', '')
    num = i.get('number', 0)
    updated = i.get('updated_at', '')
    print(f'{num}|{assignee}|{title}|{updated}')
" 2>/dev/null)
done

log "Heartbeat complete. Dispatched: ${dispatched}"
scripts/vps-dispatch-worker.py (new file, 188 lines)
@@ -0,0 +1,188 @@
#!/usr/bin/env python3
"""
vps-dispatch-worker.py — Mac-side worker that dispatches queued work to VPS agents.

Reads the dispatch queue (~/.hermes/burn-logs/dispatch-queue.json) and for agents
marked with "vps": True in AGENT_USERS, SSHes into their VPS box and runs
hermes chat with the task context.

This complements the VPS-native heartbeat (vps-agent-heartbeat.sh) for cases
where the Mac gitea-event-watcher.py detects mentions before the VPS heartbeat
polls. Both paths work; this one is lower-latency for active work sessions.

Usage:
    python3 scripts/vps-dispatch-worker.py
    # or with specific agent filter:
    python3 scripts/vps-dispatch-worker.py --agent ezra
"""

import json
import os
import subprocess
import sys
import time
from pathlib import Path

DISPATCH_QUEUE = Path("~/.hermes/burn-logs/dispatch-queue.json").expanduser()
LOG_FILE = Path("~/.hermes/burn-logs/vps-dispatch.log").expanduser()

# VPS agent configs: agent name → SSH host
VPS_AGENTS = {
    "ezra": {
        "host": "root@143.198.27.163",
        "hermes_profile": "ezra",
        "token_file": Path("~/.config/gitea/ezra-token").expanduser(),
    },
    "bezalel": {
        "host": "root@159.203.146.185",
        "hermes_profile": "bezalel",
        "token_file": Path("~/.config/gitea/bezalel-token").expanduser(),
    },
}

GITEA_BASE = "https://forge.alexanderwhitestone.com/api/v1"


def log(msg):
    LOG_FILE.parent.mkdir(parents=True, exist_ok=True)
    ts = time.strftime("%Y-%m-%d %H:%M:%S")
    line = f"[{ts}] {msg}\n"
    with open(LOG_FILE, "a", encoding="utf-8") as f:
        f.write(line)
    print(line.strip())


def load_queue():
    if DISPATCH_QUEUE.exists():
        with open(DISPATCH_QUEUE, encoding="utf-8") as f:
            return json.load(f)
    return {}


def save_queue(queue):
    DISPATCH_QUEUE.parent.mkdir(parents=True, exist_ok=True)
    with open(DISPATCH_QUEUE, "w", encoding="utf-8") as f:
        json.dump(queue, f, indent=2, sort_keys=True)


def build_prompt(agent_name, item):
    """Build the hermes chat prompt from a dispatch queue item."""
    work_type = item.get("type", "unknown")
    full_name = item.get("full_name", "unknown/repo")
    issue_num = item.get("issue", item.get("pr", "?"))
    title = item.get("title", "")

    comments = item.get("comments", [])
    comment_text = ""
    if comments:
        for c in comments[-3:]:  # last 3 comments
            user = c.get("user", "unknown")
            body = c.get("body_preview", "")
            comment_text += f"\n@{user}: {body[:300]}"

    return (
        f"You are {agent_name.title()} on the Timmy Foundation. "
        f"A Gitea event needs your attention.\n\n"
        f"REPO: {full_name}\n"
        f"ISSUE: #{issue_num} — {title}\n"
        f"EVENT: {work_type}\n"
        f"RECENT COMMENTS:{comment_text or ' (none)'}\n\n"
        f"YOUR TASK:\n"
        f"Address this issue. If someone mentioned you, respond to them.\n"
        f"If assigned, work on the issue — read the body, implement, push changes.\n"
        f"Post your response as a comment on the issue via Gitea API.\n"
        f"Gitea: {GITEA_BASE}, token from environment."
    )


def dispatch_to_vps(agent_name, config, item):
    """SSH into the VPS and run hermes chat."""
    prompt = build_prompt(agent_name, item)
    host = config["host"]
    profile = config["hermes_profile"]
    work_id = item.get("work_id", "unknown")

    # Build the SSH command to run hermes chat on the VPS.
    # We pipe the prompt via stdin to avoid shell escaping issues.
    ssh_cmd = [
        "ssh", "-o", "ConnectTimeout=10",
        "-o", "StrictHostKeyChecking=accept-new",
        host,
        f"echo {shell_quote(prompt)} | hermes chat --profile {profile} --stdin --timeout 300"
    ]

    log(f"DISPATCH {agent_name}: SSH to {host} for {work_id}")

    try:
        result = subprocess.run(
            ssh_cmd,
            capture_output=True, text=True,
            timeout=360,  # 6 min total timeout
        )
        if result.returncode == 0:
            log(f"OK {agent_name}: {work_id} completed")
            return True
        else:
            log(f"FAIL {agent_name}: {work_id} exit={result.returncode} stderr={result.stderr[:200]}")
            return False
    except subprocess.TimeoutExpired:
        log(f"TIMEOUT {agent_name}: {work_id} after 360s")
        return False
    except Exception as e:
        log(f"ERROR {agent_name}: {work_id} — {e}")
        return False


def shell_quote(s):
    """Quote a string for safe shell interpolation."""
    import shlex
    return shlex.quote(s)


def main():
    agent_filter = None
    if "--agent" in sys.argv:
        idx = sys.argv.index("--agent")
        if idx + 1 < len(sys.argv):
            agent_filter = sys.argv[idx + 1]

    queue = load_queue()
    dispatched = 0
    failed = 0

    for agent_name, config in VPS_AGENTS.items():
        if agent_filter and agent_name != agent_filter:
            continue

        items = queue.get(agent_name, [])
        if not items:
            continue

        log(f"Processing {len(items)} items for {agent_name}")

        # Process items (pop from queue as we go)
        remaining = []
        for item in items[:5]:  # Max 5 per run
            success = dispatch_to_vps(agent_name, config, item)
            if success:
                dispatched += 1
            else:
                failed += 1
                remaining.append(item)  # Keep failed items for retry

        # Update queue: keep unprocessed items
        if remaining:
            queue[agent_name] = remaining
        elif agent_name in queue:
            del queue[agent_name]

    save_queue(queue)

    if dispatched or failed:
        log(f"Done: {dispatched} dispatched, {failed} failed")
    else:
        print("No VPS agent work items in queue.")


if __name__ == "__main__":
    main()
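For reference, a sketch of the dispatch-queue.json shape this worker consumes, inferred from load_queue() and build_prompt(); the values are invented, and only the keys the worker actually reads are grounded in the code above:

```python
# Hypothetical queue contents; keys match what build_prompt() reads.
EXAMPLE_QUEUE = {
    "ezra": [
        {
            "work_id": "timmy-home#579/comment-1234",  # used in log lines
            "type": "mention",                          # shown as EVENT
            "full_name": "Timmy_Foundation/timmy-home",
            "issue": 579,
            "title": "Ezra and Bezalel do not respond to Gitea @mention",
            "comments": [
                {"user": "timmy", "body_preview": "@ezra please take a look"},
            ],
        },
    ],
}
```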
@@ -24,7 +24,7 @@ class HealthCheckHandler(BaseHTTPRequestHandler):
         # Suppress default logging
         pass

-    def do_GET(self):
+    def do_GET(self):
         """Handle GET requests"""
         if self.path == '/health':
             self.send_health_response()