Compare commits
2 Commits
fix/10-kno...burn/8-har
| Author | SHA1 | Date |
|---|---|---|
| | 9a2135b1df | |
| | b2a9bca162 | |
@@ -1,114 +0,0 @@
# Knowledge File Format Specification

**Version:** 1
**Issue:** #10
**Status:** Draft

---

## Overview

The knowledge system has two layers:

1. **index.json** — Machine-readable fact index. Fast lookups by ID, category, repo, tags.
2. **Knowledge files** (YAML) — Human-readable, editable facts organized by domain.

The harvester writes to both. The bootstrapper reads from index.json. Humans edit the YAML files directly.
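
A consumer such as the bootstrapper can query the first layer with plain JSON tooling. Here is a minimal sketch (not part of the spec), assuming the repository-relative `knowledge/index.json` path from the directory layout below:

```python
# Minimal sketch: load the fact index and filter it in memory.
import json
from pathlib import Path

index = json.loads(Path("knowledge/index.json").read_text(encoding="utf-8"))

# Lookups are plain dict/list operations over the "facts" array.
by_id = {f["id"]: f for f in index["facts"]}
gitea_facts = [f for f in index["facts"] if "gitea" in f.get("tags", [])]
pitfalls = [f for f in index["facts"] if f["category"] == "pitfall"]
```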

---

## index.json Schema

```json
{
  "version": 1,
  "last_updated": "ISO-8601 timestamp",
  "total_facts": 0,
  "facts": []
}
```

### Fact Object

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `id` | string | yes | Unique identifier: `{domain}:{category}:{sequence}` |
| `fact` | string | yes | One-sentence description of the knowledge |
| `category` | enum | yes | One of: `fact`, `pitfall`, `pattern`, `tool-quirk`, `question` |
| `domain` | string | yes | Where this applies: repo name, `global`, or agent name |
| `confidence` | float | yes | 0.0–1.0. How certain is this knowledge? |
| `tags` | string[] | no | Searchable labels |
| `source_count` | int | no | How many sessions confirmed this fact |
| `first_seen` | date | no | ISO-8601 date first extracted |
| `last_confirmed` | date | no | ISO-8601 date last seen in a session |
| `expires` | date | no | Optional. After this date, fact is stale |
| `related` | string[] | no | IDs of related facts |
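
For concreteness, a fully populated entry looks like the sketch below. The field values are copied from `global:tool-quirk:001` in index.json; the assertions only restate the constraints in the table and are illustrative, not the official validator:

```python
# Example fact entry; values taken from global:tool-quirk:001 in index.json.
example_fact = {
    "id": "global:tool-quirk:001",
    "fact": "Gitea token stored at ~/.config/gitea/token, not env var GITEA_TOKEN",
    "category": "tool-quirk",
    "domain": "global",
    "confidence": 0.95,
    "tags": ["git", "auth", "gitea", "token"],
    "source_count": 23,
    "first_seen": "2026-03-27",
    "last_confirmed": "2026-04-13",
    "related": ["global:pitfall:001"],
}

# Informal checks mirroring the required fields and ranges above.
assert {"id", "fact", "category", "domain", "confidence"} <= example_fact.keys()
assert example_fact["category"] in {"fact", "pitfall", "pattern", "tool-quirk", "question"}
assert 0.0 <= example_fact["confidence"] <= 1.0
domain, category, sequence = example_fact["id"].split(":")  # {domain}:{category}:{sequence}
```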

### ID Format: `{domain}:{category}:{sequence}`

### Categories

| Category | Definition |
|----------|------------|
| `fact` | Concrete, verifiable information |
| `pitfall` | Errors, wrong assumptions, time-wasters |
| `pattern` | Successful sequences of actions |
| `tool-quirk` | Environment-specific behaviors |
| `question` | Identified but unanswered |

### Confidence Scoring

| Range | Meaning |
|-------|---------|
| 0.9–1.0 | Explicitly stated and verified |
| 0.7–0.8 | Clearly implied by multiple data points |
| 0.5–0.6 | Suggested but not fully verified |
| 0.3–0.4 | Inferred from limited data |
| 0.1–0.2 | Speculative or uncertain |

---

## Directory Structure

```
knowledge/
├── index.json           # Machine-readable fact index
├── SCHEMA.md            # This file
├── global/              # Cross-repo knowledge
│   ├── pitfalls.yaml
│   ├── patterns.yaml
│   └── tool-quirks.yaml
├── repos/               # Per-repo knowledge
│   ├── {repo-name}.yaml
│   └── ...
└── agents/              # Agent-type knowledge
    └── {agent-type}.yaml
```

## YAML File Format

YAML files use frontmatter for metadata, then markdown sections with fact entries:

```yaml
---
domain: global
category: tool-quirk
version: 1
last_updated: "2026-04-13"
---

# Title

## Section

- id: global:tool-quirk:001
  fact: "Description"
  confidence: 0.95
  tags: [tag1, tag2]
  source_count: 5
  first_seen: "2026-03-27"
```

## Validation

Run `python scripts/validate_knowledge.py` to validate index.json.
@@ -1,80 +0,0 @@
---
domain: global
category: pitfall
version: 1
last_updated: "2026-04-13"
---

# Pitfalls (Global)

Cross-repo traps that waste time across the fleet.

## Git & Forge

- id: global:pitfall:001
  fact: "Branch protection requires 1 approval on main - API merges fail with 405 without it"
  confidence: 0.95
  tags: [git, merge, branch-protection, gitea]
  source_count: 12
  first_seen: "2026-04-05"
  last_confirmed: "2026-04-13"
  related: [the-nexus:pitfall:001]

- id: global:pitfall:002
  fact: "Never use --no-verify on git commits - it bypasses all hooks including safety checks"
  confidence: 0.95
  tags: [git, hooks, safety]
  source_count: 5
  first_seen: "2026-03-28"
  last_confirmed: "2026-04-13"

- id: global:pitfall:003
  fact: "Gitea PR creation workaround needed on the-nexus - direct API call fails, use alternative endpoint"
  confidence: 0.9
  tags: [gitea, pr, api, workaround]
  source_count: 4
  first_seen: "2026-04-06"
  last_confirmed: "2026-04-12"

## Agent Operations

- id: global:pitfall:004
  fact: "Anthropic is BANNED from fallback chain - if fallback triggers to Anthropic, something is wrong"
  confidence: 0.95
  tags: [provider, anthropic, fallback]
  source_count: 7
  first_seen: "2026-03-30"
  last_confirmed: "2026-04-13"

- id: global:pitfall:005
  fact: "Telegram tokens expired - don't assume Telegram notifications work without checking"
  confidence: 0.85
  tags: [telegram, notifications, token]
  source_count: 3
  first_seen: "2026-04-02"

- id: global:pitfall:006
  fact: "Multiple gateways = 'cannot schedule futures' error - only one gateway process should run"
  confidence: 0.9
  tags: [gateway, cron, process]
  source_count: 4
  first_seen: "2026-04-04"
  last_confirmed: "2026-04-11"

## Testing

- id: global:pitfall:007
  fact: "pytest root collection picks up operational *_test.py scripts - restrict to tests/ directory"
  confidence: 0.9
  tags: [pytest, test, collection]
  source_count: 3
  first_seen: "2026-04-07"
  last_confirmed: "2026-04-13"

- id: global:pitfall:008
  fact: "TDD: test 1 before building 55 - verify the cycle works before scaling"
  confidence: 0.95
  tags: [tdd, testing, methodology]
  source_count: 8
  first_seen: "2026-03-25"
  last_confirmed: "2026-04-13"
@@ -1,71 +0,0 @@
---
domain: global
category: tool-quirk
version: 1
last_updated: "2026-04-13"
---

# Tool Quirks (Global)

## Authentication

- id: global:tool-quirk:001
  fact: "Gitea token stored at ~/.config/gitea/token, not env var GITEA_TOKEN"
  confidence: 0.95
  tags: [git, auth, gitea, token]
  source_count: 23
  first_seen: "2026-03-27"
  last_confirmed: "2026-04-13"
  related: [global:pitfall:001]

- id: global:tool-quirk:002
  fact: "Gitea API uses 'Authorization: token TOKEN' header format, not Bearer"
  confidence: 0.9
  tags: [git, api, gitea]
  source_count: 8
  first_seen: "2026-03-28"
  last_confirmed: "2026-04-12"

- id: global:tool-quirk:003
  fact: "Gitea Issues API type=issues param does NOT filter PRs - use truthiness check on pull_request field"
  confidence: 0.95
  tags: [gitea, api, issues, pr]
  source_count: 6
  first_seen: "2026-04-01"
  last_confirmed: "2026-04-13"

## Paths & Environment

- id: global:tool-quirk:004
  fact: "~/.hermes is the default hermes home - check get_hermes_home() not the path literal"
  confidence: 0.9
  tags: [paths, hermes, env]
  source_count: 10
  first_seen: "2026-03-30"
  last_confirmed: "2026-04-13"
  related: [hermes-agent:pitfall:005]

- id: global:tool-quirk:005
  fact: "Ansible vault-encrypted vars in YAML require vault_inline_vars plugin"
  confidence: 0.85
  tags: [ansible, vault, config]
  source_count: 3
  first_seen: "2026-04-02"

## Model & Inference

- id: global:tool-quirk:006
  fact: "mimo-v2-pro via Nous Research is the default model - don't assume Anthropic is available"
  confidence: 0.95
  tags: [model, provider, nous, default]
  source_count: 15
  first_seen: "2026-03-25"
  last_confirmed: "2026-04-13"

- id: global:tool-quirk:007
  fact: "Kill + restart with 'hermes chat' preserves old model state - NEVER use --resume"
  confidence: 0.95
  tags: [hermes, model, restart, session]
  source_count: 8
  first_seen: "2026-03-29"
  last_confirmed: "2026-04-12"
@@ -1,472 +1,6 @@
{
  "version": 1,
  "last_updated": "2026-04-13T20:00:00Z",
  "total_facts": 29,
  "facts": [
    {
      "id": "hermes-agent:pitfall:001",
      "fact": "deploy-crons.py leaves jobs in mixed model format",
      "category": "pitfall",
      "domain": "hermes-agent",
      "confidence": 0.95,
      "tags": [
        "cron",
        "deploy",
        "model",
        "config"
      ],
      "source_count": 5,
      "first_seen": "2026-04-08",
      "last_confirmed": "2026-04-13",
      "related": [
        "hermes-agent:pitfall:002",
        "hermes-agent:pitfall:003"
      ]
    },
    {
      "id": "hermes-agent:pitfall:002",
      "fact": "deploy-crons.py --deploy doesn't set legacy skill field from skills list",
      "category": "pitfall",
      "domain": "hermes-agent",
      "confidence": 0.9,
      "tags": [
        "cron",
        "deploy",
        "skills"
      ],
      "source_count": 3,
      "first_seen": "2026-04-09",
      "last_confirmed": "2026-04-13",
      "related": [
        "hermes-agent:pitfall:001"
      ]
    },
    {
      "id": "hermes-agent:pitfall:003",
      "fact": "Cron jobs with blank fallback_model fields trigger spurious gateway warnings",
      "category": "pitfall",
      "domain": "hermes-agent",
      "confidence": 0.9,
      "tags": [
        "cron",
        "model",
        "fallback"
      ],
      "source_count": 4,
      "first_seen": "2026-04-07",
      "last_confirmed": "2026-04-12",
      "related": [
        "hermes-agent:pitfall:001"
      ]
    },
    {
      "id": "hermes-agent:pitfall:004",
      "fact": "model-watchdog.py checks first provider line, not model.provider - causes false drift alarms",
      "category": "pitfall",
      "domain": "hermes-agent",
      "confidence": 0.9,
      "tags": [
        "watchdog",
        "model",
        "config"
      ],
      "source_count": 3,
      "first_seen": "2026-04-08",
      "last_confirmed": "2026-04-13"
    },
    {
      "id": "hermes-agent:pitfall:005",
      "fact": "10+ files read HERMES_HOME directly instead of get_hermes_home()",
      "category": "pitfall",
      "domain": "hermes-agent",
      "confidence": 0.85,
      "tags": [
        "paths",
        "env",
        "hermes-home"
      ],
      "source_count": 6,
      "first_seen": "2026-04-06",
      "last_confirmed": "2026-04-12",
      "related": [
        "global:pitfall:002"
      ]
    },
    {
      "id": "hermes-agent:pitfall:006",
      "fact": "get_hermes_home() doesn't expand tilde when HERMES_HOME=~/... is set",
      "category": "pitfall",
      "domain": "hermes-agent",
      "confidence": 0.8,
      "tags": [
        "paths",
        "env",
        "bug"
      ],
      "source_count": 2,
      "first_seen": "2026-04-05"
    },
    {
      "id": "hermes-agent:pitfall:007",
      "fact": "vps-agent-dispatch reports OK while remote hermes binary path is broken",
      "category": "pitfall",
      "domain": "hermes-agent",
      "confidence": 0.9,
      "tags": [
        "ssh",
        "dispatch",
        "vps"
      ],
      "source_count": 4,
      "first_seen": "2026-04-07",
      "last_confirmed": "2026-04-11"
    },
    {
      "id": "hermes-agent:pitfall:008",
      "fact": "nightwatch-health-monitor SSH check fails on cloud-model-only deployments",
      "category": "pitfall",
      "domain": "hermes-agent",
      "confidence": 0.85,
      "tags": [
        "ssh",
        "health",
        "cloud"
      ],
      "source_count": 2,
      "first_seen": "2026-04-10"
    },
    {
      "id": "the-nexus:pitfall:001",
      "fact": "Merges fail with HTTP 405 due to branch protection",
      "category": "pitfall",
      "domain": "the-nexus",
      "confidence": 0.95,
      "tags": [
        "git",
        "merge",
        "branch-protection",
        "gitea"
      ],
      "source_count": 12,
      "first_seen": "2026-04-05",
      "last_confirmed": "2026-04-13",
      "related": [
        "global:pitfall:001"
      ]
    },
    {
      "id": "the-nexus:pitfall:002",
      "fact": "ThreadingHTTPServer required for multi-user bridge - standard HTTPServer blocks on concurrent requests",
      "category": "pitfall",
      "domain": "the-nexus",
      "confidence": 0.95,
      "tags": [
        "server",
        "concurrency",
        "bridge"
      ],
      "source_count": 5,
      "first_seen": "2026-04-10",
      "last_confirmed": "2026-04-13"
    },
    {
      "id": "the-nexus:pitfall:003",
      "fact": "ChatLog.log() crashes on message persistence when index.html has orphaned button tags",
      "category": "pitfall",
      "domain": "the-nexus",
      "confidence": 0.9,
      "tags": [
        "html",
        "crash",
        "chatlog"
      ],
      "source_count": 3,
      "first_seen": "2026-04-12",
      "last_confirmed": "2026-04-13"
    },
    {
      "id": "the-nexus:pitfall:004",
      "fact": "Three.js LOD not implemented - local hardware struggles with full scene",
      "category": "pitfall",
      "domain": "the-nexus",
      "confidence": 0.85,
      "tags": [
        "threejs",
        "performance",
        "lod"
      ],
      "source_count": 4,
      "first_seen": "2026-04-09",
      "last_confirmed": "2026-04-13"
    },
    {
      "id": "the-nexus:pitfall:005",
      "fact": "Duplicate content blocks appear in index.html when PR merges conflict silently",
      "category": "pitfall",
      "domain": "the-nexus",
      "confidence": 0.8,
      "tags": [
        "html",
        "merge-conflict",
        "duplicate"
      ],
      "source_count": 3,
      "first_seen": "2026-04-11",
      "last_confirmed": "2026-04-13"
    },
    {
      "id": "the-nexus:pitfall:006",
      "fact": "Unified HTTP + WebSocket server required for proper URL deployment - separate servers break CORS",
      "category": "pitfall",
      "domain": "the-nexus",
      "confidence": 0.9,
      "tags": [
        "deploy",
        "websocket",
        "http",
        "cors"
      ],
      "source_count": 4,
      "first_seen": "2026-04-10",
      "last_confirmed": "2026-04-13"
    },
    {
      "id": "global:tool-quirk:001",
      "fact": "Gitea token stored at ~/.config/gitea/token, not env var GITEA_TOKEN",
      "category": "tool-quirk",
      "domain": "global",
      "confidence": 0.95,
      "tags": [
        "git",
        "auth",
        "gitea",
        "token"
      ],
      "source_count": 23,
      "first_seen": "2026-03-27",
      "last_confirmed": "2026-04-13",
      "related": [
        "global:pitfall:001"
      ]
    },
    {
      "id": "global:tool-quirk:002",
      "fact": "Gitea API uses 'Authorization: token TOKEN' header format, not Bearer",
      "category": "tool-quirk",
      "domain": "global",
      "confidence": 0.9,
      "tags": [
        "git",
        "api",
        "gitea"
      ],
      "source_count": 8,
      "first_seen": "2026-03-28",
      "last_confirmed": "2026-04-12"
    },
    {
      "id": "global:tool-quirk:003",
      "fact": "Gitea Issues API type=issues param does NOT filter PRs",
      "category": "tool-quirk",
      "domain": "global",
      "confidence": 0.95,
      "tags": [
        "gitea",
        "api",
        "issues",
        "pr"
      ],
      "source_count": 6,
      "first_seen": "2026-04-01",
      "last_confirmed": "2026-04-13"
    },
    {
      "id": "global:tool-quirk:004",
      "fact": "~/.hermes is the default hermes home - check get_hermes_home() not the path literal",
      "category": "tool-quirk",
      "domain": "global",
      "confidence": 0.9,
      "tags": [
        "paths",
        "hermes",
        "env"
      ],
      "source_count": 10,
      "first_seen": "2026-03-30",
      "last_confirmed": "2026-04-13",
      "related": [
        "hermes-agent:pitfall:005"
      ]
    },
    {
      "id": "global:tool-quirk:005",
      "fact": "Ansible vault-encrypted vars in YAML require vault_inline_vars plugin",
      "category": "tool-quirk",
      "domain": "global",
      "confidence": 0.85,
      "tags": [
        "ansible",
        "vault",
        "config"
      ],
      "source_count": 3,
      "first_seen": "2026-04-02"
    },
    {
      "id": "global:tool-quirk:006",
      "fact": "mimo-v2-pro via Nous Research is the default model - don't assume Anthropic is available",
      "category": "tool-quirk",
      "domain": "global",
      "confidence": 0.95,
      "tags": [
        "model",
        "provider",
        "nous",
        "default"
      ],
      "source_count": 15,
      "first_seen": "2026-03-25",
      "last_confirmed": "2026-04-13"
    },
    {
      "id": "global:tool-quirk:007",
      "fact": "Kill + restart with 'hermes chat' preserves old model state - NEVER use --resume",
      "category": "tool-quirk",
      "domain": "global",
      "confidence": 0.95,
      "tags": [
        "hermes",
        "model",
        "restart",
        "session"
      ],
      "source_count": 8,
      "first_seen": "2026-03-29",
      "last_confirmed": "2026-04-12"
    },
    {
      "id": "global:pitfall:001",
      "fact": "Branch protection requires 1 approval on main - API merges fail with 405 without it",
      "category": "pitfall",
      "domain": "global",
      "confidence": 0.95,
      "tags": [
        "git",
        "merge",
        "branch-protection",
        "gitea"
      ],
      "source_count": 12,
      "first_seen": "2026-04-05",
      "last_confirmed": "2026-04-13",
      "related": [
        "the-nexus:pitfall:001"
      ]
    },
    {
      "id": "global:pitfall:002",
      "fact": "Never use --no-verify on git commits",
      "category": "pitfall",
      "domain": "global",
      "confidence": 0.95,
      "tags": [
        "git",
        "hooks",
        "safety"
      ],
      "source_count": 5,
      "first_seen": "2026-03-28",
      "last_confirmed": "2026-04-13"
    },
    {
      "id": "global:pitfall:003",
      "fact": "Gitea PR creation workaround needed on the-nexus - direct API call fails",
      "category": "pitfall",
      "domain": "global",
      "confidence": 0.9,
      "tags": [
        "gitea",
        "pr",
        "api",
        "workaround"
      ],
      "source_count": 4,
      "first_seen": "2026-04-06",
      "last_confirmed": "2026-04-12"
    },
    {
      "id": "global:pitfall:004",
      "fact": "Anthropic is BANNED from fallback chain",
      "category": "pitfall",
      "domain": "global",
      "confidence": 0.95,
      "tags": [
        "provider",
        "anthropic",
        "fallback"
      ],
      "source_count": 7,
      "first_seen": "2026-03-30",
      "last_confirmed": "2026-04-13"
    },
    {
      "id": "global:pitfall:005",
      "fact": "Telegram tokens expired - don't assume Telegram notifications work",
      "category": "pitfall",
      "domain": "global",
      "confidence": 0.85,
      "tags": [
        "telegram",
        "notifications",
        "token"
      ],
      "source_count": 3,
      "first_seen": "2026-04-02"
    },
    {
      "id": "global:pitfall:006",
      "fact": "Multiple gateways = 'cannot schedule futures' error - only one gateway process should run",
      "category": "pitfall",
      "domain": "global",
      "confidence": 0.9,
      "tags": [
        "gateway",
        "cron",
        "process"
      ],
      "source_count": 4,
      "first_seen": "2026-04-04",
      "last_confirmed": "2026-04-11"
    },
    {
      "id": "global:pitfall:007",
      "fact": "pytest root collection picks up operational *_test.py scripts - restrict to tests/ directory",
      "category": "pitfall",
      "domain": "global",
      "confidence": 0.9,
      "tags": [
        "pytest",
        "test",
        "collection"
      ],
      "source_count": 3,
      "first_seen": "2026-04-07",
      "last_confirmed": "2026-04-13"
    },
    {
      "id": "global:pitfall:008",
      "fact": "TDD: test 1 before building 55",
      "category": "pitfall",
      "domain": "global",
      "confidence": 0.95,
      "tags": [
        "tdd",
        "testing",
        "methodology"
      ],
      "source_count": 8,
      "first_seen": "2026-03-25",
      "last_confirmed": "2026-04-13"
    }
  ]
  "total_facts": 0,
  "facts": []
}
@@ -1,80 +0,0 @@
---
domain: hermes-agent
category: pitfall
version: 1
last_updated: "2026-04-13"
---

# Pitfalls (hermes-agent)

## Cron & Deployment

- id: hermes-agent:pitfall:001
  fact: "deploy-crons.py leaves jobs in mixed model format - some have provider/model, some just model"
  confidence: 0.95
  tags: [cron, deploy, model, config]
  source_count: 5
  first_seen: "2026-04-08"
  last_confirmed: "2026-04-13"
  related: [hermes-agent:pitfall:002, hermes-agent:pitfall:003]

- id: hermes-agent:pitfall:002
  fact: "deploy-crons.py --deploy doesn't set legacy skill field from skills list"
  confidence: 0.9
  tags: [cron, deploy, skills]
  source_count: 3
  first_seen: "2026-04-09"
  last_confirmed: "2026-04-13"
  related: [hermes-agent:pitfall:001]

- id: hermes-agent:pitfall:003
  fact: "Cron jobs with blank fallback_model fields trigger spurious gateway warnings"
  confidence: 0.9
  tags: [cron, model, fallback]
  source_count: 4
  first_seen: "2026-04-07"
  last_confirmed: "2026-04-12"
  related: [hermes-agent:pitfall:001]

- id: hermes-agent:pitfall:004
  fact: "model-watchdog.py checks first provider line, not model.provider - causes false drift alarms"
  confidence: 0.9
  tags: [watchdog, model, config]
  source_count: 3
  first_seen: "2026-04-08"
  last_confirmed: "2026-04-13"

## Path & Environment

- id: hermes-agent:pitfall:005
  fact: "10+ files read HERMES_HOME directly instead of get_hermes_home() - breaks on custom paths"
  confidence: 0.85
  tags: [paths, env, hermes-home]
  source_count: 6
  first_seen: "2026-04-06"
  last_confirmed: "2026-04-12"
  related: [global:pitfall:002]

- id: hermes-agent:pitfall:006
  fact: "get_hermes_home() doesn't expand tilde when HERMES_HOME=~/... is set"
  confidence: 0.8
  tags: [paths, env, bug]
  source_count: 2
  first_seen: "2026-04-05"

## SSH & Dispatch

- id: hermes-agent:pitfall:007
  fact: "vps-agent-dispatch reports OK while remote hermes binary path is broken"
  confidence: 0.9
  tags: [ssh, dispatch, vps]
  source_count: 4
  first_seen: "2026-04-07"
  last_confirmed: "2026-04-11"

- id: hermes-agent:pitfall:008
  fact: "nightwatch-health-monitor SSH check fails on cloud-model-only deployments"
  confidence: 0.85
  tags: [ssh, health, cloud]
  source_count: 2
  first_seen: "2026-04-10"
@@ -1,63 +0,0 @@
---
domain: the-nexus
category: pitfall
version: 1
last_updated: "2026-04-13"
---

# Pitfalls (the-nexus)

## Git & Merging

- id: the-nexus:pitfall:001
  fact: "Merges fail with HTTP 405 due to branch protection - must use merge API with 1 approval"
  confidence: 0.95
  tags: [git, merge, branch-protection, gitea]
  source_count: 12
  first_seen: "2026-04-05"
  last_confirmed: "2026-04-13"
  related: [global:pitfall:001]

- id: the-nexus:pitfall:002
  fact: "ThreadingHTTPServer required for multi-user bridge - standard HTTPServer blocks on concurrent requests"
  confidence: 0.95
  tags: [server, concurrency, bridge]
  source_count: 5
  first_seen: "2026-04-10"
  last_confirmed: "2026-04-13"

- id: the-nexus:pitfall:003
  fact: "ChatLog.log() crashes on message persistence when index.html has orphaned button tags"
  confidence: 0.9
  tags: [html, crash, chatlog]
  source_count: 3
  first_seen: "2026-04-12"
  last_confirmed: "2026-04-13"

## Three.js & Performance

- id: the-nexus:pitfall:004
  fact: "Three.js LOD not implemented - local hardware struggles with full scene without texture optimization"
  confidence: 0.85
  tags: [threejs, performance, lod]
  source_count: 4
  first_seen: "2026-04-09"
  last_confirmed: "2026-04-13"

- id: the-nexus:pitfall:005
  fact: "Duplicate content blocks appear in index.html when PR merges conflict silently"
  confidence: 0.8
  tags: [html, merge-conflict, duplicate]
  source_count: 3
  first_seen: "2026-04-11"
  last_confirmed: "2026-04-13"

## Deployment

- id: the-nexus:pitfall:006
  fact: "Unified HTTP + WebSocket server required for proper URL deployment - separate servers break CORS"
  confidence: 0.9
  tags: [deploy, websocket, http, cors]
  source_count: 4
  first_seen: "2026-04-10"
  last_confirmed: "2026-04-13"
scripts/harvester.py (new file, 447 lines)
@@ -0,0 +1,447 @@
#!/usr/bin/env python3
"""
harvester.py — Extract durable knowledge from Hermes session transcripts.

Combines session_reader + extraction prompt + LLM inference to pull
facts, pitfalls, patterns, and tool quirks from finished sessions.

Usage:
    python3 harvester.py --session ~/.hermes/sessions/session_xxx.jsonl --output knowledge/
    python3 harvester.py --batch --since 2026-04-01 --limit 100
    python3 harvester.py --session session.jsonl --dry-run  # Preview without writing
"""

import argparse
import json
import os
import sys
import time
import hashlib
from datetime import datetime, timezone
from pathlib import Path
from typing import Optional

# Add scripts dir to path for sibling imports
SCRIPT_DIR = Path(__file__).parent.absolute()
sys.path.insert(0, str(SCRIPT_DIR))

from session_reader import read_session, extract_conversation, truncate_for_context, messages_to_text

# --- Configuration ---

DEFAULT_API_BASE = os.environ.get("HARVESTER_API_BASE", "https://api.nousresearch.com/v1")
DEFAULT_API_KEY = os.environ.get("HARVESTER_API_KEY", "")
DEFAULT_MODEL = os.environ.get("HARVESTER_MODEL", "xiaomi/mimo-v2-pro")
KNOWLEDGE_DIR = os.environ.get("HARVESTER_KNOWLEDGE_DIR", "knowledge")
PROMPT_PATH = os.environ.get("HARVESTER_PROMPT_PATH", str(SCRIPT_DIR.parent / "templates" / "harvest-prompt.md"))

# Where to look for API keys if not set via env
API_KEY_PATHS = [
    os.path.expanduser("~/.config/nous/key"),
    os.path.expanduser("~/.hermes/keymaxxing/active/minimax.key"),
    os.path.expanduser("~/.config/openrouter/key"),
]


def find_api_key() -> str:
    """Find API key from common locations."""
    for path in API_KEY_PATHS:
        if os.path.exists(path):
            with open(path) as f:
                key = f.read().strip()
                if key:
                    return key
    return ""


def load_extraction_prompt() -> str:
    """Load the extraction prompt template."""
    path = Path(PROMPT_PATH)
    if not path.exists():
        print(f"ERROR: Extraction prompt not found at {path}", file=sys.stderr)
        print("Expected templates/harvest-prompt.md from issue #7", file=sys.stderr)
        sys.exit(1)
    return path.read_text(encoding='utf-8')


def call_llm(prompt: str, transcript: str, api_base: str, api_key: str, model: str) -> Optional[list[dict]]:
    """Call the LLM API to extract knowledge from a transcript."""
    import urllib.request

    messages = [
        {"role": "system", "content": prompt},
        {"role": "user", "content": f"Extract knowledge from this session transcript:\n\n{transcript}"}
    ]

    payload = json.dumps({
        "model": model,
        "messages": messages,
        "temperature": 0.1,  # Low temp for consistent extraction
        "max_tokens": 4096
    }).encode('utf-8')

    req = urllib.request.Request(
        f"{api_base}/chat/completions",
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json"
        },
        method="POST"
    )

    try:
        with urllib.request.urlopen(req, timeout=60) as resp:
            result = json.loads(resp.read().decode('utf-8'))
            content = result["choices"][0]["message"]["content"]
            return parse_extraction_response(content)
    except Exception as e:
        print(f"ERROR: LLM API call failed: {e}", file=sys.stderr)
        return None


def parse_extraction_response(content: str) -> Optional[list[dict]]:
    """Parse the LLM response to extract knowledge items.

    Handles various response formats: raw JSON, markdown-wrapped JSON, etc.
    """
    # Try direct JSON parse first
    try:
        data = json.loads(content)
        if isinstance(data, dict) and 'knowledge' in data:
            return data['knowledge']
        if isinstance(data, list):
            return data
    except json.JSONDecodeError:
        pass

    # Try extracting JSON from markdown code blocks
    import re
    json_match = re.search(r'```(?:json)?\s*({.*?})\s*```', content, re.DOTALL)
    if json_match:
        try:
            data = json.loads(json_match.group(1))
            if isinstance(data, dict) and 'knowledge' in data:
                return data['knowledge']
            if isinstance(data, list):
                return data
        except json.JSONDecodeError:
            pass

    # Try finding any JSON object with a knowledge array
    json_match = re.search(r'({[^{}]*"knowledge"[^{}]*\[[\s\S]*?\][^{}]*})', content)
    if json_match:
        try:
            data = json.loads(json_match.group(1))
            return data.get('knowledge', [])
        except json.JSONDecodeError:
            pass

    print("WARNING: Could not parse LLM response as JSON", file=sys.stderr)
    print(f"Response preview: {content[:500]}", file=sys.stderr)
    return None


def load_existing_knowledge(knowledge_dir: str) -> dict:
    """Load the existing knowledge index."""
    index_path = Path(knowledge_dir) / "index.json"
    if not index_path.exists():
        return {"version": 1, "last_updated": "", "total_facts": 0, "facts": []}

    try:
        with open(index_path, 'r', encoding='utf-8') as f:
            return json.load(f)
    except (json.JSONDecodeError, IOError) as e:
        print(f"WARNING: Could not load knowledge index: {e}", file=sys.stderr)
        return {"version": 1, "last_updated": "", "total_facts": 0, "facts": []}


def fact_fingerprint(fact: dict) -> str:
    """Generate a deduplication fingerprint for a fact.

    Uses the fact text normalized (lowercase, stripped) as the key.
    Similar facts will have similar fingerprints.
    """
    text = fact.get('fact', '').lower().strip()
    # Normalize whitespace
    text = ' '.join(text.split())
    return hashlib.md5(text.encode('utf-8')).hexdigest()


def deduplicate(new_facts: list[dict], existing: list[dict], similarity_threshold: float = 0.8) -> list[dict]:
    """Remove duplicate facts from new_facts that already exist in the knowledge store.

    Uses fingerprint matching for exact dedup and simple overlap check for near-dupes.
    """
    existing_fingerprints = set()
    existing_texts = []
    for f in existing:
        fp = fact_fingerprint(f)
        existing_fingerprints.add(fp)
        existing_texts.append(f.get('fact', '').lower().strip())

    unique = []
    for fact in new_facts:
        fp = fact_fingerprint(fact)
        if fp in existing_fingerprints:
            continue

        # Check for near-duplicates using simple word overlap
        fact_words = set(fact.get('fact', '').lower().split())
        is_dup = False
        for existing_text in existing_texts:
            existing_words = set(existing_text.split())
            if not fact_words or not existing_words:
                continue
            overlap = len(fact_words & existing_words) / max(len(fact_words | existing_words), 1)
            if overlap >= similarity_threshold:
                is_dup = True
                break

        if not is_dup:
            unique.append(fact)
            existing_fingerprints.add(fp)
            existing_texts.append(fact.get('fact', '').lower().strip())

    return unique


def validate_fact(fact: dict) -> bool:
    """Validate a single knowledge item has required fields."""
    required = ['fact', 'category', 'repo', 'confidence']
    for field in required:
        if field not in fact:
            return False

    if not isinstance(fact['fact'], str) or not fact['fact'].strip():
        return False

    valid_categories = ['fact', 'pitfall', 'pattern', 'tool-quirk', 'question']
    if fact['category'] not in valid_categories:
        return False

    if not isinstance(fact.get('confidence', 0), (int, float)):
        return False

    if not (0.0 <= fact['confidence'] <= 1.0):
        return False

    return True


def write_knowledge(index: dict, new_facts: list[dict], knowledge_dir: str, source_session: str = ""):
    """Write new facts to the knowledge store."""
    kdir = Path(knowledge_dir)
    kdir.mkdir(parents=True, exist_ok=True)

    # Add source tracking to each fact
    for fact in new_facts:
        fact['source_session'] = source_session
        fact['harvested_at'] = datetime.now(timezone.utc).isoformat()

    # Update index
    index['facts'].extend(new_facts)
    index['total_facts'] = len(index['facts'])
    index['last_updated'] = datetime.now(timezone.utc).isoformat()

    # Write index
    index_path = kdir / "index.json"
    with open(index_path, 'w', encoding='utf-8') as f:
        json.dump(index, f, indent=2, ensure_ascii=False)

    # Also write per-repo markdown files for human reading
    repos = {}
    for fact in new_facts:
        repo = fact.get('repo', 'global')
        repos.setdefault(repo, []).append(fact)

    for repo, facts in repos.items():
        if repo == 'global':
            md_path = kdir / "global" / "harvested.md"
        else:
            md_path = kdir / "repos" / f"{repo}.md"

        md_path.parent.mkdir(parents=True, exist_ok=True)

        # Append to existing or create new
        mode = 'a' if md_path.exists() else 'w'
        with open(md_path, mode, encoding='utf-8') as f:
            if mode == 'w':
                f.write(f"# Knowledge: {repo}\n\n")
            f.write(f"## Harvested {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M')}\n\n")
            for fact in facts:
                icon = {'fact': '📋', 'pitfall': '⚠️', 'pattern': '🔄', 'tool-quirk': '🔧', 'question': '❓'}.get(fact['category'], '•')
                f.write(f"- {icon} **{fact['category']}** (conf: {fact['confidence']:.1f}): {fact['fact']}\n")
            f.write("\n")


def harvest_session(session_path: str, knowledge_dir: str, api_base: str, api_key: str,
                    model: str, dry_run: bool = False, min_confidence: float = 0.3) -> dict:
    """Harvest knowledge from a single session.

    Returns: dict with stats (facts_found, facts_new, facts_dup, elapsed_seconds, error)
    """
    start_time = time.time()
    stats = {
        'session': session_path,
        'facts_found': 0,
        'facts_new': 0,
        'facts_dup': 0,
        'elapsed_seconds': 0,
        'error': None
    }

    try:
        # 1. Read session
        messages = read_session(session_path)
        if not messages:
            stats['error'] = "Empty session file"
            return stats

        # 2. Extract conversation
        conv = extract_conversation(messages)
        if not conv:
            stats['error'] = "No conversation turns found"
            return stats

        # 3. Truncate for context window
        truncated = truncate_for_context(conv, head=50, tail=50)
        transcript = messages_to_text(truncated)

        # 4. Load extraction prompt
        prompt = load_extraction_prompt()

        # 5. Call LLM
        raw_facts = call_llm(prompt, transcript, api_base, api_key, model)
        if raw_facts is None:
            stats['error'] = "LLM extraction failed"
            return stats

        # 6. Validate
        valid_facts = [f for f in raw_facts if validate_fact(f) and f.get('confidence', 0) >= min_confidence]
        stats['facts_found'] = len(valid_facts)

        # 7. Deduplicate
        existing_index = load_existing_knowledge(knowledge_dir)
        existing_facts = existing_index.get('facts', [])
        new_facts = deduplicate(valid_facts, existing_facts)
        stats['facts_new'] = len(new_facts)
        stats['facts_dup'] = len(valid_facts) - len(new_facts)

        # 8. Write (unless dry run)
        if new_facts and not dry_run:
            write_knowledge(existing_index, new_facts, knowledge_dir, source_session=session_path)

        stats['elapsed_seconds'] = round(time.time() - start_time, 2)
        return stats

    except Exception as e:
        stats['error'] = str(e)
        stats['elapsed_seconds'] = round(time.time() - start_time, 2)
        return stats


def batch_harvest(sessions_dir: str, knowledge_dir: str, api_base: str, api_key: str,
                  model: str, since: str = "", limit: int = 0, dry_run: bool = False) -> list[dict]:
    """Harvest knowledge from multiple sessions in batch."""
    sessions_path = Path(sessions_dir)
    if not sessions_path.is_dir():
        print(f"ERROR: Sessions directory not found: {sessions_dir}", file=sys.stderr)
        return []

    # Find session files
    session_files = sorted(sessions_path.glob("*.jsonl"), reverse=True)  # Newest first

    # Filter by date if --since provided
    if since:
        since_dt = datetime.fromisoformat(since.replace('Z', '+00:00'))
        filtered = []
        for sf in session_files:
            # Try to parse timestamp from filename (common format: session_YYYYMMDD_HHMMSS_hash.jsonl)
            try:
                parts = sf.stem.split('_')
                if len(parts) >= 3:
                    date_str = parts[1]
                    file_dt = datetime.strptime(date_str, '%Y%m%d').replace(tzinfo=timezone.utc)
                    if file_dt >= since_dt:
                        filtered.append(sf)
            except (ValueError, IndexError):
                # If we can't parse the date, include the file (be permissive)
                filtered.append(sf)
        session_files = filtered

    # Apply limit
    if limit > 0:
        session_files = session_files[:limit]

    print(f"Harvesting {len(session_files)} sessions...")

    results = []
    for i, sf in enumerate(session_files, 1):
        print(f"[{i}/{len(session_files)}] {sf.name}...", end=" ", flush=True)
        stats = harvest_session(str(sf), knowledge_dir, api_base, api_key, model, dry_run)
        if stats['error']:
            print(f"ERROR: {stats['error']}")
        else:
            print(f"{stats['facts_new']} new, {stats['facts_dup']} dup ({stats['elapsed_seconds']}s)")
        results.append(stats)

    return results


def main():
    parser = argparse.ArgumentParser(description="Harvest knowledge from session transcripts")
    parser.add_argument('--session', help='Path to a single session JSONL file')
    parser.add_argument('--batch', action='store_true', help='Batch mode: process multiple sessions')
    parser.add_argument('--sessions-dir', default=os.path.expanduser('~/.hermes/sessions'),
                        help='Directory containing session files (default: ~/.hermes/sessions)')
    parser.add_argument('--output', default='knowledge', help='Output directory for knowledge store')
    parser.add_argument('--since', default='', help='Only process sessions after this date (YYYY-MM-DD)')
    parser.add_argument('--limit', type=int, default=0, help='Max sessions to process (0=unlimited)')
    parser.add_argument('--api-base', default=DEFAULT_API_BASE, help='LLM API base URL')
    parser.add_argument('--api-key', default='', help='LLM API key (or set HARVESTER_API_KEY)')
    parser.add_argument('--model', default=DEFAULT_MODEL, help='Model to use for extraction')
    parser.add_argument('--dry-run', action='store_true', help='Preview without writing to knowledge store')
    parser.add_argument('--min-confidence', type=float, default=0.3, help='Minimum confidence threshold')

    args = parser.parse_args()

    # Resolve API key
    api_key = args.api_key or DEFAULT_API_KEY or find_api_key()
    if not api_key:
        print("ERROR: No API key found. Set HARVESTER_API_KEY or store in one of:", file=sys.stderr)
        for p in API_KEY_PATHS:
            print(f" {p}", file=sys.stderr)
        sys.exit(1)

    # Resolve knowledge directory
    knowledge_dir = args.output
    if not os.path.isabs(knowledge_dir):
        knowledge_dir = os.path.join(SCRIPT_DIR.parent, knowledge_dir)

    if args.session:
        # Single session mode
        stats = harvest_session(
            args.session, knowledge_dir, args.api_base, api_key, args.model,
            dry_run=args.dry_run, min_confidence=args.min_confidence
        )
        print(json.dumps(stats, indent=2))
        if stats['error']:
            sys.exit(1)
    elif args.batch:
        # Batch mode
        results = batch_harvest(
            args.sessions_dir, knowledge_dir, args.api_base, api_key, args.model,
            since=args.since, limit=args.limit, dry_run=args.dry_run
        )
        total_new = sum(r['facts_new'] for r in results)
        total_dup = sum(r['facts_dup'] for r in results)
        errors = sum(1 for r in results if r['error'])
        print(f"\nDone: {total_new} new facts, {total_dup} duplicates, {errors} errors")
    else:
        parser.print_help()
        sys.exit(1)


if __name__ == '__main__':
    main()
scripts/session_reader.py (new file, 142 lines)
@@ -0,0 +1,142 @@
#!/usr/bin/env python3
"""
session_reader.py — Parse Hermes session JSONL transcripts.

Each line in a session file is a JSON object representing a message.
Standard fields: role (user|assistant|system), content (str), timestamp (str).
Tool calls and tool results are also captured.
"""

import json
import sys
from pathlib import Path
from typing import Iterator, Optional


def read_session(path: str) -> list[dict]:
    """Read a session JSONL file and return all messages as a list."""
    messages = []
    with open(path, 'r', encoding='utf-8') as f:
        for line_num, line in enumerate(f, 1):
            line = line.strip()
            if not line:
                continue
            try:
                msg = json.loads(line)
                messages.append(msg)
            except json.JSONDecodeError as e:
                print(f"WARNING: Skipping malformed JSON at line {line_num}: {e}", file=sys.stderr)
    return messages


def read_session_iter(path: str) -> Iterator[dict]:
    """Iterate over session messages without loading all into memory."""
    with open(path, 'r', encoding='utf-8') as f:
        for line_num, line in enumerate(f, 1):
            line = line.strip()
            if not line:
                continue
            try:
                yield json.loads(line)
            except json.JSONDecodeError as e:
                print(f"WARNING: Skipping malformed JSON at line {line_num}: {e}", file=sys.stderr)


def extract_conversation(messages: list[dict]) -> list[dict]:
    """Extract user/assistant conversation turns, skipping tool-only messages."""
    conversation = []
    for msg in messages:
        role = msg.get('role', '')
        content = msg.get('content', '')

        # Skip empty messages and pure tool calls
        if role in ('user', 'assistant', 'system'):
            if isinstance(content, str) and content.strip():
                conversation.append({
                    'role': role,
                    'content': content.strip(),
                    'timestamp': msg.get('timestamp', '')
                })
            elif isinstance(content, list):
                # Multimodal content — extract text parts
                text_parts = []
                for part in content:
                    if isinstance(part, dict) and part.get('type') == 'text':
                        text_parts.append(part.get('text', ''))
                if text_parts:
                    conversation.append({
                        'role': role,
                        'content': '\n'.join(text_parts),
                        'timestamp': msg.get('timestamp', '')
                    })
    return conversation


def truncate_for_context(messages: list[dict], head: int = 50, tail: int = 50) -> list[dict]:
    """Truncate long sessions: keep first N + last N messages.

    This preserves session start (initial context) and end (final results),
    skipping the messy middle of long debugging sessions.
    """
    if len(messages) <= head + tail:
        return messages

    truncated = messages[:head]
    truncated.append({
        'role': 'system',
        'content': f'[{len(messages) - head - tail} messages truncated]',
        'timestamp': ''
    })
    truncated.extend(messages[-tail:])
    return truncated


def messages_to_text(messages: list[dict]) -> str:
    """Convert message list to plain text for LLM consumption."""
    lines = []
    for msg in messages:
        role = msg.get('role', 'unknown').upper()
        content = msg.get('content', '')
        if msg.get('role') == 'system' and 'truncated' in content:
            lines.append(f'--- {content} ---')
        else:
            lines.append(f'{role}: {content}')
    return '\n\n'.join(lines)


def get_session_metadata(path: str) -> dict:
    """Extract metadata from a session file (first message often has config info)."""
    messages = read_session(path)
    if not messages:
        return {'path': path, 'message_count': 0}

    first = messages[0]
    last = messages[-1]

    return {
        'path': path,
        'message_count': len(messages),
        'first_timestamp': first.get('timestamp', ''),
        'last_timestamp': last.get('timestamp', ''),
        'first_role': first.get('role', ''),
        'has_tool_calls': any(m.get('tool_calls') for m in messages),
    }


if __name__ == '__main__':
    if len(sys.argv) < 2:
        print(f"Usage: {sys.argv[0]} <session.jsonl>")
        sys.exit(1)

    path = sys.argv[1]
    meta = get_session_metadata(path)
    print(json.dumps(meta, indent=2))

    messages = read_session(path)
    conv = extract_conversation(messages)
    print(f"\nConversation: {len(conv)} turns")

    truncated = truncate_for_context(conv)
    print(f"After truncation: {len(truncated)} turns")
    print("\nPreview (first 500 chars):")
    print(messages_to_text(truncated[:5])[:500])
@@ -1,38 +0,0 @@
#!/usr/bin/env python3
"""Validate knowledge files and index.json against the schema."""
import json, sys
from pathlib import Path

VALID_CATEGORIES = {"fact", "pitfall", "pattern", "tool-quirk", "question"}
REQUIRED = {"id", "fact", "category", "domain", "confidence"}

def validate_fact(fact, src=""):
    errs = []
    for f in REQUIRED:
        if f not in fact: errs.append(f"{src}: missing '{f}'")
    if "category" in fact and fact["category"] not in VALID_CATEGORIES:
        errs.append(f"{src}: invalid category '{fact['category']}'")
    if "confidence" in fact:
        if not isinstance(fact["confidence"], (int, float)) or not (0 <= fact["confidence"] <= 1):
            errs.append(f"{src}: confidence must be 0.0-1.0")
    if "id" in fact:
        parts = fact["id"].split(":")
        if len(parts) != 3: errs.append(f"{src}: id must be domain:category:sequence")
    return errs

def main():
    idx = Path(__file__).parent.parent / "knowledge" / "index.json"
    if not idx.exists(): print(f"FAILED: {idx} not found"); sys.exit(1)
    data = json.load(open(idx))
    errs = []
    seen = set()
    for i, f in enumerate(data.get("facts", [])):
        errs.extend(validate_fact(f, f"[{i}]"))
        if "id" in f:
            if f["id"] in seen: errs.append(f"duplicate id '{f['id']}'")
            seen.add(f["id"])
    if errs:
        print(f"FAILED - {len(errs)} errors:"); [print(f" x {e}") for e in errs]; sys.exit(1)
    print(f"PASSED - {len(data.get('facts', []))} facts")

if __name__ == "__main__": main()