GENOME.md — compounding-intelligence

Generated: 2026-04-17
Repo: Timmy_Foundation/compounding-intelligence
Description: Turn 1B+ daily agent tokens into durable, compounding fleet intelligence.
Project Overview
Every agent session starts at zero. The same HTTP 405 gets rediscovered as a branch protection issue. The same token path gets searched from scratch. Intelligence evaporates when the session ends.
Compounding-intelligence solves this with three pipelines forming a loop:
```
SESSION ENDS → HARVESTER → KNOWLEDGE STORE → BOOTSTRAPPER → NEW SESSION STARTS SMARTER
                                 ↓
                             MEASURER → prove it's working
```
Status: Active development. Core pipelines implemented: 18 scripts, 14 test files, and a knowledge store populated with real data.
Architecture
```mermaid
graph TD
    TRANS[Session Transcripts<br/>~/.hermes/sessions/*.jsonl] --> READER[session_reader.py]
    READER --> HARVESTER[harvester.py]
    HARVESTER -->|LLM extraction| PROMPT[harvest-prompt.md]
    HARVESTER --> DEDUP["deduplicate()"]
    DEDUP --> INDEX[knowledge/index.json]
    DEDUP --> GLOBAL[knowledge/global/*.yaml]
    DEDUP --> REPO[knowledge/repos/*.yaml]
    INDEX --> BOOTSTRAPPER[bootstrapper.py]
    BOOTSTRAPPER -->|filter + rank + truncate| CONTEXT[Bootstrap Context<br/>2k token injection]
    CONTEXT --> SESSION[New session starts smarter]
    INDEX --> VALIDATOR[validate_knowledge.py]
    INDEX --> STALENESS[knowledge_staleness_check.py]
    INDEX --> GAPS[knowledge_gap_identifier.py]
    TRANS --> SAMPLER[sampler.py]
    SAMPLER -->|score + rank| BEST[High-value sessions]
    BEST --> HARVESTER
    TRANS --> METADATA[session_metadata.py]
    METADATA --> SUMMARY[SessionSummary objects]
    INDEX --> DIFF[diff_analyzer.py]
    DIFF --> PROPOSALS[improvement_proposals.py]
    PROPOSALS --> PRIORITIES[priority_rebalancer.py]
```
Entry Points
Core Pipelines
| Script | Purpose | Key Functions |
|---|---|---|
| `harvester.py` | Extract knowledge from session transcripts | `harvest_session()`, `call_llm()`, `deduplicate()`, `validate_fact()` |
| `bootstrapper.py` | Build pre-session context from knowledge store | `build_bootstrap_context()`, `filter_facts()`, `sort_facts()`, `truncate_to_tokens()` |
| `session_reader.py` | Parse JSONL session transcripts | `read_session()`, `extract_conversation()`, `messages_to_text()` |
| `sampler.py` | Score and rank sessions for harvesting value | `scan_session_fast()`, `score_session()` |
| `session_metadata.py` | Extract structured metadata from sessions | `extract_session_metadata()`, `SessionSummary` |
Analysis & Quality
| Script | Purpose |
|---|---|
| `validate_knowledge.py` | Validate knowledge index schema compliance |
| `knowledge_staleness_check.py` | Detect stale knowledge (source changed since extraction) |
| `knowledge_gap_identifier.py` | Find untested functions, undocumented APIs, missing tests |
| `diff_analyzer.py` | Analyze code diffs for improvement signals |
| `improvement_proposals.py` | Generate ranked improvement proposals |
| `priority_rebalancer.py` | Rebalance priorities across proposals |
| `automation_opportunity_finder.py` | Find manual steps that can be automated |
| `dead_code_detector.py` | Detect unused code |
| `dependency_graph.py` | Map dependency relationships |
| `perf_bottleneck_finder.py` | Find performance bottlenecks |
| `refactoring_opportunity_finder.py` | Identify refactoring targets |
| `gitea_issue_parser.py` | Parse Gitea issues for knowledge extraction |
Automation
| Script | Purpose |
|---|---|
| `session_pair_harvester.py` | Extract training pairs from sessions |
Data Flow
1. Session ends → .jsonl written to ~/.hermes/sessions/
2. sampler.py scores sessions by age, recency, repo coverage
3. harvester.py reads top sessions, calls LLM with harvest-prompt.md
4. LLM extracts facts/pitfalls/patterns/quirks/questions
5. deduplicate() checks against existing index via fact_fingerprint()
6. validate_fact() checks schema compliance
7. write_knowledge() appends to knowledge/index.json + per-repo YAML
8. On next session start, bootstrapper.py:
a. Loads knowledge/index.json
b. Filters by session's repo and agent type
c. Sorts by confidence (high first), then recency
d. Truncates to 2k token budget
e. Injects as pre-context
9. Agent starts with full situational awareness instead of zero
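Steps 8a–8d can be sketched as a single function. This is an illustrative approximation: the real `bootstrapper.py` splits the work across `filter_facts()`, `sort_facts()`, and `truncate_to_tokens()`, and the `len(text) // 4` token estimate here is an assumption rather than the tokenizer it actually uses.

```python
import json

def build_bootstrap_context(index_path, repo, token_budget=2000):
    """Filter, rank, and truncate knowledge for one session (illustrative).

    Tokens are approximated as len(text) // 4 -- an assumption; the real
    bootstrapper may count tokens differently.
    """
    with open(index_path) as f:
        facts = json.load(f)

    # (b) Keep facts for this session's repo plus cross-repo ("global") ones.
    relevant = [x for x in facts if x["repo"] in (repo, "global")]

    # (c) Highest confidence first; break ties by most recent extraction.
    relevant.sort(key=lambda x: (x["confidence"], x["extracted_at"]), reverse=True)

    # (d) Greedily pack facts until the token budget is exhausted.
    lines, used = [], 0
    for item in relevant:
        cost = len(item["fact"]) // 4 + 1
        if used + cost > token_budget:
            break
        lines.append("- " + item["fact"])
        used += cost
    return "\n".join(lines)
```

Note that per-agent filtering (step 8b) is omitted for brevity; it would be one more predicate in the list comprehension.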
Key Abstractions
Knowledge Item (fact/pitfall/pattern/quirk/question)
```json
{
  "fact": "Gitea token is at ~/.config/gitea/token",
  "category": "tool-quirk",
  "repo": "global",
  "confidence": 0.9,
  "evidence": "Found during clone attempt",
  "source_session": "2026-04-13_abc123",
  "extracted_at": "2026-04-13T20:00:00Z"
}
```
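Deduplication against the index (step 5 of the data flow) keys on a fingerprint of the item. The sketch below assumes a fingerprint over the normalized fact text hashed with SHA-256; the actual normalization and hash inside `fact_fingerprint()` in `harvester.py` may differ.

```python
import hashlib

def fact_fingerprint(item: dict) -> str:
    """Stable fingerprint for a knowledge item (illustrative).

    Lowercases and collapses whitespace so trivially reworded duplicates
    collide; an assumption about what the real normalization does.
    """
    normalized = " ".join(item["fact"].lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()[:16]

def deduplicate(new_items, existing_index):
    """Drop items whose fingerprint already exists in the index."""
    seen = {fact_fingerprint(x) for x in existing_index}
    unique = []
    for item in new_items:
        fp = fact_fingerprint(item)
        if fp not in seen:
            seen.add(fp)
            unique.append(item)
    return unique
```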
SessionSummary (session_metadata.py)
Extracted metadata per session: duration, token count, tools used, repos touched, error count, outcome.
Gap / GapReport (knowledge_gap_identifier.py)
Structured gap analysis: untested functions, undocumented APIs, missing tests. Severity: critical/high/medium/low.
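A minimal model of these gap records might look like the following. The field names and severity validation are assumptions for illustration; the actual `Gap`/`GapReport` classes in `knowledge_gap_identifier.py` may be shaped differently.

```python
from dataclasses import dataclass, field

SEVERITIES = ("critical", "high", "medium", "low")

@dataclass
class Gap:
    kind: str      # e.g. "untested-function", "undocumented-api", "missing-test"
    target: str    # what the gap points at, e.g. "scripts/sampler.py:score_session"
    severity: str = "medium"

    def __post_init__(self):
        # Reject severities outside the documented four-level scale.
        if self.severity not in SEVERITIES:
            raise ValueError(f"unknown severity: {self.severity}")

@dataclass
class GapReport:
    gaps: list = field(default_factory=list)

    def by_severity(self, severity: str) -> list:
        return [g for g in self.gaps if g.severity == severity]
```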
Knowledge Index (knowledge/index.json)
Machine-readable fact store. 12KB, populated with real data. Categories: fact, pitfall, pattern, tool-quirk, question.
Knowledge Store
```
knowledge/
├── index.json            # Master fact store (12KB, populated)
├── SCHEMA.md             # Schema documentation
├── global/
│   ├── pitfalls.yaml     # Cross-repo pitfalls (2KB)
│   └── tool-quirks.yaml  # Tool-specific quirks (2KB)
├── repos/
│   ├── hermes-agent.yaml # hermes-agent knowledge (2KB)
│   └── the-nexus.yaml    # the-nexus knowledge (2KB)
└── agents/               # Per-agent knowledge (empty)
```
API Surface
LLM API (consumed)
| Provider | Endpoint | Usage |
|---|---|---|
| Nous Research | https://inference-api.nousresearch.com/v1 | Knowledge extraction |
| Ollama | http://localhost:11434/v1 | Local fallback |
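Both endpoints speak the OpenAI-compatible chat API, so the fallback chain can be sketched with the standard library alone. The model name, request shape, and error handling below are illustrative assumptions, not the exact `call_llm()` in `harvester.py`.

```python
import json
import urllib.request

PROVIDERS = [
    # Tried in order; endpoints match the table above.
    ("nous", "https://inference-api.nousresearch.com/v1/chat/completions"),
    ("ollama", "http://localhost:11434/v1/chat/completions"),
]

def build_request(url, prompt, api_key=None, model="placeholder-model"):
    """Build an OpenAI-compatible chat request (model name is a placeholder)."""
    headers = {"Content-Type": "application/json"}
    if api_key:
        # Key travels only in the request header, per the security notes below.
        headers["Authorization"] = f"Bearer {api_key}"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(url, data=body, headers=headers)

def call_llm(prompt, api_key=None):
    """Try each provider in order, falling back to the next on failure."""
    for name, url in PROVIDERS:
        try:
            req = build_request(url, prompt, api_key if name == "nous" else None)
            with urllib.request.urlopen(req, timeout=60) as resp:
                return json.loads(resp.read())["choices"][0]["message"]["content"]
        except OSError:
            continue  # network/HTTP error: fall through to the local provider
    raise RuntimeError("all LLM providers failed")
```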
File API (consumed/produced)
| Path | Format | Direction |
|---|---|---|
| `~/.hermes/sessions/*.jsonl` | JSONL | Input (session transcripts) |
| `knowledge/index.json` | JSON | Output (master fact store) |
| `knowledge/global/*.yaml` | YAML | Output (cross-repo knowledge) |
| `knowledge/repos/*.yaml` | YAML | Output (per-repo knowledge) |
| `templates/harvest-prompt.md` | Markdown | Config (extraction prompt) |
Test Coverage
14 test files covering core pipelines:
| Test File | Covers |
|---|---|
| `test_harvest_prompt.py` | Prompt validation, hallucination detection |
| `test_harvest_prompt_comprehensive.py` | Extended prompt testing |
| `test_harvester_pipeline.py` | Harvester extraction + dedup |
| `test_bootstrapper.py` | Context building, filtering, truncation |
| `test_session_pair_harvester.py` | Training pair extraction |
| `test_improvement_proposals.py` | Proposal generation |
| `test_priority_rebalancer.py` | Priority scoring |
| `test_knowledge_staleness.py` | Staleness detection |
| `test_automation_opportunity_finder.py` | Automation detection |
| `test_diff_analyzer.py` | Diff analysis |
| `test_gitea_issue_parser.py` | Issue parsing |
| `test_refactoring_opportunity_finder.py` | Refactoring signals |
| `test_knowledge_gap_identifier.py` | Gap analysis |
| `test_perf_bottleneck_finder.py` | Perf bottleneck detection |
Coverage Gaps
- session_reader.py — No dedicated test file (tested indirectly)
- sampler.py — No test file (scoring logic untested)
- session_metadata.py — No test file
- validate_knowledge.py — No test file
- knowledge_staleness_check.py — Tested but limited
Security Considerations
API Key Handling
- harvester.py reads the API key from ~/.hermes/auth.json or environment variables
- Key passed to the LLM API in request headers only
- No key logging
Knowledge Integrity
- validate_fact() checks schema before writing
- deduplicate() prevents duplicate entries via fingerprint
- knowledge_staleness_check.py detects when source code changed but knowledge didn't
- Confidence scores prevent low-quality knowledge from polluting the store
File Safety
- Knowledge writes are append-only (never deletes)
- Bootstrap context is truncated to budget (no prompt injection via knowledge)
- Session reader handles malformed JSONL gracefully
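The "handles malformed JSONL gracefully" property can be sketched like this: skip unparseable lines instead of aborting, so a transcript truncated mid-write still yields its earlier events. Illustrative only; the actual `read_session()` in `session_reader.py` may report errors differently.

```python
import json

def read_session(path):
    """Yield parsed events from a JSONL transcript, skipping bad lines.

    A truncated final line (e.g. the session was killed mid-write)
    should not abort harvesting of the rest of the transcript.
    """
    with open(path, encoding="utf-8", errors="replace") as f:
        for lineno, line in enumerate(f, 1):
            line = line.strip()
            if not line:
                continue  # tolerate blank lines
            try:
                yield json.loads(line)
            except json.JSONDecodeError:
                # Malformed line: note it and keep going.
                print(f"skipping malformed line {lineno} in {path}")
```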
File Index
```
scripts/
  harvester.py (473 lines) — Core knowledge extraction
  bootstrapper.py (302 lines) — Pre-session context builder
  session_reader.py (137 lines) — JSONL session parser
  sampler.py (363 lines) — Session scoring + ranking
  session_metadata.py (271 lines) — Session metadata extraction
  validate_knowledge.py (44 lines) — Index validation
  knowledge_staleness_check.py (125 lines) — Staleness detection
  knowledge_gap_identifier.py (291 lines) — Gap analysis engine
  diff_analyzer.py (203 lines) — Diff analysis
  improvement_proposals.py (518 lines) — Proposal generation
  priority_rebalancer.py (745 lines) — Priority scoring
  automation_opportunity_finder.py (600 lines) — Automation detection
  dead_code_detector.py (270 lines) — Dead code detection
  dependency_graph.py (220 lines) — Dependency mapping
  perf_bottleneck_finder.py (635 lines) — Perf analysis
  refactoring_opportunity_finder.py (46 lines) — Refactoring signals
  gitea_issue_parser.py (140 lines) — Gitea issue parsing
  session_pair_harvester.py (224 lines) — Training pair extraction
knowledge/
  index.json (12KB) — Master fact store
  SCHEMA.md (3KB) — Schema docs
  global/pitfalls.yaml (2KB) — Cross-repo pitfalls
  global/tool-quirks.yaml (2KB) — Tool quirks
  repos/hermes-agent.yaml (2KB) — Repo-specific knowledge
  repos/the-nexus.yaml (2KB) — Repo-specific knowledge
templates/
  harvest-prompt.md (4KB) — Extraction prompt
test_sessions/ (5 files) — Sample transcripts
tests/ + scripts/test_* (14 files) — Test suite
```
Total: ~6,500 lines of code across 18 scripts + 14 test files.
Generated by Codebase Genome pipeline — Issue #676