# GENOME.md — compounding-intelligence
**Generated:** 2026-04-17
**Repo:** Timmy_Foundation/compounding-intelligence
**Description:** Turn 1B+ daily agent tokens into durable, compounding fleet intelligence.
*Auto-generated codebase genome. Addresses timmy-home#676.*
---
## Project Overview
**What:** A system that turns 1B+ daily agent tokens into durable, compounding fleet intelligence.
**Why:** Every agent session starts at zero. The same mistakes get made repeatedly — the same HTTP 405 is rediscovered as a branch protection issue, the same token path is searched for from scratch. Intelligence evaporates when the session ends.
**How:** Three pipelines form a compounding loop:
```
SESSION ENDS → HARVESTER → KNOWLEDGE STORE → BOOTSTRAPPER → NEW SESSION STARTS SMARTER
MEASURER → Prove it's working
```
**Status:** Early stage. Template and test scaffolding exist. Core pipeline scripts (harvester.py, bootstrapper.py, measurer.py, session_reader.py) are planned but not yet implemented. The knowledge extraction prompt is complete and validated.
---
## Architecture
```mermaid
graph TD
    A[Session Transcript<br/>.jsonl] --> B[Harvester]
    B --> C{Extract Knowledge}
    C --> D[knowledge/index.json]
    C --> E[knowledge/global/*.md]
    C --> F["knowledge/repos/{repo}.md"]
    C --> G["knowledge/agents/{agent}.md"]
    D --> H[Bootstrapper]
    H --> I[Bootstrap Context<br/>2k token injection]
    I --> J[New Session<br/>starts smarter]
    J --> A
    D --> K[Measurer]
    K --> L[metrics/dashboard.md]
    K --> M[Velocity / Hit Rate<br/>Error Reduction]
```
### Pipeline 1: Harvester
**Status:** Prompt designed. Script not implemented.
Reads finished session transcripts (JSONL). Uses `templates/harvest-prompt.md` to extract durable knowledge into five categories:
| Category | Description | Example |
|----------|-------------|---------|
| `fact` | Concrete, verifiable information | "Repository X has 5 files" |
| `pitfall` | Errors encountered, wrong assumptions | "Token is at ~/.config/gitea/token, not env var" |
| `pattern` | Successful action sequences | "Deploy: test → build → push → webhook" |
| `tool-quirk` | Environment-specific behaviors | "URL format requires trailing slash" |
| `question` | Identified but unanswered | "Need optimal batch size for harvesting" |
Output schema per knowledge item:
```json
{
"fact" : "Gitea token is at ~/.config/gitea/toke n" ,
"category" : "tool-quirk" ,
"repo" : "global" ,
"confidence" : 0.9 ,
"evidence" : "Found during clone attempt" ,
"source_session" : "2026-04-13_abc123" ,
"extracted_at" : "2026-04-13T20:00:00Z"
"fact" : "One sentence descriptio n" ,
"category" : "fact|pitfall|pattern| tool-quirk|question " ,
"repo" : "repo-name or ' global' " ,
"confidence" : 0.0 -1.0
}
```
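For illustration, a minimal sketch of how the planned harvester might represent and validate items against the schema above. The dataclass, field names, and checks are assumptions for this document, not existing code:
```python
# Hypothetical sketch: an in-memory knowledge item mirroring the schema above.
# Nothing here exists in the repo yet.
from dataclasses import dataclass

VALID_CATEGORIES = {"fact", "pitfall", "pattern", "tool-quirk", "question"}

@dataclass
class KnowledgeItem:
    fact: str          # one-sentence description
    category: str      # one of VALID_CATEGORIES
    repo: str          # repo name or "global"
    confidence: float  # 0.0-1.0, per the confidence rubric

    def validate(self) -> None:
        if not self.fact.strip():
            raise ValueError("fact must be a non-empty sentence")
        if self.category not in VALID_CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")
        if not self.repo:
            raise ValueError("repo must be a repo name or 'global'")
        if not 0.0 <= self.confidence <= 1.0:
            raise ValueError("confidence must be between 0.0 and 1.0")
```
A harvester could call `KnowledgeItem(**item).validate()` on each element of the LLM's JSON array before anything is appended to `knowledge/index.json`.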
### Pipeline 2: Bootstrapper
**Status:** Not implemented.
Queries knowledge store before session start. Assembles a compact 2k-token context from relevant facts. Injects into session startup so the agent begins with full situational awareness.
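A minimal sketch of what that assembly could look like once `bootstrapper.py` exists. The index layout, the 4-characters-per-token estimate, and the function name are assumptions for illustration:
```python
# Hypothetical bootstrapper sketch: filter, rank, and truncate knowledge
# into a ~2k-token pre-session context. Not yet implemented in the repo.
import json
from pathlib import Path

TOKEN_BUDGET = 2000
CHARS_PER_TOKEN = 4  # rough heuristic; a real tokenizer would be more precise

def build_bootstrap_context(index_path: Path, repo: str) -> str:
    items = json.loads(index_path.read_text())  # assumed: a JSON list of items
    relevant = [i for i in items if i["repo"] in (repo, "global")]
    relevant.sort(key=lambda i: i["confidence"], reverse=True)  # high confidence first

    lines, used = [], 0
    for item in relevant:
        line = f"- [{item['category']}] {item['fact']} (conf {item['confidence']:.1f})"
        cost = len(line) // CHARS_PER_TOKEN + 1
        if used + cost > TOKEN_BUDGET:
            break
        lines.append(line)
        used += cost
    return "\n".join(lines)
```
Usage would be something like `build_bootstrap_context(Path("knowledge/index.json"), repo="compounding-intelligence")`, with the returned string injected at session startup.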
### Pipeline 3: Measurer
**Status:** Not implemented.
Tracks compounding metrics: knowledge velocity (facts/day), error reduction (%), hit rate (knowledge used / knowledge available), task completion improvement.
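A rough sketch of how `measurer.py` might compute these once it exists; the input shapes (an item list plus simple per-session counters) are assumptions:
```python
# Hypothetical measurer sketch: compounding metrics from the knowledge index
# and per-session counters. Not yet implemented in the repo.
import json
from pathlib import Path

def knowledge_velocity(index_path: Path, days: int) -> float:
    """Facts per day over the measurement window."""
    items = json.loads(index_path.read_text())
    return len(items) / max(days, 1)

def hit_rate(facts_used: int, facts_available: int) -> float:
    """Share of available knowledge that sessions actually used."""
    return facts_used / facts_available if facts_available else 0.0

def error_reduction(errors_before: int, errors_after: int) -> float:
    """Percentage drop in errors between two comparable session batches."""
    if errors_before == 0:
        return 0.0
    return 100.0 * (errors_before - errors_after) / errors_before
```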
---
## Directory Structure
```
compounding-intelligence/
├── README.md                              # Project overview and architecture
├── GENOME.md                              # This file (codebase genome)
├── knowledge/                             # [PLANNED] Knowledge store
│   ├── index.json                         # Machine-readable fact index
│   ├── global/                            # Cross-repo knowledge
│   ├── repos/{repo}.md                    # Per-repo knowledge
│   └── agents/{agent}.md                  # Agent-type notes
├── scripts/
│   ├── test_harvest_prompt.py             # Basic prompt validation (2.5KB)
│   └── test_harvest_prompt_comprehensive.py  # Full prompt structure test (6.8KB)
├── templates/
│   └── harvest-prompt.md                  # Knowledge extraction prompt (3.5KB)
├── test_sessions/
│   ├── session_success.jsonl              # Happy path test data
│   ├── session_failure.jsonl              # Failure path test data
│   ├── session_partial.jsonl              # Incomplete session test data
│   ├── session_patterns.jsonl             # Pattern extraction test data
│   └── session_questions.jsonl            # Question identification test data
└── metrics/                               # [PLANNED] Compounding metrics
    └── dashboard.md
```
---
## Entry Points and Data Flow
### Entry Point 1: Knowledge Extraction (Harvester)
```
Input: Session transcript (JSONL)
↓
templates/harvest-prompt.md (LLM prompt)
↓
Knowledge items (JSON array)
↓
Output: knowledge/index.json + per-repo/per-agent markdown files
```
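Before the prompt can be filled, the transcript has to be flattened into plain text. A sketch of that step, assuming each JSONL line carries a `{"role": ..., "content": ...}` message (the planned `session_reader.py` does not exist yet, so the shape is an assumption):
```python
# Hypothetical reader sketch: flatten a JSONL transcript into plain text
# that can be pasted into templates/harvest-prompt.md.
import json
from pathlib import Path

def transcript_to_text(path: Path) -> str:
    turns = []
    for line in path.read_text().splitlines():
        if not line.strip():
            continue
        try:
            msg = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed lines rather than failing the harvest
        role = msg.get("role", "unknown")
        content = msg.get("content", "")
        if content:
            turns.append(f"{role}: {content}")
    return "\n".join(turns)

# Example: text = transcript_to_text(Path("test_sessions/session_success.jsonl"))
```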
### Entry Point 2: Session Bootstrap (Bootstrapper)
```
Input: Session context (repo, agent type, task type)
↓
knowledge/index.json (query relevant facts)
↓
2k-token bootstrap context
↓
Output: Injected into session startup
```
### Entry Point 3: Measurement (Measurer)
```
Input: knowledge/index.json + session history
↓
Velocity, hit rate, error reduction calculations
↓
Output: metrics/dashboard.md
```
---
## Key Abstractions
### Knowledge Item
The atomic unit. One sentence, one category, one confidence score. Designed to be small enough that 1000 items fit in a 2k-token bootstrap context.
### Knowledge Store
A directory structure that mirrors the fleet's mental model:
- `global/` — knowledge that applies everywhere (tool quirks, environment facts)
- `repos/` — knowledge specific to each repo
- `agents/` — knowledge specific to each agent type
### Confidence Score
0.0–1.0 scale. Defines how certain the harvester is about each extracted fact:
- 0.9–1.0: Explicitly stated with verification
- 0.7–0.8: Clearly implied by multiple data points
- 0.5–0.6: Suggested but not fully verified
- 0.3–0.4: Inferred from limited data
- 0.1–0.2: Speculative or uncertain
### Bootstrap Context
The 2k-token injection that a new session receives. Assembled from the most relevant knowledge items for the current task, filtered by confidence > 0.7, deduplicated, and compressed.
---
## API Surface
### Internal (scripts not yet implemented)
| Script | Input | Output | Status |
|--------|-------|--------|--------|
| `harvester.py` | Session JSONL path | Knowledge items JSON | PLANNED |
| `bootstrapper.py` | Repo + agent type | 2k-token context string | PLANNED |
| `measurer.py` | Knowledge store path | Metrics JSON | PLANNED |
| `session_reader.py` | Session JSONL path | Parsed transcript | PLANNED |
### Prompt (templates/harvest-prompt.md)
The extraction prompt is the core "API." It takes a session transcript and returns structured JSON; a calling-convention sketch follows the list below. It defines:
- Five extraction categories
- Output format (JSON array of knowledge items)
- Confidence scoring rubric
- Constraints (no hallucination, specificity, relevance, brevity)
- Example input/output pair
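The exact template variables are not documented here, so the following is only a sketch of the calling convention: fill the prompt with a transcript, send it to an LLM, and recover the JSON array from the reply. The `{transcript}` placeholder and the `call_llm` callable are assumptions:
```python
# Hypothetical harvest step: render the prompt, then pull the JSON array of
# knowledge items out of the model's reply. call_llm() stands in for whatever
# client the harvester eventually uses.
import json
import re
from pathlib import Path

def extract_items(reply: str) -> list[dict]:
    # Grab everything from the first '[' to the last ']' in the reply.
    match = re.search(r"\[.*\]", reply, re.DOTALL)
    return json.loads(match.group(0)) if match else []

def harvest(transcript_text: str, call_llm) -> list[dict]:
    prompt = Path("templates/harvest-prompt.md").read_text()
    rendered = prompt.replace("{transcript}", transcript_text)  # assumed placeholder
    return extract_items(call_llm(rendered))
```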
---
## Test Coverage
### What Exists
| File | Tests | Coverage |
|------|-------|----------|
| `scripts/test_harvest_prompt.py` | 2 tests | Prompt file existence, sample transcript |
| `scripts/test_harvest_prompt_comprehensive.py` | 5 tests | Prompt structure, categories, fields, confidence scoring, size limits |
| `test_sessions/*.jsonl` | 5 sessions | Success, failure, partial, patterns, questions |
### What's Missing
1. **Harvester integration test** — Does the prompt actually extract correct knowledge from real transcripts?
2. **Bootstrapper test** — Does it assemble relevant context correctly?
3. **Knowledge store test** — Does the index.json maintain consistency?
4. **Confidence calibration test** — Do high-confidence facts actually prove true in later sessions?
5. **Deduplication test** — Are duplicate facts across sessions handled? (One possible check is sketched below.)
6. **Staleness test** — How does the system handle outdated knowledge?
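For the deduplication gap, one plausible approach normalizes each fact sentence and fingerprints it, so re-harvested facts collapse to a single entry. The fingerprint scheme and helper names are hypothetical; none of this exists in the repo yet:
```python
# Hypothetical dedup helper plus a tiny test for it.
import hashlib

def fact_fingerprint(fact: str) -> str:
    normalized = " ".join(fact.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()[:16]

def deduplicate(items: list[dict]) -> list[dict]:
    seen, unique = set(), []
    for item in items:
        fp = fact_fingerprint(item["fact"])
        if fp not in seen:
            seen.add(fp)
            unique.append(item)
    return unique

def test_duplicate_facts_collapse():
    items = [
        {"fact": "Gitea needs a trailing slash", "confidence": 0.9},
        {"fact": "gitea needs a  trailing slash", "confidence": 0.8},
    ]
    assert len(deduplicate(items)) == 1
```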
---
## Security Considerations
1. **No secrets in knowledge store** — The harvester must filter out API keys, tokens, and credentials from extracted facts. The prompt constraints mention this, but there is no automated guard (a possible one is sketched below).
2. **Knowledge poisoning** — A malicious or corrupted session could inject false facts. Confidence scoring partially mitigates this, but there is no verification step.
3. **Access control** — The knowledge store has no access control. Any process that can read the directory can read all facts. In a multi-tenant setup, this is a concern.
4. **Transcript privacy** — Session transcripts may contain user data. The harvester must not extract personally identifiable information into the knowledge store.
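A possible automated guard for item 1, sketched as a simple pattern scan. The regexes are illustrative only and would need tuning before being relied on:
```python
# Hypothetical guard: drop any extracted fact that looks like it contains a
# credential before it reaches the knowledge store. Patterns are examples only.
import re

SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|token|password|secret)\s*[:=]\s*\S+"),
    re.compile(r"ghp_[A-Za-z0-9]{20,}"),   # GitHub-style personal access tokens
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
]

def looks_like_secret(fact: str) -> bool:
    return any(p.search(fact) for p in SECRET_PATTERNS)

def filter_secrets(items: list[dict]) -> list[dict]:
    return [i for i in items if not looks_like_secret(i["fact"])]
```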
---
## The 100x Path (from README)
```
Month 1: 15,000 facts, sessions 20% faster
Month 2: 45,000 facts, sessions 40% faster, first-try success up 30%
Month 3: 90,000 facts, fleet measurably smarter per token
```
Each new session is better than the last. The intelligence compounds.
---
*Generated by codebase-genome pipeline. Ref: timmy-home#676.*