# Compounding Intelligence
Turn 1B+ daily tokens into durable, compounding fleet intelligence.
## The Problem
20,991 sessions on disk. Each one starts at zero. Every agent rediscovers on its own that the same HTTP 405 means a branch protection issue. The intelligence from a million tokens of work evaporates when the session ends.
## The Solution
Three pipelines that form a compounding loop:
```
SESSION ENDS → HARVESTER → KNOWLEDGE STORE → BOOTSTRAPPER → NEW SESSION STARTS SMARTER
MEASURER → Prove it's working
```
## Architecture
### Pipeline 1: Harvester
Reads finished session transcripts. Extracts durable knowledge: facts, pitfalls, patterns, tool quirks. Stores in `knowledge/`.
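A minimal sketch of that harvest pass, assuming JSONL transcripts and a simple fact record; the function name `harvest_session`, the pattern list, and the index layout are illustrative, not the actual `scripts/harvester.py` interface:

```python
# Illustrative harvest pass over one finished session transcript (not the real
# harvester.py API). Paths, record fields, and patterns are assumptions.
import json
import re
from pathlib import Path

PITFALL_PATTERNS = [
    re.compile(r"HTTP 4\d\d"),              # recurring API errors
    re.compile(r"permission denied", re.I),  # access pitfalls
]

def harvest_session(transcript_path: Path, knowledge_dir: Path) -> list[dict]:
    """Scan a JSONL transcript and append candidate facts to the knowledge index."""
    knowledge_dir.mkdir(parents=True, exist_ok=True)
    facts = []
    for line in transcript_path.read_text().splitlines():
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed transcript lines
        text = str(event.get("content", ""))
        if any(p.search(text) for p in PITFALL_PATTERNS):
            facts.append({"kind": "pitfall", "text": text.strip(),
                          "source": transcript_path.name})
    index_path = knowledge_dir / "index.json"
    existing = json.loads(index_path.read_text()) if index_path.exists() else []
    index_path.write_text(json.dumps(existing + facts, indent=2))
    return facts
```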
### Pipeline 2: Bootstrapper
Before a session starts, queries the knowledge store for relevant facts. Assembles a compact 2k-token context. Injects it into the session so it starts with full situational awareness.
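A sketch of that assembly step, assuming the index format from the harvester sketch above; the relevance filter and the rough 4-chars-per-token budget are assumptions, not the actual `scripts/bootstrapper.py` behavior:

```python
# Illustrative bootstrap-context builder (not the real bootstrapper.py API).
import json
from pathlib import Path

def build_bootstrap_context(knowledge_dir: Path, repo: str,
                            budget_tokens: int = 2000) -> str:
    """Pick repo-relevant facts and pack them until the rough token budget is hit."""
    facts = json.loads((knowledge_dir / "index.json").read_text())
    relevant = [f for f in facts
                if repo in f.get("source", "") or f.get("kind") == "pitfall"]
    lines, used = [], 0
    for fact in relevant:
        cost = len(fact["text"]) // 4  # crude ~4 chars/token estimate
        if used + cost > budget_tokens:
            break
        lines.append(f"- {fact['text']}")
        used += cost
    return "## Known facts for this session\n" + "\n".join(lines)
```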
### Pipeline 3: Measurer
Tracks whether compounding is actually happening: knowledge velocity, error reduction, hit rate, task completion. A daily report proves the loop works.
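Two of those metrics, sketched against the same index; the field names `added_at` and `hits` are assumptions about how the index might record provenance and usage, not the actual `scripts/measurer.py` schema:

```python
# Illustrative metrics over the fact index (not the real measurer.py API).
import json
from datetime import datetime, timedelta, timezone
from pathlib import Path

def knowledge_velocity(index_path: Path, days: int = 7) -> float:
    """New facts per day over the trailing window (assumes ISO 8601 added_at)."""
    facts = json.loads(index_path.read_text())
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    recent = [f for f in facts
              if datetime.fromisoformat(f["added_at"].replace("Z", "+00:00")) >= cutoff]
    return len(recent) / days

def hit_rate(index_path: Path) -> float:
    """Share of stored facts injected into at least one session."""
    facts = json.loads(index_path.read_text())
    return sum(1 for f in facts if f.get("hits", 0) > 0) / max(len(facts), 1)
```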
### Connector Pack (EPIC #233)
Sovereign personal archive connectors: Twitter/X, Discord, Slack, WhatsApp, Notion, iMessage, Google.
Connectors mirror local exports or use explicit API tokens, then normalize → redact → index → sync with provenance.
See [`connectors/`](connectors/README.md) for the full connector suite and usage.
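Every connector normalizes into a single event shape. The field names below come from the connector pack's `SourceEvent` description; the dataclass itself is a sketch, not `connectors/schema.py` verbatim:

```python
# Illustrative SourceEvent shape (field names per the connector pack; the
# dataclass and defaults are a sketch, not the actual schema.py definition).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SourceEvent:
    source: str                      # e.g. "twitter_archive"
    account: str                     # which account produced the event
    thread: Optional[str]            # conversation/thread identifier, if any
    author: str
    timestamp: str                   # ISO 8601
    content: str                     # normalized, redacted text
    attachments: list = field(default_factory=list)
    raw_ref: str = ""                # pointer back to the original export record
    hash: str = ""                   # dedup key
    consent_scope: str = "private"   # gates what downstream indexing may use
```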
## Directory Structure
```
├── knowledge/
│   ├── index.json            # Machine-readable fact index
│   ├── global/               # Cross-repo knowledge
│   ├── repos/{repo}.md       # Per-repo knowledge
│   └── agents/{agent}.md     # Agent-type notes
├── scripts/
│   ├── harvester.py          # Post-session knowledge extractor
│   ├── bootstrapper.py       # Pre-session context loader
│   ├── measurer.py           # Compounding metrics
│   └── session_reader.py     # JSONL parser
├── connectors/               # Personal archive connectors (EPIC #233)
│   ├── __init__.py
│   ├── base.py
│   ├── schema.py
│   ├── twitter_archive.py
│   └── README.md
├── metrics/
│   └── dashboard.md          # Human-readable status
└── templates/
    ├── bootstrap-context.md
    └── harvest-prompt.md
```
## The 100x Path
```
Month 1: 15,000 facts, sessions 20% faster
Month 2: 45,000 facts, sessions 40% faster, first-try success up 30%
Month 3: 90,000 facts, fleet measurably smarter per token
```
Each new session is better than the last. The intelligence compounds.
## Issues
See [all issues](https://forge.alexanderwhitestone.com/Timmy_Foundation/compounding-intelligence/issues) for the full roadmap.
**Epics:**
- EPIC 1: Session Harvester (#2)
- EPIC 2: Knowledge Store & Bootstrap (#3)
- EPIC 3: Compounding Measurement (#4)
- EPIC 4: Retroactive Harvest (#5)