Compare commits: fix/676...fix/198-qu (1 commit, e1e42c3f8e)

Changed: GENOME.md (412 lines)

# GENOME.md — compounding-intelligence

**Generated:** 2026-04-17
**Repo:** Timmy_Foundation/compounding-intelligence
**Description:** Turn 1B+ daily agent tokens into durable, compounding fleet intelligence.

*Auto-generated codebase genome. Addresses timmy-home#676.*

---

## Project Overview

**What:** A system that turns 1B+ daily agent tokens into durable, compounding fleet intelligence.

**Why:** Every agent session starts at zero. The same mistakes get made repeatedly — the same HTTP 405 is rediscovered as a branch protection issue, the same token path is searched for from scratch. Intelligence evaporates when the session ends.

**How:** Three pipelines form a compounding loop:

```
SESSION ENDS → HARVESTER → KNOWLEDGE STORE → BOOTSTRAPPER → NEW SESSION STARTS SMARTER
MEASURER → Prove it's working
```

**Status:** Early stage. Template and test scaffolding exist. Core pipeline scripts (harvester.py, bootstrapper.py, measurer.py, session_reader.py) are planned but not yet implemented. The knowledge extraction prompt is complete and validated.

---
## Architecture

```mermaid
graph TD
    A[Session Transcript<br/>.jsonl] --> B[Harvester]
    B --> C{Extract Knowledge}
    C --> D[knowledge/index.json]
    C --> E[knowledge/global/*.md]
    C --> F[knowledge/repos/{repo}.md]
    C --> G[knowledge/agents/{agent}.md]
    D --> H[Bootstrapper]
    H --> I[Bootstrap Context<br/>2k token injection]
    I --> J[New Session<br/>starts smarter]
    J --> A
    D --> K[Measurer]
    K --> L[metrics/dashboard.md]
    K --> M[Velocity / Hit Rate<br/>Error Reduction]
```

### Pipeline 1: Harvester

**Status:** Prompt designed. Script not implemented.

Reads finished session transcripts (JSONL). Uses `templates/harvest-prompt.md` to extract durable knowledge into five categories:

| Category | Description | Example |
|----------|-------------|---------|
| `fact` | Concrete, verifiable information | "Repository X has 5 files" |
| `pitfall` | Errors encountered, wrong assumptions | "Token is at ~/.config/gitea/token, not env var" |
| `pattern` | Successful action sequences | "Deploy: test → build → push → webhook" |
| `tool-quirk` | Environment-specific behaviors | "URL format requires trailing slash" |
| `question` | Identified but unanswered | "Need optimal batch size for harvesting" |

Output schema per knowledge item:

```json
{
  "fact": "One sentence description",
  "category": "fact|pitfall|pattern|tool-quirk|question",
  "repo": "repo-name or 'global'",
  "confidence": 0.0-1.0
}
```
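
To make the schema concrete, here is a minimal validation sketch. Nothing below exists in the repo yet; `validate_fact` and its field rules are assumptions inferred from the schema above.

```python
# Hypothetical sketch — the harvester is not implemented, and this helper
# does not exist in the repo. Field rules are inferred from the schema above.
VALID_CATEGORIES = {"fact", "pitfall", "pattern", "tool-quirk", "question"}


def validate_fact(item: dict) -> bool:
    """Return True if a knowledge item matches the planned schema."""
    if not isinstance(item.get("fact"), str) or not item["fact"].strip():
        return False
    if item.get("category") not in VALID_CATEGORIES:
        return False
    if not isinstance(item.get("repo"), str) or not item["repo"]:
        return False
    confidence = item.get("confidence")
    return isinstance(confidence, (int, float)) and 0.0 <= confidence <= 1.0
```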

### Pipeline 2: Bootstrapper

**Status:** Not implemented.

Queries knowledge store before session start. Assembles a compact 2k-token context from relevant facts. Injects into session startup so the agent begins with full situational awareness.
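
A minimal sketch of the planned filter, sort, and truncate steps, assuming `knowledge/index.json` holds a JSON array of schema items; the function name, the field access, and the 4-characters-per-token estimate are all assumptions, not repo code.

```python
# Hypothetical sketch of the planned bootstrapper (not implemented).
import json


def build_bootstrap_context(index_path: str, repo: str, budget_tokens: int = 2000) -> str:
    with open(index_path) as f:
        facts = json.load(f)
    # Keep facts for this repo plus cross-repo ("global") knowledge
    relevant = [x for x in facts if x.get("repo") in (repo, "global")]
    # Highest-confidence facts first
    relevant.sort(key=lambda x: x.get("confidence", 0.0), reverse=True)
    lines, used = [], 0
    for item in relevant:
        cost = len(item["fact"]) // 4 + 1  # crude ~4-chars-per-token estimate
        if used + cost > budget_tokens:
            break
        lines.append(f"- [{item['category']}] {item['fact']}")
        used += cost
    return "\n".join(lines)
```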
### Pipeline 3: Measurer

**Status:** Not implemented.

Tracks compounding metrics: knowledge velocity (facts/day), error reduction (%), hit rate (knowledge used / knowledge available), task completion improvement.
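
These metrics reduce to simple ratios. A minimal sketch, assuming session records expose error counts and which facts were injected and used; none of these helpers exist yet.

```python
# Hypothetical sketch of the planned measurer metrics. Input shapes are
# assumptions; nothing here exists in the repo yet.
def knowledge_velocity(fact_count: int, days: int) -> float:
    """Facts extracted per day over a window."""
    return fact_count / max(days, 1)


def hit_rate(used_ids: set, available_ids: set) -> float:
    """Knowledge used / knowledge available."""
    return len(used_ids & available_ids) / max(len(available_ids), 1)


def error_reduction(errors_before: int, errors_after: int) -> float:
    """Percent drop in repeated errors between two periods."""
    if errors_before == 0:
        return 0.0
    return (errors_before - errors_after) / errors_before * 100.0
```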

---

## Directory Structure

```
compounding-intelligence/
├── README.md                       # Project overview and architecture
├── GENOME.md                       # This file (codebase genome)
├── knowledge/                      # [PLANNED] Knowledge store
│   ├── index.json                  # Machine-readable fact index
│   ├── global/                     # Cross-repo knowledge
│   ├── repos/{repo}.md             # Per-repo knowledge
│   └── agents/{agent}.md           # Agent-type notes
├── scripts/
│   ├── test_harvest_prompt.py      # Basic prompt validation (2.5KB)
│   └── test_harvest_prompt_comprehensive.py  # Full prompt structure test (6.8KB)
├── templates/
│   └── harvest-prompt.md           # Knowledge extraction prompt (3.5KB)
├── test_sessions/
│   ├── session_success.jsonl       # Happy path test data
│   ├── session_failure.jsonl       # Failure path test data
│   ├── session_partial.jsonl       # Incomplete session test data
│   ├── session_patterns.jsonl      # Pattern extraction test data
│   └── session_questions.jsonl     # Question identification test data
└── metrics/                        # [PLANNED] Compounding metrics
    └── dashboard.md
```

---

## Entry Points and Data Flow

### Entry Point 1: Knowledge Extraction (Harvester)

```
Input: Session transcript (JSONL)
    ↓
templates/harvest-prompt.md (LLM prompt)
    ↓
Knowledge items (JSON array)
    ↓
Output: knowledge/index.json + per-repo/per-agent markdown files
```

### Entry Point 2: Session Bootstrap (Bootstrapper)

```
Input: Session context (repo, agent type, task type)
    ↓
knowledge/index.json (query relevant facts)
    ↓
2k-token bootstrap context
    ↓
Output: Injected into session startup
```

### Entry Point 3: Measurement (Measurer)

```
Input: knowledge/index.json + session history
    ↓
Velocity, hit rate, error reduction calculations
    ↓
Output: metrics/dashboard.md
```

---
## Key Abstractions

### Knowledge Item

The atomic unit. One sentence, one category, one confidence score. Designed to stay small so that as many items as possible fit in the 2k-token bootstrap context.

### Knowledge Store

A directory structure that mirrors the fleet's mental model:

- `global/` — knowledge that applies everywhere (tool quirks, environment facts)
- `repos/` — knowledge specific to each repo
- `agents/` — knowledge specific to each agent type

### Confidence Score

0.0–1.0 scale. Defines how certain the harvester is about each extracted fact:

- 0.9–1.0: Explicitly stated with verification
- 0.7–0.8: Clearly implied by multiple data points
- 0.5–0.6: Suggested but not fully verified
- 0.3–0.4: Inferred from limited data
- 0.1–0.2: Speculative or uncertain

### Bootstrap Context

The 2k-token injection that a new session receives. Assembled from the most relevant knowledge items for the current task, filtered by confidence > 0.7, deduplicated (a possible scheme is sketched below), and compressed.
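
Deduplication is not specified anywhere in the repo; one plausible approach is a normalized-text fingerprint, sketched here purely as an assumption.

```python
# Hypothetical dedup sketch; the repo defines no fingerprint scheme yet.
import hashlib
import re


def fact_fingerprint(fact: str) -> str:
    """Hash of normalized wording so near-identical facts collide."""
    normalized = re.sub(r"\s+", " ", fact.strip().lower())
    return hashlib.sha256(normalized.encode()).hexdigest()[:16]


def dedupe(items: list) -> list:
    seen, unique = set(), []
    for item in items:
        fingerprint = fact_fingerprint(item["fact"])
        if fingerprint not in seen:
            seen.add(fingerprint)
            unique.append(item)
    return unique
```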
---
## API Surface

### Internal (scripts not yet implemented)

| Script | Input | Output | Status |
|--------|-------|--------|--------|
| `harvester.py` | Session JSONL path | Knowledge items JSON | PLANNED |
| `bootstrapper.py` | Repo + agent type | 2k-token context string | PLANNED |
| `measurer.py` | Knowledge store path | Metrics JSON | PLANNED |
| `session_reader.py` | Session JSONL path | Parsed transcript | PLANNED |

### Prompt (templates/harvest-prompt.md)

The extraction prompt is the core "API." It takes a session transcript and returns structured JSON (a hypothetical driver is sketched after this list). It defines:

- Five extraction categories
- Output format (JSON array of knowledge items)
- Confidence scoring rubric
- Constraints (no hallucination, specificity, relevance, brevity)
- Example input/output pair
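
A sketch of how a harvester might drive this prompt; `call_llm` is a stand-in for whatever client the implementation eventually uses, not an existing function.

```python
# Hypothetical prompt driver. call_llm is a placeholder; no LLM client
# exists in this repo yet.
import json
from pathlib import Path


def harvest(transcript_text: str, call_llm) -> list:
    prompt = Path("templates/harvest-prompt.md").read_text()
    raw = call_llm(f"{prompt}\n\n---\n\n{transcript_text}")
    items = json.loads(raw)  # the prompt specifies a JSON array
    if not isinstance(items, list):
        raise ValueError("expected a JSON array of knowledge items")
    return items
```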
---
## Test Coverage

### What Exists

| File | Tests | Coverage |
|------|-------|----------|
| `scripts/test_harvest_prompt.py` | 2 tests | Prompt file existence, sample transcript |
| `scripts/test_harvest_prompt_comprehensive.py` | 5 tests | Prompt structure, categories, fields, confidence scoring, size limits |
| `test_sessions/*.jsonl` | 5 sessions | Success, failure, partial, patterns, questions |

### What's Missing

1. **Harvester integration test** — Does the prompt actually extract correct knowledge from real transcripts? (A candidate test shape is sketched after this list.)
2. **Bootstrapper test** — Does it assemble relevant context correctly?
3. **Knowledge store test** — Does index.json maintain consistency?
4. **Confidence calibration test** — Do high-confidence facts actually prove true in later sessions?
5. **Deduplication test** — Are duplicate facts across sessions handled?
6. **Staleness test** — How does the system handle outdated knowledge?
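
A candidate shape for the missing integration test. It cannot run until a harvester exists; `harvest_session()` is an assumed future API and the assertions are illustrative.

```python
# Sketch only: harvest_session() is a planned, not an existing, API.
import unittest


class TestHarvesterIntegration(unittest.TestCase):
    def test_failure_session_yields_pitfalls(self):
        from harvester import harvest_session  # planned module
        items = harvest_session("test_sessions/session_failure.jsonl")
        self.assertIn("pitfall", {i["category"] for i in items})
        for i in items:
            self.assertTrue(0.0 <= i["confidence"] <= 1.0)
```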
---
## Security Considerations

1. **No secrets in knowledge store** — The harvester must filter out API keys, tokens, and credentials from extracted facts. The prompt constraints mention this, but there is no automated guard (a minimal guard is sketched after this list).

2. **Knowledge poisoning** — A malicious or corrupted session could inject false facts. Confidence scoring partially mitigates this, but there is no verification step.

3. **Access control** — The knowledge store has no access control. Any process that can read the directory can read all facts. In a multi-tenant setup, this is a concern.

4. **Transcript privacy** — Session transcripts may contain user data. The harvester must not extract personally identifiable information into the knowledge store.
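
A minimal guard sketch, assuming regex screening before facts are written; no such guard exists in the repo, and the patterns are illustrative, not exhaustive.

```python
# Hypothetical secret screen. Patterns are illustrative examples only,
# not a complete denylist.
import re

SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|token|secret|password)\s*[:=]\s*\S+"),
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
]


def looks_like_secret(fact: str) -> bool:
    return any(p.search(fact) for p in SECRET_PATTERNS)
```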
---
## The 100x Path (from README)

```
Month 1: 15,000 facts, sessions 20% faster
Month 2: 45,000 facts, sessions 40% faster, first-try success up 30%
Month 3: 90,000 facts, fleet measurably smarter per token
```

Each new session is better than the last. The intelligence compounds.

---

*Generated by codebase-genome pipeline. Ref: timmy-home#676.*

---

**New file:** `quality_gate.py` (297 lines)

#!/usr/bin/env python3
"""
quality_gate.py — Score and filter knowledge entries.

Scores each entry on 4 dimensions:
- Specificity: concrete examples vs vague generalities
- Actionability: can this be used to do something?
- Freshness: is this still accurate?
- Source quality: was the model/provider reliable?

Usage:
    from quality_gate import score_entry, filter_entries, quality_report

    score = score_entry(entry)
    filtered = filter_entries(entries, threshold=0.5)
    report = quality_report(entries)
"""

import json
import math
import re
from datetime import datetime, timezone
from typing import List, Optional

# Source quality scores (higher = more reliable)
SOURCE_QUALITY = {
    "claude-sonnet": 0.9,
    "claude-opus": 0.95,
    "gpt-4": 0.85,
    "gpt-4-turbo": 0.85,
    "gpt-5": 0.9,
    "mimo-v2-pro": 0.8,
    "gemini-pro": 0.8,
    "llama-3-70b": 0.75,
    "llama-3-8b": 0.7,
    "ollama": 0.6,
    "unknown": 0.5,
}

DEFAULT_SOURCE_QUALITY = 0.5

# Specificity indicators
SPECIFIC_INDICATORS = [
    r"\b\d+\.\d+",                  # decimal numbers
    r"\b\d{4}-\d{2}-\d{2}",         # dates
    r"\b[A-Z][a-z]+\s[A-Z][a-z]+",  # proper nouns
    r"`[^`]+`",                     # code/commands
    r"https?://",                   # URLs
    r"\b(example|instance|specifically|concretely)\b",
    r"\b(step \d|first|second|third)\b",
    r"\b(exactly|precisely|measured|counted)\b",
]

# Vagueness indicators (penalty)
VAGUE_INDICATORS = [
    r"\b(generally|usually|often|sometimes|might|could|perhaps)\b",
    r"\b(various|several|many|some|few)\b",
    r"\b(it depends|varies|differs)\b",
    r"\b(basically|essentially|fundamentally)\b",
    r"\b(everyone knows|it's obvious|clearly)\b",
]

# Actionability indicators
ACTIONABLE_INDICATORS = [
    r"\b(run|execute|install|deploy|configure|set up)\b",
    r"\b(use|apply|implement|create|build)\b",
    r"\b(check|verify|test|validate|confirm)\b",
    r"\b(fix|resolve|solve|debug|troubleshoot)\b",
    r"\b(if .+ then|when .+ do|to .+ use)\b",
    r"```[a-z]*\n",  # code blocks
    r"\$\s",         # shell commands
    r"\b\d+\.\s",    # numbered steps
]


def score_specificity(content: str) -> float:
    """Score specificity: 0=vague, 1=very specific."""
    content_lower = content.lower()
    score = 0.5  # baseline

    # Count specific indicators (case-insensitive)
    specific_count = sum(
        len(re.findall(p, content, re.IGNORECASE))
        for p in SPECIFIC_INDICATORS
    )

    # Count vague indicators
    vague_count = sum(
        len(re.findall(p, content_lower))
        for p in VAGUE_INDICATORS
    )

    # Adjust score
    score += min(specific_count * 0.05, 0.4)
    score -= min(vague_count * 0.08, 0.3)

    # Length bonus (longer = more detail, up to a point)
    word_count = len(content.split())
    if word_count > 50:
        score += min((word_count - 50) * 0.001, 0.1)

    return max(0.0, min(1.0, score))


def score_actionability(content: str) -> float:
    """Score actionability: 0=abstract, 1=highly actionable."""
    content_lower = content.lower()
    score = 0.3  # baseline (most knowledge is informational)

    # Count actionable indicators
    actionable_count = sum(
        len(re.findall(p, content_lower))
        for p in ACTIONABLE_INDICATORS
    )
    score += min(actionable_count * 0.1, 0.6)

    # Code blocks are highly actionable
    if "```" in content:
        score += 0.2

    # Numbered steps are actionable
    if re.search(r"\d+\.\s+\w", content):
        score += 0.1

    return max(0.0, min(1.0, score))


def score_freshness(timestamp: Optional[str]) -> float:
    """Score freshness: 1=new, decays over time."""
    if not timestamp:
        return 0.5

    try:
        if isinstance(timestamp, str):
            ts = datetime.fromisoformat(timestamp.replace("Z", "+00:00"))
        else:
            ts = timestamp

        now = datetime.now(timezone.utc)
        age_days = (now - ts).days

        # Exponential decay: 1.0 at day 0, ~0.37 at 180 days, ~0.13 at
        # 365 days, floored at 0.1
        score = math.exp(-age_days / 180)
        return max(0.1, min(1.0, score))
    except (ValueError, TypeError):
        return 0.5


def score_source_quality(model: Optional[str]) -> float:
    """Score source quality based on model/provider."""
    if not model:
        return DEFAULT_SOURCE_QUALITY

    # Normalize model name and match by substring
    model_lower = model.lower()
    for key, score in SOURCE_QUALITY.items():
        if key in model_lower:
            return score

    return DEFAULT_SOURCE_QUALITY


def score_entry(entry: dict) -> float:
    """
    Score a knowledge entry on quality (0.0-1.0).

    Weights:
    - specificity: 0.3
    - actionability: 0.3
    - freshness: 0.2
    - source_quality: 0.2
    """
    content = entry.get("content", entry.get("text", entry.get("response", "")))
    model = entry.get("model", entry.get("provenance", {}).get("model"))
    timestamp = entry.get("timestamp", entry.get("provenance", {}).get("timestamp"))

    specificity = score_specificity(content)
    actionability = score_actionability(content)
    freshness = score_freshness(timestamp)
    source = score_source_quality(model)

    return round(
        0.3 * specificity +
        0.3 * actionability +
        0.2 * freshness +
        0.2 * source,
        4
    )


def score_entry_detailed(entry: dict) -> dict:
    """Score with per-dimension breakdown."""
    content = entry.get("content", entry.get("text", entry.get("response", "")))
    model = entry.get("model", entry.get("provenance", {}).get("model"))
    timestamp = entry.get("timestamp", entry.get("provenance", {}).get("timestamp"))

    specificity = score_specificity(content)
    actionability = score_actionability(content)
    freshness = score_freshness(timestamp)
    source = score_source_quality(model)

    return {
        "score": round(0.3 * specificity + 0.3 * actionability + 0.2 * freshness + 0.2 * source, 4),
        "specificity": round(specificity, 4),
        "actionability": round(actionability, 4),
        "freshness": round(freshness, 4),
        "source_quality": round(source, 4),
    }


def filter_entries(entries: List[dict], threshold: float = 0.5) -> List[dict]:
    """Keep only entries at or above the quality threshold."""
    filtered = []
    for entry in entries:
        if score_entry(entry) >= threshold:
            filtered.append(entry)
    return filtered


def quality_report(entries: List[dict]) -> str:
    """Generate quality distribution report."""
    if not entries:
        return "No entries to analyze."

    scores = [score_entry(e) for e in entries]

    avg = sum(scores) / len(scores)
    min_score = min(scores)
    max_score = max(scores)

    # Distribution buckets
    buckets = {"high": 0, "medium": 0, "low": 0, "rejected": 0}
    for s in scores:
        if s >= 0.7:
            buckets["high"] += 1
        elif s >= 0.5:
            buckets["medium"] += 1
        elif s >= 0.3:
            buckets["low"] += 1
        else:
            buckets["rejected"] += 1

    lines = [
        "=" * 50,
        " QUALITY GATE REPORT",
        "=" * 50,
        f" Total entries: {len(entries)}",
        f" Average score: {avg:.3f}",
        f" Min: {min_score:.3f}",
        f" Max: {max_score:.3f}",
        "",
        " Distribution:",
    ]

    for bucket, count in buckets.items():
        pct = count / len(entries) * 100
        bar = "█" * int(pct / 5)
        lines.append(f" {bucket:<12} {count:>5} ({pct:>5.1f}%) {bar}")

    passed = buckets["high"] + buckets["medium"]
    lines.append(f"\n Pass rate (>= 0.5): {passed}/{len(entries)} ({passed/len(entries)*100:.1f}%)")
    lines.append("=" * 50)

    return "\n".join(lines)


def main():
    import argparse
    parser = argparse.ArgumentParser(description="Knowledge quality gate")
    parser.add_argument("files", nargs="+", help="JSONL files to score")
    parser.add_argument("--threshold", type=float, default=0.5, help="Quality threshold")
    parser.add_argument("--json", action="store_true", help="JSON output")
    parser.add_argument("--filter", action="store_true", help="Filter below threshold and report kept count")
    args = parser.parse_args()

    all_entries = []
    for filepath in args.files:
        with open(filepath) as f:
            for line in f:
                if line.strip():
                    all_entries.append(json.loads(line))

    if args.json:
        results = [{"entry": e, **score_entry_detailed(e)} for e in all_entries]
        print(json.dumps(results, indent=2))
    elif args.filter:
        filtered = filter_entries(all_entries, args.threshold)
        print(f"Kept {len(filtered)}/{len(all_entries)} entries (threshold: {args.threshold})")
    else:
        print(quality_report(all_entries))


if __name__ == "__main__":
    main()
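
For reference, a quick usage example against the module API above; the entry fields match what `score_entry` reads, and the file path in the CLI lines is illustrative.

```python
from quality_gate import score_entry_detailed

entry = {
    "content": "Run `kubectl apply -f deployment.yaml`, then verify with `kubectl get pods`.",
    "model": "claude-sonnet",
    "timestamp": "2026-04-17T00:00:00Z",
}
print(score_entry_detailed(entry))  # per-dimension breakdown plus weighted score

# CLI equivalents:
#   python3 quality_gate.py entries.jsonl
#   python3 quality_gate.py entries.jsonl --json
#   python3 quality_gate.py entries.jsonl --filter --threshold 0.6
```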

**New file:** `tests/test_quality_gate.py` (108 lines)

"""
|
||||
Tests for quality_gate.py — Knowledge entry quality scoring.
|
||||
"""
|
||||
|
||||
import unittest
|
||||
from datetime import datetime, timezone, timedelta
|
||||
|
||||
import sys
|
||||
from pathlib import Path
|
||||
sys.path.insert(0, str(Path(__file__).parent.parent))
|
||||
|
||||
from quality_gate import (
|
||||
score_specificity,
|
||||
score_actionability,
|
||||
score_freshness,
|
||||
score_source_quality,
|
||||
score_entry,
|
||||
filter_entries,
|
||||
)
|
||||
|
||||
|
||||
class TestScoreSpecificity(unittest.TestCase):
|
||||
def test_specific_content_scores_high(self):
|
||||
content = "Run `python3 deploy.py --env prod` on 2026-04-15. Example: step 1 configure nginx."
|
||||
score = score_specificity(content)
|
||||
self.assertGreater(score, 0.6)
|
||||
|
||||
def test_vague_content_scores_low(self):
|
||||
content = "It generally depends. Various factors might affect this. Basically, it varies."
|
||||
score = score_specificity(content)
|
||||
self.assertLess(score, 0.5)
|
||||
|
||||
def test_empty_scores_baseline(self):
|
||||
score = score_specificity("")
|
||||
self.assertAlmostEqual(score, 0.5, delta=0.1)
|
||||
|
||||
|
||||
class TestScoreActionability(unittest.TestCase):
|
||||
def test_actionable_content_scores_high(self):
|
||||
content = "1. Run `pip install -r requirements.txt`\n2. Execute `python3 train.py`\n3. Verify with `pytest`"
|
||||
score = score_actionability(content)
|
||||
self.assertGreater(score, 0.6)
|
||||
|
||||
def test_abstract_content_scores_low(self):
|
||||
content = "The concept of intelligence is fascinating and multifaceted."
|
||||
score = score_actionability(content)
|
||||
self.assertLess(score, 0.5)
|
||||
|
||||
|
||||
class TestScoreFreshness(unittest.TestCase):
|
||||
def test_recent_timestamp_scores_high(self):
|
||||
recent = datetime.now(timezone.utc).isoformat()
|
||||
score = score_freshness(recent)
|
||||
self.assertGreater(score, 0.9)
|
||||
|
||||
def test_old_timestamp_scores_low(self):
|
||||
old = (datetime.now(timezone.utc) - timedelta(days=365)).isoformat()
|
||||
score = score_freshness(old)
|
||||
self.assertLess(score, 0.2)
|
||||
|
||||
def test_none_returns_baseline(self):
|
||||
score = score_freshness(None)
|
||||
self.assertEqual(score, 0.5)
|
||||
|
||||
|
||||
class TestScoreSourceQuality(unittest.TestCase):
|
||||
def test_claude_scores_high(self):
|
||||
self.assertGreater(score_source_quality("claude-sonnet"), 0.85)
|
||||
|
||||
def test_ollama_scores_lower(self):
|
||||
self.assertLess(score_source_quality("ollama"), 0.7)
|
||||
|
||||
def test_unknown_returns_default(self):
|
||||
self.assertEqual(score_source_quality("unknown"), 0.5)
|
||||
|
||||
|
||||
class TestScoreEntry(unittest.TestCase):
|
||||
def test_good_entry_scores_high(self):
|
||||
entry = {
|
||||
"content": "To deploy: run `kubectl apply -f deployment.yaml`. Verify with `kubectl get pods`.",
|
||||
"model": "claude-sonnet",
|
||||
"timestamp": datetime.now(timezone.utc).isoformat(),
|
||||
}
|
||||
score = score_entry(entry)
|
||||
self.assertGreater(score, 0.6)
|
||||
|
||||
def test_poor_entry_scores_low(self):
|
||||
entry = {
|
||||
"content": "It depends. Various things might happen.",
|
||||
"model": "unknown",
|
||||
}
|
||||
score = score_entry(entry)
|
||||
self.assertLess(score, 0.5)
|
||||
|
||||
|
||||
class TestFilterEntries(unittest.TestCase):
|
||||
def test_filters_low_quality(self):
|
||||
entries = [
|
||||
{"content": "Run `deploy.py` to fix the issue.", "model": "claude"},
|
||||
{"content": "It might work sometimes.", "model": "unknown"},
|
||||
{"content": "Configure nginx: step 1 edit nginx.conf", "model": "gpt-4"},
|
||||
]
|
||||
filtered = filter_entries(entries, threshold=0.5)
|
||||
self.assertGreaterEqual(len(filtered), 2)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
unittest.main()
|
||||
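
The suite bootstraps its own import path, so it should run directly from the repo root with `python3 tests/test_quality_gate.py` (assuming quality_gate.py sits at the repo root, as the path insertion implies).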