# Compounding Intelligence — Scripts API Reference

*Generated: 2026-04-26 11:02 UTC*

This document is an automatically generated reference for the public API surface of all scripts in `scripts/`. Each section covers one script: module purpose, public functions, and their signatures.

---
## `scripts/api_doc_generator.py`

API Doc Generator — Issue #98

| Function | Signature | Description |
|----------|-----------|-------------|
| `extract_functions_from_ast` | `extract_functions_from_ast(tree, file_rel)` | Extract public function names, signatures, and first-line doc summaries. |
| `parse_module` | `parse_module(filepath)` | Parse a Python file and return its module-level docstring and public functions. |
| `scan_scripts_dir` | `scan_scripts_dir(scripts_dir)` | Scan all `.py` files in `scripts/` and extract API info. |
| `render_markdown` | `render_markdown(modules)` | Generate full `docs/API.md` content from the scanned modules. |
| `render_json` | `render_json(modules)` | Emit machine-readable JSON version of the API reference. |
| `main` | `main()` | - |

## `scripts/automation_opportunity_finder.py`

Automation Opportunity Finder — Scan fleet for manual processes that could be automated.

| Function | Signature | Description |
|----------|-----------|-------------|
| `analyze_cron_jobs` | `analyze_cron_jobs(hermes_home)` | Analyze cron job definitions for automation gaps. |
| `analyze_documents` | `analyze_documents(root_dirs)` | Scan documentation for manual step patterns. |
| `analyze_scripts` | `analyze_scripts(root_dirs)` | Detect repeated command sequences in scripts. |
| `analyze_session_transcripts` | `analyze_session_transcripts(session_dirs)` | Find repeated tool-call patterns in session transcripts. |
| `analyze_shell_history` | `analyze_shell_history(root_dirs)` | Find repeated shell commands from history files. |
| `deduplicate_proposals` | `deduplicate_proposals(proposals)` | Remove duplicate proposals based on title similarity. |
| `rank_proposals` | `rank_proposals(proposals)` | Sort proposals by impact * confidence (highest first). |
| `format_text_report` | `format_text_report(proposals)` | Format proposals as human-readable text. |
| `main` | `main()` | - |
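The ranking contract above (sort by `impact * confidence`, highest first) is simple enough to sketch. The dict fields and sample proposals below are illustrative, not the script's actual proposal schema:

```python
def rank_proposals(proposals):
    """Sort proposals by impact * confidence, highest first (illustrative sketch)."""
    return sorted(proposals, key=lambda p: p["impact"] * p["confidence"], reverse=True)

proposals = [
    {"title": "automate deploys", "impact": 5, "confidence": 0.6},     # score 3.0
    {"title": "cache test fixtures", "impact": 3, "confidence": 0.9},  # score 2.7
    {"title": "script log rotation", "impact": 8, "confidence": 0.5},  # score 4.0
]
ranked = rank_proposals(proposals)  # "script log rotation" ranks first
```

Multiplying impact by confidence down-weights high-impact guesses that are unlikely to pan out, which is why the 8-point proposal only narrowly beats the well-evidenced ones.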
## `scripts/bootstrapper.py`

Bootstrapper — assemble pre-session context from knowledge store.

| Function | Signature | Description |
|----------|-----------|-------------|
| `load_index` | `load_index(index_path)` | Load and validate the knowledge index. |
| `filter_facts` | `filter_facts(facts, repo, agent, include_global)` | Filter facts by repo, agent, and global scope. |
| `sort_facts` | `sort_facts(facts)` | Sort facts by confidence (desc), then category priority, then fact text. |
| `load_repo_knowledge` | `load_repo_knowledge(repo)` | Load per-repo knowledge markdown if it exists. |
| `load_agent_knowledge` | `load_agent_knowledge(agent)` | Load per-agent knowledge markdown if it exists. |
| `load_global_knowledge` | `load_global_knowledge()` | Load all global knowledge markdown files. |
| `render_facts_section` | `render_facts_section(facts, category, label)` | Render a section of facts for a single category. |
| `estimate_tokens` | `estimate_tokens(text)` | Rough token estimate. |
| `truncate_to_tokens` | `truncate_to_tokens(text, max_tokens)` | Truncate text to approximately `max_tokens`, cutting at line boundaries. |
| `build_bootstrap_context` | `build_bootstrap_context(repo, agent, include_global, max_tokens, index_path)` | Build the full bootstrap context block. |
| `main` | `main()` | - |
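The `estimate_tokens` / `truncate_to_tokens` pair suggests a character-count heuristic with line-boundary cuts. A minimal sketch, assuming the common ~4 characters per token ratio (the actual ratio and cut logic in the script are unknown):

```python
def estimate_tokens(text):
    # Rough heuristic: ~4 characters per token for English prose (assumed ratio).
    return len(text) // 4

def truncate_to_tokens(text, max_tokens):
    """Keep whole lines until the token budget is exhausted."""
    kept, used = [], 0
    for line in text.splitlines():
        line_tokens = estimate_tokens(line) + 1  # +1 for the newline
        if used + line_tokens > max_tokens:
            break
        kept.append(line)
        used += line_tokens
    return "\n".join(kept)
```

Cutting at line boundaries matters here because the output is markdown: truncating mid-line could leave a dangling table row or half a bullet in the assembled context.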
## `scripts/dead_code_detector.py`

Dead Code Detector for Python Codebases

| Function | Signature | Description |
|----------|-----------|-------------|
| `is_safe_unused` | `is_safe_unused(name, filepath)` | Check if an unused name is expected to be unused. |
| `get_git_blame` | `get_git_blame(filepath, lineno)` | Get the last author of a line via `git blame`. |
| `analyze_file` | `analyze_file(filepath)` | Analyze a single Python file for dead code. |
| `scan_repo` | `scan_repo(repo_path, exclude_patterns)` | Scan an entire repo for dead code. |
| `main` | `main()` | - |

## `scripts/dedup.py`

dedup.py — Knowledge deduplication: content hash + semantic similarity.

| Function | Signature | Description |
|----------|-----------|-------------|
| `normalize_text` | `normalize_text(text)` | Normalize text for hashing: lowercase, collapse whitespace, strip. |
| `content_hash` | `content_hash(text)` | SHA-256 hash of normalized text for exact dedup. |
| `tokenize` | `tokenize(text)` | Simple tokenizer: lowercase words, 3+ chars. |
| `token_similarity` | `token_similarity(a, b)` | Token-based Jaccard similarity (0.0–1.0). |
| `quality_score` | `quality_score(fact)` | Compute quality score for merge ranking. |
| `merge_facts` | `merge_facts(keep, drop)` | Merge two near-duplicate facts, keeping higher-quality fields. |
| `dedup_facts` | `dedup_facts(facts, exact_threshold, near_threshold, dry_run)` | Deduplicate a list of knowledge facts. |
| `dedup_index_file` | `dedup_index_file(input_path, output_path, near_threshold, dry_run)` | Deduplicate an `index.json` file. |
| `generate_test_duplicates` | `generate_test_duplicates(n)` | Generate test facts with intentional duplicates for testing. |
| `main` | `main()` | - |
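The near-duplicate path described above (tokenize, then Jaccard similarity of the token sets) can be sketched as follows. The tokenizer follows the stated rule (lowercase words of 3+ characters), but the exact regex is an assumption:

```python
import re

def tokenize(text):
    # Lowercase alphanumeric runs of 3+ characters, per the tokenizer description.
    return set(re.findall(r"[a-z0-9]{3,}", text.lower()))

def token_similarity(a, b):
    """Jaccard similarity of token sets, in [0.0, 1.0]."""
    ta, tb = tokenize(a), tokenize(b)
    if not ta and not tb:
        return 1.0  # two empty texts are trivially identical
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)
```

With a near-duplicate threshold somewhere below 1.0, "deploy via staging server" and "deploy via production server" (3 shared tokens out of 5 distinct, similarity 0.6) would be treated as distinct facts, while rewordings that share most content words would merge.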
## `scripts/dependency_graph.py`

Cross-Repo Dependency Graph Builder

| Function | Signature | Description |
|----------|-----------|-------------|
| `normalize_repo_name` | `normalize_repo_name(name)` | Normalize a repo name for comparison. |
| `scan_file_for_deps` | `scan_file_for_deps(filepath, content, own_repo)` | Scan a file's content for references to other repos. |
| `scan_repo` | `scan_repo(repo_path, repo_name)` | Scan a repo directory for dependencies. |
| `detect_cycles` | `detect_cycles(graph)` | Detect circular dependencies using DFS. |
| `to_dot` | `to_dot(graph)` | Generate DOT format output. |
| `to_mermaid` | `to_mermaid(graph)` | Generate Mermaid format output. |
| `main` | `main()` | - |
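`detect_cycles` is described as DFS-based; a standard coloring DFS over an adjacency-list graph looks roughly like this (the `{repo: [deps]}` representation is an assumption about the script's graph shape):

```python
def detect_cycles(graph):
    """Find circular dependencies in a {node: [neighbors]} dict via DFS."""
    WHITE, GREY, BLACK = 0, 1, 2  # unvisited / on current path / finished
    color = {n: WHITE for n in graph}
    cycles, stack = [], []

    def dfs(node):
        color[node] = GREY
        stack.append(node)
        for nbr in graph.get(node, []):
            state = color.get(nbr, BLACK)  # nodes without entries have no out-edges
            if state == GREY:
                # Back edge: the cycle is the stack slice from nbr onward.
                cycles.append(stack[stack.index(nbr):] + [nbr])
            elif state == WHITE:
                dfs(nbr)
        stack.pop()
        color[node] = BLACK

    for n in list(graph):
        if color[n] == WHITE:
            dfs(n)
    return cycles
```

A GREY neighbor means the edge points back into the current DFS path, which is exactly a circular dependency; BLACK nodes are already fully explored and cannot start a new cycle.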
## `scripts/diff_analyzer.py`

Diff Analyzer — Parse unified diffs and categorize every change.

*(no public functions — script runs as `main()` only)*

## `scripts/freshness.py`

Knowledge Freshness Cron — Detect stale entries from code changes (Issue #200)

| Function | Signature | Description |
|----------|-----------|-------------|
| `compute_file_hash` | `compute_file_hash(filepath)` | Compute SHA-256 hash of a file. Returns `None` if the file doesn't exist. |
| `get_git_file_changes` | `get_git_file_changes(repo_path, days)` | Get files changed in git in the last N days. |
| `load_knowledge_entries` | `load_knowledge_entries(knowledge_dir)` | Load knowledge entries from YAML files in the knowledge directory. |
| `check_freshness` | `check_freshness(knowledge_dir, repo_root, days)` | Check freshness of knowledge entries against recent code changes. |
| `update_stale_hashes` | `update_stale_hashes(knowledge_dir, repo_root)` | Update hashes for stale entries. Returns count of updated entries. |
| `format_report` | `format_report(result, max_items)` | Format freshness check results as a human-readable report. |
| `main` | `main()` | - |
## `scripts/gitea_issue_parser.py`

Gitea Issue Body Parser — Extract structured data from markdown issue bodies.

| Function | Signature | Description |
|----------|-----------|-------------|
| `parse_issue_body` | `parse_issue_body(body, title, labels)` | Parse a Gitea issue markdown body into structured JSON. |
| `fetch_issue_from_url` | `fetch_issue_from_url(url)` | Fetch an issue from a Gitea API URL and parse it. |
| `main` | `main()` | - |
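A parser like `parse_issue_body` has to carve the markdown body into sections before it can extract fields. A generic sketch of that first step, splitting on `##` headings (the heading names, return shape, and `_preamble` key are assumptions, not the script's actual output):

```python
import re

def split_sections(body):
    """Split a markdown issue body into {heading: text} (illustrative sketch)."""
    current = "_preamble"          # text before the first heading
    buckets = {current: []}
    for line in body.splitlines():
        m = re.match(r"^##\s+(.+)", line)
        if m:
            current = m.group(1).strip().lower()
            buckets[current] = []
        else:
            buckets[current].append(line)
    return {k: "\n".join(v).strip() for k, v in buckets.items()}
```

From here, field extraction is per-section work: checkbox lists under an "Acceptance Criteria" heading, issue references in a "Depends on" section, and so on.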
## `scripts/harvester.py`

harvester.py — Extract durable knowledge from Hermes session transcripts.

| Function | Signature | Description |
|----------|-----------|-------------|
| `find_api_key` | `find_api_key()` | Find API key from common locations. |
| `load_extraction_prompt` | `load_extraction_prompt()` | Load the extraction prompt template. |
| `call_llm` | `call_llm(prompt, transcript, api_base, api_key, model)` | Call the LLM API to extract knowledge from a transcript. |
| `parse_extraction_response` | `parse_extraction_response(content)` | Parse the LLM response to extract knowledge items. |
| `load_existing_knowledge` | `load_existing_knowledge(knowledge_dir)` | Load the existing knowledge index. |
| `fact_fingerprint` | `fact_fingerprint(fact)` | Generate a deduplication fingerprint for a fact. |
| `deduplicate` | `deduplicate(new_facts, existing, similarity_threshold)` | Remove duplicate facts from `new_facts` that already exist in the knowledge store. |
| `validate_fact` | `validate_fact(fact)` | Validate a single knowledge item has required fields. |
| `write_knowledge` | `write_knowledge(index, new_facts, knowledge_dir, source_session)` | Write new facts to the knowledge store. |
| `harvest_session` | `harvest_session(session_path, knowledge_dir, api_base, api_key, model, dry_run, min_confidence)` | Harvest knowledge from a single session. |
| `batch_harvest` | `batch_harvest(sessions_dir, knowledge_dir, api_base, api_key, model, since, limit, dry_run)` | Harvest knowledge from multiple sessions in batch. |
| `main` | `main()` | - |

## `scripts/improvement_proposals.py`

Improvement Proposal Generator for compounding-intelligence.

| Function | Signature | Description |
|----------|-----------|-------------|
| `analyze_sessions` | `analyze_sessions(sessions)` | Analyze session data to find waste patterns. |
| `generate_proposals` | `generate_proposals(patterns, hourly_rate, implementation_overhead)` | Generate improvement proposals from waste patterns. |
| `format_proposals_markdown` | `format_proposals_markdown(proposals, patterns, generated_at)` | Format proposals as a markdown document. |
| `format_proposals_json` | `format_proposals_json(proposals)` | Format proposals as JSON. |
| `main` | `main()` | - |

## `scripts/knowledge_gap_identifier.py`

Knowledge Gap Identifier — Pipeline 10.7

*(no public functions — script runs as `main()` only)*
## `scripts/knowledge_staleness_check.py`

Knowledge Store Staleness Detector — Detect stale knowledge entries by comparing source file hashes.

| Function | Signature | Description |
|----------|-----------|-------------|
| `compute_file_hash` | `compute_file_hash(filepath)` | Compute SHA-256 hash of a file. Returns `None` if the file doesn't exist. |
| `check_staleness` | `check_staleness(index_path, repo_root)` | Check all entries in knowledge index for staleness. |
| `fix_hashes` | `fix_hashes(index_path, repo_root)` | Add hashes to entries missing them. Returns count of fixed entries. |
| `main` | `main()` | - |
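The hash-comparison approach reduces to recomputing a SHA-256 over the source file and checking it against the hash recorded when the knowledge entry was written. A sketch under the assumption that each entry carries `source` and `source_hash` fields (the field names are guesses):

```python
import hashlib
from pathlib import Path

def compute_file_hash(filepath):
    """SHA-256 of a file's bytes; None if the file doesn't exist."""
    p = Path(filepath)
    if not p.exists():
        return None
    return hashlib.sha256(p.read_bytes()).hexdigest()

def entry_is_stale(entry, repo_root):
    """An entry is stale when its source file's hash no longer matches."""
    stored = entry.get("source_hash")
    if not stored:
        return False  # no recorded hash, nothing to compare against
    current = compute_file_hash(Path(repo_root) / entry["source"])
    return current != stored
```

Note that a deleted source file also flags as stale here (`None != stored`), which matches the intuition that knowledge about vanished code needs review.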
## `scripts/perf_bottleneck_finder.py`

Performance Bottleneck Finder — Identify slow tests, builds, and CI steps.

| Function | Signature | Description |
|----------|-----------|-------------|
| `find_slow_tests_pytest` | `find_slow_tests_pytest(repo_path)` | Run `pytest --durations` and parse slow tests. |
| `find_slow_tests_by_scan` | `find_slow_tests_by_scan(repo_path)` | Scan test files for patterns that indicate slow tests. |
| `analyze_build_artifacts` | `analyze_build_artifacts(repo_path)` | Find large build artifacts that slow down builds. |
| `analyze_makefile_targets` | `analyze_makefile_targets(repo_path)` | Analyze Makefile for potentially slow targets. |
| `analyze_github_actions` | `analyze_github_actions(repo_path)` | Analyze GitHub Actions workflow files for inefficiencies. |
| `analyze_gitea_ci` | `analyze_gitea_ci(repo_path)` | Analyze Gitea/Drone CI config files. |
| `find_slow_imports` | `find_slow_imports(repo_path)` | Find Python files with heavy import chains. |
| `severity_sort_key` | `severity_sort_key(b)` | Sort by severity then duration. |
| `generate_report` | `generate_report(repo_path)` | Run all analyses and generate a performance report. |
| `format_markdown` | `format_markdown(report)` | Format report as markdown. |
| `main` | `main()` | - |

## `scripts/priority_rebalancer.py`

Priority Rebalancer — Re-evaluate issue priorities based on accumulated data.

| Function | Signature | Description |
|----------|-----------|-------------|
| `collect_knowledge_signals` | `collect_knowledge_signals(knowledge_dir)` | Analyze knowledge store for coverage gaps and staleness. |
| `collect_staleness_signals` | `collect_staleness_signals(scripts_dir, knowledge_dir)` | Run staleness checker if available. |
| `collect_metrics_signals` | `collect_metrics_signals(metrics_dir)` | Analyze metrics directory for pipeline health. |
| `extract_priority` | `extract_priority(labels)` | Extract priority level from issue labels. |
| `compute_issue_score` | `compute_issue_score(issue, repo, signals, now)` | Compute priority score for a single issue. |
| `generate_report` | `generate_report(scores, signals, org, repos_scanned)` | Generate the full priority report. |
| `generate_markdown_report` | `generate_markdown_report(report)` | Generate human-readable markdown report. |
| `main` | `main()` | - |
## `scripts/refactoring_opportunity_finder.py`

Finds refactoring opportunities in codebases.

| Function | Signature | Description |
|----------|-----------|-------------|
| `compute_file_complexity` | `compute_file_complexity(filepath)` | Compute cyclomatic complexity for a Python file. |
| `calculate_refactoring_score` | `calculate_refactoring_score(metrics)` | Calculate a refactoring priority score (0–100) based on file metrics. |
| `scan_directory` | `scan_directory(directory, extensions)` | Scan directory for source files. |
| `generate_proposals` | `generate_proposals(directory, min_score)` | Generate refactoring proposals by analyzing source files. |
| `main` | `main()` | - |
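A common way to approximate cyclomatic complexity with the `ast` module, in the spirit of `compute_file_complexity` (the exact node types the script counts, and how it aggregates per function versus per file, are unknown):

```python
import ast

# Each of these node types adds one branch to the control-flow graph.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp, ast.BoolOp)

def cyclomatic_complexity(source):
    """1 + number of branch points: a standard approximation."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(n, BRANCH_NODES) for n in ast.walk(tree))
```

This is deliberately coarse (a `BoolOp` with three operands counts once, not twice), but coarse is fine for ranking files against each other rather than reporting exact McCabe numbers.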
## `scripts/sampler.py`

sampler.py — Score and rank sessions by harvest value.

| Function | Signature | Description |
|----------|-----------|-------------|
| `scan_session_fast` | `scan_session_fast(path)` | Extract scoring metadata from a session without parsing the full JSONL. |
| `parse_session_timestamp` | `parse_session_timestamp(filename)` | Parse timestamp from session filename. |
| `score_session` | `score_session(meta, now, seen_repos)` | Score a session for harvest value. Returns (score, breakdown). |
| `main` | `main()` | - |

## `scripts/session_metadata.py`

session_metadata.py - Extract structured metadata from Hermes session transcripts.

| Function | Signature | Description |
|----------|-----------|-------------|
| `extract_session_metadata` | `extract_session_metadata(file_path)` | Extract structured metadata from a Hermes session JSONL transcript. |
| `process_session_directory` | `process_session_directory(directory_path, output_file)` | Process all JSONL files in a directory. |
| `main` | `main()` | CLI entry point. |

## `scripts/session_pair_harvester.py`

Session Transcript → Training Pair Harvester

| Function | Signature | Description |
|----------|-----------|-------------|
| `compute_hash` | `compute_hash(text)` | Content hash for deduplication. |
| `extract_pairs_from_session` | `extract_pairs_from_session(session_data, min_ratio, min_response_words)` | Extract terse→rich pairs from a single session object. |
| `extract_from_jsonl_file` | `extract_from_jsonl_file(filepath, **kwargs)` | Extract pairs from a session JSONL file. |
| `deduplicate_pairs` | `deduplicate_pairs(pairs)` | Remove duplicate pairs across files. |
| `main` | `main()` | - |
## `scripts/session_reader.py`

session_reader.py — Parse Hermes session JSONL transcripts.

| Function | Signature | Description |
|----------|-----------|-------------|
| `read_session` | `read_session(path)` | Read a session JSONL file and return all messages as a list. |
| `read_session_iter` | `read_session_iter(path)` | Iterate over session messages without loading all into memory. |
| `extract_conversation` | `extract_conversation(messages)` | Extract user/assistant conversation turns, skipping tool-only messages. |
| `truncate_for_context` | `truncate_for_context(messages, head, tail)` | Truncate long sessions: keep first N + last N messages. |
| `messages_to_text` | `messages_to_text(messages)` | Convert message list to plain text for LLM consumption. |
| `get_session_metadata` | `get_session_metadata(path)` | Extract metadata from a session file (first message often has config info). |
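`truncate_for_context` keeps the first and last N messages of long sessions. The sketch below shows the shape of that cut, with a gap marker in the middle (the marker's format and the default head/tail values are made up):

```python
def truncate_for_context(messages, head=10, tail=10):
    """Keep the first `head` and last `tail` messages; drop the middle."""
    if len(messages) <= head + tail:
        return list(messages)  # short session: nothing to drop
    dropped = len(messages) - head - tail
    marker = {"role": "system", "content": f"[... {dropped} messages omitted ...]"}
    return messages[:head] + [marker] + messages[-tail:]
```

The head carries the session's setup (repo, task, config) and the tail carries the resolution, which is usually where harvestable knowledge lives; the middle is mostly tool churn.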
## `scripts/test_automation_opportunity_finder.py`

Tests for scripts/automation_opportunity_finder.py — 8 tests.

| Function | Signature | Description |
|----------|-----------|-------------|
| `test_analyze_cron_jobs_no_file` | `test_analyze_cron_jobs_no_file()` | Returns empty list when no cron jobs file exists. |
| `test_analyze_cron_jobs_disabled` | `test_analyze_cron_jobs_disabled()` | Detects disabled cron jobs. |
| `test_analyze_cron_jobs_errors` | `test_analyze_cron_jobs_errors()` | Detects cron jobs with error status. |
| `test_analyze_documents_finds_todos` | `test_analyze_documents_finds_todos()` | Detects TODO markers in documents. |
| `test_analyze_scripts_repeated_commands` | `test_analyze_scripts_repeated_commands()` | Detects repeated shell commands across scripts. |
| `test_analyze_session_transcripts` | `test_analyze_session_transcripts()` | Detects repeated tool-call sequences. |
| `test_deduplicate_proposals` | `test_deduplicate_proposals()` | Deduplicates proposals with similar titles. |
| `test_rank_proposals` | `test_rank_proposals()` | Ranks proposals by impact * confidence. |

## `scripts/test_bootstrapper.py`

Tests for bootstrapper.py — context assembly from knowledge store.

| Function | Signature | Description |
|----------|-----------|-------------|
| `make_index` | `make_index(facts, tmp_dir)` | Create a temporary `index.json` with given facts. |
| `test_empty_index` | `test_empty_index()` | Empty knowledge store produces graceful output. |
| `test_filter_by_repo` | `test_filter_by_repo()` | Filter facts by repository. |
| `test_filter_by_agent` | `test_filter_by_agent()` | Filter facts by agent type. |
| `test_no_global_flag` | `test_no_global_flag()` | Excluding global facts works. |
| `test_sort_by_confidence` | `test_sort_by_confidence()` | Facts sort by confidence descending. |
| `test_sort_pitfalls_first` | `test_sort_pitfalls_first()` | Pitfalls sort before facts at the same confidence. |
| `test_truncate_to_tokens` | `test_truncate_to_tokens()` | Truncation cuts at line boundary. |
| `test_estimate_tokens` | `test_estimate_tokens()` | Token estimation is reasonable. |
| `test_build_full_context` | `test_build_full_context()` | Full context with facts renders correctly. |
| `test_max_tokens_respected` | `test_max_tokens_respected()` | Output respects the `max_tokens` limit. |
| `test_missing_index_graceful` | `test_missing_index_graceful()` | Missing `index.json` doesn't crash. |
## `scripts/test_diff_analyzer.py`

Tests for scripts/diff_analyzer.py — 10 tests.

| Function | Signature | Description |
|----------|-----------|-------------|
| `test_empty` | `test_empty()` | - |
| `test_addition` | `test_addition()` | - |
| `test_deletion` | `test_deletion()` | - |
| `test_modification` | `test_modification()` | - |
| `test_rename` | `test_rename()` | - |
| `test_multiple_files` | `test_multiple_files()` | - |
| `test_binary` | `test_binary()` | - |
| `test_to_dict` | `test_to_dict()` | - |
| `test_context_only` | `test_context_only()` | - |
| `test_multi_hunk` | `test_multi_hunk()` | - |
| `run_all` | `run_all()` | - |

## `scripts/test_gitea_issue_parser.py`

Tests for scripts/gitea_issue_parser.py

| Function | Signature | Description |
|----------|-----------|-------------|
| `test_basic_parsing` | `test_basic_parsing()` | - |
| `test_numbered_criteria` | `test_numbered_criteria()` | - |
| `test_epic_ref_from_body` | `test_epic_ref_from_body()` | - |
| `test_empty_body` | `test_empty_body()` | - |
| `test_no_sections` | `test_no_sections()` | - |
| `test_multiple_sections` | `test_multiple_sections()` | - |
| `run_all` | `run_all()` | - |
## `scripts/test_harvest_prompt.py`

Test harness for knowledge extraction prompt.

| Function | Signature | Description |
|----------|-----------|-------------|
| `validate_knowledge_item` | `validate_knowledge_item(item, idx)` | Validate a single knowledge item. Returns list of errors. |
| `validate_extraction` | `validate_extraction(data)` | Validate a full extraction result. Returns (is_valid, errors, warnings). |
| `validate_transcript_coverage` | `validate_transcript_coverage(data, transcript)` | Check that extracted facts are actually supported by the transcript. |
| `run_tests` | `run_tests()` | Run the built-in test suite. |
| `validate_file` | `validate_file(filepath)` | Validate an existing extraction JSON file. |

## `scripts/test_harvest_prompt_comprehensive.py`

Comprehensive tests for knowledge extraction prompt.

| Function | Signature | Description |
|----------|-----------|-------------|
| `check_prompt_structure` | `check_prompt_structure()` | - |
| `check_confidence_scoring` | `check_confidence_scoring()` | - |
| `check_example_quality` | `check_example_quality()` | - |
| `check_constraint_coverage` | `check_constraint_coverage()` | - |
| `check_test_sessions` | `check_test_sessions()` | - |
| `test_prompt_structure` | `test_prompt_structure()` | - |
| `test_confidence_scoring` | `test_confidence_scoring()` | - |
| `test_example_quality` | `test_example_quality()` | - |
| `test_constraint_coverage` | `test_constraint_coverage()` | - |
| `test_test_sessions` | `test_test_sessions()` | - |
## `scripts/test_harvester_pipeline.py`

Smoke test for harvester pipeline — verifies the full chain.

| Function | Signature | Description |
|----------|-----------|-------------|
| `test_session_reader` | `test_session_reader()` | Test that session_reader parses JSONL correctly. |
| `test_validate_fact` | `test_validate_fact()` | Test fact validation. |
| `test_deduplicate` | `test_deduplicate()` | Test deduplication. |
| `test_knowledge_store_roundtrip` | `test_knowledge_store_roundtrip()` | Test loading and writing knowledge index. |
| `test_full_chain_no_llm` | `test_full_chain_no_llm()` | Test the full pipeline minus the LLM call. |

## `scripts/test_improvement_proposals.py`

Tests for scripts/improvement_proposals.py — 15 tests.

| Function | Signature | Description |
|----------|-----------|-------------|
| `test_empty_sessions` | `test_empty_sessions()` | - |
| `test_no_patterns_on_clean_sessions` | `test_no_patterns_on_clean_sessions()` | - |
| `test_repeated_error_detection` | `test_repeated_error_detection()` | Same error across 3+ sessions triggers pattern. |
| `test_repeated_error_threshold` | `test_repeated_error_threshold()` | 2 occurrences should NOT trigger (threshold is 3). |
| `test_slow_tool_detection` | `test_slow_tool_detection()` | Tool with avg latency > 5000ms across 5+ calls. |
| `test_fast_tool_not_flagged` | `test_fast_tool_not_flagged()` | Tool under 5000ms avg should not trigger. |
| `test_failed_retry_detection` | `test_failed_retry_detection()` | 3+ consecutive calls to same tool triggers retry pattern. |
| `test_manual_process_detection` | `test_manual_process_detection()` | 10+ tool calls with <= 3 unique tools. |
| `test_generate_proposals_from_patterns` | `test_generate_proposals_from_patterns()` | Proposals generated from waste patterns. |
| `test_proposal_roi_positive` | `test_proposal_roi_positive()` | ROI weeks should be a positive number for recoverable time. |
| `test_proposals_sorted_by_impact` | `test_proposals_sorted_by_impact()` | Proposals should be sorted by monthly hours saved (descending). |
| `test_format_markdown` | `test_format_markdown()` | Markdown output should contain expected sections. |
| `test_format_json` | `test_format_json()` | JSON output should be valid and parseable. |
| `test_normalize_error` | `test_normalize_error()` | Error normalization should remove paths and hashes. |
| `test_cli_integration` | `test_cli_integration()` | End-to-end test: write input JSON, run script, check output. |
| `run_all` | `run_all()` | - |
## `scripts/test_knowledge_staleness.py`

Tests for scripts/knowledge_staleness_check.py — 8 tests.

| Function | Signature | Description |
|----------|-----------|-------------|
| `test_fresh_entry` | `test_fresh_entry()` | - |
| `test_stale_entry` | `test_stale_entry()` | - |
| `test_missing_source` | `test_missing_source()` | - |
| `test_no_hash` | `test_no_hash()` | - |
| `test_no_source_field` | `test_no_source_field()` | - |
| `test_fix_hashes` | `test_fix_hashes()` | - |
| `test_empty_index` | `test_empty_index()` | - |
| `test_compute_hash_nonexistent` | `test_compute_hash_nonexistent()` | - |
| `run_all` | `run_all()` | - |

## `scripts/test_priority_rebalancer.py`

Tests for Priority Rebalancer

| Function | Signature | Description |
|----------|-----------|-------------|
| `test` | `test(name)` | - |
| `assert_eq` | `assert_eq(a, b, msg)` | - |
| `assert_true` | `assert_true(v, msg)` | - |
| `assert_false` | `assert_false(v, msg)` | - |
| `make_issue` | `make_issue(**kwargs)` | - |
## `scripts/test_refactoring_opportunity_finder.py`

Tests for scripts/refactoring_opportunity_finder.py — 10 tests.

| Function | Signature | Description |
|----------|-----------|-------------|
| `test_complexity_simple_function` | `test_complexity_simple_function()` | Simple function should have low complexity. |
| `test_complexity_with_conditionals` | `test_complexity_with_conditionals()` | Function with if/else should have higher complexity. |
| `test_complexity_with_loops` | `test_complexity_with_loops()` | Function with loops should increase complexity. |
| `test_complexity_with_class` | `test_complexity_with_class()` | Class with methods should count both. |
| `test_complexity_syntax_error` | `test_complexity_syntax_error()` | File with syntax error should return zeros. |
| `test_refactoring_score_high_complexity` | `test_refactoring_score_high_complexity()` | High complexity should give high score. |
| `test_refactoring_score_low_complexity` | `test_refactoring_score_low_complexity()` | Low complexity should give lower score. |
| `test_refactoring_score_high_churn` | `test_refactoring_score_high_churn()` | High churn should increase score. |
| `test_refactoring_score_no_coverage` | `test_refactoring_score_no_coverage()` | No coverage data should assume medium risk. |
| `test_refactoring_score_large_file` | `test_refactoring_score_large_file()` | Large files should score higher. |
| `run_all` | `run_all()` | - |

## `scripts/test_session_pair_harvester.py`

Tests for session_pair_harvester.

| Function | Signature | Description |
|----------|-----------|-------------|
| `test_basic_extraction` | `test_basic_extraction()` | - |
| `test_filters_short_responses` | `test_filters_short_responses()` | - |
| `test_skips_tool_results` | `test_skips_tool_results()` | - |
| `test_deduplication` | `test_deduplication()` | - |
| `test_ratio_filter` | `test_ratio_filter()` | - |

## `scripts/validate_knowledge.py`

Validate knowledge files and index.json against the schema.

| Function | Signature | Description |
|----------|-----------|-------------|
| `validate_fact` | `validate_fact(fact, src)` | - |
| `main` | `main()` | - |
---

**Total scripts documented:** 33

*Generated by `scripts/api_doc_generator.py` (Issue #98)*
219
scripts/api_doc_generator.py
Normal file
219
scripts/api_doc_generator.py
Normal file
@@ -0,0 +1,219 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
API Doc Generator — Issue #98
|
||||
|
||||
Scans all Python modules in `scripts/`, extracts their public API surface
|
||||
(module docstring + public function signatures + first-line doc summaries),
|
||||
and produces a single markdown reference document at `docs/API.md`.
|
||||
|
||||
Usage:
|
||||
python3 scripts/api_doc_generator.py # Write docs/API.md
|
||||
python3 scripts/api_doc_generator.py --check # Verify docs/API.md is up-to-date
|
||||
python3 scripts/api_doc_generator.py --json # Emit JSON for downstream tooling
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import ast
|
||||
import os
|
||||
import sys
|
||||
from datetime import datetime, timezone
|
||||
from pathlib import Path
|
||||
from typing import TypedDict, List, Optional
|
||||
|
||||
|
||||
# ─── Paths ────────────────────────────────────────────────────────────────────
|
||||
SCRIPT_DIR = Path(__file__).resolve().parent
|
||||
REPO_ROOT = SCRIPT_DIR.parent
|
||||
SCRIPTS_DIR = REPO_ROOT / "scripts"
|
||||
DOCS_DIR = REPO_ROOT / "docs"
|
||||
OUTPUT_PATH = DOCS_DIR / "API.md"
|
||||
|
||||
|
||||
# ─── Data structures ───────────────────────────────────────────────────────────
class FunctionInfo(TypedDict):
    name: str
    signature: str
    summary: str


class ModuleInfo(TypedDict):
    path: str  # relative to repo root, e.g. "scripts/harvester.py"
    docstring: str
    functions: List[FunctionInfo]


# ─── AST extraction ────────────────────────────────────────────────────────────
def extract_functions_from_ast(tree: ast.AST, file_rel: str) -> List[FunctionInfo]:
    """Extract public function names, signatures, and first-line doc summaries."""
    funcs: list[FunctionInfo] = []

    for node in ast.iter_child_nodes(tree):
        # Include async defs so coroutine entry points are documented too
        if not isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            continue
        # Skip private functions
        if node.name.startswith("_"):
            continue

        # Build signature: arg1, arg2, *args, **kwargs (defaults are omitted)
        args = []
        for arg in node.args.args:
            args.append(arg.arg)
        if node.args.vararg:
            args.append(f"*{node.args.vararg.arg}")
        if node.args.kwarg:
            args.append(f"**{node.args.kwarg.arg}")

        # Get first line of docstring
        summary = ""
        if (node.body and isinstance(node.body[0], ast.Expr) and
                isinstance(node.body[0].value, ast.Constant) and
                isinstance(node.body[0].value.value, str)):
            raw = node.body[0].value.value.strip()
            summary = raw.split("\n")[0].strip()
            if len(summary) > 100:
                summary = summary[:97] + "..."

        funcs.append({
            "name": node.name,
            "signature": ", ".join(args),
            "summary": summary,
        })

    return funcs


def parse_module(filepath: Path) -> Optional[ModuleInfo]:
    """Parse a Python file and return its module-level docstring and public functions."""
    try:
        with open(filepath, "r", encoding="utf-8") as f:
            source = f.read()
        tree = ast.parse(source, filename=str(filepath))
    except Exception as e:
        print(f"WARNING: Could not parse {filepath}: {e}", file=sys.stderr)
        return None

    # Module docstring (first line only)
    module_doc = ast.get_docstring(tree) or ""
    module_doc = module_doc.strip().split("\n")[0]

    # Public functions
    functions = extract_functions_from_ast(tree, filepath.name)

    rel = filepath.relative_to(REPO_ROOT)
    return {
        "path": str(rel),
        "docstring": module_doc,
        "functions": functions,
    }


# ─── Scanning ──────────────────────────────────────────────────────────────────
def scan_scripts_dir(scripts_dir: Path) -> List[ModuleInfo]:
    """Scan all .py files in scripts/ and extract API info."""
    modules: list[ModuleInfo] = []
    for pyfile in sorted(scripts_dir.glob("*.py")):
        info = parse_module(pyfile)
        if info is not None:
            modules.append(info)
    return modules


# ─── Markdown rendering ─────────────────────────────────────────────────────────
def render_markdown(modules: List[ModuleInfo]) -> str:
    """Generate full docs/API.md content from the scanned modules."""
    lines = [
        "# Compounding Intelligence — Scripts API Reference",
        "",
        f"*Generated: {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M UTC')}*",
        "",
        "This document auto-documents the public API surface of all scripts",
        "in `scripts/`. Each section covers one script: module purpose,",
        "public functions, and their signatures.",
        "",
        "---",
        "",
    ]

    for mod in modules:
        rel = mod["path"]
        lines.append(f"## `{rel}`")
        lines.append("")
        if mod["docstring"]:
            lines.append(mod["docstring"])
            lines.append("")

        if mod["functions"]:
            lines.append("| Function | Signature | Description |")
            lines.append("|----------|-----------|-------------|")
            for fn in mod["functions"]:
                sig = fn["name"] + "(" + fn["signature"] + ")"
                desc = fn["summary"] or "-"
                lines.append(f"| `{fn['name']}` | `{sig}` | {desc} |")
            lines.append("")
        else:
            lines.append("*(no public functions — script runs as `main()` only)*")
            lines.append("")

    lines.extend([
        "",
        "---",
        "",
        f"**Total scripts documented:** {len(modules)}",
        "",
        "*Generated by `scripts/api_doc_generator.py` (Issue #98)*",
    ])
    return "\n".join(lines)


# ─── JSON output (optional, for automation) ───────────────────────────────────
def render_json(modules: List[ModuleInfo]) -> str:
    """Emit machine-readable JSON version of the API reference."""
    import json
    payload = {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "generator": "scripts/api_doc_generator.py",
        "repo": "Timmy_Foundation/compounding-intelligence",
        "modules": modules,
    }
    return json.dumps(payload, indent=2)


# ─── Main ──────────────────────────────────────────────────────────────────────
def main() -> int:
    import argparse
    parser = argparse.ArgumentParser(description="Generate API docs for scripts/")
    parser.add_argument("--check", action="store_true",
                        help="Exit 1 if docs/API.md is out-of-date")
    parser.add_argument("--json", action="store_true",
                        help="Emit JSON to stdout instead of writing markdown")
    args = parser.parse_args()

    modules = scan_scripts_dir(SCRIPTS_DIR)
    modules.sort(key=lambda m: m["path"])

    if args.json:
        print(render_json(modules))
        return 0

    md = render_markdown(modules)

    if args.check:
        if OUTPUT_PATH.exists():
            existing = OUTPUT_PATH.read_text(encoding="utf-8")
            if existing == md:
                print("✅ docs/API.md is up-to-date")
                return 0
        print("❌ docs/API.md is missing or out-of-date — regenerate with "
              "`python3 scripts/api_doc_generator.py`", file=sys.stderr)
        return 1

    DOCS_DIR.mkdir(parents=True, exist_ok=True)
    OUTPUT_PATH.write_text(md, encoding="utf-8")
    print(f"✅ Wrote {OUTPUT_PATH} ({len(modules)} modules documented)")
    return 0


if __name__ == "__main__":
    sys.exit(main())
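The AST walk in `extract_functions_from_ast` can be exercised without touching the repo. A trimmed-down, self-contained sketch of the same technique — the sample source and the `extract_public_functions` name are invented for the demo:

```python
import ast

def extract_public_functions(source: str):
    """Minimal variant of the extraction walk: public names + arg lists only."""
    out = []
    for node in ast.iter_child_nodes(ast.parse(source)):
        if not isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            continue
        if node.name.startswith("_"):  # skip private helpers
            continue
        args = [a.arg for a in node.args.args]
        if node.args.vararg:
            args.append(f"*{node.args.vararg.arg}")
        if node.args.kwarg:
            args.append(f"**{node.args.kwarg.arg}")
        out.append((node.name, ", ".join(args)))
    return out

src = '''
def greet(name, *rest, **opts):
    """Say hello."""

def _private(x):
    pass
'''
print(extract_public_functions(src))  # [('greet', 'name, *rest, **opts')]
```

The same `ast.iter_child_nodes` pass deliberately ignores nested functions, so only module-level API surface is documented.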
@@ -1,468 +0,0 @@
#!/usr/bin/env python3
"""
session_knowledge_extractor.py — Extract session-level entities and relationships from Hermes transcripts.

Creates knowledge facts about: which agent handled the session, what task was solved,
which tools were used and why, and the outcome. Target: 10+ facts per session.

Usage:
    python3 session_knowledge_extractor.py --session session.jsonl --output knowledge/
    python3 session_knowledge_extractor.py --batch --sessions-dir ~/.hermes/sessions/ --limit 10
"""

import argparse
import json
import os
import sys
import time
import hashlib
from datetime import datetime, timezone
from pathlib import Path
from typing import Optional, List, Dict, Any

SCRIPT_DIR = Path(__file__).parent.absolute()
sys.path.insert(0, str(SCRIPT_DIR))

from session_reader import read_session, extract_conversation, truncate_for_context, messages_to_text

# --- Configuration ---
DEFAULT_API_BASE = os.environ.get(
    "EXTRACTOR_API_BASE",
    os.environ.get("HARVESTER_API_BASE", "https://api.nousresearch.com/v1")
)
DEFAULT_API_KEY = os.environ.get(
    "EXTRACTOR_API_KEY",
    os.environ.get("HARVESTER_API_KEY", "")
)
DEFAULT_MODEL = os.environ.get(
    "EXTRACTOR_MODEL",
    os.environ.get("HARVESTER_MODEL", "xiaomi/mimo-v2-pro")
)
KNOWLEDGE_DIR = os.environ.get("EXTRACTOR_KNOWLEDGE_DIR", "knowledge")
PROMPT_PATH = os.environ.get(
    "EXTRACTOR_PROMPT_PATH",
    str(SCRIPT_DIR.parent / "templates" / "session-entity-prompt.md")
)

API_KEY_PATHS = [
    os.path.expanduser("~/.config/nous/key"),
    os.path.expanduser("~/.hermes/keymaxxing/active/minimax.key"),
    os.path.expanduser("~/.config/openrouter/key"),
    os.path.expanduser("~/.config/gitea/token"),  # fallback
]


def find_api_key() -> str:
    for path in API_KEY_PATHS:
        if os.path.exists(path):
            with open(path) as f:
                key = f.read().strip()
            if key:
                return key
    return ""


def load_extraction_prompt() -> str:
    path = Path(PROMPT_PATH)
    if not path.exists():
        print(f"ERROR: Extraction prompt not found at {path}", file=sys.stderr)
        sys.exit(1)
    return path.read_text(encoding='utf-8')


def call_llm(prompt: str, transcript: str, api_base: str, api_key: str, model: str) -> Optional[List[dict]]:
    """Call LLM to extract session entity knowledge."""
    import urllib.request

    messages = [
        {"role": "system", "content": prompt},
        {"role": "user", "content": f"Extract knowledge from this session transcript:\n\n{transcript}"}
    ]

    payload = json.dumps({
        "model": model,
        "messages": messages,
        "temperature": 0.1,
        "max_tokens": 4096
    }).encode('utf-8')

    req = urllib.request.Request(
        f"{api_base}/chat/completions",
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json"
        },
        method="POST"
    )

    try:
        with urllib.request.urlopen(req, timeout=60) as resp:
            result = json.loads(resp.read().decode('utf-8'))
        content = result["choices"][0]["message"]["content"]
        return parse_extraction_response(content)
    except Exception as e:
        print(f"ERROR: LLM API call failed: {e}", file=sys.stderr)
        return None


def parse_extraction_response(content: str) -> Optional[List[dict]]:
    """Parse LLM response; handles JSON or markdown-wrapped JSON."""
    try:
        data = json.loads(content)
        if isinstance(data, dict) and 'knowledge' in data:
            return data['knowledge']
        if isinstance(data, list):
            return data
    except json.JSONDecodeError:
        pass

    import re
    json_match = re.search(r'```(?:json)?\s*(\{.*?\})\s*```', content, re.DOTALL)
    if json_match:
        try:
            data = json.loads(json_match.group(1))
            if isinstance(data, dict) and 'knowledge' in data:
                return data['knowledge']
            if isinstance(data, list):
                return data
        except json.JSONDecodeError:
            pass

    # Last resort: capture a complete {"knowledge": [...]} object, including
    # the closing brace, so the captured text is actually parseable JSON
    json_match = re.search(r'(\{.*?"knowledge".*?\]\s*\})', content, re.DOTALL)
    if json_match:
        try:
            data = json.loads(json_match.group(1))
            return data.get('knowledge', [])
        except json.JSONDecodeError:
            pass

    print("WARNING: Could not parse LLM response as JSON", file=sys.stderr)
    print(f"Response preview: {content[:500]}", file=sys.stderr)
    return None


def load_existing_knowledge(knowledge_dir: str) -> dict:
    index_path = Path(knowledge_dir) / "index.json"
    if not index_path.exists():
        return {"version": 1, "last_updated": "", "total_facts": 0, "facts": []}
    try:
        with open(index_path, 'r', encoding='utf-8') as f:
            return json.load(f)
    except (json.JSONDecodeError, IOError) as e:
        print(f"WARNING: Could not load knowledge index: {e}", file=sys.stderr)
        return {"version": 1, "last_updated": "", "total_facts": 0, "facts": []}


def fact_fingerprint(fact: dict) -> str:
    text = fact.get('fact', '').lower().strip()
    text = ' '.join(text.split())
    return hashlib.md5(text.encode('utf-8')).hexdigest()


def deduplicate(new_facts: List[dict], existing: List[dict], similarity_threshold: float = 0.8) -> List[dict]:
    existing_fingerprints = set()
    existing_texts = []
    for f in existing:
        fp = fact_fingerprint(f)
        existing_fingerprints.add(fp)
        existing_texts.append(f.get('fact', '').lower().strip())

    unique = []
    for fact in new_facts:
        fp = fact_fingerprint(fact)
        if fp in existing_fingerprints:
            continue

        fact_words = set(fact.get('fact', '').lower().split())
        is_dup = False
        for existing_text in existing_texts:
            existing_words = set(existing_text.split())
            if not fact_words or not existing_words:
                continue
            overlap = len(fact_words & existing_words) / max(len(fact_words | existing_words), 1)
            if overlap >= similarity_threshold:
                is_dup = True
                break

        if not is_dup:
            unique.append(fact)
            existing_fingerprints.add(fp)
            existing_texts.append(fact.get('fact', '').lower().strip())

    return unique


def validate_fact(fact: dict) -> bool:
    required = ['fact', 'category', 'repo', 'confidence']
    for field in required:
        if field not in fact:
            return False
    if not isinstance(fact['fact'], str) or not fact['fact'].strip():
        return False
    valid_categories = ['fact', 'pitfall', 'pattern', 'tool-quirk', 'question']
    if fact['category'] not in valid_categories:
        return False
    if not isinstance(fact.get('confidence', 0), (int, float)):
        return False
    if not (0.0 <= fact['confidence'] <= 1.0):
        return False
    return True


def write_knowledge(index: dict, new_facts: List[dict], knowledge_dir: str, source_session: str = ""):
    kdir = Path(knowledge_dir)
    kdir.mkdir(parents=True, exist_ok=True)

    for fact in new_facts:
        fact['source_session'] = source_session
        fact['harvested_at'] = datetime.now(timezone.utc).isoformat()

    index['facts'].extend(new_facts)
    index['total_facts'] = len(index['facts'])
    index['last_updated'] = datetime.now(timezone.utc).isoformat()

    index_path = kdir / "index.json"
    with open(index_path, 'w', encoding='utf-8') as f:
        json.dump(index, f, indent=2, ensure_ascii=False)

    repos = {}
    for fact in new_facts:
        repo = fact.get('repo', 'global')
        repos.setdefault(repo, []).append(fact)

    for repo, facts in repos.items():
        if repo == 'global':
            md_path = kdir / "global" / "sessions.md"
        else:
            md_path = kdir / "repos" / f"{repo}.md"

        md_path.parent.mkdir(parents=True, exist_ok=True)
        mode = 'a' if md_path.exists() else 'w'
        with open(md_path, mode, encoding='utf-8') as f:
            if mode == 'w':
                f.write(f"# Session Knowledge: {repo}\n\n")
            f.write(f"## Session {Path(source_session).stem} — {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M')}\n\n")
            for fact in facts:
                icon = {'fact': '📋', 'pitfall': '⚠️', 'pattern': '🔄', 'tool-quirk': '🔧', 'question': '❓'}.get(fact['category'], '•')
                f.write(f"- {icon} **{fact['category']}** (conf: {fact['confidence']:.1f}): {fact['fact']}\n")
            f.write("\n")


def extract_session_id(messages: List[dict]) -> str:
    """Derive a stable session ID from messages, falling back to a content hash."""
    # Try to find session_id in the first few messages
    for msg in messages[:3]:
        if msg.get('session_id'):
            return msg['session_id'][:32]
    # Fallback: hash first few messages
    content = str(messages[:3])
    return hashlib.md5(content.encode()).hexdigest()[:12]


def extract_agent(messages: List[dict]) -> Optional[str]:
    """Extract the agent/model name from assistant messages."""
    for msg in messages:
        if msg.get('role') == 'assistant' and msg.get('model'):
            return msg['model']
    return None


def extract_tasks(messages: List[dict]) -> List[str]:
    """Extract the task/goal from the first user message."""
    tasks = []
    for msg in messages:
        if msg.get('role') == 'user' and msg.get('content'):
            content = msg['content']
            if isinstance(content, str) and len(content.strip()) < 500:
                tasks.append(content.strip())
            break  # First user message is usually the task
    return tasks


def extract_tools(messages: List[dict]) -> List[str]:
    """Extract tool names used in the session."""
    tools = set()
    for msg in messages:
        if msg.get('tool_calls'):
            for tc in msg['tool_calls']:
                func = tc.get('function', {})
                name = func.get('name', '')
                if name:
                    tools.add(name)
    return list(tools)


def extract_outcome(messages: List[dict]) -> str:
    """Classify session outcome: success/partial/failure."""
    errors = []
    for msg in messages:
        if msg.get('role') == 'tool' and msg.get('is_error'):
            err = msg.get('content', '')
            if isinstance(err, str):
                errors.append(err.lower())

    if errors:
        if any('405' in e or 'permission' in e or 'authentication' in e for e in errors):
            return 'failure'
        return 'partial'

    # Check last assistant message for success indicators
    last = messages[-1] if messages else {}
    if last.get('role') == 'assistant':
        content = str(last.get('content', ''))
        success_words = ['done', 'completed', 'success', 'merged', 'pushed', 'created', 'saved']
        if any(word in content.lower() for word in success_words):
            return 'success'

    return 'unknown'


def harvest_session(session_path: str, knowledge_dir: str, api_base: str, api_key: str,
                    model: str, dry_run: bool = False, min_confidence: float = 0.3) -> dict:
    """Harvest session entities and relationships from one session."""
    start_time = time.time()
    stats = {
        'session': session_path,
        'facts_found': 0,
        'facts_new': 0,
        'facts_dup': 0,
        'elapsed_seconds': 0,
        'error': None
    }

    try:
        messages = read_session(session_path)
        if not messages:
            stats['error'] = "Empty session file"
            return stats

        conv = extract_conversation(messages)
        if not conv:
            stats['error'] = "No conversation turns found"
            return stats

        truncated = truncate_for_context(conv, head=50, tail=50)
        transcript = messages_to_text(truncated)

        prompt = load_extraction_prompt()
        raw_facts = call_llm(prompt, transcript, api_base, api_key, model)
        if raw_facts is None:
            stats['error'] = "LLM extraction failed"
            return stats

        valid_facts = [f for f in raw_facts if validate_fact(f) and f.get('confidence', 0) >= min_confidence]
        stats['facts_found'] = len(valid_facts)

        existing_index = load_existing_knowledge(knowledge_dir)
        existing_facts = existing_index.get('facts', [])
        new_facts = deduplicate(valid_facts, existing_facts)
        stats['facts_new'] = len(new_facts)
        stats['facts_dup'] = len(valid_facts) - len(new_facts)

        if new_facts and not dry_run:
            write_knowledge(existing_index, new_facts, knowledge_dir, source_session=session_path)

        stats['elapsed_seconds'] = round(time.time() - start_time, 2)
        return stats

    except Exception as e:
        stats['error'] = str(e)
        stats['elapsed_seconds'] = round(time.time() - start_time, 2)
        return stats


def batch_harvest(sessions_dir: str, knowledge_dir: str, api_base: str, api_key: str,
                  model: str, since: str = "", limit: int = 0, dry_run: bool = False) -> List[dict]:
    sessions_path = Path(sessions_dir)
    if not sessions_path.is_dir():
        print(f"ERROR: Sessions directory not found: {sessions_dir}", file=sys.stderr)
        return []

    session_files = sorted(sessions_path.glob("*.jsonl"), reverse=True)

    if since:
        since_dt = datetime.fromisoformat(since.replace('Z', '+00:00'))
        filtered = []
        for sf in session_files:
            try:
                parts = sf.stem.split('_')
                if len(parts) >= 3:
                    date_str = parts[1]
                    file_dt = datetime.strptime(date_str, '%Y%m%d').replace(tzinfo=timezone.utc)
                    if file_dt >= since_dt:
                        filtered.append(sf)
            except (ValueError, IndexError):
                filtered.append(sf)
        session_files = filtered

    if limit > 0:
        session_files = session_files[:limit]

    print(f"Harvesting {len(session_files)} sessions with session knowledge extractor...")

    results = []
    for i, sf in enumerate(session_files, 1):
        print(f"[{i}/{len(session_files)}] {sf.name}...", end=" ", flush=True)
        stats = harvest_session(str(sf), knowledge_dir, api_base, api_key, model, dry_run)
        if stats['error']:
            print(f"ERROR: {stats['error']}")
        else:
            print(f"{stats['facts_new']} new, {stats['facts_dup']} dup ({stats['elapsed_seconds']}s)")
        results.append(stats)

    return results


def main():
    parser = argparse.ArgumentParser(description="Extract session entities and relationships from Hermes transcripts")
    parser.add_argument('--session', help='Path to a single session JSONL file')
    parser.add_argument('--batch', action='store_true', help='Batch mode: process multiple sessions')
    parser.add_argument('--sessions-dir', default=os.path.expanduser('~/.hermes/sessions'),
                        help='Directory containing session files (default: ~/.hermes/sessions)')
    parser.add_argument('--output', default='knowledge', help='Output directory for knowledge store')
    parser.add_argument('--since', default='', help='Only process sessions after this date (YYYY-MM-DD)')
    parser.add_argument('--limit', type=int, default=0, help='Max sessions to process (0=unlimited)')
    parser.add_argument('--api-base', default=DEFAULT_API_BASE, help='LLM API base URL')
    parser.add_argument('--api-key', default='', help='LLM API key (or set EXTRACTOR_API_KEY)')
    parser.add_argument('--model', default=DEFAULT_MODEL, help='Model to use for extraction')
    parser.add_argument('--dry-run', action='store_true', help='Preview without writing to knowledge store')
    parser.add_argument('--min-confidence', type=float, default=0.3, help='Minimum confidence threshold')

    args = parser.parse_args()

    api_key = args.api_key or DEFAULT_API_KEY or find_api_key()
    if not api_key:
        print("ERROR: No API key found. Set EXTRACTOR_API_KEY or store in one of:", file=sys.stderr)
        for p in API_KEY_PATHS:
            print(f"  {p}", file=sys.stderr)
        sys.exit(1)

    knowledge_dir = args.output
    if not os.path.isabs(knowledge_dir):
        knowledge_dir = os.path.join(SCRIPT_DIR.parent, knowledge_dir)

    if args.session:
        stats = harvest_session(
            args.session, knowledge_dir, args.api_base, api_key, args.model,
            dry_run=args.dry_run, min_confidence=args.min_confidence
        )
        print(json.dumps(stats, indent=2))
        if stats['error']:
            sys.exit(1)
    elif args.batch:
        results = batch_harvest(
            args.sessions_dir, knowledge_dir, args.api_base, api_key, args.model,
            since=args.since, limit=args.limit, dry_run=args.dry_run
        )
        total_new = sum(r['facts_new'] for r in results)
        total_dup = sum(r['facts_dup'] for r in results)
        errors = sum(1 for r in results if r['error'])
        print(f"\nDone: {total_new} new facts, {total_dup} duplicates, {errors} errors")
    else:
        parser.print_help()
        sys.exit(1)


if __name__ == '__main__':
    main()
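The word-overlap filter in `deduplicate` is a Jaccard similarity over fact words, compared against the 0.8 default threshold. A self-contained sketch of just that check (the `jaccard` helper and sample facts are made up for illustration):

```python
def jaccard(a: str, b: str) -> float:
    """Word-set overlap, as computed inside the deduplicate() filter."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

existing = "run pytest in the repo"
near_dup = "run pytest in repo"    # 4 shared words / 5 total = 0.8 -> dropped
distinct = "deploy the docs site"  # little overlap -> kept

print(jaccard(existing, near_dup))         # 0.8
print(jaccard(existing, distinct) >= 0.8)  # False
```

Because the threshold comparison uses `>=`, a near-duplicate that hits exactly 0.8 (as above) is discarded; raising `similarity_threshold` makes the filter stricter about what counts as the same fact.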
@@ -1,197 +0,0 @@
#!/usr/bin/env python3
"""
Smoke test for session knowledge extractor.
Tests: parsing, entity extraction, metadata generation, dedup, store roundtrip.
Does NOT call real LLM — uses mock facts.
"""

import json
import sys
import tempfile
from pathlib import Path

SCRIPT_DIR = Path(__file__).parent.absolute()
sys.path.insert(0, str(SCRIPT_DIR))

from session_reader import read_session, extract_conversation, truncate_for_context, messages_to_text
from session_knowledge_extractor import (
    validate_fact, deduplicate, load_existing_knowledge, fact_fingerprint,
    extract_agent, extract_tasks, extract_tools, extract_outcome,
    write_knowledge
)


def make_test_session():
    """Create a sample Hermes session transcript."""
    messages = [
        {"role": "user", "content": "Clone the compounding-intelligence repo and run tests", "timestamp": "2026-04-13T10:00:00Z"},
        {"role": "assistant", "model": "xiaomi/mimo-v2-pro", "content": "I'll clone the repo and run tests.", "timestamp": "2026-04-13T10:00:02Z",
         "tool_calls": [
             {"function": {"name": "terminal", "arguments": '{"command": "git clone https://forge.alexanderwhitestone.com/Timmy_Foundation/compounding-intelligence.git"}'}},
         ]},
        {"role": "tool", "content": "Cloned successfully", "timestamp": "2026-04-13T10:00:10Z"},
        {"role": "assistant", "model": "xiaomi/mimo-v2-pro", "content": "Now running pytest...", "timestamp": "2026-04-13T10:00:11Z",
         "tool_calls": [
             # Inner quotes escaped so the arguments string stays valid JSON
             {"function": {"name": "execute_code", "arguments": '{"code": "import subprocess; subprocess.run([\\"pytest\\"])"}'}},
         ]},
        {"role": "tool", "content": "15 passed, 0 failed", "timestamp": "2026-04-13T10:00:15Z"},
        {"role": "assistant", "model": "xiaomi/mimo-v2-pro", "content": "All tests passed — done.", "timestamp": "2026-04-13T10:00:16Z"},
    ]
    return messages


def test_extract_entities():
    """Test entity extraction from messages."""
    messages = make_test_session()  # 6 total: 1 user + 3 assistant + 2 tool
    agent = extract_agent(messages)
    assert agent == "xiaomi/mimo-v2-pro"
    tasks = extract_tasks(messages)
    assert len(tasks) >= 1 and "clone" in tasks[0].lower()
    tools = extract_tools(messages)
    assert "terminal" in tools and "execute_code" in tools and len(tools) == 2
    outcome = extract_outcome(messages)
    assert outcome == "success"

    print(" [PASS] entity extraction works")


def test_validate_fact():
    good = {"fact": "Token is at ~/.config/gitea/token", "category": "tool-quirk", "repo": "global", "confidence": 0.9}
    assert validate_fact(good), "Valid fact should pass"

    bad = {"fact": "Something", "category": "nonsense", "repo": "x", "confidence": 0.5}
    assert not validate_fact(bad), "Bad category should fail"

    print(" [PASS] fact validation works")


def test_deduplicate():
    existing = [{"fact": "A", "category": "fact", "repo": "global", "confidence": 0.9}]
    new = [
        {"fact": "A", "category": "fact", "repo": "global", "confidence": 0.9},
        {"fact": "B", "category": "fact", "repo": "global", "confidence": 0.9},
    ]
    result = deduplicate(new, existing)
    assert len(result) == 1 and result[0]["fact"] == "B", "Should remove exact dup"
    print(" [PASS] deduplication works")


def test_knowledge_store_roundtrip():
    with tempfile.TemporaryDirectory() as tmpdir:
        index = load_existing_knowledge(tmpdir)
        assert index["total_facts"] == 0

        new_facts = [
            {"fact": "session_x used terminal", "category": "fact", "repo": "global", "confidence": 0.9},
            {"fact": "session_x task: clone repo", "category": "fact", "repo": "compounding-intelligence", "confidence": 0.9},
            {"fact": "session_x outcome: success", "category": "fact", "repo": "global", "confidence": 0.9},
        ] * 4  # 12 facts total
        write_knowledge(index, new_facts, tmpdir, source_session="session_x.jsonl")

        index2 = load_existing_knowledge(tmpdir)
        assert index2["total_facts"] == 12

        # Verify markdown written
        md_path = Path(tmpdir) / "repos" / "compounding-intelligence.md"
        assert md_path.exists(), "Markdown file should be created"

    print(" [PASS] knowledge store roundtrip works (12 facts)")


def test_min_facts_per_session():
    """Validator: a typical session should yield 10+ facts."""
    # Simulate facts from one session (what the LLM would produce)
    mock_facts = [
        {"fact": "session_123 was handled by model xiaomi/mimo-v2-pro", "category": "fact", "repo": "global", "confidence": 0.95},
        {"fact": "session_123's task was to clone the compounding-intelligence repository", "category": "fact", "repo": "compounding-intelligence", "confidence": 0.9},
        {"fact": "session_123 used tool 'terminal' to run git clone", "category": "tool-quirk", "repo": "global", "confidence": 0.9},
        {"fact": "session_123 used tool 'execute_code' to run pytest", "category": "tool-quirk", "repo": "global", "confidence": 0.9},
        {"fact": "session_123 executed: git clone https://forge...", "category": "fact", "repo": "global", "confidence": 0.9},
        {"fact": "session_123 executed: pytest (15 tests)", "category": "fact", "repo": "compounding-intelligence", "confidence": 0.9},
        {"fact": "session_123 outcome: all 15 tests passed", "category": "fact", "repo": "global", "confidence": 0.95},
        {"fact": "session_123 touched repo: compounding-intelligence", "category": "fact", "repo": "compounding-intelligence", "confidence": 1.0},
        {"fact": "session_123 terminal output: 'Cloned successfully'", "category": "fact", "repo": "global", "confidence": 0.9},
        {"fact": "session_123 test output: '15 passed, 0 failed'", "category": "fact", "repo": "compounding-intelligence", "confidence": 0.9},
        {"fact": "session_123 completed without errors", "category": "fact", "repo": "global", "confidence": 0.85},
        {"fact": "session_123 final message: 'All tests passed — done.'", "category": "fact", "repo": "global", "confidence": 0.9},
    ]
    assert len(mock_facts) >= 10, f"Should have at least 10 facts, got {len(mock_facts)}"
    print(f" [PASS] mock session produces {len(mock_facts)} facts")


def test_full_chain_no_llm():
    """Full pipeline: read -> extract entities -> validate -> dedup -> store."""
    messages = make_test_session()

    with tempfile.NamedTemporaryFile(mode='w', suffix='.jsonl', delete=False) as f:
        for msg in messages:
            f.write(json.dumps(msg) + '\n')
        session_path = f.name

    with tempfile.TemporaryDirectory() as knowledge_dir:
        # Step 1: Read
        msgs = read_session(session_path)
        assert len(msgs) == 6  # 3 user/assistant + 3 tool role messages

        # Step 2: Extract conversation
        conv = extract_conversation(msgs)
        assert len(conv) == 4  # 1 user + 3 assistant messages (tool role messages skipped)

        # Step 3: Truncate
        truncated = truncate_for_context(conv, head=50, tail=50)
        transcript = messages_to_text(truncated)
        assert "clone" in transcript.lower()

        # Step 4: Extract entities
        agent = extract_agent(msgs)
        tools = extract_tools(msgs)
        outcome = extract_outcome(msgs)
        assert agent == "xiaomi/mimo-v2-pro"
        assert len(tools) >= 2
        assert outcome == "success"

        # Step 5-7: Simulated LLM output → validate → dedup → store
        # Create 12 distinct facts to meet the 10+ requirement
        mock_facts = [
            {"fact": "Session used tool terminal", "category": "tool-quirk", "repo": "global", "confidence": 0.9},
            {"fact": "Session used tool execute_code", "category": "tool-quirk", "repo": "global", "confidence": 0.9},
            {"fact": f"Session handled by agent {agent}", "category": "fact", "repo": "global", "confidence": 0.95},
            {"fact": "Session task: clone the repository", "category": "fact", "repo": "compounding-intelligence", "confidence": 0.9},
            {"fact": "Session task: run pytest", "category": "fact", "repo": "compounding-intelligence", "confidence": 0.9},
            {"fact": "Session outcome: success", "category": "fact", "repo": "global", "confidence": 0.9},
            {"fact": "Session repo: compounding-intelligence touched", "category": "fact", "repo": "compounding-intelligence", "confidence": 1.0},
            {"fact": "Terminal command executed: git clone", "category": "fact", "repo": "global", "confidence": 0.9},
            {"fact": "Test result: 15 passed, 0 failed", "category": "fact", "repo": "compounding-intelligence", "confidence": 0.95},
            {"fact": "All tests passed — session complete", "category": "fact", "repo": "global", "confidence": 0.9},
            {"fact": "No errors encountered during session", "category": "fact", "repo": "global", "confidence": 0.8},
            {"fact": "Session duration: approximately 16 seconds", "category": "fact", "repo": "global", "confidence": 0.7},
        ]

        valid = [f for f in mock_facts if validate_fact(f)]
        assert len(valid) == 12

        index = load_existing_knowledge(knowledge_dir)
        new_facts = deduplicate(valid, index.get("facts", []))
        assert len(new_facts) == 12

        from session_knowledge_extractor import write_knowledge
        write_knowledge(index, new_facts, knowledge_dir, source_session=session_path)

        index2 = load_existing_knowledge(knowledge_dir)
        assert index2["total_facts"] == 12

    os.unlink(session_path)
    print("  [PASS] full chain (read → entities → validate → dedup → store) works (12 facts)")


if __name__ == "__main__":
    print("Running session knowledge extractor smoke tests...")
    test_extract_entities()
    test_validate_fact()
    test_deduplicate()
    test_knowledge_store_roundtrip()
    test_min_facts_per_session()
    test_full_chain_no_llm()
    print("\nAll tests passed — extractor produces 10+ facts per session ✓")
@@ -1,95 +0,0 @@
# Knowledge Extraction Prompt — Session Entities & Relationships

## System Prompt

You are a session knowledge extraction engine. You read Hermes session transcripts and output ONLY structured JSON. You extract session entities (agent, task, tools, outcome) and the relationships between them. You never invent facts not in the transcript.

## Prompt

```
TASK: Extract knowledge facts from this session transcript. Focus on:

1. AGENT: Which model/agent handled this session
2. TASK: What problem or goal was being solved
3. TOOLS: Which tools were used and what each accomplished
4. OUTCOME: Did the session succeed, partially succeed, or fail?
5. RELATIONSHIPS: How do these entities connect?

RULES:
1. Extract ONLY information explicitly stated or clearly implied by the transcript.
2. Do NOT infer, assume, or hallucinate.
3. Every fact must point to a specific message or tool call as evidence.
4. Generate at least 10 facts. Break complex tool usages into multiple atomic facts.
5. Include relationship facts: "session X used tool Y", "agent Z handled session X", "task W was completed by session X".
6. Include outcome facts: success indicators, error conditions, partial completions.

CATEGORIES (assign exactly one):
- fact: Concrete, verifiable statement (paths, commands, results, configs)
- pitfall: Error hit, wrong assumption, time wasted
- pattern: Successful reusable sequence
- tool-quirk: Environment-specific behavior (token paths, URLs, API gotchas)
- question: Something identified but not answered

CONFIDENCE:
- 0.9: Directly observed with explicit output or verification
- 0.7: Multiple data points confirm, but not explicitly verified
- 0.5: Clear implication but not directly stated
- 0.3: Weak inference from limited evidence

OUTPUT FORMAT (valid JSON only, no markdown, no explanation):
{
  "knowledge": [
    {
      "fact": "One specific sentence of knowledge",
      "category": "fact|pitfall|pattern|tool-quirk|question",
      "repo": "repo-name or global",
      "confidence": 0.0-1.0,
      "evidence": "Brief quote or reference from transcript that supports this"
    }
  ],
  "meta": {
    "session_id": "extracted or generated id",
    "session_outcome": "success|partial|failure|unknown",
    "agent": "model name if identifiable",
    "task": "brief description of the goal",
    "tools_used": ["tool1", "tool2"],
    "repos_touched": ["repo1"],
    "fact_count": 0
  }
}

TRANSCRIPT:
{{transcript}}
```

## Design Notes

### Entity extraction strategy

**Agent:** Look for `"model": "..."` in assistant messages or model mentions in content.

**Task:** The first user message usually states the goal. If vague, look for the assistant's interpretation: "I'll help you X".

**Tools:** Every `tool_calls` entry is a tool use. Extract the function name and what it was used for based on arguments.
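
The agent/tool scan over raw messages can be sketched as follows. This is a minimal illustration, not the extractor's actual `extract_agent`/`extract_tools` implementation; it assumes the Hermes JSONL message shape described above (`"model"` on assistant messages, OpenAI-style `tool_calls` entries):

```python
def extract_agent_and_tools(messages):
    """Scan parsed JSONL messages for the model name and tool-call names."""
    agent, tools = None, []
    for msg in messages:
        # First message carrying a "model" field identifies the agent.
        agent = agent or msg.get("model")
        for call in msg.get("tool_calls") or []:
            name = call.get("function", {}).get("name")
            if name and name not in tools:
                tools.append(name)   # preserve first-seen order, deduplicated
    return agent, tools
```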

**Outcome:** Success indicators: "done", "completed", "merged", "pushed", "created". Failures: HTTP errors (405, 404, 403), stack traces, explicit failures.
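
The keyword heuristic can be sketched as a tiny classifier. The function name and exact marker lists below are illustrative assumptions (failure markers are checked first, since an error plus a "done" message should still read as a failure):

```python
# Illustrative outcome heuristic; markers mirror the indicators listed above.
SUCCESS_MARKERS = ("done", "completed", "merged", "pushed", "created")
FAILURE_MARKERS = ("traceback", "405", "404", "403", "failed", "error:")

def classify_outcome(final_messages: list[str]) -> str:
    """Return 'success', 'failure', or 'unknown' from the closing messages."""
    text = " ".join(final_messages).lower()
    if any(m in text for m in FAILURE_MARKERS):
        return "failure"
    if any(m in text for m in SUCCESS_MARKERS):
        return "success"
    return "unknown"
```

Plain substring matching will misfire occasionally (e.g. "405" inside a longer number); it is a first-pass signal, not a verdict.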

**Relationships:** Treat the session as a central entity. Generate facts like:
- Agent relationship: "session_abc was handled by model xiaomi/mimo-v2-pro"
- Task relationship: "session_abc's task was to merge PR #123"
- Tool relationship: "session_abc used terminal to run 'git clone'"
- Outcome relationship: "session_abc outcome: success — PR merged"
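
With the session as the hub, these relationship facts can be produced mechanically once the entities are known. A sketch (the helper is hypothetical; the dicts are abbreviated versions of the OUTPUT FORMAT shape above):

```python
# Hypothetical helper: expand session entities into relationship facts.
def relationship_facts(session_id, agent, task, tools, outcome):
    facts = [
        {"fact": f"{session_id} was handled by model {agent}", "category": "fact"},
        {"fact": f"{session_id}'s task was to {task}", "category": "fact"},
        {"fact": f"{session_id} outcome: {outcome}", "category": "fact"},
    ]
    # One relationship fact per distinct tool used.
    facts += [
        {"fact": f"{session_id} used tool {tool}", "category": "tool-quirk"}
        for tool in tools
    ]
    return facts
```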

### 10+ facts guarantee

Each session with tool usage typically yields:

- 1 fact: agent identity
- 1-2 facts: task/goal (decomposed into sub-goals)
- 3-5 facts: each tool call becomes 1-2 facts (tool name + purpose + result)
- 1-2 facts: outcome details
- 1-2 facts: repo touched

Total: 10+ per non-trivial session.
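
Taking the upper end of each range, the yield clears the floor with margin (a back-of-envelope tally using the ranges above, not measured data):

```python
# Upper-end per-category fact yields from the breakdown above.
upper = {"agent": 1, "task": 2, "tools": 5, "outcome": 2, "repo": 2}
assert sum(upper.values()) >= 10   # 12 facts for a non-trivial session
```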

### Token budget

~700 tokens for prompt (excluding transcript). Leaves room for long transcripts.
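
A quick way to sanity-check that budget is the common rule of thumb of roughly 4 characters per token for English prose (an approximation, not a real tokenizer):

```python
def approx_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English prose."""
    return max(1, len(text) // 4)

# With only ~700 tokens spent on the prompt itself, nearly all of the
# model's context window remains available for the transcript.
```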
148
tests/test_api_doc_generator.py
Normal file
@@ -0,0 +1,148 @@
#!/usr/bin/env python3
"""
Smoke tests for API Doc Generator — Issue #98

Validates that the generator runs, produces docs/API.md, and that
the generated markdown contains expected sections for the known scripts.
"""

from __future__ import annotations

import json
import subprocess
import sys
from pathlib import Path

import pytest

# Resolve repo root
REPO_ROOT = Path(__file__).resolve().parents[1]
SCRIPTS_DIR = REPO_ROOT / "scripts"
DOCS_DIR = REPO_ROOT / "docs"
API_MD = DOCS_DIR / "API.md"
GENERATOR = SCRIPTS_DIR / "api_doc_generator.py"


# ─── Generator presence ─────────────────────────────────────────────────────────
class TestGeneratorPresence:
    def test_generator_script_exists(self):
        assert GENERATOR.exists(), f"Missing: {GENERATOR}"

    def test_generator_is_executable(self):
        with open(GENERATOR) as f:
            first = f.readline().strip()
        assert first.startswith("#!"), "Missing shebang"
        assert "python" in first.lower()


# ─── API.md generation ──────────────────────────────────────────────────────────
class TestAPIDocGeneration:
    def test_generator_runs_successfully(self):
        """Run the generator and verify exit code 0."""
        result = subprocess.run(
            [sys.executable, str(GENERATOR)],
            capture_output=True, text=True, cwd=REPO_ROOT, timeout=30
        )
        assert result.returncode == 0, (
            f"Generator failed (code {result.returncode})\n"
            f"STDERR: {result.stderr[:500]}"
        )

    def test_api_md_is_created(self):
        """docs/API.md must exist after generation."""
        assert API_MD.exists(), f"Missing output: {API_MD}"

    def test_api_md_is_not_empty(self):
        """Generated markdown must have substantial content."""
        content = API_MD.read_text(encoding="utf-8")
        assert len(content) > 1000, "API.md is suspiciously small"

    def test_api_md_has_expected_structure(self):
        """Top-level headings and table markers must be present."""
        content = API_MD.read_text(encoding="utf-8")
        assert "# Compounding Intelligence — Scripts API Reference" in content
        assert "## `scripts/" in content
        assert "| Function | Signature | Description |" in content

    def test_api_md_covers_expected_scripts(self):
        """At minimum the core scripts should be documented."""
        content = API_MD.read_text(encoding="utf-8")
        # Core scripts that must appear
        core = ["scripts/harvester.py", "scripts/bootstrapper.py",
                "scripts/session_reader.py", "scripts/dedup.py"]
        for rel in core:
            assert f"## `{rel}`" in content, f"Missing section for {rel}"

    def test_api_md_contains_function_names(self):
        """Spot-check: known public functions from key modules must appear."""
        content = API_MD.read_text(encoding="utf-8")
        checks = [
            ("harvester", "read_session"),
            ("bootstrapper", "load_index"),
            ("session_reader", "extract_conversation"),
            ("dedup", "normalize_text"),
        ]
        for module_stem, func_name in checks:
            assert f"| `{func_name}` |" in content, \
                f"Missing function {func_name} from {module_stem}"


# ─── Idempotence / --check ─────────────────────────────────────────────────────
class TestIdempotence:
    def test_check_flag_passes_when_current(self):
        """`--check` should exit 0 immediately after generation."""
        result = subprocess.run(
            [sys.executable, str(GENERATOR), "--check"],
            capture_output=True, text=True, cwd=REPO_ROOT, timeout=30
        )
        assert result.returncode == 0, (
            f"--check failed\nSTDOUT: {result.stdout}\nSTDERR: {result.stderr[:200]}"
        )

    def test_check_fails_when_api_md_stale(self):
        """If docs/API.md is manually altered, --check should detect staleness."""
        # Generate fresh baseline first
        subprocess.run([sys.executable, str(GENERATOR)],
                       capture_output=True, cwd=REPO_ROOT, timeout=30)

        # Corrupt API.md slightly (append a line at the end)
        original = API_MD.read_text(encoding="utf-8")
        corrupted = original + "\n<!-- corrupted -->\n"
        API_MD.write_text(corrupted, encoding="utf-8")

        # --check should now fail
        result = subprocess.run(
            [sys.executable, str(GENERATOR), "--check"],
            capture_output=True, text=True, cwd=REPO_ROOT, timeout=30
        )
        assert result.returncode != 0, "--check should detect stale API.md"
        assert "out-of-date" in result.stderr.lower() or "out-of-date" in result.stdout.lower()

        # Restore clean state
        subprocess.run([sys.executable, str(GENERATOR)],
                       capture_output=True, cwd=REPO_ROOT, timeout=30)
        assert API_MD.read_text(encoding="utf-8") == original


# ─── JSON output ────────────────────────────────────────────────────────────────
class TestJSONOutput:
    def test_json_flag_emits_valid_json(self):
        result = subprocess.run(
            [sys.executable, str(GENERATOR), "--json"],
            capture_output=True, text=True, cwd=REPO_ROOT, timeout=30
        )
        assert result.returncode == 0
        payload = json.loads(result.stdout)
        assert "modules" in payload
        assert len(payload["modules"]) >= 30

    def test_json_has_expected_fields(self):
        result = subprocess.run(
            [sys.executable, str(GENERATOR), "--json"],
            capture_output=True, text=True, cwd=REPO_ROOT, timeout=30
        )
        payload = json.loads(result.stdout)
        mod = payload["modules"][0]
        for key in ("path", "docstring", "functions"):
            assert key in mod, f"Missing key {key} in module payload"


if __name__ == "__main__":
    pytest.main([__file__, "-v"])