- scripts/session_knowledge_extractor.py: new module that parses session JSONL, extracts agent/task/tools/outcome, and generates 10+ facts via LLM (see the sketch after this list)
- templates/session-entity-prompt.md: focused prompt for session entities
- scripts/test_session_knowledge_extractor.py: smoke test (no LLM) verifying 10+ facts per session, entity extraction, dedup, store roundtrip
- Extracts session entities (agent, task, tools used, outcome) and writes relationships to knowledge/index.json and per-repo markdown files
- Target: 10+ knowledge facts per non-trivial session transcript
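A minimal sketch of the module's top-level flow, under the assumptions in the design notes below; `call_llm` is a stand-in for the actual LLM client, and the helper names are illustrative, not the module's real API:

```python
import json
from pathlib import Path

PROMPT_PATH = Path("templates/session-entity-prompt.md")

def parse_session(jsonl_path: Path) -> list[dict]:
    """Load a session transcript: one JSON-encoded message per line."""
    return [json.loads(line)
            for line in jsonl_path.read_text().splitlines()
            if line.strip()]

def extract_knowledge(messages: list[dict], call_llm) -> dict:
    """Fill the prompt template, query the model, parse its JSON reply."""
    transcript = json.dumps(messages, indent=2)
    prompt = PROMPT_PATH.read_text().replace("{{transcript}}", transcript)
    reply = call_llm(prompt)  # stand-in: any client returning raw text
    return json.loads(reply)  # expected: {"knowledge": [...], "meta": {...}}
```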
# Knowledge Extraction Prompt — Session Entities & Relationships

## System Prompt
You are a session knowledge extraction engine. You read Hermes session transcripts and output ONLY structured JSON. You extract session entities (agent, task, tools, outcome) and the relationships between them. You never invent facts not in the transcript.
## Prompt
TASK: Extract knowledge facts from this session transcript. Focus on:
1. AGENT: Which model/agent handled this session
2. TASK: What problem or goal was being solved
3. TOOLS: Which tools were used and what each accomplished
4. OUTCOME: Did the session succeed, partially succeed, or fail?
5. RELATIONSHIPS: How do these entities connect?
RULES:
1. Extract ONLY information explicitly stated or clearly implied by the transcript.
2. Do NOT infer, assume, or hallucinate.
3. Every fact must point to a specific message or tool call as evidence.
4. Generate at least 10 facts. Break complex tool use into multiple atomic facts.
5. Include relationship facts: "session X used tool Y", "agent Z handled session X", "task W was completed by session X".
6. Include outcome facts: success indicators, error conditions, partial completions.
CATEGORIES (assign exactly one):
- fact: Concrete, verifiable statement (paths, commands, results, configs)
- pitfall: Error hit, wrong assumption, time wasted
- pattern: Successful reusable sequence
- tool-quirk: Environment-specific behavior (token paths, URLs, API gotchas)
- question: Something identified but not answered
CONFIDENCE:
- 0.9: Directly observed with explicit output or verification
- 0.7: Multiple data points confirm, but not explicitly verified
- 0.5: Clear implication but not directly stated
- 0.3: Weak inference from limited evidence
OUTPUT FORMAT (valid JSON only, no markdown, no explanation):
{
  "knowledge": [
    {
      "fact": "One specific sentence of knowledge",
      "category": "fact|pitfall|pattern|tool-quirk|question",
      "repo": "repo-name or global",
      "confidence": 0.0-1.0,
      "evidence": "Brief quote or reference from transcript that supports this"
    }
  ],
  "meta": {
    "session_id": "extracted or generated id",
    "session_outcome": "success|partial|failure|unknown",
    "agent": "model name if identifiable",
    "task": "brief description of the goal",
    "tools_used": ["tool1", "tool2"],
    "repos_touched": ["repo1"],
    "fact_count": 0
  }
}
TRANSCRIPT:
{{transcript}}
## Design Notes

### Entity extraction strategy
Agent: Look for `"model": "..."` in assistant messages, or model mentions in the content.
Task: The first user message usually states the goal. If it is vague, fall back to the assistant's restatement of it ("I'll help you X").
Tools: Every `tool_calls` entry is a tool use. Extract the function name and infer its purpose from the arguments.
Outcome: Success indicators: "done", "completed", "merged", "pushed", "created". Failure indicators: HTTP errors (405, 404, 403), stack traces, explicit failure statements.
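A sketch of these heuristics over parsed messages, assuming the OpenAI-style message shape (`role`, `content`, `model`, `tool_calls`) referenced above; real session formats may name fields differently:

```python
SUCCESS_MARKERS = ("done", "completed", "merged", "pushed", "created")
FAILURE_MARKERS = ("405", "404", "403", "traceback")

def detect_entities(messages: list[dict]) -> dict:
    """Apply the agent/task/tools/outcome heuristics to parsed messages."""
    agent = next((m["model"] for m in messages if m.get("model")), "unknown")
    task = next((m.get("content", "") for m in messages
                 if m.get("role") == "user"), "")
    tools = [call["function"]["name"]
             for m in messages
             for call in (m.get("tool_calls") or [])]
    text = " ".join(str(m.get("content", "")) for m in messages).lower()
    succeeded = any(s in text for s in SUCCESS_MARKERS)
    failed = any(f in text for f in FAILURE_MARKERS)
    outcome = ("partial" if succeeded and failed
               else "success" if succeeded
               else "failure" if failed
               else "unknown")
    return {"agent": agent, "task": task[:200],
            "tools_used": tools, "session_outcome": outcome}
```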
Relationships: Treat the session as a central entity. Generate facts like:
- Agent relationship: "session_abc was handled by model xiaomi/mimo-v2-pro"
- Task relationship: "session_abc's task was to merge PR #123"
- Tool relationship: "session_abc used terminal to run 'git clone'"
- Outcome relationship: "session_abc outcome: success — PR merged"
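Given the detected entities, these relationship facts can be emitted mechanically before the LLM pass; a sketch reusing the `detect_entities` output above (the fact strings mirror the examples):

```python
def relationship_facts(session_id: str, meta: dict) -> list[dict]:
    """Turn detected entities into relationship facts in the output schema."""
    facts = [
        {"fact": f"{session_id} was handled by model {meta['agent']}",
         "category": "fact", "repo": "global", "confidence": 0.9,
         "evidence": "model field in assistant messages"},
        {"fact": f"{session_id}'s task was: {meta['task']}",
         "category": "fact", "repo": "global", "confidence": 0.7,
         "evidence": "first user message"},
    ]
    for tool in dict.fromkeys(meta["tools_used"]):  # dedup, keep order
        facts.append({"fact": f"{session_id} used tool {tool}",
                      "category": "fact", "repo": "global", "confidence": 0.9,
                      "evidence": "tool_calls entries"})
    facts.append({"fact": f"{session_id} outcome: {meta['session_outcome']}",
                  "category": "fact", "repo": "global", "confidence": 0.7,
                  "evidence": "success/failure markers in transcript"})
    return facts
```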
### 10+ facts guarantee
Each session with tool usage typically yields:
- 1 fact: agent identity
- 1-2 facts: task/goal (decomposed into sub-goals)
- 3-5 facts: each tool call becomes 1-2 facts (tool name + purpose + result)
- 1-2 facts: outcome details
- 1-2 facts: repo touched

Total: 10+ per non-trivial session.
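The smoke test can enforce this floor without an LLM by running only the mechanical pass on a fixture; a sketch (fixture values are invented):

```python
def test_min_mechanical_fact_count():
    meta = {"agent": "test-model", "task": "merge a PR",
            "tools_used": ["terminal", "terminal", "browser"],
            "session_outcome": "success"}
    facts = relationship_facts("session_test", meta)
    # 1 agent + 1 task + 2 distinct tools + 1 outcome = 5 mechanical facts;
    # the LLM pass adds per-call, repo, and outcome-detail facts toward 10+.
    assert len(facts) == 5
    assert all(0.0 <= f["confidence"] <= 1.0 for f in facts)
```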
### Token budget

The prompt itself costs ~700 tokens (excluding the transcript), leaving most of the context window for long transcripts.
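If a transcript still exceeds the window, a rough character-based guard can trim it; a sketch assuming the common ~4 characters/token heuristic (swap in a real tokenizer where available):

```python
def fit_transcript(transcript: str, context_tokens: int = 128_000,
                   prompt_tokens: int = 700, reply_tokens: int = 4_000) -> str:
    """Truncate so prompt + transcript + reply fit the context window."""
    budget_chars = (context_tokens - prompt_tokens - reply_tokens) * 4
    if len(transcript) <= budget_chars:
        return transcript
    # Keep head (the goal) and tail (the outcome): densest entity signal.
    half = budget_chars // 2
    return transcript[:half] + "\n...[truncated]...\n" + transcript[-half:]
```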