Compare commits


3 Commits

5d8e7bbe4f  docs: Add warm session provisioning README  (2026-04-14 01:40:29 +00:00)
    Documentation for #327 implementation.
    Checks: Forge CI / smoke-and-build (pull_request) failing after 59s

9ede517d4c  feat(cli): Add warm session commands  (2026-04-14 01:39:56 +00:00)
    Part of #327. Adds `hermes warm` command for session template management.

3588283b83  feat(research): Warm session provisioning implementation  (2026-04-14 01:39:15 +00:00)
    Practical implementation for #327. Extracts seed data from existing sessions to bootstrap new sessions with established context and patterns.
6 changed files with 667 additions and 1153 deletions


@@ -1,113 +0,0 @@
# Warm Session Provisioning: Revised Hypothesis
**Research Document v2.0**
**Issue:** #327
**Date:** April 2026
**Status:** Revised Based on Empirical Data
## Executive Summary
Initial hypothesis: Marathon sessions (100+ messages) have lower error rates, suggesting agents improve with experience. This was **partially incorrect**.
**Actual finding:** Error rates INCREASE within marathon sessions (avg first-half: 26.8%, second-half: 32.7%). Sessions don't improve - they degrade.
## Corrected Understanding
### What the Data Actually Shows
1. **Error rates increase over time** within sessions
2. **Marathon sessions appear more reliable** in aggregate because:
- Only well-guided sessions survive to 100+ messages
- Users who correct errors keep sessions alive
- Selection bias: failed sessions end early
3. **User guidance drives success**, not agent adaptation
### Revised Hypothesis
The "proficiency" observed in marathon sessions comes from:
- **User expertise**: Users who know how to guide the agent
- **Established context**: Shared reference points reduce ambiguity
- **Error correction patterns**: Users develop strategies to fix agent mistakes
- **Session survivorship**: Only well-managed sessions reach marathon length
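The survivorship effect is easy to reproduce in a toy simulation. All rates, thresholds, and distributions below are invented for illustration, not measured values from the audit; the point is only that when well-guided sessions are the ones that survive to marathon length, marathon sessions look better in aggregate even though error rates rise within every session:

```python
import random


def simulate_sessions(n=10_000, seed=0):
    """Toy survivorship model: per-message error probability grows with
    position, but only high-skill (well-guided) sessions survive to 100+
    messages. All numbers are invented for illustration."""
    rng = random.Random(seed)
    marathon_rates, short_rates = [], []
    for _ in range(n):
        skill = rng.random()  # user guidance quality, uniform in [0, 1)
        # Survivorship: only well-guided sessions reach marathon length
        length = 120 if skill > 0.6 else rng.randint(5, 60)
        errors = sum(
            # Error probability rises with message position i (degradation)
            rng.random() < min(0.9, 0.2 + 0.002 * i + 0.3 * (1 - skill))
            for i in range(length)
        )
        (marathon_rates if length >= 100 else short_rates).append(errors / length)
    return (sum(marathon_rates) / len(marathon_rates),
            sum(short_rates) / len(short_rates))
```

Despite within-session degradation, the marathon group's aggregate error rate comes out lower than the short group's, because the selection on `skill` dominates the positional growth.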
## New Research Direction
### 1. User Guidance Patterns
Instead of agent proficiency, study user strategies:
- How do expert users phrase requests?
- What correction patterns work best?
- How do users establish context?
### 2. Context Window Management
Long sessions may suffer from context degradation:
- Attention dilution over many messages
- Lost context from early messages
- Compression artifacts
### 3. Warm Session v2: User-Guided Templates
Instead of pre-seeding agent patterns, pre-seed user guidance:
- Effective prompt templates
- Error correction strategies
- Context establishment patterns
## Implementation Plan
### Phase 1: User Pattern Analysis
- Analyze successful user strategies
- Extract effective prompt patterns
- Identify error correction techniques
### Phase 2: Guidance Templates
- Create user-facing templates
- Document effective patterns
- Provide prompt engineering guidance
### Phase 3: Context Management
- Optimize context window usage
- Implement smart context refresh
- Prevent attention degradation
### Phase 4: A/B Testing
- Test guided vs unguided sessions
- Measure error reduction from user guidance
- Statistical validation
## Key Metrics
1. **Error Rate by Position**
- First 10 messages: baseline
- Messages 10-50: degradation rate
- Messages 50+: long-session behavior
2. **User Intervention Rate**
- How often users correct errors
- Success rate of corrections
- Patterns in effective corrections
3. **Context Window Utilization**
- Token usage over time
- Information retention rate
- Compression effectiveness
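As a sketch of the first metric, error rate by position can be computed directly from a session's message list. This assumes the message-dict format (`role`/`content` keys) used by the session database, with the bucket boundaries listed above:

```python
def error_rate_by_position(messages, boundaries=(10, 50)):
    """Bucket tool-result errors by message position.

    Assumes each message is a dict with "role" and "content" keys;
    an error is detected by substring match, as elsewhere in this research.
    """
    buckets = {f"first_{boundaries[0]}": [0, 0],
               f"{boundaries[0]}_to_{boundaries[1]}": [0, 0],
               f"after_{boundaries[1]}": [0, 0]}
    keys = list(buckets)
    for i, msg in enumerate(messages):
        if msg.get("role") != "tool":
            continue
        key = keys[0] if i < boundaries[0] else keys[1] if i < boundaries[1] else keys[2]
        buckets[key][0] += 1  # total tool results in this bucket
        if "error" in msg.get("content", "").lower():
            buckets[key][1] += 1  # errored tool results
    return {k: (err / total if total else 0.0)
            for k, (total, err) in buckets.items()}
```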
## Paper Contributions (Revised)
1. **Counterintuitive finding**: Longer sessions have HIGHER error rates
2. **Selection bias**: Marathon sessions represent survivorship bias
3. **User expertise matters more than agent adaptation**
4. **Context degradation over long sessions**
## Next Steps
1. ✅ Correct initial hypothesis
2. ⏳ Analyze user guidance patterns
3. ⏳ Extract effective prompt strategies
4. ⏳ Create user-facing guidance templates
5. ⏳ Optimize context window management
6. ⏳ Run A/B tests on guided sessions
7. ⏳ Write paper with corrected findings
## References
- Empirical Audit 2026-04-12, Finding 4
- Follow-up Analysis: Comment on #327 (2026-04-13)
- Issue #327 (original hypothesis)



@@ -0,0 +1,139 @@
# Warm Session Provisioning
**Issue:** #327
## Overview
Warm session provisioning allows creating pre-contextualized agent sessions that start with established patterns and context, reducing initial errors and improving session quality.
## Key Concepts
### Session Seed
A `SessionSeed` contains:
- **System context**: Key instructions and context from previous sessions
- **Tool examples**: Successful tool call patterns to establish conventions
- **User patterns**: User interaction style preferences
- **Context markers**: Important files, URLs, and references
### Warm Template
A `WarmTemplate` wraps a seed with metadata:
- Name and description
- Source session ID
- Usage statistics
- Success rate tracking
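A minimal sketch of these two structures as Python dataclasses. The field names follow the concepts listed above; the exact definitions in `tools/warm_session.py` may differ:

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class SessionSeed:
    # Field names mirror the concepts above; illustrative only.
    system_context: str
    tool_examples: List[Dict] = field(default_factory=list)
    user_patterns: List[str] = field(default_factory=list)
    context_markers: List[str] = field(default_factory=list)


@dataclass
class WarmTemplate:
    # Wraps a seed with the metadata listed above.
    template_id: str
    name: str
    description: str
    source_session_id: str
    seed: SessionSeed
    usage_count: int = 0
    success_rate: float = 0.0
```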
## Usage
### Extract Template from Session
```bash
# Create a template from a successful session
hermes warm extract SESSION_ID --name "Code Review Template" --description "For code review tasks"
# The template captures:
# - System context and key instructions
# - Successful tool call examples
# - User interaction patterns
# - Important context markers
```
### List Templates
```bash
hermes warm list
```
Output:
```
=== Warm Session Templates ===
ID: warm_20260413_123456
Name: Code Review Template
Description: For code review tasks
Usage: 5 times, 80% success
```
### Test Warm Session
```bash
# Test what messages would be generated
hermes warm test warm_20260413_123456 "Review this pull request"
```
Output shows the messages that would be sent to the agent, including:
- System context with warm-up information
- Tool call examples
- The actual user message
### Delete Template
```bash
hermes warm delete warm_20260413_123456
```
## How It Works
### 1. Extraction Phase
When you extract a template:
1. System messages provide base context
2. First 10 user messages establish patterns
3. Successful tool calls become examples
4. File paths and URLs become context markers
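The four extraction steps above can be sketched roughly as follows. This is a simplified stand-in for the real `SessionExtractor`, and the regex for context markers is illustrative:

```python
import re


def extract_seed_fields(messages):
    """Rough sketch of the four extraction steps; illustrative only."""
    # 1. System messages provide base context
    system_context = "\n".join(
        m["content"] for m in messages if m.get("role") == "system")
    # 2. First 10 user messages establish patterns
    user_patterns = [m["content"] for m in messages
                     if m.get("role") == "user"][:10]
    # 3. Tool calls become examples
    tool_examples = [tc for m in messages if m.get("role") == "assistant"
                     for tc in m.get("tool_calls", [])]
    # 4. File paths and URLs become context markers
    text = " ".join(m.get("content") or "" for m in messages)
    markers = re.findall(r'(?:https?://\S+|(?:/[\w.-]+)+)', text)
    return system_context, user_patterns, tool_examples, markers
```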
### 2. Bootstrap Phase
When creating a warm session:
1. System context is injected as initial message
2. Tool examples establish successful patterns
3. User message follows the warm-up context
4. Agent starts with established conventions
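The bootstrap steps can be sketched as below, assuming a seed object exposing `system_context`, `tool_examples`, and `context_markers`; the actual `WarmSessionBootstrapper` may assemble messages differently:

```python
def bootstrap_messages(seed, user_message):
    """Assemble a warm session's message list, mirroring the four
    bootstrap steps above. Illustrative sketch only."""
    # 1. System context is injected as initial message
    messages = [{"role": "system", "content": seed.system_context}]
    # 2. Tool examples establish successful patterns
    for example in seed.tool_examples:
        messages.append({"role": "assistant", "tool_calls": [example]})
    if seed.context_markers:
        markers = "Relevant context: " + ", ".join(seed.context_markers)
        messages.append({"role": "system", "content": markers})
    # 3. User message follows the warm-up context
    messages.append({"role": "user", "content": user_message})
    return messages
```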
## Example Workflow
```bash
# 1. Have a successful session
# ... work with the agent on a complex task ...
# 2. Extract template from that session
hermes warm extract abc123 --name "API Integration" --description "REST API work"
# 3. Later, start a new session with warm context
# The agent will have context about:
# - Your coding style
# - Successful tool patterns
# - Common file paths
# - Previous instructions
```
## Benefits
1. **Reduced Initial Errors**: Agent starts with proven patterns
2. **Consistent Behavior**: Established conventions carry over
3. **Faster Context**: No need to re-explain preferences
4. **Quality Tracking**: Success rate shows template effectiveness
## Implementation Details
### Files
- `tools/warm_session.py`: Core implementation
- `~/.hermes/warm_templates/`: Template storage
### Data Flow
```
Session -> SessionExtractor -> SessionSeed -> WarmTemplate
WarmTemplate -> WarmSessionBootstrapper -> Messages -> Agent
```
## Research Context
This implementation addresses Finding #4 from the empirical audit:
- Marathon sessions show different error patterns
- Context establishment affects session quality
- Pre-seeding can improve initial session reliability
## Future Enhancements
1. **Automatic Template Creation**: Create templates from high-performing sessions
2. **Template Sharing**: Export/import templates between installations
3. **A/B Testing**: Compare warm vs cold session performance
4. **Smart Selection**: Automatically choose best template for task type


@@ -5259,31 +5259,33 @@ For more help on a command:
     sessions_parser.set_defaults(func=cmd_sessions)

-    # User guidance command (research #327 revised)
-    guidance_parser = subparsers.add_parser(
-        "guidance",
-        help="User guidance pattern analysis (research)",
-        description="Analyze effective user strategies for agent sessions"
+    # Warm session command
+    warm_parser = subparsers.add_parser(
+        "warm",
+        help="Warm session provisioning",
+        description="Create pre-contextualized sessions from templates"
     )
-    guidance_subparsers = guidance_parser.add_subparsers(dest="guidance_command")
+    warm_subparsers = warm_parser.add_subparsers(dest="warm_command")

-    # Guidance analyze command
-    guidance_analyze = guidance_subparsers.add_parser("analyze", help="Analyze user guidance in a session")
-    guidance_analyze.add_argument("session_id", help="Session ID to analyze")
+    # Extract command
+    warm_extract = warm_subparsers.add_parser("extract", help="Extract template from session")
+    warm_extract.add_argument("session_id", help="Session ID to extract from")
+    warm_extract.add_argument("--name", "-n", required=True, help="Template name")
+    warm_extract.add_argument("--description", "-d", default="", help="Template description")

-    # Guidance create-template command
-    guidance_create = guidance_subparsers.add_parser("create-template", help="Create guidance template from sessions")
-    guidance_create.add_argument("session_ids", nargs="+", help="Session IDs to analyze")
-    guidance_create.add_argument("--name", "-n", help="Template name")
+    # List command
+    warm_subparsers.add_parser("list", help="List available templates")

-    # Guidance list-templates command
-    guidance_subparsers.add_parser("list-templates", help="List available guidance templates")
+    # Test command
+    warm_test = warm_subparsers.add_parser("test", help="Test warm session creation")
+    warm_test.add_argument("template_id", help="Template ID")
+    warm_test.add_argument("message", help="Test message")

-    # Guidance generate-guide command
-    guidance_guide = guidance_subparsers.add_parser("generate-guide", help="Generate user guide from template")
-    guidance_guide.add_argument("profile_id", help="Profile ID to generate guide from")
+    # Delete command
+    warm_delete = warm_subparsers.add_parser("delete", help="Delete a template")
+    warm_delete.add_argument("template_id", help="Template ID to delete")

-    guidance_parser.set_defaults(func=cmd_guidance)
+    warm_parser.set_defaults(func=cmd_warm)

     # =========================================================================
@@ -5628,44 +5630,40 @@ if __name__ == "__main__":
     main()

-def cmd_guidance(args):
-    """Handle user guidance pattern analysis commands."""
+def cmd_warm(args):
+    """Handle warm session commands."""
     from hermes_cli.colors import Colors, color

-    subcmd = getattr(args, 'guidance_command', None)
+    subcmd = getattr(args, 'warm_command', None)
     if subcmd is None:
-        print(color("User Guidance Pattern Analysis (Research #327 Revised)", Colors.CYAN))
+        print(color("Warm Session Provisioning", Colors.CYAN))
         print("\nCommands:")
-        print("  hermes guidance analyze SESSION_ID - Analyze user guidance patterns")
-        print("  hermes guidance create-template SESSION_IDS - Create guidance template")
-        print("  hermes guidance list-templates - List available guidance templates")
-        print("  hermes guidance generate-guide PROFILE_ID - Generate user guide")
-        print("\nNote: Research shows user guidance matters more than agent experience.")
+        print("  hermes warm extract SESSION_ID --name NAME - Extract template from session")
+        print("  hermes warm list - List available templates")
+        print("  hermes warm test TEMPLATE_ID MESSAGE - Test warm session")
+        print("  hermes warm delete TEMPLATE_ID - Delete a template")
         return 0

-    # Import user guidance module
     try:
-        from tools.user_guidance import guidance_command
+        from tools.warm_session import warm_session_cli

         # Convert args to list for the module
         args_list = []
-        if subcmd == "analyze":
-            args_list = ["analyze", args.session_id]
-        elif subcmd == "create-template":
-            args_list = ["create-template"] + args.session_ids
-            if hasattr(args, 'name') and args.name:
-                args_list.extend(["--name", args.name])
-        elif subcmd == "list-templates":
-            args_list = ["list-templates"]
-        elif subcmd == "generate-guide":
-            args_list = ["generate-guide", args.profile_id]
+        if subcmd == "extract":
+            args_list = ["extract", args.session_id, "--name", args.name]
+            if args.description:
+                args_list.extend(["--description", args.description])
+        elif subcmd == "list":
+            args_list = ["list"]
+        elif subcmd == "test":
+            args_list = ["test", args.template_id, args.message]
+        elif subcmd == "delete":
+            args_list = ["delete", args.template_id]

-        return guidance_command(args_list)
+        return warm_session_cli(args_list)
     except ImportError as e:
-        print(color(f"Error: Cannot import user_guidance module: {e}", Colors.RED))
-        print("Make sure tools/user_guidance.py exists")
+        print(color(f"Error: Cannot import warm_session module: {e}", Colors.RED))
         return 1
     except Exception as e:
         print(color(f"Error: {e}", Colors.RED))


@@ -1,229 +0,0 @@
#!/usr/bin/env python3
"""
Test script for user guidance pattern analysis.

This script tests the revised approach for issue #327,
focusing on user guidance patterns rather than agent proficiency.

Issue: #327 (Revised hypothesis)
"""
import sys
import os
from pathlib import Path

# Add the tools directory to path
sys.path.insert(0, str(Path(__file__).parent.parent))


def test_user_guidance_analysis():
    """Test user guidance analysis functionality."""
    print("=== Testing User Guidance Analysis ===\n")
    try:
        from tools.user_guidance import UserGuidanceAnalyzer
        from hermes_state import SessionDB

        session_db = SessionDB()
        analyzer = UserGuidanceAnalyzer(session_db)

        # Get a session to analyze
        sessions = session_db.get_messages.__self__.execute_write(
            "SELECT id FROM sessions ORDER BY started_at DESC LIMIT 1"
        )
        if not sessions:
            print("No sessions found in database.")
            return False

        session_id = sessions[0][0]
        print(f"Analyzing session: {session_id}\n")

        analysis = analyzer.analyze_user_guidance(session_id)
        if "error" in analysis:
            print(f"Analysis error: {analysis['error']}")
            return False

        print(f"Message count: {analysis['message_count']}")

        print("\nPrompt Patterns:")
        for p in analysis.get("prompt_patterns", [])[:3]:
            print(f"  {p['type']}: {'✓' if p.get('success') else '✗'} ({p['length']} chars)")

        print("\nCorrection Patterns:")
        for c in analysis.get("correction_patterns", [])[:2]:
            print(f"  {c['error_content'][:50]}... -> {c['user_correction'][:50]}...")

        print("\nSuccess Metrics:")
        metrics = analysis.get("success_metrics", {})
        print(f"  Tool calls: {metrics.get('tool_calls', 0)}")
        print(f"  Success rate: {metrics.get('success_rate', 0):.0%}")
        print(f"  User corrections: {metrics.get('user_corrections', 0)}")

        return True
    except Exception as e:
        print(f"Test failed: {e}")
        return False
def test_guidance_template_creation():
    """Test guidance template creation."""
    print("\n=== Testing Guidance Template Creation ===\n")
    try:
        from tools.user_guidance import UserGuidanceAnalyzer, GuidanceTemplateGenerator
        from hermes_state import SessionDB

        session_db = SessionDB()
        analyzer = UserGuidanceAnalyzer(session_db)
        generator = GuidanceTemplateGenerator(analyzer)

        # Get sessions
        sessions = session_db.get_messages.__self__.execute_write(
            "SELECT id FROM sessions ORDER BY started_at DESC LIMIT 3"
        )
        if not sessions:
            print("No sessions found.")
            return False

        session_ids = [s[0] for s in sessions]
        print(f"Creating template from {len(session_ids)} sessions\n")

        profile = generator.create_guidance_template(
            session_ids,
            name="Test Guidance Template"
        )

        print(f"Profile ID: {profile.profile_id}")
        print(f"Name: {profile.name}")
        print(f"Prompt patterns: {len(profile.prompt_patterns)}")
        print(f"Correction patterns: {len(profile.correction_patterns)}")

        # Save the template
        from tools.user_guidance import GuidanceTemplateManager
        manager = GuidanceTemplateManager()
        path = manager.save_template(profile)
        print(f"Saved to: {path}")

        return True
    except Exception as e:
        print(f"Test failed: {e}")
        return False
def test_user_guide_generation():
    """Test user guide generation."""
    print("\n=== Testing User Guide Generation ===\n")
    try:
        from tools.user_guidance import UserGuidanceProfile, PromptPattern, CorrectionPattern, ContextStrategy, generate_user_guide

        # Create a test profile
        profile = UserGuidanceProfile(
            profile_id="test_guidance_001",
            name="Test User Guidance",
            description="Test profile for guide generation",
            prompt_patterns=[
                PromptPattern(
                    pattern_type="polite_request",
                    template="Please [action] [details]",
                    success_rate=0.85,
                    usage_count=15,
                    context_requirements=[]
                ),
                PromptPattern(
                    pattern_type="question",
                    template="How do I [action]?",
                    success_rate=0.75,
                    usage_count=20,
                    context_requirements=[]
                )
            ],
            correction_patterns=[
                CorrectionPattern(
                    error_type="file_not_found",
                    correction_strategy="direct",
                    effectiveness=0.90,
                    common_phrases=["Use the correct path: [path]", "The file is at [location]"]
                ),
                CorrectionPattern(
                    error_type="command_not_found",
                    correction_strategy="example",
                    effectiveness=0.80,
                    common_phrases=["Try: [command]", "Use [alternative] instead"]
                )
            ],
            context_strategies=[
                ContextStrategy(
                    strategy_type="file_reference",
                    description="Reference specific files",
                    effectiveness=0.85,
                    token_cost=10
                ),
                ContextStrategy(
                    strategy_type="code_example",
                    description="Provide code examples",
                    effectiveness=0.90,
                    token_cost=50
                )
            ],
            created_at="2026-04-13T00:00:00",
            source_analysis="Test sessions"
        )

        guide = generate_user_guide(profile)
        print("Generated User Guide:")
        print("=" * 50)
        print(guide[:1000] + "..." if len(guide) > 1000 else guide)

        return True
    except Exception as e:
        print(f"Test failed: {e}")
        return False
def main():
    """Run all tests."""
    print("User Guidance Pattern Analysis Test Suite")
    print("=" * 50)

    tests = [
        ("User Guidance Analysis", test_user_guidance_analysis),
        ("Guidance Template Creation", test_guidance_template_creation),
        ("User Guide Generation", test_user_guide_generation)
    ]

    results = []
    for name, test_func in tests:
        print(f"\nRunning: {name}")
        try:
            result = test_func()
            results.append((name, result))
            print(f"Result: {'PASS' if result else 'FAIL'}")
        except Exception as e:
            print(f"Error: {e}")
            results.append((name, False))

    print("\n" + "=" * 50)
    print("Test Results:")
    passed = sum(1 for _, result in results if result)
    total = len(results)
    for name, result in results:
        status = "✓ PASS" if result else "✗ FAIL"
        print(f"  {status}: {name}")
    print(f"\nPassed: {passed}/{total}")
    return 0 if passed == total else 1


if __name__ == "__main__":
    sys.exit(main())


@@ -1,766 +0,0 @@
"""
User Guidance Patterns for Effective Agent Sessions
This module analyzes user strategies that lead to successful agent sessions,
focusing on prompt patterns, error correction techniques, and context management.
Issue: #327 (Revised hypothesis)
"""
import json
import logging
from datetime import datetime
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple
from dataclasses import dataclass, asdict
import re
logger = logging.getLogger(__name__)
@dataclass
class PromptPattern:
"""Effective prompt pattern."""
pattern_type: str # "instruction", "context", "constraint", "example"
template: str
success_rate: float
usage_count: int
context_requirements: List[str] = None
def to_dict(self) -> Dict[str, Any]:
return asdict(self)
@dataclass
class CorrectionPattern:
"""User error correction pattern."""
error_type: str
correction_strategy: str # "direct", "example", "reframe", "constraint"
effectiveness: float # Success rate of this correction
common_phrases: List[str]
@dataclass
class ContextStrategy:
"""Context establishment strategy."""
strategy_type: str # "reference", "example", "constraint", "background"
description: str
effectiveness: float
token_cost: int # Approximate token usage
@dataclass
class UserGuidanceProfile:
"""Profile of effective user guidance strategies."""
profile_id: str
name: str
description: str
prompt_patterns: List[PromptPattern]
correction_patterns: List[CorrectionPattern]
context_strategies: List[ContextStrategy]
created_at: str
source_analysis: str = None
version: str = "1.0"
def to_dict(self) -> Dict[str, Any]:
return {
"profile_id": self.profile_id,
"name": self.name,
"description": self.description,
"prompt_patterns": [p.to_dict() for p in self.prompt_patterns],
"correction_patterns": [asdict(c) for c in self.correction_patterns],
"context_strategies": [asdict(c) for c in self.context_strategies],
"created_at": self.created_at,
"source_analysis": self.source_analysis,
"version": self.version
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> 'UserGuidanceProfile':
"""Create profile from dictionary."""
prompt_patterns = [
PromptPattern(**p) for p in data.get("prompt_patterns", [])
]
correction_patterns = [
CorrectionPattern(**c) for c in data.get("correction_patterns", [])
]
context_strategies = [
ContextStrategy(**c) for c in data.get("context_strategies", [])
]
return cls(
profile_id=data["profile_id"],
name=data["name"],
description=data["description"],
prompt_patterns=prompt_patterns,
correction_patterns=correction_patterns,
context_strategies=context_strategies,
created_at=data.get("created_at", datetime.now().isoformat()),
source_analysis=data.get("source_analysis"),
version=data.get("version", "1.0")
)
class UserGuidanceAnalyzer:
    """Analyze user guidance patterns in sessions."""

    def __init__(self, session_db=None):
        self.session_db = session_db

    def analyze_user_guidance(self, session_id: str) -> Dict[str, Any]:
        """
        Analyze user guidance patterns in a session.

        Returns:
            Dict with user guidance analysis including:
            - prompt_patterns: Effective prompt structures
            - correction_patterns: Error correction strategies
            - context_strategies: How users establish context
            - success_indicators: What makes guidance effective
        """
        if not self.session_db:
            return {"error": "No session database available"}
        try:
            messages = self.session_db.get_messages(session_id)
            if not messages:
                return {"error": "No messages found"}
            analysis = {
                "session_id": session_id,
                "message_count": len(messages),
                "user_messages": self._extract_user_messages(messages),
                "prompt_patterns": self._analyze_prompt_patterns(messages),
                "correction_patterns": self._analyze_corrections(messages),
                "context_strategies": self._analyze_context_strategies(messages),
                "success_metrics": self._calculate_success_metrics(messages)
            }
            return analysis
        except Exception as e:
            logger.error(f"User guidance analysis failed: {e}")
            return {"error": str(e)}

    def _extract_user_messages(self, messages: List[Dict]) -> List[Dict]:
        """Extract user messages with context."""
        user_messages = []
        for i, msg in enumerate(messages):
            if msg.get("role") == "user":
                # Get surrounding context
                context_before = []
                context_after = []
                # Previous assistant message
                if i > 0 and messages[i-1].get("role") == "assistant":
                    context_before.append(messages[i-1].get("content", "")[:200])
                # Next assistant message
                if i < len(messages) - 1 and messages[i+1].get("role") == "assistant":
                    context_after.append(messages[i+1].get("content", "")[:200])
                user_messages.append({
                    "content": msg.get("content", ""),
                    "position": i,
                    "context_before": context_before,
                    "context_after": context_after
                })
        return user_messages

    def _analyze_prompt_patterns(self, messages: List[Dict]) -> List[Dict[str, Any]]:
        """Analyze prompt patterns for effectiveness."""
        patterns = []
        user_messages = [m for m in messages if m.get("role") == "user"]
        for msg in user_messages:
            content = msg.get("content", "")
            # Identify prompt types
            if content.startswith(("Please", "Could you", "Can you")):
                patterns.append({
                    "type": "polite_request",
                    "content": content,
                    "length": len(content),
                    "success": self._check_prompt_success(messages, msg)
                })
            elif "?" in content:
                patterns.append({
                    "type": "question",
                    "content": content,
                    "length": len(content),
                    "success": self._check_prompt_success(messages, msg)
                })
            elif content.startswith(("/", "!")):
                patterns.append({
                    "type": "command",
                    "content": content,
                    "length": len(content),
                    "success": self._check_prompt_success(messages, msg)
                })
            elif len(content) > 200:
                patterns.append({
                    "type": "detailed_request",
                    "content": content,
                    "length": len(content),
                    "success": self._check_prompt_success(messages, msg)
                })
        return patterns

    def _check_prompt_success(self, messages: List[Dict], user_msg: Dict) -> bool:
        """Check if a prompt led to successful execution."""
        # Find the user message position
        user_pos = None
        for i, msg in enumerate(messages):
            if msg == user_msg:
                user_pos = i
                break
        if user_pos is None:
            return False
        # Check if there's a successful tool call after this message
        for i in range(user_pos + 1, min(user_pos + 5, len(messages))):
            msg = messages[i]
            if msg.get("role") == "assistant" and msg.get("tool_calls"):
                # Check if tool result indicates success
                for j in range(i + 1, min(i + 3, len(messages))):
                    if messages[j].get("role") == "tool":
                        content = messages[j].get("content", "")
                        if "error" not in content.lower() and "failed" not in content.lower():
                            return True
        return False
    def _analyze_corrections(self, messages: List[Dict]) -> List[Dict[str, Any]]:
        """Analyze error correction patterns."""
        corrections = []
        # Look for error patterns followed by corrections
        for i in range(len(messages) - 2):
            msg1 = messages[i]
            msg2 = messages[i + 1]
            msg3 = messages[i + 2]
            # Pattern: Assistant error -> User correction -> Assistant success
            if (msg1.get("role") == "tool" and
                    ("error" in msg1.get("content", "").lower() or "failed" in msg1.get("content", "").lower()) and
                    msg2.get("role") == "user" and
                    msg3.get("role") == "assistant"):
                corrections.append({
                    "error_content": msg1.get("content", "")[:200],
                    "user_correction": msg2.get("content", ""),
                    "assistant_response": msg3.get("content", "")[:200],
                    "success": self._check_correction_success(messages, i + 2)
                })
        return corrections

    def _check_correction_success(self, messages: List[Dict], assistant_pos: int) -> bool:
        """Check if a correction led to success."""
        # Look for successful tool calls after correction
        for i in range(assistant_pos + 1, min(assistant_pos + 3, len(messages))):
            if messages[i].get("role") == "tool":
                content = messages[i].get("content", "")
                if "error" not in content.lower() and "failed" not in content.lower():
                    return True
        return False

    def _analyze_context_strategies(self, messages: List[Dict]) -> List[Dict[str, Any]]:
        """Analyze how users establish context."""
        strategies = []
        user_messages = [m for m in messages if m.get("role") == "user"]
        for msg in user_messages[:10]:  # Analyze first 10 user messages
            content = msg.get("content", "")
            # Identify context establishment strategies
            if re.search(r'[/.\][\w/.-]+\.\w+', content):
                strategies.append({
                    "type": "file_reference",
                    "content": content[:200],
                    "tokens": len(content.split())
                })
            elif "```" in content:
                strategies.append({
                    "type": "code_example",
                    "content": content[:200],
                    "tokens": len(content.split())
                })
            elif len(content) > 300:
                strategies.append({
                    "type": "detailed_background",
                    "content": content[:200],
                    "tokens": len(content.split())
                })
        return strategies

    def _calculate_success_metrics(self, messages: List[Dict]) -> Dict[str, Any]:
        """Calculate success metrics for the session."""
        tool_calls = 0
        successful_tool_calls = 0
        user_corrections = 0
        successful_corrections = 0
        for i, msg in enumerate(messages):
            if msg.get("role") == "assistant" and msg.get("tool_calls"):
                tool_calls += 1
            if msg.get("role") == "tool":
                content = msg.get("content", "")
                if "error" not in content.lower() and "failed" not in content.lower():
                    successful_tool_calls += 1
            # Count corrections
            if (msg.get("role") == "user" and i > 0 and
                    messages[i-1].get("role") == "tool" and
                    ("error" in messages[i-1].get("content", "").lower() or
                     "failed" in messages[i-1].get("content", "").lower())):
                user_corrections += 1
        return {
            "tool_calls": tool_calls,
            "successful_tool_calls": successful_tool_calls,
            "success_rate": successful_tool_calls / tool_calls if tool_calls > 0 else 0,
            "user_corrections": user_corrections,
            "messages_per_correction": len(messages) / user_corrections if user_corrections > 0 else 0
        }
class GuidanceTemplateGenerator:
    """Generate user guidance templates from analysis."""

    def __init__(self, analyzer: UserGuidanceAnalyzer = None):
        self.analyzer = analyzer or UserGuidanceAnalyzer()

    def create_guidance_template(self, session_ids: List[str], name: str = None) -> UserGuidanceProfile:
        """
        Create a guidance template from multiple sessions.

        Args:
            session_ids: List of session IDs to analyze
            name: Template name

        Returns:
            UserGuidanceProfile with extracted patterns
        """
        all_patterns = []
        all_corrections = []
        all_strategies = []
        for session_id in session_ids:
            analysis = self.analyzer.analyze_user_guidance(session_id)
            if "error" in analysis:
                logger.warning(f"Skipping session {session_id}: {analysis['error']}")
                continue
            all_patterns.extend(analysis.get("prompt_patterns", []))
            all_corrections.extend(analysis.get("correction_patterns", []))
            all_strategies.extend(analysis.get("context_strategies", []))

        # Aggregate patterns
        prompt_patterns = self._aggregate_prompt_patterns(all_patterns)
        correction_patterns = self._aggregate_corrections(all_corrections)
        context_strategies = self._aggregate_strategies(all_strategies)

        profile = UserGuidanceProfile(
            profile_id=f"guidance_{datetime.now().strftime('%Y%m%d_%H%M%S')}",
            name=name or "User Guidance Template",
            description=f"Extracted from {len(session_ids)} sessions",
            prompt_patterns=prompt_patterns,
            correction_patterns=correction_patterns,
            context_strategies=context_strategies,
            created_at=datetime.now().isoformat(),
            source_analysis=f"Sessions: {', '.join(session_ids[:5])}{'...' if len(session_ids) > 5 else ''}"
        )
        return profile

    def _aggregate_prompt_patterns(self, patterns: List[Dict]) -> List[PromptPattern]:
        """Aggregate prompt patterns by type."""
        pattern_groups = {}
        for p in patterns:
            ptype = p.get("type", "unknown")
            if ptype not in pattern_groups:
                pattern_groups[ptype] = {"count": 0, "successes": 0, "examples": []}
            pattern_groups[ptype]["count"] += 1
            if p.get("success"):
                pattern_groups[ptype]["successes"] += 1
            if len(pattern_groups[ptype]["examples"]) < 3:
                pattern_groups[ptype]["examples"].append(p.get("content", "")[:100])
        result = []
        for ptype, data in pattern_groups.items():
            success_rate = data["successes"] / data["count"] if data["count"] > 0 else 0
            # Create template from examples
            template = self._create_template_from_examples(data["examples"], ptype)
            result.append(PromptPattern(
                pattern_type=ptype,
                template=template,
                success_rate=success_rate,
                usage_count=data["count"],
                context_requirements=[]
            ))
        return result

    def _aggregate_corrections(self, corrections: List[Dict]) -> List[CorrectionPattern]:
        """Aggregate correction patterns."""
        correction_groups = {}
        for c in corrections:
            # Simplify error type
            error_content = c.get("error_content", "").lower()
            if "filenotfound" in error_content or "no such file" in error_content:
                error_type = "file_not_found"
            elif "permission" in error_content:
                error_type = "permission_denied"
            elif "command not found" in error_content:
                error_type = "command_not_found"
            else:
                error_type = "general_error"
            if error_type not in correction_groups:
                correction_groups[error_type] = {"count": 0, "successes": 0, "examples": []}
            correction_groups[error_type]["count"] += 1
            if c.get("success"):
                correction_groups[error_type]["successes"] += 1
            if len(correction_groups[error_type]["examples"]) < 3:
                correction_groups[error_type]["examples"].append(c.get("user_correction", "")[:100])
        result = []
        for error_type, data in correction_groups.items():
            effectiveness = data["successes"] / data["count"] if data["count"] > 0 else 0
            # Determine correction strategy
            if data["examples"]:
                first_example = data["examples"][0].lower()
                if "try" in first_example or "instead" in first_example:
                    strategy = "reframe"
                elif "use" in first_example or "run" in first_example:
                    strategy = "direct"
                elif "like this" in first_example or "example" in first_example:
                    strategy = "example"
                else:
                    strategy = "constraint"
            else:
                strategy = "unknown"
            result.append(CorrectionPattern(
                error_type=error_type,
                correction_strategy=strategy,
                effectiveness=effectiveness,
                common_phrases=data["examples"][:3]
            ))
        return result

    def _aggregate_strategies(self, strategies: List[Dict]) -> List[ContextStrategy]:
        """Aggregate context strategies."""
        strategy_groups = {}
        for s in strategies:
            stype = s.get("type", "unknown")
            if stype not in strategy_groups:
                strategy_groups[stype] = {"count": 0, "tokens": []}
            strategy_groups[stype]["count"] += 1
            strategy_groups[stype]["tokens"].append(s.get("tokens", 0))
        result = []
        for stype, data in strategy_groups.items():
            avg_tokens = sum(data["tokens"]) / len(data["tokens"]) if data["tokens"] else 0
            result.append(ContextStrategy(
                strategy_type=stype,
                description=f"Used {data['count']} times, avg {avg_tokens:.0f} tokens",
                effectiveness=0.5,  # Would need more analysis
                token_cost=int(avg_tokens)
            ))
        return result

    def _create_template_from_examples(self, examples: List[str], ptype: str) -> str:
        """Create a template from examples."""
        if not examples:
            return f"Example {ptype} prompt"
        # Simple template creation
        if ptype == "polite_request":
            return "Please [action] [details]"
        elif ptype == "question":
            return "How do I [action]?"
        elif ptype == "command":
            return "/[command] [arguments]"
        elif ptype == "detailed_request":
            return "I need to [goal]. Specifically, [details]. Context: [background]"
        else:
            return examples[0][:50] + "..."
class GuidanceTemplateManager:
"""Manage user guidance templates."""
def __init__(self, template_dir: Optional[Path] = None):
self.template_dir = template_dir or Path.home() / ".hermes" / "guidance_templates"
self.template_dir.mkdir(parents=True, exist_ok=True)
def save_template(self, profile: UserGuidanceProfile) -> Path:
"""Save a guidance template."""
template_path = self.template_dir / f"{profile.profile_id}.json"
with open(template_path, 'w') as f:
json.dump(profile.to_dict(), f, indent=2)
logger.info(f"Saved guidance template {profile.profile_id} to {template_path}")
return template_path
def load_template(self, profile_id: str) -> Optional[UserGuidanceProfile]:
"""Load a guidance template."""
template_path = self.template_dir / f"{profile_id}.json"
if not template_path.exists():
logger.warning(f"Template {profile_id} not found")
return None
try:
with open(template_path, 'r') as f:
data = json.load(f)
return UserGuidanceProfile.from_dict(data)
except Exception as e:
logger.error(f"Failed to load template {profile_id}: {e}")
return None
def list_templates(self) -> List[Dict[str, Any]]:
"""List all available templates."""
templates = []
for template_path in self.template_dir.glob("*.json"):
try:
with open(template_path, 'r') as f:
data = json.load(f)
templates.append({
"profile_id": data.get("profile_id"),
"name": data.get("name"),
"description": data.get("description"),
"created_at": data.get("created_at"),
"prompt_patterns": len(data.get("prompt_patterns", [])),
"correction_patterns": len(data.get("correction_patterns", []))
})
except Exception as e:
logger.warning(f"Failed to read template {template_path}: {e}")
return templates
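The manager's save/load cycle is a plain dataclass-to-JSON round-trip under `<dir>/<id>.json`. Here is a self-contained sketch of that round-trip, using a hypothetical `MiniProfile` stand-in for `UserGuidanceProfile` (which carries many more fields):

```python
import json
import tempfile
from dataclasses import dataclass, asdict
from pathlib import Path

@dataclass
class MiniProfile:
    profile_id: str
    name: str

def save_profile(profile: MiniProfile, directory) -> Path:
    # Serialize to <directory>/<profile_id>.json, as save_template does
    path = Path(directory) / f"{profile.profile_id}.json"
    path.write_text(json.dumps(asdict(profile), indent=2))
    return path

def load_profile(profile_id: str, directory) -> MiniProfile:
    data = json.loads((Path(directory) / f"{profile_id}.json").read_text())
    return MiniProfile(**data)

with tempfile.TemporaryDirectory() as d:
    save_profile(MiniProfile("p1", "demo"), d)
    print(load_profile("p1", d))  # MiniProfile(profile_id='p1', name='demo')
```

One file per profile keyed by `profile_id` keeps `list_templates` trivial (a `glob("*.json")`) at the cost of trusting the filename to match the embedded id.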
def generate_user_guide(profile: UserGuidanceProfile) -> str:
"""Generate a user-facing guide from a guidance profile."""
guide = f"""# Effective Agent Session Guide
**Template:** {profile.name}
**Generated:** {profile.created_at}
**Source:** {profile.source_analysis or "Multiple sessions"}
## Effective Prompt Patterns
"""
for pattern in sorted(profile.prompt_patterns, key=lambda x: x.success_rate, reverse=True):
guide += f"### {pattern.pattern_type.replace('_', ' ').title()}\n"
guide += f"**Success Rate:** {pattern.success_rate:.0%}\n"
guide += f"**Usage:** {pattern.usage_count} times\n"
guide += f"**Template:** `{pattern.template}`\n"
guide += "## Error Correction Strategies\n"
for correction in sorted(profile.correction_patterns, key=lambda x: x.effectiveness, reverse=True):
guide += f"### {correction.error_type.replace('_', ' ').title()}\n"
guide += f"**Effectiveness:** {correction.effectiveness:.0%}\n"
guide += f"**Strategy:** {correction.correction_strategy}\n"
if correction.common_phrases:
guide += f'**Example:** "{correction.common_phrases[0]}"\n'
guide += "\n"
guide += "## Context Establishment Tips\n"
for strategy in profile.context_strategies:
guide += f"- **{strategy.strategy_type.replace('_', ' ').title()}:** {strategy.description}\n"
guide += """
## Key Insights
1. **Be specific:** Vague prompts lead to errors
2. **Provide context:** Help the agent understand your environment
3. **Use examples:** Show what you want when possible
4. **Correct effectively:** Use the strategies above when errors occur
5. **Manage context:** Don't overload with unnecessary information
## Remember
- Agent sessions degrade over time (error rates increase)
- Your guidance matters more than agent "experience"
- Use the patterns above to improve success rates
"""
return guide
# CLI Integration
def guidance_command(args):
"""CLI command for user guidance analysis."""
import argparse
parser = argparse.ArgumentParser(description="User guidance pattern analysis")
subparsers = parser.add_subparsers(dest="command")
# Analyze command
analyze_parser = subparsers.add_parser("analyze", help="Analyze user guidance in a session")
analyze_parser.add_argument("session_id", help="Session ID to analyze")
# Create template command
create_parser = subparsers.add_parser("create-template", help="Create guidance template from sessions")
create_parser.add_argument("session_ids", nargs="+", help="Session IDs to analyze")
create_parser.add_argument("--name", "-n", help="Template name")
# List templates command
subparsers.add_parser("list-templates", help="List available guidance templates")
# Generate guide command
guide_parser = subparsers.add_parser("generate-guide", help="Generate user guide from template")
guide_parser.add_argument("profile_id", help="Profile ID to generate guide from")
# Parse args
parsed = parser.parse_args(args)
if not parsed.command:
parser.print_help()
return 1
# Import session DB
try:
from hermes_state import SessionDB
session_db = SessionDB()
except ImportError:
print("Error: Cannot import SessionDB")
return 1
if parsed.command == "analyze":
analyzer = UserGuidanceAnalyzer(session_db)
analysis = analyzer.analyze_user_guidance(parsed.session_id)
print(f"\n=== User Guidance Analysis: {parsed.session_id} ===\n")
if "error" in analysis:
print(f"Error: {analysis['error']}")
return 1
print(f"Messages: {analysis['message_count']}")
print("\nPrompt Patterns:")
for p in analysis.get("prompt_patterns", [])[:5]:
print(f"  {p['type']}: {'✓' if p.get('success') else '✗'} ({p['length']} chars)")
print("\nCorrection Patterns:")
for c in analysis.get("correction_patterns", [])[:3]:
print(f" {c['error_content'][:50]}... -> {c['user_correction'][:50]}...")
print("\nSuccess Metrics:")
metrics = analysis.get("success_metrics", {})
print(f" Tool calls: {metrics.get('tool_calls', 0)}")
print(f" Success rate: {metrics.get('success_rate', 0):.0%}")
print(f" User corrections: {metrics.get('user_corrections', 0)}")
return 0
elif parsed.command == "create-template":
analyzer = UserGuidanceAnalyzer(session_db)
generator = GuidanceTemplateGenerator(analyzer)
profile = generator.create_guidance_template(
parsed.session_ids,
name=parsed.name
)
manager = GuidanceTemplateManager()
path = manager.save_template(profile)
print(f"Created guidance template: {profile.profile_id}")
print(f"Saved to: {path}")
print(f"Prompt patterns: {len(profile.prompt_patterns)}")
print(f"Correction patterns: {len(profile.correction_patterns)}")
return 0
elif parsed.command == "list-templates":
manager = GuidanceTemplateManager()
templates = manager.list_templates()
if not templates:
print("No templates found.")
return 0
print("\n=== Available Guidance Templates ===\n")
for t in templates:
print(f"ID: {t['profile_id']}")
print(f"Name: {t['name']}")
print(f"Description: {t['description']}")
print(f"Prompt patterns: {t['prompt_patterns']}")
print(f"Correction patterns: {t['correction_patterns']}")
print()
return 0
elif parsed.command == "generate-guide":
manager = GuidanceTemplateManager()
profile = manager.load_template(parsed.profile_id)
if not profile:
print(f"Template {parsed.profile_id} not found")
return 1
guide = generate_user_guide(profile)
print(guide)
# Also save to file
guide_path = manager.template_dir / f"{parsed.profile_id}_guide.md"
with open(guide_path, 'w') as f:
f.write(guide)
print(f"\nGuide saved to: {guide_path}")
return 0
return 1
if __name__ == "__main__":
import sys
sys.exit(guidance_command(sys.argv[1:]))
tools/warm_session.py (new file)
@@ -0,0 +1,485 @@
"""
Warm Session Provisioning: Practical Implementation
Provides mechanisms to create pre-contextualized sessions that start
with established patterns and context, reducing initial errors.
Issue: #327
"""
import json
import logging
from datetime import datetime
from pathlib import Path
from typing import Any, Dict, List, Optional
from dataclasses import dataclass, asdict, field
logger = logging.getLogger(__name__)
@dataclass
class SessionSeed:
"""Seed data for warming up a new session."""
system_context: str = ""
tool_examples: List[Dict[str, Any]] = field(default_factory=list)
user_patterns: Dict[str, Any] = field(default_factory=dict)
context_markers: List[str] = field(default_factory=list)
def to_dict(self) -> Dict[str, Any]:
return asdict(self)
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> 'SessionSeed':
return cls(**data)
@dataclass
class WarmTemplate:
"""Template for creating warm sessions."""
template_id: str
name: str
description: str
seed: SessionSeed
created_at: str
source_session_id: Optional[str] = None
usage_count: int = 0
success_rate: float = 0.0
def to_dict(self) -> Dict[str, Any]:
return {
"template_id": self.template_id,
"name": self.name,
"description": self.description,
"seed": self.seed.to_dict(),
"created_at": self.created_at,
"source_session_id": self.source_session_id,
"usage_count": self.usage_count,
"success_rate": self.success_rate
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> 'WarmTemplate':
seed = SessionSeed.from_dict(data.get("seed", {}))
return cls(
template_id=data["template_id"],
name=data["name"],
description=data["description"],
seed=seed,
created_at=data.get("created_at", datetime.now().isoformat()),
source_session_id=data.get("source_session_id"),
usage_count=data.get("usage_count", 0),
success_rate=data.get("success_rate", 0.0)
)
class SessionExtractor:
"""Extract seed data from existing sessions."""
def __init__(self, session_db=None):
self.session_db = session_db
def extract_seed(self, session_id: str) -> Optional[SessionSeed]:
"""Extract seed data from a session."""
if not self.session_db:
return None
try:
messages = self.session_db.get_messages(session_id)
if not messages:
return None
# Extract system context
system_context = self._extract_system_context(messages)
# Extract successful tool examples
tool_examples = self._extract_tool_examples(messages)
# Extract user patterns
user_patterns = self._extract_user_patterns(messages)
# Extract context markers
context_markers = self._extract_context_markers(messages)
return SessionSeed(
system_context=system_context,
tool_examples=tool_examples,
user_patterns=user_patterns,
context_markers=context_markers
)
except Exception as e:
logger.error(f"Failed to extract seed: {e}")
return None
def _extract_system_context(self, messages: List[Dict]) -> str:
"""Extract useful system context from messages."""
context_parts = []
# Look for system messages
for msg in messages:
if msg.get("role") == "system":
content = msg.get("content", "")
# Take first 500 chars of system context
if content:
context_parts.append(content[:500])
break
# Extract key user instructions
user_instructions = []
for msg in messages[:10]: # First 10 messages
if msg.get("role") == "user":
content = msg.get("content", "")
if len(content) > 50 and "?" not in content[:20]: # Likely instructions
user_instructions.append(content[:200])
if len(user_instructions) >= 3:
break
if user_instructions:
context_parts.append("\nKey instructions from session:\n" + "\n".join(f"- {i}" for i in user_instructions))
return "\n".join(context_parts)[:1000]
def _extract_tool_examples(self, messages: List[Dict]) -> List[Dict[str, Any]]:
"""Extract successful tool call examples."""
examples = []
for i, msg in enumerate(messages):
if msg.get("role") == "assistant" and msg.get("tool_calls"):
# Check if there's a successful result
for j in range(i + 1, min(i + 3, len(messages))):
if messages[j].get("role") == "tool":
content = messages[j].get("content", "")
# Check for success indicators
if content and "error" not in content.lower()[:100]:
for tool_call in msg["tool_calls"]:
func = tool_call.get("function", {})
examples.append({
"tool": func.get("name"),
"arguments": func.get("arguments", "{}"),
"result_preview": content[:200]
})
if len(examples) >= 5:
break
break
if len(examples) >= 5:
break
return examples
def _extract_user_patterns(self, messages: List[Dict]) -> Dict[str, Any]:
"""Extract user interaction patterns."""
user_messages = [m for m in messages if m.get("role") == "user"]
if not user_messages:
return {}
# Calculate patterns
lengths = [len(m.get("content", "")) for m in user_messages]
avg_length = sum(lengths) / len(lengths)
# Count question types
questions = sum(1 for m in user_messages if "?" in m.get("content", ""))
commands = sum(1 for m in user_messages if m.get("content", "").startswith(("/", "!")))
return {
"message_count": len(user_messages),
"avg_length": avg_length,
"question_ratio": questions / len(user_messages),
"command_ratio": commands / len(user_messages),
"preferred_style": "command" if commands > questions else "conversational"
}
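Stripped of the message-dict plumbing, `_extract_user_patterns` computes a handful of ratios over the user turns. A minimal sketch over bare content strings (the sample messages are hypothetical):

```python
def summarize_user_messages(contents):
    """Compute the same interaction-style stats as _extract_user_patterns."""
    n = len(contents)
    questions = sum(1 for c in contents if "?" in c)
    commands = sum(1 for c in contents if c.startswith(("/", "!")))
    return {
        "message_count": n,
        "avg_length": sum(len(c) for c in contents) / n,
        "question_ratio": questions / n,
        "command_ratio": commands / n,
        # Ties fall through to "conversational", as in the original
        "preferred_style": "command" if commands > questions else "conversational",
    }

print(summarize_user_messages(["/run tests", "why did it fail?", "/fix it"]))
```

The `preferred_style` field is what `WarmSessionBootstrapper._build_warm_context` later folds into the system prompt ("User prefers command interactions.").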
def _extract_context_markers(self, messages: List[Dict]) -> List[str]:
"""Extract important context markers."""
import re  # hoisted out of the per-message loop below
markers = set()
for msg in messages:
content = msg.get("content", "")
# File paths
paths = re.findall(r'[\w/\.]+\.[\w]+', content)
markers.update(p for p in paths if len(p) < 50)
# URLs
urls = re.findall(r'https?://[^\s]+', content)
markers.update(u[:80] for u in urls[:3])
if len(markers) > 20:
break
return list(markers)[:20]
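The marker extraction above is two regexes over the message text. This sketch isolates them (sorted output for determinism, where the original keeps unordered set contents; the sample text is hypothetical):

```python
import re

def extract_markers(text: str, limit: int = 20):
    """Pull file-path-like tokens and URLs out of free text."""
    markers = set()
    # File paths: word/dot/slash runs that end in a dot-separated extension
    markers.update(p for p in re.findall(r'[\w/\.]+\.[\w]+', text) if len(p) < 50)
    # URLs, truncated to 80 chars, at most 3 per text
    markers.update(u[:80] for u in re.findall(r'https?://[^\s]+', text)[:3])
    return sorted(markers)[:limit]

print(extract_markers("edit src/main.py then update docs/README.md"))
# ['docs/README.md', 'src/main.py']
```

Note the path regex also fires on the host portion of URLs (e.g. `//example.com`), so URL-heavy sessions yield some duplicate-looking markers; the 20-marker cap keeps this noise bounded.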
class WarmSessionManager:
"""Manage warm session templates."""
def __init__(self, template_dir: Optional[Path] = None):
self.template_dir = template_dir or Path.home() / ".hermes" / "warm_templates"
self.template_dir.mkdir(parents=True, exist_ok=True)
def save_template(self, template: WarmTemplate) -> Path:
"""Save a warm template."""
path = self.template_dir / f"{template.template_id}.json"
with open(path, 'w') as f:
json.dump(template.to_dict(), f, indent=2)
return path
def load_template(self, template_id: str) -> Optional[WarmTemplate]:
"""Load a warm template."""
path = self.template_dir / f"{template_id}.json"
if not path.exists():
return None
try:
with open(path, 'r') as f:
data = json.load(f)
return WarmTemplate.from_dict(data)
except Exception as e:
logger.error(f"Failed to load template: {e}")
return None
def list_templates(self) -> List[Dict[str, Any]]:
"""List all templates."""
templates = []
for path in self.template_dir.glob("*.json"):
try:
with open(path, 'r') as f:
data = json.load(f)
templates.append({
"template_id": data.get("template_id"),
"name": data.get("name"),
"description": data.get("description"),
"usage_count": data.get("usage_count", 0),
"success_rate": data.get("success_rate", 0.0)
})
except Exception as e:
logger.warning(f"Failed to read template {path}: {e}")
return templates
def delete_template(self, template_id: str) -> bool:
"""Delete a template."""
path = self.template_dir / f"{template_id}.json"
if path.exists():
path.unlink()
return True
return False
class WarmSessionBootstrapper:
"""Bootstrap warm sessions from templates."""
def __init__(self, manager: Optional[WarmSessionManager] = None):
self.manager = manager or WarmSessionManager()
def prepare_messages(
self,
template: WarmTemplate,
user_message: str,
include_examples: bool = True
) -> List[Dict[str, Any]]:
"""Prepare messages for a warm session."""
messages = []
# Add warm context as system message
warm_context = self._build_warm_context(template.seed)
if warm_context:
messages.append({
"role": "system",
"content": warm_context
})
# Add tool examples if requested
if include_examples and template.seed.tool_examples:
example_messages = self._create_example_messages(template.seed.tool_examples)
messages.extend(example_messages)
# Add the actual user message
messages.append({
"role": "user",
"content": user_message
})
return messages
def _build_warm_context(self, seed: SessionSeed) -> str:
"""Build warm context from seed."""
parts = []
if seed.system_context:
parts.append(seed.system_context)
if seed.context_markers:
parts.append("\nKnown context: " + ", ".join(seed.context_markers[:10]))
if seed.user_patterns:
style = seed.user_patterns.get("preferred_style", "balanced")
parts.append(f"\nUser prefers {style} interactions.")
return "\n".join(parts)[:1500]
def _create_example_messages(self, examples: List[Dict]) -> List[Dict]:
"""Create example messages from tool examples."""
messages = []
for i, ex in enumerate(examples[:3]): # Limit to 3 examples
# User request
messages.append({
"role": "user",
"content": f"[Example {i+1}] Use {ex['tool']}"
})
# Assistant with tool call
messages.append({
"role": "assistant",
"content": f"I'll use {ex['tool']}.",
"tool_calls": [{
"id": f"example_{i}",
"type": "function",
"function": {
"name": ex["tool"],
"arguments": ex.get("arguments", "{}")
}
}]
})
# Tool result
messages.append({
"role": "tool",
"tool_call_id": f"example_{i}",
"content": ex.get("result_preview", "Success")
})
return messages
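End to end, the bootstrapper emits one system message, up to three user/assistant/tool example triplets, then the real user turn. A condensed, self-contained sketch of that layout (sample context and tool name are hypothetical):

```python
def warm_messages(warm_context, tool_examples, user_message):
    """Assemble system context + few-shot tool triplets + the real user turn."""
    messages = [{"role": "system", "content": warm_context}]
    for i, ex in enumerate(tool_examples[:3]):  # cap at 3 examples
        call_id = f"example_{i}"
        messages.append({"role": "user", "content": f"[Example {i+1}] Use {ex['tool']}"})
        messages.append({
            "role": "assistant",
            "content": f"I'll use {ex['tool']}.",
            "tool_calls": [{"id": call_id, "type": "function",
                            "function": {"name": ex["tool"],
                                         "arguments": ex.get("arguments", "{}")}}],
        })
        messages.append({"role": "tool", "tool_call_id": call_id,
                         "content": ex.get("result_preview", "Success")})
    messages.append({"role": "user", "content": user_message})
    return messages

msgs = warm_messages("Known context: repo uses pytest.",
                     [{"tool": "read_file"}], "Run the tests")
print([m["role"] for m in msgs])  # ['system', 'user', 'assistant', 'tool', 'user']
```

Keeping the triplets well-formed (each `tool` message echoing the `tool_call_id` of the preceding assistant turn) matters: chat APIs that validate tool-call pairing will reject a seed with dangling ids.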
# CLI Functions
def warm_session_cli(args: List[str]) -> int:
"""CLI interface for warm session management."""
import argparse
parser = argparse.ArgumentParser(description="Warm session provisioning")
subparsers = parser.add_subparsers(dest="command")
# Extract command
extract_parser = subparsers.add_parser("extract", help="Extract template from session")
extract_parser.add_argument("session_id", help="Session ID to extract from")
extract_parser.add_argument("--name", "-n", required=True, help="Template name")
extract_parser.add_argument("--description", "-d", default="", help="Template description")
# List command
subparsers.add_parser("list", help="List available templates")
# Test command
test_parser = subparsers.add_parser("test", help="Test warm session creation")
test_parser.add_argument("template_id", help="Template ID")
test_parser.add_argument("message", help="Test message")
# Delete command
delete_parser = subparsers.add_parser("delete", help="Delete a template")
delete_parser.add_argument("template_id", help="Template ID to delete")
parsed = parser.parse_args(args)
if not parsed.command:
parser.print_help()
return 1
manager = WarmSessionManager()
if parsed.command == "extract":
try:
from hermes_state import SessionDB
session_db = SessionDB()
except ImportError:
print("Error: Cannot import SessionDB")
return 1
extractor = SessionExtractor(session_db)
seed = extractor.extract_seed(parsed.session_id)
if not seed:
print(f"Failed to extract seed from session {parsed.session_id}")
return 1
template = WarmTemplate(
template_id=f"warm_{datetime.now().strftime('%Y%m%d_%H%M%S')}",
name=parsed.name,
description=parsed.description,
seed=seed,
created_at=datetime.now().isoformat(),
source_session_id=parsed.session_id
)
path = manager.save_template(template)
print(f"Created template: {template.template_id}")
print(f"Saved to: {path}")
print(f"Tool examples: {len(seed.tool_examples)}")
print(f"Context markers: {len(seed.context_markers)}")
return 0
elif parsed.command == "list":
templates = manager.list_templates()
if not templates:
print("No templates found.")
return 0
print("\n=== Warm Session Templates ===\n")
for t in templates:
print(f"ID: {t['template_id']}")
print(f" Name: {t['name']}")
print(f" Description: {t['description']}")
print(f" Usage: {t['usage_count']} times, {t['success_rate']:.0%} success")
print()
return 0
elif parsed.command == "test":
template = manager.load_template(parsed.template_id)
if not template:
print(f"Template {parsed.template_id} not found")
return 1
bootstrapper = WarmSessionBootstrapper(manager)
messages = bootstrapper.prepare_messages(template, parsed.message)
print(f"\n=== Warm Session Test: {template.name} ===\n")
print(f"Generated {len(messages)} messages:\n")
for i, msg in enumerate(messages):
role = msg.get("role", "unknown")
content = msg.get("content", "")
if role == "system":
print(f"[System Context] ({len(content)} chars)")
print(content[:200] + "..." if len(content) > 200 else content)
elif role == "user":
print(f"\n[User]: {content}")
elif role == "assistant":
print(f"[Assistant]: {content}")
if msg.get("tool_calls"):
for tc in msg["tool_calls"]:
func = tc.get("function", {})
print(f" -> {func.get('name')}({func.get('arguments', '{}')[:50]})")
elif role == "tool":
print(f" [Result]: {content[:100]}...")
return 0
elif parsed.command == "delete":
if manager.delete_template(parsed.template_id):
print(f"Deleted template: {parsed.template_id}")
return 0
else:
print(f"Template {parsed.template_id} not found")
return 1
return 1
if __name__ == "__main__":
import sys
sys.exit(warm_session_cli(sys.argv[1:]))