Compare commits
2 Commits
step35/232...main

| Author | SHA1 | Date |
|---|---|---|
| | 2a4e73aa03 | |
| | b1a728f5f4 | |
@@ -1,207 +0,0 @@

# Swarm Memory Architecture — Design Note

**Issue:** #232 — [ATLAS][Research] Solve the swarm-memory gap for concurrent subagents
**Repo:** Timmy_Foundation/compounding-intelligence
**Status:** Research — Design Draft
**Author:** step35 (burn)
**Date:** 2026-04-26

---

## 1. Problem Statement

The compounding-intelligence pipelines assume a **session-bounded** memory model: each agent session starts with injected bootstrap context, runs, produces a transcript, then ends. Knowledge is harvested *after* the session and injected *before* the next.

But **concurrent subagents** (multiple simultaneous agents working parallel tasks) break this model:

- **No shared scratch space:** Each subagent operates in isolation; discoveries in sibling sessions aren't visible until the next harvest cycle.
- **Race conditions on promotion:** Two subagents may discover the same fact; both write it, causing duplication or conflicts.
- **Lost correlation:** Without a shared event log, you cannot reconstruct what happened across the swarm.
- **Stale shared state:** If a fact is promoted to global memory while subagents are still running, they may act on outdated assumptions.

**Core question:** What memory semantics should exist across concurrent subagents so they can cooperate without corrupting each other or losing important results?

---
## 2. Session Memory vs Swarm Memory

### Session Memory (Current)

| Property | Description |
|---|---|
| **Scope** | Single agent process lifetime |
| **Storage** | In-memory context window + transient tool state |
| **Visibility** | Private to that session |
| **Lifetime** | Ephemeral — disappears on exit |
| **Promotion** | Post-session harvester extracts durable facts |
| **Example** | "I read the config file and saw port 8080" |

### Swarm Memory (What's Missing)

| Property | Desired |
|---|---|
| **Scope** | All concurrent subagents in a task group |
| **Storage** | Shared, durable, versioned |
| **Visibility** | Readable by all siblings; write semantics TBD |
| **Lifetime** | Persists for duration of the coordinated task |
| **Promotion** | Real-time or near-real-time synchronization |
| **Example** | "Agent A found that the API returns 405 on main; all agents should know this now" |

**Key insight:** Session memory is **private and accumulated**; swarm memory is **shared and coordinated**. The harvester/bootstrapper loop is too slow for real-time coordination.

---
## 3. Candidate Designs

### Design A — Append-Only Event Log + Synthesis

**Overview:** All subagents write to a shared, append-only event log. A background synthesis process reads the log and extracts high-level facts into the knowledge store. Subagents also read the log to stay current.

**Data model:**

```
swarm-memory/
  event-log.jsonl       # Immutable, ordered, concurrent-safe append
  event-index/          # By agent, by type, by timestamp
  synthesized-facts/    # Periodic distillation into durable facts
  checkpoints/          # Snapshot every N events for fast replay
```

**Write path:**
1. Subagent observes something → `event_log.append({agent, type, content, timestamp, session_id})`
2. Other subagents can tail the log (like a changelog)

**Read path:**
1. Before each action, the subagent queries recent events (last N minutes or last M entries)
2. A background job periodically runs a synthesis LLM to convert raw events → distilled facts

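A minimal sketch of this append/tail API, assuming a single shared JSONL file on local disk; the `SwarmEventLog` name, the locking choice, and the method shapes are illustrative, not a settled interface:

```python
# Sketch only: swarm_event_log.py as imagined by this note, not a final design.
import json
import threading
import time
from pathlib import Path


class SwarmEventLog:
    """Append-only JSONL event log shared by concurrent subagents."""

    def __init__(self, path: str):
        self.path = Path(path)
        self.path.touch(exist_ok=True)
        self._lock = threading.Lock()  # guards same-process writers only

    def append(self, agent: str, event_type: str, content: str, session_id: str) -> dict:
        """Write one immutable event and return it as stored."""
        event = {
            "agent": agent,
            "type": event_type,
            "content": content,
            "timestamp": time.time(),
            "session_id": session_id,
        }
        with self._lock, self.path.open("a") as f:
            f.write(json.dumps(event) + "\n")
        return event

    def tail(self, last_n: int = 50) -> list:
        """Return the most recent N events: the 'changelog' view siblings poll."""
        lines = self.path.read_text().splitlines()
        return [json.loads(line) for line in lines[-last_n:]]
```

Cross-process writers would additionally need append-mode file semantics (O_APPEND) or an explicit file lock; treating whole-line appends as atomic is an assumption the prototype should verify.
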
**Pros:**
- **Lossless:** Nothing is ever overwritten; full audit trail
- **Concurrent-safe:** Append-only, no locking
- **Causality preserved:** Order of discoveries is visible
- **Replayable:** Any subagent can reconstruct state from checkpoint + tail

**Cons:**
- **Signal/noise:** Raw events are noisy; synthesis latency means swarm facts lag
- **Storage growth:** Event log grows unbounded without a pruning policy
- **Query performance:** Finding "all facts about X" requires synthesis or a full scan
- **Coordination latency:** Subagents only learn of discoveries after they're written and tailed

**Failure modes:**
- **Duplication:** Multiple agents write the same observation → synthesis dedups
- **Contradiction:** Two agents report conflicting facts → synthesis must reconcile
- **Stale state:** Agent reads the log at T0, then new events arrive before it acts

---
### Design B — Shared Board + Evidence Links

**Overview:** A shared, mutable board stores distilled facts. Each fact includes provenance links to the agent sessions that discovered it. Agents read-before-write and update via compare-and-swap.

**Data model:**

```
swarm-memory/
  board.yaml         # Current set of facts with version stamps
  evidence-links/    # Mapping: fact_id → [session_id, turn_range]
  fact-history/      # Append-only log of fact revisions (for audit)
```

**Write path (compare-and-swap):**
1. Agent reads current fact version
2. Agent proposes update with new evidence
3. System accepts if version unchanged since read; rejects with retry if conflict
4. On accept → append to fact-history, increment board version

**Read path:**
1. Agent reads board.yaml (small, distilled)
2. If deeper verification is needed, follow evidence-links to source sessions

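For contrast, a sketch of this compare-and-swap write path; `Board`, `propose_update`, and the backoff constants are hypothetical, and a real board would persist to `board.yaml` via atomic replace rather than live in a dict:

```python
# Illustrative CAS write path for the shared board; single-process demo only.
import random
import time


class ConflictError(Exception):
    pass


class Board:
    """In-memory stand-in for board.yaml: facts keyed by id, each with a version stamp."""

    def __init__(self):
        self.facts = {}  # fact_id -> {"version": int, "content": str, "evidence": list}

    def read(self, fact_id: str) -> dict:
        return dict(self.facts.get(fact_id, {"version": 0, "content": None, "evidence": []}))

    def compare_and_swap(self, fact_id: str, expected_version: int,
                         content: str, evidence: list) -> dict:
        """Accept the update only if nobody wrote since our read.
        A real implementation must do this check-and-write atomically (lock or rename)."""
        current = self.facts.get(fact_id, {"version": 0})
        if current["version"] != expected_version:
            raise ConflictError(f"{fact_id}: version moved {expected_version} -> {current['version']}")
        updated = {"version": expected_version + 1, "content": content, "evidence": evidence}
        self.facts[fact_id] = updated  # would also append the revision to fact-history
        return updated


def propose_update(board: Board, fact_id: str, content: str, evidence: list,
                   max_retries: int = 5) -> dict:
    """Read-before-write with exponential backoff + jitter on CAS conflicts."""
    for attempt in range(max_retries):
        snapshot = board.read(fact_id)
        try:
            return board.compare_and_swap(fact_id, snapshot["version"], content, evidence)
        except ConflictError:
            time.sleep((2 ** attempt) * 0.05 + random.random() * 0.05)
    raise ConflictError(f"gave up on {fact_id} after {max_retries} retries")
```

The retry loop is exactly where the thundering-herd and stall risks listed below come from.
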
**Pros:**
- **Low-latency reads:** Board is small and current
- **Explicit provenance:** Every fact knows which sessions contributed
- **Conflict detection:** CAS catches concurrent updates
- **Intentional updates:** Agents must justify changes with evidence

**Cons:**
- **Write contention:** Multiple agents writing the same fact cause retry storms
- **Central point:** board.yaml is a single source of truth (but versioned)
- **Merge complexity:** CAS retries need backoff logic and can stall under contention
- **Staleness window:** Between read and act, the board may change

**Failure modes:**
- **Thundering herd:** Many agents CAS-fail on the same hot fact → exponential backoff needed
- **Missing promotions:** A fact discovered but never written because the agent crashed pre-write
- **Board corruption:** If CAS is not atomic, two writes could interleave
- **Evidence loss:** If evidence-links point to deleted session transcripts, verification fails

---
## 4. Trade-off Matrix

| Dimension | Event Log | Shared Board |
|---|---|---|
| **Write concurrency** | Unbounded (append-only) | Contention on hot keys |
| **Read latency** | Must scan/synthesize | Direct read (constant-time) |
| **Storage efficiency** | Redundant raw events | Condensed facts |
| **Auditability** | Full reconstruction | Requires fact-history |
| **Coordination speed** | Lag between event → synthesis | Near-real-time (CAS cycle) |
| **Complexity** | Log management + synthesis worker | CAS protocol + retry logic |

**Verdict:** Start with the **Event Log** (simpler, safer, no coordination overhead), then layer the Board as a *view* over synthesized facts if read latency becomes a bottleneck.

---
## 5. Proposed Experimental Prototype

**Scope:** Minimal viable swarm-memory path for a controlled parallel task.

**Task:** Have 3 concurrent subagents process a set of GitHub issues. Each agent:
1. Reads issue details
2. Searches the codebase for relevant files
3. Drafts a fix
4. **Writes discovery events to the swarm event log**
5. Reads peer discoveries before the next step

**Metrics to collect:**
- Duplication rate: how many agents found the same root cause independently?
- Correlation lift: did reading peer discoveries change agent behavior?
- Latency: time from discovery to visibility across the swarm
- Synthesis quality: can an LLM summarize raw events into coherent facts?

**Implementation plan:**
1. `scripts/swarm_event_log.py` — thread-safe JSONL append + tail API
2. `scripts/swarm_synthesizer.py` — periodic batch that consumes the event log and emits distilled facts
3. Patch the `hermes-agent` burn worker to emit events at key milestones
4. Simple dashboard: `metrics/swarm_memory_dashboard.md`

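A sketch of what the synthesizer batch (item 2 above) might do, with rule-based grouping standing in for the synthesis LLM; the output fact schema is an assumption:

```python
# swarm_synthesizer.py sketch: dedupe raw events and emit distilled facts.
# The grouping below is a placeholder for the synthesis LLM call.
import json
from collections import defaultdict
from pathlib import Path


def synthesize(log_path: str, out_path: str) -> list:
    """Consume the event log, dedupe identical observations, emit distilled facts."""
    events = [json.loads(l) for l in Path(log_path).read_text().splitlines() if l.strip()]

    # Dedupe: multiple agents often report the same observation
    seen, unique = set(), []
    for e in events:
        key = (e["type"], e["content"])
        if key in seen:
            continue
        seen.add(key)
        unique.append(e)

    # Placeholder distillation: group by event type, track which agents contributed
    by_type = defaultdict(list)
    for e in unique:
        by_type[e["type"]].append(e)
    facts = [
        {"type": t,
         "claims": [e["content"] for e in es],
         "agents": sorted({e["agent"] for e in es})}
        for t, es in by_type.items()
    ]

    Path(out_path).write_text(json.dumps(facts, indent=2))
    return facts
```
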
**Success criteria:** Prototype runs end-to-end with 3 agents; event log captures discoveries; synthesizer produces at least one cross-agent insight.

---
## 6. Failure Modes to Watch

| Mode | Symptom | Mitigation |
|---|---|---|
| Duplication | Same fact appears from 3 agents | Synthesis dedup; evidence-link counts |
| Contradiction | Agent A says "port 8080", Agent B says "port 3000" | Evidence-weighted majority; timestamp priority |
| Stale shared state | Agent reads board, acts, board changed under it | Version vectors; read-modify-write CAS with retry |
| Missing promotion | Discovery lost on agent crash | Event log is durable before action; recovery from last checkpoint |
| Race on hot fact | Two agents try to write the same fact simultaneously | CAS backoff; random jitter |
| Log unbounded | Event log grows 10GB/day | Checkpoint + prune: keep summary + recent window |

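The last row's mitigation could look like this sketch, reusing Design A's `checkpoints/` layout; `summarize_events` is a stand-in for the synthesis step:

```python
# Sketch of the "checkpoint + prune" mitigation; names and layout are assumptions.
import json
from pathlib import Path


def summarize_events(events: list) -> dict:
    """Stand-in distillation: the prototype would call the synthesis LLM here."""
    return {"event_types": sorted({e["type"] for e in events}),
            "agents": sorted({e["agent"] for e in events})}


def checkpoint_and_prune(log_path: str, checkpoint_dir: str, keep_last: int = 1000):
    """Snapshot older events into a checkpoint, keep only a recent window in the log."""
    log = Path(log_path)
    events = [json.loads(l) for l in log.read_text().splitlines() if l.strip()]
    if len(events) <= keep_last:
        return  # under the window; nothing to prune

    old, recent = events[:-keep_last], events[-keep_last:]
    ckpt_dir = Path(checkpoint_dir)
    ckpt_dir.mkdir(parents=True, exist_ok=True)
    (ckpt_dir / f"checkpoint-{len(old)}.json").write_text(
        json.dumps({"events_summarized": len(old), "summary": summarize_events(old)})
    )

    # Rewrite the log via atomic replace so concurrent tailers never
    # observe a half-truncated file
    tmp = log.with_suffix(".tmp")
    tmp.write_text("".join(json.dumps(e) + "\n" for e in recent))
    tmp.replace(log)
```
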
---
## 7. Next Steps (Out of Scope for This Note)

- Build the event log implementation (Design A, phase 1)
- Wire hermes-agent to emit events
- Run the 3-agent parallel experiment
- Measure and compare Board vs Log read patterns
- Decide: ship to prod or iterate

---

## 8. References

- Parent: Timmy_Foundation/hermes-agent#984 — [ATLAS] Steal highest-leverage ecosystem patterns
- Related: compounding-intelligence#229 — Telemetry ingestion (Tokscale)
- Related: hermes-agent#985 — Lossless context + memory subsystem (LCM/GBrain)

scripts/session_pair_harvester.py

````diff
@@ -22,114 +22,95 @@ import sys
 from pathlib import Path
 from typing import Optional
 
+from session_reader import extract_conversation, read_session
 
 
 def compute_hash(text: str) -> str:
     """Content hash for deduplication."""
     return hashlib.sha256(text.encode()).hexdigest()[:16]
 
 
-def extract_pairs_from_session(session_data: dict, min_ratio: float = 1.5,
-                               min_response_words: int = 20) -> list:
-    """Extract terse→rich pairs from a single session object."""
+def extract_pairs_from_conversation(conversation: list, session_id: str, model: str,
+                                    min_ratio: float = 1.5,
+                                    min_response_words: int = 20) -> list:
+    """Extract terse→rich pairs from a normalized conversation."""
     pairs = []
-    conversations = session_data.get("conversations", [])
-    session_id = session_data.get("id", "unknown")
-    model = session_data.get("model", "unknown")
 
     seen_hashes = set()
 
-    for i, msg in enumerate(conversations):
-        # Look for assistant/gpt responses
-        if msg.get("from") not in ("gpt", "assistant"):
+    for i, msg in enumerate(conversation):
+        # Look for assistant responses
+        if msg.get('role') != 'assistant':
             continue
 
-        response_text = msg.get("value", "")
+        response_text = msg.get('content', '')
         if not response_text or len(response_text.split()) < min_response_words:
             continue
 
-        # Find the preceding human message
+        # Find the preceding user message
         prompt_text = ""
         for j in range(i - 1, -1, -1):
-            if conversations[j].get("from") == "human":
-                prompt_text = conversations[j].get("value", "")
+            if conversation[j].get('role') == 'user':
+                prompt_text = conversation[j].get('content', '')
                 break
 
         if not prompt_text:
             continue
 
         # Filter: skip tool results, system messages embedded as human
-        if prompt_text.startswith("{") and "output" in prompt_text[:100]:
-            continue  # likely a tool result
-        if prompt_text.startswith("# SOUL.md") or prompt_text.startswith("You are"):
-            continue  # system prompt leak
+        if prompt_text.startswith('{') and 'output' in prompt_text[:100]:
+            continue
+        if prompt_text.startswith('# SOUL.md') or prompt_text.startswith('You are'):
+            continue
 
         # Quality filters
         prompt_words = len(prompt_text.split())
         response_words = len(response_text.split())
 
         # Must have meaningful length ratio
         if prompt_words == 0 or response_words == 0:
             continue
         ratio = response_words / prompt_words
         if ratio < min_ratio:
             continue
 
         # Skip responses that are mostly code
-        code_blocks = response_text.count("```")
-        if code_blocks >= 4 and len(response_text.replace("```", "").strip()) < 50:
+        code_blocks = response_text.count('```')
+        if code_blocks >= 4 and len(response_text.replace('```', '').strip()) < 50:
             continue
 
         # Skip responses with tool call artifacts
-        if "tool_call" in response_text[:100] or "function_call" in response_text[:100]:
+        if 'tool_call' in response_text[:100] or 'function_call' in response_text[:100]:
             continue
 
         # Deduplicate by content hash
         content_hash = compute_hash(prompt_text + response_text[:200])
         if content_hash in seen_hashes:
             continue
         seen_hashes.add(content_hash)
 
         # Clean up response: remove markdown headers if too many
         clean_response = response_text
 
         pairs.append({
-            "terse": prompt_text.strip(),
-            "rich": clean_response.strip(),
-            "source": session_id,
-            "model": model,
-            "prompt_words": prompt_words,
-            "response_words": response_words,
-            "ratio": round(ratio, 2),
+            'terse': prompt_text.strip(),
+            'rich': clean_response.strip(),
+            'source': session_id,
+            'model': model,
+            'prompt_words': prompt_words,
+            'response_words': response_words,
+            'ratio': round(ratio, 2),
         })
 
     return pairs
 
 
-def extract_from_jsonl_file(filepath: str, **kwargs) -> list:
-    """Extract pairs from a session JSONL file."""
-    pairs = []
-    path = Path(filepath)
-
-    if not path.exists():
-        print(f"Warning: {filepath} not found", file=sys.stderr)
-        return pairs
-
-    content = path.read_text()
-    lines = content.strip().split("\n")
-
-    for line in lines:
-        line = line.strip()
-        if not line:
-            continue
-        try:
-            session = json.loads(line)
-        except json.JSONDecodeError:
-            continue
-
-        session_pairs = extract_pairs_from_session(session, **kwargs)
-        pairs.extend(session_pairs)
-
-    return pairs
+def extract_from_jsonl_file(path: str, **kwargs) -> list:
+    """Read a session file and extract training pairs using normalized conversation."""
+    session_messages = read_session(path)
+    if not session_messages:
+        return []
+    conversation = extract_conversation(session_messages)
+    # Derive session_id and model from first real message metadata
+    first_msg = next((m for m in session_messages if m.get('role') or m.get('from')), {})
+    session_id = first_msg.get('meta_session_id', Path(path).name)
+    model = first_msg.get('model', 'unknown')
+    return extract_pairs_from_conversation(conversation, session_id, model, **kwargs)
 
 
 def deduplicate_pairs(pairs: list) -> list:
````
tests/test_session_pair_harvester.py — new file (118 lines)

@@ -0,0 +1,118 @@
```python
"""
Tests for session_pair_harvester — training pair extraction from sessions.
"""

import json
import sys
import tempfile
import unittest
from pathlib import Path

sys.path.insert(0, str(Path(__file__).parent.parent / "scripts"))
from session_pair_harvester import (
    extract_pairs_from_conversation,
    extract_from_jsonl_file,
    deduplicate_pairs,
    compute_hash,
)


class TestSessionPairHarvester(unittest.TestCase):
    def test_compute_hash_consistent(self):
        h1 = compute_hash("hello world")
        h2 = compute_hash("hello world")
        self.assertEqual(h1, h2)
        self.assertEqual(len(h1), 16)

    def test_extract_simple_qa_pair(self):
        """A simple user→assistant exchange produces one pair."""
        conversation = [
            {"role": "user", "content": "What is the capital of France?"},
            {"role": "assistant", "content": "The capital of France is Paris. It is a major European city renowned for its art, fashion, gastronomy, cultural heritage, and historical significance. The city attracts millions of tourists annually."},
        ]
        pairs = extract_pairs_from_conversation(conversation, "test_session", "test-model")
        self.assertEqual(len(pairs), 1)
        self.assertEqual(pairs[0]["terse"], "What is the capital of France?")
        self.assertIn("Paris", pairs[0]["rich"])
        self.assertEqual(pairs[0]["source"], "test_session")

    def test_min_ratio_filter(self):
        """Very short responses are filtered out."""
        conversation = [
            {"role": "user", "content": "Yes"},
            {"role": "assistant", "content": "No."},
        ]
        # Default min_ratio = 1.5, min_words = 20 for response
        pairs = extract_pairs_from_conversation(conversation, "s", "m", min_response_words=3)
        self.assertEqual(len(pairs), 0)

    def test_min_words_filter(self):
        """Assistant responses below min word count are skipped."""
        conversation = [
            {"role": "user", "content": "Explain the project architecture in detail"},
            {"role": "assistant", "content": "OK."},
        ]
        pairs = extract_pairs_from_conversation(conversation, "s", "m", min_response_words=5)
        self.assertEqual(len(pairs), 0)

    def test_skip_non_assistant_messages(self):
        """System and tool messages are ignored."""
        conversation = [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Hello"},
            {"role": "assistant", "content": "Hi there! How can I help you today?"},
        ]
        pairs = extract_pairs_from_conversation(conversation, "s", "m", min_response_words=3)
        self.assertEqual(len(pairs), 1)
        self.assertEqual(pairs[0]["terse"], "Hello")

    def test_multiple_pairs_from_one_session(self):
        """A conversation with several Q&A turns yields multiple pairs."""
        conversation = [
            {"role": "user", "content": "First question?"},
            {"role": "assistant", "content": "Here is a detailed and comprehensive answer that thoroughly explores multiple aspects of the subject. It provides background context and practical implications for the reader."},
            {"role": "user", "content": "Second?"},
            {"role": "assistant", "content": "Another comprehensive response with detailed examples. This includes practical code blocks and thorough explanations to ensure deep understanding of the topic at hand."},
        ]
        pairs = extract_pairs_from_conversation(conversation, "s", "m", min_ratio=1.0)
        self.assertEqual(len(pairs), 2)

    def test_deduplication_removes_duplicates(self):
        """Identical pairs across sessions are deduplicated."""
        pairs = [
            {"terse": "q1", "rich": "a1", "source": "s1", "model": "m"},
            {"terse": "q1", "rich": "a1", "source": "s2", "model": "m"},
            {"terse": "q2", "rich": "a2", "source": "s1", "model": "m"},
        ]
        unique = deduplicate_pairs(pairs)
        self.assertEqual(len(unique), 2)
        sources = {p["source"] for p in unique}
        # First unique pair can be from either s1 or s2
        self.assertIn("s1", sources)

    def test_integration_with_test_sessions(self):
        """Harvester finds pairs in real test session files."""
        repo_root = Path(__file__).parent.parent
        test_sessions_dir = repo_root / "test_sessions"
        if not test_sessions_dir.exists():
            self.skipTest("test_sessions not found")

        pairs = []
        for jsonl_file in sorted(test_sessions_dir.glob("*.jsonl")):
            pairs.extend(extract_from_jsonl_file(str(jsonl_file)))

        self.assertGreater(len(pairs), 0, "Should extract at least one pair from test_sessions")
        for p in pairs:
            self.assertIn("terse", p)
            self.assertIn("rich", p)
            self.assertIn("source", p)
            self.assertIn("model", p)
            # Verify content exists
            self.assertGreater(len(p["terse"]), 0)
            self.assertGreater(len(p["rich"]), 0)


if __name__ == "__main__":
    unittest.main()
```