Compare commits (1 commit: 7c38007094)

@@ -1,66 +0,0 @@
# Morning Review Packet Status — #949

Generated: 2026-04-22T14:57:44.332419+00:00

Epic: [EPIC: Morning review packet — Hermes harness features landed 2026-04-21](https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/949)

## Summary

- Child QA issues tracked: 13
- Open child issues: 11
- Closed child issues: 2
- Open child issues already backed by PRs: 7
- Open child issues still unowned on forge: 4

## Child QA Matrix

| Issue | State | Open PRs | Title |
|------:|-------|----------|-------|
| #950 | open | — | [QA] Verify AI Gateway provider UX + attribution headers |
| #951 | open | — | [QA] Verify transport abstraction + AnthropicTransport wiring |
| #952 | open | — | [QA] Verify CLI voice beep toggle |
| #953 | open | [#1020](https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1020) | [QA] Verify bundled skill scripts run out of the box |
| #954 | open | [#1021](https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1021) | [QA] Verify maps skill guest_house / camp_site / bakery expansion |
| #955 | open | — | [QA] Verify KittenTTS local provider end-to-end |
| #956 | open | [#1018](https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1018) | [QA] Verify numbered keyboard shortcuts for approval + clarify prompts |
| #957 | open | [#1015](https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1015) | [QA] Verify optional adversarial-ux-test skill catalog flow |
| #958 | open | [#1016](https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1016) | [QA] Verify /usage account limits in CLI + gateway |
| #959 | open | [#1014](https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1014) | [QA] Verify OpenCode-Go curated catalog additions |
| #960 | open | [#1017](https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1017) | [QA] Verify patch 'did you mean?' suggestions |
| #961 | closed | — | [QA] Verify web dashboard update/restart action buttons |
| #962 | closed | — | [QA] Verify hardcoded-home path guard on burn/921 branch |

## Drift Signals

forge/main is still catching up to the upstream packet.

Active PR-backed child lanes:

- #953 -> #1020 ([QA] Verify bundled skill scripts run out of the box)
- #954 -> #1021 ([QA] Verify maps skill guest_house / camp_site / bakery expansion)
- #956 -> #1018 ([QA] Verify numbered keyboard shortcuts for approval + clarify prompts)
- #957 -> #1015 ([QA] Verify optional adversarial-ux-test skill catalog flow)
- #958 -> #1016 ([QA] Verify /usage account limits in CLI + gateway)
- #959 -> #1014 ([QA] Verify OpenCode-Go curated catalog additions)
- #960 -> #1017 ([QA] Verify patch 'did you mean?' suggestions)

## Unowned Open QA Issues

- #950 [QA] Verify AI Gateway provider UX + attribution headers
- #951 [QA] Verify transport abstraction + AnthropicTransport wiring
- #952 [QA] Verify CLI voice beep toggle
- #955 [QA] Verify KittenTTS local provider end-to-end

## Decomposition Follow-Ups

- #965 [open] [EPIC: Morning review packet — Hermes harness features landed 2026-04-21] Phase 1: Landscape Analysis & Scaffolding
- #966 [open] [EPIC: Morning review packet — Hermes harness features landed 2026-04-21] Phase 2: Core Logic Implementation
- #967 [closed] [EPIC: Morning review packet — Hermes harness features landed 2026-04-21] Phase 3: Poka-yoke Integration & Fleet Verification

## Conclusion

Refs #949 only. This epic remains open until every child QA issue has a truthful PASS/FAIL outcome, attached evidence, and any upstream/main versus forge/main drift is resolved or explicitly documented.

## Regeneration

```bash
python3 scripts/morning_review_packet_status.py --fetch-live --json-out docs/morning-review-packet-2026-04-21.snapshot.json --markdown-out docs/morning-review-packet-2026-04-21-status.md
```
@@ -1,172 +0,0 @@
{
  "generated_at": "2026-04-22T14:57:44.332419+00:00",
  "repo": "Timmy_Foundation/hermes-agent",
  "epic": {
    "number": 949,
    "title": "EPIC: Morning review packet \u2014 Hermes harness features landed 2026-04-21",
    "state": "open",
    "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/949"
  },
  "children": [
    {
      "number": 950,
      "title": "[QA] Verify AI Gateway provider UX + attribution headers",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/950",
      "open_prs": []
    },
    {
      "number": 951,
      "title": "[QA] Verify transport abstraction + AnthropicTransport wiring",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/951",
      "open_prs": []
    },
    {
      "number": 952,
      "title": "[QA] Verify CLI voice beep toggle",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/952",
      "open_prs": []
    },
    {
      "number": 953,
      "title": "[QA] Verify bundled skill scripts run out of the box",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/953",
      "open_prs": [
        {
          "number": 1020,
          "title": "fix: ship bundled skill scripts executable",
          "head": "fix/953",
          "url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1020"
        }
      ]
    },
    {
      "number": 954,
      "title": "[QA] Verify maps skill guest_house / camp_site / bakery expansion",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/954",
      "open_prs": [
        {
          "number": 1021,
          "title": "feat: sync maps skill and verify guest_house/camp_site/bakery (#954)",
          "head": "fix/954",
          "url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1021"
        }
      ]
    },
    {
      "number": 955,
      "title": "[QA] Verify KittenTTS local provider end-to-end",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/955",
      "open_prs": []
    },
    {
      "number": 956,
      "title": "[QA] Verify numbered keyboard shortcuts for approval + clarify prompts",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/956",
      "open_prs": [
        {
          "number": 1018,
          "title": "fix: add numbered approval and clarify shortcuts (#956)",
          "head": "fix/956",
          "url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1018"
        }
      ]
    },
    {
      "number": 957,
      "title": "[QA] Verify optional adversarial-ux-test skill catalog flow",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/957",
      "open_prs": [
        {
          "number": 1015,
          "title": "feat(skills): backport adversarial-ux-test optional skill",
          "head": "fix/957",
          "url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1015"
        }
      ]
    },
    {
      "number": 958,
      "title": "[QA] Verify /usage account limits in CLI + gateway",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/958",
      "open_prs": [
        {
          "number": 1016,
          "title": "fix: restore /usage account limits in CLI + gateway (#958)",
          "head": "fix/958",
          "url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1016"
        }
      ]
    },
    {
      "number": 959,
      "title": "[QA] Verify OpenCode-Go curated catalog additions",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/959",
      "open_prs": [
        {
          "number": 1014,
          "title": "fix(opencode-go): restore curated catalog additions",
          "head": "fix/959",
          "url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1014"
        }
      ]
    },
    {
      "number": 960,
      "title": "[QA] Verify patch 'did you mean?' suggestions",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/960",
      "open_prs": [
        {
          "number": 1017,
          "title": "fix(patch): port and verify did-you-mean suggestions (#960)",
          "head": "fix/960",
          "url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1017"
        }
      ]
    },
    {
      "number": 961,
      "title": "[QA] Verify web dashboard update/restart action buttons",
      "state": "closed",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/961",
      "open_prs": []
    },
    {
      "number": 962,
      "title": "[QA] Verify hardcoded-home path guard on burn/921 branch",
      "state": "closed",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/962",
      "open_prs": []
    }
  ],
  "decomposition_issues": [
    {
      "number": 965,
      "title": "[EPIC: Morning review packet \u2014 Hermes harness features landed 2026-04-21] Phase 1: Landscape Analysis & Scaffolding",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/965"
    },
    {
      "number": 966,
      "title": "[EPIC: Morning review packet \u2014 Hermes harness features landed 2026-04-21] Phase 2: Core Logic Implementation",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/966"
    },
    {
      "number": 967,
      "title": "[EPIC: Morning review packet \u2014 Hermes harness features landed 2026-04-21] Phase 3: Poka-yoke Integration & Fleet Verification",
      "state": "closed",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/967"
    }
  ]
}
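The Summary counts in the status report are derived from this snapshot's `children` array. A minimal sketch of that derivation, assuming the JSON shape shown above (the snapshot literal here is abbreviated to three children for illustration):

```python
# Abbreviated snapshot: same shape as the JSON above, trimmed for illustration.
snapshot = {
    "children": [
        {"number": 950, "state": "open", "open_prs": []},
        {"number": 953, "state": "open", "open_prs": [{"number": 1020}]},
        {"number": 961, "state": "closed", "open_prs": []},
    ]
}

def summarize(snapshot):
    # Mirror the five Summary bullets: tracked, open, closed, PR-backed, unowned.
    children = snapshot["children"]
    open_children = [c for c in children if c["state"] == "open"]
    return {
        "tracked": len(children),
        "open": len(open_children),
        "closed": len(children) - len(open_children),
        "pr_backed": sum(1 for c in open_children if c["open_prs"]),
        "unowned": sum(1 for c in open_children if not c["open_prs"]),
    }

print(summarize(snapshot))
```

Run against the full snapshot, this yields the 13/11/2/7/4 breakdown reported in the Summary section.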
@@ -26,6 +26,7 @@ from agent.memory_provider import MemoryProvider
from tools.registry import tool_error
from .store import MemoryStore
from .retrieval import FactRetriever
from .observations import ObservationSynthesizer

logger = logging.getLogger(__name__)

@@ -37,28 +38,29 @@ logger = logging.getLogger(__name__)
FACT_STORE_SCHEMA = {
    "name": "fact_store",
    "description": (
        "Deep structured memory with algebraic reasoning. "
        "Deep structured memory with algebraic reasoning and grounded observation synthesis. "
        "Use alongside the memory tool — memory for always-on context, "
        "fact_store for deep recall and compositional queries.\n\n"
        "fact_store for deep recall, compositional queries, and higher-order observations.\n\n"
        "ACTIONS (simple → powerful):\n"
        "• add — Store a fact the user would expect you to remember.\n"
        "• search — Keyword lookup ('editor config', 'deploy process').\n"
        "• probe — Entity recall: ALL facts about a person/thing.\n"
        "• related — What connects to an entity? Structural adjacency.\n"
        "• reason — Compositional: facts connected to MULTIPLE entities simultaneously.\n"
        "• observe — Synthesized higher-order observations backed by supporting facts.\n"
        "• contradict — Memory hygiene: find facts making conflicting claims.\n"
        "• update/remove/list — CRUD operations.\n\n"
        "IMPORTANT: Before answering questions about the user, ALWAYS probe or reason first."
        "IMPORTANT: Before answering questions about the user, ALWAYS probe/reason/observe first."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "action": {
                "type": "string",
                "enum": ["add", "search", "probe", "related", "reason", "contradict", "update", "remove", "list"],
                "enum": ["add", "search", "probe", "related", "reason", "observe", "contradict", "update", "remove", "list"],
            },
            "content": {"type": "string", "description": "Fact content (required for 'add')."},
            "query": {"type": "string", "description": "Search query (required for 'search')."},
            "query": {"type": "string", "description": "Search query (required for 'search'/'observe')."},
            "entity": {"type": "string", "description": "Entity name for 'probe'/'related'."},
            "entities": {"type": "array", "items": {"type": "string"}, "description": "Entity names for 'reason'."},
            "fact_id": {"type": "integer", "description": "Fact ID for 'update'/'remove'."},

@@ -66,6 +68,12 @@ FACT_STORE_SCHEMA = {
            "tags": {"type": "string", "description": "Comma-separated tags."},
            "trust_delta": {"type": "number", "description": "Trust adjustment for 'update'."},
            "min_trust": {"type": "number", "description": "Minimum trust filter (default: 0.3)."},
            "min_confidence": {"type": "number", "description": "Minimum observation confidence (default: 0.6)."},
            "observation_type": {
                "type": "string",
                "enum": ["recurring_preference", "stable_direction", "behavioral_pattern"],
                "description": "Optional observation type filter for 'observe'.",
            },
            "limit": {"type": "integer", "description": "Max results (default: 10)."},
        },
        "required": ["action"],

@@ -118,7 +126,9 @@ class HolographicMemoryProvider(MemoryProvider):
        self._config = config or _load_plugin_config()
        self._store = None
        self._retriever = None
        self._observation_synth = None
        self._min_trust = float(self._config.get("min_trust_threshold", 0.3))
        self._observation_min_confidence = float(self._config.get("observation_min_confidence", 0.6))

    @property
    def name(self) -> str:

@@ -177,6 +187,7 @@ class HolographicMemoryProvider(MemoryProvider):
            hrr_weight=hrr_weight,
            hrr_dim=hrr_dim,
        )
        self._observation_synth = ObservationSynthesizer(self._store)
        self._session_id = session_id

    def system_prompt_block(self) -> str:

@@ -193,30 +204,76 @@ class HolographicMemoryProvider(MemoryProvider):
                "# Holographic Memory\n"
                "Active. Empty fact store — proactively add facts the user would expect you to remember.\n"
                "Use fact_store(action='add') to store durable structured facts about people, projects, preferences, decisions.\n"
                "Use fact_store(action='observe') to synthesize higher-order observations with evidence.\n"
                "Use fact_feedback to rate facts after using them (trains trust scores)."
            )
        return (
            f"# Holographic Memory\n"
            f"Active. {total} facts stored with entity resolution and trust scoring.\n"
            f"Use fact_store to search, probe entities, reason across entities, or add facts.\n"
            f"Use fact_store to search, probe entities, reason across entities, or synthesize observations.\n"
            f"Use fact_feedback to rate facts after using them (trains trust scores)."
        )

    def prefetch(self, query: str, *, session_id: str = "") -> str:
        if not self._retriever or not query:
        if not query:
            return ""

        parts = []
        raw_results = []
        try:
            results = self._retriever.search(query, min_trust=self._min_trust, limit=5)
            if not results:
                return ""
            if self._retriever:
                raw_results = self._retriever.search(query, min_trust=self._min_trust, limit=5)
        except Exception as e:
            logger.debug("Holographic prefetch fact search failed: %s", e)
            raw_results = []

        observations = []
        try:
            if self._observation_synth:
                observations = self._observation_synth.observe(
                    query,
                    min_confidence=self._observation_min_confidence,
                    limit=3,
                    refresh=True,
                )
        except Exception as e:
            logger.debug("Holographic prefetch observation search failed: %s", e)
            observations = []

        if not raw_results and observations:
            seen_fact_ids = set()
            evidence_backfill = []
            for observation in observations:
                for evidence in observation.get("evidence", []):
                    fact_id = evidence.get("fact_id")
                    if fact_id in seen_fact_ids:
                        continue
                    seen_fact_ids.add(fact_id)
                    evidence_backfill.append(evidence)
            raw_results = evidence_backfill[:5]

        if raw_results:
            lines = []
            for r in results:
            for r in raw_results:
                trust = r.get("trust_score", r.get("trust", 0))
                lines.append(f"- [{trust:.1f}] {r.get('content', '')}")
            return "## Holographic Memory\n" + "\n".join(lines)
        except Exception as e:
            logger.debug("Holographic prefetch failed: %s", e)
            return ""
            parts.append("## Holographic Memory\n" + "\n".join(lines))

        if observations:
            lines = []
            for observation in observations:
                evidence_ids = ", ".join(
                    f"#{item['fact_id']}" for item in observation.get("evidence", [])[:3]
                ) or "none"
                lines.append(
                    f"- [{observation.get('confidence', 0.0):.2f}] "
                    f"{observation.get('observation_type', 'observation')}: "
                    f"{observation.get('summary', '')} "
                    f"(evidence: {evidence_ids})"
                )
            parts.append("## Holographic Observations\n" + "\n".join(lines))

        return "\n\n".join(parts)

    def sync_turn(self, user_content: str, assistant_content: str, *, session_id: str = "") -> None:
        # Holographic memory stores explicit facts via tools, not auto-sync.

@@ -252,6 +309,7 @@ class HolographicMemoryProvider(MemoryProvider):
    def shutdown(self) -> None:
        self._store = None
        self._retriever = None
        self._observation_synth = None

    # -- Tool handlers -------------------------------------------------------

@@ -305,6 +363,19 @@ class HolographicMemoryProvider(MemoryProvider):
            )
            return json.dumps({"results": results, "count": len(results)})

        elif action == "observe":
            synthesizer = self._observation_synth
            if not synthesizer:
                return tool_error("Observation synthesizer is not initialized")
            observations = synthesizer.observe(
                args.get("query", ""),
                observation_type=args.get("observation_type"),
                min_confidence=float(args.get("min_confidence", self._observation_min_confidence)),
                limit=int(args.get("limit", 10)),
                refresh=True,
            )
            return json.dumps({"observations": observations, "count": len(observations)})

        elif action == "contradict":
            results = retriever.contradict(
                category=args.get("category"),
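The evidence-backfill step added to `prefetch` above deduplicates supporting facts by `fact_id` while preserving observation order, so the highest-ranked observation contributes its facts first. A standalone sketch of that step (the `evidence` dict shape is assumed from the surrounding diff):

```python
def backfill_evidence(observations, limit=5):
    # Walk observations in rank order; keep each fact_id the first time it appears.
    seen_fact_ids = set()
    backfill = []
    for observation in observations:
        for evidence in observation.get("evidence", []):
            fact_id = evidence.get("fact_id")
            if fact_id in seen_fact_ids:
                continue
            seen_fact_ids.add(fact_id)
            backfill.append(evidence)
    return backfill[:limit]

observations = [
    {"evidence": [{"fact_id": 1}, {"fact_id": 2}]},
    {"evidence": [{"fact_id": 2}, {"fact_id": 3}]},  # fact 2 is a duplicate
]
print([e["fact_id"] for e in backfill_evidence(observations)])  # [1, 2, 3]
```

This is why the fallback results stay stable across refreshes: order is determined by observation ranking, not by set iteration.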
plugins/memory/holographic/observations.py (new file, 249 lines)
@@ -0,0 +1,249 @@
"""Higher-order observation synthesis for holographic memory.

Builds grounded observations from accumulated facts and keeps them in a
separate retrieval layer with explicit evidence links back to supporting facts.
"""

from __future__ import annotations

import re
from typing import Any

from .store import MemoryStore

_TOKEN_RE = re.compile(r"[a-z0-9_]+")
_HIGHER_ORDER_CUES = {
    "prefer",
    "preference",
    "preferences",
    "style",
    "pattern",
    "patterns",
    "behavior",
    "behaviour",
    "habit",
    "habits",
    "workflow",
    "direction",
    "trajectory",
    "strategy",
    "tend",
    "usually",
}

_OBSERVATION_PATTERNS = [
    {
        "observation_type": "recurring_preference",
        "subject": "communication_style",
        "categories": {"user_pref", "general"},
        "labels": {
            "concise": ["concise", "terse", "brief", "short", "no fluff"],
            "result_first": ["result-only", "result only", "outcome only", "quick", "quickly"],
            "silent_ops": ["silent", "no status", "no repetitive status", "no questions"],
        },
        "summary_prefix": "Recurring preference",
    },
    {
        "observation_type": "stable_direction",
        "subject": "project_direction",
        "categories": {"project", "general", "tool"},
        "labels": {
            "local_first": ["local-first", "local first", "local-only", "local only", "ollama", "own hardware"],
            "gitea_first": ["gitea-first", "gitea first", "forge", "pull request", "pr flow", "issue flow"],
            "ansible": ["ansible", "playbook", "role", "deploy via ansible"],
        },
        "summary_prefix": "Stable direction",
    },
    {
        "observation_type": "behavioral_pattern",
        "subject": "operator_workflow",
        "categories": {"general", "project", "tool", "user_pref"},
        "labels": {
            "commit_early": ["commit early", "commits early", "commit after", "wip commit"],
            "pr_first": ["open pr", "push a pr", "pull request", "pr immediately", "create pr"],
            "dedup_guard": ["no dupes", "no duplicates", "avoid duplicate", "existing pr"],
        },
        "summary_prefix": "Behavioral pattern",
    },
]

_TYPE_QUERY_HINTS = {
    "recurring_preference": {"prefer", "preference", "style", "communication", "likes", "wants"},
    "stable_direction": {"direction", "trajectory", "strategy", "project", "roadmap", "moving"},
    "behavioral_pattern": {"pattern", "behavior", "workflow", "habit", "operator", "agent", "usually"},
}


class ObservationSynthesizer:
    """Synthesizes grounded observations from facts and retrieves them by query."""

    def __init__(self, store: MemoryStore):
        self.store = store

    def synthesize(
        self,
        *,
        persist: bool = True,
        min_confidence: float = 0.6,
        limit: int = 10,
    ) -> list[dict[str, Any]]:
        facts = self.store.list_facts(min_trust=0.0, limit=1000)
        observations: list[dict[str, Any]] = []

        for pattern in _OBSERVATION_PATTERNS:
            candidate = self._build_candidate(pattern, facts, min_confidence=min_confidence)
            if not candidate:
                continue

            if persist:
                candidate["observation_id"] = self.store.upsert_observation(
                    candidate["observation_type"],
                    candidate["subject"],
                    candidate["summary"],
                    candidate["confidence"],
                    candidate["evidence_fact_ids"],
                    metadata=candidate["metadata"],
                )

            candidate["evidence"] = self._expand_evidence(candidate["evidence_fact_ids"])
            candidate["evidence_count"] = len(candidate["evidence"])
            candidate.pop("evidence_fact_ids", None)
            observations.append(candidate)

        observations.sort(
            key=lambda item: (item["confidence"], item.get("evidence_count", 0)),
            reverse=True,
        )
        return observations[:limit]

    def observe(
        self,
        query: str = "",
        *,
        observation_type: str | None = None,
        min_confidence: float = 0.6,
        limit: int = 10,
        refresh: bool = True,
    ) -> list[dict[str, Any]]:
        if refresh:
            self.synthesize(persist=True, min_confidence=min_confidence, limit=limit)

        observations = self.store.list_observations(
            observation_type=observation_type,
            min_confidence=min_confidence,
            limit=max(limit * 4, 20),
        )
        if not observations:
            return []

        if not query:
            return observations[:limit]

        query_tokens = self._tokenize(query)
        is_higher_order = bool(query_tokens & _HIGHER_ORDER_CUES)
        ranked: list[dict[str, Any]] = []

        for item in observations:
            searchable = " ".join(
                [
                    item.get("summary", ""),
                    item.get("subject", ""),
                    item.get("observation_type", ""),
                    " ".join(item.get("metadata", {}).get("labels", [])),
                ]
            )
            overlap = self._overlap_score(query_tokens, self._tokenize(searchable))
            type_bonus = self._type_bonus(query_tokens, item.get("observation_type", ""))
            if overlap <= 0 and type_bonus <= 0 and not is_higher_order:
                continue
            ranked_item = dict(item)
            ranked_item["score"] = round(item.get("confidence", 0.0) + overlap + type_bonus, 3)
            ranked.append(ranked_item)

        if not ranked and is_higher_order:
            ranked = [
                {**item, "score": round(float(item.get("confidence", 0.0)), 3)}
                for item in observations
            ]

        ranked.sort(
            key=lambda item: (item.get("score", 0.0), item.get("confidence", 0.0), item.get("evidence_count", 0)),
            reverse=True,
        )
        return ranked[:limit]

    def _build_candidate(
        self,
        pattern: dict[str, Any],
        facts: list[dict[str, Any]],
        *,
        min_confidence: float,
    ) -> dict[str, Any] | None:
        matched_fact_ids: set[int] = set()
        matched_labels: dict[str, set[int]] = {label: set() for label in pattern["labels"]}

        for fact in facts:
            if fact.get("category") not in pattern["categories"]:
                continue
            haystack = f"{fact.get('content', '')} {fact.get('tags', '')}".lower()
            local_match = False
            for label, keywords in pattern["labels"].items():
                if any(keyword in haystack for keyword in keywords):
                    matched_labels[label].add(int(fact["fact_id"]))
                    local_match = True
            if local_match:
                matched_fact_ids.add(int(fact["fact_id"]))

        if len(matched_fact_ids) < 2:
            return None

        active_labels = sorted(label for label, ids in matched_labels.items() if ids)
        confidence = min(0.95, 0.35 + 0.12 * len(matched_fact_ids) + 0.08 * len(active_labels))
        confidence = round(confidence, 3)
        if confidence < min_confidence:
            return None

        label_summary = ", ".join(label.replace("_", "-") for label in active_labels)
        subject_text = pattern["subject"].replace("_", " ")
        summary = (
            f"{pattern['summary_prefix']}: {subject_text} trends toward {label_summary} "
            f"based on {len(matched_fact_ids)} supporting facts."
        )
        return {
            "observation_type": pattern["observation_type"],
            "subject": pattern["subject"],
            "summary": summary,
            "confidence": confidence,
            "metadata": {
                "labels": active_labels,
                "evidence_count": len(matched_fact_ids),
            },
            "evidence_fact_ids": sorted(matched_fact_ids),
        }

    def _expand_evidence(self, fact_ids: list[int]) -> list[dict[str, Any]]:
        facts_by_id = {
            fact["fact_id"]: fact
            for fact in self.store.list_facts(min_trust=0.0, limit=1000)
        }
        return [facts_by_id[fact_id] for fact_id in fact_ids if fact_id in facts_by_id]

    @staticmethod
    def _tokenize(text: str) -> set[str]:
        return set(_TOKEN_RE.findall(text.lower()))

    @staticmethod
    def _overlap_score(query_tokens: set[str], text_tokens: set[str]) -> float:
        if not query_tokens or not text_tokens:
            return 0.0
        overlap = query_tokens & text_tokens
        if not overlap:
            return 0.0
        return round(len(overlap) / max(len(query_tokens), 1), 3)

    @staticmethod
    def _type_bonus(query_tokens: set[str], observation_type: str) -> float:
        hints = _TYPE_QUERY_HINTS.get(observation_type, set())
        if not hints:
            return 0.0
        return 0.25 if query_tokens & hints else 0.0
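`_build_candidate` scores each candidate as `min(0.95, 0.35 + 0.12 * matched_facts + 0.08 * active_labels)`, rounded to three places. A quick worked check of how many facts it takes to clear the default 0.6 threshold:

```python
def candidate_confidence(matched_facts, active_labels):
    # Mirrors the formula in _build_candidate above.
    return round(min(0.95, 0.35 + 0.12 * matched_facts + 0.08 * active_labels), 3)

# Two facts under one label: 0.35 + 0.24 + 0.08 = 0.67, above the 0.6 default.
print(candidate_confidence(2, 1))  # 0.67
# Six facts across three labels exceed the cap: 0.35 + 0.72 + 0.24 = 1.31 -> 0.95.
print(candidate_confidence(6, 3))  # 0.95
```

So the two-fact minimum enforced earlier in `_build_candidate` is exactly the point where a single-label candidate first survives the default `min_confidence`.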
@@ -3,6 +3,7 @@ SQLite-backed fact store with entity resolution and trust scoring.
|
||||
Single-user Hermes memory store plugin.
|
||||
"""
|
||||
|
||||
import json
|
||||
import re
|
||||
import sqlite3
|
||||
import threading
|
||||
@@ -73,6 +74,28 @@ CREATE TABLE IF NOT EXISTS memory_banks (
|
||||
fact_count INTEGER DEFAULT 0,
|
||||
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
|
||||
);
|
||||
|
||||
CREATE TABLE IF NOT EXISTS observations (
|
||||
observation_id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||
observation_type TEXT NOT NULL,
|
||||
subject TEXT NOT NULL,
|
||||
summary TEXT NOT NULL,
|
||||
confidence REAL DEFAULT 0.0,
|
||||
metadata_json TEXT DEFAULT '{}',
|
||||
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
|
||||
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
|
||||
UNIQUE(observation_type, subject)
|
||||
);
|
||||
|
||||
CREATE TABLE IF NOT EXISTS observation_evidence (
|
||||
observation_id INTEGER REFERENCES observations(observation_id) ON DELETE CASCADE,
|
||||
fact_id INTEGER REFERENCES facts(fact_id) ON DELETE CASCADE,
|
||||
evidence_weight REAL DEFAULT 1.0,
|
||||
PRIMARY KEY (observation_id, fact_id)
|
||||
);
|
||||
|
||||
CREATE INDEX IF NOT EXISTS idx_observations_type ON observations(observation_type);
|
||||
CREATE INDEX IF NOT EXISTS idx_observations_confidence ON observations(confidence DESC);
|
||||
"""
|
||||
|
||||
# Trust adjustment constants
|
||||
@@ -128,6 +151,7 @@ class MemoryStore:
|
||||
def _init_db(self) -> None:
|
||||
"""Create tables, indexes, and triggers if they do not exist. Enable WAL mode."""
|
||||
self._conn.execute("PRAGMA journal_mode=WAL")
|
||||
self._conn.execute("PRAGMA foreign_keys=ON")
|
||||
self._conn.executescript(_SCHEMA)
|
||||
# Migrate: add hrr_vector column if missing (safe for existing databases)
|
||||
columns = {row[1] for row in self._conn.execute("PRAGMA table_info(facts)").fetchall()}
|
||||
@@ -346,6 +370,115 @@ class MemoryStore:
        rows = self._conn.execute(sql, params).fetchall()
        return [self._row_to_dict(r) for r in rows]

    def upsert_observation(
        self,
        observation_type: str,
        subject: str,
        summary: str,
        confidence: float,
        evidence_fact_ids: list[int],
        metadata: dict | None = None,
    ) -> int:
        """Create or update a synthesized observation and its evidence links."""
        with self._lock:
            metadata_json = json.dumps(metadata or {}, sort_keys=True)
            self._conn.execute(
                """
                INSERT INTO observations (
                    observation_type, subject, summary, confidence, metadata_json
                )
                VALUES (?, ?, ?, ?, ?)
                ON CONFLICT(observation_type, subject) DO UPDATE SET
                    summary = excluded.summary,
                    confidence = excluded.confidence,
                    metadata_json = excluded.metadata_json,
                    updated_at = CURRENT_TIMESTAMP
                """,
                (observation_type, subject, summary, confidence, metadata_json),
            )
            row = self._conn.execute(
                """
                SELECT observation_id
                FROM observations
                WHERE observation_type = ? AND subject = ?
                """,
                (observation_type, subject),
            ).fetchone()
            observation_id = int(row["observation_id"])

            self._conn.execute(
                "DELETE FROM observation_evidence WHERE observation_id = ?",
                (observation_id,),
            )
            unique_fact_ids = sorted({int(fid) for fid in evidence_fact_ids})
            if unique_fact_ids:
                self._conn.executemany(
                    """
                    INSERT OR IGNORE INTO observation_evidence (observation_id, fact_id)
                    VALUES (?, ?)
                    """,
                    [(observation_id, fact_id) for fact_id in unique_fact_ids],
                )
            self._conn.commit()
            return observation_id

    def list_observations(
        self,
        observation_type: str | None = None,
        min_confidence: float = 0.0,
        limit: int = 50,
    ) -> list[dict]:
        """List synthesized observations with expanded supporting evidence."""
        with self._lock:
            params: list = [min_confidence]
            observation_clause = ""
            if observation_type is not None:
                observation_clause = "AND observation_type = ?"
                params.append(observation_type)
            params.append(limit)
            rows = self._conn.execute(
                f"""
                SELECT observation_id, observation_type, subject, summary, confidence,
                       metadata_json, created_at, updated_at,
                       (
                           SELECT COUNT(*)
                           FROM observation_evidence oe
                           WHERE oe.observation_id = observations.observation_id
                       ) AS evidence_count
                FROM observations
                WHERE confidence >= ?
                {observation_clause}
                ORDER BY confidence DESC, updated_at DESC
                LIMIT ?
                """,
                params,
            ).fetchall()

            results = []
            for row in rows:
                item = dict(row)
                try:
                    item["metadata"] = json.loads(item.pop("metadata_json") or "{}")
                except json.JSONDecodeError:
                    item["metadata"] = {}
                item["evidence"] = self._get_observation_evidence(int(item["observation_id"]))
                results.append(item)
            return results

    def _get_observation_evidence(self, observation_id: int) -> list[dict]:
        rows = self._conn.execute(
            """
            SELECT f.fact_id, f.content, f.category, f.tags, f.trust_score,
                   f.retrieval_count, f.helpful_count, f.created_at, f.updated_at
            FROM observation_evidence oe
            JOIN facts f ON f.fact_id = oe.fact_id
            WHERE oe.observation_id = ?
            ORDER BY f.trust_score DESC, f.updated_at DESC
            """,
            (observation_id,),
        ).fetchall()
        return [self._row_to_dict(row) for row in rows]

    def record_feedback(self, fact_id: int, helpful: bool) -> dict:
        """Record user feedback and adjust trust asymmetrically.
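A minimal sketch of the upsert-then-reselect pattern used by `upsert_observation` above, against a throwaway in-memory SQLite database. The `UNIQUE (observation_type, subject)` constraint is assumed here because the `ON CONFLICT` clause requires it; the `upsert` helper is illustrative, not the project's `MemoryStore`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
# Assumed schema: ON CONFLICT(observation_type, subject) needs this UNIQUE constraint.
conn.executescript("""
CREATE TABLE observations (
    observation_id INTEGER PRIMARY KEY,
    observation_type TEXT NOT NULL,
    subject TEXT NOT NULL,
    summary TEXT NOT NULL,
    UNIQUE (observation_type, subject)
);
""")


def upsert(observation_type: str, subject: str, summary: str) -> int:
    # Same shape as upsert_observation: insert, or update the conflicting row.
    conn.execute(
        """
        INSERT INTO observations (observation_type, subject, summary)
        VALUES (?, ?, ?)
        ON CONFLICT(observation_type, subject) DO UPDATE SET
            summary = excluded.summary
        """,
        (observation_type, subject, summary),
    )
    row = conn.execute(
        "SELECT observation_id FROM observations WHERE observation_type = ? AND subject = ?",
        (observation_type, subject),
    ).fetchone()
    return int(row["observation_id"])


first = upsert("recurring_preference", "communication", "prefers concise updates")
second = upsert("recurring_preference", "communication", "prefers result-only replies")
assert first == second  # second call updated the existing row instead of adding one
```

In the real method, the DELETE plus `INSERT OR IGNORE` of evidence links runs before the single `commit()`, so a re-observation replaces its evidence set without leaving stale links.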
@@ -1,288 +0,0 @@
#!/usr/bin/env python3
"""Generate a grounded status report for hermes-agent morning review packet epic #949."""

from __future__ import annotations

import argparse
import base64
import json
import os
import re
import ssl
import urllib.request
from datetime import datetime, timezone
from pathlib import Path
from typing import Any

BASE_API = "https://forge.alexanderwhitestone.com/api/v1"
REPO = "Timmy_Foundation/hermes-agent"
TOKEN_PATH = Path("~/.config/gitea/token").expanduser()
DEFAULT_JSON_OUT = Path("docs/morning-review-packet-2026-04-21.snapshot.json")
DEFAULT_MARKDOWN_OUT = Path("docs/morning-review-packet-2026-04-21-status.md")


def extract_issue_numbers(text: str) -> list[int]:
    seen: set[int] = set()
    numbers: list[int] = []
    for match in re.finditer(r"#(\d+)", text or ""):
        num = int(match.group(1))
        if num not in seen:
            seen.add(num)
            numbers.append(num)
    return numbers


def _auth_headers(token: str) -> list[dict[str, str]]:
    basic = base64.b64encode(f"{token}:".encode()).decode()
    return [
        {"Authorization": f"token {token}", "Accept": "application/json"},
        {"Authorization": f"Basic {basic}", "Accept": "application/json"},
    ]


def api_get(path: str, *, headers_options: list[dict[str, str]] | None = None) -> Any:
    token = TOKEN_PATH.read_text(encoding="utf-8").strip()
    headers_options = headers_options or _auth_headers(token)
    ctx = ssl.create_default_context()
    url = f"{BASE_API}{path}"
    last_error: Exception | None = None
    for headers in headers_options:
        try:
            req = urllib.request.Request(url, headers=headers)
            with urllib.request.urlopen(req, context=ctx, timeout=30) as resp:
                return json.loads(resp.read().decode())
        except Exception as exc:  # pragma: no cover - exercised via live CLI use
            last_error = exc
    raise RuntimeError(f"GET {url} failed: {last_error}")


def issue_pr_matches(pr: dict[str, Any], issue_num: int) -> bool:
    title = pr.get("title") or ""
    body = pr.get("body") or ""
    head = (pr.get("head") or {}).get("ref") or ""
    exact_ref = re.compile(rf"(?<!\d)#{issue_num}(?!\d)")
    body_ref = re.compile(rf"(?i)(closes|close|fixes|fix|resolves|resolve|refs|ref)\s+#?{issue_num}(?!\d)")
    branch_variants = {
        f"fix/{issue_num}",
        f"issue-{issue_num}",
        f"burn/{issue_num}",
        f"fix/issue-{issue_num}",
    }
    return bool(
        exact_ref.search(title)
        or exact_ref.search(body)
        or body_ref.search(body)
        or head in branch_variants
    )


def fetch_open_prs(*, headers_options: list[dict[str, str]]) -> list[dict[str, Any]]:
    prs: list[dict[str, Any]] = []
    page = 1
    while True:
        batch = api_get(
            f"/repos/{REPO}/pulls?state=open&limit=100&page={page}",
            headers_options=headers_options,
        )
        if not batch:
            break
        prs.extend(batch)
        if len(batch) < 100:
            break
        page += 1
    return prs


def fetch_live_snapshot(epic_issue_num: int = 949) -> dict[str, Any]:
    token = TOKEN_PATH.read_text(encoding="utf-8").strip()
    headers_options = _auth_headers(token)

    epic = api_get(f"/repos/{REPO}/issues/{epic_issue_num}", headers_options=headers_options)
    comments = api_get(f"/repos/{REPO}/issues/{epic_issue_num}/comments", headers_options=headers_options)
    child_numbers = [n for n in extract_issue_numbers(epic.get("body") or "") if n != epic_issue_num]
    decomposition_numbers = [
        n
        for comment in comments
        for n in extract_issue_numbers(comment.get("body") or "")
        if n not in child_numbers and n != epic_issue_num
    ]

    open_prs = fetch_open_prs(headers_options=headers_options)

    children = []
    for number in child_numbers:
        issue = api_get(f"/repos/{REPO}/issues/{number}", headers_options=headers_options)
        matching_prs = [
            {
                "number": pr["number"],
                "title": pr["title"],
                "head": pr.get("head", {}).get("ref", ""),
                "url": pr["html_url"],
            }
            for pr in open_prs
            if issue_pr_matches(pr, number)
        ]
        children.append(
            {
                "number": issue["number"],
                "title": issue["title"],
                "state": issue["state"],
                "html_url": issue["html_url"],
                "open_prs": matching_prs,
            }
        )

    decomposition_issues = []
    for number in decomposition_numbers:
        issue = api_get(f"/repos/{REPO}/issues/{number}", headers_options=headers_options)
        decomposition_issues.append(
            {
                "number": issue["number"],
                "title": issue["title"],
                "state": issue["state"],
                "html_url": issue["html_url"],
            }
        )

    return {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "repo": REPO,
        "epic": {
            "number": epic["number"],
            "title": epic["title"],
            "state": epic["state"],
            "html_url": epic["html_url"],
        },
        "children": children,
        "decomposition_issues": decomposition_issues,
    }


def summarize_snapshot(snapshot: dict[str, Any]) -> dict[str, int]:
    children = snapshot.get("children", [])
    open_children = [issue for issue in children if issue.get("state") == "open"]
    closed_children = [issue for issue in children if issue.get("state") == "closed"]
    open_with_pr = [issue for issue in open_children if issue.get("open_prs")]
    open_without_pr = [issue for issue in open_children if not issue.get("open_prs")]
    return {
        "total_children": len(children),
        "open_children": len(open_children),
        "closed_children": len(closed_children),
        "open_with_pr": len(open_with_pr),
        "open_without_pr": len(open_without_pr),
    }


def render_markdown(snapshot: dict[str, Any]) -> str:
    epic = snapshot["epic"]
    children = snapshot.get("children", [])
    summary = summarize_snapshot(snapshot)
    open_with_pr = [issue for issue in children if issue.get("state") == "open" and issue.get("open_prs")]
    open_without_pr = [issue for issue in children if issue.get("state") == "open" and not issue.get("open_prs")]
    decomposition = snapshot.get("decomposition_issues", [])

    lines = [
        f"# Morning Review Packet Status — #{epic['number']}",
        "",
        f"Generated: {snapshot.get('generated_at', '')}",
        f"Epic: [{epic['title']}]({epic.get('html_url', '')})",
        "",
        "## Summary",
        "",
        f"- Child QA issues tracked: {summary['total_children']}",
        f"- Open child issues: {summary['open_children']}",
        f"- Closed child issues: {summary['closed_children']}",
        f"- Open child issues already backed by PRs: {summary['open_with_pr']}",
        f"- Open child issues still unowned on forge: {summary['open_without_pr']}",
        "",
        "## Child QA Matrix",
        "",
        "| Issue | State | Open PRs | Title |",
        "|------:|-------|----------|-------|",
    ]

    for issue in children:
        rendered_prs = []
        for pr in issue.get("open_prs", []):
            pr_num = pr.get("number", "?")
            pr_url = pr.get("url") or pr.get("html_url") or ""
            rendered_prs.append(f"[#{pr_num}]({pr_url})" if pr_url else f"#{pr_num}")
        pr_text = ", ".join(rendered_prs) or "—"
        lines.append(
            f"| #{issue['number']} | {issue['state']} | {pr_text} | {issue['title']} |"
        )

    lines.extend([
        "",
        "## Drift Signals",
        "",
        "forge/main is still catching up to the upstream packet.",
    ])

    if open_with_pr:
        lines.append("")
        lines.append("Active PR-backed child lanes:")
        for issue in open_with_pr:
            pr_numbers = ", ".join(f"#{pr['number']}" for pr in issue.get("open_prs", []))
            lines.append(f"- #{issue['number']} -> {pr_numbers} ({issue['title']})")

    if open_without_pr:
        lines.extend([
            "",
            "## Unowned Open QA Issues",
            "",
        ])
        for issue in open_without_pr:
            lines.append(f"- #{issue['number']} {issue['title']}")

    if decomposition:
        lines.extend([
            "",
            "## Decomposition Follow-Ups",
            "",
        ])
        for issue in decomposition:
            lines.append(f"- #{issue['number']} [{issue['state']}] {issue['title']}")

    lines.extend([
        "",
        "## Conclusion",
        "",
        "Refs #949 only. This epic remains open until every child QA issue has a truthful PASS/FAIL outcome, attached evidence, and any upstream/main versus forge/main drift is resolved or explicitly documented.",
        "",
        "## Regeneration",
        "",
        "```bash",
        "python3 scripts/morning_review_packet_status.py --fetch-live --json-out docs/morning-review-packet-2026-04-21.snapshot.json --markdown-out docs/morning-review-packet-2026-04-21-status.md",
        "```",
    ])

    return "\n".join(lines) + "\n"


def write_json(path: Path, data: dict[str, Any]) -> None:
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(data, indent=2) + "\n", encoding="utf-8")


def main() -> None:
    parser = argparse.ArgumentParser(description="Generate grounded status docs for epic #949")
    parser.add_argument("--fetch-live", action="store_true", help="Fetch the current packet state from Forge")
    parser.add_argument("--snapshot", type=Path, help="Read a local JSON snapshot instead of hitting the API")
    parser.add_argument("--json-out", type=Path, default=DEFAULT_JSON_OUT, help="Path to write JSON snapshot")
    parser.add_argument("--markdown-out", type=Path, default=DEFAULT_MARKDOWN_OUT, help="Path to write markdown report")
    args = parser.parse_args()

    if args.fetch_live or not args.snapshot:
        snapshot = fetch_live_snapshot()
    else:
        snapshot = json.loads(args.snapshot.read_text(encoding="utf-8"))

    write_json(args.json_out, snapshot)
    args.markdown_out.parent.mkdir(parents=True, exist_ok=True)
    args.markdown_out.write_text(render_markdown(snapshot), encoding="utf-8")
    print(args.markdown_out)


if __name__ == "__main__":
    main()
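The lookaround guards in `issue_pr_matches` are what keep a reference like `#954` from matching longer numbers such as `#9540`. A quick standalone check of the same two patterns, using issue 954 as an example:

```python
import re

issue_num = 954  # example issue number

# Same patterns as issue_pr_matches: lookarounds stop partial-number matches.
exact_ref = re.compile(rf"(?<!\d)#{issue_num}(?!\d)")
body_ref = re.compile(rf"(?i)(closes|close|fixes|fix|resolves|resolve|refs|ref)\s+#?{issue_num}(?!\d)")

assert exact_ref.search("feat: sync maps skill (#954)")  # plain reference matches
assert not exact_ref.search("see #9540 for details")     # (?!\d) blocks a longer number
assert not exact_ref.search("v2#954 tag")                # (?<!\d) blocks a leading digit
assert body_ref.search("Fixes #954")                     # closing keyword, hash optional
assert body_ref.search("refs 954")
```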
96  tests/plugins/memory/test_holographic_observations.py  Normal file
@@ -0,0 +1,96 @@
import json

import pytest

from plugins.memory.holographic import HolographicMemoryProvider
from plugins.memory.holographic.store import MemoryStore


@pytest.fixture()
def store(tmp_path):
    db_path = tmp_path / "memory.db"
    s = MemoryStore(db_path=str(db_path), default_trust=0.5)
    yield s
    s.close()


@pytest.fixture()
def provider(tmp_path):
    p = HolographicMemoryProvider(
        config={
            "db_path": str(tmp_path / "memory.db"),
            "default_trust": 0.5,
        }
    )
    p.initialize(session_id="test-session")
    yield p
    if p._store:
        p._store.close()


class TestObservationSynthesis:
    def test_observe_action_persists_observation_with_evidence_links(self, provider):
        fact_ids = [
            provider._store.add_fact('User prefers concise status updates', category='user_pref'),
            provider._store.add_fact('User wants result-only replies with no fluff', category='user_pref'),
        ]

        result = json.loads(
            provider.handle_tool_call(
                'fact_store',
                {
                    'action': 'observe',
                    'query': 'What communication style does the user prefer?',
                    'limit': 5,
                },
            )
        )

        assert result['count'] == 1
        observation = result['observations'][0]
        assert observation['observation_type'] == 'recurring_preference'
        assert observation['confidence'] >= 0.6
        assert sorted(item['fact_id'] for item in observation['evidence']) == sorted(fact_ids)

        stored = provider._store.list_observations(limit=10)
        assert len(stored) == 1
        assert stored[0]['observation_type'] == 'recurring_preference'
        assert stored[0]['evidence_count'] == 2
        assert len(provider._store.list_facts(limit=10)) == 2

    def test_observe_action_synthesizes_three_observation_types(self, provider):
        provider._store.add_fact('User prefers concise updates', category='user_pref')
        provider._store.add_fact('User wants result-only communication', category='user_pref')
        provider._store.add_fact('Project is moving to a local-first deployment model', category='project')
        provider._store.add_fact('Project direction stays Gitea-first for issue and PR flow', category='project')
        provider._store.add_fact('Operator always commits early before moving on', category='general')
        provider._store.add_fact('Operator pushes a PR immediately after each meaningful fix', category='general')

        result = json.loads(provider.handle_tool_call('fact_store', {'action': 'observe', 'limit': 10}))
        types = {item['observation_type'] for item in result['observations']}

        assert {'recurring_preference', 'stable_direction', 'behavioral_pattern'} <= types

    def test_single_fact_does_not_create_overconfident_observation(self, provider):
        provider._store.add_fact('User prefers concise updates', category='user_pref')

        result = json.loads(
            provider.handle_tool_call(
                'fact_store',
                {'action': 'observe', 'query': 'What does the user prefer?', 'limit': 5},
            )
        )

        assert result['count'] == 0
        assert provider._store.list_observations(limit=10) == []

    def test_prefetch_surfaces_observations_as_separate_layer(self, provider):
        provider._store.add_fact('User prefers concise updates', category='user_pref')
        provider._store.add_fact('User wants result-only communication', category='user_pref')

        prefetch = provider.prefetch('What communication style does the user prefer?')

        assert '## Holographic Observations' in prefetch
        assert '## Holographic Memory' in prefetch
        assert 'recurring_preference' in prefetch
        assert 'evidence' in prefetch.lower()
@@ -1,94 +0,0 @@
"""Tests for the morning review packet status report generator."""

from __future__ import annotations

import importlib.util
from pathlib import Path

SCRIPT_PATH = Path(__file__).resolve().parents[1] / "scripts" / "morning_review_packet_status.py"
DOC_PATH = Path(__file__).resolve().parents[1] / "docs" / "morning-review-packet-2026-04-21-status.md"


def load_module():
    assert SCRIPT_PATH.exists(), f"missing status script: {SCRIPT_PATH}"
    spec = importlib.util.spec_from_file_location("morning_review_packet_status_test", SCRIPT_PATH)
    module = importlib.util.module_from_spec(spec)
    assert spec.loader is not None
    spec.loader.exec_module(module)
    return module


def sample_snapshot():
    return {
        "epic": {"number": 949, "title": "Morning review packet", "state": "open"},
        "children": [
            {
                "number": 950,
                "title": "Verify AI Gateway provider UX + attribution headers",
                "state": "open",
                "open_prs": [],
            },
            {
                "number": 954,
                "title": "Verify maps skill guest_house / camp_site / bakery expansion",
                "state": "open",
                "open_prs": [
                    {"number": 1021, "head": "fix/954", "title": "feat: sync maps skill and verify guest_house/camp_site/bakery (#954)"}
                ],
            },
            {
                "number": 961,
                "title": "Verify web dashboard update/restart action buttons",
                "state": "closed",
                "open_prs": [],
            },
        ],
        "decomposition_issues": [
            {"number": 965, "title": "Phase 1: Landscape Analysis & Scaffolding", "state": "open"},
            {"number": 967, "title": "Phase 3: Poka-yoke Integration & Fleet Verification", "state": "closed"},
        ],
    }


def test_extract_child_issue_numbers_from_epic_body():
    module = load_module()
    body = """
    - [ ] #950 one
    - [ ] #951 two
    - [ ] #962 three
    """
    assert module.extract_issue_numbers(body) == [950, 951, 962]


def test_summarize_snapshot_counts_open_closed_and_pr_backing():
    module = load_module()
    summary = module.summarize_snapshot(sample_snapshot())

    assert summary["total_children"] == 3
    assert summary["open_children"] == 2
    assert summary["closed_children"] == 1
    assert summary["open_with_pr"] == 1
    assert summary["open_without_pr"] == 1


def test_render_markdown_includes_issue_matrix_and_drift_sections():
    module = load_module()
    md = module.render_markdown(sample_snapshot())

    assert "# Morning Review Packet Status — #949" in md
    assert "## Child QA Matrix" in md
    assert "#950" in md
    assert "#954" in md
    assert "#1021" in md
    assert "## Unowned Open QA Issues" in md
    assert "## Drift Signals" in md
    assert "forge/main is still catching up to the upstream packet" in md


def test_committed_status_doc_exists_and_mentions_live_examples():
    assert DOC_PATH.exists(), f"missing generated status doc: {DOC_PATH}"
    text = DOC_PATH.read_text(encoding="utf-8")
    assert "# Morning Review Packet Status — #949" in text
    assert "#954" in text
    assert "#1021" in text
    assert "#950" in text
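Beyond the rendering tests above, the script's `fetch_open_prs` pagination loop (stop on an empty or short page) can be sketched in isolation. The `fetch_all` helper and `fake_api` stub are illustrative names, not part of the repo:

```python
def fetch_all(fetch_page, page_size=100):
    # Same loop shape as fetch_open_prs: request pages until an empty or short batch.
    items, page = [], 1
    while True:
        batch = fetch_page(page)
        if not batch:
            break
        items.extend(batch)
        if len(batch) < page_size:
            break
        page += 1
    return items


# Stub server: 250 PRs served 100 at a time, so three requests, the last one short.
data = list(range(250))
pages = [data[i:i + 100] for i in range(0, len(data), 100)]


def fake_api(page: int) -> list[int]:
    return pages[page - 1] if page <= len(pages) else []


assert fetch_all(fake_api) == data
assert fetch_all(lambda page: []) == []  # empty first page terminates immediately
```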