Compare commits

2 Commits

| Author | SHA1 | Date |
|---|---|---|
| | a581d03a2b | |
| | 69b30152b4 | |
@@ -1,66 +0,0 @@
# Morning Review Packet Status — #949

Generated: 2026-04-22T14:57:44.332419+00:00
Epic: [EPIC: Morning review packet — Hermes harness features landed 2026-04-21](https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/949)

## Summary

- Child QA issues tracked: 13
- Open child issues: 11
- Closed child issues: 2
- Open child issues already backed by PRs: 7
- Open child issues still unowned on forge: 4

## Child QA Matrix

| Issue | State | Open PRs | Title |
|------:|-------|----------|-------|
| #950 | open | — | [QA] Verify AI Gateway provider UX + attribution headers |
| #951 | open | — | [QA] Verify transport abstraction + AnthropicTransport wiring |
| #952 | open | — | [QA] Verify CLI voice beep toggle |
| #953 | open | [#1020](https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1020) | [QA] Verify bundled skill scripts run out of the box |
| #954 | open | [#1021](https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1021) | [QA] Verify maps skill guest_house / camp_site / bakery expansion |
| #955 | open | — | [QA] Verify KittenTTS local provider end-to-end |
| #956 | open | [#1018](https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1018) | [QA] Verify numbered keyboard shortcuts for approval + clarify prompts |
| #957 | open | [#1015](https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1015) | [QA] Verify optional adversarial-ux-test skill catalog flow |
| #958 | open | [#1016](https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1016) | [QA] Verify /usage account limits in CLI + gateway |
| #959 | open | [#1014](https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1014) | [QA] Verify OpenCode-Go curated catalog additions |
| #960 | open | [#1017](https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1017) | [QA] Verify patch 'did you mean?' suggestions |
| #961 | closed | — | [QA] Verify web dashboard update/restart action buttons |
| #962 | closed | — | [QA] Verify hardcoded-home path guard on burn/921 branch |

## Drift Signals

forge/main is still catching up to the upstream packet.

Active PR-backed child lanes:
- #953 -> #1020 ([QA] Verify bundled skill scripts run out of the box)
- #954 -> #1021 ([QA] Verify maps skill guest_house / camp_site / bakery expansion)
- #956 -> #1018 ([QA] Verify numbered keyboard shortcuts for approval + clarify prompts)
- #957 -> #1015 ([QA] Verify optional adversarial-ux-test skill catalog flow)
- #958 -> #1016 ([QA] Verify /usage account limits in CLI + gateway)
- #959 -> #1014 ([QA] Verify OpenCode-Go curated catalog additions)
- #960 -> #1017 ([QA] Verify patch 'did you mean?' suggestions)

## Unowned Open QA Issues

- #950 [QA] Verify AI Gateway provider UX + attribution headers
- #951 [QA] Verify transport abstraction + AnthropicTransport wiring
- #952 [QA] Verify CLI voice beep toggle
- #955 [QA] Verify KittenTTS local provider end-to-end

## Decomposition Follow-Ups

- #965 [open] [EPIC: Morning review packet — Hermes harness features landed 2026-04-21] Phase 1: Landscape Analysis & Scaffolding
- #966 [open] [EPIC: Morning review packet — Hermes harness features landed 2026-04-21] Phase 2: Core Logic Implementation
- #967 [closed] [EPIC: Morning review packet — Hermes harness features landed 2026-04-21] Phase 3: Poka-yoke Integration & Fleet Verification

## Conclusion

Refs #949 only. This epic remains open until every child QA issue has a truthful PASS/FAIL outcome, attached evidence, and any upstream/main versus forge/main drift is resolved or explicitly documented.

## Regeneration

```bash
python3 scripts/morning_review_packet_status.py --fetch-live --json-out docs/morning-review-packet-2026-04-21.snapshot.json --markdown-out docs/morning-review-packet-2026-04-21-status.md
```
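For review convenience: the deleted script below can also re-render this report offline from the committed snapshot, with no Forge access. A minimal sketch, loading the script via `importlib` the same way the test suite in this compare does:

```python
# Offline re-render sketch: load scripts/morning_review_packet_status.py via importlib
# (it is a standalone script, not an importable package) and feed it the committed
# JSON snapshot instead of fetching live data.
import importlib.util
import json
from pathlib import Path

spec = importlib.util.spec_from_file_location(
    "morning_review_packet_status", Path("scripts/morning_review_packet_status.py")
)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)

snapshot = json.loads(
    Path("docs/morning-review-packet-2026-04-21.snapshot.json").read_text(encoding="utf-8")
)
print(module.render_markdown(snapshot))
```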
@@ -1,172 +0,0 @@
{
  "generated_at": "2026-04-22T14:57:44.332419+00:00",
  "repo": "Timmy_Foundation/hermes-agent",
  "epic": {
    "number": 949,
    "title": "EPIC: Morning review packet \u2014 Hermes harness features landed 2026-04-21",
    "state": "open",
    "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/949"
  },
  "children": [
    {
      "number": 950,
      "title": "[QA] Verify AI Gateway provider UX + attribution headers",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/950",
      "open_prs": []
    },
    {
      "number": 951,
      "title": "[QA] Verify transport abstraction + AnthropicTransport wiring",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/951",
      "open_prs": []
    },
    {
      "number": 952,
      "title": "[QA] Verify CLI voice beep toggle",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/952",
      "open_prs": []
    },
    {
      "number": 953,
      "title": "[QA] Verify bundled skill scripts run out of the box",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/953",
      "open_prs": [
        {
          "number": 1020,
          "title": "fix: ship bundled skill scripts executable",
          "head": "fix/953",
          "url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1020"
        }
      ]
    },
    {
      "number": 954,
      "title": "[QA] Verify maps skill guest_house / camp_site / bakery expansion",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/954",
      "open_prs": [
        {
          "number": 1021,
          "title": "feat: sync maps skill and verify guest_house/camp_site/bakery (#954)",
          "head": "fix/954",
          "url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1021"
        }
      ]
    },
    {
      "number": 955,
      "title": "[QA] Verify KittenTTS local provider end-to-end",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/955",
      "open_prs": []
    },
    {
      "number": 956,
      "title": "[QA] Verify numbered keyboard shortcuts for approval + clarify prompts",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/956",
      "open_prs": [
        {
          "number": 1018,
          "title": "fix: add numbered approval and clarify shortcuts (#956)",
          "head": "fix/956",
          "url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1018"
        }
      ]
    },
    {
      "number": 957,
      "title": "[QA] Verify optional adversarial-ux-test skill catalog flow",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/957",
      "open_prs": [
        {
          "number": 1015,
          "title": "feat(skills): backport adversarial-ux-test optional skill",
          "head": "fix/957",
          "url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1015"
        }
      ]
    },
    {
      "number": 958,
      "title": "[QA] Verify /usage account limits in CLI + gateway",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/958",
      "open_prs": [
        {
          "number": 1016,
          "title": "fix: restore /usage account limits in CLI + gateway (#958)",
          "head": "fix/958",
          "url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1016"
        }
      ]
    },
    {
      "number": 959,
      "title": "[QA] Verify OpenCode-Go curated catalog additions",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/959",
      "open_prs": [
        {
          "number": 1014,
          "title": "fix(opencode-go): restore curated catalog additions",
          "head": "fix/959",
          "url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1014"
        }
      ]
    },
    {
      "number": 960,
      "title": "[QA] Verify patch 'did you mean?' suggestions",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/960",
      "open_prs": [
        {
          "number": 1017,
          "title": "fix(patch): port and verify did-you-mean suggestions (#960)",
          "head": "fix/960",
          "url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1017"
        }
      ]
    },
    {
      "number": 961,
      "title": "[QA] Verify web dashboard update/restart action buttons",
      "state": "closed",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/961",
      "open_prs": []
    },
    {
      "number": 962,
      "title": "[QA] Verify hardcoded-home path guard on burn/921 branch",
      "state": "closed",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/962",
      "open_prs": []
    }
  ],
  "decomposition_issues": [
    {
      "number": 965,
      "title": "[EPIC: Morning review packet \u2014 Hermes harness features landed 2026-04-21] Phase 1: Landscape Analysis & Scaffolding",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/965"
    },
    {
      "number": 966,
      "title": "[EPIC: Morning review packet \u2014 Hermes harness features landed 2026-04-21] Phase 2: Core Logic Implementation",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/966"
    },
    {
      "number": 967,
      "title": "[EPIC: Morning review packet \u2014 Hermes harness features landed 2026-04-21] Phase 3: Poka-yoke Integration & Fleet Verification",
      "state": "closed",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/967"
    }
  ]
}
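As a sanity check, every Summary bullet in the markdown report is derivable from this snapshot alone. A small sketch using `summarize_snapshot` from the script later in this compare (with `module` and `snapshot` loaded as in the previous sketch):

```python
# The summary computed from the snapshot should match the Summary bullets above.
summary = module.summarize_snapshot(snapshot)
assert summary == {
    "total_children": 13,
    "open_children": 11,
    "closed_children": 2,
    "open_with_pr": 7,
    "open_without_pr": 4,
}
```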
@@ -57,7 +57,7 @@ CONFIGURABLE_TOOLSETS = [
     ("moa", "🧠 Mixture of Agents", "mixture_of_agents"),
     ("tts", "🔊 Text-to-Speech", "text_to_speech"),
     ("skills", "📚 Skills", "list, view, manage"),
-    ("todo", "📋 Task Planning", "todo"),
+    ("todo", "📋 Task Planning", "todo, ultraplan"),
     ("memory", "💾 Memory", "persistent memory across sessions"),
     ("session_search", "🔎 Session Search", "search past conversations"),
     ("clarify", "❓ Clarifying Questions", "clarify"),
@@ -1,288 +0,0 @@
#!/usr/bin/env python3
"""Generate a grounded status report for hermes-agent morning review packet epic #949."""

from __future__ import annotations

import argparse
import base64
import json
import os
import re
import ssl
import urllib.request
from datetime import datetime, timezone
from pathlib import Path
from typing import Any

BASE_API = "https://forge.alexanderwhitestone.com/api/v1"
REPO = "Timmy_Foundation/hermes-agent"
TOKEN_PATH = Path("~/.config/gitea/token").expanduser()
DEFAULT_JSON_OUT = Path("docs/morning-review-packet-2026-04-21.snapshot.json")
DEFAULT_MARKDOWN_OUT = Path("docs/morning-review-packet-2026-04-21-status.md")


def extract_issue_numbers(text: str) -> list[int]:
    seen: set[int] = set()
    numbers: list[int] = []
    for match in re.finditer(r"#(\d+)", text or ""):
        num = int(match.group(1))
        if num not in seen:
            seen.add(num)
            numbers.append(num)
    return numbers
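`extract_issue_numbers` deduplicates while preserving first-seen order, which is what makes the epic body a stable source of child issue numbers even when an issue is mentioned twice. A quick illustration, assuming the function above is in scope:

```python
# Duplicates are dropped; first-seen order is preserved; empty text is tolerated.
body = "- [ ] #950 gateway QA\n- [ ] #951 transport QA\nSee also #950."
assert extract_issue_numbers(body) == [950, 951]
assert extract_issue_numbers("") == []
```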
def _auth_headers(token: str) -> list[dict[str, str]]:
    basic = base64.b64encode(f"{token}:".encode()).decode()
    return [
        {"Authorization": f"token {token}", "Accept": "application/json"},
        {"Authorization": f"Basic {basic}", "Accept": "application/json"},
    ]


def api_get(path: str, *, headers_options: list[dict[str, str]] | None = None) -> Any:
    token = TOKEN_PATH.read_text(encoding="utf-8").strip()
    headers_options = headers_options or _auth_headers(token)
    ctx = ssl.create_default_context()
    url = f"{BASE_API}{path}"
    last_error: Exception | None = None
    for headers in headers_options:
        try:
            req = urllib.request.Request(url, headers=headers)
            with urllib.request.urlopen(req, context=ctx, timeout=30) as resp:
                return json.loads(resp.read().decode())
        except Exception as exc:  # pragma: no cover - exercised via live CLI use
            last_error = exc
    raise RuntimeError(f"GET {url} failed: {last_error}")
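`api_get` walks the header variants in order (Gitea's `token` scheme first, then HTTP Basic with the token as the username) and returns on the first success; only when every variant fails does it raise with the last error. A hedged usage sketch, assuming a valid token sits at ~/.config/gitea/token:

```python
# Fetch the epic issue; auth falls back from `token` to Basic automatically.
epic = api_get(f"/repos/{REPO}/issues/949")
print(epic["state"], epic["title"])
```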
def issue_pr_matches(pr: dict[str, Any], issue_num: int) -> bool:
    title = pr.get("title") or ""
    body = pr.get("body") or ""
    head = (pr.get("head") or {}).get("ref") or ""
    exact_ref = re.compile(rf"(?<!\d)#{issue_num}(?!\d)")
    body_ref = re.compile(rf"(?i)(closes|close|fixes|fix|resolves|resolve|refs|ref)\s+#?{issue_num}(?!\d)")
    branch_variants = {
        f"fix/{issue_num}",
        f"issue-{issue_num}",
        f"burn/{issue_num}",
        f"fix/issue-{issue_num}",
    }
    return bool(
        exact_ref.search(title)
        or exact_ref.search(body)
        or body_ref.search(body)
        or head in branch_variants
    )
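`issue_pr_matches` links a PR to an issue via an exact `#N` reference in the title or body, a closing-keyword reference, or one of four conventional branch names; the `(?<!\d)`/`(?!\d)` guards keep `#954` from matching `#9540`. A few illustrative cases, assuming the function above is in scope:

```python
# Branch-name convention alone is enough to link a PR.
assert issue_pr_matches({"title": "feat: maps skill", "body": "", "head": {"ref": "fix/954"}}, 954)
# An exact #N reference in the body also counts.
assert issue_pr_matches({"title": "maps skill", "body": "Closes #954", "head": {"ref": "feature/maps"}}, 954)
# Superstrings of the issue number are rejected by the digit guards.
assert not issue_pr_matches({"title": "touches #9540", "body": "", "head": {"ref": "misc"}}, 954)
```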
def fetch_open_prs(*, headers_options: list[dict[str, str]]) -> list[dict[str, Any]]:
    prs: list[dict[str, Any]] = []
    page = 1
    while True:
        batch = api_get(
            f"/repos/{REPO}/pulls?state=open&limit=100&page={page}",
            headers_options=headers_options,
        )
        if not batch:
            break
        prs.extend(batch)
        if len(batch) < 100:
            break
        page += 1
    return prs


def fetch_live_snapshot(epic_issue_num: int = 949) -> dict[str, Any]:
    token = TOKEN_PATH.read_text(encoding="utf-8").strip()
    headers_options = _auth_headers(token)

    epic = api_get(f"/repos/{REPO}/issues/{epic_issue_num}", headers_options=headers_options)
    comments = api_get(f"/repos/{REPO}/issues/{epic_issue_num}/comments", headers_options=headers_options)
    child_numbers = [n for n in extract_issue_numbers(epic.get("body") or "") if n != epic_issue_num]
    decomposition_numbers = [
        n
        for comment in comments
        for n in extract_issue_numbers(comment.get("body") or "")
        if n not in child_numbers and n != epic_issue_num
    ]

    open_prs = fetch_open_prs(headers_options=headers_options)

    children = []
    for number in child_numbers:
        issue = api_get(f"/repos/{REPO}/issues/{number}", headers_options=headers_options)
        matching_prs = [
            {
                "number": pr["number"],
                "title": pr["title"],
                "head": pr.get("head", {}).get("ref", ""),
                "url": pr["html_url"],
            }
            for pr in open_prs
            if issue_pr_matches(pr, number)
        ]
        children.append(
            {
                "number": issue["number"],
                "title": issue["title"],
                "state": issue["state"],
                "html_url": issue["html_url"],
                "open_prs": matching_prs,
            }
        )

    decomposition_issues = []
    for number in decomposition_numbers:
        issue = api_get(f"/repos/{REPO}/issues/{number}", headers_options=headers_options)
        decomposition_issues.append(
            {
                "number": issue["number"],
                "title": issue["title"],
                "state": issue["state"],
                "html_url": issue["html_url"],
            }
        )

    return {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "repo": REPO,
        "epic": {
            "number": epic["number"],
            "title": epic["title"],
            "state": epic["state"],
            "html_url": epic["html_url"],
        },
        "children": children,
        "decomposition_issues": decomposition_issues,
    }


def summarize_snapshot(snapshot: dict[str, Any]) -> dict[str, int]:
    children = snapshot.get("children", [])
    open_children = [issue for issue in children if issue.get("state") == "open"]
    closed_children = [issue for issue in children if issue.get("state") == "closed"]
    open_with_pr = [issue for issue in open_children if issue.get("open_prs")]
    open_without_pr = [issue for issue in open_children if not issue.get("open_prs")]
    return {
        "total_children": len(children),
        "open_children": len(open_children),
        "closed_children": len(closed_children),
        "open_with_pr": len(open_with_pr),
        "open_without_pr": len(open_without_pr),
    }


def render_markdown(snapshot: dict[str, Any]) -> str:
    epic = snapshot["epic"]
    children = snapshot.get("children", [])
    summary = summarize_snapshot(snapshot)
    open_with_pr = [issue for issue in children if issue.get("state") == "open" and issue.get("open_prs")]
    open_without_pr = [issue for issue in children if issue.get("state") == "open" and not issue.get("open_prs")]
    decomposition = snapshot.get("decomposition_issues", [])

    lines = [
        f"# Morning Review Packet Status — #{epic['number']}",
        "",
        f"Generated: {snapshot.get('generated_at', '')}",
        f"Epic: [{epic['title']}]({epic.get('html_url', '')})",
        "",
        "## Summary",
        "",
        f"- Child QA issues tracked: {summary['total_children']}",
        f"- Open child issues: {summary['open_children']}",
        f"- Closed child issues: {summary['closed_children']}",
        f"- Open child issues already backed by PRs: {summary['open_with_pr']}",
        f"- Open child issues still unowned on forge: {summary['open_without_pr']}",
        "",
        "## Child QA Matrix",
        "",
        "| Issue | State | Open PRs | Title |",
        "|------:|-------|----------|-------|",
    ]

    for issue in children:
        rendered_prs = []
        for pr in issue.get("open_prs", []):
            pr_num = pr.get("number", "?")
            pr_url = pr.get("url") or pr.get("html_url") or ""
            rendered_prs.append(f"[#{pr_num}]({pr_url})" if pr_url else f"#{pr_num}")
        pr_text = ", ".join(rendered_prs) or "—"
        lines.append(
            f"| #{issue['number']} | {issue['state']} | {pr_text} | {issue['title']} |"
        )

    lines.extend([
        "",
        "## Drift Signals",
        "",
        "forge/main is still catching up to the upstream packet.",
    ])

    if open_with_pr:
        lines.append("")
        lines.append("Active PR-backed child lanes:")
        for issue in open_with_pr:
            pr_numbers = ", ".join(f"#{pr['number']}" for pr in issue.get("open_prs", []))
            lines.append(f"- #{issue['number']} -> {pr_numbers} ({issue['title']})")

    if open_without_pr:
        lines.extend([
            "",
            "## Unowned Open QA Issues",
            "",
        ])
        for issue in open_without_pr:
            lines.append(f"- #{issue['number']} {issue['title']}")

    if decomposition:
        lines.extend([
            "",
            "## Decomposition Follow-Ups",
            "",
        ])
        for issue in decomposition:
            lines.append(f"- #{issue['number']} [{issue['state']}] {issue['title']}")

    lines.extend([
        "",
        "## Conclusion",
        "",
        "Refs #949 only. This epic remains open until every child QA issue has a truthful PASS/FAIL outcome, attached evidence, and any upstream/main versus forge/main drift is resolved or explicitly documented.",
        "",
        "## Regeneration",
        "",
        "```bash",
        "python3 scripts/morning_review_packet_status.py --fetch-live --json-out docs/morning-review-packet-2026-04-21.snapshot.json --markdown-out docs/morning-review-packet-2026-04-21-status.md",
        "```",
    ])

    return "\n".join(lines) + "\n"


def write_json(path: Path, data: dict[str, Any]) -> None:
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(data, indent=2) + "\n", encoding="utf-8")


def main() -> None:
    parser = argparse.ArgumentParser(description="Generate grounded status docs for epic #949")
    parser.add_argument("--fetch-live", action="store_true", help="Fetch the current packet state from Forge")
    parser.add_argument("--snapshot", type=Path, help="Read a local JSON snapshot instead of hitting the API")
    parser.add_argument("--json-out", type=Path, default=DEFAULT_JSON_OUT, help="Path to write JSON snapshot")
    parser.add_argument("--markdown-out", type=Path, default=DEFAULT_MARKDOWN_OUT, help="Path to write markdown report")
    args = parser.parse_args()

    if args.fetch_live or not args.snapshot:
        snapshot = fetch_live_snapshot()
    else:
        snapshot = json.loads(args.snapshot.read_text(encoding="utf-8"))

    write_json(args.json_out, snapshot)
    args.markdown_out.parent.mkdir(parents=True, exist_ok=True)
    args.markdown_out.write_text(render_markdown(snapshot), encoding="utf-8")
    print(args.markdown_out)


if __name__ == "__main__":
    main()
@@ -1,94 +0,0 @@
"""Tests for the morning review packet status report generator."""

from __future__ import annotations

import importlib.util
from pathlib import Path

SCRIPT_PATH = Path(__file__).resolve().parents[1] / "scripts" / "morning_review_packet_status.py"
DOC_PATH = Path(__file__).resolve().parents[1] / "docs" / "morning-review-packet-2026-04-21-status.md"


def load_module():
    assert SCRIPT_PATH.exists(), f"missing status script: {SCRIPT_PATH}"
    spec = importlib.util.spec_from_file_location("morning_review_packet_status_test", SCRIPT_PATH)
    module = importlib.util.module_from_spec(spec)
    assert spec.loader is not None
    spec.loader.exec_module(module)
    return module


def sample_snapshot():
    return {
        "epic": {"number": 949, "title": "Morning review packet", "state": "open"},
        "children": [
            {
                "number": 950,
                "title": "Verify AI Gateway provider UX + attribution headers",
                "state": "open",
                "open_prs": [],
            },
            {
                "number": 954,
                "title": "Verify maps skill guest_house / camp_site / bakery expansion",
                "state": "open",
                "open_prs": [
                    {"number": 1021, "head": "fix/954", "title": "feat: sync maps skill and verify guest_house/camp_site/bakery (#954)"}
                ],
            },
            {
                "number": 961,
                "title": "Verify web dashboard update/restart action buttons",
                "state": "closed",
                "open_prs": [],
            },
        ],
        "decomposition_issues": [
            {"number": 965, "title": "Phase 1: Landscape Analysis & Scaffolding", "state": "open"},
            {"number": 967, "title": "Phase 3: Poka-yoke Integration & Fleet Verification", "state": "closed"},
        ],
    }


def test_extract_child_issue_numbers_from_epic_body():
    module = load_module()
    body = """
    - [ ] #950 one
    - [ ] #951 two
    - [ ] #962 three
    """
    assert module.extract_issue_numbers(body) == [950, 951, 962]


def test_summarize_snapshot_counts_open_closed_and_pr_backing():
    module = load_module()
    summary = module.summarize_snapshot(sample_snapshot())

    assert summary["total_children"] == 3
    assert summary["open_children"] == 2
    assert summary["closed_children"] == 1
    assert summary["open_with_pr"] == 1
    assert summary["open_without_pr"] == 1


def test_render_markdown_includes_issue_matrix_and_drift_sections():
    module = load_module()
    md = module.render_markdown(sample_snapshot())

    assert "# Morning Review Packet Status — #949" in md
    assert "## Child QA Matrix" in md
    assert "#950" in md
    assert "#954" in md
    assert "#1021" in md
    assert "## Unowned Open QA Issues" in md
    assert "## Drift Signals" in md
    assert "forge/main is still catching up to the upstream packet" in md


def test_committed_status_doc_exists_and_mentions_live_examples():
    assert DOC_PATH.exists(), f"missing generated status doc: {DOC_PATH}"
    text = DOC_PATH.read_text(encoding="utf-8")
    assert "# Morning Review Packet Status — #949" in text
    assert "#954" in text
    assert "#1021" in text
    assert "#950" in text
@@ -294,22 +294,32 @@ class TestBuiltinDiscovery:
            "tools.browser_tool",
            "tools.clarify_tool",
            "tools.code_execution_tool",
            "tools.crisis_tool",
            "tools.cronjob_tools",
            "tools.delegate_tool",
            "tools.file_tools",
            "tools.homeassistant_tool",
            "tools.image_generation_tool",
            "tools.local_inference_tool",
            "tools.memory_tool",
            "tools.mixture_of_agents_tool",
            "tools.process_registry",
            "tools.rl_training_tool",
            "tools.scavenger_fixer",
            "tools.send_message_tool",
            "tools.session_search_tool",
            "tools.skill_manager_tool",
            "tools.skills_tool",
            "tools.sovereign_router",
            "tools.sovereign_scavenger",
            "tools.sovereign_teleport",
            "tools.static_analyzer",
            "tools.symbolic_verify",
            "tools.terminal_tool",
            "tools.todo_tool",
            "tools.tts_tool",
            "tools.ultraplan",
            "tools.verify_tool",
            "tools.vision_tools",
            "tools.web_tools",
        }
tests/tools/test_ultraplan_tool.py (new file, 81 lines)
@@ -0,0 +1,81 @@
import json
from pathlib import Path

from toolsets import resolve_toolset
from tools.registry import registry


def test_create_action_saves_markdown_and_json(tmp_path):
    from tools.ultraplan import ultraplan_tool

    result = json.loads(
        ultraplan_tool(
            action="create",
            mission="Daily autonomous planning",
            streams=[
                {
                    "id": "A",
                    "name": "Backlog burn",
                    "phases": [
                        {"id": "A1", "name": "Triage", "artifact": "issue list"},
                        {"id": "A2", "name": "Ship", "dependencies": ["A1"], "artifact": "PR"},
                    ],
                }
            ],
            base_dir=str(tmp_path),
        )
    )

    assert result["success"] is True
    assert Path(result["file_path"]).exists()
    assert Path(result["json_path"]).exists()
    assert "Work Streams" in Path(result["file_path"]).read_text(encoding="utf-8")


def test_load_action_returns_saved_plan(tmp_path):
    from tools.ultraplan import ultraplan_tool

    created = json.loads(
        ultraplan_tool(
            action="create",
            date="20260422",
            mission="Mission from saved plan",
            base_dir=str(tmp_path),
        )
    )
    loaded = json.loads(
        ultraplan_tool(
            action="load",
            date="20260422",
            base_dir=str(tmp_path),
        )
    )

    assert created["success"] is True
    assert loaded["success"] is True
    assert loaded["plan"]["mission"] == "Mission from saved plan"
    assert loaded["file_path"].endswith("ultraplan_20260422.md")


def test_cron_spec_returns_daily_schedule_and_prompt():
    from tools.ultraplan import ultraplan_tool

    result = json.loads(ultraplan_tool(action="cron_spec"))

    assert result["success"] is True
    assert result["schedule"] == "0 6 * * *"
    assert "Ultraplan" in result["prompt"]
    assert "ultraplan_YYYYMMDD.md" in result["prompt"]


def test_registry_registers_ultraplan_tool():
    import tools.ultraplan  # noqa: F401

    entry = registry.get_entry("ultraplan")
    assert entry is not None
    assert entry.toolset == "todo"


def test_default_toolsets_include_ultraplan():
    assert "ultraplan" in resolve_toolset("todo")
    assert "ultraplan" in resolve_toolset("hermes-cli")
@@ -290,6 +290,9 @@ def load_ultraplan(date: str, base_dir: Path = None) -> Optional[Ultraplan]:
     return None


+DEFAULT_ULTRAPLAN_SCHEDULE = "0 6 * * *"
+
+
 def generate_daily_cron_prompt() -> str:
     """Generate the prompt for the daily ultraplan cron job."""
     return """Generate today's Ultraplan.

@@ -298,9 +301,9 @@ Steps:
 1. Check open Gitea issues assigned to you
 2. Check open PRs needing review
 3. Check fleet health status
-4. Decompose work into parallel streams
-5. Generate ultraplan_YYYYMMDD.md
-6. File Gitea issue with the plan
+4. Decompose work into parallel streams with concrete phases and artifacts
+5. Use the ultraplan tool to save ~/.timmy/cron/ultraplan_YYYYMMDD.md and the matching JSON sidecar
+6. Optionally file a Gitea issue with the plan summary
 
 Output format:
 - Mission statement

@@ -308,3 +311,176 @@ Output format:
- Dependency map
- Success metrics
"""


def generate_daily_cron_job_spec(schedule: str = DEFAULT_ULTRAPLAN_SCHEDULE) -> Dict[str, str]:
    """Return a reusable cron job spec for daily Ultraplan generation."""
    return {
        "name": "Daily Ultraplan",
        "schedule": schedule,
        "prompt": generate_daily_cron_prompt(),
        "path_pattern": "~/.timmy/cron/ultraplan_YYYYMMDD.md",
    }


def _resolve_base_dir(base_dir: Optional[str | Path]) -> Path:
    """Normalize the requested Ultraplan base directory."""
    if base_dir is None:
        return Path.home() / ".timmy" / "cron"
    return Path(base_dir).expanduser()
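For reference, here is the shape of the spec this returns with the default schedule ("0 6 * * *" fires daily at 06:00), as also pinned by the new tests:

```python
# Shape of the default cron spec; schedule and prompt values are grounded
# in the test assertions in this compare.
spec = generate_daily_cron_job_spec()
assert spec["name"] == "Daily Ultraplan"
assert spec["schedule"] == "0 6 * * *"
assert spec["path_pattern"] == "~/.timmy/cron/ultraplan_YYYYMMDD.md"
assert spec["prompt"].startswith("Generate today's Ultraplan.")
```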
def ultraplan_tool(
    action: str,
    date: Optional[str] = None,
    mission: str = "",
    streams: Optional[List[Dict[str, Any]]] = None,
    metrics: Optional[Dict[str, Any]] = None,
    notes: str = "",
    base_dir: Optional[str] = None,
) -> str:
    """Create/load Ultraplan artifacts and expose a daily cron spec."""
    from tools.registry import tool_error, tool_result

    action = (action or "").strip().lower()
    resolved_base_dir = _resolve_base_dir(base_dir)

    try:
        if action == "create":
            plan = create_ultraplan(date=date, mission=mission, streams=streams or [])
            if metrics:
                plan.metrics = metrics
            if notes:
                plan.notes = notes
            md_path = save_ultraplan(plan, base_dir=resolved_base_dir)
            json_path = resolved_base_dir / f"ultraplan_{plan.date}.json"
            return tool_result(
                success=True,
                action="create",
                date=plan.date,
                file_path=str(md_path),
                json_path=str(json_path),
                plan=plan.to_dict(),
            )

        if action == "load":
            plan_date = date or datetime.now().strftime("%Y%m%d")
            plan = load_ultraplan(plan_date, base_dir=resolved_base_dir)
            if plan is None:
                return tool_error(
                    f"No Ultraplan found for {plan_date}",
                    success=False,
                    action="load",
                    date=plan_date,
                )
            return tool_result(
                success=True,
                action="load",
                date=plan.date,
                file_path=str(resolved_base_dir / f"ultraplan_{plan.date}.md"),
                json_path=str(resolved_base_dir / f"ultraplan_{plan.date}.json"),
                plan=plan.to_dict(),
                markdown=plan.to_markdown(),
            )

        if action == "cron_spec":
            spec = generate_daily_cron_job_spec()
            return tool_result(success=True, action="cron_spec", **spec)

        return tool_error(
            f"Unknown Ultraplan action: {action}",
            success=False,
            action=action,
        )
    except Exception as e:
        return tool_error(f"Ultraplan {action or 'tool'} failed: {e}", success=False, action=action)
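Beyond the create/load/cron_spec paths the new tests exercise, `metrics` and `notes` are attached to the plan before it is saved. A hedged sketch of that path (`create_ultraplan`/`save_ultraplan` come from earlier in this module and are not shown in this compare; the metric values here are illustrative):

```python
import json

result = json.loads(
    ultraplan_tool(
        action="create",
        date="20260422",
        mission="Burn the #949 QA backlog",
        metrics={"qa_issues_closed": 13},    # illustrative metric
        notes="Generated from the morning review packet.",
        base_dir="/tmp/ultraplan-demo",      # keep the demo out of ~/.timmy/cron
    )
)
assert result["success"] is True
assert result["plan"]["mission"] == "Burn the #949 QA backlog"
```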
ULTRAPLAN_SCHEMA = {
    "name": "ultraplan",
    "description": (
        "Create or load daily Ultraplan planning artifacts under ~/.timmy/cron/ and "
        "return a reusable cron spec for autonomous planning. Use this when you want "
        "a concrete markdown/json plan file with streams, phases, dependencies, and metrics."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "action": {
                "type": "string",
                "enum": ["create", "load", "cron_spec"],
                "description": "Operation to perform",
            },
            "date": {
                "type": "string",
                "description": "Plan date as YYYYMMDD. Defaults to today for create/load.",
            },
            "mission": {
                "type": "string",
                "description": "High-level mission statement for today's plan.",
            },
            "streams": {
                "type": "array",
                "description": "Optional work streams with phases/artifacts/dependencies for create.",
                "items": {
                    "type": "object",
                    "properties": {
                        "id": {"type": "string"},
                        "name": {"type": "string"},
                        "phases": {
                            "type": "array",
                            "items": {
                                "type": "object",
                                "properties": {
                                    "id": {"type": "string"},
                                    "name": {"type": "string"},
                                    "description": {"type": "string"},
                                    "artifact": {"type": "string"},
                                    "dependencies": {
                                        "type": "array",
                                        "items": {"type": "string"},
                                    },
                                },
                                "required": ["name"],
                            },
                        },
                    },
                    "required": ["name"],
                },
            },
            "metrics": {
                "type": "object",
                "description": "Optional success metrics to store on the plan.",
                "additionalProperties": True,
            },
            "notes": {
                "type": "string",
                "description": "Optional free-form notes appended to the saved plan.",
            },
            "base_dir": {
                "type": "string",
                "description": "Optional override for the Ultraplan storage directory.",
            },
        },
        "required": ["action"],
    },
}


from tools.registry import registry

registry.register(
    name="ultraplan",
    toolset="todo",
    schema=ULTRAPLAN_SCHEMA,
    handler=lambda args, **_kw: ultraplan_tool(
        action=args.get("action", ""),
        date=args.get("date"),
        mission=args.get("mission", ""),
        streams=args.get("streams"),
        metrics=args.get("metrics"),
        notes=args.get("notes", ""),
        base_dir=args.get("base_dir"),
    ),
    emoji="🗺️",
)
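The registered handler simply forwards a tool-args dict into `ultraplan_tool`. A hedged sketch of driving it through the registry; note the `handler` attribute on the entry is an assumption, since this compare only shows `get_entry` returning an object with a `toolset` attribute:

```python
import json

import tools.ultraplan  # noqa: F401  (import triggers registration)
from tools.registry import registry

entry = registry.get_entry("ultraplan")
result = json.loads(entry.handler({"action": "cron_spec"}))  # `handler` attribute is assumed
assert result["schedule"] == "0 6 * * *"
```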
@@ -47,7 +47,7 @@ _HERMES_CORE_TOOLS = [
     # Text-to-speech
     "text_to_speech",
     # Planning & memory
-    "todo", "memory",
+    "todo", "ultraplan", "memory",
     # Session history search
     "session_search",
     # Clarifying questions

@@ -157,8 +157,8 @@ TOOLSETS = {
     },
 
     "todo": {
-        "description": "Task planning and tracking for multi-step work",
-        "tools": ["todo"],
+        "description": "Task planning and tracking for multi-step work, including daily Ultraplan artifacts",
+        "tools": ["todo", "ultraplan"],
         "includes": []
     },
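Net effect of the toolset changes: anything resolving the `todo` toolset, including the hermes-cli composite, now picks up `ultraplan` automatically, which is exactly what the new tests assert:

```python
from toolsets import resolve_toolset

assert "ultraplan" in resolve_toolset("todo")
assert "ultraplan" in resolve_toolset("hermes-cli")
```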