Compare commits

3 commits:

| Author | SHA1 | Date |
|---|---|---|
| | b1890a0300 | |
| | 3d7b839e6b | |
| | e869bf20f9 | |
docs/morning-review-packet-2026-04-21-status.md (new file, 66 lines)

@@ -0,0 +1,66 @@
# Morning Review Packet Status — #949

Generated: 2026-04-22T14:57:44.332419+00:00
Epic: [EPIC: Morning review packet — Hermes harness features landed 2026-04-21](https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/949)

## Summary

- Child QA issues tracked: 13
- Open child issues: 11
- Closed child issues: 2
- Open child issues already backed by PRs: 7
- Open child issues still unowned on forge: 4

## Child QA Matrix

| Issue | State | Open PRs | Title |
|------:|-------|----------|-------|
| #950 | open | — | [QA] Verify AI Gateway provider UX + attribution headers |
| #951 | open | — | [QA] Verify transport abstraction + AnthropicTransport wiring |
| #952 | open | — | [QA] Verify CLI voice beep toggle |
| #953 | open | [#1020](https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1020) | [QA] Verify bundled skill scripts run out of the box |
| #954 | open | [#1021](https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1021) | [QA] Verify maps skill guest_house / camp_site / bakery expansion |
| #955 | open | — | [QA] Verify KittenTTS local provider end-to-end |
| #956 | open | [#1018](https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1018) | [QA] Verify numbered keyboard shortcuts for approval + clarify prompts |
| #957 | open | [#1015](https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1015) | [QA] Verify optional adversarial-ux-test skill catalog flow |
| #958 | open | [#1016](https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1016) | [QA] Verify /usage account limits in CLI + gateway |
| #959 | open | [#1014](https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1014) | [QA] Verify OpenCode-Go curated catalog additions |
| #960 | open | [#1017](https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1017) | [QA] Verify patch 'did you mean?' suggestions |
| #961 | closed | — | [QA] Verify web dashboard update/restart action buttons |
| #962 | closed | — | [QA] Verify hardcoded-home path guard on burn/921 branch |

## Drift Signals

forge/main is still catching up to the upstream packet.

Active PR-backed child lanes:
- #953 -> #1020 ([QA] Verify bundled skill scripts run out of the box)
- #954 -> #1021 ([QA] Verify maps skill guest_house / camp_site / bakery expansion)
- #956 -> #1018 ([QA] Verify numbered keyboard shortcuts for approval + clarify prompts)
- #957 -> #1015 ([QA] Verify optional adversarial-ux-test skill catalog flow)
- #958 -> #1016 ([QA] Verify /usage account limits in CLI + gateway)
- #959 -> #1014 ([QA] Verify OpenCode-Go curated catalog additions)
- #960 -> #1017 ([QA] Verify patch 'did you mean?' suggestions)

## Unowned Open QA Issues

- #950 [QA] Verify AI Gateway provider UX + attribution headers
- #951 [QA] Verify transport abstraction + AnthropicTransport wiring
- #952 [QA] Verify CLI voice beep toggle
- #955 [QA] Verify KittenTTS local provider end-to-end

## Decomposition Follow-Ups

- #965 [open] [EPIC: Morning review packet — Hermes harness features landed 2026-04-21] Phase 1: Landscape Analysis & Scaffolding
- #966 [open] [EPIC: Morning review packet — Hermes harness features landed 2026-04-21] Phase 2: Core Logic Implementation
- #967 [closed] [EPIC: Morning review packet — Hermes harness features landed 2026-04-21] Phase 3: Poka-yoke Integration & Fleet Verification

## Conclusion

Refs #949 only. This epic remains open until every child QA issue has a truthful PASS/FAIL outcome, attached evidence, and any upstream/main versus forge/main drift is resolved or explicitly documented.

## Regeneration

```bash
python3 scripts/morning_review_packet_status.py --fetch-live --json-out docs/morning-review-packet-2026-04-21.snapshot.json --markdown-out docs/morning-review-packet-2026-04-21-status.md
```
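For context, the Summary counts in the committed report are plain tallies over the snapshot JSON shipped alongside it. A minimal standalone sketch of that tally, using three illustrative child entries rather than the full 13-issue snapshot:

```python
# Recompute the report's Summary counts from a snapshot-shaped dict.
# The shape mirrors docs/morning-review-packet-2026-04-21.snapshot.json;
# these three children are invented stand-ins for illustration.
snapshot = {
    "children": [
        {"number": 950, "state": "open", "open_prs": []},
        {"number": 953, "state": "open", "open_prs": [{"number": 1020}]},
        {"number": 961, "state": "closed", "open_prs": []},
    ]
}

children = snapshot["children"]
open_children = [c for c in children if c["state"] == "open"]
summary = {
    "total_children": len(children),
    "open_children": len(open_children),
    "closed_children": len(children) - len(open_children),
    "open_with_pr": sum(1 for c in open_children if c["open_prs"]),
    "open_without_pr": sum(1 for c in open_children if not c["open_prs"]),
}
print(summary)
# {'total_children': 3, 'open_children': 2, 'closed_children': 1, 'open_with_pr': 1, 'open_without_pr': 1}
```

Note that an open issue counts as "backed" as soon as any open PR is associated with it; closed children are excluded from both PR buckets.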
docs/morning-review-packet-2026-04-21.snapshot.json (new file, 172 lines)

@@ -0,0 +1,172 @@
{
  "generated_at": "2026-04-22T14:57:44.332419+00:00",
  "repo": "Timmy_Foundation/hermes-agent",
  "epic": {
    "number": 949,
    "title": "EPIC: Morning review packet \u2014 Hermes harness features landed 2026-04-21",
    "state": "open",
    "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/949"
  },
  "children": [
    {
      "number": 950,
      "title": "[QA] Verify AI Gateway provider UX + attribution headers",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/950",
      "open_prs": []
    },
    {
      "number": 951,
      "title": "[QA] Verify transport abstraction + AnthropicTransport wiring",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/951",
      "open_prs": []
    },
    {
      "number": 952,
      "title": "[QA] Verify CLI voice beep toggle",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/952",
      "open_prs": []
    },
    {
      "number": 953,
      "title": "[QA] Verify bundled skill scripts run out of the box",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/953",
      "open_prs": [
        {
          "number": 1020,
          "title": "fix: ship bundled skill scripts executable",
          "head": "fix/953",
          "url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1020"
        }
      ]
    },
    {
      "number": 954,
      "title": "[QA] Verify maps skill guest_house / camp_site / bakery expansion",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/954",
      "open_prs": [
        {
          "number": 1021,
          "title": "feat: sync maps skill and verify guest_house/camp_site/bakery (#954)",
          "head": "fix/954",
          "url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1021"
        }
      ]
    },
    {
      "number": 955,
      "title": "[QA] Verify KittenTTS local provider end-to-end",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/955",
      "open_prs": []
    },
    {
      "number": 956,
      "title": "[QA] Verify numbered keyboard shortcuts for approval + clarify prompts",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/956",
      "open_prs": [
        {
          "number": 1018,
          "title": "fix: add numbered approval and clarify shortcuts (#956)",
          "head": "fix/956",
          "url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1018"
        }
      ]
    },
    {
      "number": 957,
      "title": "[QA] Verify optional adversarial-ux-test skill catalog flow",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/957",
      "open_prs": [
        {
          "number": 1015,
          "title": "feat(skills): backport adversarial-ux-test optional skill",
          "head": "fix/957",
          "url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1015"
        }
      ]
    },
    {
      "number": 958,
      "title": "[QA] Verify /usage account limits in CLI + gateway",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/958",
      "open_prs": [
        {
          "number": 1016,
          "title": "fix: restore /usage account limits in CLI + gateway (#958)",
          "head": "fix/958",
          "url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1016"
        }
      ]
    },
    {
      "number": 959,
      "title": "[QA] Verify OpenCode-Go curated catalog additions",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/959",
      "open_prs": [
        {
          "number": 1014,
          "title": "fix(opencode-go): restore curated catalog additions",
          "head": "fix/959",
          "url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1014"
        }
      ]
    },
    {
      "number": 960,
      "title": "[QA] Verify patch 'did you mean?' suggestions",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/960",
      "open_prs": [
        {
          "number": 1017,
          "title": "fix(patch): port and verify did-you-mean suggestions (#960)",
          "head": "fix/960",
          "url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/1017"
        }
      ]
    },
    {
      "number": 961,
      "title": "[QA] Verify web dashboard update/restart action buttons",
      "state": "closed",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/961",
      "open_prs": []
    },
    {
      "number": 962,
      "title": "[QA] Verify hardcoded-home path guard on burn/921 branch",
      "state": "closed",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/962",
      "open_prs": []
    }
  ],
  "decomposition_issues": [
    {
      "number": 965,
      "title": "[EPIC: Morning review packet \u2014 Hermes harness features landed 2026-04-21] Phase 1: Landscape Analysis & Scaffolding",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/965"
    },
    {
      "number": 966,
      "title": "[EPIC: Morning review packet \u2014 Hermes harness features landed 2026-04-21] Phase 2: Core Logic Implementation",
      "state": "open",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/966"
    },
    {
      "number": 967,
      "title": "[EPIC: Morning review packet \u2014 Hermes harness features landed 2026-04-21] Phase 3: Poka-yoke Integration & Fleet Verification",
      "state": "closed",
      "html_url": "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/issues/967"
    }
  ]
}
@@ -38,18 +38,6 @@ def _cron_api(**kwargs):
    return json.loads(cronjob_tool(**kwargs))


def _print_runtime_overrides(job: dict) -> None:
    model = job.get("model")
    provider = job.get("provider")
    base_url = job.get("base_url")
    if model:
        print(f" Model: {model}")
    if provider:
        print(f" Provider: {provider}")
    if base_url:
        print(f" Base URL: {base_url}")


def cron_list(show_all: bool = False):
    """List all scheduled jobs."""
    from cron.jobs import list_jobs

@@ -105,7 +93,6 @@ def cron_list(show_all: bool = False):
        script = job.get("script")
        if script:
            print(f" Script: {script}")
        _print_runtime_overrides(job)

        # Execution history
        last_status = job.get("last_status")

@@ -180,9 +167,6 @@ def cron_create(args):
        repeat=getattr(args, "repeat", None),
        skill=getattr(args, "skill", None),
        skills=_normalize_skills(getattr(args, "skill", None), getattr(args, "skills", None)),
        model=getattr(args, "model", None),
        provider=getattr(args, "provider", None),
        base_url=getattr(args, "base_url", None),
        script=getattr(args, "script", None),
    )
    if not result.get("success"):

@@ -196,8 +180,6 @@ def cron_create(args):
    job_data = result.get("job", {})
    if job_data.get("script"):
        print(f" Script: {job_data['script']}")
    if job_data:
        _print_runtime_overrides(job_data)
    print(f" Next run: {result['next_run_at']}")
    return 0

@@ -235,9 +217,6 @@ def cron_edit(args):
        deliver=getattr(args, "deliver", None),
        repeat=getattr(args, "repeat", None),
        skills=final_skills,
        model=getattr(args, "model", None),
        provider=getattr(args, "provider", None),
        base_url=getattr(args, "base_url", None),
        script=getattr(args, "script", None),
    )
    if not result.get("success"):

@@ -254,7 +233,6 @@ def cron_edit(args):
        print(" Skills: none")
    if updated.get("script"):
        print(f" Script: {updated['script']}")
    _print_runtime_overrides(updated)
    return 0


@@ -4958,9 +4958,6 @@ For more help on a command:
    cron_create.add_argument("--deliver", help="Delivery target: origin, local, telegram, discord, signal, or platform:chat_id")
    cron_create.add_argument("--repeat", type=int, help="Optional repeat count")
    cron_create.add_argument("--skill", dest="skills", action="append", help="Attach a skill. Repeat to add multiple skills.")
    cron_create.add_argument("--model", help="Pin this job to a specific model (for example: google/gemma-4-31b-it)")
    cron_create.add_argument("--provider", help="Pin this job to a specific provider (for example: openrouter)")
    cron_create.add_argument("--base-url", dest="base_url", help="Optional base URL override for the job's runtime provider")
    cron_create.add_argument("--script", help="Path to a Python script whose stdout is injected into the prompt each run")

    # cron edit

@@ -4975,9 +4972,6 @@ For more help on a command:
    cron_edit.add_argument("--add-skill", dest="add_skills", action="append", help="Append a skill without replacing the existing list. Repeatable.")
    cron_edit.add_argument("--remove-skill", dest="remove_skills", action="append", help="Remove a specific attached skill. Repeatable.")
    cron_edit.add_argument("--clear-skills", action="store_true", help="Remove all attached skills from the job")
    cron_edit.add_argument("--model", help="Update the job's pinned model")
    cron_edit.add_argument("--provider", help="Update the job's pinned provider")
    cron_edit.add_argument("--base-url", dest="base_url", help="Update the job's pinned base URL. Pass an empty string to clear it.")
    cron_edit.add_argument("--script", help="Path to a Python script whose stdout is injected into the prompt each run. Pass empty string to clear.")

    # lifecycle actions
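The `--skill` flag in the hunks above is repeatable via `action="append"`, and `--base-url` maps onto the attribute name `base_url` via `dest`. Both are standard argparse behavior; a minimal sketch with a throwaway parser (not the real hermes CLI) shows the effect:

```python
import argparse

# Stripped-down stand-in for the cron-create parser: a repeatable
# --skill flag that accumulates into args.skills, and a --base-url
# flag whose value lands on args.base_url.
parser = argparse.ArgumentParser(prog="hermes-cron-create-sketch")
parser.add_argument("--skill", dest="skills", action="append")
parser.add_argument("--base-url", dest="base_url")

args = parser.parse_args(
    ["--skill", "blogwatcher", "--skill", "find-nearby",
     "--base-url", "https://openrouter.ai/api/v1"]
)
print(args.skills)    # ['blogwatcher', 'find-nearby']
print(args.base_url)  # https://openrouter.ai/api/v1
```

If `--skill` is never passed, `args.skills` is `None` rather than an empty list, which is why the callers above guard with `getattr(args, ..., None)`.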
scripts/morning_review_packet_status.py (new file, 288 lines)

@@ -0,0 +1,288 @@
#!/usr/bin/env python3
"""Generate a grounded status report for hermes-agent morning review packet epic #949."""

from __future__ import annotations

import argparse
import base64
import json
import os
import re
import ssl
import urllib.request
from datetime import datetime, timezone
from pathlib import Path
from typing import Any

BASE_API = "https://forge.alexanderwhitestone.com/api/v1"
REPO = "Timmy_Foundation/hermes-agent"
TOKEN_PATH = Path("~/.config/gitea/token").expanduser()
DEFAULT_JSON_OUT = Path("docs/morning-review-packet-2026-04-21.snapshot.json")
DEFAULT_MARKDOWN_OUT = Path("docs/morning-review-packet-2026-04-21-status.md")


def extract_issue_numbers(text: str) -> list[int]:
    seen: set[int] = set()
    numbers: list[int] = []
    for match in re.finditer(r"#(\d+)", text or ""):
        num = int(match.group(1))
        if num not in seen:
            seen.add(num)
            numbers.append(num)
    return numbers


def _auth_headers(token: str) -> list[dict[str, str]]:
    basic = base64.b64encode(f"{token}:".encode()).decode()
    return [
        {"Authorization": f"token {token}", "Accept": "application/json"},
        {"Authorization": f"Basic {basic}", "Accept": "application/json"},
    ]


def api_get(path: str, *, headers_options: list[dict[str, str]] | None = None) -> Any:
    token = TOKEN_PATH.read_text(encoding="utf-8").strip()
    headers_options = headers_options or _auth_headers(token)
    ctx = ssl.create_default_context()
    url = f"{BASE_API}{path}"
    last_error: Exception | None = None
    for headers in headers_options:
        try:
            req = urllib.request.Request(url, headers=headers)
            with urllib.request.urlopen(req, context=ctx, timeout=30) as resp:
                return json.loads(resp.read().decode())
        except Exception as exc:  # pragma: no cover - exercised via live CLI use
            last_error = exc
    raise RuntimeError(f"GET {url} failed: {last_error}")


def issue_pr_matches(pr: dict[str, Any], issue_num: int) -> bool:
    title = pr.get("title") or ""
    body = pr.get("body") or ""
    head = (pr.get("head") or {}).get("ref") or ""
    exact_ref = re.compile(rf"(?<!\d)#{issue_num}(?!\d)")
    body_ref = re.compile(rf"(?i)(closes|close|fixes|fix|resolves|resolve|refs|ref)\s+#?{issue_num}(?!\d)")
    branch_variants = {
        f"fix/{issue_num}",
        f"issue-{issue_num}",
        f"burn/{issue_num}",
        f"fix/issue-{issue_num}",
    }
    return bool(
        exact_ref.search(title)
        or exact_ref.search(body)
        or body_ref.search(body)
        or head in branch_variants
    )


def fetch_open_prs(*, headers_options: list[dict[str, str]]) -> list[dict[str, Any]]:
    prs: list[dict[str, Any]] = []
    page = 1
    while True:
        batch = api_get(
            f"/repos/{REPO}/pulls?state=open&limit=100&page={page}",
            headers_options=headers_options,
        )
        if not batch:
            break
        prs.extend(batch)
        if len(batch) < 100:
            break
        page += 1
    return prs


def fetch_live_snapshot(epic_issue_num: int = 949) -> dict[str, Any]:
    token = TOKEN_PATH.read_text(encoding="utf-8").strip()
    headers_options = _auth_headers(token)

    epic = api_get(f"/repos/{REPO}/issues/{epic_issue_num}", headers_options=headers_options)
    comments = api_get(f"/repos/{REPO}/issues/{epic_issue_num}/comments", headers_options=headers_options)
    child_numbers = [n for n in extract_issue_numbers(epic.get("body") or "") if n != epic_issue_num]
    decomposition_numbers = [
        n
        for comment in comments
        for n in extract_issue_numbers(comment.get("body") or "")
        if n not in child_numbers and n != epic_issue_num
    ]

    open_prs = fetch_open_prs(headers_options=headers_options)

    children = []
    for number in child_numbers:
        issue = api_get(f"/repos/{REPO}/issues/{number}", headers_options=headers_options)
        matching_prs = [
            {
                "number": pr["number"],
                "title": pr["title"],
                "head": pr.get("head", {}).get("ref", ""),
                "url": pr["html_url"],
            }
            for pr in open_prs
            if issue_pr_matches(pr, number)
        ]
        children.append(
            {
                "number": issue["number"],
                "title": issue["title"],
                "state": issue["state"],
                "html_url": issue["html_url"],
                "open_prs": matching_prs,
            }
        )

    decomposition_issues = []
    for number in decomposition_numbers:
        issue = api_get(f"/repos/{REPO}/issues/{number}", headers_options=headers_options)
        decomposition_issues.append(
            {
                "number": issue["number"],
                "title": issue["title"],
                "state": issue["state"],
                "html_url": issue["html_url"],
            }
        )

    return {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "repo": REPO,
        "epic": {
            "number": epic["number"],
            "title": epic["title"],
            "state": epic["state"],
            "html_url": epic["html_url"],
        },
        "children": children,
        "decomposition_issues": decomposition_issues,
    }


def summarize_snapshot(snapshot: dict[str, Any]) -> dict[str, int]:
    children = snapshot.get("children", [])
    open_children = [issue for issue in children if issue.get("state") == "open"]
    closed_children = [issue for issue in children if issue.get("state") == "closed"]
    open_with_pr = [issue for issue in open_children if issue.get("open_prs")]
    open_without_pr = [issue for issue in open_children if not issue.get("open_prs")]
    return {
        "total_children": len(children),
        "open_children": len(open_children),
        "closed_children": len(closed_children),
        "open_with_pr": len(open_with_pr),
        "open_without_pr": len(open_without_pr),
    }


def render_markdown(snapshot: dict[str, Any]) -> str:
    epic = snapshot["epic"]
    children = snapshot.get("children", [])
    summary = summarize_snapshot(snapshot)
    open_with_pr = [issue for issue in children if issue.get("state") == "open" and issue.get("open_prs")]
    open_without_pr = [issue for issue in children if issue.get("state") == "open" and not issue.get("open_prs")]
    decomposition = snapshot.get("decomposition_issues", [])

    lines = [
        f"# Morning Review Packet Status — #{epic['number']}",
        "",
        f"Generated: {snapshot.get('generated_at', '')}",
        f"Epic: [{epic['title']}]({epic.get('html_url', '')})",
        "",
        "## Summary",
        "",
        f"- Child QA issues tracked: {summary['total_children']}",
        f"- Open child issues: {summary['open_children']}",
        f"- Closed child issues: {summary['closed_children']}",
        f"- Open child issues already backed by PRs: {summary['open_with_pr']}",
        f"- Open child issues still unowned on forge: {summary['open_without_pr']}",
        "",
        "## Child QA Matrix",
        "",
        "| Issue | State | Open PRs | Title |",
        "|------:|-------|----------|-------|",
    ]

    for issue in children:
        rendered_prs = []
        for pr in issue.get("open_prs", []):
            pr_num = pr.get("number", "?")
            pr_url = pr.get("url") or pr.get("html_url") or ""
            rendered_prs.append(f"[#{pr_num}]({pr_url})" if pr_url else f"#{pr_num}")
        pr_text = ", ".join(rendered_prs) or "—"
        lines.append(
            f"| #{issue['number']} | {issue['state']} | {pr_text} | {issue['title']} |"
        )

    lines.extend([
        "",
        "## Drift Signals",
        "",
        "forge/main is still catching up to the upstream packet.",
    ])

    if open_with_pr:
        lines.append("")
        lines.append("Active PR-backed child lanes:")
        for issue in open_with_pr:
            pr_numbers = ", ".join(f"#{pr['number']}" for pr in issue.get("open_prs", []))
            lines.append(f"- #{issue['number']} -> {pr_numbers} ({issue['title']})")

    if open_without_pr:
        lines.extend([
            "",
            "## Unowned Open QA Issues",
            "",
        ])
        for issue in open_without_pr:
            lines.append(f"- #{issue['number']} {issue['title']}")

    if decomposition:
        lines.extend([
            "",
            "## Decomposition Follow-Ups",
            "",
        ])
        for issue in decomposition:
            lines.append(f"- #{issue['number']} [{issue['state']}] {issue['title']}")

    lines.extend([
        "",
        "## Conclusion",
        "",
        "Refs #949 only. This epic remains open until every child QA issue has a truthful PASS/FAIL outcome, attached evidence, and any upstream/main versus forge/main drift is resolved or explicitly documented.",
        "",
        "## Regeneration",
        "",
        "```bash",
        "python3 scripts/morning_review_packet_status.py --fetch-live --json-out docs/morning-review-packet-2026-04-21.snapshot.json --markdown-out docs/morning-review-packet-2026-04-21-status.md",
        "```",
    ])

    return "\n".join(lines) + "\n"


def write_json(path: Path, data: dict[str, Any]) -> None:
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(data, indent=2) + "\n", encoding="utf-8")


def main() -> None:
    parser = argparse.ArgumentParser(description="Generate grounded status docs for epic #949")
    parser.add_argument("--fetch-live", action="store_true", help="Fetch the current packet state from Forge")
    parser.add_argument("--snapshot", type=Path, help="Read a local JSON snapshot instead of hitting the API")
    parser.add_argument("--json-out", type=Path, default=DEFAULT_JSON_OUT, help="Path to write JSON snapshot")
    parser.add_argument("--markdown-out", type=Path, default=DEFAULT_MARKDOWN_OUT, help="Path to write markdown report")
    args = parser.parse_args()

    if args.fetch_live or not args.snapshot:
        snapshot = fetch_live_snapshot()
    else:
        snapshot = json.loads(args.snapshot.read_text(encoding="utf-8"))

    write_json(args.json_out, snapshot)
    args.markdown_out.parent.mkdir(parents=True, exist_ok=True)
    args.markdown_out.write_text(render_markdown(snapshot), encoding="utf-8")
    print(args.markdown_out)


if __name__ == "__main__":
    main()
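The `issue_pr_matches` heuristic in `scripts/morning_review_packet_status.py` associates an open PR with a child issue via an exact `#NNN` reference, a closing keyword in the body, or a conventional branch name. It can be exercised standalone; the sample PR dicts below are invented for illustration:

```python
import re

# Standalone copy of the PR-to-issue matching heuristic from
# scripts/morning_review_packet_status.py, for experimentation.
def issue_pr_matches(pr, issue_num):
    title = pr.get("title") or ""
    body = pr.get("body") or ""
    head = (pr.get("head") or {}).get("ref") or ""
    exact_ref = re.compile(rf"(?<!\d)#{issue_num}(?!\d)")
    body_ref = re.compile(rf"(?i)(closes|close|fixes|fix|resolves|resolve|refs|ref)\s+#?{issue_num}(?!\d)")
    branches = {f"fix/{issue_num}", f"issue-{issue_num}", f"burn/{issue_num}", f"fix/issue-{issue_num}"}
    return bool(exact_ref.search(title) or exact_ref.search(body)
                or body_ref.search(body) or head in branches)

# Branch-name match, title match, and a near-miss: the digit
# boundary assertions keep #9541 from matching issue #954.
assert issue_pr_matches({"head": {"ref": "fix/954"}}, 954)
assert issue_pr_matches({"title": "feat: maps skill (#954)"}, 954)
assert not issue_pr_matches({"title": "see #9541"}, 954)
```

The `(?<!\d)` / `(?!\d)` assertions are what make the match digit-exact, so prefix collisions between issue numbers cannot produce false lanes.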
@@ -1,7 +1,6 @@
"""Tests for hermes_cli.cron command handling."""

from argparse import Namespace
from unittest.mock import patch

import pytest

@@ -106,135 +105,3 @@ class TestCronCommandLifecycle:
        assert len(jobs) == 1
        assert jobs[0]["skills"] == ["blogwatcher", "find-nearby"]
        assert jobs[0]["name"] == "Skill combo"

    def test_create_can_pin_runtime_model_provider_and_base_url(self, tmp_cron_dir, capsys):
        cron_command(
            Namespace(
                cron_command="create",
                schedule="every 1h",
                prompt="Run the burn loop",
                name="Gemma burn",
                deliver=None,
                repeat=None,
                skill=None,
                skills=None,
                script=None,
                model="google/gemma-4-31b-it",
                provider="openrouter",
                base_url="https://openrouter.ai/api/v1",
            )
        )

        job = list_jobs()[0]
        assert job["model"] == "google/gemma-4-31b-it"
        assert job["provider"] == "openrouter"
        assert job["base_url"] == "https://openrouter.ai/api/v1"

        out = capsys.readouterr().out
        assert "Created job" in out
        assert "Model: google/gemma-4-31b-it" in out
        assert "Provider: openrouter" in out

    def test_edit_can_update_runtime_model_provider_and_clear_base_url(self, tmp_cron_dir, capsys):
        job = create_job(prompt="Check server status", schedule="every 1h")

        cron_command(
            Namespace(
                cron_command="edit",
                job_id=job["id"],
                schedule=None,
                prompt=None,
                name=None,
                deliver=None,
                repeat=None,
                skill=None,
                skills=None,
                add_skills=None,
                remove_skills=None,
                clear_skills=False,
                script=None,
                model="google/gemma-4-31b-it",
                provider="openrouter",
                base_url="",
            )
        )

        updated = get_job(job["id"])
        assert updated["model"] == "google/gemma-4-31b-it"
        assert updated["provider"] == "openrouter"
        assert updated["base_url"] is None

        out = capsys.readouterr().out
        assert "Updated job" in out
        assert "Model: google/gemma-4-31b-it" in out
        assert "Provider: openrouter" in out


class TestCronParserRuntimeOverrideFlags:
    def test_main_parses_create_runtime_override_flags(self, monkeypatch):
        from hermes_cli import main as main_mod

        captured = {}

        def fake_cmd_cron(args):
            captured["args"] = args

        monkeypatch.setattr(main_mod, "cmd_cron", fake_cmd_cron)
        monkeypatch.setattr(
            "sys.argv",
            [
                "hermes",
                "cron",
                "create",
                "every 1h",
                "Run the burn loop",
                "--model",
                "google/gemma-4-31b-it",
                "--provider",
                "openrouter",
                "--base-url",
                "https://openrouter.ai/api/v1",
            ],
        )

        main_mod.main()

        args = captured["args"]
        assert args.cron_command == "create"
        assert args.model == "google/gemma-4-31b-it"
        assert args.provider == "openrouter"
        assert args.base_url == "https://openrouter.ai/api/v1"

    def test_main_parses_edit_runtime_override_flags(self, monkeypatch):
        from hermes_cli import main as main_mod

        captured = {}

        def fake_cmd_cron(args):
            captured["args"] = args

        monkeypatch.setattr(main_mod, "cmd_cron", fake_cmd_cron)
        monkeypatch.setattr(
            "sys.argv",
            [
                "hermes",
                "cron",
                "edit",
                "job123",
                "--model",
                "google/gemma-4-31b-it",
                "--provider",
                "openrouter",
                "--base-url",
                "",
            ],
        )

        main_mod.main()

        args = captured["args"]
        assert args.cron_command == "edit"
        assert args.job_id == "job123"
        assert args.model == "google/gemma-4-31b-it"
        assert args.provider == "openrouter"
        assert args.base_url == ""
tests/test_morning_review_packet_status.py (new file, 94 lines)

@@ -0,0 +1,94 @@
"""Tests for the morning review packet status report generator."""

from __future__ import annotations

import importlib.util
from pathlib import Path

SCRIPT_PATH = Path(__file__).resolve().parents[1] / "scripts" / "morning_review_packet_status.py"
DOC_PATH = Path(__file__).resolve().parents[1] / "docs" / "morning-review-packet-2026-04-21-status.md"


def load_module():
    assert SCRIPT_PATH.exists(), f"missing status script: {SCRIPT_PATH}"
    spec = importlib.util.spec_from_file_location("morning_review_packet_status_test", SCRIPT_PATH)
    module = importlib.util.module_from_spec(spec)
    assert spec.loader is not None
    spec.loader.exec_module(module)
    return module


def sample_snapshot():
    return {
        "epic": {"number": 949, "title": "Morning review packet", "state": "open"},
        "children": [
            {
                "number": 950,
                "title": "Verify AI Gateway provider UX + attribution headers",
                "state": "open",
                "open_prs": [],
            },
            {
                "number": 954,
                "title": "Verify maps skill guest_house / camp_site / bakery expansion",
                "state": "open",
                "open_prs": [
                    {"number": 1021, "head": "fix/954", "title": "feat: sync maps skill and verify guest_house/camp_site/bakery (#954)"}
                ],
            },
            {
                "number": 961,
                "title": "Verify web dashboard update/restart action buttons",
                "state": "closed",
                "open_prs": [],
            },
        ],
        "decomposition_issues": [
            {"number": 965, "title": "Phase 1: Landscape Analysis & Scaffolding", "state": "open"},
            {"number": 967, "title": "Phase 3: Poka-yoke Integration & Fleet Verification", "state": "closed"},
        ],
    }


def test_extract_child_issue_numbers_from_epic_body():
    module = load_module()
    body = """
    - [ ] #950 one
    - [ ] #951 two
    - [ ] #962 three
    """
    assert module.extract_issue_numbers(body) == [950, 951, 962]


def test_summarize_snapshot_counts_open_closed_and_pr_backing():
    module = load_module()
    summary = module.summarize_snapshot(sample_snapshot())

    assert summary["total_children"] == 3
    assert summary["open_children"] == 2
    assert summary["closed_children"] == 1
    assert summary["open_with_pr"] == 1
    assert summary["open_without_pr"] == 1


def test_render_markdown_includes_issue_matrix_and_drift_sections():
    module = load_module()
    md = module.render_markdown(sample_snapshot())

    assert "# Morning Review Packet Status — #949" in md
    assert "## Child QA Matrix" in md
    assert "#950" in md
    assert "#954" in md
    assert "#1021" in md
    assert "## Unowned Open QA Issues" in md
    assert "## Drift Signals" in md
    assert "forge/main is still catching up to the upstream packet" in md


def test_committed_status_doc_exists_and_mentions_live_examples():
    assert DOC_PATH.exists(), f"missing generated status doc: {DOC_PATH}"
    text = DOC_PATH.read_text(encoding="utf-8")
    assert "# Morning Review Packet Status — #949" in text
    assert "#954" in text
    assert "#1021" in text
    assert "#950" in text
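The extraction helper exercised by these tests boils down to a first-seen, order-preserving regex scan over `#NNN` references; a self-contained sketch:

```python
import re

# First-seen, order-preserving extraction of "#NNN" issue references,
# mirroring the extract_issue_numbers helper tested above.
def extract_issue_numbers(text):
    seen, numbers = set(), []
    for match in re.finditer(r"#(\d+)", text or ""):
        num = int(match.group(1))
        if num not in seen:
            seen.add(num)
            numbers.append(num)
    return numbers

body = "- [ ] #950 one\n- [ ] #951 two\n- [ ] #950 dup\nRefs #949"
print(extract_issue_numbers(body))  # [950, 951, 949]
```

Duplicates keep their first position, which is what lets the epic body define a stable child-issue ordering even when issues are mentioned more than once.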