Compare commits


1 Commit

Author SHA1 Message Date
Alexander Whitestone
6cbb9a98e1 fix: refresh burn lane audit for #662
Some checks failed
Agent PR Gate / gate (pull_request) Failing after 48s
Self-Healing Smoke / self-healing-smoke (pull_request) Failing after 8s
Smoke Test / smoke (pull_request) Failing after 6s
Agent PR Gate / report (pull_request) Has been cancelled
2026-04-16 23:43:24 -04:00
5 changed files with 151 additions and 249 deletions

View File

@@ -1,6 +1,6 @@
# Burn Lane Empty Audit — timmy-home #662
Generated: 2026-04-16T01:22:37Z
Generated: 2026-04-17T03:42:50Z
Source issue: `[ops] Burn lane empty — all open issues triaged (2026-04-14)`
## Source Snapshot
@@ -11,9 +11,9 @@ Issue #662 is an operational status note, not a normal feature request. Its body
- Referenced issues audited: 42
- Already closed: 30
- Open but likely closure candidates (merged PR found): 0
- Open with active PRs: 12
- Open / needs manual review: 0
- Open but likely closure candidates (merged PR found): 1
- Open with active PRs: 0
- Open / needs manual review: 11
## Issue Body Drift
@@ -21,56 +21,56 @@ The body of #662 is not current truth. It mixes closed issues, open issues, rang
| Issue | State | Classification | PR Summary |
|---|---|---|---|
| #579 | closed | already closed | closed PR #644, closed PR #640, closed PR #635, closed PR #620 |
| #648 | open | active pr | open PR #731 |
| #579 | closed | already closed | closed PR #644, closed PR #643, closed PR #640, closed PR #635, closed PR #620 |
| #648 | open | needs manual review | closed PR #731 |
| #647 | closed | already closed | issue already closed |
| #619 | closed | already closed | issue already closed |
| #616 | closed | already closed | issue already closed |
| #614 | closed | already closed | issue already closed |
| #613 | closed | already closed | issue already closed |
| #660 | closed | already closed | issue already closed |
| #659 | closed | already closed | issue already closed |
| #659 | closed | already closed | closed PR #660 |
| #658 | closed | already closed | issue already closed |
| #657 | closed | already closed | issue already closed |
| #656 | closed | already closed | closed PR #658 |
| #655 | closed | already closed | issue already closed |
| #654 | closed | already closed | closed PR #661 |
| #653 | closed | already closed | issue already closed |
| #652 | closed | already closed | merged PR #657 |
| #651 | closed | already closed | issue already closed |
| #650 | closed | already closed | merged PR #654 |
| #649 | closed | already closed | issue already closed |
| #646 | closed | already closed | issue already closed |
| #582 | open | active pr | open PR #738 |
| #653 | closed | already closed | closed PR #660, closed PR #655 |
| #652 | closed | already closed | closed PR #660, merged PR #657, closed PR #655 |
| #651 | closed | already closed | closed PR #655 |
| #650 | closed | already closed | closed PR #661, closed PR #660, merged PR #654, closed PR #651 |
| #649 | closed | already closed | closed PR #660, merged PR #657, closed PR #651 |
| #646 | closed | already closed | closed PR #655, closed PR #651 |
| #582 | open | closure candidate | merged PR #641, merged PR #639, merged PR #637, merged PR #631, merged PR #630 |
| #627 | closed | already closed | issue already closed |
| #631 | closed | already closed | issue already closed |
| #632 | closed | already closed | issue already closed |
| #634 | closed | already closed | issue already closed |
| #639 | closed | already closed | issue already closed |
| #641 | closed | already closed | issue already closed |
| #575 | closed | already closed | merged PR #656 |
| #576 | closed | already closed | closed PR #663, closed PR #660, closed PR #655, closed PR #651, closed PR #646, closed PR #642, closed PR #633 |
| #575 | closed | already closed | closed PR #658, merged PR #656 |
| #576 | closed | already closed | merged PR #664, closed PR #663, closed PR #660, closed PR #655, merged PR #654, closed PR #651, closed PR #646, closed PR #642, closed PR #633 |
| #578 | closed | already closed | merged PR #638, closed PR #636 |
| #636 | closed | already closed | issue already closed |
| #638 | closed | already closed | issue already closed |
| #547 | open | active pr | open PR #730 |
| #548 | open | active pr | open PR #712 |
| #549 | open | active pr | open PR #729 |
| #550 | open | active pr | open PR #727 |
| #551 | open | active pr | open PR #725 |
| #552 | open | active pr | open PR #724 |
| #553 | open | active pr | open PR #722 |
| #562 | open | active pr | open PR #718 |
| #544 | open | active pr | open PR #732 |
| #545 | open | active pr | open PR #719 |
| #547 | open | needs manual review | closed PR #730 |
| #548 | open | needs manual review | closed PR #712 |
| #549 | open | needs manual review | closed PR #729 |
| #550 | open | needs manual review | closed PR #727 |
| #551 | open | needs manual review | closed PR #725 |
| #552 | open | needs manual review | closed PR #724 |
| #553 | open | needs manual review | closed PR #722 |
| #562 | open | needs manual review | closed PR #718 |
| #544 | open | needs manual review | closed PR #732 |
| #545 | open | needs manual review | closed PR #719 |
## Closure Candidates
These issues are still open but already have merged PR evidence in the forge and should be reviewed for bulk closure.
| None |
|---|
| None |
| Issue | State | Classification | PR Summary |
|---|---|---|---|
| #582 | open | closure candidate | merged PR #641, merged PR #639, merged PR #637, merged PR #631, merged PR #630 |
## Still Open / Needs Manual Review
@@ -78,18 +78,17 @@ These issues either have no matching PR signal or still have an active PR / ambi
| Issue | State | Classification | PR Summary |
|---|---|---|---|
| #648 | open | active pr | open PR #731 |
| #582 | open | active pr | open PR #738 |
| #547 | open | active pr | open PR #730 |
| #548 | open | active pr | open PR #712 |
| #549 | open | active pr | open PR #729 |
| #550 | open | active pr | open PR #727 |
| #551 | open | active pr | open PR #725 |
| #552 | open | active pr | open PR #724 |
| #553 | open | active pr | open PR #722 |
| #562 | open | active pr | open PR #718 |
| #544 | open | active pr | open PR #732 |
| #545 | open | active pr | open PR #719 |
| #648 | open | needs manual review | closed PR #731 |
| #547 | open | needs manual review | closed PR #730 |
| #548 | open | needs manual review | closed PR #712 |
| #549 | open | needs manual review | closed PR #729 |
| #550 | open | needs manual review | closed PR #727 |
| #551 | open | needs manual review | closed PR #725 |
| #552 | open | needs manual review | closed PR #724 |
| #553 | open | needs manual review | closed PR #722 |
| #562 | open | needs manual review | closed PR #718 |
| #544 | open | needs manual review | closed PR #732 |
| #545 | open | needs manual review | closed PR #719 |
## Recommendation

View File

@@ -23,6 +23,7 @@ class PullSummary:
state: str
merged: bool
head: str
body: str
url: str
@@ -75,7 +76,8 @@ def api_get(path: str, token: str):
def collect_pull_summaries(repo: str, token: str) -> list[PullSummary]:
pulls: list[PullSummary] = []
for state in ("open", "closed"):
for page in range(1, 6):
page = 1
while True:
batch = api_get(f"/repos/{ORG}/{repo}/pulls?state={state}&limit=100&page={page}", token)
if not batch:
break
@@ -87,18 +89,18 @@ def collect_pull_summaries(repo: str, token: str) -> list[PullSummary]:
state=pr.get("state") or state,
merged=bool(pr.get("merged")),
head=(pr.get("head") or {}).get("ref") or "",
body=pr.get("body") or "",
url=pr.get("html_url") or pr.get("url") or "",
)
)
if len(batch) < 100:
break
page += 1
return pulls
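The change from a fixed `range(1, 6)` to an unbounded `while True` loop matters once the forge holds more than 500 pull requests: the old code silently dropped everything past page 5. A minimal sketch of the same pagination pattern, with a hypothetical `fetch_page` standing in for the real `api_get` call:

```python
# Sketch of the unbounded-pagination pattern adopted above.
# `fetch_page(page)` is a stand-in for the real forge API call.
def collect_all(fetch_page, page_size=100):
    items = []
    page = 1
    while True:
        batch = fetch_page(page)
        if not batch:                # empty page: nothing left
            break
        items.extend(batch)
        if len(batch) < page_size:   # short page: this was the last one
            break
        page += 1
    return items

# A fake paginated source: 7 full pages, then one short page.
def fake_fetch(page):
    if page <= 7:
        return list(range((page - 1) * 100, page * 100))
    if page == 8:
        return [700, 701]
    return []
```

With the old `range(1, 6)` bound, `fake_fetch` would lose pages 6 through 8; the loop above walks until it sees a short or empty page and collects all 702 items.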
def match_prs(issue_num: int, pulls: Iterable[PullSummary]) -> list[PullSummary]:
matches: list[PullSummary] = []
for pr in pulls:
text = f"{pr.title} {pr.head}"
text = f"{pr.title} {pr.head} {pr.body}"
if f"#{issue_num}" in text or pr.head == f"fix/{issue_num}" or f"/{issue_num}" in pr.head or f"-{issue_num}" in pr.head:
matches.append(pr)
return matches
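The widened match text (title, head branch, and now the PR body) can be exercised in isolation. A sketch of the same substring rules as `match_prs`, written as a stand-alone predicate for illustration:

```python
# Stand-alone copy of the matching heuristic used by match_prs:
# an issue matches a PR when "#<n>" appears in its title/head/body,
# or the head branch is fix/<n>, or contains /<n> or -<n>.
def matches_issue(issue_num, title, head, body=""):
    text = f"{title} {head} {body}"
    return (
        f"#{issue_num}" in text
        or head == f"fix/{issue_num}"
        or f"/{issue_num}" in head
        or f"-{issue_num}" in head
    )
```

Note the heuristic is substring-based, so short issue numbers can false-positive against longer ones (e.g. `#64` matches inside `#648`); that is one reason ambiguous rows still route to manual review.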
@@ -116,12 +118,16 @@ def classify_issue(issue: dict, related_prs: list[PullSummary]) -> IssueAuditRow
else:
merged = [pr for pr in related_prs if pr.merged]
open_prs = [pr for pr in related_prs if pr.state == "open"]
closed_unmerged = [pr for pr in related_prs if pr.state != "open" and not pr.merged]
if merged:
classification = "closure_candidate"
pr_summary = summarize_prs(merged)
elif open_prs:
classification = "active_pr"
pr_summary = summarize_prs(open_prs)
elif closed_unmerged:
classification = "needs_manual_review"
pr_summary = summarize_prs(closed_unmerged)
else:
classification = "needs_manual_review"
pr_summary = "no matching PR found"
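The new branch makes the precedence explicit: merged evidence beats open PRs, which beat closed-but-unmerged PRs, and anything else falls through to manual review. A sketch of the same decision table, with a simplified `(state, merged)` tuple standing in for `PullSummary`:

```python
# Simplified classification precedence from classify_issue.
# prs: list of (state, merged) tuples standing in for PullSummary.
def classify(prs):
    merged = [p for p in prs if p[1]]
    open_prs = [p for p in prs if p[0] == "open"]
    closed_unmerged = [p for p in prs if p[0] != "open" and not p[1]]
    if merged:
        return "closure_candidate"    # merged PR: issue likely done
    if open_prs:
        return "active_pr"            # work in flight
    if closed_unmerged:
        return "needs_manual_review"  # abandoned or superseded PR
    return "needs_manual_review"      # no PR signal at all
```

This ordering explains the table churn in the report diff above: once a matched PR closes unmerged, the row drops from "active pr" to "needs manual review" rather than disappearing.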

View File

@@ -1,128 +0,0 @@
"""
Source Distinction Module — Verified vs Inferred Claims
SOUL.md compliance: "I tell the truth. When I do not know something, I say so.
I do not fabricate confidence."
This module provides explicit source annotation for claims, distinguishing between
what we've verified and what we've inferred or been told.
"""
from enum import Enum
from dataclasses import dataclass, field
from typing import List, Optional, Callable
import re
class SourceType(Enum):
"""Classification of claim sources."""
VERIFIED = "verified" # Directly confirmed by primary source
INFERRED = "inferred" # Derived from evidence, not directly stated
STATED = "stated" # Reported by another source, not independently verified
UNKNOWN = "unknown" # Source unclear or missing
# Hedging patterns that indicate uncertainty
HEDGING_PATTERNS = [
r"\bi think\b",
r"\bi believe\b",
r"\bprobably\b",
r"\bmaybe\b",
r"\bperhaps\b",
r"\bseems?\b",
r"\bappears?\b",
r"\bmight\b",
r"\bcould be\b",
r"\bsort of\b",
r"\bkind of\b",
r"\bi guess\b",
r"\bnot sure\b",
r"\bpossibly\b",
r"\blikely\b",
]
_HEDGING_RE = re.compile("|".join(HEDGING_PATTERNS), re.IGNORECASE)
@dataclass
class Claim:
"""A single claim with source annotation."""
text: str
source: SourceType = SourceType.UNKNOWN
citation: Optional[str] = None
confidence: float = 1.0
def render(self) -> str:
"""Render claim with source indicator."""
prefix = _source_prefix(self.source)
parts = [f"{prefix} {self.text}"]
if self.citation:
parts.append(f"({self.citation})")
return " ".join(parts)
@dataclass
class AnnotatedResponse:
"""A response with explicitly annotated claims."""
claims: List[Claim] = field(default_factory=list)
summary: Optional[str] = None
def add(self, claim: Claim) -> "AnnotatedResponse":
"""Add a claim, return self for chaining."""
self.claims.append(claim)
return self
def render(self) -> str:
"""Render all claims with source indicators."""
lines = []
if self.summary:
lines.append(self.summary)
lines.append("")
for claim in self.claims:
lines.append(claim.render())
return "\n".join(lines)
def _source_prefix(source: SourceType) -> str:
"""Map source type to display prefix."""
return {
SourceType.VERIFIED: "",
SourceType.INFERRED: "~",
SourceType.STATED: "",
SourceType.UNKNOWN: "?",
}[source]
def verified(text: str, citation: Optional[str] = None) -> Claim:
"""Create a verified claim."""
return Claim(text=text, source=SourceType.VERIFIED, citation=citation, confidence=1.0)
def inferred(text: str, citation: Optional[str] = None, confidence: float = 0.7) -> Claim:
"""Create an inferred claim."""
return Claim(text=text, source=SourceType.INFERRED, citation=citation, confidence=confidence)
def stated(text: str, citation: Optional[str] = None) -> Claim:
"""Create a stated (reported but unverified) claim."""
return Claim(text=text, source=SourceType.STATED, citation=citation, confidence=0.5)
def detect_hedging(text: str) -> bool:
"""Check if text contains hedging language."""
return bool(_HEDGING_RE.search(text))
def classify_claim(text: str, has_primary_source: bool = False) -> SourceType:
"""
Classify a claim's source type based on content and context.
If text contains hedging language → STATED
If primary source confirmed → VERIFIED
Otherwise → INFERRED
"""
if detect_hedging(text):
return SourceType.STATED
if has_primary_source:
return SourceType.VERIFIED
return SourceType.INFERRED
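Although the module is deleted in this commit, its core idea composes in a few lines: hedged wording downgrades a claim to "stated", a confirmed primary source upgrades it to "verified", and everything else is "inferred". A condensed, self-contained sketch using a shortened hedge list (the names below are illustrative, not the deleted module's API):

```python
import re

# Condensed stand-alone version of the deleted module's classifier.
# Shortened hedge list for illustration; the original had ~15 patterns.
HEDGES = re.compile(
    r"\b(i think|i believe|probably|maybe|seems?|might|likely)\b",
    re.IGNORECASE,
)

def classify_claim(text, has_primary_source=False):
    if HEDGES.search(text):
        return "stated"      # hedged language: report, don't assert
    if has_primary_source:
        return "verified"    # directly confirmed
    return "inferred"        # derived from evidence
```

The precedence mirrors the original: hedging wins even over a primary source, which is deliberate, since hedged phrasing signals the speaker themselves is not asserting the claim.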

View File

@@ -1,6 +1,13 @@
from pathlib import Path
from scripts.burn_lane_issue_audit import extract_issue_numbers, render_report
from scripts.burn_lane_issue_audit import (
PullSummary,
classify_issue,
collect_pull_summaries,
extract_issue_numbers,
match_prs,
render_report,
)
def test_extract_issue_numbers_handles_ranges_and_literals() -> None:
@@ -14,6 +21,99 @@ def test_extract_issue_numbers_handles_ranges_and_literals() -> None:
assert extract_issue_numbers(body) == [579, 660, 659, 658, 582, 627, 631, 547, 546, 545]
def test_match_prs_detects_issue_ref_in_pr_body() -> None:
pulls = [
PullSummary(
number=731,
title="docs: verify session harvest report",
state="open",
merged=False,
head="fix/session-harvest-report",
body="Refs #648",
url="https://forge.example/pr/731",
),
PullSummary(
number=732,
title="unrelated",
state="open",
merged=False,
head="fix/unrelated",
body="Refs #700",
url="https://forge.example/pr/732",
),
]
assert [pr.number for pr in match_prs(648, pulls)] == [731]
def test_open_issue_with_closed_unmerged_pr_stays_manual_review_with_history() -> None:
issue = {
"number": 648,
"title": "session harvest report",
"state": "open",
"html_url": "https://forge.example/issues/648",
}
row = classify_issue(
issue,
[
PullSummary(
number=731,
title="docs: add session harvest report",
state="closed",
merged=False,
head="fix/648",
body="Closes #648",
url="https://forge.example/pr/731",
)
],
)
assert row.classification == "needs_manual_review"
assert row.pr_summary == "closed PR #731"
def test_collect_pull_summaries_pages_until_empty(monkeypatch) -> None:
def fake_api_get(path: str, token: str):
if "state=open" in path:
return []
page = int(path.split("page=")[1])
if page <= 5:
return [
{
"number": page * 1000 + i,
"title": f"page {page} pr {i}",
"state": "closed",
"merged": False,
"head": {"ref": f"fix/{page}-{i}"},
"body": f"Refs #{page * 1000 + i}",
"html_url": f"https://forge.example/pr/{page * 1000 + i}",
}
for i in range(100)
]
if page == 6:
return [
{
"number": 900,
"title": "late page pr",
"state": "closed",
"merged": False,
"head": {"ref": "fix/900"},
"body": "Refs #900",
"html_url": "https://forge.example/pr/900",
}
]
return []
monkeypatch.setattr("scripts.burn_lane_issue_audit.api_get", fake_api_get)
pulls = collect_pull_summaries("timmy-home", "token")
assert any(pr.number == 900 for pr in pulls)
def test_render_report_calls_out_drift_and_candidates() -> None:
rows = [
{

View File

@@ -1,75 +0,0 @@
"""Tests for source distinction module — 9 tests."""
import pytest
from scripts.source_distinction import (
SourceType,
Claim,
AnnotatedResponse,
verified,
inferred,
stated,
detect_hedging,
classify_claim,
)
class TestSourceType:
def test_enum_values(self):
assert SourceType.VERIFIED.value == "verified"
assert SourceType.INFERRED.value == "inferred"
assert SourceType.STATED.value == "stated"
assert SourceType.UNKNOWN.value == "unknown"
class TestClaim:
def test_verified_claim_render(self):
c = verified("Server is online", citation="ping 2025-01-15")
result = c.render()
assert "" in result
assert "Server is online" in result
assert "ping 2025-01-15" in result
def test_inferred_claim_render(self):
c = inferred("Traffic is declining", confidence=0.6)
result = c.render()
assert "~" in result
assert c.confidence == 0.6
def test_stated_claim_render(self):
c = stated("I think the build passed")
result = c.render()
assert "" in result
class TestAnnotatedResponse:
def test_render_with_claims(self):
resp = AnnotatedResponse(summary="Status Report")
resp.add(verified("DNS resolved")).add(inferred("Latency is high"))
rendered = resp.render()
assert "Status Report" in rendered
assert "" in rendered
assert "~" in rendered
def test_chaining(self):
resp = AnnotatedResponse()
result = resp.add(verified("a")).add(stated("b"))
assert result is resp
assert len(resp.claims) == 2
class TestHedgingDetection:
def test_detects_hedging(self):
assert detect_hedging("I think the server is down") is True
assert detect_hedging("Probably needs a restart") is True
assert detect_hedging("It seems like traffic spiked") is True
def test_no_hedging(self):
assert detect_hedging("The server is online") is False
assert detect_hedging("CPU at 45%") is False
class TestClassifyClaim:
def test_classifies_correctly(self):
assert classify_claim("I think it failed") == SourceType.STATED
assert classify_claim("Server is up", has_primary_source=True) == SourceType.VERIFIED
assert classify_claim("Traffic increased") == SourceType.INFERRED