Compare commits


1 Commit

Author SHA1 Message Date
Alexander Whitestone
38ce7e4bc5 feat: Add PR backlog management process (#1470)
## Summary
Added tools and process for managing PR backlog in timmy-config.

## Problem
timmy-config has 31+ open PRs, the highest in the organization.
This creates confusion, slows down development, and increases
merge conflicts.

## Solution
Created automated tools and process for PR backlog management:

### 1. PR Backlog Analyzer (`scripts/pr-backlog-analyzer.py`)
- Fetches all open PRs from timmy-config
- Analyzes age, review status, labels
- Generates markdown report
- Categorizes PRs: stale, needs review, approved, changes requested

### 2. GitHub Actions Workflow (`.github/workflows/pr-backlog-management.yml`)
- Runs weekly on Monday at 10 AM UTC
- Analyzes PR backlog
- Creates issue if backlog is high (>10 stale PRs)
- Uploads report as artifact

### 3. Documentation (`docs/pr-backlog-process.md`)
- Weekly analysis process
- Review stale PRs procedure
- Merge approved PRs workflow
- Review pending PRs SLA
- Close duplicate PRs process
- Metrics to track
- Escalation procedures

## Usage

### Run Analyzer
```bash
python scripts/pr-backlog-analyzer.py
```

### View Report
```bash
cat reports/pr-backlog-$(date +%Y%m%d).md
```

## Metrics
- **Current**: 32 open PRs in timmy-config
- **Target**: <20 open PRs
- **SLA**: Review within 48 hours, merge within 7 days

Issue: #1470
2026-04-14 21:14:55 -04:00
6 changed files with 413 additions and 58 deletions

.github/workflows/pr-backlog-management.yml Normal file

@@ -0,0 +1,70 @@
name: PR Backlog Management

on:
  schedule:
    # Run weekly on Monday at 10 AM UTC
    - cron: '0 10 * * 1'
  workflow_dispatch: # Allow manual trigger

jobs:
  analyze-backlog:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - name: Install dependencies
        run: |
          pip install requests
      - name: Analyze PR backlog
        env:
          GITEA_TOKEN: ${{ secrets.GITEA_TOKEN }}
        run: |
          python scripts/pr-backlog-analyzer.py
      - name: Upload report
        uses: actions/upload-artifact@v4
        with:
          name: pr-backlog-report
          path: reports/
      - name: Create issue if backlog is high
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            // The analyzer writes reports/pr-backlog-YYYYMMDD.md (no dashes
            // in the date), so strip them from the ISO date stamp.
            const stamp = new Date().toISOString().split('T')[0].replace(/-/g, '');
            const report = fs.readFileSync('reports/pr-backlog-' + stamp + '.md', 'utf8');
            // Check if backlog is high (more than 10 stale PRs); the script
            // itself decides, so the step runs on every scheduled pass.
            const staleMatch = report.match(/Stale \(>30 days\): (\d+)/);
            const staleCount = staleMatch ? parseInt(staleMatch[1], 10) : 0;
            if (staleCount > 10) {
              const title = 'PR Backlog Alert: ' + staleCount + ' stale PRs';
              const body = `## PR Backlog Alert
            The PR backlog analysis found ${staleCount} stale PRs (>30 days old).
            ### Recommendation
            Review and close stale PRs to reduce backlog.
            ### Report
            See attached artifact for full analysis.
            This issue was automatically created by the PR backlog management workflow.`;
              await github.rest.issues.create({
                owner: context.repo.owner,
                repo: context.repo.repo,
                title,
                body,
                labels: ['process-improvement', 'p2-backlog']
              });
            }

docs/pr-backlog-process.md Normal file

@@ -0,0 +1,126 @@
# PR Backlog Management Process
## Overview
This document outlines the process for managing PR backlog in the Timmy Foundation repositories, specifically addressing the high PR backlog in timmy-config.
## Current State
As of the latest analysis:
- **timmy-config**: 31 open PRs (highest in org)
- **the-nexus**: Multiple PRs for same issues
- **hermes-agent**: Moderate PR count
## Process
### 1. Weekly Analysis
Run the PR backlog analyzer weekly:
```bash
python scripts/pr-backlog-analyzer.py
```
This generates a report in `reports/pr-backlog-YYYYMMDD.md`.
### 2. Review Stale PRs
PRs older than 30 days are considered stale. For each stale PR:
1. **Check relevance**: Is the PR still needed?
2. **Check conflicts**: Does it conflict with current main?
3. **Check activity**: Has there been recent activity?
4. **Action**: Close, update, or merge
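
The four checks collapse into a simple decision rule. A sketch for illustration (the boolean inputs stand for a reviewer's manual judgment; they are not fields the analyzer emits):

```python
# Sketch of the stale-PR triage above. The flags are assumptions about
# what a reviewer determines by hand, not analyzer output.
def triage_stale_pr(relevant: bool, conflicts: bool, recently_active: bool) -> str:
    """Map the relevance/conflict/activity checks onto an action."""
    if not relevant:
        return "close"      # no longer needed
    if conflicts or not recently_active:
        return "update"     # rebase against main or ping the author first
    return "merge"          # still wanted, clean, and active

print(triage_stale_pr(relevant=False, conflicts=False, recently_active=True))  # → close
print(triage_stale_pr(relevant=True, conflicts=True, recently_active=True))    # → update
```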
### 3. Merge Approved PRs
PRs with approvals should be merged within 7 days:
1. **Verify CI**: Ensure all checks pass
2. **Verify review**: At least 1 approval
3. **Merge**: Use squash merge for clean history
4. **Delete branch**: Clean up after merge
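
Steps 3 and 4 can also be scripted against the Gitea API. A sketch: the merge endpoint and the `Do` / `delete_branch_after_merge` fields follow Gitea's documented merge options, but verify them against your forge version before automating:

```python
# Sketch of a squash merge via the Gitea API (fields per Gitea's
# swagger docs; verify against your forge version).
FORGE = "https://forge.alexanderwhitestone.com/api/v1"

def build_squash_merge(repo: str, index: int) -> tuple[str, dict]:
    """Build the URL and payload for squash-merging PR `index`."""
    url = f"{FORGE}/repos/{repo}/pulls/{index}/merge"
    # delete_branch_after_merge covers step 4 (branch cleanup).
    payload = {"Do": "squash", "delete_branch_after_merge": True}
    return url, payload

url, payload = build_squash_merge("Timmy_Foundation/timmy-config", 123)
print(url)
# → https://forge.alexanderwhitestone.com/api/v1/repos/Timmy_Foundation/timmy-config/pulls/123/merge
# Then, with a token:
#   import requests
#   requests.post(url, json=payload,
#                 headers={"Authorization": f"token {token}"}, timeout=30)
```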
### 4. Review Pending PRs
PRs waiting for review should be reviewed within 48 hours:
1. **Assign reviewer**: Ensure someone is responsible
2. **Review**: Check code quality, tests, documentation
3. **Approve or request changes**: Don't leave PRs in limbo
4. **Follow up**: If no response in 48 hours, escalate
### 5. Close Duplicate PRs
Multiple PRs for the same issue should be consolidated:
1. **Identify duplicates**: Same issue number or similar changes
2. **Keep newest**: Usually the most up-to-date
3. **Close older**: With explanatory comments
4. **Document**: Update issue with which PR was kept
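
Duplicate detection can be sketched as grouping open PRs by the issue number referenced in their titles and keeping the newest of each group (an illustrative helper, not part of the analyzer):

```python
import re
from collections import defaultdict

# Sketch of the consolidation steps above: group PRs by issue reference
# in the title, keep the newest, mark the rest for closing.
def find_duplicates(prs: list[dict]) -> dict[str, dict]:
    """Map issue number -> {'keep': newest PR, 'close': older PRs}."""
    by_issue = defaultdict(list)
    for pr in prs:
        m = re.search(r"#(\d+)", pr["title"])
        if m:
            by_issue[m.group(1)].append(pr)
    plan = {}
    for issue, group in by_issue.items():
        if len(group) > 1:
            # ISO dates sort lexicographically, so newest first.
            group.sort(key=lambda p: p["created_at"], reverse=True)
            plan[issue] = {"keep": group[0], "close": group[1:]}
    return plan

prs = [
    {"number": 10, "title": "fix: #42 retry logic", "created_at": "2026-04-01"},
    {"number": 15, "title": "fix: #42 retry logic v2", "created_at": "2026-04-10"},
]
print(find_duplicates(prs)["42"]["keep"]["number"])  # → 15
```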
## Automation
### GitHub Actions Workflow
The `pr-backlog-management.yml` workflow runs weekly to:
1. Analyze all open PRs
2. Generate a report
3. Create an issue if backlog is high (>10 stale PRs)
### Manual Trigger
The workflow can be triggered manually via GitHub Actions UI.
## Metrics
Track these metrics weekly:
- **Total open PRs**: Should be <20 per repo
- **Stale PRs**: Should be <5 per repo
- **Average PR age**: Should be <14 days
- **Time to review**: Should be <48 hours
- **Time to merge**: Should be <7 days after approval
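
Given the analyzer's per-PR output (each entry carries an `age_days` field), the first three metrics can be computed directly; a sketch (time-to-review and time-to-merge need review timestamps the analyzer does not currently collect):

```python
# Sketch computing the first three metrics above from analyzer output.
def backlog_metrics(prs: list[dict]) -> dict:
    """Total open, stale count, and average age from 'age_days' fields."""
    ages = [p["age_days"] for p in prs]
    return {
        "total_open": len(prs),
        "stale": sum(1 for a in ages if a > 30),
        "avg_age_days": round(sum(ages) / len(ages), 1) if ages else 0.0,
    }

print(backlog_metrics([{"age_days": 2}, {"age_days": 40}, {"age_days": 12}]))
# → {'total_open': 3, 'stale': 1, 'avg_age_days': 18.0}
```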
## Escalation
If backlog exceeds thresholds:
1. **Level 1**: Automated issue created
2. **Level 2**: Team lead notified
3. **Level 3**: Organization-wide cleanup sprint
## Tools
### PR Backlog Analyzer
```bash
# Run analysis
python scripts/pr-backlog-analyzer.py
# View report
cat reports/pr-backlog-$(date +%Y%m%d).md
```
### Manual Cleanup
```bash
# List stale PRs
curl -s -H "Authorization: token $GITEA_TOKEN" "https://forge.alexanderwhitestone.com/api/v1/repos/Timmy_Foundation/timmy-config/pulls?state=open" | jq -r '.[] | select(.created_at < "'$(date -u -d '30 days ago' +%Y-%m-%dT%H:%M:%SZ)'") | .number'
# Close a PR
curl -s -X PATCH -H "Authorization: token $GITEA_TOKEN" -H "Content-Type: application/json" -d '{"state": "closed"}' "https://forge.alexanderwhitestone.com/api/v1/repos/Timmy_Foundation/timmy-config/pulls/123"
```
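
For anything beyond a one-liner, the same stale filter is easier to maintain in Python. A sketch; the fetch itself is shown commented with `requests` (which the workflow installs), applied to already-fetched PR JSON:

```python
from datetime import datetime, timedelta, timezone

# Python sketch of the stale-PR listing above (same filter as the
# curl/jq one-liner, applied to fetched PR JSON).
def stale_pr_numbers(prs: list[dict], days: int = 30) -> list[int]:
    """Return numbers of PRs created more than `days` days ago."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    return [
        pr["number"] for pr in prs
        if datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00")) < cutoff
    ]

print(stale_pr_numbers([{"number": 7, "created_at": "2020-01-01T00:00:00Z"}]))  # → [7]

# With requests and a token:
#   import requests
#   prs = requests.get(
#       "https://forge.alexanderwhitestone.com/api/v1/repos/"
#       "Timmy_Foundation/timmy-config/pulls",
#       params={"state": "open", "limit": 100},
#       headers={"Authorization": f"token {token}"}, timeout=30,
#   ).json()
```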
## Success Criteria
- **Short-term**: Reduce timmy-config PRs from 31 to <20
- **Medium-term**: Maintain <15 open PRs across all repos
- **Long-term**: Automated PR lifecycle management
## Related
- Issue #1470: process: Address timmy-config PR backlog (9 PRs - highest in org)
- Issue #1127: Evening triage pass
- Issue #1128: Forge Cleanup


@@ -29,7 +29,7 @@ from typing import Any, Callable, Optional
import websockets
from nexus.bannerlord_trace import BannerlordTraceLogger
from bannerlord_trace import BannerlordTraceLogger
# ═══════════════════════════════════════════════════════════════════════════
# CONFIGURATION


@@ -181,63 +181,6 @@ async def live_bridge(log_dir: str, ws_url: str, reconnect_delay: float = 5.0):
        await asyncio.gather(*tasks)


def clean_lines(text: str) -> list[str]:
    """Strip ANSI, normalize line endings, return non-empty lines."""
    text = strip_ansi(text).replace("\r", "")
    return [line.strip() for line in text.split("\n") if line.strip()]


def parse_room_output(text: str) -> dict:
    """Parse Evennia room text into structured data (title, desc, exits, objects)."""
    lines = clean_lines(text)
    if len(lines) < 2:
        return {"title": lines[0] if lines else "", "desc": "", "exits": [], "objects": []}
    title = lines[0]
    desc = lines[1]
    exits = []
    objects = []
    for line in lines[2:]:
        if line.startswith("Exits:"):
            raw = line.split(":", 1)[1].strip().replace(" and ", ", ")
            exits = [{"key": t.strip(), "destination_id": t.strip().title(), "destination_key": t.strip().title()} for t in raw.split(",") if t.strip()]
        elif line.startswith("You see:"):
            raw = line.split(":", 1)[1].strip().replace(" and ", ", ")
            parts = [t.strip() for t in raw.split(",") if t.strip()]
            objects = [{"id": p.removeprefix("a ").removeprefix("an "), "key": p.removeprefix("a ").removeprefix("an "), "short_desc": p} for p in parts]
    return {"title": title, "desc": desc, "exits": exits, "objects": objects}


def normalize_event(raw: dict, hermes_session_id: str) -> list[dict]:
    """Convert raw Evennia event dict into normalized Nexus events."""
    from nexus.evennia_event_adapter import (
        actor_located, command_issued, command_result,
        room_snapshot, session_bound,
    )
    out = []
    event = raw.get("event")
    actor = raw.get("actor", "Timmy")
    timestamp = raw.get("timestamp")
    if event == "connect":
        out.append(session_bound(hermes_session_id, evennia_account=actor, evennia_character=actor, timestamp=timestamp))
        parsed = parse_room_output(raw.get("output", ""))
        if parsed:
            out.append(actor_located(actor, parsed["title"], parsed["title"], timestamp=timestamp))
            out.append(room_snapshot(parsed["title"], parsed["title"], parsed["desc"], exits=parsed["exits"], objects=parsed["objects"], timestamp=timestamp))
    elif event == "command":
        cmd = raw.get("command", "")
        output = raw.get("output", "")
        out.append(command_issued(hermes_session_id, actor, cmd, timestamp=timestamp))
        success = not output.startswith("Command '") and not output.startswith("Could not find")
        out.append(command_result(hermes_session_id, actor, cmd, strip_ansi(output), success=success, timestamp=timestamp))
        parsed = parse_room_output(output)
        if parsed:
            out.append(actor_located(actor, parsed["title"], parsed["title"], timestamp=timestamp))
            out.append(room_snapshot(parsed["title"], parsed["title"], parsed["desc"], exits=parsed["exits"], objects=parsed["objects"], timestamp=timestamp))
    return out


async def playback(log_path: Path, ws_url: str):
    """Legacy mode: replay a telemetry JSONL file."""
    from nexus.evennia_event_adapter import (

reports/pr-backlog-20260414.md Normal file

@@ -0,0 +1,35 @@
# PR Backlog Report — Timmy_Foundation/timmy-config
Generated: 2026-04-14 21:13:34
## Summary
- **Total Open PRs**: 32
- **Stale (>30 days)**: 0
- **Needs Review**: 0
- **Approved**: 0
- **Changes Requested**: 0
- **Recent (<7 days)**: 32
## Recommendations
### Immediate Actions
1. **Merge approved PRs**: 0 PRs are ready to merge
2. **Review stale PRs**: 0 PRs are >30 days old
3. **Address changes requested**: 0 PRs need updates
### Process Improvements
1. **Assign reviewers**: Ensure each PR has a reviewer within 24 hours
2. **Set SLAs**:
- Review within 48 hours
- Merge within 7 days of approval
- Close stale PRs after 30 days
3. **Automate**: Add CI checks to prevent backlog
## Detailed Analysis
### Stale PRs (>30 days)
### Approved PRs (Ready to Merge)
### Needs Review

scripts/pr-backlog-analyzer.py Executable file

@@ -0,0 +1,181 @@
#!/usr/bin/env python3
"""
PR Backlog Analyzer for timmy-config
Analyzes open PRs and provides recommendations for cleanup.
"""
import json
import subprocess
import sys
from datetime import datetime, timedelta
from pathlib import Path


def get_open_prs(repo: str, token: str) -> list:
    """Get all open PRs from a repository."""
    result = subprocess.run([
        "curl", "-s", "-H", f"Authorization: token {token}",
        f"https://forge.alexanderwhitestone.com/api/v1/repos/{repo}/pulls?state=open&limit=100"
    ], capture_output=True, text=True)
    if result.returncode != 0:
        print(f"Error fetching PRs: {result.stderr}")
        return []
    return json.loads(result.stdout)


def analyze_pr(pr: dict) -> dict:
    """Analyze a single PR."""
    created = datetime.fromisoformat(pr['created_at'].replace('Z', '+00:00'))
    age_days = (datetime.now(created.tzinfo) - created).days
    # Check for reviews.
    # NOTE: the pull-list response may not include a 'reviews' field; if it
    # is absent, every PR falls through to the age-based buckets below.
    reviews = pr.get('reviews', [])
    has_approvals = any(r.get('state') == 'APPROVED' for r in reviews)
    has_changes_requested = any(r.get('state') == 'CHANGES_REQUESTED' for r in reviews)
    # Check labels
    labels = [l['name'] for l in pr.get('labels', [])]
    return {
        'number': pr['number'],
        'title': pr['title'],
        'branch': pr['head']['ref'],
        'created': pr['created_at'],
        'age_days': age_days,
        'user': pr['user']['login'],
        'has_approvals': has_approvals,
        'has_changes_requested': has_changes_requested,
        'labels': labels,
        'url': pr['html_url'],
    }


def categorize_prs(prs: list) -> dict:
    """Categorize PRs by status."""
    categories = {
        'stale': [],              # > 30 days old
        'needs_review': [],       # No reviews
        'approved': [],           # Approved but not merged
        'changes_requested': [],  # Changes requested
        'recent': [],             # < 7 days old
    }
    for pr in prs:
        if pr['age_days'] > 30:
            categories['stale'].append(pr)
        elif pr['has_approvals']:
            categories['approved'].append(pr)
        elif pr['has_changes_requested']:
            categories['changes_requested'].append(pr)
        elif pr['age_days'] < 7:
            categories['recent'].append(pr)
        else:
            categories['needs_review'].append(pr)
    return categories


def generate_report(repo: str, prs: list, categories: dict) -> str:
    """Generate a markdown report."""
    report = f"""# PR Backlog Report — {repo}
Generated: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}
## Summary
- **Total Open PRs**: {len(prs)}
- **Stale (>30 days)**: {len(categories['stale'])}
- **Needs Review**: {len(categories['needs_review'])}
- **Approved**: {len(categories['approved'])}
- **Changes Requested**: {len(categories['changes_requested'])}
- **Recent (<7 days)**: {len(categories['recent'])}
## Recommendations
### Immediate Actions
1. **Merge approved PRs**: {len(categories['approved'])} PRs are ready to merge
2. **Review stale PRs**: {len(categories['stale'])} PRs are >30 days old
3. **Address changes requested**: {len(categories['changes_requested'])} PRs need updates
### Process Improvements
1. **Assign reviewers**: Ensure each PR has a reviewer within 24 hours
2. **Set SLAs**:
   - Review within 48 hours
   - Merge within 7 days of approval
   - Close stale PRs after 30 days
3. **Automate**: Add CI checks to prevent backlog
## Detailed Analysis
### Stale PRs (>30 days)
"""
    for pr in categories['stale']:
        report += f"- **#{pr['number']}**: {pr['title']}\n"
        report += f"  - Age: {pr['age_days']} days\n"
        report += f"  - Author: {pr['user']}\n"
        report += f"  - URL: {pr['url']}\n\n"
    report += "\n### Approved PRs (Ready to Merge)\n"
    for pr in categories['approved']:
        report += f"- **#{pr['number']}**: {pr['title']}\n"
        report += f"  - Age: {pr['age_days']} days\n"
        report += f"  - Author: {pr['user']}\n"
        report += f"  - URL: {pr['url']}\n\n"
    report += "\n### Needs Review\n"
    for pr in categories['needs_review']:
        report += f"- **#{pr['number']}**: {pr['title']}\n"
        report += f"  - Age: {pr['age_days']} days\n"
        report += f"  - Author: {pr['user']}\n"
        report += f"  - URL: {pr['url']}\n\n"
    return report


def main():
    """Main function."""
    token = Path.home() / '.config' / 'gitea' / 'token'
    if not token.exists():
        print("Error: Gitea token not found")
        sys.exit(1)
    token_str = token.read_text().strip()
    repo = "Timmy_Foundation/timmy-config"
    print(f"Fetching PRs for {repo}...")
    prs = get_open_prs(repo, token_str)
    if not prs:
        print("No open PRs found")
        return
    print(f"Found {len(prs)} open PRs")
    # Analyze PRs
    analyzed = [analyze_pr(pr) for pr in prs]
    categories = categorize_prs(analyzed)
    # Generate report
    report = generate_report(repo, analyzed, categories)
    # Save report
    output_dir = Path("reports")
    output_dir.mkdir(exist_ok=True)
    report_file = output_dir / f"pr-backlog-{datetime.now().strftime('%Y%m%d')}.md"
    report_file.write_text(report)
    print(f"\nReport saved to: {report_file}")
    print("\nSummary:")
    print(f"  Total PRs: {len(prs)}")
    print(f"  Stale: {len(categories['stale'])}")
    print(f"  Approved: {len(categories['approved'])}")
    print(f"  Needs Review: {len(categories['needs_review'])}")


if __name__ == "__main__":
    main()