Compare commits (1 commit): fix/issue-... → burn/1500-...

Commit: 2b9672d82b

docs/duplicate-pr-prevention.md (new file, 89 lines)

@@ -0,0 +1,89 @@

# Duplicate PR Prevention System

## Overview

The Nexus uses a multi-layer system to prevent duplicate PRs for the same issue.

## Components

### 1. Pre-flight Check (CI)

The `.github/workflows/pr-duplicate-check.yml` workflow runs on every PR creation and checks whether a PR already exists for the same issue.

**How it works:**

1. Extracts issue numbers from the PR title and body
2. Queries the Gitea API for existing PRs referencing those issues
3. Fails the check if duplicates are found
4. Provides links to the existing PRs for review
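
The extract-and-query steps above can be sketched in Python. This is a hypothetical illustration, not the workflow's actual code: the `extract_issue_numbers` and `find_duplicates` helpers, and the plain `#NNN` reference format, are assumptions.

```python
import re

# Hypothetical sketch of the pre-flight logic; the real check lives in
# .github/workflows/pr-duplicate-check.yml and may differ.
ISSUE_RE = re.compile(r"#(\d+)")

def extract_issue_numbers(title: str, body: str) -> set[int]:
    """Step 1: collect every #NNN reference from a PR's title and body."""
    return {int(n) for n in ISSUE_RE.findall(f"{title}\n{body}")}

def find_duplicates(issues: set[int], open_prs: list[dict]) -> list[dict]:
    """Steps 2-3: return open PRs whose body references any of the issues."""
    return [
        pr for pr in open_prs
        if any(f"#{n}" in (pr.get("body") or "") for n in issues)
    ]
```

Note the naive substring match: `#147` would also match inside `#1474`, so a production check would anchor on word boundaries.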

### 2. Cleanup Script

The `scripts/cleanup-duplicate-prs.sh` script helps clean up existing duplicates:

- Lists all PRs for a given issue
- Identifies duplicates
- Provides commands to close duplicates
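
The "keep newest" selection the cleanup flow relies on can be sketched as follows. `prs_to_close` is a hypothetical helper, and treating the highest PR number as the newest is an assumption (the actual script is shell, not Python):

```python
def prs_to_close(prs: list[dict]) -> list[dict]:
    """Given all open PRs for one issue, keep the newest (highest PR
    number, assumed most recent) and return the rest as close candidates."""
    ranked = sorted(prs, key=lambda pr: pr["number"], reverse=True)
    return ranked[1:]  # everything except the newest
```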

### 3. Milestone Checker

The `bin/check_duplicate_milestones.py` script prevents duplicate milestones:

- Scans all milestones in the repo
- Identifies duplicates by title
- Reports them for manual cleanup
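
Duplicate-by-title detection can be sketched like this; the `duplicate_milestones` helper is hypothetical, and the actual `bin/check_duplicate_milestones.py` may differ in detail:

```python
from collections import defaultdict

def duplicate_milestones(milestones: list[dict]) -> dict[str, list[int]]:
    """Group milestone IDs by (whitespace-trimmed) title and report any
    title carried by more than one milestone."""
    by_title: dict[str, list[int]] = defaultdict(list)
    for m in milestones:
        by_title[m["title"].strip()].append(m["id"])
    return {t: ids for t, ids in by_title.items() if len(ids) > 1}
```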

## Usage

### Check for Duplicates Before Creating PR

```bash
# Check if issue already has PRs (treat a null body as empty so jq doesn't error)
curl -s -H "Authorization: token $GITEA_TOKEN" \
  "https://forge.alexanderwhitestone.com/api/v1/repos/Timmy_Foundation/the-nexus/pulls?state=open" \
  | jq '.[] | select((.body // "") | contains("#ISSUE_NUMBER"))'
```

### Clean Up Existing Duplicates

```bash
# List PRs for issue
./scripts/cleanup-duplicate-prs.sh --issue 1128

# Close duplicates (keep newest)
./scripts/cleanup-duplicate-prs.sh --issue 1128 --close-duplicates
```

## Example: Issue #1500

Issue #1500 documented that the pre-flight check successfully prevented a duplicate PR for #1474.

**What happened:**

1. Dispatch attempted to work on #1474
2. Pre-flight check found 2 existing PRs (#1495, #1493)
3. System prevented creating a 3rd duplicate
4. Issue #1500 was filed as an observation

**Result:** The system worked as intended.

## Best Practices

1. **Always check before creating PRs** — use the pre-flight check
2. **Close duplicates promptly** — don't let them accumulate
3. **Reference issues in PRs** — makes duplicate detection possible
4. **Use descriptive branch names** — helps identify purpose
5. **Review existing PRs first** — don't assume you're the first

## Troubleshooting

### "Duplicate PR detected" error

This means a PR already exists for the issue. Options:

1. Review the existing PR and contribute to it
2. Close your PR if it's truly a duplicate
3. Update your PR to address a different aspect

### Pre-flight check not running

Check that `.github/workflows/pr-duplicate-check.yml` exists and is enabled.

### False positives

The check looks for issue numbers in the PR body. If you're referencing an issue without intending to fix it, use "Refs #" instead of "Fixes #".
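
The distinction matters because closing keywords are what bind a PR to an issue. A sketch of detection that honors only closing keywords (hypothetical; the keyword list is an assumption based on common Gitea/GitHub conventions, and the actual check may differ):

```python
import re

# Matches closing keywords such as "Fixes #123" but not plain "Refs #123".
# Keyword list is assumed, not taken from the actual workflow.
CLOSING_RE = re.compile(
    r"\b(?:close[sd]?|fix(?:e[sd])?|resolve[sd]?)\s+#(\d+)", re.IGNORECASE
)

def closing_issue_numbers(body: str) -> set[int]:
    """Return issue numbers the PR body claims to fix/close/resolve."""
    return {int(n) for n in CLOSING_RE.findall(body or "")}
```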

@@ -29,7 +29,7 @@ from typing import Any, Callable, Optional

import websockets

from nexus.bannerlord_trace import BannerlordTraceLogger
from bannerlord_trace import BannerlordTraceLogger

# ═══════════════════════════════════════════════════════════════════════════
# CONFIGURATION

@@ -49,62 +49,6 @@ def strip_ansi(text: str) -> str:
    return ANSI_RE.sub("", text or "")


def clean_lines(text: str) -> list[str]:
    """Clean ANSI codes and split text into non-empty lines."""
    text = strip_ansi(text).replace("\r", "")
    return [line.strip() for line in text.split("\n") if line.strip()]


def parse_room_output(text: str) -> dict | None:
    """Parse Evennia room output into structured data."""
    lines = clean_lines(text)
    if len(lines) < 2:
        return None
    title = lines[0]
    desc = lines[1]
    exits = []
    objects = []
    for line in lines[2:]:
        if line.startswith("Exits:"):
            raw = line.split(":", 1)[1].strip().replace(" and ", ", ")
            exits = [{"key": t.strip(), "destination_id": t.strip().title(), "destination_key": t.strip().title()} for t in raw.split(",") if t.strip()]
        elif line.startswith("You see:"):
            raw = line.split(":", 1)[1].strip().replace(" and ", ", ")
            parts = [t.strip() for t in raw.split(",") if t.strip()]
            objects = [{"id": p.removeprefix("a ").removeprefix("an "), "key": p.removeprefix("a ").removeprefix("an "), "short_desc": p} for p in parts]
    return {"title": title, "desc": desc, "exits": exits, "objects": objects}


def normalize_event(raw: dict, hermes_session_id: str) -> list[dict]:
    """Normalize raw Evennia event into Nexus event format."""
    from nexus.evennia_event_adapter import (
        actor_located, command_issued, command_result,
        room_snapshot, session_bound,
    )

    out = []
    event = raw.get("event")
    actor = raw.get("actor", "Timmy")
    timestamp = raw.get("timestamp")
    if event == "connect":
        out.append(session_bound(hermes_session_id, evennia_account=actor, evennia_character=actor, timestamp=timestamp))
        parsed = parse_room_output(raw.get("output", ""))
        if parsed:
            out.append(actor_located(actor, parsed["title"], parsed["title"], timestamp=timestamp))
            out.append(room_snapshot(parsed["title"], parsed["title"], parsed["desc"], exits=parsed["exits"], objects=parsed["objects"], timestamp=timestamp))
    elif event == "command":
        cmd = raw.get("command", "")
        output = raw.get("output", "")
        out.append(command_issued(hermes_session_id, actor, cmd, timestamp=timestamp))
        success = not output.startswith("Command '") and not output.startswith("Could not find")
        out.append(command_result(hermes_session_id, actor, cmd, strip_ansi(output), success=success, timestamp=timestamp))
        parsed = parse_room_output(output)
        if parsed:
            out.append(actor_located(actor, parsed["title"], parsed["title"], timestamp=timestamp))
            out.append(room_snapshot(parsed["title"], parsed["title"], parsed["desc"], exits=parsed["exits"], objects=parsed["objects"], timestamp=timestamp))
    return out


class LogTailer:
    """Async file tailer that yields new lines as they appear."""

@@ -239,6 +183,56 @@ async def live_bridge(log_dir: str, ws_url: str, reconnect_delay: float = 5.0):

async def playback(log_path: Path, ws_url: str):
    """Legacy mode: replay a telemetry JSONL file."""
    from nexus.evennia_event_adapter import (
        actor_located, command_issued, command_result,
        room_snapshot, session_bound,
    )

    def clean_lines(text: str) -> list[str]:
        text = strip_ansi(text).replace("\r", "")
        return [line.strip() for line in text.split("\n") if line.strip()]

    def parse_room_output(text: str):
        lines = clean_lines(text)
        if len(lines) < 2:
            return None
        title = lines[0]
        desc = lines[1]
        exits = []
        objects = []
        for line in lines[2:]:
            if line.startswith("Exits:"):
                raw = line.split(":", 1)[1].strip().replace(" and ", ", ")
                exits = [{"key": t.strip(), "destination_id": t.strip().title(), "destination_key": t.strip().title()} for t in raw.split(",") if t.strip()]
            elif line.startswith("You see:"):
                raw = line.split(":", 1)[1].strip().replace(" and ", ", ")
                parts = [t.strip() for t in raw.split(",") if t.strip()]
                objects = [{"id": p.removeprefix("a ").removeprefix("an "), "key": p.removeprefix("a ").removeprefix("an "), "short_desc": p} for p in parts]
        return {"title": title, "desc": desc, "exits": exits, "objects": objects}

    def normalize_event(raw: dict, hermes_session_id: str) -> list[dict]:
        out = []
        event = raw.get("event")
        actor = raw.get("actor", "Timmy")
        timestamp = raw.get("timestamp")
        if event == "connect":
            out.append(session_bound(hermes_session_id, evennia_account=actor, evennia_character=actor, timestamp=timestamp))
            parsed = parse_room_output(raw.get("output", ""))
            if parsed:
                out.append(actor_located(actor, parsed["title"], parsed["title"], timestamp=timestamp))
                out.append(room_snapshot(parsed["title"], parsed["title"], parsed["desc"], exits=parsed["exits"], objects=parsed["objects"], timestamp=timestamp))
        elif event == "command":
            cmd = raw.get("command", "")
            output = raw.get("output", "")
            out.append(command_issued(hermes_session_id, actor, cmd, timestamp=timestamp))
            success = not output.startswith("Command '") and not output.startswith("Could not find")
            out.append(command_result(hermes_session_id, actor, cmd, strip_ansi(output), success=success, timestamp=timestamp))
            parsed = parse_room_output(output)
            if parsed:
                out.append(actor_located(actor, parsed["title"], parsed["title"], timestamp=timestamp))
                out.append(room_snapshot(parsed["title"], parsed["title"], parsed["desc"], exits=parsed["exits"], objects=parsed["objects"], timestamp=timestamp))
        return out

    hermes_session_id = log_path.stem
    async with websockets.connect(ws_url) as ws:
        for line in log_path.read_text(encoding="utf-8").splitlines():