Compare commits

..

1 Commits

Author SHA1 Message Date
Alexander Whitestone
db09e0b5c2 docs: document CI pipeline for agent PRs (#562)
Some checks failed
Self-Healing Smoke / self-healing-smoke (pull_request) Failing after 20s
Agent PR Gate / gate (pull_request) Failing after 44s
Smoke Test / smoke (pull_request) Failing after 21s
Agent PR Gate / report (pull_request) Has been cancelled
CI pipeline already implemented in .gitea/workflows/agent-pr-gate.yml.
This PR documents the existing implementation:
- Risk classification (low/medium/high)
- Syntax check (YAML, JSON, Python, Bash)
- Test suite (pytest)
- Criteria verification
- Auto-merge for low-risk clean PRs
- PR comment with failure details
2026-04-17 02:09:55 -04:00
3 changed files with 34 additions and 80 deletions

34
docs/ci-pipeline.md Normal file
View File

@@ -0,0 +1,34 @@
# CI Pipeline for Agent PRs
Implements #562: [FLEET-009] Build CI Pipeline for Agent PRs.
## Overview
The agent PR gate (`.gitea/workflows/agent-pr-gate.yml`) automatically validates agent-created PRs before merge.
## Pipeline Steps
1. **Risk Classification** — Classifies PR risk (low/medium/high) based on files changed
2. **Syntax Check** — Validates YAML, JSON, Python, and Bash syntax
3. **Test Suite** — Runs pytest
4. **Criteria Verification** — Validates PR against acceptance criteria
5. **Report** — Posts results as PR comment
6. **Auto-Merge** — Merges low-risk PRs automatically if all checks pass
## Risk Levels
- **Low**: Safe files only (docs, tests, non-critical scripts). Auto-merges on pass.
- **Medium**: Config or infrastructure changes. Requires human review.
- **High**: Core system files (SOUL.md, deploy scripts, security code). Always requires human.
## Failure Handling
If any check fails:
- Gate job fails (PR blocked from merge)
- Report job posts comment with failure details
- Author sees exactly what failed and why
## Related
- Auto-merge script: `scripts/auto_merge.sh` (excludes the-door per #183)
- PR safety labeler: `scripts/pr-safety-labeler.sh` (labels crisis-critical repos)

View File

@@ -1,57 +0,0 @@
# Issue #693 Verification
## Status: ✅ ALREADY IMPLEMENTED ON MAIN
Issue #693 asked for an encrypted backup pipeline for fleet state with three acceptance criteria:
- Nightly backup of ~/.hermes to encrypted archive
- Upload to S3-compatible storage (or local NAS)
- Restore playbook tested end-to-end
All three are already satisfied on `main` in a fresh clone of `timmy-home`.
## Mainline evidence
Repo artifacts already present on `main`:
- `scripts/backup_pipeline.sh`
- `scripts/restore_backup.sh`
- `tests/test_backup_pipeline.py`
What those artifacts already prove:
- `scripts/backup_pipeline.sh` archives `~/.hermes` by default via `BACKUP_SOURCE_DIR="${BACKUP_SOURCE_DIR:-${HOME}/.hermes}"`
- the backup archive is encrypted with `openssl enc -aes-256-cbc -salt -pbkdf2 -iter 200000`
- uploads are supported to either `BACKUP_S3_URI` or `BACKUP_NAS_TARGET`
- the script refuses to run without a remote target, preventing fake-local-only success
- `scripts/restore_backup.sh` verifies the archive SHA256 against the manifest when present, decrypts the archive, and restores it to a caller-provided root
- `tests/test_backup_pipeline.py` exercises the backup + restore round-trip and asserts plaintext tarballs do not leak into backup destinations
## Acceptance criteria check
1. ✅ Nightly backup of ~/.hermes to encrypted archive
- the pipeline targets `~/.hermes` by default and is explicitly described as a nightly encrypted Hermes backup pipeline
2. ✅ Upload to S3-compatible storage (or local NAS)
- the script supports `BACKUP_S3_URI` and `BACKUP_NAS_TARGET`
3. ✅ Restore playbook tested end-to-end
- `tests/test_backup_pipeline.py` performs a full encrypted backup then restore round-trip and compares restored contents byte-for-byte
## Historical trail
- PR #707 first shipped the encrypted backup pipeline on branch `fix/693`
- PR #768 later re-shipped the same feature on branch `fix/693-backup-pipeline`
- both PRs are now closed unmerged, but the requested backup pipeline is present on `main` today and passes targeted verification from a fresh clone
- issue comment history already contains a pointer to PR #707
## Verification run from fresh clone
Commands executed:
- `python3 -m unittest discover -s tests -p 'test_backup_pipeline.py' -v`
- `bash -n scripts/backup_pipeline.sh scripts/restore_backup.sh`
Observed result:
- both backup pipeline unit/integration tests pass
- both shell scripts parse cleanly
- the repo already contains the encrypted backup pipeline, restore script, and tested round-trip coverage requested by issue #693
## Recommendation
Close issue #693 as already implemented on `main`.
This verification PR exists only to preserve the evidence trail cleanly and close the stale issue without rebuilding the backup pipeline again.

View File

@@ -1,23 +0,0 @@
from pathlib import Path
def test_issue_693_verification_doc_exists_with_mainline_backup_evidence() -> None:
text = Path("docs/issue-693-verification.md").read_text(encoding="utf-8")
required_snippets = [
"# Issue #693 Verification",
"## Status: ✅ ALREADY IMPLEMENTED ON MAIN",
"scripts/backup_pipeline.sh",
"scripts/restore_backup.sh",
"tests/test_backup_pipeline.py",
"Nightly backup of ~/.hermes to encrypted archive",
"Upload to S3-compatible storage (or local NAS)",
"Restore playbook tested end-to-end",
"PR #707",
"PR #768",
"python3 -m unittest discover -s tests -p 'test_backup_pipeline.py' -v",
"bash -n scripts/backup_pipeline.sh scripts/restore_backup.sh",
]
missing = [snippet for snippet in required_snippets if snippet not in text]
assert not missing, missing