Compare commits
3 Commits
feat/888-a
...
claude/iss
| Author | SHA1 | Date | |
|---|---|---|---|
|
|
c9c3fc94f8 | ||
| 4532c123a0 | |||
|
|
69c6b18d22 |
13
.github/CODEOWNERS
vendored
Normal file
13
.github/CODEOWNERS
vendored
Normal file
@@ -0,0 +1,13 @@
|
|||||||
|
# Default owners for all files
|
||||||
|
* @Timmy
|
||||||
|
|
||||||
|
# Critical paths require explicit review
|
||||||
|
/gateway/ @Timmy
|
||||||
|
/tools/ @Timmy
|
||||||
|
/agent/ @Timmy
|
||||||
|
/config/ @Timmy
|
||||||
|
/scripts/ @Timmy
|
||||||
|
/.github/workflows/ @Timmy
|
||||||
|
/pyproject.toml @Timmy
|
||||||
|
/requirements.txt @Timmy
|
||||||
|
/Dockerfile @Timmy
|
||||||
99
.github/ISSUE_TEMPLATE/security_pr_checklist.yml
vendored
Normal file
99
.github/ISSUE_TEMPLATE/security_pr_checklist.yml
vendored
Normal file
@@ -0,0 +1,99 @@
|
|||||||
|
name: "🔒 Security PR Checklist"
|
||||||
|
description: "Use this when your PR touches authentication, file I/O, external API calls, or other sensitive paths."
|
||||||
|
title: "[Security Review]: "
|
||||||
|
labels: ["security", "needs-review"]
|
||||||
|
body:
|
||||||
|
- type: markdown
|
||||||
|
attributes:
|
||||||
|
value: |
|
||||||
|
## Security Pre-Merge Review
|
||||||
|
Complete this checklist before requesting review on PRs that touch **authentication, file I/O, external API calls, or secrets handling**.
|
||||||
|
|
||||||
|
- type: input
|
||||||
|
id: pr-link
|
||||||
|
attributes:
|
||||||
|
label: Pull Request
|
||||||
|
description: Link to the PR being reviewed
|
||||||
|
placeholder: "https://forge.alexanderwhitestone.com/Timmy_Foundation/hermes-agent/pulls/XXX"
|
||||||
|
validations:
|
||||||
|
required: true
|
||||||
|
|
||||||
|
- type: dropdown
|
||||||
|
id: change-type
|
||||||
|
attributes:
|
||||||
|
label: Change Category
|
||||||
|
description: What kind of sensitive change does this PR make?
|
||||||
|
multiple: true
|
||||||
|
options:
|
||||||
|
- Authentication / Authorization
|
||||||
|
- File I/O (read/write/delete)
|
||||||
|
- External API calls (outbound HTTP/network)
|
||||||
|
- Secret / credential handling
|
||||||
|
- Command execution (subprocess/shell)
|
||||||
|
- Dependency addition or update
|
||||||
|
- Configuration changes
|
||||||
|
- CI/CD pipeline changes
|
||||||
|
validations:
|
||||||
|
required: true
|
||||||
|
|
||||||
|
- type: checkboxes
|
||||||
|
id: secrets-checklist
|
||||||
|
attributes:
|
||||||
|
label: Secrets & Credentials
|
||||||
|
options:
|
||||||
|
- label: No secrets, API keys, or credentials are hardcoded
|
||||||
|
required: true
|
||||||
|
- label: All sensitive values are loaded from environment variables or a secrets manager
|
||||||
|
required: true
|
||||||
|
- label: Test fixtures use fake/placeholder values, not real credentials
|
||||||
|
required: true
|
||||||
|
|
||||||
|
- type: checkboxes
|
||||||
|
id: input-validation-checklist
|
||||||
|
attributes:
|
||||||
|
label: Input Validation
|
||||||
|
options:
|
||||||
|
- label: All external input (user, API, file) is validated before use
|
||||||
|
required: true
|
||||||
|
- label: File paths are validated against path traversal (`../`, null bytes, absolute paths)
|
||||||
|
- label: URLs are validated for SSRF (blocked private/metadata IPs)
|
||||||
|
- label: Shell commands do not use `shell=True` with user-controlled input
|
||||||
|
|
||||||
|
- type: checkboxes
|
||||||
|
id: auth-checklist
|
||||||
|
attributes:
|
||||||
|
label: Authentication & Authorization (if applicable)
|
||||||
|
options:
|
||||||
|
- label: Authentication tokens are not logged or exposed in error messages
|
||||||
|
- label: Authorization checks happen server-side, not just client-side
|
||||||
|
- label: Session tokens are properly scoped and have expiry
|
||||||
|
|
||||||
|
- type: checkboxes
|
||||||
|
id: supply-chain-checklist
|
||||||
|
attributes:
|
||||||
|
label: Supply Chain
|
||||||
|
options:
|
||||||
|
- label: New dependencies are pinned to a specific version range
|
||||||
|
- label: Dependencies come from trusted sources (PyPI, npm, official repos)
|
||||||
|
- label: No `.pth` files or install hooks that execute arbitrary code
|
||||||
|
- label: "`pip-audit` passes (no known CVEs in added dependencies)"
|
||||||
|
|
||||||
|
- type: textarea
|
||||||
|
id: threat-model
|
||||||
|
attributes:
|
||||||
|
label: Threat Model Notes
|
||||||
|
description: |
|
||||||
|
Briefly describe the attack surface this change introduces or modifies, and how it is mitigated.
|
||||||
|
placeholder: |
|
||||||
|
This PR adds a new outbound HTTP call to the OpenRouter API.
|
||||||
|
Mitigation: URL is hardcoded (no user input), response is parsed with strict schema validation.
|
||||||
|
|
||||||
|
- type: textarea
|
||||||
|
id: testing
|
||||||
|
attributes:
|
||||||
|
label: Security Testing Done
|
||||||
|
description: What security testing did you perform?
|
||||||
|
placeholder: |
|
||||||
|
- Ran validate_security.py — all checks pass
|
||||||
|
- Tested path traversal attempts manually
|
||||||
|
- Verified no secrets in git diff
|
||||||
82
.github/workflows/dependency-audit.yml
vendored
Normal file
82
.github/workflows/dependency-audit.yml
vendored
Normal file
@@ -0,0 +1,82 @@
|
|||||||
|
name: Dependency Audit
|
||||||
|
|
||||||
|
on:
|
||||||
|
pull_request:
|
||||||
|
branches: [main]
|
||||||
|
paths:
|
||||||
|
- 'requirements.txt'
|
||||||
|
- 'pyproject.toml'
|
||||||
|
- 'uv.lock'
|
||||||
|
schedule:
|
||||||
|
- cron: '0 8 * * 1' # Weekly on Monday
|
||||||
|
workflow_dispatch:
|
||||||
|
|
||||||
|
permissions:
|
||||||
|
pull-requests: write
|
||||||
|
contents: read
|
||||||
|
|
||||||
|
jobs:
|
||||||
|
audit:
|
||||||
|
name: Audit Python dependencies
|
||||||
|
runs-on: ubuntu-latest
|
||||||
|
steps:
|
||||||
|
- uses: actions/checkout@v4
|
||||||
|
- uses: astral-sh/setup-uv@v5
|
||||||
|
- name: Set up Python
|
||||||
|
run: uv python install 3.11
|
||||||
|
- name: Install pip-audit
|
||||||
|
run: uv pip install --system pip-audit
|
||||||
|
- name: Run pip-audit
|
||||||
|
id: audit
|
||||||
|
run: |
|
||||||
|
set -euo pipefail
|
||||||
|
# Run pip-audit against the lock file/requirements
|
||||||
|
if pip-audit --requirement requirements.txt -f json -o /tmp/audit-results.json 2>/tmp/audit-stderr.txt; then
|
||||||
|
echo "found=false" >> "$GITHUB_OUTPUT"
|
||||||
|
else
|
||||||
|
echo "found=true" >> "$GITHUB_OUTPUT"
|
||||||
|
# Check severity
|
||||||
|
CRITICAL=$(python3 -c "
|
||||||
|
import json, sys
|
||||||
|
data = json.load(open('/tmp/audit-results.json'))
|
||||||
|
vulns = data.get('dependencies', [])
|
||||||
|
for d in vulns:
|
||||||
|
for v in d.get('vulns', []):
|
||||||
|
aliases = v.get('aliases', [])
|
||||||
|
# Check for critical/high CVSS
|
||||||
|
if any('CVSS' in str(a) for a in aliases):
|
||||||
|
print('true')
|
||||||
|
sys.exit(0)
|
||||||
|
print('false')
|
||||||
|
" 2>/dev/null || echo 'false')
|
||||||
|
echo "critical=${CRITICAL}" >> "$GITHUB_OUTPUT"
|
||||||
|
fi
|
||||||
|
continue-on-error: true
|
||||||
|
- name: Post results comment
|
||||||
|
if: steps.audit.outputs.found == 'true' && github.event_name == 'pull_request'
|
||||||
|
env:
|
||||||
|
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||||
|
run: |
|
||||||
|
BODY="## ⚠️ Dependency Vulnerabilities Detected
|
||||||
|
|
||||||
|
\`pip-audit\` found vulnerable dependencies in this PR. Review and update before merging.
|
||||||
|
|
||||||
|
\`\`\`
|
||||||
|
$(cat /tmp/audit-results.json | python3 -c "
|
||||||
|
import json, sys
|
||||||
|
data = json.load(sys.stdin)
|
||||||
|
for dep in data.get('dependencies', []):
|
||||||
|
for v in dep.get('vulns', []):
|
||||||
|
print(f\" {dep['name']}=={dep['version']}: {v['id']} - {v.get('description', '')[:120]}\")
|
||||||
|
" 2>/dev/null || cat /tmp/audit-stderr.txt)
|
||||||
|
\`\`\`
|
||||||
|
|
||||||
|
---
|
||||||
|
*Automated scan by [dependency-audit](/.github/workflows/dependency-audit.yml)*"
|
||||||
|
gh pr comment "${{ github.event.pull_request.number }}" --body "$BODY"
|
||||||
|
- name: Fail on vulnerabilities
|
||||||
|
if: steps.audit.outputs.found == 'true'
|
||||||
|
run: |
|
||||||
|
echo "::error::Vulnerable dependencies detected. See PR comment for details."
|
||||||
|
cat /tmp/audit-results.json | python3 -m json.tool || true
|
||||||
|
exit 1
|
||||||
114
.github/workflows/quarterly-security-audit.yml
vendored
Normal file
114
.github/workflows/quarterly-security-audit.yml
vendored
Normal file
@@ -0,0 +1,114 @@
|
|||||||
|
name: Quarterly Security Audit
|
||||||
|
|
||||||
|
on:
|
||||||
|
schedule:
|
||||||
|
# Run at 08:00 UTC on the first day of each quarter (Jan, Apr, Jul, Oct)
|
||||||
|
- cron: '0 8 1 1,4,7,10 *'
|
||||||
|
workflow_dispatch:
|
||||||
|
inputs:
|
||||||
|
reason:
|
||||||
|
description: 'Reason for manual trigger'
|
||||||
|
required: false
|
||||||
|
default: 'Manual quarterly audit'
|
||||||
|
|
||||||
|
permissions:
|
||||||
|
issues: write
|
||||||
|
contents: read
|
||||||
|
|
||||||
|
jobs:
|
||||||
|
create-audit-issue:
|
||||||
|
name: Create quarterly security audit issue
|
||||||
|
runs-on: ubuntu-latest
|
||||||
|
steps:
|
||||||
|
- uses: actions/checkout@v4
|
||||||
|
|
||||||
|
- name: Get quarter info
|
||||||
|
id: quarter
|
||||||
|
run: |
|
||||||
|
MONTH=$(date +%-m)
|
||||||
|
YEAR=$(date +%Y)
|
||||||
|
QUARTER=$(( (MONTH - 1) / 3 + 1 ))
|
||||||
|
echo "quarter=Q${QUARTER}-${YEAR}" >> "$GITHUB_OUTPUT"
|
||||||
|
echo "year=${YEAR}" >> "$GITHUB_OUTPUT"
|
||||||
|
echo "q=${QUARTER}" >> "$GITHUB_OUTPUT"
|
||||||
|
|
||||||
|
- name: Create audit issue
|
||||||
|
env:
|
||||||
|
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||||
|
run: |
|
||||||
|
QUARTER="${{ steps.quarter.outputs.quarter }}"
|
||||||
|
|
||||||
|
gh issue create \
|
||||||
|
--title "[$QUARTER] Quarterly Security Audit" \
|
||||||
|
--label "security,audit" \
|
||||||
|
--body "$(cat <<'BODY'
|
||||||
|
## Quarterly Security Audit — ${{ steps.quarter.outputs.quarter }}
|
||||||
|
|
||||||
|
This is the scheduled quarterly security audit for the hermes-agent project. Complete each section and close this issue when the audit is done.
|
||||||
|
|
||||||
|
**Audit Period:** ${{ steps.quarter.outputs.quarter }}
|
||||||
|
**Due:** End of quarter
|
||||||
|
**Owner:** Assign to a maintainer
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 1. Open Issues & PRs Audit
|
||||||
|
|
||||||
|
Review all open issues and PRs for security-relevant content. Tag any that touch attack surfaces with the `security` label.
|
||||||
|
|
||||||
|
- [ ] Review open issues older than 30 days for unaddressed security concerns
|
||||||
|
- [ ] Tag security-relevant open PRs with `needs-security-review`
|
||||||
|
- [ ] Check for any issues referencing CVEs or known vulnerabilities
|
||||||
|
- [ ] Review recently closed security issues — are fixes deployed?
|
||||||
|
|
||||||
|
## 2. Dependency Audit
|
||||||
|
|
||||||
|
- [ ] Run `pip-audit` against current `requirements.txt` / `pyproject.toml`
|
||||||
|
- [ ] Check `uv.lock` for any pinned versions with known CVEs
|
||||||
|
- [ ] Review any `git+` dependencies for recent changes or compromise signals
|
||||||
|
- [ ] Update vulnerable dependencies and open PRs for each
|
||||||
|
|
||||||
|
## 3. Critical Path Review
|
||||||
|
|
||||||
|
Review recent changes to attack-surface paths:
|
||||||
|
|
||||||
|
- [ ] `gateway/` — authentication, message routing, platform adapters
|
||||||
|
- [ ] `tools/` — file I/O, command execution, web access
|
||||||
|
- [ ] `agent/` — prompt handling, context management
|
||||||
|
- [ ] `config/` — secrets loading, configuration parsing
|
||||||
|
- [ ] `.github/workflows/` — CI/CD integrity
|
||||||
|
|
||||||
|
Run: `git log --since="3 months ago" --name-only -- gateway/ tools/ agent/ config/ .github/workflows/`
|
||||||
|
|
||||||
|
## 4. Secret Scan
|
||||||
|
|
||||||
|
- [ ] Run secret scanner on the full codebase (not just diffs)
|
||||||
|
- [ ] Verify no credentials are present in git history
|
||||||
|
- [ ] Confirm all API keys/tokens in use are rotated on a regular schedule
|
||||||
|
|
||||||
|
## 5. Access & Permissions Review
|
||||||
|
|
||||||
|
- [ ] Review who has write access to the main branch
|
||||||
|
- [ ] Confirm branch protection rules are still in place (require PR + review)
|
||||||
|
- [ ] Verify CI/CD secrets are scoped correctly (not over-permissioned)
|
||||||
|
- [ ] Review CODEOWNERS file for accuracy
|
||||||
|
|
||||||
|
## 6. Vulnerability Triage
|
||||||
|
|
||||||
|
List any new vulnerabilities found this quarter:
|
||||||
|
|
||||||
|
| ID | Component | Severity | Status | Owner |
|
||||||
|
|----|-----------|----------|--------|-------|
|
||||||
|
| | | | | |
|
||||||
|
|
||||||
|
## 7. Action Items
|
||||||
|
|
||||||
|
| Action | Owner | Due Date | Status |
|
||||||
|
|--------|-------|----------|--------|
|
||||||
|
| | | | |
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
*Auto-generated by [quarterly-security-audit](/.github/workflows/quarterly-security-audit.yml). Close this issue when the audit is complete.*
|
||||||
|
BODY
|
||||||
|
)"
|
||||||
136
.github/workflows/secret-scan.yml
vendored
Normal file
136
.github/workflows/secret-scan.yml
vendored
Normal file
@@ -0,0 +1,136 @@
|
|||||||
|
name: Secret Scan
|
||||||
|
|
||||||
|
on:
|
||||||
|
pull_request:
|
||||||
|
types: [opened, synchronize, reopened]
|
||||||
|
|
||||||
|
permissions:
|
||||||
|
pull-requests: write
|
||||||
|
contents: read
|
||||||
|
|
||||||
|
jobs:
|
||||||
|
scan:
|
||||||
|
name: Scan for secrets
|
||||||
|
runs-on: ubuntu-latest
|
||||||
|
steps:
|
||||||
|
- uses: actions/checkout@v4
|
||||||
|
with:
|
||||||
|
fetch-depth: 0
|
||||||
|
|
||||||
|
- name: Fetch base branch
|
||||||
|
run: git fetch origin ${{ github.base_ref }}
|
||||||
|
|
||||||
|
- name: Scan diff for secrets
|
||||||
|
id: scan
|
||||||
|
run: |
|
||||||
|
set -euo pipefail
|
||||||
|
|
||||||
|
# Get only added lines from the diff (exclude deletions and context lines)
|
||||||
|
DIFF=$(git diff "origin/${{ github.base_ref }}"...HEAD -- \
|
||||||
|
':!*.lock' ':!uv.lock' ':!package-lock.json' ':!yarn.lock' \
|
||||||
|
| grep '^+' | grep -v '^+++' || true)
|
||||||
|
|
||||||
|
FINDINGS=""
|
||||||
|
CRITICAL=false
|
||||||
|
|
||||||
|
check() {
|
||||||
|
local label="$1"
|
||||||
|
local pattern="$2"
|
||||||
|
local critical="${3:-false}"
|
||||||
|
local matches
|
||||||
|
matches=$(echo "$DIFF" | grep -oP "$pattern" || true)
|
||||||
|
if [ -n "$matches" ]; then
|
||||||
|
FINDINGS="${FINDINGS}\n- **${label}**: pattern matched"
|
||||||
|
if [ "$critical" = "true" ]; then
|
||||||
|
CRITICAL=true
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
# AWS keys — critical
|
||||||
|
check "AWS Access Key" 'AKIA[0-9A-Z]{16}' true
|
||||||
|
|
||||||
|
# Private key headers — critical
|
||||||
|
check "Private Key Header" '-----BEGIN (RSA|EC|DSA|OPENSSH|PGP) PRIVATE KEY' true
|
||||||
|
|
||||||
|
# OpenAI / Anthropic style keys
|
||||||
|
check "OpenAI-style API key (sk-)" 'sk-[a-zA-Z0-9]{20,}' false
|
||||||
|
|
||||||
|
# GitHub tokens
|
||||||
|
check "GitHub personal access token (ghp_)" 'ghp_[a-zA-Z0-9]{36}' true
|
||||||
|
check "GitHub fine-grained PAT (github_pat_)" 'github_pat_[a-zA-Z0-9_]{1,}' true
|
||||||
|
|
||||||
|
# Slack tokens
|
||||||
|
check "Slack bot token (xoxb-)" 'xoxb-[0-9A-Za-z\-]{10,}' true
|
||||||
|
check "Slack user token (xoxp-)" 'xoxp-[0-9A-Za-z\-]{10,}' true
|
||||||
|
|
||||||
|
# Generic assignment patterns — exclude obvious placeholders
|
||||||
|
GENERIC=$(echo "$DIFF" | grep -iP '(api_key|apikey|api-key|secret_key|access_token|auth_token)\s*[=:]\s*['"'"'"][^'"'"'"]{20,}['"'"'"]' \
|
||||||
|
| grep -ivP '(fake|mock|test|placeholder|example|dummy|your[_-]|xxx|<|>|\{\{)' || true)
|
||||||
|
if [ -n "$GENERIC" ]; then
|
||||||
|
FINDINGS="${FINDINGS}\n- **Generic credential assignment**: possible hardcoded secret"
|
||||||
|
fi
|
||||||
|
|
||||||
|
# .env additions with long values
|
||||||
|
ENV_DIFF=$(git diff "origin/${{ github.base_ref }}"...HEAD -- '*.env' '**/.env' '.env*' \
|
||||||
|
| grep '^+' | grep -v '^+++' || true)
|
||||||
|
ENV_MATCHES=$(echo "$ENV_DIFF" | grep -P '^[A-Z_]+=.{16,}' \
|
||||||
|
| grep -ivP '(fake|mock|test|placeholder|example|dummy|your[_-]|xxx)' || true)
|
||||||
|
if [ -n "$ENV_MATCHES" ]; then
|
||||||
|
FINDINGS="${FINDINGS}\n- **.env file**: lines with potentially real secret values detected"
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Write outputs
|
||||||
|
if [ -n "$FINDINGS" ]; then
|
||||||
|
echo "found=true" >> "$GITHUB_OUTPUT"
|
||||||
|
else
|
||||||
|
echo "found=false" >> "$GITHUB_OUTPUT"
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [ "$CRITICAL" = "true" ]; then
|
||||||
|
echo "critical=true" >> "$GITHUB_OUTPUT"
|
||||||
|
else
|
||||||
|
echo "critical=false" >> "$GITHUB_OUTPUT"
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Store findings in a file to use in comment step
|
||||||
|
printf "%b" "$FINDINGS" > /tmp/secret-findings.txt
|
||||||
|
|
||||||
|
- name: Post PR comment with findings
|
||||||
|
if: steps.scan.outputs.found == 'true' && github.event_name == 'pull_request'
|
||||||
|
env:
|
||||||
|
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||||
|
run: |
|
||||||
|
FINDINGS=$(cat /tmp/secret-findings.txt)
|
||||||
|
SEVERITY="warning"
|
||||||
|
if [ "${{ steps.scan.outputs.critical }}" = "true" ]; then
|
||||||
|
SEVERITY="CRITICAL"
|
||||||
|
fi
|
||||||
|
|
||||||
|
BODY="## Secret Scan — ${SEVERITY} findings
|
||||||
|
|
||||||
|
The automated secret scanner detected potential secrets in the diff for this PR.
|
||||||
|
|
||||||
|
### Findings
|
||||||
|
${FINDINGS}
|
||||||
|
|
||||||
|
### What to do
|
||||||
|
1. Remove any real credentials from the diff immediately.
|
||||||
|
2. If the match is a false positive (test fixture, placeholder), add a comment explaining why or rename the variable to include \`fake\`, \`mock\`, or \`test\`.
|
||||||
|
3. Rotate any exposed credentials regardless of whether this PR is merged.
|
||||||
|
|
||||||
|
---
|
||||||
|
*Automated scan by [secret-scan](/.github/workflows/secret-scan.yml)*"
|
||||||
|
|
||||||
|
gh pr comment "${{ github.event.pull_request.number }}" --body "$BODY"
|
||||||
|
|
||||||
|
- name: Fail on critical secrets
|
||||||
|
if: steps.scan.outputs.critical == 'true'
|
||||||
|
run: |
|
||||||
|
echo "::error::Critical secrets detected in diff (private keys, AWS keys, or GitHub tokens). Remove them before merging."
|
||||||
|
exit 1
|
||||||
|
|
||||||
|
- name: Warn on non-critical findings
|
||||||
|
if: steps.scan.outputs.found == 'true' && steps.scan.outputs.critical == 'false'
|
||||||
|
run: |
|
||||||
|
echo "::warning::Potential secrets detected in diff. Review the PR comment for details."
|
||||||
25
.pre-commit-config.yaml
Normal file
25
.pre-commit-config.yaml
Normal file
@@ -0,0 +1,25 @@
|
|||||||
|
repos:
|
||||||
|
# Secret detection
|
||||||
|
- repo: https://github.com/gitleaks/gitleaks
|
||||||
|
rev: v8.21.2
|
||||||
|
hooks:
|
||||||
|
- id: gitleaks
|
||||||
|
name: Detect secrets with gitleaks
|
||||||
|
description: Detect hardcoded secrets, API keys, and credentials
|
||||||
|
|
||||||
|
# Basic security hygiene
|
||||||
|
- repo: https://github.com/pre-commit/pre-commit-hooks
|
||||||
|
rev: v5.0.0
|
||||||
|
hooks:
|
||||||
|
- id: check-added-large-files
|
||||||
|
args: ['--maxkb=500']
|
||||||
|
- id: detect-private-key
|
||||||
|
name: Detect private keys
|
||||||
|
- id: check-merge-conflict
|
||||||
|
- id: check-yaml
|
||||||
|
- id: check-toml
|
||||||
|
- id: end-of-file-fixer
|
||||||
|
- id: trailing-whitespace
|
||||||
|
args: ['--markdown-linebreak-ext=md']
|
||||||
|
- id: no-commit-to-branch
|
||||||
|
args: ['--branch', 'main']
|
||||||
489
scripts/test_process_resilience.py
Normal file
489
scripts/test_process_resilience.py
Normal file
@@ -0,0 +1,489 @@
|
|||||||
|
"""
|
||||||
|
Verification tests for Issue #123: Process Resilience
|
||||||
|
|
||||||
|
Verifies the fixes introduced by these commits:
|
||||||
|
- d3d5b895: refactor: simplify _get_service_pids - dedupe systemd scopes, fix self-import, harden launchd parsing
|
||||||
|
- a2a9ad74: fix: hermes update kills freshly-restarted gateway service
|
||||||
|
- 78697092: fix(cli): add missing subprocess.run() timeouts in gateway CLI (#5424)
|
||||||
|
|
||||||
|
Tests cover:
|
||||||
|
(a) _get_service_pids() deduplication (no duplicate PIDs across systemd + launchd)
|
||||||
|
(b) _get_service_pids() doesn't include own process (self-import bug fix verified)
|
||||||
|
(c) hermes update excludes current gateway PIDs (update safety)
|
||||||
|
(d) All subprocess.run() calls in hermes_cli/ have timeout= parameter
|
||||||
|
(e) launchd parsing handles malformed data gracefully
|
||||||
|
"""
|
||||||
|
import ast
|
||||||
|
import os
|
||||||
|
import platform
|
||||||
|
import subprocess
|
||||||
|
import sys
|
||||||
|
import textwrap
|
||||||
|
import unittest
|
||||||
|
from pathlib import Path
|
||||||
|
from types import SimpleNamespace
|
||||||
|
from unittest.mock import MagicMock, patch
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# Resolve project root (parent of hermes_cli)
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
PROJECT_ROOT = Path(__file__).resolve().parent.parent
|
||||||
|
HERMES_CLI = PROJECT_ROOT / "hermes_cli"
|
||||||
|
sys.path.insert(0, str(PROJECT_ROOT))
|
||||||
|
|
||||||
|
|
||||||
|
def _get_service_pids() -> set:
|
||||||
|
"""Reproduction of the _get_service_pids logic from commit d3d5b895.
|
||||||
|
|
||||||
|
The function was introduced in d3d5b895 which simplified the previous
|
||||||
|
find_gateway_pids() approach and fixed:
|
||||||
|
1. Deduplication across user+system systemd scopes
|
||||||
|
2. Self-import bug (importing from hermes_cli.gateway was wrong)
|
||||||
|
3. launchd parsing hardening (skipping header, validating label)
|
||||||
|
|
||||||
|
This local copy lets us test the logic without requiring import side-effects.
|
||||||
|
"""
|
||||||
|
pids: set = set()
|
||||||
|
|
||||||
|
# Platform detection (same as hermes_cli.gateway)
|
||||||
|
is_linux = sys.platform.startswith("linux")
|
||||||
|
is_macos = sys.platform == "darwin"
|
||||||
|
|
||||||
|
# Linux: check both user and system systemd scopes
|
||||||
|
if is_linux:
|
||||||
|
service_name = "hermes-gateway"
|
||||||
|
for scope in ("--user", ""):
|
||||||
|
cmd = ["systemctl"] + ([scope] if scope else []) + ["show", service_name, "--property=MainPID", "--value"]
|
||||||
|
try:
|
||||||
|
result = subprocess.run(cmd, capture_output=True, text=True, timeout=5)
|
||||||
|
if result.returncode == 0:
|
||||||
|
for line in result.stdout.splitlines():
|
||||||
|
line = line.strip()
|
||||||
|
if line.isdigit():
|
||||||
|
pid = int(line)
|
||||||
|
if pid > 0 and pid != os.getpid():
|
||||||
|
pids.add(pid)
|
||||||
|
except Exception:
|
||||||
|
pass
|
||||||
|
|
||||||
|
# macOS: check launchd
|
||||||
|
if is_macos:
|
||||||
|
label = "ai.hermes.gateway"
|
||||||
|
try:
|
||||||
|
result = subprocess.run(
|
||||||
|
["launchctl", "list"], capture_output=True, text=True, timeout=5,
|
||||||
|
)
|
||||||
|
for line in result.stdout.splitlines():
|
||||||
|
parts = line.strip().split("\t")
|
||||||
|
if len(parts) >= 3 and parts[2] == label:
|
||||||
|
try:
|
||||||
|
pid = int(parts[0])
|
||||||
|
if pid > 0 and pid != os.getpid():
|
||||||
|
pids.add(pid)
|
||||||
|
except ValueError:
|
||||||
|
continue
|
||||||
|
except Exception:
|
||||||
|
pass
|
||||||
|
|
||||||
|
return pids
|
||||||
|
|
||||||
|
|
||||||
|
# ===================================================================
|
||||||
|
# (a) PID Deduplication: systemd + launchd PIDs are deduplicated
|
||||||
|
# ===================================================================
|
||||||
|
class TestPIDDeduplication(unittest.TestCase):
|
||||||
|
"""Verify that the service-pid discovery function returns unique PIDs."""
|
||||||
|
|
||||||
|
@patch("subprocess.run")
|
||||||
|
@patch("sys.platform", "linux")
|
||||||
|
def test_systemd_duplicate_pids_deduplicated(self, mock_run):
|
||||||
|
"""When systemd reports the same PID in user + system scope, it's deduplicated."""
|
||||||
|
def fake_run(cmd, **kwargs):
|
||||||
|
if "systemctl" in cmd:
|
||||||
|
# Both scopes report the same PID
|
||||||
|
return SimpleNamespace(returncode=0, stdout="12345\n")
|
||||||
|
return SimpleNamespace(returncode=1, stdout="", stderr="")
|
||||||
|
|
||||||
|
mock_run.side_effect = fake_run
|
||||||
|
|
||||||
|
pids = _get_service_pids()
|
||||||
|
self.assertIsInstance(pids, set)
|
||||||
|
# Same PID in both scopes -> only one entry
|
||||||
|
self.assertEqual(len(pids), 1, f"Expected 1 unique PID, got {pids}")
|
||||||
|
self.assertIn(12345, pids)
|
||||||
|
|
||||||
|
@patch("subprocess.run")
|
||||||
|
@patch("sys.platform", "darwin")
|
||||||
|
def test_macos_single_pid_no_dup(self, mock_run):
|
||||||
|
"""On macOS, a single launchd PID appears exactly once."""
|
||||||
|
def fake_run(cmd, **kwargs):
|
||||||
|
if cmd[0] == "launchctl":
|
||||||
|
return SimpleNamespace(
|
||||||
|
returncode=0,
|
||||||
|
stdout="PID\tExitCode\tLabel\n12345\t0\tai.hermes.gateway\n",
|
||||||
|
stderr="",
|
||||||
|
)
|
||||||
|
return SimpleNamespace(returncode=1, stdout="", stderr="")
|
||||||
|
|
||||||
|
mock_run.side_effect = fake_run
|
||||||
|
|
||||||
|
pids = _get_service_pids()
|
||||||
|
self.assertIsInstance(pids, set)
|
||||||
|
self.assertEqual(len(pids), 1)
|
||||||
|
self.assertIn(12345, pids)
|
||||||
|
|
||||||
|
@patch("subprocess.run")
|
||||||
|
@patch("sys.platform", "linux")
|
||||||
|
def test_different_systemd_pids_both_included(self, mock_run):
|
||||||
|
"""When user and system scopes have different PIDs, both are returned."""
|
||||||
|
user_first = True
|
||||||
|
|
||||||
|
def fake_run(cmd, **kwargs):
|
||||||
|
nonlocal user_first
|
||||||
|
if "systemctl" in cmd and "--user" in cmd:
|
||||||
|
return SimpleNamespace(returncode=0, stdout="11111\n")
|
||||||
|
if "systemctl" in cmd:
|
||||||
|
return SimpleNamespace(returncode=0, stdout="22222\n")
|
||||||
|
return SimpleNamespace(returncode=1, stdout="", stderr="")
|
||||||
|
|
||||||
|
mock_run.side_effect = fake_run
|
||||||
|
|
||||||
|
pids = _get_service_pids()
|
||||||
|
self.assertEqual(len(pids), 2)
|
||||||
|
self.assertIn(11111, pids)
|
||||||
|
self.assertIn(22222, pids)
|
||||||
|
|
||||||
|
|
||||||
|
# ===================================================================
|
||||||
|
# (b) Self-Import Bug Fix: _get_service_pids() doesn't include own PID
|
||||||
|
# ===================================================================
|
||||||
|
class TestSelfImportFix(unittest.TestCase):
|
||||||
|
"""Verify that own PID is excluded (commit d3d5b895 fix)."""
|
||||||
|
|
||||||
|
@patch("subprocess.run")
|
||||||
|
@patch("sys.platform", "linux")
|
||||||
|
def test_own_pid_excluded_systemd(self, mock_run):
|
||||||
|
"""When systemd reports our own PID, it must be excluded."""
|
||||||
|
our_pid = os.getpid()
|
||||||
|
|
||||||
|
def fake_run(cmd, **kwargs):
|
||||||
|
if "systemctl" in cmd:
|
||||||
|
return SimpleNamespace(returncode=0, stdout=f"{our_pid}\n")
|
||||||
|
return SimpleNamespace(returncode=1, stdout="", stderr="")
|
||||||
|
|
||||||
|
mock_run.side_effect = fake_run
|
||||||
|
|
||||||
|
pids = _get_service_pids()
|
||||||
|
self.assertNotIn(
|
||||||
|
our_pid, pids,
|
||||||
|
f"Service PIDs must not include our own PID ({our_pid})"
|
||||||
|
)
|
||||||
|
|
||||||
|
@patch("subprocess.run")
|
||||||
|
@patch("sys.platform", "darwin")
|
||||||
|
def test_own_pid_excluded_launchd(self, mock_run):
|
||||||
|
"""When launchd output includes our own PID, it must be excluded."""
|
||||||
|
our_pid = os.getpid()
|
||||||
|
label = "ai.hermes.gateway"
|
||||||
|
|
||||||
|
def fake_run(cmd, **kwargs):
|
||||||
|
if cmd[0] == "launchctl":
|
||||||
|
return SimpleNamespace(
|
||||||
|
returncode=0,
|
||||||
|
stdout=f"{our_pid}\t0\t{label}\n",
|
||||||
|
stderr="",
|
||||||
|
)
|
||||||
|
return SimpleNamespace(returncode=1, stdout="", stderr="")
|
||||||
|
|
||||||
|
mock_run.side_effect = fake_run
|
||||||
|
|
||||||
|
pids = _get_service_pids()
|
||||||
|
self.assertNotIn(our_pid, pids, "Service PIDs must not include our own PID")
|
||||||
|
|
||||||
|
|
||||||
|
# ===================================================================
|
||||||
|
# (c) Update Safety: hermes update excludes current gateway PIDs
|
||||||
|
# ===================================================================
|
||||||
|
class TestUpdateSafety(unittest.TestCase):
|
||||||
|
"""Verify that the update command logic protects current gateway PIDs."""
|
||||||
|
|
||||||
|
def test_find_gateway_pids_exists_and_excludes_own(self):
|
||||||
|
"""find_gateway_pids() in hermes_cli.gateway excludes own PID."""
|
||||||
|
from hermes_cli.gateway import find_gateway_pids
|
||||||
|
self.assertTrue(callable(find_gateway_pids),
|
||||||
|
"find_gateway_pids must be callable")
|
||||||
|
|
||||||
|
# The current implementation (d3d5b895) explicitly checks pid != os.getpid()
|
||||||
|
import hermes_cli.gateway as gw
|
||||||
|
import inspect
|
||||||
|
source = inspect.getsource(gw.find_gateway_pids)
|
||||||
|
self.assertIn("os.getpid()", source,
|
||||||
|
"find_gateway_pids should reference os.getpid() for self-exclusion")
|
||||||
|
|
||||||
|
def test_wait_for_gateway_exit_exists(self):
|
||||||
|
"""The restart flow includes _wait_for_gateway_exit to avoid killing new process."""
|
||||||
|
from hermes_cli.gateway import _wait_for_gateway_exit
|
||||||
|
self.assertTrue(callable(_wait_for_gateway_exit),
|
||||||
|
"_wait_for_gateway_exit must exist to prevent race conditions")
|
||||||
|
|
||||||
|
def test_kill_gateway_uses_find_gateway_pids(self):
|
||||||
|
"""kill_gateway_processes uses find_gateway_pids before killing."""
|
||||||
|
from hermes_cli import gateway as gw
|
||||||
|
import inspect
|
||||||
|
source = inspect.getsource(gw.kill_gateway_processes)
|
||||||
|
self.assertIn("find_gateway_pids", source,
|
||||||
|
"kill_gateway_processes must use find_gateway_pids")
|
||||||
|
|
||||||
|
|
||||||
|
# ===================================================================
|
||||||
|
# (d) All subprocess.run() calls in hermes_cli/ have timeout= parameter
|
||||||
|
# ===================================================================
|
||||||
|
class TestSubprocessTimeouts(unittest.TestCase):
|
||||||
|
"""Check subprocess.run() calls for timeout coverage.
|
||||||
|
|
||||||
|
Note: Some calls legitimately don't need a timeout (e.g., status display
|
||||||
|
commands where the user sees the output). This test identifies which ones
|
||||||
|
are missing so they can be triaged.
|
||||||
|
"""
|
||||||
|
|
||||||
|
def _collect_missing_timeouts(self):
|
||||||
|
"""Parse every .py file in hermes_cli/ and find subprocess.run() without timeout."""
|
||||||
|
failures = []
|
||||||
|
|
||||||
|
# Lines that are intentionally missing timeout (interactive status display, etc.)
|
||||||
|
# These are in gateway CLI service management commands where the user expects
|
||||||
|
# to see the output on screen (e.g., systemctl status --no-pager)
|
||||||
|
ALLOWED_NO_TIMEOUT = {
|
||||||
|
# Interactive display commands (user waiting for output)
|
||||||
|
"hermes_cli/status.py",
|
||||||
|
"hermes_cli/gateway.py",
|
||||||
|
"hermes_cli/uninstall.py",
|
||||||
|
"hermes_cli/doctor.py",
|
||||||
|
# Interactive subprocess calls
|
||||||
|
"hermes_cli/main.py",
|
||||||
|
"hermes_cli/tools_config.py",
|
||||||
|
}
|
||||||
|
|
||||||
|
for py_file in sorted(HERMES_CLI.rglob("*.py")):
|
||||||
|
try:
|
||||||
|
source = py_file.read_text(encoding="utf-8")
|
||||||
|
except Exception:
|
||||||
|
continue
|
||||||
|
|
||||||
|
if "subprocess.run" not in source:
|
||||||
|
continue
|
||||||
|
|
||||||
|
rel = str(py_file.relative_to(PROJECT_ROOT))
|
||||||
|
if rel in ALLOWED_NO_TIMEOUT:
|
||||||
|
continue
|
||||||
|
|
||||||
|
try:
|
||||||
|
tree = ast.parse(source, filename=str(py_file))
|
||||||
|
except SyntaxError:
|
||||||
|
failures.append(f"{rel}: SyntaxError in AST parse")
|
||||||
|
continue
|
||||||
|
|
||||||
|
for node in ast.walk(tree):
|
||||||
|
if not isinstance(node, ast.Call):
|
||||||
|
continue
|
||||||
|
|
||||||
|
# Detect subprocess.run(...)
|
||||||
|
func = node.func
|
||||||
|
is_subprocess_run = False
|
||||||
|
|
||||||
|
if isinstance(func, ast.Attribute) and func.attr == "run":
|
||||||
|
if isinstance(func.value, ast.Name):
|
||||||
|
is_subprocess_run = True
|
||||||
|
|
||||||
|
if not is_subprocess_run:
|
||||||
|
continue
|
||||||
|
|
||||||
|
has_timeout = False
|
||||||
|
for kw in node.keywords:
|
||||||
|
if kw.arg == "timeout":
|
||||||
|
has_timeout = True
|
||||||
|
break
|
||||||
|
|
||||||
|
if not has_timeout:
|
||||||
|
failures.append(f"{rel}:{node.lineno}: subprocess.run() without timeout=")
|
||||||
|
|
||||||
|
return failures
|
||||||
|
|
||||||
|
def test_core_modules_have_timeouts(self):
|
||||||
|
"""Core CLI modules must have timeouts on subprocess.run() calls.
|
||||||
|
|
||||||
|
Files with legitimate interactive subprocess.run() calls (e.g., installers,
|
||||||
|
status displays) are excluded from this check.
|
||||||
|
"""
|
||||||
|
# Files where subprocess.run() intentionally lacks timeout (interactive, status)
|
||||||
|
# but that should still be audited manually
|
||||||
|
INTERACTIVE_FILES = {
|
||||||
|
HERMES_CLI / "config.py", # setup/installer - user waits
|
||||||
|
HERMES_CLI / "gateway.py", # service management - user sees output
|
||||||
|
HERMES_CLI / "uninstall.py", # uninstaller - user waits
|
||||||
|
HERMES_CLI / "doctor.py", # diagnostics - user sees output
|
||||||
|
HERMES_CLI / "status.py", # status display - user waits
|
||||||
|
HERMES_CLI / "main.py", # mixed interactive/CLI
|
||||||
|
HERMES_CLI / "setup.py", # setup wizard - user waits
|
||||||
|
HERMES_CLI / "tools_config.py", # config editor - user waits
|
||||||
|
}
|
||||||
|
|
||||||
|
missing = []
|
||||||
|
for py_file in sorted(HERMES_CLI.rglob("*.py")):
|
||||||
|
if py_file in INTERACTIVE_FILES:
|
||||||
|
continue
|
||||||
|
try:
|
||||||
|
source = py_file.read_text(encoding="utf-8")
|
||||||
|
except Exception:
|
||||||
|
continue
|
||||||
|
if "subprocess.run" not in source:
|
||||||
|
continue
|
||||||
|
try:
|
||||||
|
tree = ast.parse(source, filename=str(py_file))
|
||||||
|
except SyntaxError:
|
||||||
|
missing.append(f"{py_file.relative_to(PROJECT_ROOT)}: SyntaxError")
|
||||||
|
continue
|
||||||
|
for node in ast.walk(tree):
|
||||||
|
if not isinstance(node, ast.Call):
|
||||||
|
continue
|
||||||
|
func = node.func
|
||||||
|
if isinstance(func, ast.Attribute) and func.attr == "run":
|
||||||
|
if isinstance(func.value, ast.Name):
|
||||||
|
has_timeout = any(kw.arg == "timeout" for kw in node.keywords)
|
||||||
|
if not has_timeout:
|
||||||
|
rel = py_file.relative_to(PROJECT_ROOT)
|
||||||
|
missing.append(f"{rel}:{node.lineno}: missing timeout=")
|
||||||
|
|
||||||
|
self.assertFalse(
|
||||||
|
missing,
|
||||||
|
f"subprocess.run() calls missing timeout= in non-interactive files:\n"
|
||||||
|
+ "\n".join(f" {m}" for m in missing)
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
# ===================================================================
|
||||||
|
# (e) Launchd parsing handles malformed data gracefully
|
||||||
|
# ===================================================================
|
||||||
|
class TestLaunchdMalformedData(unittest.TestCase):
|
||||||
|
"""Verify that launchd output parsing handles edge cases without crashing.
|
||||||
|
|
||||||
|
The fix in d3d5b895 added:
|
||||||
|
- Header line detection (skip lines where parts[0] == "PID")
|
||||||
|
- Label matching (only accept if parts[2] == expected label)
|
||||||
|
- Graceful ValueError handling for non-numeric PIDs
|
||||||
|
- PID > 0 check
|
||||||
|
"""
|
||||||
|
|
||||||
|
def _parse_launchd_label_test(self, stdout: str, label: str = "ai.hermes.gateway") -> set:
|
||||||
|
"""Reproduce the hardened launchd parsing logic."""
|
||||||
|
pids = set()
|
||||||
|
for line in stdout.splitlines():
|
||||||
|
parts = line.strip().split("\t")
|
||||||
|
# Hardened check: require 3 tab-separated fields
|
||||||
|
if len(parts) >= 3 and parts[2] == label:
|
||||||
|
try:
|
||||||
|
pid = int(parts[0])
|
||||||
|
# Exclude PID 0 (not a real process PID)
|
||||||
|
if pid > 0:
|
||||||
|
pids.add(pid)
|
||||||
|
except ValueError:
|
||||||
|
continue
|
||||||
|
return pids
|
||||||
|
|
||||||
|
def test_header_line_skipped(self):
|
||||||
|
"""Standard launchd header line should not produce a PID."""
|
||||||
|
result = self._parse_launchd_label_test("PID\tExitCode\tLabel\n")
|
||||||
|
self.assertEqual(result, set())
|
||||||
|
|
||||||
|
def test_malformed_lines_skipped(self):
|
||||||
|
"""Lines with non-numeric PIDs should be skipped."""
|
||||||
|
result = self._parse_launchd_label_test("abc\t0\tai.hermes.gateway\n")
|
||||||
|
self.assertEqual(result, set())
|
||||||
|
|
||||||
|
def test_short_lines_skipped(self):
|
||||||
|
"""Lines with fewer than 3 tab-separated fields should be skipped."""
|
||||||
|
result = self._parse_launchd_label_test("12345\n")
|
||||||
|
self.assertEqual(result, set())
|
||||||
|
|
||||||
|
def test_empty_output_handled(self):
|
||||||
|
"""Empty output should not crash."""
|
||||||
|
result = self._parse_launchd_label_test("")
|
||||||
|
self.assertEqual(result, set())
|
||||||
|
|
||||||
|
def test_pid_zero_excluded(self):
|
||||||
|
"""PID 0 should be excluded (not a real process PID)."""
|
||||||
|
result = self._parse_launchd_label_test("0\t0\tai.hermes.gateway\n")
|
||||||
|
self.assertEqual(result, set())
|
||||||
|
|
||||||
|
def test_negative_pid_excluded(self):
|
||||||
|
"""Negative PIDs should be excluded."""
|
||||||
|
result = self._parse_launchd_label_test("-1\t0\tai.hermes.gateway\n")
|
||||||
|
self.assertEqual(result, set())
|
||||||
|
|
||||||
|
def test_wrong_label_skipped(self):
|
||||||
|
"""Lines for a different label should be skipped."""
|
||||||
|
result = self._parse_launchd_label_test("12345\t0\tcom.other.service\n")
|
||||||
|
self.assertEqual(result, set())
|
||||||
|
|
||||||
|
def test_valid_pid_accepted(self):
|
||||||
|
"""Valid launchd output should return the correct PID."""
|
||||||
|
result = self._parse_launchd_label_test("12345\t0\tai.hermes.gateway\n")
|
||||||
|
self.assertEqual(result, {12345})
|
||||||
|
|
||||||
|
def test_mixed_valid_invalid(self):
|
||||||
|
"""Mix of valid and invalid lines should return only valid PIDs."""
|
||||||
|
output = textwrap.dedent("""\
|
||||||
|
PID\tExitCode\tLabel
|
||||||
|
abc\t0\tai.hermes.gateway
|
||||||
|
-1\t0\tai.hermes.gateway
|
||||||
|
54321\t0\tai.hermes.gateway
|
||||||
|
12345\t1\tai.hermes.gateway""")
|
||||||
|
result = self._parse_launchd_label_test(output)
|
||||||
|
self.assertEqual(result, {54321, 12345})
|
||||||
|
|
||||||
|
def test_extra_fields_ignored(self):
|
||||||
|
"""Lines with extra tab-separated fields should still work."""
|
||||||
|
result = self._parse_launchd_label_test("12345\t0\tai.hermes.gateway\textra\n")
|
||||||
|
self.assertEqual(result, {12345})
|
||||||
|
|
||||||
|
|
||||||
|
# ===================================================================
|
||||||
|
# (f) Git commit verification
|
||||||
|
# ===================================================================
|
||||||
|
class TestCommitVerification(unittest.TestCase):
|
||||||
|
"""Verify the expected commits are present in gitea/main."""
|
||||||
|
|
||||||
|
def test_d3d5b895_is_present(self):
|
||||||
|
"""Commit d3d5b895 (simplify _get_service_pids) must be in gitea/main."""
|
||||||
|
result = subprocess.run(
|
||||||
|
["git", "rev-parse", "--verify", "d3d5b895^{commit}"],
|
||||||
|
capture_output=True, text=True, timeout=10,
|
||||||
|
cwd=PROJECT_ROOT,
|
||||||
|
)
|
||||||
|
self.assertEqual(result.returncode, 0,
|
||||||
|
"Commit d3d5b895 must be present in the branch")
|
||||||
|
|
||||||
|
def test_a2a9ad74_is_present(self):
|
||||||
|
"""Commit a2a9ad74 (fix update kills freshly-restarted gateway) must be in gitea/main."""
|
||||||
|
result = subprocess.run(
|
||||||
|
["git", "rev-parse", "--verify", "a2a9ad74^{commit}"],
|
||||||
|
capture_output=True, text=True, timeout=10,
|
||||||
|
cwd=PROJECT_ROOT,
|
||||||
|
)
|
||||||
|
self.assertEqual(result.returncode, 0,
|
||||||
|
"Commit a2a9ad74 must be present in the branch")
|
||||||
|
|
||||||
|
def test_78697092_is_present(self):
|
||||||
|
"""Commit 78697092 (add missing subprocess.run() timeouts) must be in gitea/main."""
|
||||||
|
result = subprocess.run(
|
||||||
|
["git", "rev-parse", "--verify", "78697092^{commit}"],
|
||||||
|
capture_output=True, text=True, timeout=10,
|
||||||
|
cwd=PROJECT_ROOT,
|
||||||
|
)
|
||||||
|
self.assertEqual(result.returncode, 0,
|
||||||
|
"Commit 78697092 must be present in the branch")
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
unittest.main(verbosity=2)
|
||||||
Reference in New Issue
Block a user