Compare commits

...

4 Commits

Author SHA1 Message Date
Timmy
52e3f6a253 fix: add test suite for quality gate modules (#629)
Some checks failed
Architecture Lint / Linter Tests (pull_request) Successful in 35s
Smoke Test / smoke (pull_request) Failing after 25s
Validate Config / YAML Lint (pull_request) Failing after 17s
Validate Config / JSON Validate (pull_request) Successful in 16s
Validate Config / Python Syntax & Import Check (pull_request) Failing after 1m23s
Validate Config / Shell Script Lint (pull_request) Failing after 54s
Validate Config / Cron Syntax Check (pull_request) Successful in 12s
Validate Config / Deploy Script Dry Run (pull_request) Successful in 12s
Validate Config / Playbook Schema Validation (pull_request) Successful in 23s
PR Checklist / pr-checklist (pull_request) Failing after 6m34s
Validate Config / Python Test Suite (pull_request) Has been cancelled
Architecture Lint / Lint Repository (pull_request) Has been cancelled
34 tests covering both quality gate modules:

ci_automation_gate.py (17 tests):
- Trailing whitespace detection and auto-fix
- Missing final newline detection and fix
- JS/TS function length warnings (20 lines) and failures (50 lines)
- Python files excluded from JS function length check
- File type filtering (.py, .js, .ts checked; .md, .json skipped)
- Directory traversal with node_modules and .git exclusions
- Exit codes (0 for clean, 1 for failures)

task_gate.py (17 tests):
- Filter tag detection (EPIC, DO NOT CLOSE, PERMANENT, MORNING REPORT)
- Agent lane checking (no config, missing agent, present agent)
- Pre-task gate: issue not found, filter tags, assigned agents,
  existing PRs, clean pass
- Post-task gate: missing branch, naming convention warning,
  no commits failure
- Integration: gate runs on its own source files

Closes #629
2026-04-14 21:18:39 -04:00
ad751a6de6 docs: add pipeline scheduler README 2026-04-14 22:47:12 +00:00
130fa40f0c feat: add pipeline-scheduler cron job 2026-04-14 22:46:51 +00:00
82f9810081 feat: add nightly-pipeline-scheduler.sh 2026-04-14 22:46:38 +00:00
4 changed files with 909 additions and 0 deletions

cron/pipeline-scheduler.yml

@@ -0,0 +1,9 @@
- name: Nightly Pipeline Scheduler
  schedule: '*/30 18-23,0-8 * * *' # Every 30 min, off-peak hours only
  tasks:
    - name: Check and start pipelines
      shell: "bash scripts/nightly-pipeline-scheduler.sh"
      env:
        PIPELINE_TOKEN_LIMIT: "500000"
        PIPELINE_PEAK_START: "9"
        PIPELINE_PEAK_END: "18"


@@ -0,0 +1,50 @@
# Nightly Pipeline Scheduler

Auto-starts batch pipelines when inference is available.

## What It Does

1. Checks inference provider health (OpenRouter, Ollama, RunPod)
2. Checks that it is off-peak hours (configurable; by default before 9AM or after 6PM, see the cron line below)
3. Checks interactive session load (don't fight with live users)
4. Checks the daily token budget (configurable limit)
5. Starts the highest-priority incomplete pipeline
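
The off-peak window matches the schedule shipped in `cron/pipeline-scheduler.yml`, which only fires outside the default 9AM-6PM peak:

```yaml
schedule: '*/30 18-23,0-8 * * *' # Every 30 min, off-peak hours only
```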
## Pipeline Priority Order

| Priority | Pipeline | Deps | Max Tokens |
|----------|----------|------|------------|
| 1 | playground-factory | none | 100,000 |
| 2 | training-factory | none | 150,000 |
| 3 | knowledge-mine | training-factory running | 80,000 |
| 4 | adversary | knowledge-mine running | 50,000 |
| 5 | codebase-genome | none | 120,000 |

## Usage

```bash
# Normal run (used by cron)
./scripts/nightly-pipeline-scheduler.sh

# Dry run (show what would start)
./scripts/nightly-pipeline-scheduler.sh --dry-run

# Status report
./scripts/nightly-pipeline-scheduler.sh --status

# Force start during peak hours
./scripts/nightly-pipeline-scheduler.sh --force
```

## Configuration

Set via environment variables:

- `PIPELINE_TOKEN_LIMIT`: Daily token budget (default: 500,000)
- `PIPELINE_PEAK_START`: Peak hours start (default: 9)
- `PIPELINE_PEAK_END`: Peak hours end (default: 18)
- `HERMES_HOME`: Hermes home directory (default: `~/.hermes`)
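
For example, to preview a run with a smaller budget and a longer peak window (illustrative values):

```bash
PIPELINE_TOKEN_LIMIT=250000 PIPELINE_PEAK_END=20 \
  ./scripts/nightly-pipeline-scheduler.sh --dry-run
```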
## Cron

Runs every 30 minutes, off-peak only (unless `--force`).
See `cron/pipeline-scheduler.yml`.

scripts/nightly-pipeline-scheduler.sh

@@ -0,0 +1,383 @@
#!/usr/bin/env bash
# nightly-pipeline-scheduler.sh — Auto-start batch pipelines when inference is available.
#
# Checks provider health, pipeline progress, token budget, and interactive load.
# Starts the highest-priority incomplete pipeline that can run.
#
# Usage:
#   ./scripts/nightly-pipeline-scheduler.sh             # Normal run
#   ./scripts/nightly-pipeline-scheduler.sh --dry-run   # Show what would start
#   ./scripts/nightly-pipeline-scheduler.sh --status    # Pipeline status report

set -euo pipefail

# --- Configuration ---
HERMES_HOME="${HERMES_HOME:-$HOME/.hermes}"
BUDGET_FILE="${HERMES_HOME}/pipeline_budget.json"
STATE_FILE="${HERMES_HOME}/pipeline_state.json"
LOG_FILE="${HERMES_HOME}/logs/pipeline-scheduler.log"
TOKEN_DAILY_LIMIT="${PIPELINE_TOKEN_LIMIT:-500000}"
PEAK_HOURS_START="${PIPELINE_PEAK_START:-9}"
PEAK_HOURS_END="${PIPELINE_PEAK_END:-18}"

# Pipeline definitions (priority order)
# Each pipeline: name, script, max_tokens, dependencies
PIPELINES=(
  "playground-factory|scripts/pipeline_playground_factory.sh|100000|none"
  "training-factory|scripts/pipeline_training_factory.sh|150000|none"
  "knowledge-mine|scripts/pipeline_knowledge_mine.sh|80000|training-factory"
  "adversary|scripts/pipeline_adversary.sh|50000|knowledge-mine"
  "codebase-genome|scripts/pipeline_codebase_genome.sh|120000|none"
)
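# Each entry is split at its use sites with:
#   IFS='|' read -r name script max_tokens dep <<< "$entry"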

# --- Colors ---
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[0;33m'
CYAN='\033[0;36m'
NC='\033[0m'

# --- Helpers ---
now_hour() { date +%-H; }

is_peak_hours() {
  local h=$(now_hour)
  [[ $h -ge $PEAK_HOURS_START && $h -lt $PEAK_HOURS_END ]]
}

ensure_dirs() {
  mkdir -p "$(dirname "$LOG_FILE")" "$(dirname "$BUDGET_FILE")" "$(dirname "$STATE_FILE")"
}

log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG_FILE"; }

get_budget_used_today() {
  if [[ -f "$BUDGET_FILE" ]]; then
    local today=$(date +%Y-%m-%d)
    python3 -c "
import json, sys
with open('$BUDGET_FILE') as f:
    d = json.load(f)
print(d.get('daily', {}).get('$today', {}).get('tokens_used', 0))
" 2>/dev/null || echo 0
  else
    echo 0
  fi
}

get_budget_remaining() {
  local used=$(get_budget_used_today)
  echo $((TOKEN_DAILY_LIMIT - used))
}

update_budget() {
  local pipeline="$1"
  local tokens="$2"
  local today=$(date +%Y-%m-%d)
  python3 -c "
import json, os
path = '$BUDGET_FILE'
d = {}
if os.path.exists(path):
    with open(path) as f:
        d = json.load(f)
daily = d.setdefault('daily', {})
day = daily.setdefault('$today', {'tokens_used': 0, 'pipelines': {}})
day['tokens_used'] = day.get('tokens_used', 0) + $tokens
day['pipelines']['$pipeline'] = day['pipelines'].get('$pipeline', 0) + $tokens
with open(path, 'w') as f:
    json.dump(d, f, indent=2)
"
}
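# Illustrative shape of pipeline_budget.json, inferred from the reader/writer
# functions above (get_budget_used_today / update_budget):
# {
#   "daily": {
#     "2026-04-14": {
#       "tokens_used": 120000,
#       "pipelines": {"playground-factory": 100000, "training-factory": 20000}
#     }
#   }
# }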

get_pipeline_state() {
  if [[ -f "$STATE_FILE" ]]; then
    cat "$STATE_FILE"
  else
    echo "{}"
  fi
}

set_pipeline_state() {
  local pipeline="$1"
  local state="$2"  # running, complete, failed, skipped
  python3 -c "
import json, os
path = '$STATE_FILE'
d = {}
if os.path.exists(path):
    with open(path) as f:
        d = json.load(f)
d['$pipeline'] = {'state': '$state', 'updated': '$(date -Iseconds)'}
with open(path, 'w') as f:
    json.dump(d, f, indent=2)
"
}
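# Illustrative shape of pipeline_state.json, inferred from set_pipeline_state
# (start_pipeline later adds a 'pid' key for running pipelines):
# {
#   "training-factory": {"state": "running", "updated": "2026-04-14T22:50:00-04:00", "pid": 12345}
# }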

is_pipeline_complete() {
  local pipeline="$1"
  python3 -c "
import json, os
path = '$STATE_FILE'
if not os.path.exists(path):
    print('false')
else:
    with open(path) as f:
        d = json.load(f)
    state = d.get('$pipeline', {}).get('state', 'not_started')
    print('true' if state == 'complete' else 'false')
" 2>/dev/null || echo false
}

is_pipeline_running() {
  local pipeline="$1"
  python3 -c "
import json, os
path = '$STATE_FILE'
if not os.path.exists(path):
    print('false')
else:
    with open(path) as f:
        d = json.load(f)
    state = d.get('$pipeline', {}).get('state', 'not_started')
    print('true' if state == 'running' else 'false')
" 2>/dev/null || echo false
}

check_dependency() {
  local dep="$1"
  if [[ "$dep" == "none" ]]; then
    return 0
  fi
  # For knowledge-mine: training-factory must be running or complete
  if [[ "$dep" == "training-factory" ]]; then
    local state=$(python3 -c "
import json, os
path = '$STATE_FILE'
if not os.path.exists(path):
    print('not_started')
else:
    with open(path) as f:
        d = json.load(f)
    print(d.get('training-factory', {}).get('state', 'not_started'))
" 2>/dev/null || echo "not_started")
    [[ "$state" == "running" || "$state" == "complete" ]]
    return $?
  fi
  # For adversary: knowledge-mine must be at least 50% done
  # Simplified: check if it's running (we'd need progress tracking for 50%)
  if [[ "$dep" == "knowledge-mine" ]]; then
    local state=$(python3 -c "
import json, os
path = '$STATE_FILE'
if not os.path.exists(path):
    print('not_started')
else:
    with open(path) as f:
        d = json.load(f)
    print(d.get('knowledge-mine', {}).get('state', 'not_started'))
" 2>/dev/null || echo "not_started")
    [[ "$state" == "running" || "$state" == "complete" ]]
    return $?
  fi
  return 0
}

check_inference_available() {
  # Check if any inference provider is responding
  # 1. Check OpenRouter
  local or_ok=$(curl -s -o /dev/null -w "%{http_code}" \
    --connect-timeout 5 "https://openrouter.ai/api/v1/models" 2>/dev/null || echo "000")
  # 2. Check local Ollama
  local ollama_ok=$(curl -s -o /dev/null -w "%{http_code}" \
    --connect-timeout 5 "http://localhost:11434/api/tags" 2>/dev/null || echo "000")
  # 3. Check RunPod (if configured)
  local runpod_ok="000"
  if [[ -n "${RUNPOD_ENDPOINT:-}" ]]; then
    runpod_ok=$(curl -s -o /dev/null -w "%{http_code}" \
      --connect-timeout 5 "$RUNPOD_ENDPOINT/health" 2>/dev/null || echo "000")
  fi
  if [[ "$or_ok" == "200" || "$ollama_ok" == "200" || "$runpod_ok" == "200" ]]; then
    return 0
  fi
  return 1
}

check_interactive_load() {
  # Check if there are active interactive sessions (don't fight with live users)
  # Look for tmux panes with active hermes sessions.
  # grep -c exits 1 on zero matches but still prints "0", so `|| true` keeps
  # the count clean instead of appending a second "0" via `|| echo 0`.
  local active=$(tmux list-panes -a -F '#{pane_pid} #{pane_current_command}' 2>/dev/null \
    | grep -c "hermes\|python3" || true)
  # If more than 3 interactive sessions, skip pipeline start
  if [[ $active -gt 3 ]]; then
    return 1
  fi
  return 0
}

start_pipeline() {
  local name="$1"
  local script="$2"
  local max_tokens="$3"
  local budget_remaining="$4"
  local mode="${5:-run}"
  if [[ "$budget_remaining" -lt "$max_tokens" ]]; then
    log "SKIP $name: insufficient budget ($budget_remaining < $max_tokens tokens)"
    return 1
  fi
  if [[ ! -f "$script" ]]; then
    log "SKIP $name: script not found ($script)"
    return 1
  fi
  # Match the CLI flag main() passes through ("--dry-run", not "dry-run")
  if [[ "$mode" == "--dry-run" ]]; then
    log "DRY-RUN: Would start $name (budget: $budget_remaining, needs: $max_tokens)"
    return 0
  fi
  log "START $name (budget: $budget_remaining, max_tokens: $max_tokens)"
  set_pipeline_state "$name" "running"
  # Run in background, capture output
  local log_path="${HERMES_HOME}/logs/pipeline-${name}.log"
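  # NOTE: assumes each pipeline script accepts a --max-tokens cap; that
  # interface is a convention of the PIPELINES entries above, not enforced here.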
bash "$script" --max-tokens "$max_tokens" >> "$log_path" 2>&1 &
local pid=$!
# Wait a moment to check if it started OK
sleep 2
if kill -0 $pid 2>/dev/null; then
log "RUNNING $name (PID: $pid, log: $log_path)"
# Record the PID
python3 -c "
import json, os
path = '$STATE_FILE'
d = {}
if os.path.exists(path):
with open(path) as f:
d = json.load(f)
d['$name']['pid'] = $pid
with open(path, 'w') as f:
json.dump(d, f, indent=2)
"
return 0
else
log "FAIL $name: script exited immediately"
set_pipeline_state "$name" "failed"
return 1
fi
}
# --- Main ---
main() {
local mode="${1:-run}"
ensure_dirs
log "=== Pipeline Scheduler ($mode) ==="
# Check 1: Is inference available?
if ! check_inference_available; then
log "No inference provider available. Skipping all pipelines."
exit 0
fi
log "Inference: AVAILABLE"
# Check 2: Is it peak hours?
if is_peak_hours && [[ "$mode" != "--force" ]]; then
local h=$(now_hour)
log "Peak hours ($h:00). Skipping pipeline start. Use --force to override."
exit 0
fi
log "Off-peak: OK"
# Check 3: Interactive load
if ! check_interactive_load && [[ "$mode" != "--force" ]]; then
log "High interactive load. Skipping pipeline start."
exit 0
fi
log "Interactive load: OK"
# Check 4: Token budget
local budget=$(get_budget_remaining)
log "Token budget remaining: $budget / $TOKEN_DAILY_LIMIT"
if [[ $budget -le 0 ]]; then
log "Daily token budget exhausted. Stopping."
exit 0
fi
# Check 5: Pipeline status
if [[ "$mode" == "--status" ]]; then
echo -e "${CYAN}Pipeline Status:${NC}"
echo "────────────────────────────────────────────────────"
for entry in "${PIPELINES[@]}"; do
IFS='|' read -r name script max_tokens dep <<< "$entry"
local state=$(python3 -c "
import json, os
path = '$STATE_FILE'
if not os.path.exists(path):
print('not_started')
else:
with open(path) as f:
d = json.load(f)
print(d.get('$name', {}).get('state', 'not_started'))
" 2>/dev/null || echo "not_started")
local color=$NC
case "$state" in
running) color=$YELLOW ;;
complete) color=$GREEN ;;
failed) color=$RED ;;
esac
printf " %-25s %b%s%b (max: %s tokens, dep: %s)\n" "$name" "$color" "$state" "$NC" "$max_tokens" "$dep"
done
echo "────────────────────────────────────────────────────"
echo " Budget: $budget / $TOKEN_DAILY_LIMIT tokens remaining"
echo " Peak hours: $PEAK_HOURS_START:00 - $PEAK_HOURS_END:00"
exit 0
fi
# Find and start the highest-priority incomplete pipeline
local started=0
for entry in "${PIPELINES[@]}"; do
IFS='|' read -r name script max_tokens dep <<< "$entry"
# Skip if already running or complete
if [[ "$(is_pipeline_running $name)" == "true" ]]; then
log "SKIP $name: already running"
continue
fi
if [[ "$(is_pipeline_complete $name)" == "true" ]]; then
log "SKIP $name: already complete"
continue
fi
# Check dependency
if ! check_dependency "$dep"; then
log "SKIP $name: dependency $dep not met"
continue
fi
# Try to start
if start_pipeline "$name" "$script" "$max_tokens" "$budget" "$mode"; then
started=1
# Only start one pipeline per run (let it claim tokens before next check)
# Exception: playground-factory and training-factory can run in parallel
if [[ "$name" != "playground-factory" && "$name" != "training-factory" ]]; then
break
fi
fi
done
if [[ $started -eq 0 ]]; then
log "No pipelines to start (all complete, running, or blocked)."
fi
log "=== Pipeline Scheduler done ==="
}
main "$@"

tests/test_quality_gate.py

@@ -0,0 +1,467 @@
"""Tests for the Quality Gate modules.
Tests for:
- ci_automation_gate.py: linting, function length, auto-fix, counters
- task_gate.py: pre/post task gate logic, lane checking, filter tags
Refs: #629
"""
import json
import os
import sys
import tempfile
from pathlib import Path
from unittest.mock import patch, MagicMock
# Add scripts/ to path
sys.path.insert(0, str(Path(__file__).resolve().parent.parent / "scripts"))
from ci_automation_gate import QualityGate
# ===========================================================================
# CI AUTOMATION GATE TESTS
# ===========================================================================
# -- helpers ---------------------------------------------------------------
def _write_file(dirpath, relpath, content):
"""Write a file in a temp directory and return its Path."""
p = Path(dirpath) / relpath
p.parent.mkdir(parents=True, exist_ok=True)
p.write_text(content)
return p
def _run_gate_on_file(dirpath, relpath, content, fix=False):
"""Write a file, run QualityGate on it, return the gate instance."""
p = _write_file(dirpath, relpath, content)
gate = QualityGate(fix=fix)
gate.check_file(p)
return gate
# -- trailing whitespace ---------------------------------------------------
def test_trailing_whitespace_warns():
"""Lines with trailing whitespace should produce a warning."""
with tempfile.TemporaryDirectory() as tmp:
gate = _run_gate_on_file(tmp, "test.py", "x = 1 \ny = 2\n")
assert gate.warnings >= 1, "Expected warning for trailing whitespace"
def test_trailing_whitespace_fixes():
"""With fix=True, trailing whitespace should be removed."""
with tempfile.TemporaryDirectory() as tmp:
p = _write_file(tmp, "test.py", "x = 1 \ny = 2\n")
gate = QualityGate(fix=True)
gate.check_file(p)
fixed = p.read_text()
assert "x = 1 \n" not in fixed, "Trailing whitespace should be removed"
assert fixed == "x = 1\ny = 2\n"
def test_clean_file_no_warnings():
"""A clean file should produce no warnings."""
with tempfile.TemporaryDirectory() as tmp:
gate = _run_gate_on_file(tmp, "test.py", "x = 1\ny = 2\n")
assert gate.warnings == 0
assert gate.failures == 0
# -- missing final newline -------------------------------------------------
def test_missing_final_newline_warns():
"""File without trailing newline should warn."""
with tempfile.TemporaryDirectory() as tmp:
gate = _run_gate_on_file(tmp, "test.py", "x = 1")
assert gate.warnings >= 1, "Expected warning for missing final newline"
def test_missing_final_newline_fixed():
"""With fix=True, missing final newline should be added."""
with tempfile.TemporaryDirectory() as tmp:
p = _write_file(tmp, "test.py", "x = 1")
gate = QualityGate(fix=True)
gate.check_file(p)
fixed = p.read_text()
assert fixed.endswith("\n"), "Fixed file should end with newline"
# -- function length (JS/TS) -----------------------------------------------
def test_short_function_passes():
"""A short JS function should not warn or fail."""
with tempfile.TemporaryDirectory() as tmp:
code = "function hello() {\n return 1;\n}\n"
gate = _run_gate_on_file(tmp, "test.js", code)
assert gate.failures == 0
assert gate.warnings == 0
def test_medium_function_warns():
"""JS function over 20 lines should warn."""
body = "\n".join(f" console.log({i});" for i in range(22))
code = f"function big() {{\n{body}\n}}\n"
with tempfile.TemporaryDirectory() as tmp:
gate = _run_gate_on_file(tmp, "test.js", code)
assert gate.warnings >= 1, "Expected warning for function over 20 lines"
def test_long_function_fails():
"""JS function over 50 lines should fail."""
body = "\n".join(f" console.log({i});" for i in range(52))
code = f"function huge() {{\n{body}\n}}\n"
with tempfile.TemporaryDirectory() as tmp:
gate = _run_gate_on_file(tmp, "test.js", code)
assert gate.failures >= 1, "Expected failure for function over 50 lines"
def test_python_function_length_not_checked():
"""Python functions should not be checked by the JS regex."""
body = "\n".join(f" print({i})" for i in range(60))
code = f"def huge():\n{body}\n"
with tempfile.TemporaryDirectory() as tmp:
gate = _run_gate_on_file(tmp, "test.py", code)
assert gate.failures == 0, "Python functions should not trigger JS length check"
# -- file type filtering ---------------------------------------------------
def test_non_code_file_skipped():
"""Non-code files (.md, .json, .txt) should be skipped."""
with tempfile.TemporaryDirectory() as tmp:
gate = _run_gate_on_file(tmp, "README.md", "# Title \ntrailing ws\n")
assert gate.warnings == 0, "Markdown files should be skipped"
assert gate.failures == 0
def test_typescript_checked():
"""TypeScript files should be checked."""
with tempfile.TemporaryDirectory() as tmp:
gate = _run_gate_on_file(tmp, "test.ts", "x = 1 \n")
assert gate.warnings >= 1, "TypeScript files should be checked"
# -- directory traversal ---------------------------------------------------
def test_run_scans_directory():
"""Gate.run() should scan all files in a directory tree."""
with tempfile.TemporaryDirectory() as tmp:
_write_file(tmp, "clean.py", "x = 1\n")
_write_file(tmp, "dirty.js", "x = 1 \n")
_write_file(tmp, "sub/nested.ts", "y = 2 \n")
gate = QualityGate()
gate.run(tmp)
assert gate.warnings >= 2, "Should find trailing whitespace in both dirty files"
def test_run_skips_node_modules():
"""Gate.run() should skip node_modules directories."""
with tempfile.TemporaryDirectory() as tmp:
_write_file(tmp, "clean.py", "x = 1\n")
_write_file(tmp, "node_modules/pkg/index.js", "x = 1 \n")
gate = QualityGate()
gate.run(tmp)
assert gate.warnings == 0, "node_modules should be skipped"
def test_run_skips_git_dir():
"""Gate.run() should skip .git directories."""
with tempfile.TemporaryDirectory() as tmp:
_write_file(tmp, "clean.py", "x = 1\n")
_write_file(tmp, ".git/hooks/pre-commit", "x = 1 \n")
gate = QualityGate()
gate.run(tmp)
assert gate.warnings == 0, ".git should be skipped"
# -- exit code -------------------------------------------------------------
def test_failures_cause_exit_code_1():
"""Gate with failures should exit with code 1."""
import subprocess
with tempfile.TemporaryDirectory() as tmp:
body = "\n".join(f" console.log({i});" for i in range(52))
_write_file(tmp, "huge.js", f"function f() {{\n{body}\n}}\n")
r = subprocess.run(
[sys.executable, str(Path(__file__).resolve().parent.parent / "scripts" / "ci_automation_gate.py"), tmp],
capture_output=True, text=True
)
assert r.returncode == 1, f"Expected exit 1, got {r.returncode}"
def test_clean_directory_exits_0():
"""Gate on clean directory should exit 0."""
import subprocess
with tempfile.TemporaryDirectory() as tmp:
_write_file(tmp, "clean.py", "x = 1\ny = 2\n")
r = subprocess.run(
[sys.executable, str(Path(__file__).resolve().parent.parent / "scripts" / "ci_automation_gate.py"), tmp],
capture_output=True, text=True
)
assert r.returncode == 0, f"Expected exit 0, got {r.returncode}"
# ===========================================================================
# TASK GATE TESTS
# ===========================================================================
# Import task_gate functions directly — test the pure logic
from task_gate import check_agent_lane, FILTER_TAGS, AGENT_USERNAMES
# -- filter tags -----------------------------------------------------------
def test_epic_tag_filtered():
"""Issues with [EPIC] tag should be filtered."""
title = "[EPIC] Build the thing"
for tag in FILTER_TAGS:
tag_clean = tag.upper().replace("[", "").replace("]", "")
if tag_clean in title.upper():
return # Found
assert False, "EPIC tag should be detected by FILTER_TAGS"
def test_permanent_tag_filtered():
"""Issues with [DO NOT CLOSE] tag should be filtered."""
title = "[DO NOT CLOSE] Keep this open forever"
title_upper = title.upper()
matched = any(
tag.upper().replace("[", "").replace("]", "") in title_upper
for tag in FILTER_TAGS
)
assert matched, "[DO NOT CLOSE] should be filtered"
def test_normal_title_not_filtered():
"""Normal issue titles should not be filtered."""
title = "Fix the login bug in auth.py"
title_upper = title.upper()
matched = any(
tag.upper().replace("[", "").replace("]", "") in title_upper
for tag in FILTER_TAGS
)
assert not matched, "Normal title should not be filtered"
def test_morning_report_filtered():
"""[MORNING REPORT] issues should be filtered."""
title = "[MORNING REPORT] Fleet status 2026-04-13"
title_upper = title.upper()
matched = any(
tag.upper().replace("[", "").replace("]", "") in title_upper
for tag in FILTER_TAGS
)
assert matched, "[MORNING REPORT] should be filtered"
# -- agent lane checker ----------------------------------------------------
def test_lane_check_no_config():
"""With no lane config, lane check should pass."""
ok, msg = check_agent_lane("groq", "Fix bug", [], {})
assert ok
assert "No lane config" in msg
def test_lane_check_agent_not_in_config():
"""Agent not in lane config should pass."""
lanes = {"ezra": ["docs"]}
ok, msg = check_agent_lane("groq", "Fix bug", [], lanes)
assert ok
assert "No lanes defined" in msg
def test_lane_check_agent_in_config():
"""Agent in lane config should return their lanes."""
lanes = {"groq": ["code", "infra"]}
ok, msg = check_agent_lane("groq", "Fix bug", [], lanes)
assert ok
assert "groq" in msg
assert "code" in msg
# -- agent usernames -------------------------------------------------------
def test_known_agents_in_usernames():
"""Core agent usernames should be registered."""
assert "groq" in AGENT_USERNAMES
assert "ezra" in AGENT_USERNAMES
assert "bezalel" in AGENT_USERNAMES
assert "timmy" in AGENT_USERNAMES
assert "codex-agent" in AGENT_USERNAMES
# -- pre-task gate (mocked API) -------------------------------------------
def test_pre_task_gate_issue_not_found():
"""Pre-task gate should fail if issue doesn't exist."""
from task_gate import pre_task_gate
with patch("task_gate.gitea_get", return_value=None):
passed, msgs = pre_task_gate("timmy-config", 99999, "groq")
assert not passed
assert any("not found" in m for m in msgs)
def test_pre_task_gate_filter_tag_blocks():
"""Pre-task gate should block filtered issues."""
from task_gate import pre_task_gate
mock_issue = {
"title": "[EPIC] Big thing",
"assignees": [],
"labels": [],
}
def mock_gitea_get(path):
if "issues/100" in path:
return mock_issue
if "branches" in path:
return []
if "pulls" in path:
return []
return None
with patch("task_gate.gitea_get", side_effect=mock_gitea_get):
passed, msgs = pre_task_gate("timmy-config", 100, "groq")
assert not passed
assert any("filter" in m.lower() for m in msgs)
def test_pre_task_gate_assigned_agent_blocks():
"""Pre-task gate should block issues assigned to other agents."""
from task_gate import pre_task_gate
mock_issue = {
"title": "Fix bug",
"assignees": [{"login": "ezra"}],
"labels": [],
}
def mock_gitea_get(path):
if "issues/100" in path:
return mock_issue
if "branches" in path:
return []
if "pulls" in path:
return []
return None
with patch("task_gate.gitea_get", side_effect=mock_gitea_get):
passed, msgs = pre_task_gate("timmy-config", 100, "groq")
assert not passed
assert any("Already assigned" in m for m in msgs)
def test_pre_task_gate_existing_pr_blocks():
"""Pre-task gate should block issues with existing PRs."""
from task_gate import pre_task_gate
mock_issue = {
"title": "Fix bug",
"assignees": [],
"labels": [],
}
mock_prs = [{"number": 50, "title": "Fix for #100", "body": "Closes #100"}]
def mock_gitea_get(path):
if "issues/100" in path:
return mock_issue
if "branches" in path:
return []
if "pulls" in path:
return mock_prs
return None
with patch("task_gate.gitea_get", side_effect=mock_gitea_get):
passed, msgs = pre_task_gate("timmy-config", 100, "groq")
assert not passed
assert any("Open PR" in m for m in msgs)
def test_pre_task_gate_clean_passes():
"""Pre-task gate should pass for clean issues."""
from task_gate import pre_task_gate
def mock_gitea_get(path):
if "issues/100" in path:
return {"title": "Fix bug", "assignees": [], "labels": []}
if "branches" in path:
return []
if "pulls" in path:
return []
return None
with patch("task_gate.gitea_get", side_effect=mock_gitea_get):
passed, msgs = pre_task_gate("timmy-config", 100, "groq")
assert passed
# -- post-task gate (mocked API) ------------------------------------------
def test_post_task_gate_missing_branch():
"""Post-task gate should fail if branch doesn't exist."""
from task_gate import post_task_gate
with patch("task_gate.gitea_get", return_value=None):
passed, msgs = post_task_gate("timmy-config", 100, "groq", "groq/fix-100")
assert not passed
assert any("does not exist" in m for m in msgs)
def test_post_task_gate_no_agent_prefix_warns():
"""Post-task gate should warn if branch doesn't start with agent name."""
from task_gate import post_task_gate
def mock_gitea_get(path):
if "branches/fix-100" in path:
return {"name": "fix-100"}
if "compare" in path:
return {"commits": [{"id": "abc"}], "diff_files": ["file.py"]}
if "pulls" in path:
return []
return None
with patch("task_gate.gitea_get", side_effect=mock_gitea_get):
passed, msgs = post_task_gate("timmy-config", 100, "groq", "fix-100")
assert passed # Warning, not failure
assert any("doesn't start with agent" in m or "convention" in m for m in msgs)
def test_post_task_gate_no_commits_fails():
"""Post-task gate should fail if branch has no commits ahead of main."""
from task_gate import post_task_gate
def mock_gitea_get(path):
if "branches/" in path:
return {"name": "groq/fix-100"}
if "compare" in path:
return {"commits": [], "diff_files": []}
if "pulls" in path:
return []
return None
with patch("task_gate.gitea_get", side_effect=mock_gitea_get):
passed, msgs = post_task_gate("timmy-config", 100, "groq", "groq/fix-100")
assert not passed
assert any("no commits" in m.lower() for m in msgs)
# ===========================================================================
# INTEGRATION: gate on real script files
# ===========================================================================
def test_ci_gate_on_actual_task_gate():
"""Run QualityGate on task_gate.py itself — should pass."""
gate_path = Path(__file__).resolve().parent.parent / "scripts" / "task_gate.py"
if gate_path.exists():
gate = QualityGate()
gate.check_file(gate_path)
assert gate.failures == 0, f"task_gate.py should pass quality gate, got {gate.failures} failures"
def test_ci_gate_on_actual_ci_automation_gate():
"""Run QualityGate on ci_automation_gate.py itself — should pass."""
gate_path = Path(__file__).resolve().parent.parent / "scripts" / "ci_automation_gate.py"
if gate_path.exists():
gate = QualityGate()
gate.check_file(gate_path)
assert gate.failures == 0, f"ci_automation_gate.py should pass quality gate, got {gate.failures} failures"